Joint Latency-Energy Optimization Scheme for Offloading in a Mobile Edge Computing Environment Based on Deep Reinforcement Learning

Presenters

Jui Mhatre

Disciplines

Digital Communications and Networking

Abstract

With the growing number of mobile devices (MDs), IoT devices, and computation-intensive tasks deployed on them, there is a need to improve the efficiency and speed with which these tasks are completed. Because on-device resources are limited, it is infeasible to compute every task locally; likewise, time constraints make it impractical to compute an entire task at a remote site. Edge computing (EC) and cloud computing (CC) supply resources to these devices on the fly, but transmitting and offloading computation tasks to remote systems increases delay and energy consumption. Tasks must therefore be partitioned across local devices, edge servers, and cloud servers so that they complete with minimum delay and energy consumption. This paper proposes computing the offloading strategy with a Multi-Period Deep Deterministic Policy Gradient (MP-DDPG) algorithm, based on reinforcement learning (RL), to jointly optimize latency and energy consumption. We formulate the problem as a Multi-Period Markov Decision Process (MP-MDP). We use a two-tier offloading architecture comprising multiple mobile devices, two EC servers, and one CC server as computation sites, and we compare the proposed algorithm against the Deep Deterministic Policy Gradient (DDPG) algorithm on a comparable one-tier architecture with a single edge server.
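As background on the DDPG family of methods named above, the following is a minimal sketch of a DDPG-style actor-critic update for an offloading action that splits a task among local, edge, and cloud computation sites. It is an illustration only, not the paper's MP-DDPG implementation: the PyTorch framework, state and action dimensions, network sizes, and the reward shape (a negative weighted sum of latency and energy) are all assumptions, and target networks and exploration noise are omitted for brevity.

    # Illustrative DDPG-style update for task offloading; NOT the authors'
    # MP-DDPG code. Dimensions, networks, and reward shape are assumptions.
    import torch
    import torch.nn as nn

    STATE_DIM = 6    # assumed: e.g., task size, queue lengths, channel gains
    ACTION_DIM = 3   # assumed: fraction of the task run locally / at EC / at CC

    class Actor(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, ACTION_DIM),
            )

        def forward(self, state):
            # Softmax keeps the offloading split a valid partition of the task.
            return torch.softmax(self.net(state), dim=-1)

    class Critic(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, state, action):
            # Q(s, a): expected return of taking offloading split a in state s.
            return self.net(torch.cat([state, action], dim=-1))

    actor, critic = Actor(), Critic()
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
    gamma = 0.99  # discount factor across decision periods

    def update(batch):
        """One DDPG update from a replay batch of (s, a, r, s') transitions.

        The reward r is assumed to encode the negative weighted sum of
        latency and energy, matching the joint objective in the abstract.
        """
        s, a, r, s_next = batch
        # Critic: regress Q(s, a) toward the one-step bootstrapped target.
        with torch.no_grad():
            target = r + gamma * critic(s_next, actor(s_next))
        critic_loss = nn.functional.mse_loss(critic(s, a), target)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
        # Actor: ascend the critic's value of the actor's own actions.
        actor_loss = -critic(s, actor(s)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Random data standing in for replay-buffer samples, to show the shapes.
    batch = (torch.randn(32, STATE_DIM),
             torch.softmax(torch.randn(32, ACTION_DIM), -1),
             torch.randn(32, 1),
             torch.randn(32, STATE_DIM))
    update(batch)

A full implementation would add target networks, exploration noise, and a replay buffer; the multi-period variant described in the abstract additionally conditions the decision process on the period index.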

Academic department under which the project should be listed

CCSE - Computer Science

Primary Investigator (PI) Name

Dr. Ahyoung Lee
