Article

Adaptive Real-Time Offloading Decision-Making for Mobile Edges: Deep Reinforcement Learning Framework and Simulation Results

1 School of Computer Science and Engineering, Chung-Ang University, Seoul 06974, Korea
2 School of Electrical Engineering, Korea University, Seoul 02841, Korea
3 Multimedia Processing Lab., Samsung Advanced Institute of Technology, Suwon 16677, Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(5), 1663; https://doi.org/10.3390/app10051663
Received: 29 January 2020 / Revised: 25 February 2020 / Accepted: 27 February 2020 / Published: 1 March 2020
(This article belongs to the Special Issue Advances in Deep Learning Ⅱ)
This paper proposes a novel dynamic offloading decision method inspired by deep reinforcement learning (DRL). Real-time communication in mobile edge computing systems requires an efficient task offloading algorithm. At each unit time, the proposed DRL-based dynamic algorithm selects an action (offloading enabled, i.e., computing in the cloud, or offloading disabled, i.e., computing at the local edge) while accounting for real-time, seamless data transmission and energy efficiency in mobile edge devices. The proposed dynamic offloading decision algorithm is therefore designed for the joint optimization of delay and energy-efficient communications within a DRL framework. Performance evaluation via data-intensive simulations verifies that the proposed dynamic algorithm achieves the desired performance.
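The decision loop sketched in the abstract can be illustrated with a toy model. The sketch below is a simplified tabular Q-learning stand-in (the paper itself uses a deep Q-network): at each unit time the agent observes a discretized task-load state and picks between two actions, local edge computing or offloading to the cloud, with a reward that jointly penalizes delay and energy. All states, cost constants, and weights here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2          # task-load levels x {local, offload}
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    """Simulated environment (hypothetical costs): local compute delay and
    energy grow with task load; offloading adds a transmission overhead but
    is less sensitive to load."""
    if action == 0:                          # compute at the local edge
        delay, energy = 1.0 + state, 0.5 + 0.5 * state
    else:                                    # offload to the cloud
        delay, energy = 2.0 + 0.2 * state, 1.0
    reward = -(0.5 * delay + 0.5 * energy)   # joint delay/energy objective
    next_state = int(rng.integers(N_STATES)) # task load arrives at random
    return next_state, reward

Q = np.zeros((N_STATES, N_ACTIONS))
state = 0
for _ in range(20000):
    # epsilon-greedy action selection over {local, offload}
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # one-step Q-learning update
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)  # per-state offloading decision (0=local, 1=offload)
print(policy)
```

Under these toy costs the learned policy computes light loads locally and offloads heavy loads, which is the qualitative behavior the abstract describes; a deep Q-network replaces the table when the state space is continuous or large.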
Keywords: mobile edge computing; offloading; real-time; deep reinforcement learning; deep Q-network
MDPI and ACS Style

Park, S.; Kwon, D.; Kim, J.; Lee, Y.K.; Cho, S. Adaptive Real-Time Offloading Decision-Making for Mobile Edges: Deep Reinforcement Learning Framework and Simulation Results. Appl. Sci. 2020, 10, 1663. https://doi.org/10.3390/app10051663