Search Results (3)

Search Parameters:
Keywords = MEC server status decision

21 pages, 7852 KB  
Article
MEC Server Status Optimization Framework for Energy Efficient MEC Systems by Taking a Deep-Learning Approach
by Minseok Koo and Jaesung Park
Future Internet 2024, 16(12), 441; https://doi.org/10.3390/fi16120441 - 28 Nov 2024
Cited by 1 | Viewed by 1207
Abstract
Reducing energy consumption in a MEC (Multi-Access Edge Computing) system is a critical goal, both for lowering operational expenses and promoting environmental sustainability. In this paper, we focus on the problem of managing the sleep state of MEC servers (MECSs) to decrease the overall energy consumption of a MEC system while providing users with acceptable service delays. The proposed method achieves this objective through dynamic orchestration of MECS activation states based on systematic analysis of workload distribution patterns. To facilitate this optimization, we formulate the MECS sleep control mechanism as a constrained combinatorial optimization problem. To resolve the formulated problem, we take a deep-learning approach. We develop a task arrival rate predictor using a spatio-temporal graph convolution network (STGCN). We then integrate this predicted information with the queue length distribution to form the input state for our deep reinforcement learning (DRL) agent. To verify the effectiveness of our proposed framework, we conduct comprehensive simulation studies incorporating real-world operational datasets, with comparative evaluation against established metaheuristic optimization techniques. The results indicate that our method demonstrates robust performance in MECS state optimization, maintaining operational efficiency despite prediction uncertainties. Accordingly, the proposed approach yields substantial improvements in system performance metrics, including enhanced energy utilization efficiency, decreased service delay violation rate, and reduced computational latency in operational state determination.
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
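The abstract describes forming the DRL agent's input state by combining STGCN-predicted task arrival rates with queue length information, then deciding each server's sleep/active state. A minimal sketch of that state construction, with a greedy threshold rule standing in for the trained DRL policy (all names, thresholds, and the 0.5 load fraction are illustrative assumptions, not the paper's method):

```python
import numpy as np

def build_state(predicted_rates, queue_lengths):
    """Concatenate STGCN-predicted arrival rates with current queue
    lengths to form the agent's input state (simplified stand-in for
    the paper's state formulation)."""
    return np.concatenate([np.asarray(predicted_rates, dtype=float),
                           np.asarray(queue_lengths, dtype=float)])

def sleep_decision(state, n_servers, load_fraction=0.5):
    """Toy combinatorial decision: keep a MECS active when its
    predicted load (rate + backlog) exceeds a fraction of the peak
    load, otherwise put it to sleep. Returns a binary activation
    vector; the paper trains a DRL policy over this action space."""
    rates, queues = state[:n_servers], state[n_servers:]
    load = rates + queues
    return (load > load_fraction * load.max()).astype(int)

state = build_state([0.8, 0.1, 0.6], [5, 0, 3])
active = sleep_decision(state, n_servers=3)  # lightly loaded server 1 sleeps
```

The binary activation vector is exactly the combinatorial action space the abstract mentions; the DRL agent replaces the threshold rule with a learned mapping from state to activation vector.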

24 pages, 5468 KB  
Article
A Federated Learning and Deep Reinforcement Learning-Based Method with Two Types of Agents for Computation Offload
by Song Liu, Shiyuan Yang, Hanze Zhang and Weiguo Wu
Sensors 2023, 23(4), 2243; https://doi.org/10.3390/s23042243 - 16 Feb 2023
Cited by 9 | Viewed by 3773
Abstract
With the rise of latency-sensitive and computationally intensive applications in mobile edge computing (MEC) environments, the computation offloading strategy has been widely studied to meet the low-latency demands of these applications. However, the uncertainty of various tasks and the time-varying conditions of wireless networks make it difficult for mobile devices to make efficient decisions. Existing methods also face the problems of long-delay decisions and user data privacy disclosure. In this paper, we present FDRT, a federated learning and deep reinforcement learning-based method with two types of agents for computation offloading, to minimize the system latency. FDRT uses a multi-agent collaborative computation offloading strategy, namely DRT. DRT divides the offloading decision into whether to compute tasks locally and whether to offload tasks to MEC servers. The designed DDQN agent considers the task information, its own resources, and the network status conditions of mobile devices, while the designed D3QN agent considers these conditions for all MEC servers in the collaborative cloud-edge-end MEC system; the two jointly learn the optimal decision. FDRT also applies federated learning to reduce communication overhead and optimize the model training of DRT by designing a new parameter aggregation method, while protecting user data privacy. The simulation results showed that DRT effectively reduced the average task execution delay by up to 50% compared with several baselines and state-of-the-art offloading strategies. FDRT also accelerated the convergence of multi-agent training and reduced the training time of DRT by 61.7%.
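The abstract's key structural idea is splitting the offloading decision between two agents: a device-side agent (DDQN in the paper) deciding local-vs-offload, and a server-side agent (D3QN) picking the target MEC server. A greedy stand-in for the two trained agents, assuming hypothetical capacity and queue inputs:

```python
import numpy as np

def offload_pipeline(task_load, local_capacity, server_queues):
    """Two-stage decision mirroring the DRT split, with greedy rules
    standing in for the trained DDQN/D3QN agents. All inputs and
    thresholds are illustrative assumptions.

    Stage 1 (device side, DDQN's role): compute locally if the task
    fits the device's capacity, otherwise offload.
    Stage 2 (server side, D3QN's role): if offloading, route the task
    to the least-loaded MEC server.
    """
    if task_load <= local_capacity:
        return ("local", None)
    target = int(np.argmin(server_queues))
    return ("offload", target)

print(offload_pipeline(2.0, 5.0, [3, 1, 4]))  # small task stays local
print(offload_pipeline(8.0, 5.0, [3, 1, 4]))  # large task goes to the shortest queue
```

Training each stage as a separate agent keeps the device-side action space small (binary) while letting the server-side agent observe global queue state, which is the design choice the abstract attributes to DRT.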

30 pages, 1393 KB  
Article
Dynamic Task Migration Combining Energy Efficiency and Load Balancing Optimization in Three-Tier UAV-Enabled Mobile Edge Computing System
by Wu Ouyang, Zhigang Chen, Jia Wu, Genghua Yu and Heng Zhang
Electronics 2021, 10(2), 190; https://doi.org/10.3390/electronics10020190 - 15 Jan 2021
Cited by 19 | Viewed by 3456
Abstract
As transportation becomes more convenient and efficient, users move faster and faster. When a user leaves the service range of the original edge server, that server needs to migrate the tasks offloaded by the user to other edge servers. An effective task migration strategy needs to fully consider the location of users, the load status of edge servers, and energy consumption, which makes designing such a strategy a challenge. In this paper, we propose a mobile edge computing (MEC) system architecture consisting of multiple smart mobile devices (SMDs), multiple unmanned aerial vehicles (UAVs), and a base station (BS). Moreover, we establish a Markov decision process with unknown rewards (MDPUR) model based on the traditional Markov decision process (MDP), which comprehensively considers three aspects: the migration distance, the residual energy status of the UAVs, and the load status of the UAVs. Based on the MDPUR model, we propose an advantage-based value iteration (ABVI) algorithm to obtain an effective task migration strategy, which helps the UAV group achieve load balancing and reduce its total energy consumption while ensuring user service quality. Finally, simulation results show that the ABVI algorithm is effective, outperforming the traditional value iteration algorithm and remaining robust in dynamic environments.
(This article belongs to the Special Issue Telecommunication Networks)
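The ABVI algorithm builds on classical value iteration over an MDP. A self-contained sketch of the plain value iteration baseline the abstract compares against, on a hypothetical two-state migration MDP (the advantage-based screening that distinguishes ABVI, and the MDPUR reward estimation, are omitted):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Plain value iteration on a small MDP; the paper's ABVI adds
    advantage-based refinements on top of this loop.

    P: transition tensor with P[a, s, s2] = Pr(s2 | s, a)
    R: reward matrix with R[s, a]
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum over s2 of P[a, s, s2] * V[s2]
        Q = R + gamma * np.einsum("asn,n->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical 2-state migration MDP: action 0 = keep the task on the
# current UAV, action 1 = migrate it to the other UAV.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # keep: stay in place
              [[0.0, 1.0], [1.0, 0.0]]])  # migrate: swap UAVs
R = np.array([[0.0, 1.0],   # state 0: migrating pays off
              [1.0, 0.0]])  # state 1: keeping pays off
V, policy = value_iteration(P, R)  # policy: migrate in state 0, keep in state 1
```

With a per-step reward of 1 under the optimal policy and gamma = 0.9, both state values converge to 1/(1-0.9) = 10, which makes the fixed point easy to verify by hand.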
