Resource Allocation in Cloud–Edge–End Cooperation Networks

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 8417

Special Issue Editors

Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Interests: future networks; big data for networking; mobile edge computing
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
Interests: networking AI; networked control system; intelligent optimization
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: wireless security towards B5G/6G; physical layer security; AI empowered security

Special Issue Information

Dear Colleagues,

With the explosion of global mobile traffic, improving network quality of service and end-user quality of experience by effectively allocating heterogeneous network resources is a key issue that urgently requires a solution in future wireless networks. The collaborative allocation of cloud–edge–end resources can effectively overcome problems such as the high latency and traffic load caused by cloud computing. At the same time, it can avoid the problems that edge computing faces with constrained, heterogeneous and dynamic resources, thereby satisfying the differentiated service requirements of massive Internet applications and improving content distribution and resource utilization. However, realizing the collaborative allocation and intelligent optimization of cloud–edge–end resources in practical communication scenarios is challenging, and many important research problems remain open. This Special Issue seeks to explore the collaborative allocation and intelligent optimization of cloud–edge–end resources and invites novel contributions from researchers and practitioners.

This Special Issue aims to provide a forum for the latest research, innovations, and applications of resource allocation in cloud–edge–end cooperation environments, in order to bridge the gap between theory and applications. We are seeking high-quality original research papers on topics including, but not limited to:

  • Survey on the collaborative allocation and intelligent optimization of cloud–edge–end resources;
  • Resource allocation and mobility management of cloud–edge–end cooperation;
  • Machine learning and big-data-aided cloud–edge–end cooperation;
  • Cloud–edge–end cooperation for integrated terrestrial and aerial 6G networks;
  • Service-aware resource allocation for cloud–edge–end cooperation;
  • Cloud–edge–end cooperation for ultra-reliable low-latency communication;
  • QoS/QoE-aware resource allocation for cloud–edge–end cooperation;
  • Mobility-aware content caching for cloud–edge–end cooperation;
  • Cross-layer design, optimization and performance analysis for cloud–edge–end cooperation;
  • Cross-domain collaborative allocation for cloud–edge–end resources;
  • Advanced spatio-temporal prediction for content popularity;
  • Coordinated and complementary transformation of heterogeneous cloud–edge–end resources.

Prof. Dr. Chao Fang
Prof. Dr. Zhuwei Wang
Dr. Na Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud–edge–end resources
  • machine learning and big-data-aided
  • cloud–edge–end cooperation
  • 6G networks
  • low-latency communication
  • QoS/QoE-aware
  • mobility-aware

Published Papers (4 papers)

Research

18 pages, 890 KiB  
Article
Intelligent Computation Offloading Mechanism with Content Cache in Mobile Edge Computing
by Feixiang Li, Chao Fang, Mingzhe Liu, Ning Li and Tian Sun
Electronics 2023, 12(5), 1254; https://doi.org/10.3390/electronics12051254 - 06 Mar 2023
Cited by 1 | Viewed by 1376
Abstract
Edge computing is a promising technology that enables user equipment to share computing resources for task offloading. Given the characteristics of these computing resources, designing an efficient computation incentive mechanism with appropriate task offloading and resource allocation strategies is an essential issue. In this manuscript, we propose an intelligent computation offloading mechanism with content cache in mobile edge computing. First, we present the network framework for computation offloading with content cache in mobile edge computing. Then, by deriving necessary and sufficient conditions, an optimal contract is designed to obtain the joint task offloading, resource allocation, and computation strategy of the intelligent mechanism. Simulation results demonstrate the efficiency of the proposed approach.
(This article belongs to the Special Issue Resource Allocation in Cloud–Edge–End Cooperation Networks)
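The contract-based incentive mechanism itself is derived in the paper; as a rough illustration of the latency trade-off that edge content caching exploits, the sketch below compares local execution against offloading when the upload step is skipped on a cache hit. This is not the paper's mechanism, and all parameter names and values are illustrative assumptions.

```python
def offload_decision(data_bits, cycles_per_bit, f_local, f_edge,
                     uplink_bps, cached):
    """Cache-aware offloading rule (illustrative, not the article's contract
    mechanism): offload when edge execution, counting the upload only on a
    cache miss, finishes earlier than local execution."""
    local_delay = data_bits * cycles_per_bit / f_local          # seconds
    upload_delay = 0.0 if cached else data_bits / uplink_bps    # skipped on a cache hit
    edge_delay = upload_delay + data_bits * cycles_per_bit / f_edge
    return ("edge" if edge_delay < local_delay else "local",
            local_delay, edge_delay)

# toy usage: 2 MB task, 1 GHz device, 10 GHz edge CPU, 2 Mbps uplink
print(offload_decision(16e6, 300, 1e9, 10e9, 2e6, cached=False))  # -> local
print(offload_decision(16e6, 300, 1e9, 10e9, 2e6, cached=True))   # -> edge
```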

18 pages, 689 KiB  
Article
Task Offloading and Resource Allocation for Tasks with Varied Requirements in Mobile Edge Computing Networks
by Li Dong, Wenji He and Haipeng Yao
Electronics 2023, 12(2), 366; https://doi.org/10.3390/electronics12020366 - 10 Jan 2023
Cited by 3 | Viewed by 2088
Abstract
Edge computing enables devices with insufficient computing resources to offload their tasks to the edge for computing, improving the service experience. Some existing work has noticed that the data size of offloaded tasks plays a role in resource allocation shares, but has not delved further into how the data size of an offloaded task affects resource allocation. Among offloaded tasks, those with larger data sizes often consume a larger share of system resources, potentially even monopolizing system resources if the data size is large enough. As a result, tasks with small or regular sizes lose the opportunity to be offloaded to the edge due to their limited data size. To address this issue, we introduce the concept of an emergency factor to penalize tasks with immense sizes for monopolizing system resources, while supporting tasks with small sizes to contend for system resources. The joint offloading decision and resource allocation problem is formulated as a mixed-integer nonlinear programming (MINLP) problem and further decomposed into an offloading decision subproblem and a resource allocation subproblem. Using the KKT conditions, we design a bisection-search-based algorithm to find the optimal resource allocation scheme. Additionally, we propose a linear-search-based coordinate descent (CD) algorithm to identify the optimal offloading decision. Numerical results show that our proposed algorithm converges to the optimal scheme (with minimal delay) when tasks are of regular size. Moreover, when tasks of immense, small and regular sizes coexist in the system, our scheme can exclude tasks of immense size from edge resource allocation, while still enabling tasks of small size to be offloaded.
(This article belongs to the Special Issue Resource Allocation in Cloud–Edge–End Cooperation Networks)
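The full derivation (KKT conditions plus the bisection and coordinate-descent searches) is in the paper; the minimal sketch below only illustrates the bisection idea for a resource-allocation subproblem, searching over the Lagrange multiplier of the edge CPU budget so that the weighted-delay-optimal allocation exactly fits the capacity. The model, the function name allocate_edge_cpu, and all parameters are assumptions; the paper's emergency factor could enter through the per-task weights.

```python
import numpy as np

def allocate_edge_cpu(data_bits, weights, F_total, cycles_per_bit=500.0,
                      max_iter=100):
    """Bisection on the Lagrange multiplier lam for
        minimize sum_i w_i * d_i * c / f_i   s.t.   sum_i f_i <= F_total.
    Stationarity gives f_i(lam) = sqrt(w_i * d_i * c / lam), and the total
    allocation is decreasing in lam, so bisection finds the lam that uses
    exactly the edge CPU budget (illustrative model, not the paper's)."""
    d = np.asarray(data_bits, dtype=float)
    w = np.asarray(weights, dtype=float)

    def alloc(lam):
        return np.sqrt(w * d * cycles_per_bit / lam)

    lo, hi = 1e-12, 1.0
    while alloc(hi).sum() > F_total:       # grow upper bound until feasible
        hi *= 2.0
    for _ in range(max_iter):
        lam = 0.5 * (lo + hi)
        if alloc(lam).sum() > F_total:     # over budget -> raise the "price"
            lo = lam
        else:
            hi = lam
    return alloc(hi)

# toy usage: three offloaded tasks sharing a 10 GHz edge CPU
f = allocate_edge_cpu(data_bits=[2e6, 8e6, 1e6], weights=[1.0, 0.4, 1.0],
                      F_total=10e9)
print(f, f.sum())
```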

11 pages, 722 KiB  
Article
Decompose Auto-Transformer Time Series Anomaly Detection for Network Management
by Bo Wu, Chao Fang, Zhenjie Yao, Yanhui Tu and Yixin Chen
Electronics 2023, 12(2), 354; https://doi.org/10.3390/electronics12020354 - 10 Jan 2023
Cited by 2 | Viewed by 2516
Abstract
Time series anomaly detection through unsupervised methods has been an active research area in recent years due to its enormous potential for network management. The representation and reconstruction of time series have made extraordinary progress in existing works. However, time series are known to be complex in terms of their temporal dependency and stochasticity, which makes anomaly detection difficult. To this end, we propose a novel approach based on a decomposition auto-transformer network (DATN) for time series anomaly detection. The time series is decomposed into seasonal and trend components, and this decomposition is built in as a basic inner block of the deep model. With this design, transformers can decompose complex time series in a progressive manner. We also design an auto-transformer block that determines dependencies and aggregates representations at the sub-series level based on the seasonal and trend components. Moreover, the complex transformer decoder is replaced by a simple linear decoder, which makes the model more efficient. Extensive experiments on various public benchmarks demonstrate that our method achieves state-of-the-art performance.
(This article belongs to the Special Issue Resource Allocation in Cloud–Edge–End Cooperation Networks)
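The paper's exact DATN block design is not reproduced here; the sketch below only shows the kind of seasonal/trend split the abstract refers to, implemented as a moving-average decomposition module in PyTorch. The class name SeriesDecomp and the kernel size are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn

class SeriesDecomp(nn.Module):
    """Minimal trend/seasonal split: a moving average extracts the trend and
    the residual is treated as the seasonal part (a sketch of the kind of
    inner decomposition block the abstract describes)."""
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.kernel_size = kernel_size
        self.avg = nn.AvgPool1d(kernel_size, stride=1, padding=0)

    def forward(self, x: torch.Tensor):
        # x: (batch, length, channels); pad with edge values to keep length
        pad = (self.kernel_size - 1) // 2
        front = x[:, :1, :].repeat(1, pad, 1)
        back = x[:, -1:, :].repeat(1, self.kernel_size - 1 - pad, 1)
        padded = torch.cat([front, x, back], dim=1)
        trend = self.avg(padded.permute(0, 2, 1)).permute(0, 2, 1)
        seasonal = x - trend
        return seasonal, trend

# toy usage: decompose a noisy sine wave
t = torch.linspace(0, 8 * math.pi, 256)
x = (torch.sin(t) + 0.1 * torch.randn_like(t)).view(1, -1, 1)
seasonal, trend = SeriesDecomp(kernel_size=25)(x)
print(seasonal.shape, trend.shape)
```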

15 pages, 529 KiB  
Article
Robust Hierarchical Federated Learning with Anomaly Detection in Cloud-Edge-End Cooperation Networks
by Yujie Zhou, Ruyan Wang, Xingyue Mo, Zhidu Li and Tong Tang
Electronics 2023, 12(1), 112; https://doi.org/10.3390/electronics12010112 - 27 Dec 2022
Cited by 2 | Viewed by 1609
Abstract
Federated learning (FL) enables devices to collaborate on machine learning (ML) model training with distributed data while preserving privacy. However, traditional FL is inefficient and costly in cloud–edge–end cooperation networks, since the adopted classical client–server communication framework fails to consider the real network structure. Moreover, malicious attackers and malfunctioning clients may be hidden among the participants, harming the FL process through abnormal behaviours. To address the above challenges, we leverage cloud–edge–end cooperation to propose a robust hierarchical federated learning (R-HFL) framework that enhances the inherent system resistance to abnormal behaviours while improving communication efficiency in practical networks and keeping the advantages of traditional FL. Specifically, we introduce a hierarchical cloud–edge–end collaboration-based FL framework to reduce communication costs. For this framework, we design a detection mechanism, partial cosine similarity (PCS), to filter out adverse clients and improve performance; the proposed lightweight technique offers a high degree of computational parallelization. In addition, we theoretically discuss the influence of the proposed PCS on the convergence and stabilization of FL. Finally, the experimental results show that the proposed R-HFL consistently outperforms the baselines under malicious attacks in general cases, which further demonstrates the effectiveness of our scheme.
(This article belongs to the Special Issue Resource Allocation in Cloud–Edge–End Cooperation Networks)
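The precise definition of partial cosine similarity is given in the paper; the following sketch only illustrates the general idea of cosine-similarity-based client filtering before aggregation, scoring each client's update against a robust reference (here the coordinate-wise median) and averaging the clients that pass a threshold. The reference, the threshold, and the function name filter_and_aggregate are assumptions.

```python
import numpy as np

def filter_and_aggregate(client_updates, threshold=0.0):
    """Score each client's flattened model update by cosine similarity to the
    coordinate-wise median across clients and average only the clients above
    the threshold (a generic stand-in for the paper's PCS mechanism)."""
    U = np.stack([u.ravel() for u in client_updates])  # (n_clients, n_params)
    ref = np.median(U, axis=0)
    sims = U @ ref / (np.linalg.norm(U, axis=1) * np.linalg.norm(ref) + 1e-12)
    keep = sims > threshold
    if not keep.any():                                  # fall back to keeping all
        keep[:] = True
    return U[keep].mean(axis=0), keep

# toy usage: 4 honest clients plus 1 sign-flipping attacker
rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 0.01, 100) for _ in range(4)]
attacker = [-honest[0]]
agg, kept = filter_and_aggregate(honest + attacker)
print(kept)   # the attacker is expected to be filtered out
```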
