Special Issue "From Edge Computing to Distributed Cloud—A Novel Paradigm toward 6G Networks"

A special issue of Journal of Sensor and Actuator Networks (ISSN 2224-2708). This special issue belongs to the section "Network Services and Applications".

Deadline for manuscript submissions: closed (20 December 2020).

Special Issue Editors

Dr. Antonio Virdis
Guest Editor
Department of Information Engineering, Università di Pisa, Pisa, Italy
Interests: edge computing; wireless networks; mobile computing; performance evaluation; simulation
Dr. Dario Sabella
Guest Editor
Intel Deutschland GmbH, Munich, Germany
Interests: LTE; 5G; 6G; edge computing; distributed computing; energy efficiency; RRM

Special Issue Information

Dear Colleagues,

Edge computing is a consolidated paradigm, widely recognized as a key enabler of modern communication networks, that extends the power of cloud computing toward the network edge. By bringing intelligence closer to end users and devices, edge computing supports the massive diffusion of Internet of Things (IoT) deployments in modern scenarios such as smart environments (smart cities, smart buildings, etc.), intelligent transportation, and the Industrial IoT.

Moreover, this trend is evolving from edge computing toward the wider concept of distributed computing, characterized by the presence of multiple application instances on both the user and edge sides. The potential of edge computing (and in particular its evolution toward the distributed cloud) fits multiple modern communication technologies, ranging from cellular networks (5G, Beyond-5G, and even 6G) to wireless sensor and actuator networks, and paves the way to novel and even more pervasive IoT applications. As this technology is increasingly adopted in the above fields, new research challenges arise.

First, the distributed cloud can lead to novel computing architectures and systems that take advantage of the closeness between computing power and the end user, while also handling the distributed nature of the available resources. Serverless approaches to designing applications in distributed computing environments are thus promising tools to support this view. Second, to efficiently support interaction and cooperation among edge nodes, the network has to provide novel, cloud-native architectures, protocols, and resource-management systems that handle communication and computing jointly. Finally, the distributed cloud offers Artificial Intelligence a breeding ground in which to become even more pervasive. Approaches such as federated and distributed learning fit naturally on top of the distributed cloud, both to create novel services for the end user and to aid the management of networks, possibly leading to the emerging concept of autonomous networks.

The distributed cloud in its various facets is undoubtedly a hot topic in research, and thanks to its interdisciplinary nature it represents a strong driver for many scientific fields. In this Special Issue, we aim to collect research contributions from multiple fields, and we invite academic and industrial researchers in computer science and engineering, electrical engineering, and communication engineering, as well as ICT industry engineers and practitioners, to contribute original articles on all aspects of the distributed cloud, with a particular focus on the IoT context. Contributions may also consider aspects of the network–computing intersection, including how QoS can be leveraged, or studies of the QoS requirements of distributed applications.

The topics of interest include but are not limited to:

  • Edge-computing-based applications for the IoT;
  • Edge-aided computation offloading for the IoT;
  • Novel architectures and applications for distributed edge computing;
  • IoT-service migration at the network edge;
  • Edge-based support for distributed IoT applications;
  • Distributed and federated learning approaches for edge computing;
  • Support for network slicing at the network edge;
  • Edge-based support for URLLC and critical applications;
  • Mathematical models for edge computing;
  • Performance evaluation of edge computing systems.

Dr. Antonio Virdis
Dr. Dario Sabella
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Sensor and Actuator Networks is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • 5G mobile communication
  • 6G communication systems
  • edge computing
  • distributed computing
  • serverless
  • distributed learning
  • federated learning

Published Papers (5 papers)


Research

Article
Resource Management and Model Personalization for Federated Learning over Wireless Edge Networks
J. Sens. Actuator Netw. 2021, 10(1), 17; https://doi.org/10.3390/jsan10010017 - 23 Feb 2021
Abstract
Client and Internet of Things devices are increasingly equipped with the ability to sense, process, and communicate data with high efficiency. This is resulting in a major shift of machine learning (ML) computation toward the network edge. Distributed learning approaches such as federated learning, which move ML training to end devices, have emerged, promising lower latency and bandwidth costs and enhanced privacy of end users’ data. However, new challenges must be addressed that arise from the heterogeneity of the devices’ communication rates and compute capabilities and from the limited observability of the training data at each device. All these factors can significantly affect training performance in terms of overall accuracy, model fairness, and convergence time. We present compute-communication and data importance-aware resource management schemes optimizing these metrics and evaluate the training performance on benchmark datasets. We also develop a federated meta-learning solution, based on task similarity, that serves as a sample-efficient initialization for federated learning and improves model personalization and generalization across non-IID (not independent and identically distributed) data. We present experimental results on benchmark federated learning datasets to highlight the performance gains of the proposed methods in comparison to the well-known federated averaging algorithm and its variants.
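The federated averaging (FedAvg) baseline against which the paper compares can be sketched in a few lines: each round, every client trains locally on its private data and the server averages the resulting models, weighted by dataset size. The least-squares objective, learning rate, and round counts below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a least-squares objective (a stand-in for any local loss)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fedavg(clients, w0, rounds=20):
    """Federated averaging: clients never share raw data, only model
    parameters; the server averages them weighted by dataset size."""
    w = w0
    n_total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        w = sum((len(y) / n_total) * local_update(w, X, y)
                for X, y in clients)
    return w

# Toy run: three clients hold disjoint samples of the same linear task.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = [(X, X @ w_true)
           for X in (rng.normal(size=(40, 2)) for _ in range(3))]
w = fedavg(clients, np.zeros(2))
```

With IID client data as above, the averaged model converges close to the true weights; the paper's point is precisely that under non-IID data and heterogeneous device capabilities this plain average degrades, motivating resource-aware and personalized variants.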

Article
Virtualizing AI at the Distributed Edge towards Intelligent IoT Applications
J. Sens. Actuator Netw. 2021, 10(1), 13; https://doi.org/10.3390/jsan10010013 - 08 Feb 2021
Cited by 1
Abstract
Several Internet of Things (IoT) applications are booming that rely on advanced artificial intelligence (AI) and, in particular, machine learning (ML) algorithms to assist users and make decisions on their behalf in a large variety of contexts, such as smart homes, smart cities, and smart factories. Although the traditional approach is to deploy such compute-intensive algorithms in the centralized cloud, the recent proliferation of low-cost, AI-powered microcontrollers and consumer devices paves the way for spreading intelligence pervasively along the cloud-to-things continuum. The takeoff of this promising vision may be hindered by the resource constraints of IoT devices and by the heterogeneity of (mostly proprietary) AI-embedded software and hardware platforms. In this paper, we propose a solution for distributed AI deployment at the deep edge, which lays its foundation in the IoT virtualization concept. We design a virtualization layer hosted at the network edge that is in charge of the semantic description of AI-embedded IoT devices and can hence expose as well as augment their cognitive capabilities in order to feed intelligent IoT applications. The proposal has been devised with the twofold aim of (i) relieving the pressure on constrained devices solicited by multiple parties interested in accessing their generated data and inferences, and (ii) targeting interoperability among AI-powered platforms. A Proof-of-Concept (PoC) is provided to showcase the viability and advantages of the proposed solution.

Article
Leveraging Stack4Things for Federated Learning in Intelligent Cyber Physical Systems
J. Sens. Actuator Netw. 2020, 9(4), 59; https://doi.org/10.3390/jsan9040059 - 18 Dec 2020
Abstract
During the last decade, the Internet of Things acted as a catalyst for the big data phenomenon. As a result, modern edge devices can access a huge amount of data that can be exploited to build useful services. In such a context, artificial intelligence plays a key role in developing intelligent systems (e.g., intelligent cyber physical systems) that create a connecting bridge with the physical world. However, as time goes by, machine and deep learning applications are becoming more complex, requiring increasing amounts of data and training time, which makes centralized approaches unsuitable. Federated learning is an emerging paradigm that enables edge devices to cooperate in learning a shared model (while keeping their training data private), thereby abating the training time. Although federated learning is a promising technique, its implementation is difficult and brings many challenges. In this paper, we present an extension of Stack4Things, a cloud platform developed in our department; leveraging its functionalities, we enabled the deployment of federated learning on edge devices regardless of their heterogeneity. Experimental results show a comparison with a centralized approach and demonstrate the effectiveness of the proposed approach in terms of both training time and model accuracy.

Article
Exploiting Virtual Machine Commonality for Improved Resource Allocation in Edge Networks
J. Sens. Actuator Netw. 2020, 9(4), 58; https://doi.org/10.3390/jsan9040058 - 13 Dec 2020
Abstract
5G systems are putting increasing pressure on telecom operators to enhance users’ experience, leading to the development of more techniques aimed at improving service quality. However, it is essential to take into consideration not only users’ demands but also service providers’ interests. In this work, we explore policies that satisfy both views. We first formulate a mathematical model to compute the End-to-End (E2E) delay experienced by mobile users in Multi-access Edge Computing (MEC) environments. Then, dynamic Virtual Machine (VM) allocation policies are presented, with the objective of satisfying mobile users’ Quality of Service (QoS) requirements while optimally using cloud resources by exploiting VM resource reuse. Thus, maximizing the service providers’ profit is ensured while providing the service required by users. We further demonstrate the benefits of these policies in comparison with previous works.

Article
End-to-End Performance Evaluation of MEC Deployments in 5G Scenarios
J. Sens. Actuator Netw. 2020, 9(4), 57; https://doi.org/10.3390/jsan9040057 - 11 Dec 2020
Cited by 5
Abstract
Multi-access edge computing (MEC) promises to deliver localized computing power and storage. Coupled with low-latency 5G radio access, this enables the creation of high added-value services for mobile users, such as in-vehicle infotainment or remote driving. The performance of these services, as well as their scalability, will however depend on how MEC is deployed in 5G systems. This paper evaluates different MEC deployment options, coherent with the respective 5G migration phases, using an accurate and comprehensive end-to-end (E2E) system simulation model (exploiting Simu5G for radio access and Intel CoFluent for the core network and MEC), taking into account user-related metrics such as response time and MEC latency. Our results show that 4G radio access is going to be a bottleneck, preventing MEC services from scaling up. On the other hand, the introduction of 5G will allow a considerably higher penetration of MEC services.
