Convergence of Edge Computing and Next Generation Networking

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 7376

Special Issue Editors


Dr. Armir Bujari
Guest Editor
Department of Computer Science and Engineering, University of Bologna, 40126 Bologna, Italy
Interests: edge computing; cloud-to-thing continuum; industrial IoT

Dr. Gabriele Elia
Guest Editor
Head of Technology Transformation, Standardization and IPR at TIM, 4455 Roma, Italy
Interests: telecommunication networks

Prof. Dr. Johann M. Marquez-Barja
Guest Editor
Faculty of Applied Engineering, Universiteit Antwerpen, 2000 Antwerpen, Belgium
Interests: 5G-Advanced heterogeneous dense-cell architectures; elastic and flexible future wireless networks, their integration with and impact on optical networks; IoT clustering; virtualization; provisioning and dynamic resource allocation towards dynamically converged networks; vehicular networks, mobility, and handover within smart cities

Special Issue Information

Dear Colleagues,

Fulfilling the edge computing promise of low latency, high-bandwidth communication, low energy consumption, and (potentially) enhanced data security and privacy is expected to drive a significant push towards edge application/service deployment. Edge computing not only advances the network architecture, but also opens room for innovation in service patterns, provided that application/service Quality of Service (QoS) constraints can be met throughout the service lifetime. From a network organization viewpoint, several hierarchical layers of edge nodes with different capabilities can be deployed, distributing resources toward the end user to support the execution of applications and their data; this will give rise to a more fluid edge network model.

At the same time, with the rapid development of information technology, networks are undergoing unprecedented transformations, as both the number of connections and the volume of data continue to grow enormously. To keep networks fit for these ever-growing, diverse needs, many novel networking technologies have been proposed, covering both the access and the backbone network. For example, the relatively recent Open Radio Access Network (O-RAN) initiative, which targets an open, virtualized, and interoperable RAN, advocates the integration of AI and machine learning to enable smarter network management, leading to more efficient resource utilization, predictive maintenance, and enhanced security features. Software-Defined Networking (SDN) decouples the control plane from the data plane to enable more flexible and customized network flow control. The Information-Centric Network (ICN) evolves the current Internet infrastructure from a host-centric paradigm to a data- or service-name-centric one. These emerging networking technologies are widely regarded as key enablers of future networks.
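To make the SDN idea concrete: the control plane holds the global network view and pushes forwarding rules down, while switches only match packets against their installed flow tables. The following is a deliberately schematic Python sketch of that split (all classes and rule formats here are hypothetical and not tied to any real controller framework):

```python
# Schematic sketch of SDN's control/data-plane split (hypothetical classes,
# not a real controller API): a logically centralized controller computes
# forwarding decisions; switches only match packets against installed rules.

class FlowRule:
    def __init__(self, match, out_port, priority=0):
        self.match = match          # e.g., {"dst_ip": "10.0.0.2"}
        self.out_port = out_port    # action: forward to this port
        self.priority = priority

class Switch:
    """Data plane: matches packets against its flow table, no local logic."""
    def __init__(self):
        self.flow_table = []

    def install(self, rule):
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r.priority)

    def forward(self, packet):
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.out_port
        return None  # table miss: in practice, punt to the controller

class Controller:
    """Control plane: global view, pushes rules down to switches."""
    def __init__(self, topology):
        self.topology = topology  # e.g., {("s1", "10.0.0.2"): 3}

    def on_table_miss(self, switch, switch_id, packet):
        port = self.topology.get((switch_id, packet["dst_ip"]))
        if port is not None:
            switch.install(FlowRule({"dst_ip": packet["dst_ip"]}, port))
        return port

s1 = Switch()
ctrl = Controller({("s1", "10.0.0.2"): 3})
pkt = {"dst_ip": "10.0.0.2"}
assert s1.forward(pkt) is None        # table miss: no rule installed yet
ctrl.on_table_miss(s1, "s1", pkt)     # controller reacts, installs rule
assert s1.forward(pkt) == 3           # subsequent packets hit the rule
```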

Service provisioning at the edge, however, is associated with numerous challenges that are unique to distributed edge cloud environments. A fundamental limitation is that, in contrast to traditional cloud platforms and data centers, edge clouds have limited resources and may not always be able to satisfy application demands. It is clear that merely introducing support for the execution of applications at edge nodes (e.g., through containerization) is not sufficient. The seamless integration of all levels of the infrastructure, together with novel management approaches that coordinate and orchestrate its (virtualized) resources vertically and horizontally while ensuring QoS, is of paramount importance.

This Special Issue aims to compile novel research on algorithmic, architectural, and system issues, as well as on experimental aspects that advance the state of the art in the design of integrated resource management in future decentralized edge networks. Prospective authors are invited to submit original, high-quality contributions in areas including, but not limited to, the following:

  • AI techniques for resource management and distributed control;
  • Multi-scale, closed-loop control techniques for edge-cloud networks;
  • Analysis of fundamental trade-offs in edge systems, including energy efficiency, latency, overhead, and cost;
  • Algorithmic approaches for end-to-end network slicing and SLA assurance;
  • Approaches for cross-domain, cross-edge resource federation and trustworthy cooperation;
  • Data management in decentralized and federated edge networks;
  • Application acceleration use cases, including the metaverse;
  • Novel use cases and applications for converged edge-cloud networks;
  • Security analysis and solutions for decentralized edge networks;
  • Data-driven techniques to enhance network security;
  • Design and optimization of edge-cloud solutions for private networks;
  • Workload management models beyond containerization;
  • Large-scale testbed design and trial.

You may also choose to submit to our Joint Special Issue in Network.

 

Dr. Armir Bujari
Dr. Gabriele Elia
Prof. Dr. Johann M. Marquez-Barja
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • network management
  • federated edge networks
  • network programmability
  • cloud continuum
  • AI for the network
  • network security
  • network slicing
  • network intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

17 pages, 5086 KiB  
Article
A Transfer Reinforcement Learning Approach for Capacity Sharing in Beyond 5G Networks
by Irene Vilà, Jordi Pérez-Romero and Oriol Sallent
Future Internet 2024, 16(12), 434; https://doi.org/10.3390/fi16120434 - 21 Nov 2024
Viewed by 664
Abstract
The use of Reinforcement Learning (RL) techniques has been widely addressed in the literature to cope with capacity sharing in 5G Radio Access Network (RAN) slicing. These algorithms rely on a training process to learn an optimal capacity sharing decision-making policy, which is later applied to the RAN environment during the inference stage. When relevant changes occur in the RAN, such as the deployment of new cells in the network, RL-based capacity sharing solutions require a re-training process to update the optimal decision-making policy, which may take a long time. To accelerate this process, this paper proposes a novel Transfer Learning (TL) approach for RL-based capacity sharing solutions in multi-cell scenarios that can be implemented following the Open RAN (O-RAN) architecture and exploits the availability of computing resources at the edge to conduct the training/inference processes. The proposed approach transfers the weights of the previously learned policy to the new policy to be used after the addition of new cells. The performance assessment of the TL solution highlights its capability to reduce the training duration of the policies when new cells are added. Considering that the roll-out of 5G networks will continue for several years, TL can enhance the practicality and feasibility of applying RL-based solutions for capacity sharing.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
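Purely to illustrate the weight-transfer idea described in the abstract above (a minimal sketch, not the authors' implementation; the architecture and layer sizes are hypothetical), the following PyTorch snippet copies every parameter tensor whose shape is unchanged when the cell count grows, leaving the resized input/output layers to start from a fresh initialization:

```python
# Minimal sketch of policy weight transfer when new cells are added
# (illustrative only; layer sizes and architecture are hypothetical,
# not taken from the paper).
import torch
import torch.nn as nn

def make_policy(n_cells, hidden=64):
    # State and action dimensions grow with the number of cells.
    return nn.Sequential(
        nn.Linear(n_cells * 4, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, n_cells),  # one capacity-share action per cell
    )

def transfer_weights(old_policy, new_policy):
    """Copy every parameter tensor whose shape is unchanged; layers whose
    shape changed (input/output layers) keep their fresh initialization."""
    old_state = old_policy.state_dict()
    new_state = new_policy.state_dict()
    for name, tensor in old_state.items():
        if name in new_state and new_state[name].shape == tensor.shape:
            new_state[name] = tensor
    new_policy.load_state_dict(new_state)

old_policy = make_policy(n_cells=3)   # ...assume trained to convergence
new_policy = make_policy(n_cells=4)   # a new cell was deployed
transfer_weights(old_policy, new_policy)
```

Re-training then resumes from the transferred hidden layers rather than from scratch, which is where the reported reduction in training duration comes from.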

24 pages, 1273 KiB  
Article
Flexible Hyper-Distributed IoT–Edge–Cloud Platform for Real-Time Digital Twin Applications on 6G-Intended Testbeds for Logistics and Industry
by Maria Crespo-Aguado, Raul Lozano, Fernando Hernandez-Gobertti, Nuria Molner and David Gomez-Barquero
Future Internet 2024, 16(11), 431; https://doi.org/10.3390/fi16110431 - 20 Nov 2024
Cited by 2 | Viewed by 1861
Abstract
This paper presents the design and development of a flexible hyper-distributed IoT–Edge–Cloud computing platform for real-time Digital Twins in real logistics and industrial environments, intended as a novel living lab and testbed for future 6G applications. It expands the limited capabilities of IoT devices with extended Cloud and Edge computing functionalities, creating an IoT–Edge–Cloud continuum platform composed of multiple stakeholder solutions, in which vertical application developers can take full advantage of the computing resources of the infrastructure. The platform is built together with a private 5G network to connect machines and sensors at large scale. An end-to-end intelligent orchestrator uses artificial intelligence and machine learning to allocate computing resources for real-time services, and real-time distributed analytic tools leverage Edge computing platforms to support different types of Digital Twin applications for logistics and industry, such as immersive remote driving, each with specific characteristics and features. Performance evaluations demonstrated the platform's capability to support the high-throughput communications required by Digital Twins, achieving user-experienced rates close to the theoretical maxima: up to 552 Mb/s for the downlink and 87.3 Mb/s for the uplink in the n78 frequency band. Moreover, the platform's support for Digital Twins was validated via QoE assessments conducted on an immersive remote driving prototype, which demonstrated high levels of user satisfaction in key dimensions such as presence, engagement, control, sensory integration, and cognitive load.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
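The abstract does not detail the orchestrator's placement logic, so purely as an illustration of what latency-aware allocation across an IoT-Edge-Cloud continuum can look like, here is a greedy Python sketch (node capacities, latencies, and service names are all hypothetical, not taken from the paper):

```python
# Illustrative greedy placement across an IoT-Edge-Cloud continuum
# (hypothetical node/service parameters; not the platform's algorithm).

nodes = [
    {"name": "edge-1",  "cpu_free": 8,  "latency_ms": 5},
    {"name": "edge-2",  "cpu_free": 4,  "latency_ms": 8},
    {"name": "cloud-1", "cpu_free": 64, "latency_ms": 40},
]

services = [
    {"name": "remote-driving-dt", "cpu": 6,  "max_latency_ms": 10},
    {"name": "fleet-analytics",   "cpu": 20, "max_latency_ms": 100},
]

def place(service, nodes):
    """Pick the feasible node with the lowest latency; real-time Digital
    Twins stay at the edge, latency-tolerant jobs fall back to the cloud."""
    feasible = [n for n in nodes
                if n["cpu_free"] >= service["cpu"]
                and n["latency_ms"] <= service["max_latency_ms"]]
    if not feasible:
        return None  # would trigger rejection or SLA renegotiation
    best = min(feasible, key=lambda n: n["latency_ms"])
    best["cpu_free"] -= service["cpu"]
    return best["name"]

for s in services:
    print(s["name"], "->", place(s, nodes))
# remote-driving-dt -> edge-1  (tight latency bound keeps it at the edge)
# fleet-analytics   -> cloud-1 (CPU-heavy but latency-tolerant)
```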

20 pages, 6757 KiB  
Article
A Task Offloading and Resource Allocation Strategy Based on Multi-Agent Reinforcement Learning in Mobile Edge Computing
by Guiwen Jiang, Rongxi Huang, Zhiming Bao and Gaocai Wang
Future Internet 2024, 16(9), 333; https://doi.org/10.3390/fi16090333 - 11 Sep 2024
Cited by 2 | Viewed by 1759
Abstract
Task offloading and resource allocation is a research hotspot in cloud-edge collaborative computing. Much existing research adopts single-agent reinforcement learning to solve this problem, which suffers from defects such as low robustness, a large decision space, and the neglect of delayed rewards. In view of these deficiencies, this paper constructs a cloud-edge collaborative computing model with associated task queue, delay, and energy consumption models, and formulates a joint optimization problem for task offloading and resource allocation under multiple constraints. To solve this joint optimization problem, the paper designs a decentralized offloading and scheduling scheme based on "task-oriented" multi-agent reinforcement learning. In this scheme, we present information synchronization protocols and offloading scheduling rules and use edge servers as agents to construct a multi-agent system based on the Actor–Critic framework. To handle delayed rewards, the offloading and scheduling problem is modeled as a "task-oriented" Markov decision process, which abandons the commonly used equidistant time-slot model in favor of dynamic, parallel slots aligned with task processing times. Finally, an offloading decision algorithm, TOMAC-PPO, is proposed. The algorithm applies proximal policy optimization to the multi-agent system and combines it with a Transformer neural network model to memorize and predict network state information. Experimental results show that the algorithm converges faster and can effectively reduce service cost, energy consumption, and task drop rate under high load and high failure rates. For example, the proposed TOMAC-PPO can reduce the average cost by 19.4% to 66.6% compared to other offloading schemes under the same network load. In addition, with 50 users, the drop rate of some baseline algorithms reaches 62.5% for critical tasks, whereas TOMAC-PPO's is only 5.5%.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
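As a rough, self-contained sketch of the PPO machinery the abstract builds on, the following snippet applies the standard clipped surrogate objective to an offloading policy. The state dimensions, reward source, and network sizes are hypothetical, and the paper's Transformer state encoder and task-oriented time slots are omitted for brevity:

```python
# Sketch of one PPO update for an edge-server agent choosing offload targets
# (hypothetical dimensions/rewards; the paper's Transformer encoder and
# "task-oriented" time slots are omitted).
import torch
import torch.nn as nn

N_TARGETS = 4   # e.g., process locally, two neighbor edges, or the cloud
STATE_DIM = 8   # e.g., queue lengths, channel state, task size, ...

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                      nn.Linear(64, N_TARGETS))
critic = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                       nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()),
                       lr=3e-4)

def ppo_update(states, actions, old_logp, returns, clip=0.2):
    """One clipped-surrogate PPO step on a batch of offloading decisions."""
    dist = torch.distributions.Categorical(logits=actor(states))
    logp = dist.log_prob(actions)
    values = critic(states).squeeze(-1)
    advantage = (returns - values).detach()

    ratio = torch.exp(logp - old_logp)               # pi_new / pi_old
    surrogate = torch.min(ratio * advantage,
                          torch.clamp(ratio, 1 - clip, 1 + clip) * advantage)
    loss = -surrogate.mean() + 0.5 * (returns - values).pow(2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

# Fake rollout: in the real system these would come from executing
# offloading decisions and observing delay/energy-based rewards.
states = torch.randn(32, STATE_DIM)
actions = torch.randint(0, N_TARGETS, (32,))
old_logp = torch.distributions.Categorical(
    logits=actor(states)).log_prob(actions).detach()
returns = torch.randn(32)
ppo_update(states, actions, old_logp, returns)
```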

19 pages, 1186 KiB  
Article
PrismParser: A Framework for Implementing Efficient P4-Programmable Packet Parsers on FPGA
by Parisa Mashreghi-Moghadam, Tarek Ould-Bachir and Yvon Savaria
Future Internet 2024, 16(9), 307; https://doi.org/10.3390/fi16090307 - 27 Aug 2024
Viewed by 1010
Abstract
The increasing complexity of modern networks and their evolving needs demand flexible, high-performance packet processing solutions. The P4 language excels at specifying packet processing in software-defined networks (SDNs). Field-programmable gate arrays (FPGAs) are ideal targets for P4-based packet parsers due to their reconfigurability and ability to handle data transmitted at high speed. This paper introduces three FPGA-based P4-programmable packet parsing architectures, called base, overlay, and pipeline, that translate P4 specifications into adaptable hardware implementations, each optimized for a different performance point. As modern network infrastructures evolve, the need for multi-tenant environments becomes increasingly critical. Multi-tenancy allows multiple independent users or organizations to share the same physical network resources while maintaining isolation and customized configurations. The rise of 5G and cloud computing has accelerated the demand for network slicing and virtualization technologies, enabling efficient resource allocation and management for multiple tenants. By leveraging P4-programmable packet parsers on FPGAs, our framework addresses these challenges by providing flexible and scalable solutions for multi-tenant network environments. The base parser offers a simple design for essential packet parsing, using minimal resources for high-speed processing. The overlay parser extends the base design for parallel processing, supporting various bus sizes and throughputs. The pipeline parser boosts throughput by segmenting parsing into multiple stages. The efficiency of the proposed approaches is evaluated through detailed resource consumption metrics measured on an Alveo U280 board, demonstrating throughputs of 15.2 Gb/s for the base design, 15.2 to 64.42 Gb/s for the overlay design, and up to 282 Gb/s for the pipelined design. These results demonstrate high performance across varying throughput requirements. The framework yields streaming packet parsers directly from P4 programs while ensuring low latency and high throughput, supporting parsing graphs with up to seven transitioning nodes and four connections between nodes. The functionality of the parsers was tested on enterprise networks, a firewall, and a 5G Access Gateway Function graph.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
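To make concrete what a P4 parse graph expresses, here is an illustrative analogue in Python (kept in Python for consistency with the other sketches rather than written in P4): each state extracts one header and transitions on a select field, which is the kind of graph the base, overlay, and pipeline designs map to hardware. The Ethernet/IPv4 field offsets below are standard; the toy graph itself is not taken from the paper.

```python
# A P4-style parse graph as a Python state machine (illustrative analogue,
# not P4 code): each state extracts one header, then transitions on a
# select field, as in an Ethernet -> IPv4 -> TCP/UDP parse graph.
import struct

def parse_ethernet(pkt, headers):
    headers["eth"] = {"dst": pkt[0:6], "src": pkt[6:12],
                      "ethertype": struct.unpack("!H", pkt[12:14])[0]}
    # transition select(ethertype)
    return ("ipv4", pkt[14:]) if headers["eth"]["ethertype"] == 0x0800 \
        else ("accept", pkt[14:])

def parse_ipv4(pkt, headers):
    ihl = (pkt[0] & 0x0F) * 4          # header length in bytes
    headers["ipv4"] = {"proto": pkt[9], "src": pkt[12:16], "dst": pkt[16:20]}
    nxt = {6: "tcp", 17: "udp"}.get(headers["ipv4"]["proto"], "accept")
    return (nxt, pkt[ihl:])

def parse_l4(pkt, headers):
    headers["l4"] = {"sport": struct.unpack("!H", pkt[0:2])[0],
                     "dport": struct.unpack("!H", pkt[2:4])[0]}
    return ("accept", pkt)

STATES = {"ethernet": parse_ethernet, "ipv4": parse_ipv4,
          "tcp": parse_l4, "udp": parse_l4}

def parse(pkt):
    state, headers = "ethernet", {}
    while state != "accept":
        state, pkt = STATES[state](pkt, headers)
    return headers

# Minimal demo: zeroed MACs/IPs, ethertype 0x0800, IHL=5, proto 6 (TCP).
eth  = bytes(12) + b"\x08\x00"
ipv4 = b"\x45" + bytes(8) + b"\x06" + bytes(10)
tcp  = (1234).to_bytes(2, "big") + (80).to_bytes(2, "big")
print(parse(eth + ipv4 + tcp))
```

A pipelined FPGA implementation essentially unrolls such a state machine into successive hardware stages, which is why the pipeline design reaches far higher throughput than the sequential base design.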
