Scalable and Distributed Cloud Continuum Orchestration for Next-Generation IoT Applications: Latest Advances and Prospects

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (31 January 2025) | Viewed by 19128

Special Issue Editors


Dr. Dimitrios Dechouniotis
Guest Editor
Department of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
Interests: software-defined networks; cognitive radio networks; IoT; big data; social network analysis; recommender systems

Dr. Ioannis Dimolitsas
Guest Editor
School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
Interests: computer networking; network optimisation; cloud computing; edge computing; multi-criteria decision-making

Special Issue Information

Dear Colleagues,

With the advent of the Internet of Things (IoT), the centralized cloud computing service delivery paradigm has been gradually transformed into a cloud continuum that includes edge and fog computing and heterogeneous IoT devices with varying computing and power capabilities. At the same time, the dawn of the 5G era requires advanced orchestration solutions that guarantee the requirements of time-/mission-critical IoT-enabled applications.

There remain, however, several challenges to be addressed. The orchestration of such complex applications requires, for example, heterogeneous resources spread across the cloud continuum. The core advantage of edge computing is the placement of computational resources at the network edge to alleviate the computation burden of IoT devices, which are themselves increasingly equipped with enhanced computational capabilities. In this context, the orchestration of such applications requires solutions for distributed service embedding, task offloading, resource autoscaling, and service migration that operate in real time and at scale. With the evolution of the IoT and the advent of 5G/6G and massive machine-type communications, extremely dense networks are expected to emerge, placing even more strain on the available infrastructure across the cloud continuum.
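As a concrete illustration of one such decision, the following minimal Python sketch estimates where a single task completes fastest across the continuum, trading transfer time against execution time; the nodes, capacities, and task sizes are purely illustrative assumptions, not drawn from any specific system.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_cycles_per_sec: float   # compute capacity of this placement target
    uplink_mbps: float          # bandwidth from the device to this node

def completion_time(task_cycles: float, input_mbits: float, node: Node) -> float:
    """Estimated latency = transfer time + execution time (queueing ignored)."""
    transfer = input_mbits / node.uplink_mbps
    execute = task_cycles / node.cpu_cycles_per_sec
    return transfer + execute

# Illustrative placement targets across the device-edge-cloud continuum.
local = Node("device", cpu_cycles_per_sec=1e9,  uplink_mbps=float("inf"))
edge  = Node("edge",   cpu_cycles_per_sec=1e10, uplink_mbps=100.0)
cloud = Node("cloud",  cpu_cycles_per_sec=1e11, uplink_mbps=20.0)

task_cycles, input_mbits = 5e9, 80.0
best = min((local, edge, cloud),
           key=lambda n: completion_time(task_cycles, input_mbits, n))
print(f"offload to: {best.name}")
```

Real orchestrators must of course add queueing, energy, and multi-tenancy constraints on top of such a latency estimate, which is exactly where the solicited contributions come in.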

With this context in mind, this Special Issue is soliciting novel conceptual, theoretical, and experimental contributions to this set of currently unresolved challenges in the area of the cloud continuum, the IoT, and AI. The topics of interest include, but are not limited to:

  • Resource allocation and scheduling in the cloud continuum;
  • Virtual network embedding in the cloud continuum;
  • Scalability issues in the cloud continuum;
  • Novel architectures in the cloud continuum;
  • AI for QoS management in the cloud continuum;
  • Task offloading in the cloud continuum;
  • Energy sustainability in the cloud continuum;
  • Digital twins in the cloud continuum;
  • Network/resource monitoring in the cloud continuum;
  • Computation and network infrastructure resilience and reliability;
  • Virtualization solutions for distributed application deployment in the cloud continuum;
  • Security issues in the cloud continuum;
  • Architectures for digital forensics;
  • Data analytics, traffic analysis, and classification in the cloud continuum;
  • Reports on testbeds and experimental facilities.

Dr. Dimitrios Dechouniotis
Dr. Ioannis Dimolitsas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • IoT 
  • AI
  • cloud computing 
  • edge computing 
  • fog computing
  • 5G/6G
  • digital twins
  • distributed service embedding
  • task offloading
  • resource autoscaling
  • service migration

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)

Research

24 pages, 11264 KiB  
Article
Cache Aging with Learning (CAL): A Freshness-Based Data Caching Method for Information-Centric Networking on the Internet of Things (IoT)
by Nemat Hazrati, Sajjad Pirahesh, Bahman Arasteh, Seyed Salar Sefati, Octavian Fratu and Simona Halunga
Future Internet 2025, 17(1), 11; https://doi.org/10.3390/fi17010011 - 1 Jan 2025
Viewed by 941
Abstract
Information-centric networking (ICN) changes the way data are accessed by focusing on the content rather than the location of devices. In this model, each piece of data has a unique name, making it accessible directly by name. This approach suits the Internet of Things (IoT), where data generation and real-time processing are fundamental. Traditional host-based communication methods are less efficient for the IoT, making ICN a better fit. A key advantage of ICN is in-network caching, which temporarily stores data across various points in the network. This caching improves data access speed, minimizes retrieval time, and reduces overall network traffic by making frequently accessed data readily available. However, IoT systems involve constantly updating data, which requires managing data freshness while also ensuring their validity and processing accuracy. The interactions with cached data, such as updates, validations, and replacements, are crucial in optimizing system performance. This research introduces an ICN-IoT method to manage and process data freshness in ICN for the IoT. It optimizes network traffic by sharing only the most current and valid data, reducing unnecessary transfers. Routers in this model calculate data freshness, assess its validity, and perform cache updates based on these metrics. Simulation results across four models show that this method enhances cache hit ratios, reduces traffic load, and improves retrieval delays, outperforming similar methods. The proposed method uses an artificial neural network to make predictions. These predictions closely match the actual values, with a low error margin of 0.0121. This precision highlights its effectiveness in maintaining data currentness and validity while reducing network overhead.
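As an illustration of the general idea behind freshness-based in-network caching (not the authors' CAL implementation), a minimal Python sketch of a name-keyed cache that evicts entries whose freshness lifetime has expired might look as follows; the content names and lifetimes are assumptions.

```python
import time

class FreshnessCache:
    """Toy ICN-style cache keyed by content name, with per-item freshness lifetimes."""

    def __init__(self):
        self._store = {}  # name -> (data, stored_at, lifetime_s)

    def put(self, name, data, lifetime_s):
        self._store[name] = (data, time.time(), lifetime_s)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None                      # cache miss
        data, stored_at, lifetime_s = entry
        if time.time() - stored_at > lifetime_s:
            del self._store[name]            # stale: evict and treat as a miss
            return None
        return data                          # fresh hit

cache = FreshnessCache()
cache.put("/sensor/42/temp", 21.7, lifetime_s=5.0)
print(cache.get("/sensor/42/temp"))  # within the lifetime -> 21.7
```

CAL goes further by learning the lifetime values (via an artificial neural network) instead of fixing them statically, which is what drives the reported cache-hit and traffic improvements.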

16 pages, 1308 KiB  
Article
Evaluating DL Model Scaling Trade-Offs During Inference via an Empirical Benchmark Analysis
by Demetris Trihinas, Panagiotis Michael and Moysis Symeonides
Future Internet 2024, 16(12), 468; https://doi.org/10.3390/fi16120468 - 13 Dec 2024
Viewed by 995
Abstract
With generative Artificial Intelligence (AI) capturing public attention, the appetite of the technology sector for larger and more complex Deep Learning (DL) models is continuously growing. Traditionally, the focus in DL model development has been on scaling the neural network’s foundational structure to increase computational complexity and enhance the representational expressiveness of the model. However, with recent advancements in edge computing and 5G networks, DL models are now aggressively being deployed and utilized across the cloud–edge–IoT continuum for the realization of in situ intelligent IoT services. This paradigm shift introduces a growing need for AI practitioners, as a focus on inference costs, including latency, computational overhead, and energy efficiency, is long overdue. This work presents a benchmarking framework designed to assess DL model scaling across three key performance axes during model inference: classification accuracy, computational overhead, and latency. The framework’s utility is demonstrated through an empirical study involving various model structures and variants, as well as publicly available datasets for three popular DL use cases covering natural language understanding, object detection, and regression analysis.
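For readers who want a feel for the latency axis of such a benchmark, a minimal sketch using PyTorch (an assumption; the abstract does not state the paper's framework choice) could look like this: warm up the model, time repeated single-input inferences, and report the median.

```python
import time
import torch

def benchmark_latency(model, example_input, warmup=10, runs=100):
    """Median per-inference latency in milliseconds (CPU timing, batch size 1)."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):              # warm caches / lazy initialization
            model(example_input)
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            model(example_input)
            timings.append((time.perf_counter() - start) * 1e3)
    timings.sort()
    return timings[len(timings) // 2]

# Toy stand-in model; any trained network would be benchmarked the same way.
model = torch.nn.Sequential(torch.nn.Linear(128, 256),
                            torch.nn.ReLU(),
                            torch.nn.Linear(256, 10))
print(f"{benchmark_latency(model, torch.randn(1, 128)):.2f} ms")
```

A full framework like the one described would add accuracy and computational-overhead measurements alongside this timing loop.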

16 pages, 725 KiB  
Article
Virtualization vs. Containerization, a Comparative Approach for Application Deployment in the Computing Continuum Focused on the Edge
by Hamish Sturley, Augustin Fournier, Andoni Salcedo-Navarro, Miguel Garcia-Pineda and Jaume Segura-Garcia
Future Internet 2024, 16(11), 427; https://doi.org/10.3390/fi16110427 - 19 Nov 2024
Viewed by 1281
Abstract
With the emergence of containerization 10 years ago, we saw a compact, convenient, and portable way of running apps alongside virtualization. The major difference is in the architecture. Containers share the kernel of the host and therefore do not virtualize low-layer components such as the Central Processing Unit (CPU). On the one hand, they are lighter and more flexible than virtual machines (VMs). On the other hand, VMs can more precisely meet low-layer needs and are completely autonomous systems. Nowadays, which is the best architecture for deploying an application? In this paper, we study these two main deployment methods and compare them on several criteria: compatibility, based on user experience and ease of installation/deployment; scalability, based on automatic elasticity under workload; and energy efficiency, in terms of energy and computing resources. After the tests, we conclude that containerization is the most ecologically advantageous option in terms of energy consumption.
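One ingredient of such a comparison is sampling resource utilization while a deployment runs. A minimal sketch using the psutil library (an assumption for illustration, not the authors' measurement harness) is shown below; energy would additionally require a power meter or RAPL-style counters.

```python
import time
import psutil

def sample_usage(duration_s=10, interval_s=1.0):
    """Sample system-wide CPU and memory utilization at a fixed interval."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        mem = psutil.virtual_memory().percent
        samples.append((cpu, mem))
    return samples

# Run this while the containerized or VM-based workload is active.
for cpu, mem in sample_usage(duration_s=5):
    print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%")
```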

18 pages, 8355 KiB  
Article
On Exploiting and Implementing Collaborative Virtual and Augmented Reality in a Cloud Continuum Scenario
by Beniamino Di Martino, Gennaro Junior Pezzullo, Vincenzo Bombace, Ling-Huey Li and Kuan-Ching Li
Future Internet 2024, 16(11), 393; https://doi.org/10.3390/fi16110393 - 26 Oct 2024
Viewed by 1073
Abstract
This work explores the application of collaborative virtual and augmented reality in a cloud continuum context, focusing on designing, implementing, and verifying three reference architectures for five collaborative VR/AR software deployments. The architectures designed differ in their distribution of computational load: one handles everything in the cloud, one balances the load between the cloud and the edge, and the last concentrates the load entirely on the edge. The design of the architectures was initially outlined through sequence and component diagrams and then implemented using the most appropriate technologies and frameworks. For each architecture, a specific application was developed and deployed on the various components of that architecture to test its proper functioning. Finally, the scenarios were simulated under stress with a significant number of users, employing tools such as Cloud Analyst, to analyze performance and present well-defined and implemented reference architectures.

20 pages, 1607 KiB  
Article
Securing the Edge: CatBoost Classifier Optimized by the Lyrebird Algorithm to Detect Denial of Service Attacks in Internet of Things-Based Wireless Sensor Networks
by Sennanur Srinivasan Abinayaa, Prakash Arumugam, Divya Bhavani Mohan, Anand Rajendran, Abderezak Lashab, Baoze Wei and Josep M. Guerrero
Future Internet 2024, 16(10), 381; https://doi.org/10.3390/fi16100381 - 19 Oct 2024
Viewed by 1396
Abstract
The security of Wireless Sensor Networks (WSNs) is of the utmost importance because of their widespread use in various applications. Protecting WSNs from harmful activity is a vital function of intrusion detection systems (IDSs). An innovative approach to WSN intrusion detection (ID) utilizing the CatBoost classifier (Cb-C) and the Lyrebird Optimization Algorithm (LOA) is presented in this work. As is typical in ID settings, Cb-C excels at handling datasets that are imbalanced. The lyrebird’s remarkable capacity to imitate the sounds of its surroundings served as inspiration for the LOA, a metaheuristic optimization algorithm. The WSN-DS dataset, acquired from Prince Sultan University in Saudi Arabia, is used to assess the suggested method. Among the models presented, LOA-Cb-C produces the highest accuracy of 99.66%; moreover, compared with the other methods discussed in this article, its error value of 0.34% is the lowest. Experimental results reveal that the suggested strategy improves WSN-IoT security over the existing methods in terms of detection accuracy and the false alarm rate.
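A sketch of the CatBoost half of this pipeline on synthetic imbalanced data, with the LOA hyperparameter search replaced by fixed example values, might look as follows; the WSN-DS feature set and the actual tuned settings are not reproduced here.

```python
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
# Synthetic imbalanced stand-in for WSN-DS: ~95% benign, ~5% attack traffic.
X = rng.normal(size=(2000, 10))
y = (rng.random(2000) < 0.05).astype(int)
X[y == 1] += 1.5  # shift attack samples so there is signal to learn

# Hyperparameters that LOA would search over are fixed here for illustration.
model = CatBoostClassifier(
    iterations=200,
    depth=6,
    learning_rate=0.1,
    auto_class_weights="Balanced",  # compensates for the class imbalance
    verbose=False,
)
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```

In the paper's setup, a metaheuristic like LOA would repeatedly evaluate candidate hyperparameter vectors on a validation split and keep the best-scoring configuration.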

28 pages, 1126 KiB  
Article
Internet of Things Adoption in Technology Ecosystems Within the Central African Region: The Case of Silicon Mountain
by Godlove Suila Kuaban, Valery Nkemeni, Onyeka J. Nwobodo, Piotr Czekalski and Fabien Mieyeville
Future Internet 2024, 16(10), 376; https://doi.org/10.3390/fi16100376 - 16 Oct 2024
Viewed by 1243
Abstract
The Internet of Things (IoT) has emerged as a transformative technology with the potential to revolutionize various sectors and industries worldwide. Despite its global significance, the adoption and implementation of IoT technologies in emerging technology ecosystems within the Central African region have yet to be thoroughly studied and explored. This paper presents a case study of the Silicon Mountain technology ecosystem, located in the Fako division of the southwest region of Cameroon, focusing on the barriers and challenges to adopting and integrating IoT technologies within this emerging tech ecosystem. Through a survey-based approach, we investigate the factors influencing IoT adoption in the Silicon Mountain tech ecosystem, including technological, economic, social, and regulatory factors. Our study reveals key insights into the current state of IoT adoption, opportunities for growth and innovation, and IoT adoption challenges. Key among the challenges identified as impeding IoT uptake were issues related to standardization and financial resources, a labor shortage in the industry, educational and knowledge gaps, market challenges, government policies, security and data privacy concerns, and an inadequate power supply. Based on our findings, we provide recommendations for policymakers, industry stakeholders, and academic institutions to promote and facilitate the widespread adoption of IoT technologies in Silicon Mountain and the Central African region at large.

22 pages, 2352 KiB  
Article
Optimizing Network Performance: A Comparative Analysis of EIGRP, OSPF, and BGP in IPv6-Based Load-Sharing and Link-Failover Systems
by Kamal Shahid, Saleem Naseer Ahmad and Syed Tahir Hussain Rizvi
Future Internet 2024, 16(9), 339; https://doi.org/10.3390/fi16090339 - 20 Sep 2024
Cited by 2 | Viewed by 2621
Abstract
The purpose of this study is to evaluate and compare how well different routing protocols perform in terms of load sharing, link failover, and overall network performance. Wireshark was used for packet-level analysis, VMware was used for virtualization, GNS3 was used for network simulation, and Iperf3 was used to measure network performance parameters. Convergence time, packet losses, network jitter, and network delay are the parameters that were selected for assessment. To examine the behaviors of the Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP) routing protocols in a variety of network settings, a simulated network environment incorporating both protocols along with the Border Gateway Protocol (BGP) was created for the research. The setup for the experiment entails simulating different network conditions, such as fluctuating traffic loads and connection failures, to track how the protocols function in dynamic situations. The efficiency metrics for OSPF and EIGRP with BGP are measured and evaluated using the data generated by Wireshark and Iperf3. The results of this research show that EIGRP has a better failover convergence time and packet loss percentage than OSPF.
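As a hint at how such measurements can be automated, the following sketch runs a UDP iperf3 test with JSON output and extracts jitter and loss. The `--json` flag is real iperf3 functionality, but the exact JSON key paths can vary by iperf3 version and test type, so treat this as an assumption-laden example rather than the authors' tooling.

```python
import json
import subprocess

def udp_metrics(server: str, seconds: int = 10):
    """Run a UDP iperf3 test and pull jitter/loss out of its JSON report."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-u", "-t", str(seconds), "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    summary = report["end"]["sum"]        # UDP summary block (version-dependent)
    return {
        "jitter_ms": summary["jitter_ms"],
        "lost_percent": summary["lost_percent"],
    }

print(udp_metrics("192.0.2.10"))  # documentation-range example address
```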

24 pages, 2736 KiB  
Article
Developing a Secure Service Ecosystem to Implement the Intelligent Edge Environment for Smart Cities
by Gayathri Karthick and Glenford Mapp
Future Internet 2024, 16(9), 317; https://doi.org/10.3390/fi16090317 - 2 Sep 2024
Viewed by 1009
Abstract
In the future, smart cities will provide key services including seamless communication, intelligent transport systems, advanced healthcare platforms, urban and infrastructure management, and digital services for local and regional government. Therefore, a new service and networking paradigm, called the Intelligent Edge Environment, has been specified. As a key part of this system, a new secure service ecosystem must be developed to provide the secure real-time movement of services on different High-Performance Edge Cloud Systems. This paper explores these issues by introducing the following mechanisms: a Resource Allocation Algorithm, a Resource Allocation Secure Protocol and finally a Secure Service Protocol. These systems were integrated into the Basic Capability System Library and a multithreaded FUSE client connected to the Service Management Framework. Docker was used as the migration mechanism. A prototype was developed and implemented using a FUSE-Network Memory System in which the Network Memory Server was migrated as users moved around. The result shows that this approach was safe and could be used to develop new applications and services for smart cities.
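Since Docker is named as the migration mechanism, a naive cold-migration sketch with the Docker SDK for Python (stop the container on the source host, start the same image on the destination) gives a flavor of the approach; the daemon endpoints and image name are hypothetical, and the paper's actual state transfer via the FUSE-backed network memory and Service Management Framework is out of scope here.

```python
import docker

def cold_migrate(service_name: str, image: str, src_url: str, dst_url: str):
    """Naive service 'migration': stop on the source host, start on the destination.

    State transfer (e.g., FUSE-backed network memory) is not handled here.
    """
    src = docker.DockerClient(base_url=src_url)
    dst = docker.DockerClient(base_url=dst_url)

    container = src.containers.get(service_name)
    container.stop()
    container.remove()

    dst.containers.run(image, name=service_name, detach=True)

# Hypothetical daemon endpoints for two edge hosts.
cold_migrate("nms", "example/network-memory-server:latest",
             "tcp://edge-a:2375", "tcp://edge-b:2375")
```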

19 pages, 1076 KiB  
Article
TRUST-ME: Trust-Based Resource Allocation and Server Selection in Multi-Access Edge Computing
by Sean Tsikteris, Aisha B Rahman, Md. Sadman Siraj and Eirini Eleni Tsiropoulou
Future Internet 2024, 16(8), 278; https://doi.org/10.3390/fi16080278 - 4 Aug 2024
Viewed by 1596
Abstract
Multi-access edge computing (MEC) has attracted the interest of the research and industrial community to support Internet of things (IoT) applications by enabling efficient data processing and minimizing latency. This paper presents significant contributions toward optimizing the resource allocation and enhancing the decision-making process in edge computing environments. Specifically, the TRUST-ME model is introduced, which consists of multiple edge servers and IoT devices, i.e., users, with varied computing tasks offloaded to the MEC servers. A utility function was designed to quantify the benefits in terms of latency and cost for the IoT device while utilizing the MEC servers’ computing capacities. The core innovation of our work is a novel trust model that was designed to evaluate the IoT devices’ confidence in MEC servers. This model integrates both direct and indirect trust and reflects the trustworthiness of the servers based on the direct interactions and social feedback from other devices using the same servers. This dual trust approach helps with accurately gauging the reliability of MEC services and ensuring more informed decision making. A reinforcement learning framework based on optimistic Q-learning with an upper-confidence-bound action selection algorithm enables the IoT devices to autonomously select a MEC server to process their computing tasks. Also, a multilateral bargaining model is proposed for fair resource allocation of the MEC servers’ computing resources to the users while accounting for their computing demands. Numerical simulations demonstrated the operational effectiveness, convergence, and scalability of the TRUST-ME model, which was validated through real-world scenarios and comprehensive comparative evaluations against existing approaches.
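The upper-confidence-bound ingredient of this decision process can be sketched compactly. The following simplified Python stand-in (a textbook UCB1 selector, not the TRUST-ME implementation, which additionally couples trust values with optimistic Q-learning) picks servers by balancing estimated reward against exploration.

```python
import math
import random

class UCBServerSelector:
    """UCB1-style MEC server selection: favor servers with high estimated
    reward while still exploring under-tried ones."""

    def __init__(self, n_servers: int, c: float = 2.0):
        self.q = [0.0] * n_servers   # running mean reward per server
        self.n = [0] * n_servers     # times each server was chosen
        self.t = 0
        self.c = c                   # exploration weight

    def select(self) -> int:
        self.t += 1
        for s in range(len(self.q)):
            if self.n[s] == 0:
                return s             # try every server once first
        return max(range(len(self.q)),
                   key=lambda s: self.q[s]
                   + self.c * math.sqrt(math.log(self.t) / self.n[s]))

    def update(self, server: int, reward: float):
        self.n[server] += 1
        self.q[server] += (reward - self.q[server]) / self.n[server]

sel = UCBServerSelector(n_servers=3)
for _ in range(100):
    s = sel.select()
    sel.update(s, reward=random.random() * (s + 1) / 3)  # toy reward: server 2 best
print("preferred server:", sel.n.index(max(sel.n)))
```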

37 pages, 18482 KiB  
Article
Active Queue Management in L4S with Asynchronous Advantage Actor-Critic: A FreeBSD Networking Stack Perspective
by Deol Satish, Jonathan Kua and Shiva Raj Pokhrel
Future Internet 2024, 16(8), 265; https://doi.org/10.3390/fi16080265 - 25 Jul 2024
Cited by 1 | Viewed by 1395
Abstract
Bufferbloat is one of the leading causes of high data transmission latency and jitter on the Internet, which severely impacts the performance of low-latency interactive applications such as online streaming, cloud-based gaming/applications, Internet of Things (IoT) applications, voice over IP (VoIP), real-time video conferencing, and so forth. There is currently a pressing need for developing Transmission Control Protocol (TCP) congestion control algorithms and bottleneck queue management schemes that can collaboratively control/reduce end-to-end latency, thus ensuring optimal quality of service (QoS) and quality of experience (QoE) for users. This paper introduces a novel solution by experimentally integrating the low latency, low loss, and scalable throughput (L4S) architecture (specified by the IETF in RFC 9330) into the FreeBSD networking stack together with the asynchronous advantage actor-critic (A3C) reinforcement learning algorithm. The first phase involves incorporating a modified dual-queue coupled active queue management (AQM) system for L4S into the FreeBSD networking stack, enhancing queue management and mitigating latency and packet loss. The second phase employs A3C to adjust and fine-tune the system performance dynamically. Finally, we evaluate the proposed solution’s effectiveness through comprehensive experiments, comparing it with traditional AQM-based systems. This paper contributes to the advancement of machine learning (ML) for transport protocol research in the field. The experimental implementation and results presented in this paper are made available through our GitHub repositories.
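For intuition, the L4S queue of a dual-queue coupled AQM marks ECN-capable packets with a probability that ramps up with queueing delay. The following simplified sketch shows only that ramp; the thresholds are illustrative assumptions, not the defaults from the DualPI2 specification (RFC 9332) or the paper's FreeBSD implementation.

```python
def l4s_ramp_mark_probability(queue_delay_ms: float,
                              min_th_ms: float = 0.5,
                              max_th_ms: float = 1.5) -> float:
    """Linear ECN-marking ramp on queue delay for the low-latency (L4S) queue.

    Below min_th nothing is marked; above max_th everything is marked;
    in between, the marking probability rises linearly.
    """
    if queue_delay_ms <= min_th_ms:
        return 0.0
    if queue_delay_ms >= max_th_ms:
        return 1.0
    return (queue_delay_ms - min_th_ms) / (max_th_ms - min_th_ms)

for d in (0.2, 0.8, 1.2, 2.0):
    print(f"delay={d} ms -> mark p={l4s_ramp_mark_probability(d):.2f}")
```

In the paper's setup, the A3C agent tunes the behavior of the AQM dynamically instead of relying on static parameters like the thresholds above.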

44 pages, 2594 KiB  
Article
NFT Technology for Enhanced Global Digital Registers: A Novel Approach to Tokenization
by Oleksandr Kuznetsov, Emanuele Frontoni, Kateryna Kuznetsova, Ruslan Shevchuk and Mikolaj Karpinski
Future Internet 2024, 16(7), 252; https://doi.org/10.3390/fi16070252 - 17 Jul 2024
Viewed by 4634
Abstract
In the rapidly evolving field of digital asset management, centralized and decentralized global registries have become essential tools for organizing, tracking, and distributing digital assets. However, existing systems often face challenges regarding security, censorship resistance, interoperability, customizability, and scalability. This research paper aims to address these gaps by proposing a novel decentralized global registry system based on blockchain technology and non-fungible tokens (NFTs). The research paper makes several key contributions to the field of digital asset management. First, it provides a detailed system design for the proposed decentralized global registry, outlining its architectural components, functional modules, and integration with blockchain and NFT technologies. Second, it offers a thorough comparative analysis of the advantages and limitations of the proposed system in relation to existing centralized and decentralized registries. Finally, the paper presents potential use cases and practical applications of the proposed system in various industries, demonstrating its versatility and adaptability to different contexts and requirements. In conclusion, this research paper contributes significantly to the ongoing efforts to improve digital asset management by presenting a novel, decentralized global registry system based on blockchain technology and NFTs. The proposed system addresses the key limitations of existing solutions and offers a promising direction for future research and development in this critical field.
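To give a flavor of how registry entries anchored as NFTs resolve on-chain, the following sketch reads the owner and metadata URI of a token via web3.py and a minimal ERC-721 ABI fragment. The RPC endpoint and contract address are placeholders, and this is standard ERC-721 tooling rather than the authors' proposed system.

```python
from web3 import Web3

# Minimal ERC-721 ABI fragment: just enough to read a registry entry.
ERC721_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
]

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))    # hypothetical endpoint
registry = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=ERC721_ABI,
)

token_id = 1
print("asset owner:   ", registry.functions.ownerOf(token_id).call())
print("asset metadata:", registry.functions.tokenURI(token_id).call())
```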
