Scalable and Distributed Cloud Continuum Orchestration for Next-Generation IoT Applications: Latest Advances and Prospects—2nd Edition

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (10 February 2026)

Special Issue Editors


Guest Editor
School of Electrical and Computer Engineering, National Technical University of Athens, 15780 Athens, Greece
Interests: computer networking; network optimisation; cloud computing; edge computing; multi-criteria decision-making; dynamic resource allocation

Special Issue Information

Dear Colleagues,

With the advent of the Internet of Things (IoT), the centralised cloud computing service delivery paradigm has been gradually transformed into a cloud continuum that includes edge and fog computing and heterogeneous IoT devices with varying computing and power capabilities. At the same time, the dawn of the 5G era demands advanced orchestration solutions that can meet the time- and mission-critical requirements of IoT-enabled applications.

However, several challenges still need to be addressed. Orchestrating such complex applications requires, for example, heterogeneous resources spread across the cloud continuum. The core advantage of edge computing is the placement of computational resources at the network edge, which alleviates the computational burden of IoT devices, themselves increasingly equipped with enhanced processing capabilities. In this context, orchestration calls for solutions for distributed service embedding, task offloading, resource autoscaling, and service migration that operate in real time and at scale. With the evolution of the IoT and the advent of 5G/6G and massive machine-type communications, extremely dense networks are expected to emerge, placing even more strain on the available infrastructure across the cloud continuum.
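To make the task-offloading challenge concrete, the following minimal sketch picks an execution site for a task by trading off device energy against completion time under a deadline. It is illustrative only: the site names, capacities, bandwidths, and energy figures are hypothetical, not drawn from any particular system.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cpu_hz: float          # available compute at the site (CPU cycles/s)
    uplink_bps: float      # bandwidth from the device (bits/s); inf for local execution
    energy_per_bit: float  # device transmit energy (J/bit); 0 for local execution

def completion_time(task_bits: float, task_cycles: float, site: Site) -> float:
    """Transmission delay plus execution delay at the chosen site."""
    tx = 0.0 if site.uplink_bps == float("inf") else task_bits / site.uplink_bps
    return tx + task_cycles / site.cpu_hz

def offload(task_bits, task_cycles, sites, deadline):
    """Among deadline-feasible sites, pick the one costing the device the least
    energy, breaking ties by completion time; fall back to all sites if none fit."""
    feasible = [s for s in sites if completion_time(task_bits, task_cycles, s) <= deadline]
    pool = feasible or sites
    return min(pool, key=lambda s: (s.energy_per_bit * task_bits,
                                    completion_time(task_bits, task_cycles, s)))
```

With a slow local device, a nearby edge server, and a distant cloud, a 4 Mb / 2 Gcycle task with a 0.5 s deadline is steered to the edge: local execution misses the deadline, and the edge beats the cloud on completion time at equal transmit energy.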

With this context in mind, this Special Issue invites novel conceptual, theoretical, and experimental contributions addressing the unresolved challenges in the area of the cloud continuum, the IoT, and AI. The topics of interest include, but are not limited to, the following:

  • Resource allocation and scheduling in the cloud continuum;
  • Virtual network embedding in the cloud continuum;
  • Scalability issues in the cloud continuum;
  • Novel architectures in the cloud continuum;
  • AI for QoS management in the cloud continuum;
  • Task offloading in the cloud continuum;
  • Energy sustainability in the cloud continuum;
  • Digital twins in the cloud continuum;
  • Network/resource monitoring in the cloud continuum;
  • Computation and network infrastructure resilience and reliability;
  • Virtualization solutions for distributed application deployment in the cloud continuum;
  • Security issues in the cloud continuum;
  • Architectures for digital forensics;
  • Data analytics, traffic analysis, and classification in the cloud continuum;
  • Testbeds and experimental facility reports.

Dr. Dimitrios Dechouniotis
Dr. Ioannis Dimolitsas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • IoT
  • AI
  • cloud computing
  • edge computing
  • fog computing
  • 5G/6G
  • digital twins
  • distributed service embedding
  • task offloading
  • resource autoscaling
  • service migration

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (9 papers)


Research

23 pages, 1137 KB  
Article
Adaptive Healthcare Monitoring Through Drift-Aware Edge-Cloud Intelligence
by Aleksandra Stojnev Ilic, Milos Ilic, Natalija Stojanovic and Dragan Stojanovic
Future Internet 2026, 18(3), 156; https://doi.org/10.3390/fi18030156 - 17 Mar 2026
Abstract
Continuous healthcare monitoring systems generate non-stationary physiological data streams, where evolving statistical properties and patterns often invalidate static models and fixed user classifications. To address this challenge, we propose a drift-aware adaptive architecture that integrates concept drift detection into a distributed edge–cloud data analytics pipeline. In the proposed design, concept drift is elevated from a maintenance signal to the primary mechanism governing user-state adaptation, model evolution, and inference consistency. Within the proposed system, the edge tier performs low-latency inference and preliminary drift screening under strict resource constraints, while the cloud tier executes advanced drift detection and validation, orchestrates user reclassification and model retraining, and manages model evolution. A feedback loop synchronizes edge and cloud operations, ensuring that detected drift triggers appropriate system transitions, either reassigning a user to an updated state category or initiating targeted model updates. This architecture reduces reliance on static group assignments, improves personalization, and preserves model fidelity under evolving physiological conditions. We analyze the drift types most relevant to healthcare data streams, evaluate the suitability of lightweight and cloud-grade drift detectors, and define the system requirements for stability, responsiveness, and clinical safety. Evaluation across 21 concurrent users demonstrates that drift-aware adaptation reduced prediction MAE by 40.6% relative to periodic retraining, with an end-to-end adaptation latency of 66 ± 37 s. Hierarchical cloud validation reduced the false-positive retraining rate from 88.9% (edge-only triggering) to 27.3%, while maintaining uninterrupted inference throughout all adaptation events.
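As an illustration of the kind of lightweight edge-tier drift screening the abstract describes (the paper's own detectors are not specified here), a Page-Hinkley test can flag a candidate drift in a physiological stream for the cloud tier to validate. The `delta` and `threshold` values below are illustrative defaults, not the authors' settings.

```python
class PageHinkley:
    """Incremental Page-Hinkley test: a cheap mean-shift screen suitable for
    resource-constrained edge nodes. Flags when the cumulative deviation of
    samples above the running mean exceeds a threshold."""
    def __init__(self, delta: float = 0.005, threshold: float = 50.0):
        self.delta, self.threshold = delta, threshold
        self.n, self.mean, self.cum, self.cum_min = 0, 0.0, 0.0, 0.0

    def update(self, x: float) -> bool:
        self.n += 1
        self.mean += (x - self.mean) / self.n       # running mean
        self.cum += x - self.mean - self.delta      # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.threshold  # True = candidate drift
```

A stream that sits at one level and then shifts upward stays quiet during the stable phase and trips the detector after the shift; the edge node would then escalate to the cloud tier rather than retrain on its own.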

35 pages, 8095 KB  
Article
DACCA: Distributed Adaptive Cloud Continuum Architecture
by Nektarios Deligiannakis, Vassilis Papataxiarhis, Michalis Loukeris, Stathes Hadjiefthymiades, Marios Touloupou, Syed Mafooq Ul Hassan, Herodotos Herodotou, Thanasis Moustakas, Emmanouil Bampis, Konstantinos Ioannidis, Iakovos T. Michailidis, Stefanos Vrochidis, Elias Kosmatopoulos, Francisco Javier Romero Martínez, Rafael Marín Pérez, Amr Mousa, Jacopo Castellini and Pablo Strasser
Future Internet 2026, 18(2), 74; https://doi.org/10.3390/fi18020074 - 1 Feb 2026
Abstract
Recently, the need for unified orchestration frameworks that can manage extremely heterogeneous, distributed, and resource-constrained environments has emerged due to the rapid development of cloud, edge, and IoT computing. Kubernetes and other traditional cloud-native orchestration systems are not built to facilitate autonomous, decentralized decision-making across the computing continuum or to seamlessly integrate non-container-native devices. This paper presents the Distributed Adaptive Cloud Continuum Architecture (DACCA), a Kubernetes-native architecture that extends orchestration beyond the data center to encompass edge and Internet of Things infrastructures. Decentralized self-awareness and swarm formation are supported for adaptive and resilient operation, a resource and application abstraction layer is established for uniform resource representation, and a Distributed and Adaptive Resource Optimization (DARO) framework based on multi-agent reinforcement learning is integrated for intelligent scheduling in the proposed architecture. Verifiable identity, access control, and tamper-proof data exchange across heterogeneous domains are further ensured by a zero-trust security framework based on distributed ledger technology. When combined, these elements enable increasingly autonomous workload orchestration, trading centralized control for adaptive, decentralized operation with enhanced interoperability, scalability, and trust. Thus, the proposed architecture enables self-managing and context-aware orchestration systems that support next-generation AI-driven distributed applications across the entire computing continuum.

25 pages, 2150 KB  
Article
Architecting Multi-Cluster Layer-2 Connectivity for Cloud-Native Network Slicing
by Alex T. de Cock Buning, Ivan Vidal and Francisco Valera
Future Internet 2026, 18(1), 39; https://doi.org/10.3390/fi18010039 - 8 Jan 2026
Abstract
Connecting distributed applications across multiple cloud-native domains is growing in complexity. Applications have become containerized and fragmented across heterogeneous infrastructures, such as public clouds, edge nodes, and private data centers, including emerging IoT-driven environments. Existing networking solutions like CNI plugins and service meshes have proven insufficient for providing isolated, low-latency, and secure multi-cluster communication. By combining SDN control with Kubernetes abstractions, we present L2S-CES, a Kubernetes-native solution for multi-cluster layer-2 network slicing that offers flexible isolated connectivity for microservices while maintaining performance and automation. In this work, we detail the design and implementation of L2S-CES, outlining its architecture and operational workflow. We experimentally validate it against state-of-the-art alternatives and show superior isolation, reduced setup time, native support for broadcast and multicast, and minimal performance overhead. By addressing the current lack of native link-layer networking capabilities across multiple Kubernetes domains, L2S-CES provides a unified and practical foundation for deploying scalable, multi-tenant, and latency-sensitive cloud-native applications.

22 pages, 2918 KB  
Article
Multi-Attribute Physical-Layer Authentication Against Jamming and Battery-Depletion Attacks in LoRaWAN
by Azita Pourghasem, Raimund Kirner, Athanasios Tsokanos, Iosif Mporas and Alexios Mylonas
Future Internet 2026, 18(1), 38; https://doi.org/10.3390/fi18010038 - 8 Jan 2026
Abstract
LoRaWAN is widely used for IoT environmental monitoring, but its lightweight security mechanisms leave the physical layer vulnerable to availability attacks such as jamming and battery-depletion. These risks are particularly critical in mission-critical environmental monitoring systems. This paper proposes a multi-attribute physical-layer authentication (PLA) framework that supports uplink legitimacy assessment by jointly exploiting radio, energy, and temporal attributes, specifically RSSI, altitude, battery_level, battery_drop_speed, event_step, and time_rank. Using publicly available Brno LoRaWAN traces, we construct a device-aware semi-synthetic dataset comprising 230,296 records from 1921 devices over 13.68 days, augmented with energy, spatial, and temporal attributes and injected with controlled jamming and battery-depletion anomalies. Five classifiers (Random Forest, Multi-Layer Perceptron, XGBoost, Logistic Regression, and K-Nearest Neighbours) are evaluated using accuracy, precision, recall, F1-score, and AUC-ROC. The Multi-Layer Perceptron achieves the strongest detection performance (F1-score = 0.8260, AUC-ROC = 0.8953), with Random Forest performing comparably. Deployment-oriented computational profiling shows that lightweight models such as Logistic Regression and the MLP achieve near-instantaneous prediction latency (below 2 µs per sample) with minimal CPU overhead, while tree-based models incur higher training and storage costs but remain feasible for Network Server-side deployment.

25 pages, 836 KB  
Article
Capacity Planning for Software Applications in Natural Disaster Scenarios
by Juan Ovando-Leon, Luis Veas-Castillo, Veronica Gil-Costa and Mauricio Marin
Future Internet 2026, 18(1), 21; https://doi.org/10.3390/fi18010021 - 1 Jan 2026
Abstract
In recent years, there has been a notable increase in the development of social media and humanitarian applications based on bots, designed to provide assistance during large-scale natural disasters. These applications play a crucial role in managing the chaos and satisfying the urgent needs for rescue and relief that arise when catastrophes disrupt daily routines. However, they often encounter challenges during emergencies, such as dynamic and unpredictable variations in user workload, which can affect service quality and application stability. To tackle these challenges, we propose a capacity planning methodology that determines the optimal number of replicas and partitions for each component of an application and distributes them across virtual machines in server clusters. By bridging the gap between the algorithms executed in the applications and the performance characteristics of their implementations, this methodology enables applications to scale efficiently. It helps maintain response times and average utilization within user-defined ranges while providing fault tolerance to prevent component saturation. We validate the proposed methodology with three bot-based applications designed to be used after a natural disaster strikes. Our experimental results show the effectiveness of the methodology, with estimation errors ranging from 1% to 15% for utilization and average response times. Furthermore, the methodology serves as an effective elasticity tool, allowing for the adjustment of component replicas based on user requests.
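The replica-sizing step of such a capacity plan can be illustrated with the utilization law (a generic sketch, not the paper's methodology): with arrival rate λ, mean service time S, and c replicas, average utilization is ρ = λS/c, so the smallest replica count meeting a utilization target follows directly.

```python
import math

def replicas_needed(arrival_rate: float, service_time_s: float,
                    target_util: float = 0.6) -> int:
    """Utilization-law sizing: smallest replica count keeping average
    utilization at or below target_util, where rho = lambda * S / c."""
    if not 0 < target_util < 1:
        raise ValueError("target_util must be in (0, 1)")
    return max(1, math.ceil(arrival_rate * service_time_s / target_util))
```

For example, 500 requests/s at a 10 ms mean service time with a 60% utilization target yields nine replicas (5 / 0.6 ≈ 8.33, rounded up). A real plan, like the paper's, would additionally account for fault tolerance and workload variability rather than averages alone.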

26 pages, 930 KB  
Article
Modular Microservices Architecture for Generative Music Integration in Digital Audio Workstations via VST Plugin
by Adriano N. Raposo and Vasco N. G. J. Soares
Future Internet 2025, 17(10), 469; https://doi.org/10.3390/fi17100469 - 12 Oct 2025
Abstract
This paper presents the design and implementation of a modular cloud-based architecture that enables generative music capabilities in Digital Audio Workstations through a MIDI microservices backend and a user-friendly VST plugin frontend. The system comprises a generative harmony engine deployed as a standalone service, a microservice layer that orchestrates communication and exposes an API, and a VST plugin that interacts with the backend to retrieve harmonic sequences and MIDI data. Among the microservices is a dedicated component that converts textual chord sequences into MIDI files. The VST plugin allows the user to drag and drop the generated chord progressions directly into a DAW’s MIDI track timeline. This architecture prioritizes modularity, cloud scalability, and seamless integration into existing music production workflows, while abstracting away technical complexity from end users. The proposed system demonstrates how microservice-based design and cross-platform plugin development can be effectively combined to support generative music workflows, offering both researchers and practitioners a replicable and extensible framework.

19 pages, 1780 KB  
Article
A Case Study on Monolith to Microservices Decomposition with Variational Autoencoder-Based Graph Neural Network
by Rokin Maharjan, Korn Sooksatra, Tomas Cerny, Yudeep Rajbhandari and Sakshi Shrestha
Future Internet 2025, 17(7), 303; https://doi.org/10.3390/fi17070303 - 13 Jul 2025
Cited by 1
Abstract
The microservice architecture is popular for developing cloud-native applications. However, decomposing a monolithic application into microservices remains a challenging task. This complexity arises from the need to account for factors such as component dependencies, cohesive clusters, and bounded contexts. To address this challenge, we present an automated approach to decomposing monolithic applications into microservices. Our approach uses static code analysis to generate a dependency graph of the monolithic application. Then, a variational autoencoder (VAE) is used to extract features from the components of a monolithic application. Finally, the C-means algorithm is used to cluster the components into possible microservices. We evaluate our approach using a third-party benchmark comprising both monolithic and microservice implementations. Additionally, we compare its performance against two existing decomposition techniques. The results demonstrate the potential of our method as a practical tool for guiding the transition from monolithic to microservice architectures.
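The final clustering stage can be sketched with a minimal fuzzy C-means implementation in NumPy (a generic textbook version; the paper's exact variant, distance metric, and parameters are not reproduced here). It soft-clusters component feature vectors, such as VAE embeddings, into candidate microservices, with memberships expressing how strongly each component belongs to each candidate service.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means: soft-cluster n component embeddings (rows of X)
    into c candidate microservices. Returns (membership matrix U, centers)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # random row-stochastic init
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # u_ij ∝ d_ij^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers
```

Components whose dependency-graph embeddings sit close together end up with high membership in the same cluster, suggesting they belong in one microservice; ambiguous components show split memberships, flagging them for manual review.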

18 pages, 4079 KB  
Article
A Scalable Hybrid Autoencoder–Extreme Learning Machine Framework for Adaptive Intrusion Detection in High-Dimensional Networks
by Anubhav Kumar, Rajamani Radhakrishnan, Mani Sumithra, Prabu Kaliyaperumal, Balamurugan Balusamy and Francesco Benedetto
Future Internet 2025, 17(5), 221; https://doi.org/10.3390/fi17050221 - 15 May 2025
Cited by 6
Abstract
The rapid expansion of network environments has introduced significant cybersecurity challenges, particularly in handling high-dimensional traffic and detecting sophisticated threats. This study presents a novel, scalable Hybrid Autoencoder–Extreme Learning Machine (AE–ELM) framework for Intrusion Detection Systems (IDS), specifically designed to operate effectively in dynamic, cloud-supported IoT environments. The scientific novelty lies in the integration of an Autoencoder for deep feature compression with an Extreme Learning Machine for rapid and accurate classification, enhanced through adaptive thresholding techniques. Evaluated on the CSE-CIC-IDS2018 dataset, the proposed method demonstrates a high detection accuracy of 98.52%, outperforming conventional models in terms of precision, recall, and scalability. Additionally, the framework exhibits strong adaptability to emerging threats and reduced computational overhead, making it a practical solution for real-time, scalable IDS in next-generation network infrastructures.
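The ELM half of such a hybrid can be sketched in a few lines (a generic implementation, not the authors' code): a fixed random hidden layer followed by a closed-form ridge solution for the output weights, which is what makes ELM training fast enough for the real-time setting the abstract emphasizes. Hidden-layer size and regularization below are illustrative.

```python
import numpy as np

class ELM:
    """Extreme Learning Machine: random tanh hidden layer, output weights
    solved in closed form via regularized least squares (no backprop)."""
    def __init__(self, n_hidden: int = 64, reg: float = 1e-3, seed: int = 0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))  # frozen random weights
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                       # hidden activations
        # Ridge solution: beta = (H^T H + reg * I)^-1 H^T y
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```

In the hybrid framework, the input to `fit` would be the autoencoder's compressed features rather than raw traffic records; training reduces to one linear solve, which is the source of ELM's low computational overhead.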

15 pages, 783 KB  
Article
On Microservice-Based Architecture for Digital Forensics Applications: A Competition Policy Perspective
by Fragkiskos Ninos, Konstantinos Karalas, Dimitrios Dechouniotis and Michael Polemis
Future Internet 2025, 17(4), 137; https://doi.org/10.3390/fi17040137 - 23 Mar 2025
Abstract
Digital forensics systems are complex applications consisting of numerous individual components that demand substantial computing resources. By adopting the concept of microservices, forensics applications can be divided into smaller, independently managed services. In this context, cloud resource orchestration platforms like Kubernetes provide augmented functionalities, such as resource scaling, load balancing, and monitoring, supporting every stage of the application’s lifecycle. This article explores the deployment of digital forensics applications over a microservice-based architecture. Leveraging resource scaling and persistent storage mechanisms, we introduce a vertical scaling mechanism for compute-intensive forensics applications. A practical evaluation of digital forensics applications in competition investigations was performed using datasets from the private cloud of the Hellenic Competition Commission. The numerical results illustrate that the processing time of CPU-intensive tasks is reduced significantly using dynamic resource scaling, while data integrity and security requirements are fulfilled.
