
Future Internet, Volume 18, Issue 3 (March 2026) – 62 articles

Cover Story: NOMA is a promising technology for improving 6G capacity. Practical deployment is challenging due to carrier, timing, and phase offsets, successive interference cancellation (SIC) error propagation, packet loss, and SDR processing jitter. This paper bridges the theory-to-hardware gap by presenting a two-user NOMA transceiver on the ADALM-Pluto SDR platform. It incorporates matched filtering, offset estimation and correction, SIC with waveform reconstruction, and rate-1/2 convolutional FEC. Full validation is performed in downlink and uplink modes, evaluating latency, BER, and success rate. Uplink NOMA is demonstrated without a GPSDO by exploiting Pluto Rev-C dual-transmit channels that share a common oscillator. Experimental results at 915 MHz using BPSK show excellent downlink reliability and good uplink performance.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF form. To view a paper in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open it.
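
To make the SIC step described in the cover story concrete, the following minimal Python sketch simulates a two-user power-domain NOMA downlink with BPSK and successive interference cancellation. The power split, noise level, and channel model are illustrative assumptions; this is not the paper's ADALM-Pluto implementation, which additionally handles offset correction, matched filtering, and convolutional FEC.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Two-user power-domain NOMA downlink (illustrative assumptions, not the paper's SDR code).
bits_far = rng.integers(0, 2, N)       # far (weak-channel) user: allocated more power
bits_near = rng.integers(0, 2, N)      # near (strong-channel) user: allocated less power
p_far, p_near = 0.8, 0.2               # assumed power split

x = np.sqrt(p_far) * (2 * bits_far - 1) + np.sqrt(p_near) * (2 * bits_near - 1)
y = x + 0.05 * rng.standard_normal(N)  # additive noise stands in for the real channel

# SIC at the near user: detect the stronger far-user signal first, reconstruct its
# waveform, subtract it, then detect the near user's own signal. Detection errors in
# the first stage propagate to the second stage, the error propagation the paper measures.
far_hat = (y > 0).astype(int)
residual = y - np.sqrt(p_far) * (2 * far_hat - 1)
near_hat = (residual > 0).astype(int)

print("far-user BER :", np.mean(far_hat != bits_far))
print("near-user BER:", np.mean(near_hat != bits_near))
```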
31 pages, 4664 KB  
Review
A Decade of Horizontal Fragmentation Methods in OLAP from Data Warehouse to Data Lakehouse: A Scoping Review
by Nidia Rodríguez-Mazahua, Lisbeth Rodríguez-Mazahua, Giner Alor-Hernández, Jair Cervantes and Felipe Castro-Medina
Future Internet 2026, 18(3), 176; https://doi.org/10.3390/fi18030176 - 23 Mar 2026
Viewed by 451
Abstract
One of the main problems database administrators face when optimizing analytical workloads is fragmentation. Accordingly, several fragmentation methods for analytical platforms have been proposed in recent decades, because this technique can improve the performance of OLAP (Online Analytical Processing) queries. In this study, we conducted an exploratory review of horizontal fragmentation methods for analytical repositories such as data warehouses, data lakes, and data lakehouses. The study is a scoping review conducted using Arksey and O'Malley's methodological framework and reported according to the PRISMA guidelines, covering 58 primary studies on horizontal fragmentation published from 2015 to 2025. Our analysis focuses on five aspects: (1) the main techniques used in horizontal fragmentation work for analytical repositories, (2) the classification of these studies, (3) the performance metrics considered when evaluating a horizontal fragmentation scheme, (4) the type of information indexed by the repositories, and (5) the technologies most used by the approaches. Our findings suggest that, in most cases, horizontal fragmentation is a good opportunity to improve the performance of analytical workloads. The results of this scoping review provide guidelines for future research on horizontal fragmentation methods and offer professionals and academics pointers on the use of OLAP technologies when considering future directions. Full article
(This article belongs to the Special Issue Blockchain and Big Data Analytics)
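
As a generic illustration of the horizontal fragmentation technique this review surveys (not any particular method it covers), the sketch below splits a small fact table into row subsets by simple predicates. The table, the predicates, and the use of pandas are illustrative assumptions.

```python
import pandas as pd

# Hypothetical sales fact table; column names and predicates are illustrative only.
sales = pd.DataFrame({
    "region": ["EU", "US", "EU", "APAC", "US"],
    "year":   [2023, 2024, 2024, 2023, 2023],
    "amount": [120.0, 80.5, 200.0, 75.0, 310.0],
})

# Horizontal fragmentation: each fragment holds complete rows selected by a predicate,
# so OLAP queries filtering on these predicates only touch the relevant fragment.
predicates = {
    "eu_recent": (sales["region"] == "EU") & (sales["year"] >= 2024),
    "eu_older":  (sales["region"] == "EU") & (sales["year"] < 2024),
    "non_eu":    sales["region"] != "EU",
}
fragments = {name: sales[mask] for name, mask in predicates.items()}

# Completeness/reconstruction check: the fragments together restore the original table.
assert len(pd.concat(list(fragments.values()))) == len(sales)
```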

38 pages, 4089 KB  
Article
A Mobility-Aware Zone-Based Key Management Scheme with Dynamic Key Refinement for Large-Scale Mobile Wireless Sensor Networks
by Abdelbassette Chenna, Djallel Eddine Boubiche, Abderrezak Benyahia, Homero Toral-Cruz, Rafael Martínez-Peláez and Pablo Velarde-Alvarado
Future Internet 2026, 18(3), 175; https://doi.org/10.3390/fi18030175 - 23 Mar 2026
Viewed by 298
Abstract
Mobile Wireless Sensor Networks (MWSNs) enhance traditional wireless sensor networks by allowing sensor nodes to move, resulting in continuously changing network topologies. Although this mobility enables advanced applications such as disaster response, intelligent transportation systems, and mission-critical monitoring, it poses major challenges for secure and scalable key management in large-scale deployments. Most existing key management and key pre-distribution schemes are tailored to static or lightly mobile networks and therefore suffer from limited scalability, excessive memory consumption, inefficient key utilization, and increased vulnerability to node capture when applied to highly mobile environments. This paper proposes a mobility-aware, zone-based key management scheme that integrates an enhanced composite key distribution mechanism with dynamic key refinement. The network is partitioned into logical zones, each maintaining an independent key pool to confine security breaches and improve scalability. To adapt to mobility-induced topology changes, sensor nodes continuously refine their key rings by preserving only the cryptographic keys associated with persistent neighbor relationships. This selective retention strategy significantly reduces storage overhead while strengthening resilience against key compromise and unauthorized access. Comprehensive analytical modeling and performance evaluations demonstrate that the proposed scheme achieves higher secure connectivity, stronger resistance to node capture attacks, and improved scalability compared to existing approaches, particularly in dense and highly mobile MWSN scenarios. Full article
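
A rough Python sketch of the selective key-retention idea described above: a node keeps only the keys tied to neighbors whose links have persisted across several beacon rounds. The data structures, persistence threshold, and bookkeeping are assumptions for illustration, not the scheme's actual protocol.

```python
# Illustrative key-ring refinement for a mobile node (not the paper's actual protocol).
# Assumption: each node records, per neighbor, how many consecutive beacon rounds the
# neighbor has remained in range, and keeps only keys tied to "persistent" neighbors.

PERSISTENCE_THRESHOLD = 3  # assumed number of rounds before a link counts as stable

def refine_key_ring(key_ring, neighbor_rounds):
    """key_ring: {key_id: neighbor_id}; neighbor_rounds: {neighbor_id: rounds_in_range}."""
    return {
        key_id: nbr
        for key_id, nbr in key_ring.items()
        if neighbor_rounds.get(nbr, 0) >= PERSISTENCE_THRESHOLD
    }

ring = {"k1": "n7", "k2": "n3", "k3": "n9"}
rounds = {"n7": 5, "n3": 1, "n9": 4}      # n3 only appeared once, so its key is dropped
print(refine_key_ring(ring, rounds))       # {'k1': 'n7', 'k3': 'n9'}
```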

27 pages, 590 KB  
Perspective
Machine Unlearning: A Perspective, Taxonomy, and Benchmark Evaluation
by Cristian Cosentino, Simone Gatto, Pietro Liò and Fabrizio Marozzo
Future Internet 2026, 18(3), 174; https://doi.org/10.3390/fi18030174 - 23 Mar 2026
Viewed by 649
Abstract
Machine Learning (ML) models trained on large-scale datasets learn useful predictive patterns, but they may also memorize undesired information, leading to risks such as information leakage, bias, copyright violations, and privacy attacks. As these models are increasingly deployed in real-world and regulated settings, the consequences of such memorization become practical and high-stakes, reinforced by data-protection frameworks that grant individuals a Right to be Forgotten (e.g., the GDPR). Simply removing a record from the training dataset does not guarantee the elimination of its influence from the model, while retrain-from-scratch procedures are often prohibitive for modern architectures, including Transformers and Large Language Models (LLMs). In this work, we provide a perspective on Machine Unlearning (MU) in supervised learning settings, with a particular focus on Natural Language Processing (NLP) scenarios, grounded in a PRISMA-driven systematic review. We propose a multi-level taxonomy that organizes MU techniques along practical and conceptual dimensions, including exactness (exact versus approximate), unlearning granularity, guarantees, and application constraints. To complement this perspective, we run an illustrative benchmark evaluation using a standardized unlearning protocol on DistilBERT trained on a public corpus of news headlines for topic classification, contrasting the retraining gold standard with representative design-for-unlearning and approximate post hoc techniques. For completeness, we also report two oracle-assisted upper-bound baselines (distillation and scrubbing) that rely on a clean retrained reference model, and we account for their incremental cost separately. Our analysis jointly considers model utility, probabilistic quality, forgetting and privacy indicators, as well as computational efficiency. The results highlight systematic trade-offs between accuracy, computational cost, and removal effectiveness, providing practical guidance for selecting machine unlearning techniques in realistic deployment scenarios. Full article
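
As one concrete example of the approximate post hoc techniques the taxonomy covers, the sketch below applies gradient ascent on a forget set in PyTorch; this is a commonly cited approximate unlearning baseline and is not necessarily one of the methods evaluated in the paper. The model, data loader, and hyperparameters are placeholders.

```python
import torch

def gradient_ascent_unlearn(model, forget_loader, lr=1e-5, steps=1):
    """Approximate unlearning sketch: push the model away from the forget set by
    maximizing (rather than minimizing) its loss on those examples."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        for inputs, targets in forget_loader:
            opt.zero_grad()
            loss = -loss_fn(model(inputs), targets)  # negated loss => gradient ascent
            loss.backward()
            opt.step()
    return model
```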

37 pages, 1717 KB  
Article
DFedForest++: A Novel Privacy-Enhanced Framework for Integrating Cyber Threat Intelligence in IDS Using Federated Learning
by Md. Moradul Siddique, Syed Md. Galib, Md. Nasim Adnan and Mohammad Nowsin Amin Sheikh
Future Internet 2026, 18(3), 173; https://doi.org/10.3390/fi18030173 - 23 Mar 2026
Viewed by 416
Abstract
The growing sophistication of cyber attacks, together with privacy concerns around data sharing, calls for a decentralized approach. Conventional centralized approaches to IDS threaten data privacy and data sovereignty. In contrast, federated learning enables several clients to learn simultaneously without sharing their sensitive information, making it one of the most promising solutions for studying cyber threats in real time. The proposed framework also adds value to IDS by incorporating CTI into the training process, improving detection accuracy while preserving privacy. Each client uses a local random forest model trained on its local dataset without sharing raw data. Multiple aggregation methods, such as FedAvg, FedOPT, FedProx, and FedXGBoost, are then used to combine the local models into a global model, and these techniques are compared in terms of accuracy and Cohen's Kappa score. Performance was tested in experiments on the NF-UNSW-NB15-v2 dataset. The local models achieved accuracies of 0.9934–0.9941 with Kappa scores of 0.8088–0.8336, showing strong performance across configurations. The FedXGBoost-aggregated global model performed best, with the highest accuracy of 99.22% (Kappa score of 0.8417). Further experiments were conducted on the DFedForest and DFedForest++ models. DFedForest++, which incorporates local-model diversity alongside validation accuracy, achieved 99.76% accuracy, surpassing DFedForest (71% accuracy in local models). The framework operationalizes CTI through feature augmentation—appending three CTI-derived features (is_known_malicious_ip, is_suspicious_port, and ttp_match_score from MITRE ATT&CK v14 and AlienVault OTX) to each NetFlow record locally at each client before federated training begins. These results highlight the advantages of federated learning in providing collaborative, privacy-preserving solutions for cyber threat detection and emphasize the potential of CTI integration for improving the accuracy and robustness of IDS models across decentralized environments. Full article
(This article belongs to the Section Cybersecurity)
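
The abstract names the three CTI-derived features appended to each NetFlow record before federated training; a client-side augmentation step along those lines could look like the sketch below. The hard-coded indicator sets and the TTP-scoring heuristic are stand-ins, since the paper's actual AlienVault OTX and MITRE ATT&CK lookups are not reproduced here.

```python
import pandas as pd

# Placeholder threat-intelligence data; a real deployment would query AlienVault OTX
# feeds and MITRE ATT&CK mappings instead of these hard-coded sets.
KNOWN_MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}
SUSPICIOUS_PORTS = {23, 2323, 4444, 5555}

def ttp_match_score(row):
    # Stand-in heuristic for matching a flow against known TTP signatures.
    return 0.9 if row["dst_port"] in SUSPICIOUS_PORTS and row["bytes"] > 10_000 else 0.1

def augment_with_cti(flows: pd.DataFrame) -> pd.DataFrame:
    """Append the three CTI-derived features to each NetFlow record, locally at a client."""
    flows = flows.copy()
    flows["is_known_malicious_ip"] = flows["src_ip"].isin(KNOWN_MALICIOUS_IPS).astype(int)
    flows["is_suspicious_port"] = flows["dst_port"].isin(SUSPICIOUS_PORTS).astype(int)
    flows["ttp_match_score"] = flows.apply(ttp_match_score, axis=1)
    return flows  # augmented before any federated training round begins
```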

23 pages, 1038 KB  
Article
The Age of Generative AI Model for Fresh Industrial AIGC Services: A Hybrid-Action Multi-Agent DRL Approach
by Wenjing Li, Ni Tian and Long Zhang
Future Internet 2026, 18(3), 172; https://doi.org/10.3390/fi18030172 - 23 Mar 2026
Viewed by 278
Abstract
To meet the growing demand for autonomous decision-making and real-time optimization in industrial manufacturing, integrating Artificial Intelligence-Generated Content (AIGC) services with Industry 5.0 can enable real-time industrial intelligence. The effectiveness of a generative model is closely related to the current state of the production environment. However, existing studies often ignore the dynamic temporal relationship between generative models and production environments, especially in industrial scenarios with large model transmission delays and random AIGC task arrivals. Therefore, we define a novel metric, namely the Age of Model (AoM), to measure the freshness of generative models with respect to current industrial tasks. We then formulate an average-AoM-minimization problem that jointly considers LoRA-based fine-tuning, wireless transmission and resource allocation. To solve this problem, we propose a Hybrid-Action Multi-Agent Proximal Policy Optimization (HA-MAPPO) algorithm. The proposed algorithm follows the centralized training and decentralized execution (CTDE) paradigm and introduces a Main-Agent Priority State Strategy to support coordinated training and independent execution. In addition, a multi-head output structure is designed to handle the hybrid-action space, which includes discrete fine-tuning association decisions and continuous transmission resource allocation. Simulation results show that the proposed scheme outperforms all benchmark methods. Specifically, the cumulative rewards are improved by approximately 11.13%, 20.32%, 36.61%, and 38.78% compared with the four benchmark algorithms, respectively. These results demonstrate that the proposed scheme can significantly reduce the average AoM while providing high-quality and timely industrial AIGC services. Full article
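
As a rough intuition for the Age of Model metric, the sketch below tracks how stale the deployed generative model is when each AIGC task arrives, resetting the age whenever a freshly fine-tuned model is delivered. This is a simplification for illustration, not the paper's formal AoM definition or its HA-MAPPO optimization.

```python
class AoMTracker:
    """Illustrative Age-of-Model bookkeeping: the age grows with time and is reset when
    a fine-tuned model matching the current production state is received."""

    def __init__(self):
        self.last_update_t = 0.0
        self.samples = []

    def on_model_update(self, t: float):
        self.last_update_t = t  # fresh LoRA fine-tune delivered and installed at time t

    def on_task_arrival(self, t: float):
        self.samples.append(t - self.last_update_t)  # AoM experienced by this AIGC task

    def average_aom(self) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0
```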

35 pages, 710 KB  
Review
AI Agent Communications in the Future Internet—Paving a Path Toward the Agentic Web
by Qiang Duan and Zhihui Lu
Future Internet 2026, 18(3), 171; https://doi.org/10.3390/fi18030171 - 21 Mar 2026
Viewed by 773
Abstract
The rapid evolution of artificial intelligence technologies toward the agentic AI paradigm enables the emergence of the Agentic Web in the future Internet. Agent communication plays a critical role in constructing the Agentic Web but faces unique challenges posed by the edge–network–cloud continuum in the future Internet. This paper provides a comprehensive overview of state-of-the-art agent communication protocols and technologies, evaluating their readiness to support the construction of the Agentic Web. We first survey representative communication protocols and analyze the key technologies they employ, assessing their effectiveness in addressing the challenges for agent communications in the future Internet. We then identify critical gaps between existing approaches and the requirements of the Agentic Web, and propose a unified architectural framework grounded in virtualization and service-oriented principles to address these gaps. Such a framework may greatly facilitate the development of a pluralistic ecosystem in which various agent communication technologies and protocols can be freely developed and fully utilized. We also discuss open topics and possible directions for future research toward a fully realized Agentic Web. Full article

21 pages, 1823 KB  
Article
Two-Stage Distributed Robust Air-Ground Cooperative Mission Planning: An Emergency Communication Solution for Addressing Probabilistic Uncertainty in Road Interruption
by Miao Miao, Wei Wang and Xiaokai Lian
Future Internet 2026, 18(3), 170; https://doi.org/10.3390/fi18030170 - 20 Mar 2026
Viewed by 190
Abstract
Earthquake disasters often cause communication base stations to fail, severely hindering rescue operations and information transmission. While traditional air-ground collaborative emergency communication systems can rapidly restore communications, they still face challenges such as the "time gap" caused by the endurance limitations of unmanned aerial vehicles (UAVs) and the "spatial blind spots" resulting from the uncertainty of road disruptions. These issues reduce the continuity and reliability of system services. To address the robustness of coordinated air-ground platform deployment and path planning under uncertain road disruptions, this paper proposes a two-stage distributionally robust deployment and path planning (DRDPRP) method for fixed-wing UAVs and unmanned ground vehicles (UGVs) in post-disaster emergency communications. The method constructs a distributionally robust uncertainty set based on a probabilistic distance metric to characterize road disruption risks and establishes a two-stage distributionally robust optimization model to jointly optimize the deployment and paths of fixed-wing UAVs and UGVs. It employs the Column and Constraint Generation (C&CG) algorithm as the solution framework, combined with branch-and-bound and local optimization strategies to enhance computational efficiency. Simulation results demonstrate that the method generates more robust collaborative deployment plans under road disruption uncertainties, thereby enhancing the continuity and reliability of post-disaster emergency communication systems. Full article
(This article belongs to the Section Internet of Things)

18 pages, 576 KB  
Article
Does Appearance Matter? A Technology Acceptance Study of Mixed Reality Avatars in Citizen Services
by Tamara Lauser and Markus Weinberger
Future Internet 2026, 18(3), 169; https://doi.org/10.3390/fi18030169 - 20 Mar 2026
Viewed by 281
Abstract
This paper examines citizens' acceptance of AI-supported mixed reality avatars in municipal services. The aim was to investigate the effects of these avatars' visual appearance on user acceptance and citizen trust. To this end, two avatar variants were tested on 54 participants in an experiment. One avatar was designed in a comic style, while the other was more realistic. The interaction with both variants was evaluated independently by the test subjects. The results show that mixed reality avatars are generally perceived positively and accepted as a supportive addition in the city hall (d = 1.85). Differences in appearance did not significantly affect trust or acceptance (p = 0.363). Instead, factors such as social norms (β = 0.421 comic-style; β = 0.513 realistic) and comprehensibility (β = 0.439 comic-style) proved to be decisive. This study makes an important contribution to closing the research gap at the interface of mixed reality avatars, user acceptance, trust, artificial intelligence, and public administration. It highlights the potential of avatars to increase efficiency in citizen services. Full article

18 pages, 1843 KB  
Article
Heterogeneous Computing Resources Scheduling Based on Time-Varying Graphs and Multi-Agent Reinforcement Learning
by Jinshan Yuan, Xuncai Zhang and Kexin Gong
Future Internet 2026, 18(3), 168; https://doi.org/10.3390/fi18030168 - 20 Mar 2026
Viewed by 277
Abstract
The evolution toward 6G Computing Power Networks (CPN) aims to deeply integrate multi-tier computing resources across Cloud, Edge, and end devices. However, the significant heterogeneity of computing resources, characterized by varying hardware architectures such as CPUs, GPUs, and NPUs, coupled with the time-varying network topology caused by terminal mobility, poses severe challenges to realizing efficient integrated scheduling that satisfies Quality of Service (QoS). To address spatiotemporal mismatches between task requirements and hardware architectures, this paper proposes an integrated scheduling method combining Discrete Time-Varying Graph (DTVG) construction with Multi-Agent Reinforcement Learning (MARL). Specifically, we model the dynamic interaction between mobile tasks and heterogeneous nodes as a DTVG to capture spatiotemporal evolution and employ a QMIX-based algorithm to enable collaborative decision-making among distributed agents. Simulation results demonstrate that the proposed approach effectively solves the joint optimization problem of heterogeneous resource matching and dynamic path planning, significantly outperforming traditional baselines in terms of resource utilization and average latency. This study confirms that incorporating graph-theoretic modeling with reinforcement learning offers a robust solution for the complex coupling of communication and computation in dynamic 6G networks. Full article
(This article belongs to the Special Issue Collaborative Intelligence for Connected Agents)

26 pages, 2242 KB  
Article
A Multi-Source Feedback-Driven Framework for Generating WAF Test Cases
by Pengcheng Lu, Xiaofeng Zhong, Wenbo Xu and Yongjie Wang
Future Internet 2026, 18(3), 167; https://doi.org/10.3390/fi18030167 - 20 Mar 2026
Viewed by 224
Abstract
Web application firewalls (WAFs) are critical defenses against persistent threats to web applications, yet their security evaluation remains challenging. Traditional manual testing methods are often inefficient and resource-intensive, while existing reinforcement learning (RL)-based automated approaches face two key limitations: (1) attackers cannot perceive opaque WAF rule logic; (2) boolean feedback from WAFs results in sparse/delayed rewards—sparse rewards trap agents in blind exploration, and delayed rewards hinder the association between early actions and final outcomes, adversely affecting learning efficiency. To address those challenges, we propose Ouroboros—a framework integrating genetic algorithm-based symbolic rule reconstruction (translating WAF rules into interpretable RNNs for fine-grained confidence scoring), timing side-channel analysis (evaluating rule-matching depth), and a multi-tiered reward mechanism to enable self-evolving RL testing. Experiments show that the framework reaches 89.2% bypass success rate on signature-based WAFs. This paper presents an efficient solution for automated WAF testing and delivers insights for optimizing rule logic and anomaly detection mechanisms. Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
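
One of the framework's ingredients is timing side-channel analysis of rule-matching depth; a minimal measurement loop in that spirit might look like the following sketch. The target URL, payloads, and use of the requests library are illustrative assumptions and are unrelated to the authors' actual tooling.

```python
import statistics
import requests

TARGET = "http://waf-under-test.example/search"  # hypothetical test endpoint
PAYLOADS = ["id=1", "id=1' OR '1'='1", "id=1 UNION SELECT NULL--"]

def median_response_time(payload: str, trials: int = 20) -> float:
    """Median server-side elapsed time for one payload; longer times can hint that the
    request traversed deeper into the WAF's rule chain before being matched."""
    times = []
    for _ in range(trials):
        r = requests.get(TARGET, params={"q": payload}, timeout=5)
        times.append(r.elapsed.total_seconds())
    return statistics.median(times)

for p in PAYLOADS:
    print(f"{p!r}: {median_response_time(p):.4f} s")
```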

41 pages, 4390 KB  
Article
AE3GIS—An Agile Emulated Educational Environment for Guided Industrial Security Training
by Tollan Berhanu, Hunter Squires, Braxton Marlatt, Scott Anderson, Benton Wilson, Robert A. Borrelli and Constantinos Kolias
Future Internet 2026, 18(3), 166; https://doi.org/10.3390/fi18030166 - 20 Mar 2026
Viewed by 254
Abstract
Industrial Control Systems (ICSs) are the backbone of modern critical infrastructure, such as electric power, water treatment, oil and gas distribution, and manufacturing operations. While the convergence of IT and OT has greatly increased efficiency and observability, it has also greatly expanded the attack surface of these once-isolated systems. High-profile cyber-physical attacks, including Stuxnet (2010), TRITON (2017), and the Colonial Pipeline ransomware attack (2021), have shown that ICS-targeted cyberattacks can cause physical damage, disrupt economic stability, and put public safety at risk. Despite the growing prevalence and intensity of such threats, ICS-based cybersecurity education remains largely under-resourced and underfunded. Traditional ICS training laboratories require highly specialized hardware, vendor-specific tools, and expensive licensing that significantly raise barriers to entry. Traditional labs typically require on-site participation and pose physical safety concerns when cyber-physical attack scenarios are performed. These barriers leave students unable to get necessary security training for ICSs. Therefore, this paper introduces AE3GIS: Agile Emulated Educational Environment for Guided Industrial Security—a fully virtual, lightweight, open-source platform designed to democratize ICS cybersecurity education. Based on the GNS3 network simulation tool, AE3GIS enables rapid deployment of comprehensive ICS environments containing IT and OT systems, industrial communication protocols, control logic, and diverse security tools. AE3GIS is designed to provide practical training for students using realistic ICS cybersecurity scenarios through a local or remote training platform without the cost, safety, or accessibility limitations of hardware-based labs. Full article
(This article belongs to the Section Cybersecurity)

39 pages, 1642 KB  
Article
A Post-Quantum Secure Architecture for 6G-Enabled Smart Hospitals: A Multi-Layered Cryptographic Framework
by Poojitha Devaraj, Syed Abrar Chaman Basha, Nithesh Nair Panarkuzhiyil Santhosh and Niharika Panda
Future Internet 2026, 18(3), 165; https://doi.org/10.3390/fi18030165 - 20 Mar 2026
Viewed by 369
Abstract
Future 6G-enabled smart hospital infrastructures will support latency-critical medical operations such as robotic surgery, autonomous monitoring, and real-time clinical decision systems, which require communication mechanisms that ensure both ultra-low latency and long-term cryptographic security. Existing security solutions either rely on classical cryptographic protocols that are vulnerable to quantum attacks or deploy isolated post-quantum primitives without providing a unified framework for secure real-time medical command transmission. This research presents a latency-aware, multi-layered post-quantum security architecture for 6G-enabled smart hospital environments. The proposed framework establishes an end-to-end secure command transmission pipeline that integrates hardware-rooted device authentication, post-quantum key establishment, hybrid payload protection, dynamic access enforcement, and tamper-evident auditing within a coherent system design. In contrast to existing approaches that focus on individual security mechanisms, the architecture introduces a structured integration of Kyber-based key encapsulation and Dilithium digital signatures with hybrid AES-based encryption and legacy-compatible key transport, while Physical Unclonable Function authentication provides hardware-bound device identity verification. Zero Trust access control, metadata-driven anomaly detection, and blockchain-style audit logging provide continuous verification and traceability, while threshold cryptography distributes cryptographic authority to eliminate single points of compromise. The proposed architecture is evaluated using a discrete-event simulation framework representing adversarial conditions in realistic 6G medical communication scenarios, including replay attacks, payload manipulation, and key corruption attempts. Experimental results demonstrate improved security and operational efficiency, achieving a 48% reduction in detection latency, a 68% reduction in false-positive anomaly detection rate, and a 39% improvement in end-to-end round-trip latency compared to conventional RSA-AES-based architectures. These results demonstrate that the proposed framework provides a practical and scalable approach for achieving post-quantum secure and low-latency command transmission in next-generation 6G smart hospital systems. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)

18 pages, 1430 KB  
Article
Multi-Layer Traffic Analysis Framework for DDoS Attacks in Software-Defined IoT Networks
by Keerthana Balaji and Mamatha Balachandra
Future Internet 2026, 18(3), 164; https://doi.org/10.3390/fi18030164 - 19 Mar 2026
Viewed by 211
Abstract
Both the data plane and the control plane are targets for Distributed Denial of Service (DDoS) attacks in the Software-Defined Internet of Things (SDIoT). Currently available studies rely on observations from a single network layer, which limits cross-layer attack analysis. This paper presents a synchronized, phase-aware, multi-layer traffic collection framework that mimics SDIoT environments under diverse DDoS attack scenarios. The data collected are metrics captured at the host, switch, and controller layers during the normal, attack, and post-attack phases with strict temporal alignment. To capture diverse DDoS attack behaviors in SDIoT environments, representative data plane attacks including volumetric flooding and switch-level flow table saturation were used, and a control plane attack targeting the SDN controller was also implemented. The evaluation was conducted using a Mininet-based SDIoT testbed with a POX controller, with each scenario executed across five independent runs with statistical validation. The proposed framework enables reproducible and time-aligned multi-layer analysis through standardized orchestration and automated logging. Results indicate that SDIoT DDoS behavior manifests differently across traffic, state, and resource-level metrics, and that accurate characterization benefits from temporally aligned multi-layer monitoring rather than relying solely on packet rate analysis. Full article
(This article belongs to the Special Issue Cybersecurity, Privacy, and Trust in Intelligent Networked Systems)

33 pages, 8800 KB  
Article
Energy-Efficient Wireless Sensor Networks Through Coverage Hole Detection and Mitigation Using a Hybrid Raccoon–Hermit Crab Optimization Algorithm
by Sean Laurel Rex Bashyam and Renuga Devi Subramanian
Future Internet 2026, 18(3), 163; https://doi.org/10.3390/fi18030163 - 19 Mar 2026
Viewed by 259
Abstract
Wireless sensor networks encounter issues such as irregular deployment, node failures, and uneven energy consumption that create coverage holes, reducing network lifetime in critical or disaster-response applications. Most existing approaches focus on coverage enhancement during the initial deployment and perform mitigation only at the beginning of network operation. However, coverage holes may also appear later due to node failures and energy depletion. To address this issue, a Hybrid Raccoon–Hermit Crab optimization algorithm is proposed that supports both initial coverage enhancement and adaptive mitigation of coverage holes that emerge later. The proposed algorithm uses the global exploration ability of the raccoon optimization algorithm to find optimal cluster heads and the exploitation ability of the hermit crab optimization to determine optimal positions and logically relocate static nodes to mitigate coverage holes. The algorithm is evaluated under different node densities (50, 100, 200, 500, and 1000), with the sink at (100, 100). Compared with existing methods for 1000 nodes, it enhances network lifetime by 65.20%, improves the coverage ratio by 16.94% (from 77.05% to 93.94%), increases throughput to 3,139,293 delivered bits, and reduces delay to 2.27292 s. Full article

29 pages, 1632 KB  
Article
Context-Aware Software-Defined Wireless Networks: An AI-Based Approach to Deal with QoS
by Dainier González Romero, Sergio F. Ochoa and Rodrigo M. Santos
Future Internet 2026, 18(3), 162; https://doi.org/10.3390/fi18030162 - 19 Mar 2026
Viewed by 428
Abstract
Many IoT systems require real-time communication, which imposes strict timing constraints on data transmission and stresses network propagation models. These systems need to address these communication requirements using wireless networks and also manage quality of service. While Software-Defined Wireless Networks (SDWNs) offer a compelling alternative for these scenarios, they lack dynamic mechanisms to autonomously adapt network behavior to fluctuating operational conditions. In order to do that, this paper builds on the authors’ previous work and shows how to implement Context-Aware Software-Defined Wireless Networks (CA-SDWNs) that use a self-adapting traffic management strategy to deal with dynamic real-time requirements. In particular, it adapts the medium access protocol parameters to changes in the operational context using an intelligent agent in the control loop of the network. We implement the CA-SDWN model using the NS-3 simulator, and that implementation is made available for researchers and developers through an open-source library. The model is evaluated using several SDWNs that operate under dynamic conditions. The experimental results show how incorporating artificial intelligence into the control loop enables the use of the context information to enhance the predictability of the medium access protocol parameters, thus handling different traffic QoS according to the demand of IoT applications. It represents a clear contribution for researchers and developers of these systems when they have to deal with QoS and real-time constrained communication in SDWNs implemented on WiFi. Full article

24 pages, 560 KB  
Systematic Review
Augmented Reality Technologies for Radiation Safety Training: A Systematic Review of Sensor Integration and Visualization Approaches
by Rajiv Khadka, Xingyue Yang, Jack Dunker and John Koudelka
Future Internet 2026, 18(3), 161; https://doi.org/10.3390/fi18030161 - 19 Mar 2026
Viewed by 264
Abstract
This paper presents a comprehensive systematic review examining the application of augmented reality (AR) and sensor technologies for visualizing ionizing radiation in virtual training environments. The review methodology involved systematic identification and analysis of the relevant literature based on predetermined criteria including publication type, year of publication, application domain, and technological approach. The literature search encompassed publications from 2011 to 2021 across four major academic databases: Web of Science, Google Scholar, IEEE Xplore, and Scopus. Through rigorous screening following PRISMA 2020 guidelines, 23 research articles met the inclusion criteria for detailed analysis. From 404 initial database records, 360 were excluded during title/abstract screening (primarily for lacking AR components, radiation focus, or training applications) and 4 during full-text assessment (all for lacking sensor integration). The findings reveal that AR-based ionizing radiation visualization has been successfully implemented across diverse domains, including nuclear facility operations, medical procedures, CERN research activities, and educational and monitoring applications. The analysis identified multiple dimensions of impact, encompassing distinct benefits, emerging opportunities, and implementation challenges associated with AR deployment for ionizing radiation training. Each of these dimensions is comprehensively examined and documented within this review. Additionally, this study identifies critical research gaps that currently limit the full potential of AR technology in supporting ionizing radiation training programs. These gaps are systematically analyzed and discussed to establish clear directions for future research endeavors in this emerging field. Full article
(This article belongs to the Special Issue Human-Computer Interaction and Virtual Reality (VR))

28 pages, 7442 KB  
Article
Usability and User Experience in an Industrial Metaverse: A Mixed-Methods Study of the Necoverse Point Cloud Inspection System for Shipbuilding
by Aung Pyae, Juha Saarinen, Jaakko Haavisto, Jaro Virta, Matti Gröhn and Mika Luimula
Future Internet 2026, 18(3), 160; https://doi.org/10.3390/fi18030160 - 18 Mar 2026
Viewed by 236
Abstract
Industrial metaverse systems enable shared, immersive environments for coordinating complex, data-intensive industrial workflows; however, ensuring effective and usable interaction remains a key barrier to professional adoption. This study examines immersive point cloud- and CAD-based inspection tasks in an industrial metaverse context using a mixed-methods evaluation that combines perceived usability ratings, cognitive workload assessment (NASA-TLX), validated presence and flow instruments, qualitative interviews, and structured observation. The results indicate that users generally experienced smooth navigation, manageable cognitive workload, and a meaningful sense of spatial presence, supporting focused and task-oriented engagement. At the same time, execution-level challenges—particularly related to tool discoverability, annotation flexibility, system feedback clarity, and interaction ergonomics—introduced workflow friction for some users. By triangulating quantitative, qualitative, and observational evidence, the study derives actionable design recommendations, including adaptive onboarding, improved feedback mechanisms, and refinements to interaction design. Overall, the findings provide empirical insight into how usability, cognitive workload, presence, and flow jointly shape user experience in industrial metaverse inspection environments and inform the development of more robust, user-centered industrial systems. Full article
(This article belongs to the Section Techno-Social Smart Systems)

15 pages, 1872 KB  
Article
FPGA-Based Time Synchronization over Ethernet Networks for the DTT Control and Data Acquisition System
by Aamir Ali Patoli, Luca Boncagni, Gabriele Manduchi and Giancarlo Fortino
Future Internet 2026, 18(3), 159; https://doi.org/10.3390/fi18030159 - 18 Mar 2026
Viewed by 604
Abstract
Time synchronization is a fundamental requirement for the reliable operation of Control and Data Acquisition Systems (CODASs) in large-scale fusion experiments such as the Divertor Tokamak Test (DTT). Distributed diagnostics, sensors, and control subsystems must share a unified time reference to guarantee deterministic data acquisition and stable plasma control. This paper presents the FPGA-based implementation and evaluation of a synchronization system that combines the IEEE 1588 Precision Time Protocol (PTP) with Pulse Per Second (PPS) generation. The proposed platform is built on Zynq UltraScale+ Kria KR260 System-on-Modules (SOMs) running a customized PetaLinux distribution with LinuxPTP utilities. Hardware timestamping is enabled through the integrated Timestamping Unit (TSU) in the Gigabit Ethernet MAC, while a hardware logic module generates PPS signals from the synchronized PTP clock. Experimental validation demonstrates nanosecond-level synchronization with an RMS timing accuracy of approximately 8.5 ns. A detailed analysis of PPS offset, network path delay, and servo adjustments confirms stability of the timing system. The proposed design offers a low-cost, flexible, fully customizable and controllable solution for distributed diagnostic and control systems in fusion facilities. Full article
(This article belongs to the Special Issue Future Industrial Networks: Technologies, Algorithms, and Protocols)
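
The hardware-timestamped synchronization the paper implements rests on the standard IEEE 1588 delay request-response exchange; the textbook offset and path-delay formulas are easy to state, as in the sketch below. The timestamps are illustrative, the path is assumed symmetric (as PTP does), and none of the authors' FPGA logic is reproduced here.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 exchange:
    t1 = Sync sent by master, t2 = Sync received by slave,
    t3 = Delay_Req sent by slave, t4 = Delay_Req received by master.
    Assumes a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way mean path delay
    return offset, delay

# Example with nanosecond timestamps (illustrative numbers only):
print(ptp_offset_and_delay(1_000, 1_850, 2_000, 2_750))  # -> (50.0, 800.0)
```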

17 pages, 249 KB  
Article
ChatGPT-Assisted Task Analysis for Special Education Teachers: An Exploratory Study of Alignment, Readability, Efficiency, and Acceptability
by Serife Balikci, Nesime Kubra Terzioglu and Salih Rakap
Future Internet 2026, 18(3), 158; https://doi.org/10.3390/fi18030158 - 18 Mar 2026
Viewed by 214
Abstract
Task analysis is a foundational component of instructional design in special education, yet it can impose substantial time and cognitive demands on teachers. Artificial intelligence (AI) tools such as ChatGPT may provide support for instructional planning tasks by assisting educators in generating and organizing task sequences. This study examined the effectiveness, readability, time efficiency, and acceptability of ChatGPT-assisted task analysis compared to a traditional task analysis method. Thirty-two special education teachers participated in a randomized between-groups study in which they developed task analyses using either a traditional approach or ChatGPT supported by a structured interaction protocol. Task analyses were evaluated based on alignment with expert-developed models, readability, and development time, and teachers’ perceptions of acceptability were also examined. Results indicated that ChatGPT-assisted task analyses required significantly less development time while demonstrating strong alignment with expert-generated models. Readability levels and the number of task steps were similar across groups. Teachers who used ChatGPT also reported positive perceptions regarding the usefulness and acceptability of AI assistance in instructional planning. These findings suggest that AI-assisted tools may support teachers in developing task analyses more efficiently while maintaining instructional clarity. However, given the exploratory nature of the study and the limited sample, further research is needed to examine how AI-assisted task analysis may influence instructional practice and student learning outcomes in special education. Full article
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
20 pages, 1948 KB  
Article
Contra-KD: A Lightweight Transformer Model for Malicious URL Detection with Contrastive Representation and Model Distillation
by Zheng You Lim, Ying Han Pang, Edwin Chan Kah Jun, Shih Yin Ooi and Goh Fan Ling
Future Internet 2026, 18(3), 157; https://doi.org/10.3390/fi18030157 - 17 Mar 2026
Viewed by 295
Abstract
Malicious URLs are widely regarded as a serious threat to cybersecurity, serving as pathways to phishing, malware, and other attacks. Although transformer-based models have demonstrated good performance in malicious URL detection, their high computational cost and latency make them impractical for deployment in real-time or resource-constrained systems. Lightweight models built with knowledge distillation (KD) tend to be efficient but are often not sufficiently discriminative to distinguish between malicious and benign URLs with substantial lexical overlap, particularly when dealing with an imbalanced dataset. To address these issues, we propose Contra-KD, a lightweight transformer model that incorporates contrastive learning (CL) and KD. The proposed framework imposes structured embedding matching, allowing the student model to learn more meaningful and generalized representations. Contra-KD uses a compact 6-layer student transformer architecture based on ELECTRA to keep the parameter count small and achieves more than 90% computational fidelity with high accuracy. In this scheme, CL improves feature discrimination by semantically clustering similar URLs and separating dissimilar ones, which limits confusion, especially when URLs share common lexical traits or are adversarially obfuscated. On a large-scale, publicly available Kaggle dataset of 651,191 URLs in imbalanced scenarios, Contra-KD achieves 99.05% accuracy, 99.96% ROC-AUC, and 98.18% MCC, outperforming its counterparts, including lightweight and transformer-based models. In summary, Contra-KD offers an efficient transformer architecture that is both compact and computationally effective while delivering stable detection performance. Full article
(This article belongs to the Section Cybersecurity)
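
The abstract combines knowledge distillation with contrastive representation learning; a common way to combine a soft-label KD term with a supervised contrastive term in PyTorch is sketched below. The temperatures, loss weights, and the exact contrastive formulation are assumptions and may differ from the paper's loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft-label KD: KL divergence between temperature-scaled class distributions.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

def supervised_contrastive_loss(embeddings, labels, tau=0.1):
    # Pull same-label URL embeddings together and push different labels apart.
    z = F.normalize(embeddings, dim=-1)
    sim = (z @ z.T) / tau
    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))           # exclude self-pairs
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)           # avoid 0 * (-inf) on diagonal
    denom = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_mask.float() * log_prob).sum(dim=1).div(denom).mean()

def contra_kd_loss(student_logits, teacher_logits, embeddings, labels,
                   alpha=0.5, beta=0.3):
    # Assumed weighting of the task, distillation, and contrastive terms.
    ce = F.cross_entropy(student_logits, labels)
    kd = distillation_loss(student_logits, teacher_logits)
    cl = supervised_contrastive_loss(embeddings, labels)
    return (1 - alpha) * ce + alpha * kd + beta * cl
```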

23 pages, 1137 KB  
Article
Adaptive Healthcare Monitoring Through Drift-Aware Edge-Cloud Intelligence
by Aleksandra Stojnev Ilic, Milos Ilic, Natalija Stojanovic and Dragan Stojanovic
Future Internet 2026, 18(3), 156; https://doi.org/10.3390/fi18030156 - 17 Mar 2026
Viewed by 267
Abstract
Continuous healthcare monitoring systems generate non-stationary physiological data streams, where evolving statistical properties and patterns often invalidate static models and fixed user classifications. To address this challenge, we propose a drift-aware adaptive architecture that integrates concept drift detection into a distributed edge–cloud data analytics pipeline. In the proposed design, concept drift is elevated from a maintenance signal to the primary mechanism governing user-state adaptation, model evolution, and inference consistency. Within the proposed system, the edge tier performs low-latency inference and preliminary drift screening under strict resource constraints, while the cloud tier executes advanced drift detection and validation, orchestrates user reclassification and model retraining, and manages model evolution. A feedback loop synchronizes edge and cloud operations, ensuring that detected drift triggers appropriate system transitions, either reassigning a user to an updated state category or initiating targeted model updates. This architecture reduces reliance on static group assignments, improves personalization, and preserves model fidelity under evolving physiological conditions. We analyze the drift types most relevant to healthcare data streams, evaluate the suitability of lightweight and cloud-grade drift detectors, and define the system requirements for stability, responsiveness, and clinical safety. Evaluation across 21 concurrent users demonstrates that drift-aware adaptation reduced prediction MAE by 40.6% relative to periodic retraining, with an end-to-end adaptation latency of 66 ± 37 s. Hierarchical cloud validation reduced the false-positive retraining rate from 88.9% (edge-only triggering) to 27.3%, while maintaining uninterrupted inference throughout all adaptation events. Full article
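
One way to read the edge-cloud split described above is a cheap edge-side screen whose alarms are confirmed by a stricter cloud-side detector before retraining is triggered. The sketch below pairs a moving-window z-score screen with a Page-Hinkley confirmation; the thresholds, window size, and detector choices are illustrative and may differ from those evaluated in the paper.

```python
from collections import deque

class EdgeDriftScreen:
    """Lightweight edge-side screen: flag samples whose prediction error deviates
    strongly from the recent window (illustrative, not the paper's detector)."""
    def __init__(self, window=50, z_thresh=3.0):
        self.errors = deque(maxlen=window)
        self.z_thresh = z_thresh

    def update(self, error: float) -> bool:
        self.errors.append(error)
        if len(self.errors) < self.errors.maxlen:
            return False                      # not enough history yet
        mean = sum(self.errors) / len(self.errors)
        std = (sum((e - mean) ** 2 for e in self.errors) / len(self.errors)) ** 0.5
        return abs(error - mean) > self.z_thresh * (std + 1e-9)

class CloudPageHinkley:
    """Cloud-side confirmation with a Page-Hinkley test; only a confirmed alarm would
    trigger user reclassification or model retraining."""
    def __init__(self, delta=0.005, lam=50.0):
        self.delta, self.lam = delta, lam
        self.mean, self.n, self.cum, self.min_cum = 0.0, 0, 0.0, 0.0

    def update(self, error: float) -> bool:
        self.n += 1
        self.mean += (error - self.mean) / self.n
        self.cum += error - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.lam

# Edge screening and cloud validation both observe the error stream; retraining is
# triggered only when the cloud confirms what the edge flagged.
edge, cloud = EdgeDriftScreen(), CloudPageHinkley(lam=2.0)
for err in [0.1] * 60 + [0.9] * 60:           # synthetic shift in prediction error
    edge_alarm = edge.update(err)
    cloud_alarm = cloud.update(err)
    if edge_alarm and cloud_alarm:
        print("confirmed drift: trigger reclassification/retraining")
        break
```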

24 pages, 1451 KB  
Review
AI-Driven Network Optimization for the 5G-to-6G Transition: A Taxonomy-Based Survey and Reference Framework
by Rexhep Mustafovski, Galia Marinova, Besnik Qehaja, Edmond Hajrizi, Shejnaze Gagica and Vassil Guliashki
Future Internet 2026, 18(3), 155; https://doi.org/10.3390/fi18030155 - 17 Mar 2026
Viewed by 643
Abstract
This paper presents a taxonomy-based survey of AI-driven network optimization mechanisms relevant to the transition from fifth generation (5G) to sixth generation (6G) mobile communication systems. In contrast to earlier generational shifts that are often described as technology replacement cycles, the 5G-to-6G evolution is increasingly characterized in the literature as a prolonged period of coexistence, hybrid operation, and progressive integration of new capabilities across radio, edge, core, and service layers. To structure this transition, the paper organizes prior work into a transition-oriented taxonomy covering migration strategies, AI-enabled closed-loop control, RAN disaggregation and edge intelligence, core virtualization and slice orchestration, spectrum-aware coexistence, service-driven requirements, and security-aware governance. Rather than introducing a new optimization algorithm or an experimentally validated architecture, the contribution of this survey is analytical and integrative. Specifically, it consolidates fragmented research directions into a reference view of how AI-driven control mechanisms are distributed across spectrum, RAN, edge, and core domains during hybrid 5G–6G operation. In addition, the paper includes a structured evidence synthesis of performance trends, deployment maturity signals, and recurring methodological limitations reported across the literature. The review indicates that meeting anticipated 6G objectives, including ultra-low latency, high reliability, scalability, and improved energy efficiency, depends less on isolated enhancements at individual protocol layers and more on coordinated cross-layer optimization supported by AI-native control loops. At the same time, the surveyed literature reveals persistent gaps in service-to-control mapping, security-aware orchestration, interoperability across heterogeneous domains, and reproducible evaluation methodologies for hybrid 5G–6G environments. The survey is intended to provide researchers, network operators, and standardization stakeholders with a structured analytical basis for assessing how AI-driven optimization can support the staged evolution from 5G systems toward 6G-ready infrastructures. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

27 pages, 3391 KB  
Article
A Hybrid Federated–Incremental Learning Framework for Continuous Authentication in Zero-Trust Networks
by Jie Ji, Shi Qiu, Shengpeng Ye and Xin Liu
Future Internet 2026, 18(3), 154; https://doi.org/10.3390/fi18030154 - 16 Mar 2026
Viewed by 225
Abstract
Zero-trust architecture (ZTA) requires continuous and adaptive identity authentication to maintain security in dynamic environments. However, current federated learning (FL)-based authentication models often struggle to incorporate evolving attack patterns without experiencing catastrophic forgetting. Moreover, non-independent and identically distributed (non-IID) client data and concept drift frequently lead to degraded model robustness and personalization. To address these issues, this paper presents a hybrid learning framework that integrates federated learning with incremental learning (IL) for sustainable authentication. A Dynamic Weighted Federated Aggregation (DWFA) algorithm is developed to mitigate concept drift by adjusting aggregation weights in real time, ensuring that the global model adapts to changing data distributions. This approach enables continuous learning from distributed threat data while maintaining privacy and eliminating the need for historical data retention. Experimental results on real-world traffic datasets indicate that the proposed framework outperforms conventional FL baselines, reducing the overall error rate by approximately 56% and improving the detection rate for novel attack types by over 17.8%. Furthermore, the framework remains stable against performance decay while maintaining efficient communication overhead. This study provides an adaptive, privacy-preserving solution for identity authentication in zero-trust systems. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)
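
The Dynamic Weighted Federated Aggregation idea, adjusting aggregation weights in real time, can be illustrated with a simple rule that up-weights clients observing more drift, as in the sketch below. The weighting formula, its inputs, and the hyperparameter are assumptions rather than the paper's exact DWFA rule.

```python
import numpy as np

def dwfa_aggregate(client_params, sample_counts, drift_scores, gamma=1.0):
    """client_params: list of {name: ndarray}; drift_scores: higher = more drift observed.
    Clients seeing more drift are up-weighted so the global model tracks new patterns
    (an assumed rule for illustration)."""
    raw = np.array(sample_counts, dtype=float) * (1.0 + gamma * np.array(drift_scores))
    w = raw / raw.sum()
    return {
        name: sum(wi * params[name] for wi, params in zip(w, client_params))
        for name in client_params[0]
    }

# Usage: two clients, the second observing stronger drift and therefore weighted higher.
params = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 0.0])}]
print(dwfa_aggregate(params, sample_counts=[100, 100], drift_scores=[0.0, 0.5]))
```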

28 pages, 4007 KB  
Article
CCBA: Dynamic Scheduling Algorithm for Jammer Resources in Strong Electromagnetic Interference Environment
by Zhenhua Wei, Wenpeng Wu, Haiyang You, Zhaoguang Zhang, Chenxi Li, Jianwei Zhan and Shan Zhao
Future Internet 2026, 18(3), 153; https://doi.org/10.3390/fi18030153 - 16 Mar 2026
Viewed by 188
Abstract
The strong electromagnetic interference environment on the battlefield poses new challenges for the networked collaboration of jammers and the estimation of jamming effects. Traditional success-based jamming indicators struggle to meet the needs of continuous, low-power, and flexible jamming, which complicates the emergency scheduling of jamming resources. Targeting the overall degradation of the communicating party’s signal reception quality, this paper proposes the restrictive condition of “overall limited jamming” and the evaluation index of the “multistage jamming-to-signal ratio (J/S)”, which together meet the scheduling requirements of distributed jamming resources in harsh environments. Based on a jammer layout capable of overall high-intensity jamming, the electromagnetic environment estimation, power scheduling, and collaboration strategies of the jammers are designed, a communication countermeasure game algorithm under blocked networking collaboration is established, and independent dynamic scheduling of jamming resources is realized. The experimental results show that the Concentric Circle Broadcasting Algorithm (CCBA) not only maintains effective communication jamming (the proportion of high-intensity jamming is no less than 50%, and the proportion of communication nodes with normal signal reception is no more than 6%), but also extends the system operation duration by 66.8–269.6% compared with the baseline algorithms for a 600 MHz fixed-frequency, 1 MHz bandwidth communication system. This work is limited to the line-of-sight (LOS) scenario; future research will extend it to non-line-of-sight (NLOS) scenarios. Full article
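To make the jamming-to-signal ratio concrete, the sketch below computes an aggregate J/S at a single receiver under a plain free-space, line-of-sight model: the received powers of several jammers are summed linearly and compared with the wanted signal. The propagation model, the function names, and the default carrier frequency are illustrative assumptions and do not implement the paper's multistage J/S index or the CCBA scheduling logic.

```python
import numpy as np

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (LOS, distance > 0)."""
    c = 3.0e8
    return 20 * np.log10(distance_m) + 20 * np.log10(freq_hz) + 20 * np.log10(4 * np.pi / c)

def jamming_to_signal_db(jammer_tx_dbm, jammer_dist_m, signal_tx_dbm, signal_dist_m, freq_hz=600e6):
    """Aggregate J/S (dB) at one receiver from several jammers (illustrative model only)."""
    j_rx_dbm = np.asarray(jammer_tx_dbm, dtype=float) - fspl_db(np.asarray(jammer_dist_m, dtype=float), freq_hz)
    j_total_mw = np.sum(10 ** (j_rx_dbm / 10))        # sum jammer powers in linear units
    s_rx_dbm = signal_tx_dbm - fspl_db(signal_dist_m, freq_hz)
    return 10 * np.log10(j_total_mw) - s_rx_dbm
```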

16 pages, 686 KB  
Article
Design of Network Traffic Analysis Models Based on Deep Neural Networks
by Jiantao Cui and Yixiang Zhao
Future Internet 2026, 18(3), 152; https://doi.org/10.3390/fi18030152 - 16 Mar 2026
Viewed by 242
Abstract
The proliferation of next-generation Internet infrastructures and the Internet of Things (IoT) has exponentially increased network traffic complexity. While deep learning (DL)-based intrusion detection systems (IDSs) show immense potential, they persistently suffer from challenges including high computational overhead, vanishing gradients in deep architectures, and acute sensitivity to noise. Consequently, these issues impede their real-time deployment in resource-constrained edge computing environments. To overcome these limitations, we propose a novel, lightweight, and robust intrusion detection framework based on deep neural networks (DNNs). Initially, we employ a Robust Scaler-based statistical preprocessing strategy to supersede traditional Z-score standardization, effectively mitigating the adverse impacts of outliers and burst traffic noise. Subsequently, we design an advanced architecture that integrates self-normalizing residual blocks with a channel attention mechanism. Leveraging compressed hidden layers alongside the Scaled Exponential Linear Unit (SELU) activation function, this architecture not only mitigates the vanishing gradient problem but also amplifies critical traffic features. Concurrently, it achieves a substantial reduction in both parameter count and inference latency. Furthermore, we introduce a cosine annealing strategy to dynamically adjust the learning rate during training, thereby facilitating the model’s escape from local optima and accelerating convergence. Extensive experiments on standard benchmark datasets demonstrate that our proposed framework achieves superior detection accuracy while maintaining exceptional computational efficiency compared to state-of-the-art baselines. Full article
(This article belongs to the Section Cybersecurity)
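The combination of self-normalizing residual blocks and channel attention described above can be sketched roughly as follows in PyTorch. The layer widths, the squeeze-and-excitation-style gate, and the reduction factor are assumptions made for illustration rather than the authors' exact architecture; in the surrounding pipeline one would pair this with sklearn's RobustScaler for preprocessing and torch.optim.lr_scheduler.CosineAnnealingLR during training.

```python
import torch
import torch.nn as nn

class SELUResidualBlock(nn.Module):
    """Self-normalizing residual block with a lightweight channel-attention gate
    (illustrative layout, not the paper's exact architecture)."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.SELU(),
            nn.Linear(dim, dim), nn.SELU(),
        )
        self.attn = nn.Sequential(                      # squeeze-and-excitation style gate
            nn.Linear(dim, dim // reduction), nn.SELU(),
            nn.Linear(dim // reduction, dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.body(x)
        return x + h * self.attn(h)                     # residual path gated by attention

block = SELUResidualBlock(dim=64)
features = block(torch.randn(32, 64))                   # batch of 32 preprocessed flow records
```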

22 pages, 634 KB  
Review
A Multidimensional Maturity Model for the Metaverse: Stages, Dimensions and Architectural Alignment
by Joan-Marc Garcés Sabaté and Eloi Coloma Picó
Future Internet 2026, 18(3), 151; https://doi.org/10.3390/fi18030151 - 16 Mar 2026
Viewed by 367
Abstract
The Metaverse has become a central concept in the evolution of digital transformation, but its current development is marked by conceptual ambiguity, technological fragmentation and the limited presence of structured frameworks for the systematic assessment of its maturity. It is currently approached from partial perspectives that often focus on virtual worlds rather than conceptualizing it as a multidimensional digital ecosystem. This study proposes a multidimensional model of Metaverse maturity divided into three stages (Emergent, Developed and Integrated) and five analytical dimensions (experience, interoperability, standardization, technology and resources). The model is based on a systematic literature review of academic and non-academic sources. It aligns these dimensions systematically with the layered architecture of the Metaverse and formalizes their interdependence through a structured impact-mapping procedure. This maturity model offers an analytical tool for comparing contexts and sectors, identifying bottlenecks, and guiding strategic planning. It establishes a conceptual framework for future empirical validation and sector-specific applications. Full article

31 pages, 1881 KB  
Article
DRT-PBFT: A Novel PBFT-Optimized Consensus Algorithm for Blockchain Based on Dynamic Reputation Tree
by Xiaohong Deng, Lihui Liu, Zhigang Chen, Xinrong Lu and Juan Li
Future Internet 2026, 18(3), 150; https://doi.org/10.3390/fi18030150 - 16 Mar 2026
Viewed by 308
Abstract
While the practical Byzantine fault tolerance (PBFT) consensus algorithm provides excellent theoretical fault tolerance, its performance in practical blockchain applications is often constrained by high communication overhead, especially in scenarios with limited node resources and high mobility, such as Vehicular Ad hoc Networks (VANETs). To address these blockchain-specific limitations without sacrificing the fundamental safety guarantees against arbitrary Byzantine failures, this paper proposes a novel PBFT-optimized consensus algorithm based on a dynamic reputation tree (DRT-PBFT). First, to address the issue of limited storage resources, we propose a block synchronization method based on differentiated storage of reputation values. The lower-reputation nodes retain only “micro-blocks” that contain essential information of the complete block, while the higher-reputation nodes store and synchronize complete blocks, significantly reducing the storage overhead. Second, on the basis of the reputation values, we construct a tree communication topology from the leaf node layer in a bottom-up manner. Messages are transmitted from multiple child nodes to their parent node, resolving the problem of a single message source in the tree structure. Additionally, we optimize the consensus process, reducing the number of mutual communications between nodes to a linear level. Finally, to address the problem of malicious nodes in the tree structure, we introduce a dynamic reconstruction mechanism for the reputation tree. When child node messages are inconsistent, the parent node splits the child nodes to mitigate the influence of malicious nodes, enhancing both the security and scalability of the consensus process. The experimental results show that, compared with typical improved PBFT algorithms, the proposed algorithm improves the average throughput by 34.1% and reduces the average latency by 27.4%. Moreover, compared with the full replication block synchronization method, the differentiated storage method reduces the storage overhead by 26.3%, making it potentially more suitable for large-scale VANET scenarios. Full article
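A minimal sketch of a reputation-ordered communication tree with differentiated storage is given below: the most reputable peer becomes the root, lower-reputation peers sink toward the leaves, and only the top fraction keeps complete blocks while the rest keep micro-blocks. The heap-style parent assignment, the fanout, and the full_storage_ratio cutoff are illustrative assumptions, not the paper's DRT construction or its dynamic reconstruction rules.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Peer:
    peer_id: int
    reputation: float
    children: List["Peer"] = field(default_factory=list)
    stores_full_blocks: bool = False        # differentiated storage flag

def build_reputation_tree(peers: List[Peer], fanout: int = 3,
                          full_storage_ratio: float = 0.3) -> Peer:
    """Illustrative reputation-ordered tree; DRT-PBFT additionally splits subtrees
    when child messages conflict."""
    ordered = sorted(peers, key=lambda p: p.reputation, reverse=True)
    cutoff = max(1, int(len(ordered) * full_storage_ratio))
    for rank, peer in enumerate(ordered):
        peer.stores_full_blocks = rank < cutoff                  # high reputation -> complete blocks
        if rank > 0:
            ordered[(rank - 1) // fanout].children.append(peer)  # parent always has higher reputation
    return ordered[0]                                            # root = most reputable peer
```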

33 pages, 3876 KB  
Article
Predictive Network Slicing Resource Orchestration: A VNF Approach
by Andrés Cárdenas, Luis Sigcha and Mohammadreza Mosahebfard
Future Internet 2026, 18(3), 149; https://doi.org/10.3390/fi18030149 - 16 Mar 2026
Viewed by 369
Abstract
As network slicing gains traction in cloud computing environments, efficient management and orchestration systems are required to realize the benefits of this technology. These systems must enable dynamic provisioning and resource optimization of virtualized services spanning multiple network slices. Nevertheless, the common resource overprovisioning practice implemented by service providers leads to the inefficient use of resources, limiting the ability of Mobile Network Operators (MNOs) to rent new network slices to more vertical customers. Hence, efficient resource allocation mechanisms are essential to achieve optimal network performance and cost-effectiveness. This paper proposes a predictive model for network slice resource optimization based on resource sharing between Virtualized Network Functions (VNFs). The model employs deep learning models based on Long Short-Term Memory (LSTM) and Transformers for CPU resource usage prediction and a reactive algorithm for resource sharing between VNFs. The model is powered by a telemetry system proposed as an extension of the 3GPP network slice management architectural framework. The extended architectural framework enhances the automation and optimization of the network slice lifecycle management. The model is validated through a practical use case, demonstrating the effectiveness of the resource sharing algorithm in preventing VNF overload and predicting resource usage accurately. The findings demonstrate that the sharing mechanism enhances resource optimization and ensures compliance with service level agreements, mitigating service degradation. This work contributes to the efficient management and utilization of network resources in 5G networks and provides a basis for further research in network slice resource optimization. Full article
(This article belongs to the Special Issue Software-Defined Networking and Network Function Virtualization)
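The reactive sharing mechanism can be pictured with the sketch below, in which VNFs forecast to exceed their CPU allocation borrow slack from co-located VNFs forecast to stay under theirs. The dictionary interface, the headroom margin, and the greedy borrowing order are assumptions made for illustration; in the paper, the forecasts come from LSTM/Transformer predictors fed by the proposed telemetry extension.

```python
def reactive_cpu_sharing(predicted_cpu, allocated_cpu, headroom=0.1):
    """Redistribute spare CPU from under-loaded VNFs to those predicted to overload
    (illustrative greedy policy, not the paper's exact algorithm)."""
    adjusted = dict(allocated_cpu)
    donors = {v: allocated_cpu[v] - predicted_cpu[v] - headroom
              for v in allocated_cpu
              if allocated_cpu[v] - predicted_cpu[v] > headroom}
    for vnf, need in sorted(predicted_cpu.items(), key=lambda kv: kv[1], reverse=True):
        deficit = need - adjusted[vnf]
        for donor in list(donors):
            if deficit <= 0:
                break
            give = min(donors[donor], deficit)
            donors[donor] -= give
            adjusted[donor] -= give                      # donor releases capacity
            adjusted[vnf] += give                        # needy VNF absorbs it
            deficit -= give
    return adjusted

# Hypothetical example: "upf" is predicted to spike while "amf" and "smf" have slack.
shared = reactive_cpu_sharing(
    predicted_cpu={"upf": 3.2, "amf": 0.8, "smf": 1.0},
    allocated_cpu={"upf": 2.5, "amf": 1.5, "smf": 1.8},
)
```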

16 pages, 1275 KB  
Article
Differentially Private Federated Learning with Adaptive Clipping Thresholds
by Jianhua Liu, Yanglin Zeng, Zhongmei Wang, Weiqing Zhang and Yao Tong
Future Internet 2026, 18(3), 148; https://doi.org/10.3390/fi18030148 - 14 Mar 2026
Viewed by 311
Abstract
Under non-independent and identically distributed (Non-IID) conditions, significant variations exist in local model updates across clients and training phases during the collaborative modeling process of differentially private federated learning (DP-FL). Fixed clipping thresholds and noise scales struggle to accommodate these diverse update differences, leading to mismatches between local update intensity and noise perturbations. This imbalance results in data privacy leaks and suboptimal model accuracy. To address this, we propose a differentially private federated learning method based on adaptive clipping thresholds. During each communication round, the server adaptively estimates the global clipping threshold for that round using a quantile strategy based on the statistical distribution of client update norms. Simultaneously, clients adaptively adjust their noise scales according to the clipping threshold magnitude, enabling dynamic matching of clipping intensity and noise perturbation across training phases and clients. The novelty of this work lies in a quantile-driven, round-wise global clipping adaptation that synchronizes sensitivity bounding and noise calibration across heterogeneous clients, enabling improved privacy–utility behavior under a fixed privacy accountant. In experiments on rail damage datasets, the proposed method slightly reduces the attacker’s MIA ROC-AUC by 0.0033 and 0.0080 compared with Fed-DPA and DP-FedAvg, respectively, indicating stronger privacy protection, while improving average accuracy by 1.55% and 3.35% and achieving faster, more stable convergence. We further validate its effectiveness on CIFAR-10 under non-IID partitions. Full article
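A minimal sketch of the quantile-driven clipping idea is shown below: the server sets the round's clipping bound to a quantile of the client update norms, clips each update to that bound, and adds Gaussian noise scaled to the same bound before averaging. The single noise_multiplier, the median default, and the plain averaging step are illustrative assumptions and omit the per-client noise adaptation and privacy accounting described in the abstract.

```python
import numpy as np

def adaptive_clip_and_average(client_updates, quantile=0.5, noise_multiplier=1.0, rng=None):
    """One round of quantile-based clipping with matched Gaussian noise
    (illustrative, not the paper's full mechanism)."""
    rng = rng or np.random.default_rng()
    norms = [np.linalg.norm(u) for u in client_updates]
    clip = np.quantile(norms, quantile)                          # adaptive threshold this round
    clipped = [u * min(1.0, clip / (n + 1e-12)) for u, n in zip(client_updates, norms)]
    noisy_sum = sum(clipped) + rng.normal(0.0, noise_multiplier * clip, clipped[0].shape)
    return noisy_sum / len(client_updates)
```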

26 pages, 1470 KB  
Article
ANRF: An Adaptive Network Reconstruction Framework for Community Detection in Bipartite Networks
by Furong Chang, Songxian Wu, Yue Zhao and Farhan Ullah
Future Internet 2026, 18(3), 147; https://doi.org/10.3390/fi18030147 - 13 Mar 2026
Viewed by 348
Abstract
Bipartite network community detection is of significant importance for understanding the underlying structure and functional organization of real-world complex systems. Although many mature community detection algorithms exist for unipartite networks, they cannot be directly applied to bipartite networks due to their unique topological structure, characterized by heterogeneous node types and cross-layer connections. Furthermore, some existing bipartite network community detection methods still rely heavily on manual experience to set key parameters, which limits their applicability and scalability in practical scenarios. To address these issues, this paper proposes an enhanced framework—the Adaptive Network Reconstruction Framework (ANRF)—by introducing an adaptive parameter optimization mechanism based on the existing Network Reconstruction Framework (NRF). This framework can be effectively integrated with traditional unipartite network community detection algorithms to achieve automatic community detection with reduced dependence on manual parameter tuning. The core procedure of the method consists of four main steps. First, we calculate the interaction forces between node pairs. Second, through comprehensive analysis of the network topological features, we adaptively determine the threshold parameter θ and related parameters for the interaction forces. Third, based on these thresholds and parameters, we perform edge filtering on the bipartite network to construct a reconstructed network. Finally, we apply unipartite community detection algorithms directly to the reconstructed network to obtain the community structure. To validate the effectiveness of ANRF, we combined it with the Louvain method and the Greedy modularity method, and conducted experimental evaluations on multiple synthetic and real-world network datasets. A systematic comparison with current state-of-the-art algorithms was made. The experimental results on multiple synthetic and real-world datasets within our evaluated scope demonstrate that ANRF achieves competitive performance in terms of community modularity and community density compared to state-of-the-art algorithms, while significantly reducing reliance on manual parameter tuning and enhancing robustness under the tested conditions. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
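The reconstruct-then-detect pipeline can be approximated as follows with networkx. Using shared-neighbour projection weights as a stand-in for the pairwise interaction forces and a simple quantile as the threshold θ are assumptions made for illustration; ANRF derives both from its own analysis of the network topology.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import bipartite

def reconstruct_and_detect(B, primary_nodes, theta_quantile=0.5):
    """Project the bipartite graph, drop weak edges, then run Louvain on the result
    (illustrative pipeline; ANRF computes forces and theta differently)."""
    projected = bipartite.weighted_projected_graph(B, primary_nodes)
    weights = [d["weight"] for _, _, d in projected.edges(data=True)]
    theta = np.quantile(weights, theta_quantile)                 # adaptive cut-off
    kept = [(u, v) for u, v, d in projected.edges(data=True) if d["weight"] >= theta]
    reconstructed = projected.edge_subgraph(kept).copy()
    return nx.algorithms.community.louvain_communities(reconstructed, weight="weight")
```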
