Journal Description
Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q1 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17 days after submission; acceptance to publication takes 3.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Network and Communications Technology: Future Internet, IoT, Telecom, Journal of Sensor and Actuator Networks, Network, Signals.
Impact Factor: 3.6 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Data-Driven Predictive Analytics for Dynamic Aviation Systems: Optimising Fleet Maintenance and Flight Operations Through Machine Learning
Future Internet 2025, 17(11), 508; https://doi.org/10.3390/fi17110508 - 4 Nov 2025
Abstract
The aviation industry operates as a complex, dynamic system generating vast volumes of data from aircraft sensors, flight schedules, and external sources. Managing this data is critical for mitigating disruptive and costly events such as mechanical failures and flight delays. This paper presents a comprehensive application of predictive analytics and machine learning to enhance aviation safety and operational efficiency. We address two core challenges: predictive maintenance of aircraft engines and forecasting flight delays. For maintenance, we utilise NASA’s C-MAPSS simulation dataset to develop and compare models, including one-dimensional convolutional neural networks (1D CNNs) and long short-term memory networks (LSTMs), for classifying engine health status and predicting the Remaining Useful Life (RUL), achieving classification accuracy up to 97%. For operational efficiency, we analyse historical flight data to build regression models for predicting departure delays, identifying key contributing factors such as airline, origin airport, and scheduled time. Our methodology highlights the critical role of Exploratory Data Analysis (EDA), feature selection, and data preprocessing in managing high-volume, heterogeneous data sources. The results demonstrate the significant potential of integrating these predictive models into aviation Business Intelligence (BI) systems to transition from reactive to proactive decision-making. The study concludes by discussing the integration challenges within existing data architectures and the future potential of these approaches for optimising complex, networked transportation systems.
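To make the modeling pipeline concrete, here is a minimal sketch of a 1D-CNN regressor for Remaining Useful Life in Keras. The window length, the number of selected sensor channels, and the layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch (assumptions noted above): windowed, normalized sensor
# sequences in, a scalar RUL estimate out.
import numpy as np
from tensorflow import keras

WINDOW, SENSORS = 30, 14  # assumed window length and sensor count

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, SENSORS)),
    keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),  # predicted RUL in flight cycles
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Toy stand-in for preprocessed C-MAPSS windows and capped RUL labels.
X = np.random.rand(256, WINDOW, SENSORS).astype("float32")
y = (np.random.rand(256, 1) * 125).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```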
Full article
Open Access Article
Distributing Quantum Computations, Shot-Wise
by
Giuseppe Bisicchia, Giuseppe Clemente, Jose Garcia-Alonso, Juan Manuel Murillo, Massimo D’Elia and Antonio Brogi
Future Internet 2025, 17(11), 507; https://doi.org/10.3390/fi17110507 - 4 Nov 2025
Abstract
NISQ (Noisy Intermediate-Scale Quantum) era constraints, namely high sensitivity to noise and limited qubit count, impose significant barriers on the usability of QPU (Quantum Processing Unit) capabilities. To overcome these challenges, researchers are exploring methods to maximize the utility of existing QPUs despite their limitations. Building upon the idea that the execution of a quantum circuit’s shots does not need to be treated as a singular monolithic unit, we propose a methodological framework, termed shot-wise, which enables the distribution of shots for a single circuit across multiple QPUs. Our framework features customizable policies to adapt to various scenarios. Additionally, it introduces a calibration method to pre-evaluate the accuracy and reliability of each QPU’s output before the actual distribution process and an incremental execution mechanism for dynamically managing the shot allocation and policy updates. Such an approach enables flexible and fine-grained management of the distribution process, taking into account various user-defined constraints and (contrasting) objectives. Demonstration results show that shot-wise distribution consistently and significantly improves the execution performance, with no significant drawbacks and additional qualitative advantages. Overall, the shot-wise methodology improves result stability and often outperforms single QPU runs, offering a robust and flexible approach to managing variability in quantum computing.
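The shot-wise idea can be sketched in a few lines: split a circuit's shot budget across QPUs in proportion to their calibration scores, then merge the returned histograms. The proportional policy and the function names below are stand-ins for the paper's customizable policies, not its actual API.

```python
from collections import Counter

def split_shots(total_shots, calibration_scores):
    """Allocate shots proportionally to each QPU's calibration score."""
    total = sum(calibration_scores.values())
    alloc = {q: int(total_shots * s / total) for q, s in calibration_scores.items()}
    best = max(calibration_scores, key=calibration_scores.get)
    alloc[best] += total_shots - sum(alloc.values())  # rounding remainder
    return alloc

def merge_counts(per_qpu_counts):
    """Sum per-QPU measurement histograms into a single result."""
    merged = Counter()
    for counts in per_qpu_counts:
        merged.update(counts)
    return dict(merged)

print(split_shots(4000, {"qpu_a": 0.9, "qpu_b": 0.6, "qpu_c": 0.3}))
# {'qpu_a': 2001, 'qpu_b': 1333, 'qpu_c': 666}
print(merge_counts([{"00": 980, "11": 1021}, {"00": 1010, "11": 989}]))
```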
Full article
Open Access Article
Zero-Copy Messaging: Low-Latency Inter-Task Communication in CHERI-Enabled RTOS
by
Mina Soltani Siapoush and Jim Alves-Foss
Future Internet 2025, 17(11), 506; https://doi.org/10.3390/fi17110506 - 4 Nov 2025
Abstract
Efficient and secure inter-task communication (ITC) is critical in real-time embedded systems, particularly in security-sensitive architectures. Traditional ITC mechanisms in Real-Time Operating Systems (RTOSs) often incur high latency from kernel trapping, context-switch overhead, and multiple data copies during message passing. This paper introduces a zero-copy, capability-protected ITC framework for CHERI-enabled RTOS environments that achieves both high performance and strong compartmental isolation. The approach integrates mutexes and semaphores encapsulated as sealed capabilities, a shared memory ring buffer for messaging, and compartment-local stubs to eliminate redundant data copies and reduce cross-compartment transitions. Temporal safety is ensured through hardware-backed capability expiration, mitigating use-after-free vulnerabilities. Implemented as a reference application on the CHERIoT RTOS, the framework delivers up to 3× lower mutex lock latency and over 70% faster message transfers compared to baseline FreeRTOS, while preserving deterministic real-time behavior. Security evaluation confirms resilience against unauthorized access, capability leakage, and TOCTOU (time-of-check-to-time-of-use) vulnerabilities. These results demonstrate that capability-based zero-copy ITC can be a practical and performance-optimal solution for constrained embedded systems that demand high throughput, low latency, and verifiable isolation guarantees.
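The messaging core can be illustrated with a toy single-producer/single-consumer ring buffer. Python here stands in for the CHERI/C implementation: true zero-copy transfer and sealed-capability revocation are hardware features, which this sketch can only mimic by handing over references instead of copying payloads.

```python
class RingBuffer:
    """Toy SPSC ring buffer; slots hold references, so no payload copy."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.head = 0  # next slot to read
        self.tail = 0  # next slot to write

    def try_send(self, msg):
        if (self.tail + 1) % self.capacity == self.head:
            return False  # full: sender backs off instead of blocking
        self.slots[self.tail] = msg
        self.tail = (self.tail + 1) % self.capacity
        return True

    def try_recv(self):
        if self.head == self.tail:
            return None  # empty
        msg = self.slots[self.head]
        self.slots[self.head] = None  # drop the reference ("revoke" access)
        self.head = (self.head + 1) % self.capacity
        return msg

rb = RingBuffer(8)
rb.try_send(bytearray(b"sensor frame"))
print(rb.try_recv())  # bytearray(b'sensor frame'), delivered without a copy
```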
Full article
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)
Open Access Article
Resilient Federated Learning for Vehicular Networks: A Digital Twin and Blockchain-Empowered Approach
by
Jian Li, Chuntao Zheng and Ziyao Chen
Future Internet 2025, 17(11), 505; https://doi.org/10.3390/fi17110505 - 3 Nov 2025
Abstract
Federated learning (FL) is a foundational technology for enabling collaborative intelligence in vehicular edge computing (VEC). However, the volatile network topology caused by high vehicle mobility and the profound security risks of model poisoning attacks severely undermine its practical deployment. This paper introduces DTB-FL, a novel framework that synergistically integrates digital twin (DT) and blockchain technologies to establish a secure and efficient learning paradigm. DTB-FL leverages a digital twin to create a real-time virtual replica of the network, enabling a predictive, mobility-aware participant selection strategy that preemptively mitigates network instability. Concurrently, a private blockchain underpins a decentralized trust infrastructure, employing a dynamic reputation system to secure model aggregation and smart contracts to automate fair incentives. Crucially, these components are synergistic: The DT provides a stable cohort of participants, enhancing the accuracy of the blockchain’s reputation assessment, while the blockchain feeds reputation scores back to the DT to refine future selections. Extensive simulations demonstrate that DTB-FL accelerates model convergence by 43% compared to FedAvg and maintains 75% accuracy under poisoning attacks even when 40% of participants are malicious—a scenario where baseline FL methods degrade to below 40% accuracy. The framework also exhibits high resilience to network dynamics, sustaining performance at vehicle speeds up to 120 km/h. DTB-FL provides a comprehensive, cross-layer solution that transforms vehicular FL from a vulnerable theoretical model into a practical, robust, and scalable platform for next-generation intelligent transportation systems.
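The aggregation step that the reputation system protects can be sketched as reputation-weighted averaging with a trust cutoff. The threshold, weights, and update shapes below are illustrative assumptions, not DTB-FL's actual aggregation rule.

```python
import numpy as np

def reputation_weighted_aggregate(updates, reputations, min_rep=0.2):
    """Average client updates weighted by reputation; drop distrusted clients."""
    kept = [(u, r) for u, r in zip(updates, reputations) if r >= min_rep]
    if not kept:
        raise ValueError("no trusted clients this round")
    total = sum(r for _, r in kept)
    return sum(u * (r / total) for u, r in kept)

updates = [np.ones(4), 2 * np.ones(4), 100 * np.ones(4)]  # last one poisoned
reps = [0.9, 0.8, 0.05]  # low reputation excludes the poisoned update
print(reputation_weighted_aggregate(updates, reps))  # ≈ [1.47 1.47 1.47 1.47]
```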
Full article
(This article belongs to the Topic Internet of Things Architectures, Applications, and Strategies: Emerging Paradigms, Technologies, and Advancing AI Integration)
Open Access Article
Modeling Interaction Patterns in Visualizations with Eye-Tracking: A Characterization of Reading and Information Styles
by
Angela Locoro and Luigi Lavazza
Future Internet 2025, 17(11), 504; https://doi.org/10.3390/fi17110504 - 3 Nov 2025
Abstract
In data visualization, users’ scanning patterns are as crucial as their reading patterns in text-based media. Yet, no systematic attempt exists to characterize this activity with basic features, such as reading speed and scanpaths, nor to relate them to data complexity and information disposition. To fill this gap, this paper proposes a model-based method to analyze and interpret those features from eye-tracking data. To this end, the bias-noise model is applied to a data visualization eye-tracking dataset available online, and enriched with areas of interest labels. The positive results of this method are as follows: (i) the identification of users’ reading styles, such as meticulous, systematic, and serendipitous; (ii) the characterization of information disposition as gathered or scattered, and of information complexity as more or less dense; (iii) the discovery of a behavioural pattern of efficiency, given that the more visualizations were read by a participant, the greater their reading speed, consistency, and predictability of reading; (iv) the identification of encoding and title areas of interest as the primary loci of attention in visualizations, with a peculiar back-and-forth reading pattern; (v) the identification of the encoding area of interest as the fastest to read in less dense visualization types, such as bar, circle, and line charts. Future experiments involving participants from diverse cultural backgrounds could not only validate the observed behavioural patterns, but also enrich the experimental framework with additional perspectives.
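Two of the basic features the study builds on, reading speed and the scanpath, reduce to simple computations over fixations and areas of interest (AOIs). The fixation format and AOI geometry below are assumptions for illustration, not the dataset's actual schema.

```python
def aoi_of(x, y, aois):
    """Map a fixation point to the first AOI rectangle containing it."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def scanpath_and_speed(fixations, aois):
    """fixations: list of (x, y, duration_s). Returns (AOI sequence, AOI visits/s)."""
    path = [aoi_of(x, y, aois) for x, y, _ in fixations]
    path = [a for a in path if a is not None]
    total_time = sum(d for _, _, d in fixations)
    return path, len(path) / total_time if total_time else 0.0

aois = {"title": (0, 0, 800, 60), "encoding": (100, 100, 700, 500)}
fix = [(400, 30, 0.25), (300, 250, 0.40), (420, 35, 0.20)]
print(scanpath_and_speed(fix, aois))
# (['title', 'encoding', 'title'], ~3.5): a back-and-forth title/encoding pattern
```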
Full article
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
Open Access Review
Securing the SDN Data Plane in Emerging Technology Domains: A Review
by
Travis Quinn, Faycal Bouhafs and Frank den Hartog
Future Internet 2025, 17(11), 503; https://doi.org/10.3390/fi17110503 - 3 Nov 2025
Abstract
Over the last decade, Software-Defined Networking (SDN) has garnered increasing research interest for networking and security. This interest stems from the programmability and dynamicity offered by SDN, as well as the growing importance of SDN as a foundational technology of future telecommunications networks and the greater Internet. However, research into SDN security has focused disproportionately on the security of the control plane, resulting in the relative trivialization of data plane security methods and a corresponding lack of appreciation of the data plane in SDN security discourse. To remedy this, this paper provides a comprehensive review of SDN data plane security research, classified into three primary research domains and several sub-domains. The three primary research domains are as follows: security capabilities within the data plane, security of the SDN infrastructure, and dynamic routing within the data plane. Our work resulted in the identification of specific strengths and weaknesses in existing research, as well as promising future directions, based on novelty and overlap with emerging technology domains. The most striking future directions are the use of hybrid SDN architectures leveraging a programmable data plane, SDN for heterogeneous network security, and the development of trust-based methods for SDN management and security, including trust-based routing.
Full article
(This article belongs to the Special Issue Software-Defined Networking (SDN) and Network Function Virtualization (NFV) for a Hyperconnected World)
Open Access Article
ABMS-Driven Reinforcement Learning for Dynamic Resource Allocation in Mass Casualty Incidents
by
Ionuț Murarețu, Alexandra Vultureanu-Albiși, Sorin Ilie and Costin Bădică
Future Internet 2025, 17(11), 502; https://doi.org/10.3390/fi17110502 - 3 Nov 2025
Abstract
This paper introduces a novel framework that integrates reinforcement learning with declarative modeling and mathematical optimization for dynamic resource allocation during mass casualty incidents. Our approach leverages Mesa as an agent-based modeling library to develop a flexible and scalable simulation environment as a decision support system for emergency response. This paper addresses the challenge of efficiently allocating casualties to hospitals by combining mixed-integer linear and constraint programming while enabling a central decision-making component to adapt allocation strategies based on experience. The two-layer architecture ensures that casualty-to-hospital assignments satisfy geographical and medical constraints while optimizing resource usage. The reinforcement learning component receives feedback through agent-based simulation outcomes, using survival rates as the reward signal to guide future allocation decisions. Our experimental evaluation, using simulated emergency scenarios, shows a significant improvement in survival rates compared to traditional optimization approaches. The results indicate that the hybrid approach successfully combines the robustness of declarative modeling and the adaptability required for smart decision making in complex and dynamic emergency scenarios.
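The feedback loop between the learner and the simulation can be reduced to a toy bandit-style sketch: pick a hospital, observe survival as the reward, and update a value estimate. The simulator and the state/action granularity are stand-ins, not the paper's Mesa environment or its MILP/CP allocation layer.

```python
import random

hospitals = ["H1", "H2", "H3"]
q = {h: 0.0 for h in hospitals}            # learned value per hospital
counts = {h: 0 for h in hospitals}
true_survival = {"H1": 0.6, "H2": 0.8, "H3": 0.5}  # hidden ground truth (assumed)

def simulate_outcome(h):
    """Stand-in for the agent-based simulation's survival signal."""
    return 1.0 if random.random() < true_survival[h] else 0.0

for episode in range(2000):
    # Epsilon-greedy: mostly exploit the best-valued hospital, sometimes explore.
    h = random.choice(hospitals) if random.random() < 0.1 else max(q, key=q.get)
    reward = simulate_outcome(h)
    counts[h] += 1
    q[h] += (reward - q[h]) / counts[h]    # incremental mean update

print(max(q, key=q.get), q)  # tends toward H2, the highest-survival option
```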
Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
Open Access Article
Design of a Pill-Sorting and Pill-Grasping Robot System Based on Machine Vision
by
Xuejun Tian, Jiadu Ke, Weiguo Wu and Jian Teng
Future Internet 2025, 17(11), 501; https://doi.org/10.3390/fi17110501 - 31 Oct 2025
Abstract
We developed a machine vision-based robotic system to address automation challenges in pharmaceutical pill sorting and packaging. The hardware platform integrates a high-resolution industrial camera with an HSR-CR605 robotic arm. Image processing leverages the VisionMaster 4.3.0 platform for color classification and positioning. Coordinate mapping between camera and robot is established through a three-point calibration method, with real-time communication realized via the Modbus/TCP protocol. Experimental validation demonstrates that the system achieves 95% recognition accuracy under conditions of pill overlap ≤ 30% and dynamic illumination of 50–1000 lux, ±0.5 mm picking precision, and a sorting efficiency of 108 pills per minute. These results confirm the feasibility of integrating domestic hardware and algorithms, providing an efficient automated solution for the pharmaceutical industry. This work makes three key contributions: (1) demonstrating a cost-effective domestic hardware-software integration achieving 42% cost reduction while maintaining comparable performance to imported alternatives, (2) establishing a systematic validation methodology under industrially relevant conditions that provides quantitative robustness metrics for pharmaceutical automation, and (3) offering a practical implementation framework validated through multi-scenario experiments that bridges the gap between laboratory research and production-line deployment.
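The three-point calibration step has a compact closed form: solve for the affine transform that maps camera pixels to robot coordinates from three non-collinear point pairs. The sample correspondences below are made up for illustration.

```python
import numpy as np

def affine_from_three_points(cam_pts, robot_pts):
    """cam_pts, robot_pts: three corresponding 2D points each.
    Returns a 2x3 matrix A with robot ≈ A @ [x, y, 1]."""
    P = np.hstack([np.asarray(cam_pts, float), np.ones((3, 1))])  # 3x3
    R = np.asarray(robot_pts, float)                              # 3x2
    return np.linalg.solve(P, R).T                                # 2x3

A = affine_from_three_points(
    cam_pts=[(100, 100), (500, 120), (300, 400)],
    robot_pts=[(10.0, 20.0), (50.0, 21.5), (30.2, 49.8)],
)
print(A @ np.array([320, 240, 1.0]))  # pixel (320, 240) in robot coordinates
```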
Full article
(This article belongs to the Special Issue Advances and Perspectives in Human-Computer Interaction—2nd Edition)
Open Access Article
Eavesdropper Detection in Six-State Protocol Against Partial Intercept–Resend Attack
by
Francesco Fiorini, Rosario Giuseppe Garroppo, Michele Pagano and Rostyslav Schiavini Yadzhak
Future Internet 2025, 17(11), 500; https://doi.org/10.3390/fi17110500 - 31 Oct 2025
Abstract
This work presents and evaluates two threshold-based detection methods for the Six-State quantum key distribution protocol, considering a realistic scenario involving a partial intercept–resend attack and channel noise. The statistical properties of the shared quantum bit error rate (QBER) are analyzed and used to estimate the attacker’s interception density from observed data. Building on this foundation, the work derives two optimal QBER detection thresholds designed to minimize both false positive and false negative rates, following, respectively, upper theoretical bounds and a limiting probability density function approach. A purpose-built Qiskit simulation environment enables the evaluation and comparison of the two detection methods on simulated and real-inspired quantum systems with differing noise characteristics. This framework moves beyond theoretical analysis, allowing practical investigation of system noise effects on detection accuracy. Simulation results confirm that both methods are robust and effective, achieving high detection accuracy across all the tested configurations, thereby validating their applicability to real-world quantum communication systems.
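The detection logic itself is a short computation once a threshold is fixed: estimate the QBER from a sifted-key sample, invert a noise model for the interception density, and compare against the threshold. The simplified error model (intrinsic noise q plus p/3 from partial intercept–resend on the Six-State protocol) and the fixed threshold below are illustrative, not the paper's derived optimal thresholds.

```python
def estimate_qber(alice_bits, bob_bits):
    errors = sum(a != b for a, b in zip(alice_bits, bob_bits))
    return errors / len(alice_bits)

def interception_density(qber, noise_q):
    # Simplified model: QBER ≈ q + p/3, so invert for p and clamp to [0, 1].
    return min(max(3 * (qber - noise_q), 0.0), 1.0)

def eavesdropper_detected(qber, threshold):
    return qber > threshold  # threshold tuned to balance FP/FN rates

alice = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0] * 100
bob = alice.copy()
bob[::17] = [1 - b for b in bob[::17]]  # inject ~5.9% errors
q = estimate_qber(alice, bob)
print(q, interception_density(q, noise_q=0.02), eavesdropper_detected(q, 0.04))
```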
Full article
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)
Open Access Article
A Three-Party Evolutionary Game Model and Stability Analysis for Network Defense Strategy Selection
by
Zhenghao Qian, Fengzheng Liu, Mingdong He, Bo Li, Xuewu Li, Chuangye Zhao, Gehua Fu and Yifan Hu
Future Internet 2025, 17(11), 499; https://doi.org/10.3390/fi17110499 - 31 Oct 2025
Abstract
Traditional cyber attack-defense models focus solely on the attacker and defender, neglecting the role of government-led system administrators. To address strategic selection challenges in cyber warfare, this study employs an evolutionary game theory framework to construct a tripartite game model involving cyber attackers, defenders, and system administrators. The replicator dynamic equation is utilized for stability analysis of behavioral strategies across stakeholders, with Lyapunov theory applied to evaluate the equilibrium points of pure strategies within the system. MATLAB (2021a) simulations were conducted to validate the theoretical findings. Experimental results demonstrate that the model achieves evolutionary stability under various scenarios, yielding optimal defense strategies that provide theoretical support for addressing cybersecurity challenges.
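The replicator-dynamics core of such a model fits in a few lines: each population's strategy share grows in proportion to its fitness advantage. The linear payoff expressions below are placeholders, not the paper's payoff matrices; the sketch only shows the shape of the computation that the stability analysis runs on.

```python
import numpy as np
from scipy.integrate import solve_ivp

def replicator(t, s):
    """x, y, z: probabilities that attackers attack, defenders defend,
    and administrators regulate. Payoff forms are assumed for illustration."""
    x, y, z = s
    ua = 2.0 - 3.0 * y - 1.0 * z   # attacking pays less when defended/regulated
    ud = 1.5 * x - 0.5             # defending pays when attacks are likely
    ug = 1.0 * x - 0.8 * y         # regulating pays under heavy attack
    return [x * (1 - x) * ua, y * (1 - y) * ud, z * (1 - z) * ug]

sol = solve_ivp(replicator, (0, 50), [0.5, 0.5, 0.5])
print(sol.y[:, -1])  # long-run strategy mix; stability is then checked via Lyapunov theory
```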
Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
Open Access Article
Hybrid GOMP–ROMP Algorithm for Sparse Channel Estimation in mmWave MIMO: Enhancing Convergence and Reducing Computational Complexity
by
Anjana Babu Sujatha and Vinoth Babu Kumaravelu
Future Internet 2025, 17(11), 498; https://doi.org/10.3390/fi17110498 - 30 Oct 2025
Abstract
This paper proposes an efficient sparse channel estimation method for millimeter wave (mmWave) hybrid multiple-input multiple-output (MIMO) systems. The performance of orthogonal matching pursuit (OMP) and its advanced variants—generalized OMP (GOMP), simultaneous OMP (SOMP), and regularized OMP (ROMP)—is evaluated based on normalized mean square error (NMSE) and computational complexity. A new hybrid GOMP–ROMP algorithm is proposed to achieve faster convergence and lower computational costs while maintaining the desired estimation accuracy. Simulation results reveal that the proposed algorithm reduces NMSE by 0.040823 compared to OMP and attains ROMP’s accuracy with significantly less complexity. For a MIMO system with a 32 × 32 configuration, the method offers up to a fourfold reduction in computational complexity compared to OMP, ROMP, and SOMP. These findings highlight the potential of the hybrid algorithm for real-time mmWave massive MIMO applications in fifth-generation (5G) and sixth-generation (6G) systems, where high bandwidth and low latency are essential.
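As a reference point for the variants compared above, plain OMP greedily selects the atom most correlated with the residual and re-solves least squares on the growing support; GOMP picks several atoms per iteration and ROMP regularizes the selection. A minimal OMP sketch with illustrative sizes:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x with y ≈ A @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if k not in support:
            support.append(k)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8   # stand-in sensing matrix
x_true = np.zeros(256)
x_true[[5, 80, 200]] = [1.0, -0.7, 0.4]  # 3-sparse channel (assumed)
print(np.linalg.norm(omp(A, A @ x_true, 3) - x_true))  # near-zero error
```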
Full article
(This article belongs to the Special Issue Advanced 5G and Beyond Networks)
Open Access Article
RL-Based Resource Allocation in SDN-Enabled 6G Networks
by
Ivan Radosavljević, Petar D. Bojović and Živko Bojović
Future Internet 2025, 17(11), 497; https://doi.org/10.3390/fi17110497 - 29 Oct 2025
Abstract
Dynamic and efficient resource allocation is critical for Software-Defined Networking (SDN) enabled sixth-generation (6G) networks to ensure adaptability and optimized utilization of network resources. This paper proposes a reinforcement learning (RL)-based framework that integrates an actor–critic model with a modular SDN interface for fine-grained, queue-level bandwidth scheduling. The framework further incorporates a stochastic traffic generator for training and a virtualized multi-slice platform testbed for a realistic beyond-5G/6G evaluation. Experimental results show that the proposed RL model significantly outperforms a baseline forecasting model: it converges faster, showing notable improvements after 240 training epochs, achieves higher cumulative rewards, and reduces packet drops under dynamic traffic conditions. Moreover, the RL-based scheduling mechanism exhibits improved adaptability to traffic fluctuations, although both approaches face challenges under node outage conditions. These findings confirm that queue-level reinforcement learning enhances responsiveness and reliability in 6G networks, while also highlighting open challenges in fault-tolerant scheduling.
Full article
Open Access Systematic Review
Modular Monolith Architecture in Cloud Environments: A Systematic Literature Review
by
Lamis F. Al-Qora’n and Amro Al-Said Ahmad
Future Internet 2025, 17(11), 496; https://doi.org/10.3390/fi17110496 - 29 Oct 2025
Abstract
Modular monolithic architecture (MMA) has recently emerged as a hybrid architecture that is positioned between traditional monoliths and microservices. It combines operational simplicity with modularity and maintainability. Although industry adoption of the architecture is growing, academic research on MMA remains fragmented and lacks systematic synthesis. This paper presents the first systematic literature review (SLR) of MMA in cloud environments. The review follows Kitchenham’s guidelines; we searched six major digital libraries for peer-reviewed studies published between 2020 and May 2025. From 369 retrieved records, we included 15 primary studies through a structured review protocol. Our synthesis highlights the problem of inconsistent terminology usage in the literature. It also identifies the architectural scope of MMA and specifies adoption drivers such as simplified deployment, maintainability, and reduced orchestration overhead. We also analyse implementation practices—including Domain-Driven Design (DDD), modular boundaries, and containerised deployment—and highlight comparative evidence showing MMA’s suitability when microservices introduce excessive complexity or costs. Key research gaps include the absence of consensus on a clear, comprehensive definition, limited empirical benchmarking, and insufficient tool support. Thus, this study establishes a conceptual foundation for future research and provides practitioners with structured insights to inform architectural decisions in cloud-native environments.
Full article
Open Access Article
Sparse Regularized Autoencoders-Based Radiomics Data Augmentation for Improved EGFR Mutation Prediction in NSCLC
by
Muhammad Asif Munir, Reehan Ali Shah, Urooj Waheed, Muhammad Aqeel Aslam, Zeeshan Rashid, Mohammed Aman, Muhammad I. Masud and Zeeshan Ahmad Arfeen
Future Internet 2025, 17(11), 495; https://doi.org/10.3390/fi17110495 - 29 Oct 2025
Abstract
Lung cancer (LC) remains a leading cause of cancer mortality worldwide, where accurate and early identification of gene mutations such as epidermal growth factor receptor (EGFR) is critical for precision treatment. However, machine learning-based radiomics approaches often face challenges due to the small and imbalanced nature of the datasets. This study proposes a comprehensive framework based on Generic Sparse Regularized Autoencoders with Kullback–Leibler divergence (GSRA-KL) to generate high-quality synthetic radiomics data and overcome these limitations. A systematic approach generated 63 synthetic radiomics datasets by tuning a novel kl_weight regularization hyperparameter across three hidden-layer sizes, optimized using Optuna for computational efficiency. A rigorous assessment was conducted to evaluate the impact of hyperparameter tuning across the 63 synthetic datasets, with a focus on the EGFR gene mutation. This evaluation utilized resemblance-dimension scores (RDS), novel utility-dimension scores (UDS), and t-SNE visualizations to ensure the validation of data quality, revealing that GSRA-KL achieves excellent performance (RDS > 0.45, UDS > 0.7), especially when class distribution is balanced, while remaining competitive with the Tabular Variational Autoencoder (TVAE). Additionally, a comprehensive statistical correlation analysis demonstrated strong and significant monotonic relationships among resemblance-based performance metrics up to moderate scaling (≤1.0), confirming the robustness and stability of inter-metric associations under varying configurations. Complementary computational cost evaluation further indicated that moderate kl_weight values yield an optimal balance between reconstruction accuracy and resource utilization, with Spearman correlations revealing improved reconstruction quality (lower MSE) at reduced computational overhead. The ablation-style analysis confirmed that including the KL divergence term meaningfully enhances the generative capacity of GSRA-KL over its baseline counterpart. Furthermore, the GSRA-KL framework achieved substantial improvements in computational efficiency compared to prior PSO-based optimization methods, resulting in reduced memory usage and training time. Overall, GSRA-KL represents an incremental yet practical advancement for augmenting small and imbalanced high-dimensional radiomics datasets, showing promise for improved mutation prediction and downstream precision oncology studies.
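The regularizer that gives GSRA-KL its name can be sketched directly: a KL-divergence penalty that pushes each hidden unit's mean activation toward a target sparsity rho, scaled by the kl_weight hyperparameter the study tunes. Tensor shapes and constants below are assumptions for illustration.

```python
import torch

def kl_sparsity_penalty(hidden, rho=0.05, kl_weight=0.1, eps=1e-8):
    """hidden: (batch, units) sigmoid activations from the encoder."""
    rho_hat = hidden.mean(dim=0).clamp(eps, 1 - eps)  # mean activation per unit
    kl = rho * torch.log(rho / rho_hat) \
        + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl_weight * kl.sum()

h = torch.sigmoid(torch.randn(32, 64))   # stand-in encoder output
recon = torch.tensor(0.123)              # placeholder reconstruction loss
total = recon + kl_sparsity_penalty(h)   # loss the autoencoder would minimize
print(float(total))
```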
Full article
Open Access Review
MEC and SDN Enabling Technologies, Design Challenges, and Future Directions of Tactile Internet and Immersive Communications
by
Shahd Thabet, Abdelhamied A. Ateya, Mohammed ElAffendi and Mohammed Abo-Zahhad
Future Internet 2025, 17(11), 494; https://doi.org/10.3390/fi17110494 - 28 Oct 2025
Abstract
Tactile Internet (TI) is an innovative paradigm for emerging generations of communication systems that support ultra-low latency and highly robust transmission of haptics, actuation, and immersive communication in real time. It is considered a critical facilitator for remote surgery, industrial automation, and extended reality (XR). Originally intended as a flagship application for fifth-generation (5G) networks, TI imposes strict constraints, especially the one-millisecond end-to-end latency, ultra-high reliability, and seamless adaptation, that present formidable challenges. These challenges are the bottleneck for evolution to sixth-generation (6G) networks; thus, new architectures and technologies are urgently required. This survey systematically discusses the most important underlying technologies for TI and immersive communications. It especially highlights software-defined networking (SDN) and edge intelligence (EI) as enabling technologies. SDN improves the programmability, adaptability, and dynamic control of network infrastructures, while EI exploits artificial intelligence (AI)-driven decision-making at the network edge for latency optimization, resource usage, and service offering. Moreover, this work describes other enabling technologies, including network function virtualization (NFV), digital twins, quantum computing, and blockchain. Furthermore, the work investigates recent achievements and studies in which SDN and EI are combined in TI and presents their effect on latency reduction, optimal network utilization, and service stability. A comparison of several state-of-the-art methods is performed to determine present limitations and gaps. Finally, the work provides open research problems and future trends, focusing on the importance of intelligent, autonomous, and scalable network topologies for defining the paradigm of TI and immersive communication systems.
Full article
(This article belongs to the Special Issue Software-Defined Networking (SDN) and Network Function Virtualization (NFV) for a Hyperconnected World)
Open Access Article
QuantumTrust-FedChain: A Blockchain-Aware Quantum-Tuned Federated Learning System for Cyber-Resilient Industrial IoT in 6G
by
Saleh Alharbi
Future Internet 2025, 17(11), 493; https://doi.org/10.3390/fi17110493 - 27 Oct 2025
Abstract
Industrial Internet of Things (IIoT) systems face severe security and trust challenges, particularly under cross-domain data sharing and federated orchestration. We present QuantumTrust-FedChain, a cyber-resilient federated learning framework integrating quantum variational trust modeling, blockchain-backed provenance, and Byzantine-robust aggregation for secure IIoT collaboration in 6G networks. The architecture includes a Quantum Graph Attention Network (Q-GAT) that models device trust evolution from encrypted device logs, a consensus-aware federated optimizer that penalizes adversarial gradients using stochastic contract enforcement, and a shard-based blockchain for real-time forensic traceability. Using datasets from SWaT and TON IoT, experiments show 98.3% accuracy in anomaly detection, a 35% improvement in defense against model poisoning, and full ledger traceability with under 8.5% blockchain overhead. This framework offers a robust and explainable solution for secure AI deployment in safety-critical IIoT environments.
Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT—3rd Edition)
Open Access Article
Federated Decision Transformers for Scalable Reinforcement Learning in Smart City IoT Systems
by
Laila AlTerkawi and Mokhled AlTarawneh
Future Internet 2025, 17(11), 492; https://doi.org/10.3390/fi17110492 - 27 Oct 2025
Abstract
The rapid proliferation of devices on the Internet of Things (IoT) in smart city environments enables autonomous decision-making, but introduces challenges of scalability, coordination, and privacy. Existing reinforcement learning (RL) methods, such as Multi-Agent Actor–Critic (MAAC), depend on centralized critics and recurrent structures, which limit scalability and create single points of failure. This paper proposes a Federated Decision Transformer (FDT) framework that integrates transformer-based sequence modeling with federated learning. By replacing centralized critics with self-attention-driven trajectory modeling, the FDT preserves data locality, enhances privacy, and supports decentralized policy learning across distributed IoT nodes. We benchmarked the FDT against MAAC in a mobile edge computing (MEC) environment with identical hyperparameter configurations. The results demonstrate that the FDT achieves superior reward efficiency, scalability, and adaptability in dynamic IoT networks, although with slightly higher variance during early training. These findings highlight transformer-based federated RL as a robust and privacy-preserving alternative to critic-based methods for large-scale IoT systems.
Full article
(This article belongs to the Special Issue Internet of Things (IoT) in Smart City)
Open Access Article
Towards Fair Medical Risk Prediction Software
by
Wolfram Luther and Ekaterina Auer
Future Internet 2025, 17(11), 491; https://doi.org/10.3390/fi17110491 - 27 Oct 2025
Abstract
This article examines the role of fairness in software across diverse application contexts, with a particular emphasis on healthcare, and introduces the concept of algorithmic (individual) meta-fairness. We argue that attaining a high degree of fairness—under any interpretation of its meaning—necessitates higher-level consideration. We analyze the factors that may guide the choice of a fairness definition or bias metric depending on the context, and we propose a framework that additionally highlights quality criteria such as accountability, accuracy, and explainability, as these play a crucial role from the perspective of individual fairness. A detailed analysis of requirements and applications in healthcare forms the basis for the development of this framework. The framework is illustrated through two examples: (i) a specific application to a predictive model for reliable lower bounds of BRCA1/2 mutation probabilities using Dempster–Shafer theory, and (ii) a more conceptual application to digital, feature-oriented healthcare twins, with the focus on bias in communication and collaboration. Throughout the article, we present a curated selection of the relevant literature at the intersection of ethics, medicine, and modern digital society.
Full article
(This article belongs to the Special Issue IoT Architecture Supported by Digital Twin: Challenges and Solutions)
Open Access Article
BIMW: Blockchain-Enabled Innocuous Model Watermarking for Secure Ownership Verification
by
Xinyun Liu and Ronghua Xu
Future Internet 2025, 17(11), 490; https://doi.org/10.3390/fi17110490 - 26 Oct 2025
Abstract
The integration of artificial intelligence (AI) and edge computing gives rise to edge intelligence (EI), which offers effective solutions to the limitations of traditional cloud-based AI; however, deploying models across distributed edge platforms raises concerns regarding authenticity, thereby necessitating robust mechanisms for ownership verification. Currently, backdoor-based model watermarking techniques represent a state-of-the-art approach for ownership verification; however, their reliance on model poisoning introduces potential security risks and unintended behaviors. To solve this challenge, we propose BIMW, a blockchain-enabled innocuous model watermarking framework that ensures secure and trustworthy AI model deployment and sharing in distributed edge computing environments. Unlike widely applied backdoor-based watermarking methods, BIMW adopts a novel innocuous model watermarking method called interpretable watermarking (IW), which embeds ownership information without compromising model integrity or functionality. In addition, BIMW integrates a blockchain security fabric to ensure the integrity and auditability of watermarked data during storage and sharing. Extensive experiments were conducted on a Jetson Orin Nano board, which simulates edge computing environments. The numerical results show that our framework outperforms baselines in terms of prediction accuracy, p-value, watermark success rate (WSR), and harmlessness H. Our framework demonstrates resilience against watermarking removal attacks while introducing only limited latency through the blockchain fabric.
Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
Open Access Article
Efficient Lightweight Image Classification via Coordinate Attention and Channel Pruning for Resource-Constrained Systems
by
Yao-Liang Chung
Future Internet 2025, 17(11), 489; https://doi.org/10.3390/fi17110489 - 25 Oct 2025
Abstract
Image classification is central to computer vision, supporting applications from autonomous driving to medical imaging, yet state-of-the-art convolutional neural networks remain constrained by heavy floating-point operations (FLOPs) and parameter counts on edge devices. To address this accuracy–efficiency trade-off, we propose a unified lightweight framework built on a pruning-aware coordinate attention block (PACB). PACB integrates coordinate attention (CA) with L1-regularized channel pruning, enriching feature representation while enabling structured compression. Applied to MobileNetV3 and RepVGG, the framework achieves substantial efficiency gains. On GTSRB, MobileNetV3 parameters drop from 16.239 M to 9.871 M (–6.37 M) and FLOPs from 11.297 M to 8.552 M (–24.3%), with accuracy improving from 97.09% to 97.37%. For RepVGG, parameters fall from 7.683 M to 7.093 M (–0.59 M) and FLOPs from 31.264 M to 27.918 M (–3.35 M), with only ~0.51% average accuracy loss across CIFAR-10, Fashion-MNIST, and GTSRB. Complexity analysis further confirms PACB does not increase asymptotic order, since the additional CA operations contribute only lightweight lower-order terms. These results demonstrate that coupling CA with structured pruning yields a scalable accuracy–efficiency trade-off under hardware-agnostic metrics, making PACB a promising, deployment-ready solution for mobile and edge applications.
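The pruning half of PACB can be sketched as the standard network-slimming recipe: add an L1 penalty on BatchNorm scale factors during training, then drop channels whose scales collapse toward zero. The tiny model, penalty strength, and threshold below are illustrative; the coordinate-attention block itself is omitted.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)

def l1_bn_penalty(model, lam=1e-4):
    """L1 on BN scale factors pushes unimportant channels toward zero."""
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules() if isinstance(m, nn.BatchNorm2d))

x = torch.randn(8, 3, 32, 32)
loss = model(x).pow(2).mean() + l1_bn_penalty(model)  # task-loss stand-in + sparsity
loss.backward()

# After training converges: keep only channels with |gamma| above a threshold.
bn = model[1]
keep = (bn.weight.abs() > 1e-2).nonzero().flatten()
print(f"keeping {keep.numel()}/32 channels")
```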
Full article
(This article belongs to the Special Issue Clustered Federated Learning for Networks)
Topics
Topic in
Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025
Topic in
Applied Sciences, Electronics, Future Internet, IoT, Technologies, Inventions, Sensors, Vehicles
Next-Generation IoT and Smart Systems for Communication and Sensing
Topic Editors: Dinh-Thuan Do, Vitor Fialho, Luis Pires, Francisco Rego, Ricardo Santos, Vasco Velez
Deadline: 31 January 2026
Topic in
Entropy, Future Internet, Healthcare, Sensors, Data
Communications Challenges in Health and Well-Being, 2nd Edition
Topic Editors: Dragana Bajic, Konstantinos Katzis, Gordana Gardasevic
Deadline: 28 February 2026
Topic in
AI, Computers, Education Sciences, Societies, Future Internet, Technologies
AI Trends in Teacher and Student Training
Topic Editors: José Fernández-Cerero, Marta Montenegro-Rueda
Deadline: 11 March 2026
Special Issues
Special Issue in
Future Internet
Information Communication Technologies and Social Media
Guest Editors: Jinpeng Chen, Ruifan Li, Kaimin Wei
Deadline: 20 November 2025
Special Issue in
Future Internet
Human-Centric Explainability in Large-Scale IoT and AI Systems
Guest Editors: Aristeidis Karras, Massimo Cafaro
Deadline: 20 November 2025
Special Issue in
Future Internet
Advances and Perspectives in Human-Computer Interaction—2nd Edition
Guest Editors: Christos Troussas, Akrivi Krouska, Cleo Sgouropoulou, Jaime Caro
Deadline: 30 November 2025
Special Issue in
Future Internet
Smart Technology: Artificial Intelligence, Robotics and Algorithms
Guest Editors: Fengyu Wang, Jincheng Dai
Deadline: 30 November 2025
Topical Collections
Topical Collection in
Future Internet
Information Systems Security
Collection Editor: Luis Javier Garcia Villalba
Topical Collection in
Future Internet
Innovative People-Centered Solutions Applied to Industries, Cities and Societies
Collection Editors: Dino Giuli, Filipe Portela
Topical Collection in
Future Internet
Featured Reviews of Future Internet Research
Collection Editor: Dino Giuli
Topical Collection in
Future Internet
Machine Learning Approaches for User Identity
Collection Editors: Kaushik Roy, Mustafa Atay, Ajita Rattani