Future Internet, Volume 17, Issue 11 (November 2025) – 50 articles

Cover Story: Bias in SDN security research has led to the security of the SDN data plane being trivialized in favor of the control plane. This review addresses that gap by comprehensively surveying existing SDN data plane security research and identifying its main themes, strengths, weaknesses, and potential directions for future research. The review demonstrates that SDN data plane security is more diverse and mature than previously supposed.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official format of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 8038 KB  
Article
Semantic Data Federated Query Optimization Based on Decomposition of Block-Level Subqueries
by Yuan Yao and Yang Zhang
Future Internet 2025, 17(11), 531; https://doi.org/10.3390/fi17110531 - 20 Nov 2025
Viewed by 180
Abstract
The digital age and the rise of Internet of Things technology have led to an explosion of data, including vast amounts of semantic data. In the context of large-scale semantic data graphs, centralized storage struggles to meet the efficiency requirements of the queries. This has led to a shift towards distributed semantic data systems. In federated semantic data systems, ensuring both query efficiency and comprehensive results is challenging because of data independence and privacy constraints. To address this, we propose a query processing framework featuring a block-level star decomposition method for generating efficient query plans, augmented by auxiliary indexes to guarantee the completeness of the results. A specialized FEDERATEDAND BY keyword is introduced for federated environments, and a partition-based parallel assembly method accelerates the result integration. Our approach demonstrably improves query efficiency and is analyzed for its potential application in energy systems. Full article
(This article belongs to the Special Issue Internet of Things Technology and Service Computing)
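As an illustration of the block-level star decomposition described above, the sketch below groups the triple patterns of a basic graph pattern by shared subject into star-shaped subqueries that could be dispatched to individual federation members. This is a deliberately simplified reading of the method; the pattern representation, grouping rule, and example patterns are assumptions rather than the authors' implementation.

```python
from collections import defaultdict

# Each triple pattern is (subject, predicate, object); variables start with "?".
# This toy decomposition groups patterns by subject, so every group forms a
# "star" that a single federation member could answer in one round trip.
def star_decompose(triple_patterns):
    stars = defaultdict(list)
    for s, p, o in triple_patterns:
        stars[s].append((s, p, o))
    return list(stars.values())

bgp = [
    ("?sensor", "rdf:type", "iot:TemperatureSensor"),
    ("?sensor", "iot:locatedIn", "?room"),
    ("?room", "iot:partOf", "?building"),
    ("?room", "rdfs:label", "?label"),
]

for i, star in enumerate(star_decompose(bgp)):
    print(f"star subquery {i}: {star}")
```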

21 pages, 653 KB  
Article
A Stateful Extension to P4THLS for Advanced Telemetry and Flow Control
by Mostafa Abbasmollaei, Tarek Ould-Bachir and Yvon Savaria
Future Internet 2025, 17(11), 530; https://doi.org/10.3390/fi17110530 - 20 Nov 2025
Viewed by 254
Abstract
Programmable data planes are increasingly essential for enabling In-band Network Telemetry (INT), fine-grained monitoring, and congestion-aware packet processing. Although the P4 language provides a high-level abstraction to describe such behaviors, implementing them efficiently on FPGA-based platforms remains challenging due to hardware constraints and limited compiler support. Building on the P4THLS framework, which leverages high-level synthesis (HLS) for FPGA data-plane programmability, this paper extends the approach by introducing support for P4-style stateful objects and a structured metadata propagation mechanism throughout the processing pipeline. These extensions enrich pipeline logic with real-time context and flow-level state, thereby facilitating advanced applications while preserving programmability. The generated codebase remains extensible and customizable, allowing developers to adapt the design to various scenarios. We implement two representative use cases to demonstrate the effectiveness of the approach: an INT-enabled forwarding engine that embeds hop-by-hop telemetry into packets and a congestion-aware switch that dynamically adapts to queue conditions. Evaluation of an AMD Alveo U280 FPGA implementation reveals that incorporating INT support adds roughly 900 LUTs and 1000 Flip-Flops relative to the baseline switch. Furthermore, the proposed meter maintains rate measurement errors below 3% at 700 Mbps and achieves up to a 5× reduction in LUT usage and a 2× reduction in Flip-Flop usage compared to existing FPGA-based stateful designs, substantially expanding the applicability of P4THLS for complex and performance-critical network functions. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks—2nd Edition)

38 pages, 4889 KB  
Article
Top-K Feature Selection for IoT Intrusion Detection: Contributions of XGBoost, LightGBM, and Random Forest
by Brou Médard Kouassi, Abou Bakary Ballo, Kacoutchy Jean Ayikpa, Diarra Mamadou and Minfonga Zié Jérôme Coulibaly
Future Internet 2025, 17(11), 529; https://doi.org/10.3390/fi17110529 - 19 Nov 2025
Viewed by 384
Abstract
The rapid growth of the Internet of Things (IoT) has created vast networks of interconnected devices that are increasingly exposed to cyberattacks. Ensuring the security of such distributed systems requires efficient and adaptive intrusion detection mechanisms. However, conventional methods face limitations in processing large and complex feature spaces. To address this issue, this study proposes an optimized intrusion detection approach based on Top-K feature selection combined with ensemble learning models, evaluated on the CICIoMT2024 dataset. Three algorithms, XGBoost, LightGBM, and Random Forest, were trained and tested on IoT datasets using three feature configurations: Top-10, Top-15, and the complete feature set. The results show that the Random Forest model provides the best balance between accuracy and computational efficiency, achieving 91.7% accuracy and an F1-score of 93% with the Top-10 subset while reducing processing time by 35%. These findings demonstrate that the Top-K selection strategy enhances the interpretability and performance of IDSs in IoT environments. Future work will extend this framework to real-time adaptive detection and edge computing integration for large-scale IoT deployments. Full article
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
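A minimal sketch of the Top-K selection strategy described above, assuming a tabular feature matrix is already loaded: a Random Forest ranks features once, the ten most important are kept, and the classifier is retrained on the reduced subset. The synthetic data, hyperparameters, and K = 10 are placeholders, not the authors' exact pipeline on CICIoMT2024.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def top_k_features(X, y, k=10):
    """Rank features with a Random Forest and keep the k most important."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1)
    rf.fit(X, y)
    return np.argsort(rf.feature_importances_)[::-1][:k]

# X, y: numeric feature matrix and attack/benign labels (placeholders).
X, y = np.random.rand(1000, 45), np.random.randint(0, 2, 1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

top10 = top_k_features(X_tr, y_tr, k=10)
clf = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1)
clf.fit(X_tr[:, top10], y_tr)
print("F1 on Top-10 subset:", f1_score(y_te, clf.predict(X_te[:, top10])))
```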

22 pages, 1557 KB  
Article
AI-Driven Damage Detection in Wind Turbines: Drone Imagery and Lightweight Deep Learning Approaches
by Ahmed Hamdi and Hassan N. Noura
Future Internet 2025, 17(11), 528; https://doi.org/10.3390/fi17110528 - 19 Nov 2025
Viewed by 352
Abstract
Wind power plays an increasingly vital role in sustainable energy production, yet the harsh environments in which turbines operate often lead to mechanical or structural degradation. Detecting such faults early is essential to reducing maintenance expenses and extending operational lifetime. In this work, we propose a deep learning-based image classification framework designed to assess turbine condition directly from drone-acquired imagery. Unlike object detection pipelines, which require locating specific damage regions, the proposed strategy focuses on recognizing global visual cues that indicate the overall turbine state. A comprehensive comparison is performed among several lightweight and transformer-based architectures, including MobileNetV3, ResNet, EfficientNet, ConvNeXt, ShuffleNet, ViT, DeiT, and DINOv2, to identify the most suitable model for real-time deployment. The MobileNetV3-Large network achieved the best trade-off between performance and efficiency, reaching 98.9% accuracy while maintaining a compact size of 5.4 million parameters. These results highlight the capability of compact CNNs to deliver accurate and efficient turbine monitoring, paving the way for autonomous, drone-based inspection solutions at the edge. Full article
(This article belongs to the Special Issue Navigation, Deployment and Control of Intelligent Unmanned Vehicles)
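The following sketch shows the kind of lightweight transfer-learning setup the abstract favors: a pretrained MobileNetV3-Large with its final layer replaced by a two-class head (e.g., healthy vs. damaged). The class count, image size, optimizer, and dummy batch are assumptions and stand in for the drone-image dataset and training loop.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Two-class turbine-state classifier (healthy vs. damaged) on MobileNetV3-Large.
model = models.mobilenet_v3_large(weights=models.MobileNet_V3_Large_Weights.DEFAULT)
model.classifier[3] = nn.Linear(model.classifier[3].in_features, 2)

# Preprocessing that real drone images would go through before inference.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# One training step on a dummy batch stands in for the real data loader.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
print("batch loss:", loss.item())
```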

12 pages, 439 KB  
Article
Advancing Conversational Text-to-SQL: Context Strategies and Model Integration with Large Language Models
by Benjamin G. Ascoli and Jinho D. Choi
Future Internet 2025, 17(11), 527; https://doi.org/10.3390/fi17110527 - 18 Nov 2025
Viewed by 430
Abstract
Conversational text-to-SQL extends the traditional single-turn SQL generation paradigm to multi-turn, dialogue-based scenarios, enabling users to pose and refine database queries interactively, and requiring models to track dialogue context over multiple user queries and system responses. Despite extensive progress in single-turn benchmarks such as Spider and BIRD, and the recent rise of large language models, conversational datasets continue to pose challenges. In this paper, we spotlight model merging as a key strategy for boosting ESM performance on CoSQL and SParC. We present a new state-of-the-art system on the CoSQL benchmark, achieved by fine-tuning CodeS-7b under two paradigms for handling conversational history: (1) full history concatenation, and (2) question rewriting via GPT-based summarization. While each paradigm alone obtains competitive results, we observe that averaging the weights of these fine-tuned models can outperform both individual variants. Our findings highlight the promise of LLM-driven multi-turn SQL generation, offering a lightweight yet powerful avenue for improving conversational text-to-SQL. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
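The model-merging step highlighted above, averaging the weights of two fine-tuned checkpoints, reduces to an element-wise interpolation of their state dictionaries. The sketch below uses toy tensors; the checkpoint contents, the interpolation coefficient, and the loading of CodeS-7b are assumptions.

```python
import torch

def average_state_dicts(state_a, state_b, alpha=0.5):
    """Uniformly interpolate two fine-tuned checkpoints with identical shapes."""
    return {k: alpha * state_a[k] + (1.0 - alpha) * state_b[k] for k in state_a}

# Hypothetical checkpoints of the same architecture fine-tuned under the two
# conversational-history paradigms (full concatenation vs. question rewriting).
ckpt_concat = {"layer.weight": torch.randn(4, 4), "layer.bias": torch.zeros(4)}
ckpt_rewrite = {"layer.weight": torch.randn(4, 4), "layer.bias": torch.ones(4)}

merged = average_state_dicts(ckpt_concat, ckpt_rewrite)
print({k: v.shape for k, v in merged.items()})
# In practice: model.load_state_dict(merged) before evaluating on CoSQL/SParC.
```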

23 pages, 3571 KB  
Article
Optimizing Cloud Service Composition with Cuckoo Optimization Algorithm for Enhanced Resource Allocation and Energy Efficiency
by Issam AlHadid, Evon Abu-Taieh, Mohammad Al Rawajbeh, Suha Afaneh, Mohammed E. Daghbosheh, Rami S. Alkhawaldeh, Sufian Khwaldeh and Ala’aldin Alrowwad
Future Internet 2025, 17(11), 526; https://doi.org/10.3390/fi17110526 - 18 Nov 2025
Viewed by 306
Abstract
The composition of cloud services plays a vital role in optimizing resource allocation, load balancing, task scheduling, and energy management. However, it remains a significant challenge due to the dynamic nature of workloads and the variability in resource demands, where addressing these challenges is essential for ensuring seamless service delivery. This research investigated the implementation of the Cuckoo Optimization Algorithm (COA) in a cloud computing environment to optimize service composition. In the proposed approach, each service was treated as an egg, where high-demand services represented the host’s original eggs, while low-demand services represented the cuckoo bird’s eggs that competed for the same resources. This implementation enabled the algorithm to balance workloads dynamically and allocate resources efficiently while optimizing load balancing, task scheduling, cost reduction, processing and response times, system stability, and energy management. The simulations were conducted using CloudSim 5.0, and the results were compared with the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms across key performance metrics. Experimental results clearly demonstrate that the COA outperformed both PSO and ACO across all evaluated metrics. The COA achieved higher efficiency in task scheduling, dynamic load balancing, and energy-aware resource allocation. It consistently maintained lower operational costs, reduced SLA violations, and achieved superior task completion and VM utilization rates. These findings underscore the COA’s potential as a robust and scalable approach for optimizing cloud service composition in dynamic and resource-constrained environments. Full article

15 pages, 1126 KB  
Article
Edge Caching Strategies for Seamless Handover in SDN-Enabled Connected Cars
by Kuljaree Tantayakul, Wasimon Panichpattanakul, Adisak Intana and Parin Sornlertlamvanich
Future Internet 2025, 17(11), 525; https://doi.org/10.3390/fi17110525 - 18 Nov 2025
Viewed by 169
Abstract
This paper presents an analytical evaluation of three strategies designed to minimize service disruption during handovers in Software-Defined Networking (SDN)-based connected car networks: SDN Mobility with a Caching Policy, cache-miss Content Delivery Network (CDN), and cache-hit CDN. The performance trade-offs inherent in each approach have been quantitatively compared, and their multi-faceted performances have been analyzed. The evaluation includes a range of Session-to-Mobility Ratios (SMR), sensitivities to core network latency and L2 handover delays, and a realistic assessment of CDN effectiveness as a function of its cache-hit ratio. Our analysis conclusively shows that the availability of content at the network edge is the key factor for ensuring seamless connectivity. The study also identifies a quantitative crossover point at a 70% cache-hit ratio, where the proactive CDN’s performance surpasses that of the reactive Caching Policy, establishing a clear target for predictive algorithms. Results confirm that while the cache-hit strategy offers the lowest disruption, the high data plane latency of a cache miss (a 100 ms delay) makes that case unsuitable for vehicular applications, and the Caching Policy is limited by a 30 ms control plane overhead. This paper provides a robust framework for future vehicular network designs, showing that proactive edge-based architectures are critical for meeting stringent connectivity requirements. Full article
(This article belongs to the Special Issue Software-Defined Networking and Network Function Virtualization)
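The reported crossover can be reproduced qualitatively with a toy expected-delay model: the proactive CDN pays either an edge hit or the 100 ms core fetch depending on its cache-hit ratio, while the reactive Caching Policy pays a fixed 30 ms control-plane overhead. Only the 100 ms and 30 ms figures come from the abstract; the edge-hit latency is an assumption, so the computed crossover only approximates the 70% value reported.

```python
# Toy model of per-handover content-retrieval delay (milliseconds).
CORE_FETCH_MS = 100.0     # cache-miss data-plane delay quoted in the abstract
CACHING_POLICY_MS = 30.0  # reactive caching-policy control-plane overhead (abstract)
EDGE_HIT_MS = 2.0         # assumed edge-delivery latency on a cache hit

def cdn_expected_delay(hit_ratio):
    """Expected retrieval delay of the proactive CDN for a given cache-hit ratio."""
    return hit_ratio * EDGE_HIT_MS + (1.0 - hit_ratio) * CORE_FETCH_MS

# Cache-hit ratio above which the CDN beats the reactive caching policy.
crossover = (CORE_FETCH_MS - CACHING_POLICY_MS) / (CORE_FETCH_MS - EDGE_HIT_MS)
print(f"CDN overtakes the caching policy above a {crossover:.0%} hit ratio")
for hit in (0.5, 0.7, 0.9):
    print(f"  hit ratio {hit:.0%}: CDN expected delay {cdn_expected_delay(hit):.1f} ms")
```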

31 pages, 4478 KB  
Article
Explainable Artificial Intelligence System for Guiding Companies and Users in Detecting and Fixing Multimedia Web Vulnerabilities on MCS Contexts
by Sergio Alloza-García, Iván García-Magariño and Raquel Lacuesta Gilaberte
Future Internet 2025, 17(11), 524; https://doi.org/10.3390/fi17110524 - 17 Nov 2025
Viewed by 203
Abstract
In the evolving landscape of Mobile Crowdsourcing (MCS), ensuring the security and privacy of both stored and transmitted multimedia content has become increasingly challenging. Factors such as human mobility, device heterogeneity, dynamic topologies, and data diversity exacerbate the complexity of addressing these concerns effectively. To tackle these challenges, this paper introduces CSXAI (Crowdsourcing eXplainable Artificial Intelligence)—a novel explainable AI system designed to proactively assess and communicate the security status of multimedia resources downloaded in MCS environments. While CSXAI integrates established attack detection techniques, its primary novelty lies in its synthesis of these methods with a user-centric XAI framework tailored for the specific challenges of MCS frameworks. CSXAI intelligently analyzes potential vulnerabilities and threat scenarios by evaluating website context, attack impact, and user-specific characteristics. The current implementation focuses on the detection and explanation of three major web vulnerability classes: Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), and insecure File Upload. The proposed system not only detects digital threats in advance but also adapts its explanations to suit both technical and non-technical users, thereby enabling informed decision-making before users access potentially harmful content. Furthermore, the system offers actionable security recommendations through clear, tailored explanations, enhancing users’ ability to implement protective measures across diverse devices. The results from real-world testing suggest a notable improvement in users’ ability to understand and mitigate security risks in MCS environments. By combining proactive vulnerability detection with user-adaptive, explainable feedback, the CSXAI framework shows promise in empowering users to enhance their security posture effectively, even with minimal cybersecurity expertise. These findings underscore the potential of CSXAI as a reliable and accessible solution for tackling cybersecurity challenges in dynamic, multimedia-driven ecosystems. Quantitative results showed high user satisfaction and interpretability (SUS = 79.75 ± 6.40; USE subscales = 5.32–5.88). Full article

22 pages, 915 KB  
Article
Performance Evaluation Methodology for Federated XR Network Digital Twins in AI-Aware 6G Networks
by Xavier Calle-Heredia and Xavier Hesselbach
Future Internet 2025, 17(11), 523; https://doi.org/10.3390/fi17110523 - 17 Nov 2025
Viewed by 294
Abstract
Network digital twins (NDTs) are emerging as key enablers of 6G networks integrating artificial intelligence (AI) techniques. NDT systems offer novel features, including real-time monitoring, simulation, enhanced network planning, autonomous management, and seamless integration with emerging technologies such as extended reality (XR). When NDTs converge with XR, they can be customized with additional interactive services that are not available in the original network. In this work, AI strategies are applied to a set of XR functions within federated NDTs. While existing NDT approaches follow a one-to-one (1:1) model, where a single NDT instance is deployed from an original network, the one-to-many (1:N) federation model requires the orchestration of multiple XR-tailored NDT instances. The federation of NDTs can be applied across diverse 6G use cases, including telemedicine, UAV management, Industry 4.0, and the remote driving of complex vehicles. Ensuring the optimal operation of the NDT federation requires a methodology tailored to the requirements of each use case. This paper introduces a score-based performance analysis to quantify the benefits achieved through NDT federation. Unlike existing models for digital twin (DT) federation, this paper introduces a KPI-based rational model that quantifies the trade-off between federation benefits and the associated operational complexity. A mathematical analysis is performed to validate the consistency of the score formula both in general terms and within the context of each specific use case. Full article
(This article belongs to the Special Issue Advances in Smart Environments and Digital Twin Technologies)

21 pages, 4379 KB  
Article
ReHAb Playground: A DL-Based Framework for Game-Based Hand Rehabilitation
by Samuele Rasetto, Giorgia Marullo, Ludovica Adamo, Federico Bordin, Francesca Pavesi, Chiara Innocente, Enrico Vezzetti and Luca Ulrich
Future Internet 2025, 17(11), 522; https://doi.org/10.3390/fi17110522 - 17 Nov 2025
Viewed by 466
Abstract
Hand rehabilitation requires consistent, repetitive exercises that can often reduce patient motivation, especially in home-based therapy. This study introduces ReHAb Playground, a deep learning-based system that merges real-time gesture recognition with 3D hand tracking to create an engaging and adaptable rehabilitation experience built in the Unity Game Engine. The system utilizes a YOLOv10n model for hand gesture classification and MediaPipe Hands for 3D hand landmark extraction. Three mini-games were developed to target specific motor and cognitive functions: Cube Grab, Coin Collection, and Simon Says. Key gameplay parameters, namely repetitions, time limits, and gestures, can be tuned according to therapeutic protocols. Experiments with healthy participants were conducted to establish reference performance ranges based on average completion times and standard deviations. The results showed a consistent decrease in both task completion and gesture times across trials, indicating learning effects and improved control of gesture-based interactions. The most pronounced improvement was observed in the more complex Coin Collection task, confirming the system’s ability to support skill acquisition and engagement in rehabilitation-oriented activities. ReHAb Playground was conceived with modularity and scalability at its core, enabling the seamless integration of additional exercises, gesture libraries, and adaptive difficulty mechanisms. While preliminary, the findings highlight its promise as an accessible, low-cost rehabilitation platform suitable for home use, capable of monitoring motor progress over time and enhancing patient adherence through engaging, game-based interactions. Future developments will focus on clinical validation with patient populations and the implementation of adaptive feedback strategies to further personalize the rehabilitation process. Full article
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)

25 pages, 12497 KB  
Article
Hybrid Sensor Fusion Beamforming for UAV mmWave Communication
by Yuya Sugimoto and Gia Khanh Tran
Future Internet 2025, 17(11), 521; https://doi.org/10.3390/fi17110521 - 17 Nov 2025
Viewed by 442
Abstract
Resilient autonomous inter-Unmanned Aerial Vehicle (UAV) communication is critical for applications like drone swarms. While conventional Global Navigation Satellite System (GNSS)-based beamforming is effective at long ranges, it suffers from significant pointing errors at close range due to latency, low update rates, and the inherent GNSS positioning error. To overcome these limitations, this paper proposes a novel hybrid beamforming system that enhances resilience by adaptively switching between two methods. For short-range operations, our system leverages Light Detection and Ranging (LiDAR)–camera sensor fusion for high-accuracy, low-latency UAV tracking, enabling precise millimeter-wave (mmWave) beamforming. For long-range scenarios beyond the camera’s detection limit, it intelligently switches to a GNSS-based method. The switching threshold is determined by considering both the sensor’s effective range and the pointing errors caused by GNSS latency and UAV velocity. Simulations conducted in a realistic urban model demonstrate that our hybrid approach compensates for the weaknesses of each individual method. It maintains a stable, high-throughput link across a wide range of distances, achieving superior performance and resilience compared to systems relying on a single tracking method. This paves the way for advanced autonomous drone network operations in dynamic environments. Full article
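A back-of-the-envelope version of the switching logic is sketched below: the lateral offset due to GNSS positioning error plus target motion during the GNSS update latency is converted into a pointing error at a given range, and sensor fusion is used whenever the target lies within the camera's effective range. All numeric parameters are illustrative assumptions, not values from the paper.

```python
import math

# Illustrative parameters (assumptions, not values from the paper).
GNSS_POS_ERROR_M = 3.0     # inherent GNSS positioning error
GNSS_LATENCY_S = 0.5       # age of the most recent GNSS fix
UAV_SPEED_MPS = 15.0       # target UAV velocity
BEAM_HALF_WIDTH_DEG = 5.0  # half-beamwidth of the mmWave array
CAMERA_RANGE_M = 60.0      # effective LiDAR-camera detection range

def gnss_pointing_error_deg(distance_m):
    """Worst-case angular error when steering the beam from a stale GNSS fix."""
    lateral_offset = GNSS_POS_ERROR_M + UAV_SPEED_MPS * GNSS_LATENCY_S
    return math.degrees(math.atan2(lateral_offset, distance_m))

def choose_tracker(distance_m):
    """Sensor fusion inside camera range; GNSS-based beamforming beyond it."""
    return "LiDAR-camera fusion" if distance_m <= CAMERA_RANGE_M else "GNSS-based"

for d in (20, 60, 120, 300):
    err = gnss_pointing_error_deg(d)
    ok = "within beam" if err <= BEAM_HALF_WIDTH_DEG else "outside beam"
    print(f"{d:4d} m: GNSS pointing error {err:4.1f} deg ({ok}) -> {choose_tracker(d)}")
```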

18 pages, 2540 KB  
Article
HEXADWSN: Explainable Ensemble Framework for Robust and Energy-Efficient Anomaly Detection in WSNs
by Rahul Mishra, Sudhanshu Kumar Jha, Shiv Prakash and Rajkumar Singh Rathore
Future Internet 2025, 17(11), 520; https://doi.org/10.3390/fi17110520 - 15 Nov 2025
Viewed by 256
Abstract
Wireless Sensor Networks (WSNs) play a decisive role in various monitoring and control systems. However, their distributed and resource-constrained nature makes them vulnerable to anomalies caused by factors such as environmental noise, sensor faults, and cyber intrusions. In this paper, HEXADWSN, a hybrid ensemble-learning-based explainable anomaly detection framework that improves reliability and interpretability in WSNs, is proposed. The proposed framework integrates an ensemble learning approach using Autoencoders, Isolation Forests, and One-Class SVMs to achieve robust detection of time-series-based irregularities in the Intel Lab dataset. The framework uses stacking and voting ensemble learning. The stacking ensemble achieved the highest overall performance, indicating strong effectiveness in detecting varied anomaly patterns. The voting ensemble demonstrated moderate results and offered a balance between detection rate and computation, whereas LSTM, which is efficient at capturing temporal dependencies, exhibited relatively low performance on the processed dataset. SHAP, LIME, and Permutation Feature Importance techniques are employed for model explainability. These techniques offer insights into feature relevance and anomalies at global and local levels. The framework also measures the mean energy consumption for anomalous and normal data. The interpretability results identified that temperature, humidity, and voltage are the most influential features. HEXADWSN establishes a scalable and explainable foundation for anomaly detection in WSNs, striking a balance between accuracy, interpretability, and energy management insights. Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
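A condensed sketch of the ensemble idea follows: an Isolation Forest, a One-Class SVM, and a small reconstruction network (standing in for the autoencoder) each flag suspicious (temperature, humidity, voltage) readings, and a majority vote combines them. The toy data, thresholds, and plain voting are assumptions; they approximate, but do not reproduce, the HEXADWSN stacking and voting ensembles.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Toy (temperature, humidity, voltage) readings with a few injected anomalies.
normal = rng.normal([22.0, 40.0, 2.7], [0.5, 2.0, 0.05], size=(500, 3))
anomalies = rng.normal([35.0, 10.0, 2.1], [1.0, 2.0, 0.05], size=(10, 3))
X = np.vstack([normal, anomalies])

iso = IsolationForest(random_state=0).fit(normal)
ocsvm = OneClassSVM(nu=0.05).fit(normal)
# A plain MLP trained to reconstruct its input stands in for the autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000, random_state=0).fit(normal, normal)
ae_err = ((ae.predict(X) - X) ** 2).mean(axis=1)
ae_flag = ae_err > np.percentile(ae_err[: len(normal)], 99)

votes = (iso.predict(X) == -1).astype(int) + (ocsvm.predict(X) == -1).astype(int) + ae_flag.astype(int)
print("flagged as anomalous (majority vote):", np.where(votes >= 2)[0][-10:])
```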

17 pages, 756 KB  
Article
A DLT-Aware Performance Evaluation Framework for Virtual-Core Speedup Modeling
by Zile Xiang and Thomas G. Robertazzi
Future Internet 2025, 17(11), 519; https://doi.org/10.3390/fi17110519 - 14 Nov 2025
Viewed by 343
Abstract
Scheduling computing is a well-studied area focused on improving task execution by reducing processing time and increasing system efficiency. Divisible Load Theory (DLT) provides a structured analytical framework for distributing partitionable computational and communicational loads across processors, and its adaptability has allowed researchers to integrate it with other models and modern technologies. Building on this foundation, previous studies have shown that Amdahl-like laws can be effectively combined with DLT to produce more realistic performance models. This paper develops analytical models that further extend this integration by incorporating Gustafson’s Law and Juurlink’s Law into DLT to capture broader scaling behaviors. It also extends the analysis to workload distribution in virtual multicore systems, providing a more structured basis for evaluating parallel performance. Methods include analytically computing speedup as a function of the number of cores and the parallelizable fraction under different scheduling strategies, with comparisons across workload conditions. Results show that combining DLT with speedup laws and virtual core design offers a deeper and more structured approach for analytical parallel system evaluation. While the analysis remains theoretical, the proposed framework establishes a mathematical foundation for future empirical validation, heterogeneous workload modeling, and sensitivity analysis. Full article
(This article belongs to the Special Issue Parallel and Distributed Systems)
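The classical speedup laws that the paper combines with DLT can be stated directly; the functions below are the standard textbook forms of Amdahl's and Gustafson's laws, evaluated for an illustrative parallelizable fraction (Juurlink's generalization is omitted here).

```python
def amdahl_speedup(p, n):
    """Amdahl's law: fixed workload, parallel fraction p, n cores."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson's law: workload scaled with the core count."""
    return (1.0 - p) + p * n

p = 0.9  # illustrative parallelizable fraction
for n in (2, 8, 32, 128):
    print(f"n={n:3d}  Amdahl {amdahl_speedup(p, n):6.2f}  Gustafson {gustafson_speedup(p, n):7.2f}")
```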

17 pages, 3261 KB  
Article
Scalable Generation of Synthetic IoT Network Datasets: A Case Study with Cooja
by Hrant Khachatrian, Aram Dovlatyan, Greta Grigoryan and Theofanis P. Raptis
Future Internet 2025, 17(11), 518; https://doi.org/10.3390/fi17110518 - 13 Nov 2025
Viewed by 416
Abstract
Predicting the behavior of Internet of Things (IoT) networks under irregular topologies and heterogeneous battery conditions remains a significant challenge. Simulation tools can capture these effects but can require high manual effort and computational capacity, motivating the use of machine learning surrogates. This work introduces an automated pipeline for generating large-scale IoT network datasets by bringing together the Contiki-NG firmware, parameterized topology generation, and Slurm-based orchestration of Cooja simulations. The system supports a variety of network structures, scalable node counts, randomized battery allocations, and routing protocols to reproduce diverse failure modes. As a case study, we conduct over 10,000 Cooja simulations with 15–75 battery-powered motes arranged in sparse grid topologies and operating the RPL routing protocol, consuming 1300 CPU-hours in total. The simulations capture realistic failure modes, including unjoined nodes despite physical connectivity and cascading disconnects caused by battery depletion. The resulting graph-structured datasets are used for two prediction tasks: (1) estimating the last successful message delivery time for each node and (2) predicting network-wide spatial coverage. Graph neural network models trained on these datasets outperform baseline regression models and topology-aware heuristics while evaluating substantially faster than full simulations. The proposed framework provides a reproducible foundation for data-driven analysis of energy-limited IoT networks. Full article
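A minimal node-level regressor of the kind that could be trained on such graph-structured datasets is sketched below with PyTorch Geometric: a two-layer GCN predicts a per-node last-delivery time from node features. The feature choice, architecture, and random toy graph are assumptions, not the authors' models.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class DeliveryTimeGCN(torch.nn.Module):
    def __init__(self, in_dim=3, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)  # last successful delivery time

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        return self.head(h).squeeze(-1)

# Toy graph: 15 motes with (battery level, x, y) features and random links.
x = torch.rand(15, 3)
edge_index = torch.randint(0, 15, (2, 40))
y = torch.rand(15) * 3600.0  # target delivery times in seconds
data = Data(x=x, edge_index=edge_index, y=y)

model = DeliveryTimeGCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(data), data.y)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```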

33 pages, 2522 KB  
Article
Ontology-Driven Multi-Agent System for Cross-Domain Art Translation
by Viktor Matanski, Anton Iliev, Nikolay Kyurkchiev and Todorka Terzieva
Future Internet 2025, 17(11), 517; https://doi.org/10.3390/fi17110517 - 12 Nov 2025
Viewed by 523
Abstract
Generative models can generate art within a single modality with high fidelity. However, translating a work of art from one domain to another (e.g., painting to music or poem to painting) in a meaningful way remains a longstanding, interdisciplinary challenge. We propose a novel approach combining a multi-agent system (MAS) architecture with an ontology-guided semantic representation to achieve cross-domain art translation while preserving the original artwork’s meaning and emotional impact. In our concept, specialized agents decompose the task: a Perception Agent extracts symbolic descriptors from the source artwork, a Translation Agent maps these descriptors using a shared knowledge base, a Generator Agent creates the target-modality artwork, and a Curator Agent evaluates and refines the output for coherence and style alignment. This modular design, inspired by human creative workflows, allows complex artistic concepts (themes, moods, motifs) to carry over across modalities in a consistent and interpretable way. We implemented a prototype supporting translations between painting and poetry, leveraging state-of-the-art generative models. Preliminary results indicate that our ontology-driven MAS produces cross-domain translations that preserve key semantic elements and affective tone of the input, offering a new path toward explainable and controllable creative AI. Finally, we discuss a case study and potential applications, from educational tools to synesthetic VR experiences, and outline future research directions for enhancing the realm of intelligent agents. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

22 pages, 1664 KB  
Article
A Blockchain-Enabled Decentralized Zero-Trust Architecture for Anomaly Detection in Satellite Networks via Post-Quantum Cryptography and Federated Learning
by Sridhar Varadala and Hao Xu
Future Internet 2025, 17(11), 516; https://doi.org/10.3390/fi17110516 - 12 Nov 2025
Viewed by 314
Abstract
The rapid expansion of satellite networks for advanced communication and space exploration has made robust cybersecurity for inter-satellite links a critical challenge. Traditional security models that rely on centralized trust authorities and node-specific protections are no longer sufficient, particularly when system failures or attacks affect groups of satellites or agent clusters. To address this problem, we propose a blockchain-enabled decentralized zero-trust model based on post-quantum cryptography (BEDZTM-PQC) to improve the security of satellite communications via continuous authentication and anomaly detection. This model introduces a group-based security framework, where satellite teams operate under a zero-trust architecture (ZTA) enforced by blockchain smart contracts and threshold cryptographic mechanisms. Each group shares the responsibility for local anomaly detection and policy enforcement while maintaining decentralized coordination through hierarchical federated learning, allowing for collaborative model training without centralizing sensitive telemetry data. A post-quantum cryptography (PQC) algorithm is employed to future-proof communication and authentication protocols against quantum computing threats. Furthermore, the system enhances network reliability by incorporating redundant communication channels, consensus-based anomaly validation, and group trust scoring, thus eliminating single points of failure at both the node and team levels. The proposed BEDZTM-PQC is implemented in MATLAB, and its performance is evaluated using key metrics, including accuracy, latency, security robustness, trust management, anomaly detection accuracy, performance scalability, and security rate with respect to different numbers of input satellite users. Full article

27 pages, 548 KB  
Article
Social Engineering with AI
by Alexandru-Raul Matecas, Peter Kieseberg and Simon Tjoa
Future Internet 2025, 17(11), 515; https://doi.org/10.3390/fi17110515 - 12 Nov 2025
Viewed by 577
Abstract
The new availability of powerful Artificial Intelligence (AI) as an everyday copilot has instigated a new wave of attack techniques, especially in the area of Social Engineering (SE). The possibility of generating a multitude of different templates within seconds in order to carry out an SE attack lowers the entry barrier for potential threat actors. Still, the question remains whether this can be done using openly available tools without specialized expert skill sets on the attacker side, and how these tools compare to each other. This paper conducts three experiments based on a blueprint from a real-world CFO fraud attack, which utilized two of the most common social engineering attacks, phishing and vishing, and investigates the success rate of these SE attacks when carried out with different available LLMs. The third experiment centers around the training of an AI-powered chatbot to act as a social engineer and gather sensitive information from interacting users. As this work focuses on the offensive side of SE, all conducted experiments return promising results, demonstrating not only the ability and effectiveness of AI technology to act unethically, but also that it does so with little to no built-in restrictions. Based on a reflection on the findings and potential countermeasures available, this research provides a deeper understanding of the development and deployment of AI-enhanced SE attacks, further highlighting potential dangers, as well as mitigation methods against this “upgraded” type of threat. Full article
(This article belongs to the Special Issue Securing Artificial Intelligence Against Attacks)

16 pages, 4127 KB  
Article
Dynamic Topology Reconfiguration for Energy-Efficient Operation in 5G NR IAB Systems
by Vitalii Beschastnyi, Uliana Morozova, Egor Machnev, Darya Ostrikova, Yuliya Gaidamaka and Konstantin Samouylov
Future Internet 2025, 17(11), 514; https://doi.org/10.3390/fi17110514 - 10 Nov 2025
Viewed by 260
Abstract
The utilization of high millimeter wave (mmWave, 30–100 GHz) bands in 5G New Radio (NR) systems and sub-terahertz (sub-THz, 100–300 GHz) bands in future 6G requires dense deployments of base stations (BSs) to provide uninterrupted connectivity to the users. 3GPP Integrated Access and Backhaul (IAB) deployments that utilize wireless relay nodes offer cost-efficient densification options for these systems. However, the infrastructure that is often scaled and deployed for busy-hour traffic conditions is not used efficiently during periods when traffic demands are lower, resulting in excessive power consumption. In this work, we consider the IAB roadside deployment option and demonstrate that a deployment designed to meet traffic demands during busy-hour conditions can be efficiently controlled to provide large power savings during other times of the day. To demonstrate the feasibility of the solution, we utilize the tools of stochastic geometry and queuing theory. Our numerical results show that the dynamic switching of IAB nodes may lead to power savings of up to 40% depending on the traffic and deployment specifics. The proposed methodology also allows us to maintain the specified upper bound on the transit delay and improve the utilization of active IAB nodes. Full article
(This article belongs to the Special Issue Intelligent Telecommunications Mobile Networks)

23 pages, 1693 KB  
Article
Machine Learning Pipeline for Early Diabetes Detection: A Comparative Study with Explainable AI
by Yas Barzegar, Atrin Barzegar, Francesco Bellini, Fabrizio D'Ascenzo, Irina Gorelova and Patrizio Pisani
Future Internet 2025, 17(11), 513; https://doi.org/10.3390/fi17110513 - 10 Nov 2025
Viewed by 340
Abstract
The use of Artificial Intelligence (AI) in healthcare has significantly advanced early disease detection, enabling timely diagnosis and improved patient outcomes. This work proposes an end-to-end, data-quality-driven machine learning (ML) pipeline for predicting diabetes that follows key steps including advanced preprocessing with KNN imputation, intelligent feature selection, class imbalance handling with a hybrid SMOTEENN approach, and multi-model classification. We rigorously compared nine ML classifiers, including ensemble approaches (Random Forest, CatBoost, XGBoost), Support Vector Machines (SVM), and Logistic Regression (LR), for the prediction of diabetes disease. We evaluated performance on specificity, accuracy, recall, precision, and F1-score to assess generalizability and robustness. We employed SHapley Additive exPlanations (SHAP) for explainability, ranking and identifying the most influential clinical risk factors. SHAP analysis identified glucose levels as the dominant predictor, followed by BMI and age, providing clinically interpretable risk factors that align with established medical knowledge. Results indicate that ensemble models achieve the highest performance, with CatBoost performing best: it achieved an ROC-AUC of 0.972, an accuracy of 0.968, and an F1-score of 0.971. The model was successfully validated on two larger datasets (CDC BRFSS and a 130-hospital dataset), confirming its generalizability. This data-driven design provides a reproducible platform for applying useful and interpretable ML models in clinical practice as a primary application for future Internet-of-Things-based smart healthcare systems. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things, 3rd Edition)
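The resampling and explainability stages of such a pipeline can be sketched with imbalanced-learn and SHAP on a stand-in dataset, as below. A Random Forest substitutes for the CatBoost model reported as best, and the synthetic data and parameters are assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imblearn.combine import SMOTEENN
import shap

# Stand-in for a diabetes dataset: 8 clinical features, imbalanced classes.
X, y = make_classification(n_samples=2000, n_features=8, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Hybrid over/under-sampling (SMOTE + Edited Nearest Neighbours), on the training split only.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_res, y_res)
print("test accuracy:", round(clf.score(X_te, y_te), 3))

# SHAP values explain which features drive individual predictions.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te[:100])
print("SHAP values computed for 100 test rows, shape:", np.shape(shap_values))
```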

31 pages, 4999 KB  
Article
TrustFed-CTI: A Trust-Aware Federated Learning Framework for Privacy-Preserving Cyber Threat Intelligence Sharing Across Distributed Organizations
by Manel Mrabet
Future Internet 2025, 17(11), 512; https://doi.org/10.3390/fi17110512 - 10 Nov 2025
Viewed by 620
Abstract
The rapid evolution of cyber threats requires intelligence sharing between organizations while ensuring data privacy and contributor credibility. Existing centralized cyber threat intelligence (CTI) systems suffer from single points of failure, privacy concerns, and vulnerability to adversarial manipulation. This paper introduces TrustFed-CTI, a novel trust-aware federated learning framework designed for privacy-preserving CTI collaboration across distributed organizations. The framework integrates a dynamic reputation-based trust scoring system to evaluate member reliability, along with differential privacy and secure multi-party computation to safeguard sensitive information. A trust-weighted model aggregation mechanism further mitigates the impact of adversarial participants. A context-aware trust engine continuously monitors the consistency of threat patterns, authenticity of data sources, and contribution quality to dynamically adjust trust scores. Extensive experiments on practical datasets including APT campaign reports, MITRE ATT&CK indicators, and honeypot logs demonstrate a 22.6% improvement in detection accuracy, 28% faster convergence, and robust resistance to up to 35% malicious participants. The proposed framework effectively addresses critical vulnerabilities in decentralized CTI collaboration, offering a scalable and privacy-preserving mechanism for secure intelligence sharing without compromising organizational autonomy. Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
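The trust-weighted aggregation at the core of such a framework reduces, in its simplest form, to a weighted mean of client updates with low-reputation clients excluded. The sketch below uses toy update vectors and trust scores; the threshold and weighting rule are assumptions, not the paper's reputation engine.

```python
import numpy as np

def trust_weighted_aggregate(client_updates, trust_scores, min_trust=0.2):
    """Average client model updates, weighting each by its trust score and
    dropping clients whose reputation falls below a minimum threshold."""
    updates = np.asarray(client_updates, dtype=float)
    trust = np.asarray(trust_scores, dtype=float)
    keep = trust >= min_trust
    weights = trust[keep] / trust[keep].sum()
    return (weights[:, None] * updates[keep]).sum(axis=0)

# Three honest clients and one low-trust poisoned update on a 4-parameter model.
updates = [[0.10, -0.20, 0.05, 0.30],
           [0.12, -0.18, 0.07, 0.28],
           [0.09, -0.22, 0.04, 0.31],
           [5.00,  5.00, 5.00, 5.00]]   # adversarial outlier
trust = [0.9, 0.8, 0.85, 0.1]
print("aggregated update:", trust_weighted_aggregate(updates, trust).round(3))
```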

24 pages, 598 KB  
Article
Privacy Concerns in ChatGPT Data Collection and Its Impact on Individuals
by Leena Mohammad Alzamil, Alawiayyah Mohammed Alhasani and Suhair Alshehri
Future Internet 2025, 17(11), 511; https://doi.org/10.3390/fi17110511 - 10 Nov 2025
Viewed by 1426
Abstract
With the rapid adoption of generative AI technologies across various sectors, it has become increasingly important to understand how these systems handle personal data. The study examines users’ awareness of the types of data collected, the risks involved, and their implications for privacy and security. A comprehensive literature review was conducted to contextualize the ethical, technical, and regulatory challenges associated with generative AI, followed by a pilot survey targeting ChatGPT users from a variety of demographics. The results of the study revealed a significant gap in users’ understanding of data practices, with many participants expressing concerns about unauthorized access to data, prolonged data retention, and a lack of transparency. Despite recognizing the benefits of ChatGPT in various applications, users expressed strong demands for greater control over their data, clearer consent mechanisms, and more transparent communication from developers. The study concludes by emphasizing the need for multi-dimensional solutions that combine technological innovation, regulatory reform, and user-centered design. Recommendations include implementing explainable AI, enhancing educational efforts, adopting privacy-by-design principles, and establishing robust governance frameworks. By addressing these challenges, developers, policymakers, and stakeholders can enhance trust, promote ethical AI deployment, and ensure that generative AI systems serve the public good while respecting individual rights and privacy. Full article

19 pages, 6992 KB  
Article
AI-Based Proactive Maintenance for Cultural Heritage Conservation: A Hybrid Neuro-Fuzzy Approach
by Otilia Elena Dragomir and Florin Dragomir
Future Internet 2025, 17(11), 510; https://doi.org/10.3390/fi17110510 - 5 Nov 2025
Viewed by 766
Abstract
Cultural heritage conservation faces escalating challenges from environmental threats and resource constraints, necessitating innovative preservation strategies that balance predictive accuracy with interpretability. This study presents a hybrid neuro-fuzzy framework addressing critical gaps in heritage conservation practice through sequential integration of feedforward neural networks (FF-NNs) and Mamdani-type fuzzy inference systems (MFISs). The system processes multi-sensor data (temperature, vibration, pressure) through a two-stage architecture: an FF-NN for pattern recognition and an MFIS for interpretable decision-making. Evaluation on 1000 synthetic heritage building monitoring samples (70% training, 30% testing) demonstrates mean accuracy of 94.3% (±0.62%), precision of 92.3% (±0.78%), and recall of 90.3% (±0.70%) across five independent runs. Feature importance analysis reveals temperature as the dominant fault detection driver (60.6% variance contribution), followed by pressure (36.7%), while vibration contributes negatively (−2.8%). The hybrid architecture overcomes the accuracy–interpretability trade-off inherent in standalone approaches: while the FF-NN achieves superior fault detection, the MFIS provides transparent maintenance recommendations essential for conservation professional validation. However, comparative analysis reveals that rigid fuzzy rule structures constrain detection capabilities for borderline cases, reducing recall from 96% (standalone FF-NN) to 47% (hybrid system) in fault-dominant scenarios. This limitation highlights the need for adaptive fuzzy integration mechanisms in safety-critical heritage applications. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))

14 pages, 1602 KB  
Article
Frame and Utterance Emotional Alignment for Speech Emotion Recognition
by Seounghoon Byun and Seok-Pil Lee
Future Internet 2025, 17(11), 509; https://doi.org/10.3390/fi17110509 - 5 Nov 2025
Viewed by 574
Abstract
Speech Emotion Recognition (SER) is important for applications such as Human–Computer Interaction (HCI) and emotion-aware services. Traditional SER models rely on utterance-level labels, aggregating frame-level representations through pooling operations. However, emotional states can vary across frames within an utterance, making it difficult for models to learn consistent and robust representations. To address this issue, we propose two auxiliary loss functions, Emotional Attention Loss (EAL) and Frame-to-Utterance Alignment Loss (FUAL). The proposed approach uses a Classification token (CLS) self-attention pooling mechanism, where the CLS summarizes the entire utterance sequence. EAL encourages frames of the same emotion to align closely with the CLS while separating frames of different classes, and FUAL enforces consistency between frame-level and utterance-level predictions to stabilize training. Model training proceeds in two stages: Stage 1 fine-tunes the wav2vec 2.0 backbone with Cross-Entropy (CE) loss to obtain stable frame embeddings, and stage 2 jointly optimizes CE, EAL and FUAL within the CLS-based pooling framework. Experiments on the IEMOCAP four-class dataset demonstrate that our method consistently outperforms baseline models, showing that the proposed losses effectively address representation inconsistencies and improve SER performance. This work advances Artificial Intelligence by improving the ability of models to understand human emotions through speech. Full article

33 pages, 7618 KB  
Article
Data-Driven Predictive Analytics for Dynamic Aviation Systems: Optimising Fleet Maintenance and Flight Operations Through Machine Learning
by Elmin Marevac, Esad Kadušić, Natasa Živić, Dženan Hamzić and Narcisa Hadžajlić
Future Internet 2025, 17(11), 508; https://doi.org/10.3390/fi17110508 - 4 Nov 2025
Viewed by 1348
Abstract
The aviation industry operates as a complex, dynamic system generating vast volumes of data from aircraft sensors, flight schedules, and external sources. Managing this data is critical for mitigating disruptive and costly events such as mechanical failures and flight delays. This paper presents a comprehensive application of predictive analytics and machine learning to enhance aviation safety and operational efficiency. We address two core challenges: predictive maintenance of aircraft engines and forecasting flight delays. For maintenance, we utilise NASA’s C-MAPSS simulation dataset to develop and compare models, including one-dimensional convolutional neural networks (1D CNNs) and long short-term memory networks (LSTMs), for classifying engine health status and predicting the Remaining Useful Life (RUL), achieving classification accuracy up to 97%. For operational efficiency, we analyse historical flight data to build regression models for predicting departure delays, identifying key contributing factors such as airline, origin airport, and scheduled time. Our methodology highlights the critical role of Exploratory Data Analysis (EDA), feature selection, and data preprocessing in managing high-volume, heterogeneous data sources. The results demonstrate the significant potential of integrating these predictive models into aviation Business Intelligence (BI) systems to transition from reactive to proactive decision-making. The study concludes by discussing the integration challenges within existing data architectures and the future potential of these approaches for optimising complex, networked transportation systems. Full article
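A compact LSTM of the kind commonly used for C-MAPSS RUL estimation is sketched below in Keras, trained on dummy tensors shaped like windowed sensor sequences. The window length, sensor count, layer sizes, and RUL cap are assumptions rather than the configuration evaluated in the paper.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, SENSORS = 30, 14  # assumed sliding-window length and sensor channels

model = keras.Sequential([
    layers.Input(shape=(WINDOW, SENSORS)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(1, activation="relu"),  # Remaining Useful Life (cycles)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Dummy data in place of windowed C-MAPSS sensor sequences and RUL targets.
X = np.random.rand(256, WINDOW, SENSORS).astype("float32")
y = np.random.randint(0, 130, size=(256, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```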

28 pages, 1341 KB  
Article
Distributing Quantum Computations, Shot-Wise
by Giuseppe Bisicchia, Giuseppe Clemente, Jose Garcia-Alonso, Juan Manuel Murillo, Massimo D’Elia and Antonio Brogi
Future Internet 2025, 17(11), 507; https://doi.org/10.3390/fi17110507 - 4 Nov 2025
Viewed by 549
Abstract
NISQ (Noisy Intermediate-Scale Quantum) era constraints, namely high sensitivity to noise and limited qubit counts, impose significant barriers on the usability of QPU (Quantum Processing Unit) capabilities. To overcome these challenges, researchers are exploring methods to maximize the utility of existing QPUs despite their limitations. Building upon the idea that the execution of a quantum circuit’s shots does not need to be treated as a singular monolithic unit, we propose a methodological framework, termed shot-wise, which enables the distribution of shots for a single circuit across multiple QPUs. Our framework features customizable policies to adapt to various scenarios. Additionally, it introduces a calibration method to pre-evaluate the accuracy and reliability of each QPU’s output before the actual distribution process and an incremental execution mechanism for dynamically managing the shot allocation and policy updates. Such an approach enables flexible and fine-grained management of the distribution process, taking into account various user-defined constraints and (contrasting) objectives. Demonstration results show that shot-wise distribution consistently and significantly improves the execution performance, with no significant drawbacks and with additional qualitative advantages. Overall, the shot-wise methodology improves result stability and often outperforms single QPU runs, offering a robust and flexible approach to managing variability in quantum computing. Full article
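The basic shot-wise mechanics, splitting one circuit's shots across several QPUs and merging the returned histograms, can be sketched as below. The proportional split and the fake per-QPU results are assumptions; the paper's calibration, policies, and incremental execution are not modeled.

```python
from collections import Counter

def split_shots(total_shots, weights):
    """Distribute shots across QPUs proportionally to per-QPU weights,
    assigning any rounding remainder to the highest-weighted QPU."""
    total_w = sum(weights.values())
    alloc = {q: int(total_shots * w / total_w) for q, w in weights.items()}
    best = max(weights, key=weights.get)
    alloc[best] += total_shots - sum(alloc.values())
    return alloc

def merge_counts(per_qpu_counts):
    """Merge measurement histograms returned by each QPU into one result."""
    merged = Counter()
    for counts in per_qpu_counts:
        merged.update(counts)
    return dict(merged)

alloc = split_shots(1000, {"qpu_a": 0.5, "qpu_b": 0.3, "qpu_c": 0.2})
print("allocation:", alloc)
# Fake per-QPU histograms stand in for real backend executions.
results = [{"00": 400, "11": 100}, {"00": 250, "11": 50}, {"00": 150, "11": 50}]
print("merged counts:", merge_counts(results))
```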

16 pages, 793 KB  
Article
Zero-Copy Messaging: Low-Latency Inter-Task Communication in CHERI-Enabled RTOS
by Mina Soltani Siapoush and Jim Alves-Foss
Future Internet 2025, 17(11), 506; https://doi.org/10.3390/fi17110506 - 4 Nov 2025
Viewed by 517
Abstract
Efficient and secure inter-task communication (ITC) is critical in real-time embedded systems, particularly in security-sensitive architectures. Traditional ITC mechanisms in Real-Time Operating Systems (RTOSs) often incur high latency from kernel trapping, context-switch overhead, and multiple data copies during message passing. This paper introduces a zero-copy, capability-protected ITC framework for CHERI-enabled RTOS environments that achieves both high performance and strong compartmental isolation. The approach integrates mutexes and semaphores encapsulated as sealed capabilities, a shared-memory ring buffer for messaging, and compartment-local stubs to eliminate redundant data copies and reduce cross-compartment transitions. Temporal safety is ensured through hardware-backed capability expiration, mitigating use-after-free vulnerabilities. Implemented as a reference application on the CHERIoT RTOS, the framework delivers up to 3× lower mutex lock latency and over 70% faster message transfers compared to baseline FreeRTOS, while preserving deterministic real-time behavior. Security evaluation confirms resilience against unauthorized access, capability leakage, and TOCTTOU (time-of-check-to-time-of-use) vulnerabilities. These results demonstrate that capability-based zero-copy ITC can be a practical and performance-optimal solution for constrained embedded systems that demand high throughput, low latency, and verifiable isolation guarantees. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)
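The framework itself is implemented in C on the CHERIoT RTOS with sealed capabilities; the Python sketch below is only an analogy that illustrates the single-producer/single-consumer ring-buffer indexing behind zero-copy messaging, where the consumer receives an in-place view of the shared buffer rather than a copy. Slot count, slot size, and the fixed-size message layout are assumptions for the illustration.

```python
# Conceptual sketch only: the paper's framework is C code on the CHERIoT RTOS
# with sealed capabilities; this Python analogy just shows the
# single-producer/single-consumer ring-buffer indexing that lets a reader
# consume a message in place (via a memoryview) without an extra copy.
# Slot size, capacity and the fixed-size slot layout are assumptions.

class RingBuffer:
    def __init__(self, slots: int = 8, slot_size: int = 64):
        self.slots, self.slot_size = slots, slot_size
        self.buf = memoryview(bytearray(slots * slot_size))  # shared backing store
        self.head = 0  # next slot to write (producer-owned index)
        self.tail = 0  # next slot to read (consumer-owned index)

    def try_send(self, payload: bytes) -> bool:
        if (self.head + 1) % self.slots == self.tail:        # buffer full
            return False
        start = self.head * self.slot_size
        self.buf[start:start + len(payload)] = payload        # single copy into the shared buffer
        self.head = (self.head + 1) % self.slots
        return True

    def try_receive(self):
        if self.tail == self.head:                            # buffer empty
            return None
        start = self.tail * self.slot_size
        view = self.buf[start:start + self.slot_size]         # zero-copy view into the slot
        self.tail = (self.tail + 1) % self.slots
        return view

rb = RingBuffer()
rb.try_send(b"sensor reading 42")
msg = rb.try_receive()
print(bytes(msg[:17]))
```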

37 pages, 774 KB  
Article
Resilient Federated Learning for Vehicular Networks: A Digital Twin and Blockchain-Empowered Approach
by Jian Li, Chuntao Zheng and Ziyao Chen
Future Internet 2025, 17(11), 505; https://doi.org/10.3390/fi17110505 - 3 Nov 2025
Viewed by 521
Abstract
Federated learning (FL) is a foundational technology for enabling collaborative intelligence in vehicular edge computing (VEC). However, the volatile network topology caused by high vehicle mobility and the profound security risks of model poisoning attacks severely undermine its practical deployment. This paper introduces DTB-FL, a novel framework that synergistically integrates digital twin (DT) and blockchain technologies to establish a secure and efficient learning paradigm. DTB-FL leverages a digital twin to create a real-time virtual replica of the network, enabling a predictive, mobility-aware participant selection strategy that preemptively mitigates network instability. Concurrently, a private blockchain underpins a decentralized trust infrastructure, employing a dynamic reputation system to secure model aggregation and smart contracts to automate fair incentives. Crucially, these components are synergistic: The DT provides a stable cohort of participants, enhancing the accuracy of the blockchain’s reputation assessment, while the blockchain feeds reputation scores back to the DT to refine future selections. Extensive simulations demonstrate that DTB-FL accelerates model convergence by 43% compared to FedAvg and maintains 75% accuracy under poisoning attacks even when 40% of participants are malicious—a scenario where baseline FL methods degrade to below 40% accuracy. The framework also exhibits high resilience to network dynamics, sustaining performance at vehicle speeds up to 120 km/h. DTB-FL provides a comprehensive, cross-layer solution that transforms vehicular FL from a vulnerable theoretical model into a practical, robust, and scalable platform for next-generation intelligent transportation systems. Full article
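To illustrate how a reputation system can harden aggregation against poisoning, the sketch below performs a reputation-weighted average of client updates and drops clients below a trust threshold. The reputation values, threshold, and flat parameter vectors are assumptions; DTB-FL's blockchain-backed reputation mechanism and digital-twin-driven participant selection are not reproduced here.

```python
# Minimal sketch (not the paper's blockchain implementation): aggregate
# client model updates weighted by a reputation score, so low-reputation
# (potentially poisoned) updates contribute less or are excluded.
# Reputation values and the flat parameter vectors are illustrative.
import numpy as np

def reputation_weighted_aggregate(updates, reputations, threshold=0.2):
    """Weighted average of parameter vectors, dropping clients below threshold."""
    kept = [(u, r) for u, r in zip(updates, reputations) if r >= threshold]
    if not kept:
        raise ValueError("no trusted clients this round")
    weights = np.array([r for _, r in kept], dtype=float)
    weights /= weights.sum()
    stacked = np.stack([u for u, _ in kept])
    return (weights[:, None] * stacked).sum(axis=0)

# Toy round: three honest clients near the true update, one poisoned outlier.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
poisoned = np.array([10.0, -10.0])
updates = honest + [poisoned]
reputations = [0.9, 0.8, 0.85, 0.05]   # the attacker's reputation has decayed
print(reputation_weighted_aggregate(updates, reputations))
```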

34 pages, 9628 KB  
Article
Modeling Interaction Patterns in Visualizations with Eye-Tracking: A Characterization of Reading and Information Styles
by Angela Locoro and Luigi Lavazza
Future Internet 2025, 17(11), 504; https://doi.org/10.3390/fi17110504 - 3 Nov 2025
Viewed by 534
Abstract
In data visualization, users’ scanning patterns are as crucial as their reading patterns in text-based media. Yet, no systematic attempt exists to characterize this activity with basic features, such as reading speed and scanpaths, nor to relate them to data complexity and information disposition. To fill this gap, this paper proposes a model-based method to analyze and interpret those features from eye-tracking data. To this end, the bias-noise model is applied to a data visualization eye-tracking dataset available online, and enriched with area-of-interest labels. The positive results of this method are as follows: (i) the identification of users’ reading styles such as meticulous, systematic, and serendipitous; (ii) the characterization of information disposition as gathered or scattered, and of information complexity as more or less dense; (iii) the discovery of a behavioural pattern of efficiency, given that the more visualizations were read by a participant, the greater their reading speed, consistency, and predictability of reading; (iv) the identification of encoding and title areas of interest as the primary loci of attention in visualizations, with a peculiar back-and-forth reading pattern; (v) the identification of the encoding area of interest as the fastest to read in less dense visualization types, such as bar, circle, and line charts. Future experiments involving participants from diverse cultural backgrounds could not only validate the observed behavioural patterns, but also enrich the experimental framework with additional perspectives. Full article
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
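As a rough illustration of the kind of per-participant features such an analysis starts from, the sketch below computes reading speed (fixations per second) and back-and-forth transitions between the title and encoding areas of interest from AOI-labelled fixation records. The record format is an assumption; the paper's bias-noise model itself is not implemented here.

```python
# Minimal sketch (not the paper's bias-noise model): derive simple reading
# features from AOI-labelled fixation records, such as fixations per second
# and back-and-forth transitions between the "title" and "encoding" AOIs.
# The record format (timestamp in seconds, AOI label, duration) is assumed.

def reading_features(fixations):
    """fixations: time-ordered list of (timestamp_s, aoi_label, duration_s) tuples."""
    if len(fixations) < 2:
        return {"fixations_per_s": 0.0, "title_encoding_switches": 0}
    span = fixations[-1][0] - fixations[0][0]
    switches = 0
    for (_, prev_aoi, _), (_, cur_aoi, _) in zip(fixations, fixations[1:]):
        if {prev_aoi, cur_aoi} == {"title", "encoding"}:
            switches += 1
    return {
        "fixations_per_s": len(fixations) / span if span > 0 else 0.0,
        "title_encoding_switches": switches,
    }

# Toy trial: a back-and-forth pattern between the title and the encoding AOI.
trial = [(0.0, "title", 0.2), (0.4, "encoding", 0.3), (0.9, "title", 0.2),
         (1.3, "encoding", 0.4), (2.0, "legend", 0.2)]
print(reading_features(trial))
```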

62 pages, 2365 KB  
Review
Securing the SDN Data Plane in Emerging Technology Domains: A Review
by Travis Quinn, Faycal Bouhafs and Frank den Hartog
Future Internet 2025, 17(11), 503; https://doi.org/10.3390/fi17110503 - 3 Nov 2025
Viewed by 1186
Abstract
Over the last decade, Software-Defined Networking (SDN) has garnered increasing research interest for networking and security. This interest stems from the programmability and dynamicity offered by SDN, as well as the growing importance of SDN as a foundational technology of future telecommunications networks and the greater Internet. However, research into SDN security has focused disproportionately on the security of the control plane, resulting in the relative trivialization of data plane security methods and a corresponding lack of appreciation of the data plane in SDN security discourse. To remedy this imbalance, this paper provides a comprehensive review of SDN data plane security research, classified into three primary research domains and several sub-domains. The three primary research domains are as follows: security capabilities within the data plane, security of the SDN infrastructure, and dynamic routing within the data plane. Our work resulted in the identification of specific strengths and weaknesses in existing research, as well as promising future directions, based on novelty and overlap with emerging technology domains. The most striking future directions are the use of hybrid SDN architectures leveraging a programmable data plane, SDN for heterogeneous network security, and the development of trust-based methods for SDN management and security, including trust-based routing. Full article

14 pages, 1197 KB  
Article
ABMS-Driven Reinforcement Learning for Dynamic Resource Allocation in Mass Casualty Incidents
by Ionuț Murarețu, Alexandra Vultureanu-Albiși, Sorin Ilie and Costin Bădică
Future Internet 2025, 17(11), 502; https://doi.org/10.3390/fi17110502 - 3 Nov 2025
Viewed by 467
Abstract
This paper introduces a novel framework that integrates reinforcement learning with declarative modeling and mathematical optimization for dynamic resource allocation during mass casualty incidents. Our approach leverages the Mesa agent-based modeling library to develop a flexible and scalable simulation environment as a decision support system for emergency response. This paper addresses the challenge of efficiently allocating casualties to hospitals by combining mixed-integer linear programming and constraint programming while enabling a central decision-making component to adapt allocation strategies based on experience. The two-layer architecture ensures that casualty-to-hospital assignments satisfy geographical and medical constraints while optimizing resource usage. The reinforcement learning component receives feedback through agent-based simulation outcomes, using survival rates as the reward signal to guide future allocation decisions. Our experimental evaluation, using simulated emergency scenarios, shows a significant improvement in survival rates compared to traditional optimization approaches. The results indicate that the hybrid approach successfully combines the robustness of declarative modeling with the adaptability required for smart decision making in complex and dynamic emergency scenarios. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
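To illustrate the optimization layer in isolation, the sketch below assigns casualties to hospital beds by minimizing total travel time, expanding each hospital into one column per free bed so that a standard linear assignment solver respects capacity. Travel times and capacities are made up, and the paper's reinforcement learning feedback loop and constraint-programming formulation are not reproduced here.

```python
# Minimal sketch of the assignment layer only (not the paper's full
# RL + declarative-modeling pipeline): assign casualties to hospital beds
# by minimizing total transport cost, expanding each hospital into as many
# columns as it has free beds. Costs and capacities are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_casualties(cost, capacity):
    """cost: (n_casualties, n_hospitals) travel-time matrix; capacity: free beds per hospital."""
    slot_cols, slot_owner = [], []
    for h, beds in enumerate(capacity):
        for _ in range(beds):
            slot_cols.append(cost[:, h])
            slot_owner.append(h)
    expanded = np.stack(slot_cols, axis=1)         # (n_casualties, total_beds)
    rows, cols = linear_sum_assignment(expanded)   # optimal one-to-one matching
    return {int(r): slot_owner[c] for r, c in zip(rows, cols)}

# Toy scenario: 4 casualties, 2 hospitals with 3 and 2 free beds.
travel_minutes = np.array([[10, 25],
                           [12, 8],
                           [30, 5],
                           [9, 20]])
print(assign_casualties(travel_minutes, capacity=[3, 2]))
```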
