Future Internet, Volume 17, Issue 7 (July 2025) – 51 articles

Cover Story: The rapid expansion of Edge and Industrial Internet of Things (IIoT) systems has intensified the risk and complexity of cyberattacks. In this study, we propose an optimized hybrid DL framework for robust intrusion detection in Edge and IIoT environments. To enhance the training and convergence of the GAN component, we integrate an improved chimp optimization algorithm (IChOA) for hyperparameter tuning and feature refinement. Extensive experiments are conducted to assess the model’s performance compared to several state-of-the-art techniques, and the results demonstrate that the proposed Transformer–GAN–AE framework outperforms all baseline methods, achieving a maximum accuracy of 98.92%, along with superior recall and AUC values.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
34 pages, 1835 KiB  
Article
Advancing Neurodegenerative Disease Management: Technical, Ethical, and Regulatory Insights from the NeuroPredict Platform
by Marilena Ianculescu, Lidia Băjenaru, Ana-Mihaela Vasilevschi, Maria Gheorghe-Moisii and Cristina-Gabriela Gheorghe
Future Internet 2025, 17(7), 320; https://doi.org/10.3390/fi17070320 - 21 Jul 2025
Viewed by 205
Abstract
On a worldwide scale, neurodegenerative diseases, including multiple sclerosis, Parkinson’s, and Alzheimer’s, pose considerable healthcare challenges, demanding the development of novel approaches to early detection and efficient treatment. With its ability to provide real-time patient monitoring, customized medical care, and advanced predictive analytics, artificial intelligence (AI) is fundamentally transforming the way healthcare is provided. Through the integration of wearable physiological sensors, motion sensors, and neurological assessment tools, the NeuroPredict platform harnesses AI and smart sensor technologies to enhance the management of specific neurodegenerative diseases. Machine learning algorithms process these data flows to find patterns that indicate disease progression. This paper covers the design and architecture of the NeuroPredict platform, stressing the ethical and regulatory requirements that guide its development. Initial development of AI algorithms for disease monitoring, technical achievements, and continual enhancements driven by early user feedback are addressed in the discussion section. The paper also outlines risk analysis and mitigation approaches intended to establish the platform’s trustworthiness and data security. The NeuroPredict platform’s potential for delivering AI-driven smart healthcare solutions is highlighted, even though it is currently in the development stage. Subsequent research is expected to focus on boosting data integration, expanding AI models, and ensuring regulatory compliance for clinical application. The current results are based on incremental laboratory tests using simulated user roles, with no clinical patient data involved so far. This study reports an experimental technology evaluation of modular components of the NeuroPredict platform, integrating multimodal sensors and machine learning pipelines in a laboratory-based setting, with future co-design and clinical validation foreseen for a later project phase. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)

28 pages, 2139 KiB  
Article
An Improved Approach to DNS Covert Channel Detection Based on DBM-ENSec
by Xinyu Li, Xiaoying Wang, Guoqing Yang, Jinsha Zhang, Chunhui Li, Fangfang Cui and Ruize Gu
Future Internet 2025, 17(7), 319; https://doi.org/10.3390/fi17070319 - 21 Jul 2025
Viewed by 165
Abstract
The covert nature of DNS covert channels makes them a widely utilized method for data exfiltration by malicious attackers. In response to this challenge, the present study proposes a detection methodology for DNS covert channels that employs a Deep Boltzmann Machine with Enhanced Security (DBM-ENSec). This approach entails the creation of a dataset through the collection of malicious traffic associated with various DNS covert channel attacks. Time-dependent grouping features are excluded, and feature optimization is conducted on individual traffic data through feature selection and normalization to minimize redundancy, enhancing the differentiation and stability of the features. The result of this process is the extraction of 23-dimensional features for each DNS packet. The extracted features are converted to gray scale images to improve the interpretability of the model and then fed into an improved Deep Boltzmann Machine for further optimization. The optimized features are then processed by an ensemble of classifiers (including Random Forest, XGBoost, LightGBM, and CatBoost) for detection purposes. Experimental results show that the proposed method achieves 99.92% accuracy in detecting DNS covert channels, with a validation accuracy of up to 98.52% on publicly available datasets. Full article
(This article belongs to the Section Cybersecurity)
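As a concrete illustration of the final detection stage, the sketch below assembles a soft-voting ensemble over the four classifier families named in the abstract (Random Forest, XGBoost, LightGBM, CatBoost). Synthetic data stands in for the 23-dimensional DBM-refined features, and the voting scheme itself is an assumption, not the authors' exact aggregation rule.

```python
# Hypothetical sketch: a soft-voting ensemble standing in for the paper's
# detection stage; synthetic data replaces the 23-dim DBM-refined features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

X, y = make_classification(n_samples=2000, n_features=23, random_state=0)  # 23-dim stand-in
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("lgbm", LGBMClassifier()),
        ("cat", CatBoostClassifier(verbose=0)),
    ],
    voting="soft",  # average predicted probabilities across members
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```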

87 pages, 5171 KiB  
Review
Toward Secure Smart Grid Systems: Risks, Threats, Challenges, and Future Directions
by Jean Paul A. Yaacoub, Hassan N. Noura, Ola Salman and Khaled Chahine
Future Internet 2025, 17(7), 318; https://doi.org/10.3390/fi17070318 - 21 Jul 2025
Viewed by 367
Abstract
The evolution of electrical power systems into smart grids has brought about significant advancements in electricity generation, transmission, and utilization. These cutting-edge grids have shown potential as an effective way to maximize energy efficiency, manage resources effectively, and enhance overall reliability and sustainability. However, with the integration of complex technologies and interconnected systems inherent to smart grids comes a new set of safety and security challenges that must be addressed. First, this paper provides an in-depth review of the key considerations surrounding safety and security in smart grid environments, identifying potential risks, vulnerabilities, and challenges associated with deploying smart grid infrastructure within the context of the Internet of Things (IoT). In response, we explore both cryptographic and non-cryptographic countermeasures, emphasizing the need for adaptive, lightweight, and proactive security mechanisms. As a key contribution, we introduce a layered classification framework that maps smart grid attacks to affected components and defense types, providing a clearer structure for analyzing the impact of threats and responses. In addition, we identify current gaps in the literature, particularly in real-time anomaly detection, interoperability, and post-quantum cryptographic protocols, thus offering forward-looking recommendations to guide future research. Finally, we present the Multi-Layer Threat-Defense Alignment Framework, a unique addition that provides a methodical and strategic approach to cybersecurity planning by aligning smart grid threats and defenses across architectural layers. Full article
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)

42 pages, 2129 KiB  
Review
Ensemble Learning Approaches for Multi-Class Intrusion Detection Systems for the Internet of Vehicles (IoV): A Comprehensive Survey
by Manal Alharthi, Faiza Medjek and Djamel Djenouri
Future Internet 2025, 17(7), 317; https://doi.org/10.3390/fi17070317 - 19 Jul 2025
Viewed by 363
Abstract
The emergence of the Internet of Vehicles (IoV) has revolutionized intelligent transportation and communication systems. However, IoV presents many complex and ever-changing security challenges and thus requires robust cybersecurity protocols. This paper comprehensively describes and evaluates ensemble learning approaches for multi-class intrusion detection systems in the IoV environment. The study evaluates several approaches, such as stacking, voting, boosting, and bagging. A comprehensive review of the literature spanning 2020 to 2025 reveals important trends and topics that require further investigation and the relative merits of different ensemble approaches. The NSL-KDD, CICIDS2017, and UNSW-NB15 datasets are widely used to evaluate the performance of Ensemble Learning-Based Intrusion Detection Systems (ELIDS). ELIDS evaluation is usually carried out using some popular performance metrics, including Precision, Accuracy, Recall, F1-score, and Area Under Receiver Operating Characteristic Curve (AUC-ROC), which were used to evaluate and measure the effectiveness of different ensemble learning methods. Given the increasing complexity and frequency of cyber threats in IoV environments, ensemble learning methods such as bagging, boosting, and stacking enhance adaptability and robustness. These methods aggregate multiple learners to improve detection rates, reduce false positives, and ensure more resilient intrusion detection models that can evolve alongside emerging attack patterns. Full article
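To make the surveyed ensemble styles concrete, here is a minimal stacking sketch in scikit-learn, with a bagging-style and a boosting-style base learner feeding a logistic-regression meta-learner; the dataset and hyperparameters are placeholders rather than settings from any surveyed ELIDS paper.

```python
# Minimal stacking sketch: base learners' out-of-fold predictions feed a
# logistic-regression meta-learner. Data and settings are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3000, n_features=30, n_classes=3,
                           n_informative=10, random_state=1)
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=1)),  # bagging-style member
        ("gb", GradientBoostingClassifier(random_state=1)),                # boosting-style member
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions for the meta-learner
)
print("macro-F1:", cross_val_score(stack, X, y, scoring="f1_macro", cv=3).mean())
```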

27 pages, 2572 KiB  
Article
Parallel Agent-Based Framework for Analyzing Urban Agricultural Supply Chains
by Manuel Ignacio Manríquez, Veronica Gil-Costa and Mauricio Marin
Future Internet 2025, 17(7), 316; https://doi.org/10.3390/fi17070316 - 19 Jul 2025
Viewed by 137
Abstract
This work presents a parallel agent-based framework designed to analyze the dynamics of vegetable trade within a metropolitan area. The system integrates agent-based and discrete event techniques to capture the complex interactions among farmers, vendors, and consumers in urban agricultural supply chains. Decision-making processes are modeled in detail: farmers select crops based on market trends and environmental risks, while vendors and consumers adapt their purchasing behavior according to seasonality, prices, and availability. To efficiently handle the computational demands of large-scale scenarios, we adopt an optimistic approximate parallel execution strategy. Furthermore, we introduce a credit-based load balancing mechanism that mitigates the effects of heterogeneous communication patterns and improves scalability. This framework enables detailed analysis of food distribution systems in urban contexts, offering insights relevant to smart cities and digital agriculture initiatives. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)

31 pages, 4220 KiB  
Article
A Novel Multi-Server Federated Learning Framework in Vehicular Edge Computing
by Fateme Mazloomi, Shahram Shah Heydari and Khalil El-Khatib
Future Internet 2025, 17(7), 315; https://doi.org/10.3390/fi17070315 - 19 Jul 2025
Viewed by 243
Abstract
Federated learning (FL) has emerged as a powerful approach for privacy-preserving model training in autonomous vehicle networks, where real-world deployments rely on multiple roadside units (RSUs) serving heterogeneous clients with intermittent connectivity. While most research focuses on single-server or hierarchical cloud-based FL, multi-server FL can alleviate the communication bottlenecks of traditional setups. To this end, we propose an edge-based, multi-server FL (MS-FL) framework that combines performance-driven aggregation at each server—including statistical weighting of peer updates and outlier mitigation—with an application layer handover protocol that preserves model updates when vehicles move between RSU coverage areas. We evaluate MS-FL on both MNIST and GTSRB benchmarks under shard- and Dirichlet-based non-IID splits, comparing it against single-server FL and a two-layer edge-plus-cloud baseline. Over multiple communication rounds, MS-FL with the Statistical Performance-Aware Aggregation method and Dynamic Weighted Averaging Aggregation achieved up to a 20-percentage-point improvement in accuracy and consistent gains in precision, recall, and F1-score (95% confidence), while matching the low latency of edge-only schemes and avoiding the extra model transfer delays of cloud-based aggregation. These results demonstrate that coordinated cooperation among servers based on model quality and seamless handovers can accelerate convergence, mitigate data heterogeneity, and deliver robust, privacy-aware learning in connected vehicle environments. Full article
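The aggregation idea can be pictured compactly. The snippet below is an illustrative stand-in for MS-FL's performance-driven aggregation: client updates are weighted by reported validation accuracy, and updates far from the median update are discarded as outliers. The exact Statistical Performance-Aware Aggregation rule is not reproduced here.

```python
# Illustrative server-side aggregation in the spirit of MS-FL: accuracy-
# weighted averaging of client updates with median-distance outlier removal.
import numpy as np

def aggregate(updates, accuracies, z_thresh=2.5):
    """updates: list of 1-D parameter vectors; accuracies: per-client scores."""
    U = np.stack(updates)                       # (clients, params)
    med = np.median(U, axis=0)
    dists = np.linalg.norm(U - med, axis=1)     # distance of each update to the median
    z = (dists - dists.mean()) / (dists.std() + 1e-12)
    keep = z < z_thresh                         # discard outlier updates
    w = np.asarray(accuracies)[keep]
    w = w / w.sum()                             # accuracy-proportional weights
    return (U[keep] * w[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
ups = [rng.normal(size=10) for _ in range(8)] + [rng.normal(10, 1, size=10)]  # one poisoned update
accs = [0.9] * 8 + [0.5]
print(aggregate(ups, accs))
```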

27 pages, 3641 KiB  
Article
TriagE-NLU: A Natural Language Understanding System for Clinical Triage and Intervention in Multilingual Emergency Dialogues
by Béatrix-May Balaban, Ioan Sacală and Alina-Claudia Petrescu-Niţă
Future Internet 2025, 17(7), 314; https://doi.org/10.3390/fi17070314 - 18 Jul 2025
Viewed by 144
Abstract
Telemedicine in emergency contexts presents unique challenges, particularly in multilingual and low-resource settings where accurate clinical understanding and triage decision support are critical. This paper introduces TriagE-NLU, a novel multilingual natural language understanding system designed to perform both semantic parsing and clinical intervention classification from emergency dialogues. The system is built on a federated learning architecture to ensure data privacy and adaptability across regions and is trained using TriageX, a synthetic, clinically grounded dataset covering five languages (English, Spanish, Romanian, Arabic, and Mandarin). TriagE-NLU integrates fine-tuned multilingual transformers with a hybrid rules-and-policy decision engine, enabling it to parse structured medical information (symptoms, risk factors, temporal markers) and recommend appropriate interventions based on recognized patterns. Evaluation against strong multilingual baselines, including mT5, mBART, and XLM-RoBERTa, demonstrates superior performance by TriagE-NLU, achieving F1 scores of 0.91 for semantic parsing and 0.89 for intervention classification, along with 0.92 accuracy and a BLEU score of 0.87. These results validate the system’s robustness in multilingual emergency telehealth and its ability to generalize across diverse input scenarios. This paper establishes a new direction for privacy-preserving, AI-assisted triage systems. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)

35 pages, 2073 KiB  
Review
Using the Zero Trust Five-Step Implementation Process with Smart Environments: State-of-the-Art Review and Future Directions
by Shruti Kulkarni, Alexios Mylonas and Stilianos Vidalis
Future Internet 2025, 17(7), 313; https://doi.org/10.3390/fi17070313 - 18 Jul 2025
Viewed by 311
Abstract
There is growing pressure on industry to secure its environments and demonstrate its commitment to taking the right steps to secure its products. This is because of the growing number of security compromises in the IT industry, Operational Technology environments, Internet of Things environments, and smart home devices. These compromises are not just about data breaches or data exfiltration, but also about unauthorised access to devices that are not configured correctly and vulnerabilities in software components, which usually lead to insecure authentication and authorisation. Incorrect configurations usually take the form of devices being made available on the Internet (public domain), reusable credentials, access granted without verifying the requestor, and easily available credentials such as default credentials. Organisations seeking to address the dual pressure of demonstrating steps in the right direction and addressing unauthorised access to resources can find a viable approach in the zero trust concept. Zero trust principles are about moving security controls closer to the data, applications, assets and services and are based on the principle of “never trust, always verify”. As it stands today, zero trust research has advanced far beyond the concept of “never trust, always verify”. This paper presents a literature review of research conducted in the space of smart home devices and the IoT and the applicability of the zero trust five-step implementation process to secure them. We discuss the history of zero trust, the tenets of zero trust, the five-step implementation process for zero trust, and its adoption for smart home devices and the Internet of Things, and we provide suggestions for future research. Full article

21 pages, 2065 KiB  
Article
Enhancing Security in 5G and Future 6G Networks: Machine Learning Approaches for Adaptive Intrusion Detection and Prevention
by Konstantinos Kalodanis, Charalampos Papapavlou and Georgios Feretzakis
Future Internet 2025, 17(7), 312; https://doi.org/10.3390/fi17070312 - 18 Jul 2025
Viewed by 294
Abstract
The evolution from 4G to 5G—and eventually to the forthcoming 6G networks—has revolutionized wireless communications by enabling high-speed, low-latency services that support a wide range of applications, including the Internet of Things (IoT), smart cities, and critical infrastructures. However, the unique characteristics of these networks—extensive connectivity, device heterogeneity, and architectural flexibility—impose significant security challenges. This paper introduces a comprehensive framework for enhancing the security of current and emerging wireless networks by integrating state-of-the-art machine learning (ML) techniques into intrusion detection and prevention systems. It also thoroughly explores the key aspects of wireless network security, including architectural vulnerabilities in both 5G and future 6G networks, novel ML algorithms tailored to address evolving threats, privacy-preserving mechanisms, and regulatory compliance with the EU AI Act. Finally, a Wireless Intrusion Detection Algorithm (WIDA) is proposed, demonstrating promising results in improving wireless network security. Full article
(This article belongs to the Special Issue Advanced 5G and Beyond Networks)

55 pages, 6352 KiB  
Review
A Deep Learning Framework for Enhanced Detection of Polymorphic Ransomware
by Mazen Gazzan, Bader Alobaywi, Mohammed Almutairi and Frederick T. Sheldon
Future Internet 2025, 17(7), 311; https://doi.org/10.3390/fi17070311 - 18 Jul 2025
Viewed by 412
Abstract
Ransomware, a significant cybersecurity threat, encrypts files and causes substantial damage, making early detection crucial yet challenging. This paper introduces a novel multi-phase framework for early ransomware detection, designed to enhance accuracy and minimize false positives. The framework addresses the limitations of existing methods by integrating operational data with situational and threat intelligence, enabling it to dynamically adapt to the evolving ransomware landscape. Key innovations include (1) data augmentation using a Bi-Gradual Minimax Generative Adversarial Network (BGM-GAN) to generate synthetic ransomware attack patterns, addressing data insufficiency; (2) Incremental Mutual Information Selection (IMIS) for dynamically selecting relevant features, adapting to evolving ransomware behaviors and reducing computational overhead; and (3) a Deep Belief Network (DBN) detection architecture, trained on the augmented data and optimized with Uncertainty-Aware Dynamic Early Stopping (UA-DES) to prevent overfitting. The model demonstrates a 4% improvement in detection accuracy (from 90% to 94%) through synthetic data generation and reduces false positives from 15.4% to 14%. The IMIS technique further increases accuracy to 96% while reducing false positives. The UA-DES optimization boosts accuracy to 98.6% and lowers false positives to 10%. Overall, this framework effectively addresses the challenges posed by evolving ransomware, significantly enhancing detection accuracy and reliability. Full article
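A simplified stand-in for the IMIS step: rank features by mutual information with the label and keep the top k. The incremental, behavior-adaptive aspect of IMIS is beyond this sketch, and the data is synthetic.

```python
# Hedged sketch of mutual-information-based feature selection, a simplified
# stand-in for the paper's Incremental Mutual Information Selection (IMIS).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=1500, n_features=40, n_informative=8, random_state=0)
selector = SelectKBest(score_func=mutual_info_classif, k=8).fit(X, y)
X_reduced = selector.transform(X)               # reduced feature matrix for the DBN stage
print("kept feature indices:", selector.get_support(indices=True))
```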

40 pages, 2206 KiB  
Review
Toward Generative AI-Based Intrusion Detection Systems for the Internet of Vehicles (IoV)
by Isra Mahmoudi, Djallel Eddine Boubiche, Samir Athmani, Homero Toral-Cruz and Freddy I. Chan-Puc
Future Internet 2025, 17(7), 310; https://doi.org/10.3390/fi17070310 - 17 Jul 2025
Viewed by 442
Abstract
The increasing complexity and scale of Internet of Vehicles (IoV) networks pose significant security challenges, necessitating the development of advanced intrusion detection systems (IDS). Traditional IDS approaches, such as rule-based and signature-based methods, are often inadequate in detecting novel and sophisticated attacks due to their limited adaptability and dependency on predefined patterns. To overcome these limitations, machine learning (ML) and deep learning (DL)-based IDS have been introduced, offering better generalization and the ability to learn from data. However, these models can still struggle with zero-day attacks, require large volumes of labeled data, and may be vulnerable to adversarial examples. In response to these challenges, Generative AI-based IDS—leveraging models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers—have emerged as promising solutions that offer enhanced adaptability, synthetic data generation for training, and improved detection capabilities for evolving threats. This survey provides an overview of IoV architecture, vulnerabilities, and classical IDS techniques while focusing on the growing role of Generative AI in strengthening IoV security. It discusses the current landscape, highlights the key challenges, and outlines future research directions aimed at building more resilient and intelligent IDS for the IoV ecosystem. Full article

47 pages, 3078 KiB  
Article
Leveraging Blockchain for Ethical AI: Mitigating Digital Threats and Strengthening Societal Resilience
by Chibuzor Udokwu, Roxana Voicu-Dorobanțu, Abiodun Afolayan Ogunyemi, Alex Norta, Nata Sturua and Stefan Craß
Future Internet 2025, 17(7), 309; https://doi.org/10.3390/fi17070309 - 17 Jul 2025
Viewed by 859
Abstract
This position paper proposes a conceptual framework (CF-BIAI-SXT) for integrating blockchain with AI to enhance ethical governance, transparency, and privacy in high-risk AI applications that ensure societal resilience through the mitigation of sexual exploitation. Sextortion is a growing form of digital sexual exploitation, and the role of AI in its mitigation and the ethical issues that arise provide a good case for this paper. Through a combination of systematic and narrative literature reviews, the paper first explores the ethical shortcomings of existing AI systems in sextortion prevention and assesses the capacity of blockchain operations to mitigate these limitations. It then develops CF-BIAI-SXT, a framework operationalized through BPMN-modeled components and structured into a three-layer implementation strategy composed of technical enablement, governance alignment, and continuous oversight. The framework is then situated within real-world regulatory constraints, including GDPR and the EU AI Act. This position paper concludes that a resilient society needs ethical, privacy-first, and socially resilient digital infrastructures, and integrating two core technologies, such as AI and blockchain, creates a viable pathway towards this desideratum. Mitigating high-risk environments, such as sextortion, may be a fundamental first step in this pathway, with the potential expansion to other forms of online threats. Full article
(This article belongs to the Special Issue AI and Blockchain: Synergies, Challenges, and Innovations)

39 pages, 784 KiB  
Review
A Review of Research on Secure Aggregation for Federated Learning
by Xing Zhang, Yuexiang Luo and Tianning Li
Future Internet 2025, 17(7), 308; https://doi.org/10.3390/fi17070308 - 17 Jul 2025
Viewed by 299
Abstract
Federated learning (FL) is an advanced distributed machine learning method that effectively solves the data silo problem. With the increasing popularity of federated learning and the growing importance of privacy protection, federated learning methods that can securely aggregate models have received widespread attention. Federated learning enables clients to train models locally and share their model updates with the server. While this approach allows collaborative model training without exposing raw data, it still risks leaking sensitive information. To enhance privacy protection in federated learning, secure aggregation is considered a key enabling technology that requires further in-depth investigation. This paper summarizes the definition, classification, and applications of federated learning; reviews secure aggregation protocols proposed to address privacy and security issues in federated learning; extensively analyzes the selected protocols; and concludes by highlighting the significant challenges and future research directions in applying secure aggregation in federated learning. The purpose of this paper is to review and analyze prior research, evaluate the advantages and disadvantages of various secure aggregation schemes, and propose potential future research directions. This work aims to serve as a valuable reference for researchers studying secure aggregation in federated learning. Full article
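One classic construction the review covers, pairwise additive masking, fits in a few lines: each client perturbs its update with masks shared pairwise with peers, and the masks cancel in the server's sum, so only the aggregate is revealed. A toy numpy version, assuming honest clients and pre-shared masks:

```python
# Toy secure aggregation via pairwise additive masking: client i adds mask
# m_ij for each peer j (positive if i < j, negative otherwise); the masks
# cancel in the sum, so the server sees only the aggregate update.
import numpy as np

rng = np.random.default_rng(42)
n_clients, dim = 4, 6
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise masks, notionally derived from seeds shared between each client pair.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    out = updates[i].copy()
    for j in range(n_clients):
        if i < j:
            out += masks[(i, j)]
        elif j < i:
            out -= masks[(j, i)]
    return out

server_sum = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(server_sum, sum(updates))   # masks cancel exactly
print(server_sum)
```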

23 pages, 1755 KiB  
Article
An Efficient Continuous-Variable Quantum Key Distribution with Parameter Optimization Using Elitist Elk Herd Random Immigrants Optimizer and Adaptive Depthwise Separable Convolutional Neural Network
by Vidhya Prakash Rajendran, Deepalakshmi Perumalsamy, Chinnasamy Ponnusamy and Ezhil Kalaimannan
Future Internet 2025, 17(7), 307; https://doi.org/10.3390/fi17070307 - 17 Jul 2025
Viewed by 283
Abstract
Quantum memory is essential for the prolonged storage and retrieval of quantum information. Nevertheless, no current studies have focused on the creation of effective quantum memory for continuous variables while accounting for the decoherence rate. This work presents an effective continuous-variable quantum key distribution method with parameter optimization utilizing the Elitist Elk Herd Random Immigrants Optimizer (2E-HRIO) technique. At the outset of transmission, the quantum device undergoes initialization and authentication via Compressed Hash-based Message Authentication Code with Encoded Post-Quantum Hash (CHMAC-EPQH). The settings of the authenticated device are subsequently optimized via 2E-HRIO, which mitigates the effects of decoherence by adaptively tuning system parameters. Quantum bits are then produced from the verified device, and pilot insertion is executed within the quantum bits. The pilot-inserted signal is subjected to pulse shaping using a Gaussian filter, and the pulse-shaped signal undergoes modulation. After modulation, link-failure prediction is conducted through an authenticated channel using Radial Density-Based Spatial Clustering of Applications with Noise, and transmission then occurs via a non-failure connection. The receiver performs channel equalization on the received signal with Recursive Regularized Least Mean Squares. Subsequently, a dataset for side-channel attack authentication is gathered and preprocessed, followed by feature extraction and classification using Adaptive Depthwise Separable Convolutional Neural Networks (ADS-CNNs), which enhances security against side-channel attacks. The quantum state is evaluated based on the signal received, and raw data are collected. Thereafter, a connection is established between the transmitter and receiver, and both perform the scanning process. The error rate is then calculated and corrected based on the sifting results. Ultimately, privacy amplification and key authentication are performed using the repaired key via B-CHMAC-EPQH. The proposed system demonstrated improved resistance to decoherence and side-channel attacks, while achieving a reconciliation efficiency above 90% and an increased key generation rate. Full article

14 pages, 4648 KiB  
Article
Cyber-Physical System and 3D Visualization for a SCADA-Based Drinking Water Supply: A Case Study in the Lerma Basin, Mexico City
by Gabriel Sepúlveda-Cervantes, Eduardo Vega-Alvarado, Edgar Alfredo Portilla-Flores and Eduardo Vivanco-Rodríguez
Future Internet 2025, 17(7), 306; https://doi.org/10.3390/fi17070306 - 17 Jul 2025
Viewed by 287
Abstract
Cyber-physical systems such as Supervisory Control and Data Acquisition (SCADA) have been applied in industrial automation and infrastructure management for decades. They are hybrid tools for administration, monitoring, and continuous control of real physical systems through their computational representation. SCADA systems have evolved along with computing technology, from their beginnings with low-performance computers, monochrome monitors and communication networks with a range of a few hundred meters, to high-performance systems with advanced 3D graphics and wired and wireless computer networks. This article presents a methodology for the design of a SCADA system with a 3D Visualization for Drinking Water Supply, and its implementation in the Lerma Basin System of Mexico City as a case study. The monitoring of water consumption from the wells is presented, as well as the pressure levels throughout the system. The 3D visualization is generated from the GIS information and the communication is carried out using a hybrid radio frequency transmission system, satellite, and telephone network. The pumps that extract water from each well are teleoperated and monitored in real time. The developed system can be scaled to generate a simulator of water behavior of the Lerma Basin System and perform contingency planning. Full article

19 pages, 512 KiB  
Article
Attack Surface Score for Software Systems
by Yudeep Rajbhandari, Rokin Maharjan, Sakshi Shrestha and Tomas Cerny
Future Internet 2025, 17(7), 305; https://doi.org/10.3390/fi17070305 - 14 Jul 2025
Viewed by 273
Abstract
Software attack surfaces define the external boundaries—entry points, communication channels, and sensitive data stores—through which adversaries may compromise a system. This paper introduces a scoring mechanism that produces a normalized attack-surface metric in the range of 0–1. Building on the established Damage-Potential-to-Effort ratio, our approach further incorporates real-world vulnerability intelligence drawn from MITRE’s CVE and CWE repositories. We compute each application’s score by ingesting preliminary findings from a static-analysis tool and processing them through our unified model. To assess effectiveness, we validate the scoring system across a spectrum of scenarios, from a simple Java application to complex enterprise applications. The resulting metric offers development and security teams a concise, objective measure to monitor an application’s attack surface and hence proactively identify vulnerabilities in their applications. This tool can also be used to benchmark various third-party or dependent applications, enabling both developers and security practitioners to better manage risk. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
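For intuition, a purely illustrative scoring sketch in the spirit of the abstract: sum Damage-Potential-to-Effort ratios over static-analysis findings and squash the total into [0, 1). The severity inputs and the squashing map are assumptions, not the paper's model.

```python
# Illustrative-only attack-surface score: the weights, CVE/CWE-derived
# severity inputs, and the squashing function are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class Finding:
    damage_potential: float  # e.g. scaled from CVSS/CWE severity, 0-10
    effort: float            # estimated attacker effort, > 0

def attack_surface_score(findings):
    raw = sum(f.damage_potential / f.effort for f in findings)
    return raw / (1.0 + raw)   # monotone map of [0, inf) onto [0, 1)

findings = [Finding(8.1, 2.0), Finding(5.3, 4.0), Finding(9.8, 1.5)]
print(round(attack_surface_score(findings), 3))
```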

17 pages, 1184 KiB  
Article
A Biologically Inspired Cost-Efficient Zero-Trust Security Approach for Attacker Detection and Classification in Inter-Satellite Communication Networks
by Sridhar Varadala and Hao Xu
Future Internet 2025, 17(7), 304; https://doi.org/10.3390/fi17070304 - 13 Jul 2025
Viewed by 209
Abstract
In next-generation Low-Earth-Orbit (LEO) satellite networks, securing inter-satellite communication links (ISLs) through strong authentication is essential due to the network’s dynamic and distributed structure. Traditional authentication systems often struggle in these environments, leading to the adoption of Zero-Trust Security (ZTS) models. However, current ZTS protocols typically introduce high computational overhead, especially as the number of satellite nodes grows, which can impact both security and network performance. To overcome these challenges, a new bio-inspired ZTS framework called Manta Ray Foraging Cost-Optimized Zero-Trust Security (MRFCO-ZTS) has been introduced. This approach uses data-driven learning methods to enhance security across satellite communications. It continuously evaluates access requests by applying a cost function that accounts for risk level, likelihood of attack, and computational delay. The Manta Ray Foraging Optimization (MRFO) algorithm is used to minimize this cost, enabling effective classification of nodes as either trusted or malicious based on historical authentication records and real-time behavior. MRFCO-ZTS improves the accuracy of attacker detection while maintaining secure data exchange between authenticated satellites. Its effectiveness has been tested through numerical simulations under different satellite traffic conditions, with performance measured in terms of security accuracy, latency, and operational efficiency. Full article
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems, 2nd Edition)
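The cost function can be pictured as follows; the linear form, weights, and threshold are guesses for illustration, and the MRFO search that would tune them is omitted.

```python
# Guessed shape of the access-request cost the abstract describes (risk level,
# attack likelihood, computational delay); all parameters are placeholders.
def access_cost(risk, attack_prob, delay, w=(0.5, 0.4, 0.1)):
    return w[0] * risk + w[1] * attack_prob + w[2] * delay

def classify_node(risk, attack_prob, delay, threshold=0.5):
    return "malicious" if access_cost(risk, attack_prob, delay) > threshold else "trusted"

print(classify_node(risk=0.2, attack_prob=0.1, delay=0.3))   # -> trusted
print(classify_node(risk=0.9, attack_prob=0.8, delay=0.4))   # -> malicious
```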

19 pages, 1780 KiB  
Article
A Case Study on Monolith to Microservices Decomposition with Variational Autoencoder-Based Graph Neural Network
by Rokin Maharjan, Korn Sooksatra, Tomas Cerny, Yudeep Rajbhandari and Sakshi Shrestha
Future Internet 2025, 17(7), 303; https://doi.org/10.3390/fi17070303 - 13 Jul 2025
Viewed by 289
Abstract
The microservice architecture is popular for developing cloud-native applications. However, decomposing a monolithic application into microservices remains a challenging task. This complexity arises from the need to account for factors such as component dependencies, cohesive clusters, and bounded contexts. To address this challenge, we present an automated approach to decomposing monolithic applications into microservices. Our approach uses static code analysis to generate a dependency graph of the monolithic application. Then, a variational autoencoder (VAE) is used to extract features from the components of the monolithic application. Finally, the C-means algorithm is used to cluster the components into possible microservices. We evaluate our approach using a third-party benchmark comprising both monolithic and microservice implementations. Additionally, we compare its performance against two existing decomposition techniques. The results demonstrate the potential of our method as a practical tool for guiding the transition from monolithic to microservice architectures. Full article
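The final clustering step can be sketched with a small fuzzy C-means implementation; random vectors stand in for the VAE embeddings, and the fuzzifier m and cluster count are assumed values.

```python
# Minimal fuzzy C-means over stand-in component embeddings, mirroring the
# paper's final clustering step; the VAE features are simulated here.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                          # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))          # standard FCM membership update
        U /= U.sum(axis=0)
    return U.argmax(axis=0), centers            # hard assignment per component

X = np.random.default_rng(1).normal(size=(30, 8))   # 30 components, 8-dim embeddings
labels, _ = fuzzy_cmeans(X)
print(labels)                                   # candidate microservice per component
```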

22 pages, 4661 KiB  
Article
The Investigation of Queuing Models to Calculate Journey Times to Develop an Intelligent Transport System for Smart Cities
by Vatsal Mehta, Glenford Mapp and Vaibhav Gandhi
Future Internet 2025, 17(7), 302; https://doi.org/10.3390/fi17070302 - 7 Jul 2025
Viewed by 424
Abstract
Intelligent transport systems are a major component of smart cities because their deployment should result in reduced journey times, less traffic congestion and a significant reduction in road deaths, which will greatly improve the quality of life of their citizens. New technologies such as vehicular networks allow more information to be available in real time, and this information can be used with new analytical models to obtain more accurate estimates of journey times. This would be extremely useful to drivers and would also enable transport authorities to optimise the transport network. This paper addresses these issues using a model-based approach to provide a new way of estimating the delay along specified routes. A journey is defined as the traversal of several road links and junctions from source to destination. The delay at the junctions is analysed using the zero-server Markov chain technique. This is then combined with the Jackson network to analyse the delay across multiple junctions. The delay at road links is analysed using an M/M/K/K model. The results were validated using two simulators: SUMO and VISSIM. A real scenario is also examined to determine the best route. The preliminary results of this model-based analysis look promising, but more work is needed to make it useful for wide-scale deployment. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
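The road-link model invites a worked example: for an M/M/K/K queue, the probability that an arrival is blocked is given by the classic Erlang-B formula, computable with the standard recursion below. The load and capacity values are illustrative, not taken from the paper.

```python
# Erlang-B blocking probability for an M/M/K/K system, via the stable
# recursion B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)), with offered load
# a = lambda/mu and capacity K.
def erlang_b(a: float, k: int) -> float:
    b = 1.0
    for i in range(1, k + 1):
        b = a * b / (i + a * b)
    return b

print(erlang_b(a=80.0, k=100))   # blocking probability for load 80 on capacity 100
```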

20 pages, 1179 KiB  
Article
Conv1D-GRU-Self Attention: An Efficient Deep Learning Framework for Detecting Intrusions in Wireless Sensor Networks
by Kenan Honore Robacky Mbongo, Kanwal Ahmed, Orken Mamyrbayev, Guanghui Wang, Fang Zuo, Ainur Akhmediyarova, Nurzhan Mukazhanov and Assem Ayapbergenova
Future Internet 2025, 17(7), 301; https://doi.org/10.3390/fi17070301 - 4 Jul 2025
Viewed by 404
Abstract
Wireless Sensor Networks (WSNs) consist of distributed sensor nodes that collect and transmit environmental data, often in resource-constrained and unsecured environments. These characteristics make WSNs highly vulnerable to various security threats. To address this, the objective of this research is to design and evaluate a deep learning-based Intrusion Detection System (IDS) that is both accurate and efficient for real-time threat detection in WSNs. This study proposes a hybrid IDS model combining one-dimensional Convolutional Neural Networks (Conv1Ds), Gated Recurrent Units (GRUs), and Self-Attention mechanisms. A Conv1D extracts spatial features from network traffic, GRU captures temporal dependencies, and Self-Attention emphasizes critical sequence components, collectively enhancing detection of subtle and complex intrusion patterns. The model was evaluated using the WSN-DS dataset and demonstrated superior performance compared to traditional machine learning and simpler deep learning models. It achieved an accuracy of 98.6%, precision of 98.63%, recall of 98.6%, F1-score of 98.6%, and an ROC-AUC of 0.9994, indicating strong predictive capability even with imbalanced data. In addition to centralized training, the model was tested under cooperative, node-based learning conditions, where each node independently detects anomalies and contributes to a collective decision-making framework. This distributed approach improves detection efficiency and robustness. The proposed IDS offers a scalable and resilient solution tailored to the unique challenges of WSN security. Full article
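A plausible Keras rendering of the described Conv1D, GRU, and self-attention pipeline, with assumed layer sizes and input shape; WSN-DS has one normal class and four attack classes, hence the five-way softmax.

```python
# Hypothetical Keras sketch of the Conv1D -> GRU -> self-attention pipeline;
# sizes and input shape are assumptions, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(32, 18))                       # (timesteps, features) placeholder
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)  # spatial features
x = layers.GRU(64, return_sequences=True)(x)                # temporal dependencies
attn = layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)  # self-attention over the sequence
x = layers.GlobalAveragePooling1D()(attn)
outputs = layers.Dense(5, activation="softmax")(x)          # WSN-DS: normal + 4 attack classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```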

19 pages, 1514 KiB  
Article
A UAV Trajectory Optimization and Task Offloading Strategy Based on Hybrid Metaheuristic Algorithm in Mobile Edge Computing
by Yeqiang Zheng, An Li, Yihu Wen and Gaocai Wang
Future Internet 2025, 17(7), 300; https://doi.org/10.3390/fi17070300 - 3 Jul 2025
Viewed by 355
Abstract
In the UAV-assisted mobile edge computing (MEC) communication system, the UAV receives the data offloaded by multiple ground user devices as an aerial base station. Because of a UAV's limited battery storage, energy saving is a key issue in a UAV-assisted MEC system. However, for a low-altitude flying UAV, successful obstacle avoidance is also very necessary. This paper aims to maximize the system energy efficiency (defined as the ratio of the total amount of offloaded data to the energy consumption of the UAV) while meeting the maneuverability and three-dimensional obstacle avoidance constraints of a UAV. A joint optimization strategy with maximized energy efficiency for the UAV flight trajectory and user device task offloading rate is proposed. To solve this problem, hybrid alternating metaheuristics for energy optimization are given. Due to the non-convexity and fractional structure of the optimization problem, it can be transformed into an equivalent parameter optimization problem using the Dinkelbach method and then divided into two sub-optimization problems that are alternately optimized using metaheuristic algorithms. The experimental results show that the strategy proposed in this paper can enable a UAV to avoid obstacles during flight by detouring or crossing, and the trajectory does not overlap with obstacles, effectively achieving two-dimensional and three-dimensional obstacle avoidance. In addition, the solving method in this paper has a significantly higher success rate than traditional algorithms. In comparison with related optimization strategies, the strategy proposed in this paper can effectively reduce the overall energy consumption of the UAV. Full article
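The Dinkelbach transformation mentioned in the abstract is easy to demonstrate generically: maximize f(x)/g(x) by repeatedly solving max f(x) - lam*g(x) and updating lam to the current ratio. The sketch below uses a crude grid search as the inner solver on a toy scalar problem; the paper's metaheuristic subsolvers are not reproduced.

```python
# Generic Dinkelbach iteration for a fractional objective f(x)/g(x), g > 0.
# The inner "solver" here is a grid search over a toy scalar problem.
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        x = candidates[np.argmax(vals)]         # solve the parameterized subproblem
        if f(x) - lam * g(x) < tol:             # converged: F(lam) is (near) zero
            return x, lam
        lam = f(x) / g(x)                       # Dinkelbach update
    return x, lam

xs = np.linspace(0.01, 5, 5000)
x_star, ratio = dinkelbach(lambda x: np.log1p(x), lambda x: 0.2 + x, xs)
print(x_star, ratio)   # maximizer and maximum of log(1+x) / (0.2 + x)
```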

25 pages, 1524 KiB  
Article
Detecting Emerging DGA Malware in Federated Environments via Variational Autoencoder-Based Clustering and Resource-Aware Client Selection
by Ma Viet Duc, Pham Minh Dang, Tran Thu Phuong, Truong Duc Truong, Vu Hai and Nguyen Huu Thanh
Future Internet 2025, 17(7), 299; https://doi.org/10.3390/fi17070299 - 3 Jul 2025
Viewed by 361
Abstract
Domain Generation Algorithms (DGAs) remain a persistent technique used by modern malware to establish stealthy command-and-control (C&C) channels, thereby evading traditional blacklist-based defenses. Detecting such evolving threats is especially challenging in decentralized environments where raw traffic data cannot be aggregated due to privacy or policy constraints. To address this, we present FedSAGE, a security-aware federated intrusion detection framework that combines Variational Autoencoder (VAE)-based latent representation learning with unsupervised clustering and resource-efficient client selection. Each client encodes its local domain traffic into a semantic latent space using a shared, pre-trained VAE trained solely on benign domains. These embeddings are clustered via affinity propagation to group clients with similar data distributions and identify outliers indicative of novel threats without requiring any labeled DGA samples. Within each cluster, FedSAGE selects only the fastest clients for training, balancing computational constraints with threat visibility. Experimental results from the multi-zones DGA dataset show that FedSAGE improves detection accuracy by up to 11.6% and reduces energy consumption by up to 93.8% compared to standard FedAvg under non-IID conditions. Notably, the latent clustering perfectly recovers ground-truth DGA family zones, enabling effective anomaly detection in a fully unsupervised manner while remaining privacy-preserving. These foundations demonstrate that FedSAGE is a practical and lightweight approach for decentralized detection of evasive malware, offering a viable solution for secure and adaptive defense in resource-constrained edge environments. Full article
(This article belongs to the Special Issue Security of Computer System and Network)
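FedSAGE's grouping step can be sketched directly with scikit-learn: affinity propagation over per-client latent summaries. The VAE itself and the fastest-client selection are out of scope here, and the embeddings are simulated.

```python
# Sketch of the clustering step: affinity propagation over per-client latent
# summaries (means of simulated VAE embeddings stand in for real ones).
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# 12 clients: three latent "zones" of 4 clients each (stand-in for DGA families).
client_means = np.vstack([rng.normal(c, 0.3, size=(4, 16)) for c in (-2.0, 0.0, 2.0)])

ap = AffinityPropagation(random_state=0).fit(client_means)
print("cluster per client:", ap.labels_)       # clients with similar traffic group together
```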

24 pages, 4350 KiB  
Article
HECS4MQTT: A Multi-Layer Security Framework for Lightweight and Robust Encryption in Healthcare IoT Communications
by Saud Alharbi, Wasan Awad and David Bell
Future Internet 2025, 17(7), 298; https://doi.org/10.3390/fi17070298 - 30 Jun 2025
Viewed by 359
Abstract
Internet of Things (IoT) technology in healthcare has enabled innovative services that enhance patient monitoring, diagnostics and medical data management. However, securing sensitive health data while maintaining the efficiency of resource-constrained IoT devices remains a critical challenge. This work presents a comprehensive end-to-end IoT security framework for healthcare environments, addressing encryption at two key levels: lightweight encryption at the edge for resource-constrained devices and robust end-to-end encryption when transmitting data to the cloud via MQTT cloud brokers. The proposed system leverages a multi-broker MQTT architecture to optimize resource utilization and enhance message reliability. At the edge, lightweight cryptographic techniques ensure low-latency encryption before transmitting data via a secure MQTT broker hosted within the hospital infrastructure. To safeguard data as it moves beyond the hospital to the cloud, stronger end-to-end encryption, such as AES-256 with TLS 1.3, is applied to ensure confidentiality and resilience over untrusted networks. A proof-of-concept Python 3.10-based MQTT implementation is developed using open-source technologies. Security and performance evaluations demonstrate the feasibility of the multi-layer encryption approach, effectively balancing computational overhead with data protection, and show that our novel HECS4MQTT (Health Edge Cloud Security for MQTT) framework achieves a unique balance between efficiency and security. Unlike existing solutions that either impose high computational overhead at the edge or rely solely on transport-layer protection, HECS4MQTT introduces a layered encryption strategy that decouples edge and cloud security requirements. This design minimizes processing delays on constrained devices while maintaining strong cryptographic protection when data crosses trust boundaries. The framework also introduces a lightweight bridge component for re-encryption and integrity enforcement, thereby reducing broker compromise risk and supporting compliance with healthcare security regulations. Our HECS4MQTT framework offers a scalable, adaptable, and trust-separated security model, ensuring enhanced confidentiality, integrity, and availability of healthcare data while remaining suitable for deployment in real-world, latency-sensitive, and resource-limited medical environments. Full article
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)
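The cloud-bound leg can be illustrated with AES-256-GCM payload encryption ahead of an MQTT publish; key management, the broker topology, and the TLS transport are elided, and the topic and payload below are hypothetical.

```python
# Sketch of authenticated payload encryption before MQTT publishing, assuming
# AES-256-GCM as named in the abstract; key handling is a placeholder.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)      # in practice: a provisioned, managed key
aesgcm = AESGCM(key)

def encrypt_reading(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                     # unique nonce per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_reading(blob: bytes) -> bytes:
    return aesgcm.decrypt(blob[:12], blob[12:], None)

ct = encrypt_reading(b'{"hr": 72, "spo2": 98}')
assert decrypt_reading(ct) == b'{"hr": 72, "spo2": 98}'
# The ciphertext would then be published over TLS via any MQTT client, e.g.
# paho-mqtt: client.publish("ward/vitals/patient42", ct)  # hypothetical topic
```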

25 pages, 2065 KiB  
Article
Lower-Latency Screen Updates over QUIC with Forward Error Correction
by Nooshin Eghbal and Paul Lu
Future Internet 2025, 17(7), 297; https://doi.org/10.3390/fi17070297 - 30 Jun 2025
Viewed by 274
Abstract
There are workloads that do not need the total data ordering enforced by the Transmission Control Protocol (TCP). For example, Virtual Network Computing (VNC) has a sequence of pixel-based updates in which the order of rectangles can be relaxed. However, VNC runs over TCP and can have higher latency due to unnecessary blocking to ensure total ordering. By using Quick UDP Internet Connections (QUIC) as the underlying protocol, we are able to implement a partial order delivery approach, which can be combined with Forward Error Correction (FEC) to reduce data latency. Our earlier work on consistency fences provides a mechanism and semantic foundation for partial ordering. Our new evaluation on the Emulab testbed, with two different synthetic workloads for streaming and non-streaming updates, shows that our partial order and FEC strategy can reduce the blocking time and inter-delivery time of rectangles compared to total delivery. For one workload, partially ordered data with FEC can reduce the 99th-percentile message-blocking time to 0.4 ms versus 230 ms with totally ordered data. That workload was run with 0.5% packet loss, 100 ms Round-Trip Time (RTT), and 100 Mbps bandwidth. We study the impact of varying the packet-loss rate, RTT, bandwidth, and congestion control algorithm (CCA) and demonstrate that the latency improvements of partial order and FEC grow as we increase packet loss and RTT, especially with the emerging Bottleneck Bandwidth and Round-Trip propagation time (BBR) congestion control algorithm. Full article
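A toy XOR-parity example conveys the FEC idea: one parity packet per group lets the receiver rebuild any single lost packet without waiting for a retransmission, which is what keeps partially ordered rectangles flowing. The scheme below is a generic single-loss code, not the paper's exact FEC.

```python
# Toy XOR-parity FEC: parity = XOR of k equal-length packets, so any single
# missing packet is recoverable from the survivors plus the parity.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"rect-update-0001", b"rect-update-0002", b"rect-update-0003"]
parity = reduce(xor_bytes, group)

received = [group[0], None, group[2]]          # packet 1 lost in transit
recovered = reduce(xor_bytes, [p for p in received if p is not None] + [parity])
assert recovered == group[1]
print(recovered)
```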

21 pages, 2134 KiB  
Article
Optimizing Trajectories for Rechargeable Agricultural Robots in Greenhouse Climatic Sensing Using Deep Reinforcement Learning with Proximal Policy Optimization Algorithm
by Ashraf Sharifi, Sara Migliorini and Davide Quaglia
Future Internet 2025, 17(7), 296; https://doi.org/10.3390/fi17070296 - 30 Jun 2025
Viewed by 236
Abstract
The experimentation of agricultural robots has been increasing in recent years, both in greenhouses and open fields. While agricultural robots are inherently useful for automating various farming tasks, their presence can also be leveraged to collect measurements along their paths. This approach enables the creation of a complete and detailed picture of the climate conditions inside a greenhouse, reducing the need to distribute a large number of physical devices among the crops. In this regard, choosing the best visiting sequence of the Points of Interest (PoIs) regarding where to perform the measurements deserves particular attention. This trajectory planning has to carefully combine the amount and significance of the collected data with the energy requirements of the robot. In this paper, we propose a method based on Deep Reinforcement Learning enriched with a Proximal Policy Optimization (PPO) algorithm for determining the best trajectory an agricultural robot must follow to balance the number of measurements and autonomy adequately. The proposed approach has been applied to a real-world case study regarding a greenhouse in Verona (Italy) and compared with other existing state-of-the-art approaches. Full article
(This article belongs to the Special Issue Smart Technology: Artificial Intelligence, Robotics and Algorithms)
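The paper's environment model, reward shaping, and robot kinematics are not reproduced in the abstract; the following is a minimal, hypothetical sketch of how a PoI-visiting greenhouse environment could be framed for PPO training with stable-baselines3. All PoI coordinates, energy costs, and reward weights below are invented for illustration.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class GreenhousePoIEnv(gym.Env):
    """Toy env: pick the next PoI, earn reward for new measurements,
    pay an energy cost proportional to the distance travelled."""

    def __init__(self, n_pois=8, battery=100.0, seed=0):
        rng = np.random.default_rng(seed)
        self.pois = rng.uniform(0, 50, size=(n_pois, 2))  # PoI positions (m)
        self.full_battery = battery
        self.action_space = spaces.Discrete(n_pois)       # index of next PoI
        # Observation: robot (x, y), remaining battery, per-PoI visited flags.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(3 + n_pois,), dtype=np.float32)

    def _obs(self):
        return np.concatenate(
            [self.pos, [self.battery], self.visited]).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.zeros(2)                 # charging station at origin
        self.battery = self.full_battery
        self.visited = np.zeros(len(self.pois))
        return self._obs(), {}

    def step(self, action):
        target = self.pois[action]
        dist = np.linalg.norm(target - self.pos)
        self.pos = target
        self.battery -= 0.5 * dist             # energy cost of the move
        # New measurements are rewarded; revisits and wasted energy penalized.
        reward = (5.0 if not self.visited[action] else -1.0) - 0.1 * dist
        self.visited[action] = 1.0
        terminated = bool(self.battery <= 0 or self.visited.all())
        return self._obs(), reward, terminated, False, {}


# Train with PPO from stable-baselines3 (v2.x API).
from stable_baselines3 import PPO
model = PPO("MlpPolicy", GreenhousePoIEnv(), verbose=0)
model.learn(total_timesteps=50_000)
```

The reward's two terms mirror the trade-off the abstract describes: measurement coverage versus the robot's energy budget.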
22 pages, 1158 KiB  
Article
FODIT: A Filter-Based Module for Optimizing Data Storage in B5G IoT Environments
by Bruno Ramos-Cruz, Francisco J. Quesada-Real, Javier Andreu-Pérez and Jessica Zaqueros-Martinez
Future Internet 2025, 17(7), 295; https://doi.org/10.3390/fi17070295 - 30 Jun 2025
Viewed by 228
Abstract
In the rapidly evolving landscape of the Internet of Things (IoT), managing the vast volumes of data generated by connected devices presents significant challenges, particularly in B5G IoT environments. One key issue is data redundancy, where identical data is stored several times because it is captured by multiple sensors. To address this, we introduce “FODIT”, a filter-based module designed to optimize data storage in IoT systems. FODIT leverages probabilistic data structures, specifically filters, to improve storage efficiency and query performance. We hypothesize that applying these structures can significantly reduce redundancy and accelerate data access in resource-constrained IoT deployments. We validate our hypothesis through targeted simulations under a specific and rare configuration: high-frequency and high-redundancy environments, with controlled duplication rates between 4% and 8%. These experiments involve data storage in local databases, cloud-based systems, and distributed ledger technologies (DLTs). The results demonstrate FODIT’s ability to reduce storage requirements and improve query responsiveness under these stress-test conditions. Furthermore, the proposed approach has broader applicability, particularly in DLT-based environments such as blockchain, where efficient querying remains a critical challenge. Nonetheless, some limitations remain, especially regarding the current data structure used to maintain consistency with the DLT, and the need for further adaptation to real-world contexts with dynamic workloads. This research highlights the potential of filter-based techniques to improve data management in IoT and blockchain systems, contributing to the development of more scalable and responsive infrastructures. Full article
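The abstract does not pin down which probabilistic filter FODIT employs; a Bloom filter is the canonical choice, so the sketch below shows a filter-gated write path as one plausible reading. Bit-array size and hash count are illustrative, and the paper's actual structure and parameters may differ.

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunable false positives."""

    def __init__(self, n_bits=1 << 20, n_hashes=7):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions from one digest via double hashing.
        d = hashlib.sha256(item).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.n_bits for i in range(self.n_hashes)]

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))


store, bloom = [], BloomFilter()
for reading in [b"sensor1,22.5C", b"sensor2,22.5C", b"sensor1,22.5C"]:
    if reading not in bloom:        # "not in" is definitive: no false negatives
        bloom.add(reading)
        store.append(reading)       # only non-duplicate readings are stored
print(len(store))                   # -> 2: the duplicate reading was filtered
```

Because membership tests touch only a small bit array rather than the backing database or DLT, the duplicate check stays cheap even when the store itself is slow to query, which matches the query-responsiveness gains the abstract reports.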
24 pages, 798 KiB  
Article
ICRSSD: Identification and Classification for Railway Structured Sensitive Data
by Yage Jin, Hongming Chen, Rui Ma, Yanhua Wu and Qingxin Li
Future Internet 2025, 17(7), 294; https://doi.org/10.3390/fi17070294 - 30 Jun 2025
Viewed by 249
Abstract
The rapid growth of the railway industry has resulted in the accumulation of large volumes of structured data, making data security a critical component of reliable railway system operations. However, existing methods for identifying and classifying sensitive data often suffer from limitations such as overly coarse identification granularity and insufficient flexibility in classification. To address these issues, we propose ICRSSD, a two-stage identification and classification method tailored to the railway domain. The identification stage focuses on obtaining the sensitivity of all attributes. We first divide structured data into canonical data and semi-canonical data at a finer granularity to improve identification accuracy. For canonical data, we use information entropy to calculate the initial sensitivity and subsequently update the attribute sensitivities through cluster analysis and association rule mining. For semi-canonical data, we calculate attribute sensitivity using a combination of regular expressions and keyword lists. In the classification stage, to further enhance accuracy, we adopt a dynamic, multi-granularity classification strategy: it considers the relative sensitivity of attributes across different scenarios and classifies them into three levels based on the sensitivity values obtained during the identification stage. Additionally, we design a rule base specifically for the identification and classification of sensitive data in the railway domain. This rule base enables effective data identification and classification, while also supporting the expiry management of sensitive attribute labels. To improve the efficiency of regular expression generation, we develop an auxiliary tool with the help of large language models and a well-designed prompt framework. We conducted experiments on a real-world dataset from the railway domain. The results demonstrate that ICRSSD significantly improves the accuracy and adaptability of sensitive data identification and classification in the railway domain. Full article
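As a rough illustration of the entropy-based initial sensitivity used for canonical data, the sketch below scores a single tabular attribute by its normalized Shannon entropy: columns whose values are nearly unique (names, ID numbers) score close to 1, while low-variety columns score low. The normalization and any level thresholds here are illustrative assumptions, not the paper's definitions.

```python
import math
from collections import Counter


def attribute_sensitivity(values):
    """Normalized Shannon entropy of one column, in [0, 1]."""
    counts = Counter(values)
    n = len(values)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # Normalize by the maximum entropy for this many distinct values.
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy


column = ["Zhang", "Li", "Wang", "Zhang", "Chen", "Liu"]  # e.g., passenger names
print(round(attribute_sensitivity(column), 3))            # close to 1: high variety
```

In the paper's pipeline this initial score would then be refined by cluster analysis and association rule mining before the three-level classification is applied.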
22 pages, 5161 KiB  
Article
AUV Trajectory Planning for Optimized Sensor Data Collection in Internet of Underwater Things
by Talal S. Almuzaini and Andrey V. Savkin
Future Internet 2025, 17(7), 293; https://doi.org/10.3390/fi17070293 - 30 Jun 2025
Viewed by 253
Abstract
Efficient and timely data collection in Underwater Acoustic Sensor Networks (UASNs) for Internet of Underwater Things (IoUT) applications remains a significant challenge due to the inherent limitations of the underwater environment. This paper presents a Value of Information (VoI)-based trajectory planning framework for a single Autonomous Underwater Vehicle (AUV) operating in coordination with an Unmanned Surface Vehicle (USV) to collect data from multiple Cluster Heads (CHs) deployed across an uneven seafloor. The proposed approach employs a VoI model that captures both the importance and timeliness of sensed data, guiding the AUV to collect and deliver critical information before its value significantly degrades. A forward Dynamic Programming (DP) algorithm is used to jointly optimize the AUV’s trajectory and the USV’s start and end positions, with the objective of maximizing the total residual VoI upon mission completion. The trajectory design incorporates the AUV’s kinematic constraints into travel time estimation, enabling accurate VoI evaluation throughout the mission. Simulation results show that the proposed strategy consistently outperforms conventional baselines in terms of residual VoI and overall system efficiency. These findings highlight the advantages of VoI-aware planning and AUV–USV collaboration for effective data collection in challenging underwater environments. Full article
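As a toy illustration of VoI-aware planning, the sketch below runs a memoized dynamic program over visit orders, with each Cluster Head's value decaying exponentially until it is collected. Positions, speed, and decay rates are invented, and the paper's full formulation (joint optimization of the USV's start and end positions, kinematic travel-time estimates) is not reproduced here.

```python
import math
from functools import lru_cache

# ((x, y), initial VoI, decay rate 1/s) for each Cluster Head (made-up values).
chs = [((10, 0), 5.0, 0.01),
       ((0, 15), 8.0, 0.02),
       ((12, 12), 6.0, 0.005)]
SPEED = 1.5  # assumed AUV speed, m/s


def travel(a, b):
    """Straight-line travel time between two points."""
    return math.dist(a, b) / SPEED


@lru_cache(maxsize=None)
def best(pos, visited, t):
    """Max residual VoI collectable from pos at time t, given visited bitmask."""
    options = [0.0]  # the AUV may also stop here
    for i, (p, v0, lam) in enumerate(chs):
        if visited & (1 << i):
            continue
        t_arr = t + travel(pos, p)
        voi = v0 * math.exp(-lam * t_arr)  # value remaining on arrival
        options.append(voi + best(p, visited | (1 << i), round(t_arr, 6)))
    return max(options)


print(round(best((0.0, 0.0), 0, 0.0), 3))  # residual VoI of the best tour
```

Even in this toy setting the decay term reproduces the abstract's core trade-off: a nearby low-value CH can outrank a distant high-value one once the travel time erodes the latter's information value.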
28 pages, 4804 KiB  
Article
Towards Automatic Detection of Pneumothorax in Emergency Care with Deep Learning Using Multi-Source Chest X-ray Data
by Santiago Ibañez Caturla, Juan de Dios Berná Mestre and Oscar Martinez Mozos
Future Internet 2025, 17(7), 292; https://doi.org/10.3390/fi17070292 - 29 Jun 2025
Viewed by 417
Abstract
Pneumothorax is a potentially life-threatening condition defined as the collapse of the lung due to air leakage into the chest cavity. Delays in the diagnosis of pneumothorax can lead to severe complications and even mortality. A significant challenge in pneumothorax diagnosis is the shortage of radiologists, which results in plain X-rays going without written reports and, consequently, impacts patient care. In this paper, we propose an automatic triage system for pneumothorax detection in X-ray images based on deep learning. We address this problem from the perspective of multi-source domain adaptation, where different datasets available on the Internet are used for training and testing. In particular, we use datasets that contain chest X-ray images corresponding to different conditions (including pneumothorax). A convolutional neural network (CNN) with an EfficientNet architecture is trained and optimized to identify radiographic signs of pneumothorax using those public datasets. We present the results using cross-dataset validation, demonstrating the robustness and generalization capabilities of our multi-source solution across different datasets. The experimental results demonstrate the model's potential to assist clinicians in prioritizing and correctly detecting urgent cases of pneumothorax under different integrated deployment strategies. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)
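The abstract names an EfficientNet-based CNN but not the exact variant or training setup; as a rough illustration, the sketch below adapts torchvision's EfficientNet-B0 to a binary pneumothorax output and runs one training step on dummy data. The variant, weights, input size, and optimizer settings are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=None)  # pretrained weights also possible
# EfficientNet-B0's classifier is Sequential(Dropout, Linear(1280, 1000));
# swap the final layer for a 2-class head (pneumothorax / no pneumothorax).
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

# One illustrative training step on a dummy batch of 3x224x224 "X-rays".
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])           # 1 = pneumothorax present
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In a multi-source setup like the paper's, batches would be drawn from several public chest X-ray datasets and evaluation would be cross-dataset, i.e., testing on a dataset never seen during training.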
20 pages, 2579 KiB  
Article
ERA-MADDPG: An Elastic Routing Algorithm Based on Multi-Agent Deep Deterministic Policy Gradient in SDN
by Wanwei Huang, Hongchang Liu, Yingying Li and Linlin Ma
Future Internet 2025, 17(7), 291; https://doi.org/10.3390/fi17070291 - 29 Jun 2025
Viewed by 317
Abstract
To address the impact that changes in network topology can have on routing performance, this paper proposes an Elastic Routing Algorithm based on Multi-Agent Deep Deterministic Policy Gradient (ERA-MADDPG), implemented within the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) framework of deep reinforcement learning. The algorithm first builds a three-layer architecture based on Software-Defined Networking (SDN); from top to bottom, the layers are the multi-agent layer, the controller layer, and the data layer. The architecture's processing flow, including real-time data-layer information collection and dynamic policy generation, enables the ERA-MADDPG algorithm to exhibit strong elasticity by quickly adjusting routing decisions in response to topology changes. Combining the actor-critic framework with Convolutional Neural Networks (CNNs) to implement the ERA-MADDPG routing algorithm effectively improves training efficiency, enhances learning stability, facilitates collaboration, and improves generalization and applicability. Finally, simulation experiments demonstrate that the ERA-MADDPG routing algorithm converges faster than the Multi-Agent Deep Q-Network (MADQN) algorithm and the Smart Routing based on Deep Reinforcement Learning (SR-DRL) algorithm, with initial-phase training speed improved by approximately 20.9% and 39.1% over MADQN and SR-DRL, respectively. The elasticity of ERA-MADDPG is quantified by its re-convergence speed: under 5–15% topology node/link changes, it re-converges over 25% faster than MADQN and SR-DRL, demonstrating a superior capability to maintain routing efficiency in dynamic environments. Full article
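As a structural sketch of the building blocks the abstract names, the code below wires up CNN-based actors (one per agent, over a local view of the topology) and a centralized critic that scores the joint observations and actions, which is the characteristic MADDPG arrangement. Tensor shapes, layer sizes, and the softmax action encoding are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

N_AGENTS, N_LINKS, GRID = 3, 16, 8  # agents, routing outputs, topology grid


class Actor(nn.Module):
    """Maps one agent's topology snapshot to per-link routing weights."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * GRID * GRID, N_LINKS), nn.Softmax(dim=-1))

    def forward(self, obs):            # obs: (batch, 1, GRID, GRID)
        return self.net(obs)


class CentralCritic(nn.Module):
    """Scores the joint state-action of all agents (centralized training)."""

    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (GRID * GRID + N_LINKS)
        self.net = nn.Sequential(
            nn.Linear(joint, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, all_obs, all_actions):
        flat = torch.cat([all_obs.flatten(1), all_actions.flatten(1)], dim=1)
        return self.net(flat)


actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
obs = torch.randn(4, N_AGENTS, 1, GRID, GRID)           # batch of 4 states
acts = torch.stack([a(obs[:, i]) for i, a in enumerate(actors)], dim=1)
q = critic(obs, acts)                                   # (4, 1) joint value
```

Centralized critics with decentralized actors are what let each agent adjust its routing policy locally at execution time while still being trained against the global topology state, which is where the re-convergence behavior after topology changes comes from.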