Future Internet, Volume 17, Issue 5 (May 2025) – 35 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
22 pages, 1034 KiB  
Article
A Novel Crowdsourcing-Assisted 5G Wireless Signal Ranging Technique in MEC Architecture
by Rui Lu, Lei Shi, Yinlong Liu and Zhongkai Dang
Future Internet 2025, 17(5), 220; https://doi.org/10.3390/fi17050220 - 14 May 2025
Abstract
In complex indoor and outdoor scenarios, traditional GPS-based ranging technology faces limitations in availability due to signal occlusion and user privacy issues. Wireless signal ranging technology based on 5G base stations has emerged as a potential alternative. However, existing methods are limited by low efficiency in constructing static signal databases, poor environmental adaptability, and high resource overhead, restricting their practical application. This paper proposes a 5G wireless signal ranging framework that integrates mobile edge computing (MEC) and crowdsourced intelligence to systematically address the aforementioned issues. This study designs a progressive solution by (1) building a crowdsourced data collection network, using mobile terminals equipped with GPS technology to automatically collect device signal features, replacing inefficient manual drive tests; (2) developing a progressive signal update algorithm that integrates real-time crowdsourced data and historical signals to optimize the signal fingerprint database in dynamic environments; (3) establishing an edge service architecture to offload signal matching and trajectory estimation tasks to MEC nodes, using lightweight computing engines to reduce the load on the core network. Experimental results demonstrate a mean positioning error of 5 m, with 95% of devices achieving errors within 10 m, as well as building and floor prediction error rates of 0.5% and 1%, respectively. The proposed framework outperforms traditional static methods by 3× in ranging accuracy while maintaining computational efficiency, achieving significant improvements in environmental adaptability and service scalability.
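A minimal sketch of one plausible reading of the progressive update in step (2): an exponential moving average that blends fresh crowdsourced RSS measurements into the stored per-cell fingerprint. The function name, grid-cell keying, and blending weight are assumptions, not the paper's algorithm.

```python
import numpy as np

def update_fingerprint(db, cell_id, crowd_rss, alpha=0.3):
    """Blend a new crowdsourced RSS vector into the stored fingerprint.

    db        -- dict mapping grid-cell id -> fingerprint vector (one RSS
                 value per observed 5G beam/cell at that location)
    crowd_rss -- newly reported RSS vector for the same grid cell
    alpha     -- weight of fresh data; larger adapts faster to change
    """
    new = np.asarray(crowd_rss, dtype=float)
    old = db.get(cell_id)
    db[cell_id] = new if old is None else (1 - alpha) * old + alpha * new
    return db[cell_id]

db = {}
update_fingerprint(db, (12, 7), [-85.0, -92.5, -101.0])
update_fingerprint(db, (12, 7), [-83.0, -95.0, -99.0])  # refines the estimate
```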
2 pages, 130 KiB  
Correction
Correction: Kalodanis et al. High-Risk AI Systems—Lie Detection Application. Future Internet 2025, 17, 26
by Konstantinos Kalodanis, Panagiotis Rizomiliotis, Georgios Feretzakis, Charalampos Papapavlou and Dimosthenis Anagnostopoulos
Future Internet 2025, 17(5), 219; https://doi.org/10.3390/fi17050219 - 14 May 2025
Abstract
In the original publication [...]
23 pages, 1875 KiB  
Article
U-SCAD: An Unsupervised Method of System Call-Driven Anomaly Detection for Containerized Edge Clouds
by Jiawei Ye, Ming Yan, Shenglin Wu, Jingxuan Tan and Jie Wu
Future Internet 2025, 17(5), 218; https://doi.org/10.3390/fi17050218 - 14 May 2025
Abstract
Container technology is currently one of the mainstream technologies in the field of cloud computing, yet its adoption in resource-constrained, latency-sensitive edge environments introduces unique security challenges. While existing system call-based anomaly-detection methods partially address these issues, they suffer from high false positive rates and excessive computational overhead. To achieve security and observability in edge-native containerized environments and lower the cost of computing resources, we propose an unsupervised anomaly-detection method based on system calls. This method filters out unnecessary system call data through automatic rule generation and an unsupervised classification model. To increase the accuracy of anomaly detection and reduce false positive rates, the method embeds system calls into sequences using the proposed Syscall2vec and processes the remaining sequences to facilitate the anomaly-detection model's analysis. We conduct experiments with our method on workloads representative of modern containerized cloud microservices. The results show that the detection part of our method improves the F1 score by 23.88% and 41.31% compared to HIDS and LSTM-VAE, respectively. Moreover, our method effectively reduces the original processing data to 13%, significantly lowering the cost of computing resources.
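Syscall2vec is not specified beyond its name; a sketch under the assumption that it follows the word2vec skip-gram recipe, treating each traced system call as a token and each process trace as a sentence (gensim shown; all parameter values are illustrative):

```python
from gensim.models import Word2Vec

# Each trace is the ordered system call sequence of one container process.
traces = [
    ["openat", "read", "mmap", "close"],
    ["socket", "connect", "sendto", "recvfrom", "close"],
]

# Skip-gram (sg=1) embedding of system calls into a small dense space.
model = Word2Vec(traces, vector_size=32, window=5, min_count=1, sg=1)

vec = model.wv["openat"]                 # embedding of a single system call
seq = [model.wv[s] for s in traces[0]]   # embedded sequence for the detector
```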
22 pages, 5507 KiB  
Article
A Web-Based Application for Smart City Data Analysis and Visualization
by Panagiotis Karampakakis, Despoina Ioakeimidou, Periklis Chatzimisios and Konstantinos A. Tsintotas
Future Internet 2025, 17(5), 217; https://doi.org/10.3390/fi17050217 - 13 May 2025
Abstract
Smart cities are urban areas that use contemporary technology to improve citizens' overall quality of life. These modern digital civil hubs aim to manage environmental conditions, traffic flow, and infrastructure through interconnected and data-driven decision-making systems. Today, many applications employ intelligent sensors for real-time data acquisition, leveraging visualization to derive actionable insights. However, despite the proliferation of such platforms, challenges like high data volume, noise, and incompleteness continue to hinder practical visual analysis. As missing data is a frequent issue in visualizing such urban sensing systems, our approach prioritizes its correction as a fundamental step. We deploy a hybrid imputation strategy combining SARIMAX, k-nearest neighbors, and random forest regression to address this. Building on this foundation, we propose an interactive web-based pipeline that processes, analyzes, and presents the sensor data provided by Basel's "Smarte Strasse". Our platform receives and visualizes environmental measurements, i.e., NO2, O3, PM2.5, and traffic noise, as well as mobility indicators such as vehicle speed and type, parking occupancy, and electric vehicle charging behavior. By resolving gaps in the data, we provide a solid foundation for high-fidelity, high-quality visual analytics. Built on the Flask web framework, the platform incorporates performance optimizations through Flask-Caching. The user dashboard supports interactive exploration via dynamic charts and spatial maps. In this way, we demonstrate how future internet technologies make complex urban sensor data accessible for research, planning, and public engagement. Lastly, our open-source web-based application supports reproducible, privacy-aware urban analytics.
(This article belongs to the Section Smart System Infrastructure and Applications)
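A minimal sketch of how such a hybrid imputation could be wired together; the column layout, model orders, and the SARIMAX-then-KNN fallback order are assumptions, not the paper's exact pipeline.

```python
import pandas as pd
from sklearn.impute import KNNImputer
from statsmodels.tsa.statespace.sarimax import SARIMAX

def impute_series(s: pd.Series, seasonal_period=24) -> pd.Series:
    """Fill gaps in one hourly sensor series with SARIMAX in-sample predictions."""
    res = SARIMAX(s, order=(1, 0, 1),
                  seasonal_order=(1, 0, 1, seasonal_period)).fit(disp=False)
    filled = s.copy()
    missing = s.isna()
    filled[missing] = res.predict()[missing]   # model-based values at the gaps
    return filled

def impute_frame(df: pd.DataFrame) -> pd.DataFrame:
    """Cross-sensor fallback: KNN over correlated columns (e.g., NO2, O3, PM2.5)."""
    return pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(df),
                        index=df.index, columns=df.columns)
```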
20 pages, 1198 KiB  
Article
Mitigating Class Imbalance in Network Intrusion Detection with Feature-Regularized GANs
by Jing Li, Wei Zong, Yang-Wai Chow and Willy Susilo
Future Internet 2025, 17(5), 216; https://doi.org/10.3390/fi17050216 - 13 May 2025
Abstract
Network Intrusion Detection Systems (NIDS) often suffer from severe class imbalance, where minority attack types are underrepresented, leading to degraded detection performance. To address this challenge, we propose a novel augmentation framework that integrates Soft Nearest Neighbor Loss (SNNL) into Generative Adversarial Networks (GANs), including WGAN, CWGAN, and WGAN-GP. Unlike traditional oversampling methods (e.g., SMOTE, ADASYN), our approach improves feature-space alignment between real and synthetic samples, enhancing classifier generalization on rare classes. Experiments on NSL-KDD, CSE-CIC-IDS2017, and CSE-CIC-IDS2018 show that SNNL-augmented GANs consistently improve minority-class F1-scores without degrading overall accuracy or majority-class performance. UMAP visualizations confirm that SNNL produces more compact and class-consistent sample distributions. We also evaluate the computational overhead, finding the added cost moderate. These results demonstrate the effectiveness and practicality of SNNL as a general enhancement for GAN-based data augmentation in imbalanced NIDS tasks.
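The SNNL term has a standard published form; a compact PyTorch rendering that could serve as the auxiliary objective during GAN training (the temperature value is illustrative):

```python
import torch

def soft_nearest_neighbor_loss(features, labels, temperature=100.0):
    """SNNL: entanglement of same-class points in feature space."""
    d = torch.cdist(features, features).pow(2)      # pairwise squared distances
    n = features.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = torch.exp(-d / temperature).masked_fill(eye, 0.0)  # drop self-pairs
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye
    num = (sim * same).sum(dim=1)      # similarity mass to same-class points
    den = sim.sum(dim=1)               # similarity mass to all other points
    # Epsilon guards against classes represented by a single sample.
    return -torch.log((num + 1e-8) / (den + 1e-8)).mean()
```

Minimizing this term pulls same-class real and synthetic features together, which is the feature-space alignment the abstract describes.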
36 pages, 2702 KiB  
Article
Multi-Criteria Genetic Algorithm for Optimizing Distributed Computing Systems in Neural Network Synthesis
by Valeriya V. Tynchenko, Ivan Malashin, Sergei O. Kurashkin, Vadim Tynchenko, Andrei Gantimurov, Vladimir Nelyub and Aleksei Borodulin
Future Internet 2025, 17(5), 215; https://doi.org/10.3390/fi17050215 - 13 May 2025
Abstract
Artificial neural networks (ANNs) are increasingly effective in addressing complex scientific and technological challenges. However, challenges persist in synthesizing neural network models and defining their structural parameters. This study investigates the use of parallel evolutionary algorithms on distributed computing systems (DCSs) to optimize energy consumption and computational time. New mathematical models for DCS performance and reliability are proposed, based on a mass service (queueing) system framework, along with a multi-criteria optimization model designed for resource-intensive computational problems. This model employs a multi-criteria genetic algorithm (GA) to generate a diverse set of Pareto-optimal solutions. Additionally, a decision-support system (DSS) is developed, incorporating the multi-criteria GA, allowing for customization of the GA and the construction of specialized ANNs for specific problem domains. The application of the DSS demonstrated performance of 1220.745 TFLOPS and an availability factor of 99.03%. These findings highlight the potential of the proposed DCS framework to enhance computational efficiency in relevant applications.
(This article belongs to the Special Issue Parallel and Distributed Systems)
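As a concrete illustration of the Pareto-optimality criterion the multi-criteria GA maintains, a minimal non-dominated filter over candidate DCS configurations (the two criteria and sample values are invented for illustration):

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset, minimizing every criterion."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some q is no worse everywhere and better somewhere.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# (energy consumption, computation time) per candidate DCS configuration
configs = [(3.1, 40.0), (2.4, 55.0), (3.0, 42.0), (2.4, 54.0)]
print(pareto_front(configs))   # (2.4, 55.0) drops out, dominated by (2.4, 54.0)
```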
16 pages, 2616 KiB  
Article
Low-Complexity Microclimate Classification in Smart Greenhouses: A Fuzzy-Neural Approach
by Cristian Bua, Francesco Fiorini, Michele Pagano, Davide Adami and Stefano Giordano
Future Internet 2025, 17(5), 214; https://doi.org/10.3390/fi17050214 - 13 May 2025
Abstract
Maintaining optimal microclimatic conditions within greenhouses represents a significant challenge in modern agricultural contexts, where prediction systems play a crucial role in regulating temperature and humidity, thereby enabling timely interventions to prevent plant diseases or adverse growth conditions. In this work, we propose a novel approach that integrates a cascaded Feed-Forward Neural Network (FFNN) with the Granular Computing paradigm to achieve accurate microclimate forecasting with reduced computational complexity. The experimental results demonstrate that the accuracy of our approach matches that of the FFNN-based approach while reducing complexity, making this solution particularly well suited for deployment on edge devices with limited computational capabilities. Our approach has been validated using a real-world dataset collected from four greenhouses and integrated into a distributed network architecture. This setup supports the execution of predictive models both on sensors deployed within the greenhouse and at the network edge, where more computationally intensive models can be utilized to enhance decision-making accuracy.
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)
3 pages, 144 KiB  
Editorial
Industrial Internet of Things (IIoT): Trends and Technologies
by Zhihao Liu, Franco Davoli and Davide Borsatti
Future Internet 2025, 17(5), 213; https://doi.org/10.3390/fi17050213 - 13 May 2025
Abstract
The Industrial Internet of Things (IIoT) integrates sensors, machines, and data processing in industrial facilities to enable real-time monitoring, predictive insights, and autonomous control of equipment [...]
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
33 pages, 10361 KiB  
Article
Enhanced Propaganda Detection in Public Social Media Discussions Using a Fine-Tuned Deep Learning Model: A Diffusion of Innovation Perspective
by Pir Noman Ahmad, Adnan Muhammad Shah and KangYoon Lee
Future Internet 2025, 17(5), 212; https://doi.org/10.3390/fi17050212 - 12 May 2025
Abstract
During the COVID-19 pandemic, social media platforms emerged as both vital information sources and conduits for the rapid spread of propaganda and misinformation. However, existing studies often rely on single-label classification, lack contextual sensitivity, or use models that struggle to effectively capture nuanced propaganda cues across multiple categories. These limitations hinder the development of robust, generalizable detection systems in dynamic online environments. In this study, we propose a novel deep learning (DL) framework grounded in fine-tuning the RoBERTa model for a multi-label, multi-class (ML-MC) classification task, selecting RoBERTa for its strong contextual representation capabilities and demonstrated superiority in complex NLP tasks. Our approach is rigorously benchmarked against traditional and neural methods, including TF-IDF with n-grams, Conditional Random Fields (CRFs), and long short-term memory (LSTM) networks. While LSTM models show strong performance in capturing sequential patterns, our RoBERTa-based model achieves the highest overall accuracy at 88%, outperforming state-of-the-art baselines. Framed within the diffusion of innovations theory, the proposed model offers clear relative advantages in accuracy, scalability, and contextual adaptability that support its early adoption by Information Systems researchers and practitioners. This study not only contributes a high-performing detection model but also delivers methodological and theoretical insights for combating propaganda in digital discourse, enhancing resilience in online information ecosystems.
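A minimal sketch of multi-label RoBERTa fine-tuning with Hugging Face Transformers; the label names are hypothetical, and the paper's exact head and hyperparameters are not given in the abstract.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["loaded_language", "name_calling", "doubt", "flag_waving"]  # assumed

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # uses BCE-with-logits loss
)

batch = tok(["Example post to classify."], padding=True,
            truncation=True, return_tensors="pt")
targets = torch.tensor([[1.0, 0.0, 1.0, 0.0]])    # multi-hot label vector

loss = model(**batch, labels=targets).loss         # backprop in a training loop
probs = torch.sigmoid(model(**batch).logits)       # independent per-label scores
```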
27 pages, 9653 KiB  
Article
DNS over HTTPS Tunneling Detection System Based on Selected Features via Ant Colony Optimization
by Hardi Sabah Talabani, Zrar Khalid Abdul and Hardi Mohammed Mohammed Saleh
Future Internet 2025, 17(5), 211; https://doi.org/10.3390/fi17050211 - 7 May 2025
Abstract
DNS over HTTPS (DoH) is an advanced version of the traditional DNS protocol that prevents eavesdropping and man-in-the-middle attacks by encrypting queries and responses. However, it introduces new challenges, such as encrypted traffic communication, masking of malicious activity, tunneling attacks, and complication of intrusion detection system (IDS) packet inspection. In contrast, unencrypted packets in the traditional non-DoH version remain vulnerable to eavesdropping, privacy breaches, and spoofing. To address these challenges, an optimized dual-path feature selection approach is designed to select the most efficient packet features for binary (DoH-Normal, DoH-Malicious) and multiclass (Non-DoH, DoH-Normal, DoH-Malicious) classification. Ant Colony Optimization (ACO) is integrated with machine learning algorithms such as XGBoost, K-Nearest Neighbors (KNN), Random Forest (RF), and Convolutional Neural Networks (CNNs), using CIRA-CIC-DoHBrw-2020 as the benchmark dataset. Experimental results show that the proposed model selects the most effective features for both scenarios, achieving the highest detection performance and outperforming previous IDS studies. The highest accuracy obtained for binary and multiclass classification was 0.9999 and 0.9955, respectively. The optimized feature set contributed significantly to reducing computational costs and processing time across all utilized classifiers. The results provide a robust, fast, and accurate solution to the challenges associated with encrypted DNS packets.
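The abstract does not detail the ACO variant; a simplified sketch of pheromone-guided feature-subset search, with a random-forest scorer standing in for the paper's classifiers (all parameters are illustrative and kept small):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def aco_feature_selection(X, y, n_ants=10, n_iter=15, k=10, rho=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    tau = np.ones(n_features)                 # pheromone level per feature
    best_subset, best_score = None, -np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            p = tau / tau.sum()               # sample features by pheromone
            subset = rng.choice(n_features, size=k, replace=False, p=p)
            clf = RandomForestClassifier(n_estimators=50, random_state=0)
            score = cross_val_score(clf, X[:, subset], y, cv=3).mean()
            tau[subset] += score              # reinforce useful features
            if score > best_score:
                best_subset, best_score = subset, score
        tau *= 1.0 - rho                      # pheromone evaporation
    return best_subset, best_score
```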
23 pages, 682 KiB  
Article
A Blockchain-Based Strategy for Certifying Timestamps in Distributed Healthcare Emergency Response Systems
by Daniele Marletta, Alessandro Midolo and Emiliano Tramontana
Future Internet 2025, 17(5), 210; https://doi.org/10.3390/fi17050210 - 7 May 2025
Abstract
A high level of data integrity is a strong requirement in systems where people's lives depend on accurate and timely responses. In healthcare emergency response systems, a centralized authority that handles data related to occurring events is prone to challenges such as disputes over event timestamps and data authenticity. To address both the potential lack of trust among collaborating parties and the inability of an authority to clearly certify events by itself, this paper proposes a blockchain-based framework designed to provide proof of the integrity and authenticity of data in healthcare emergency response systems. The proposed solution integrates blockchain technology to certify the accuracy of events throughout the incident lifecycle. Critical events are timestamped and hashed using SHA-256; the hashes are then stored immutably on an EVM-compatible blockchain via smart contracts. The system combines blockchain technology with cloud storage to ensure scalability, security, and transparency. Blockchain technology eliminates the need for a trusted timestamping server, reducing costs by forgoing such a service. Experimental results, using publicly available incident data, demonstrate the feasibility and effectiveness of this approach. The system provides a cost-effective, scalable solution for managing incident data while maintaining proof of its integrity. The proposed blockchain-based framework offers a reliable, transparent mechanism for certifying incident-related data, fostering trust among healthcare emergency response system actors.
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT—3rd Edition)
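A minimal sketch of the anchoring step, assuming a deployed contract that exposes a hypothetical storeHash(bytes32) method; the ABI, address, and endpoint below are placeholders, not the paper's artifacts.

```python
import hashlib
import json
import time
from web3 import Web3

# Hash the incident record locally; only the 32-byte digest goes on-chain.
event = {"incident_id": "INC-001", "type": "ambulance_dispatch",
         "ts": int(time.time())}
digest = hashlib.sha256(json.dumps(event, sort_keys=True).encode()).digest()

contract_abi = [{"name": "storeHash", "type": "function",
                 "stateMutability": "nonpayable",
                 "inputs": [{"name": "h", "type": "bytes32"}], "outputs": []}]
contract_address = "0x0000000000000000000000000000000000000000"  # placeholder

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))   # any EVM endpoint
contract = w3.eth.contract(address=contract_address, abi=contract_abi)
tx_hash = contract.functions.storeHash(digest).transact(
    {"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)   # proof of anchoring
```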
21 pages, 2595 KiB  
Article
Adversarial Training for Mitigating Insider-Driven XAI-Based Backdoor Attacks
by R. G. Gayathri, Atul Sajjanhar and Yong Xiang
Future Internet 2025, 17(5), 209; https://doi.org/10.3390/fi17050209 - 6 May 2025
Abstract
This study investigates how adversarial training techniques can be used by an insider with privileged access to training data to introduce backdoors into deep learning models. The research demonstrates an insider-driven poison-label backdoor approach in which triggers are introduced into the training dataset. These triggers cause poisoned inputs to be misclassified while standard classification is maintained on clean data. An adversary can improve the stealth and effectiveness of such attacks by utilizing XAI techniques, which makes their detection more difficult. The study uses publicly available datasets to evaluate the robustness of deep learning models in this setting. Our experiments show that adversarial training considerably reduces the effectiveness of backdoor attacks. These results are verified using various performance metrics, revealing model vulnerabilities and possible countermeasures. The findings demonstrate the importance of robust training techniques and effective adversarial defenses in improving the security of deep learning models against insider-driven backdoor attacks.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
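A minimal sketch of the adversarial-training ingredient using a one-step FGSM perturbation; the abstract does not specify the exact attack or budget, so both are illustrative.

```python
import torch

def fgsm_adversarial_loss(model, loss_fn, x, y, eps=0.03):
    """Loss on FGSM-perturbed inputs, to be mixed into a training step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # Perturb each input in the direction that most increases the loss.
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
    # Discard parameter grads accumulated while crafting x_adv.
    model.zero_grad()
    return loss_fn(model(x_adv), y)

# Training step sketch:
#   loss = loss_fn(model(x), y) + fgsm_adversarial_loss(model, loss_fn, x, y)
#   loss.backward(); optimizer.step()
```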
20 pages, 1664 KiB  
Article
A Network Traffic Characteristics Reconstruction Method for Mitigating the Impact of Packet Loss in Edge Computing Scenarios
by Jiawei Ye, Yanting Chen, Aierpanjiang Simayi, Yu Liu, Zhihui Lu and Jie Wu
Future Internet 2025, 17(5), 208; https://doi.org/10.3390/fi17050208 - 5 May 2025
Abstract
This paper presents TCReC, an innovative model designed for reconstructing network traffic characteristics in the presence of packet loss. With the rapid expansion of wireless networks driven by edge computing, IoT, and 5G technologies, challenges such as transmission instability, channel competition, and environmental interference have led to significant packet loss rates, adversely impacting deep learning-based network traffic analysis tasks. To address this issue, TCReC leverages masked autoencoder techniques to reconstruct missing traffic features, ensuring reliable input for downstream tasks in edge computing scenarios. Experimental results demonstrate that TCReC maintains detection model accuracy within 10% of the original data, even under packet loss rates as high as 70%. For instance, on the ISCX-VPN-2016 dataset, TCReC achieves a Reconstruction Ability Index (RAI) of 94.02%, while on the CIC-IDS-2017 dataset, it achieves an RAI of 94.99% when combined with LSTM, significantly outperforming other methods such as Transformer, KNN, and RNN. Additionally, TCReC exhibits robustness across various packet loss scenarios, consistently delivering high-quality feature reconstruction for both attack traffic and common Internet application data. TCReC provides a robust solution for network traffic analysis in high-loss edge computing scenarios, offering practical value for real-world deployment.
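A minimal sketch of the masked-autoencoder idea TCReC builds on: zero out "lost" positions, encode the surviving context, and penalize reconstruction only at the masked entries (the GRU backbone and sizes are assumptions):

```python
import torch
import torch.nn as nn

class MaskedTrafficAE(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.enc = nn.GRU(n_features, hidden, batch_first=True)
        self.dec = nn.Linear(hidden, n_features)

    def forward(self, x, mask):                     # x: (B, T, F), mask: (B, T)
        h, _ = self.enc(x * (~mask).unsqueeze(-1))  # hide "lost" packets
        return self.dec(h)

def masked_loss(model, x, loss_rate=0.7):
    mask = torch.rand(x.shape[:2], device=x.device) < loss_rate  # packet loss
    recon = model(x, mask)
    # Error is measured only where data was dropped.
    return ((recon - x)[mask] ** 2).mean()
```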
22 pages, 11622 KiB  
Article
Classification of Hacker’s Posts Based on Zero-Shot, Few-Shot, and Fine-Tuned LLMs in Environments with Constrained Resources
by Theodoros Giannilias, Andreas Papadakis, Nikolaos Nikolaou and Theodore Zahariadis
Future Internet 2025, 17(5), 207; https://doi.org/10.3390/fi17050207 - 5 May 2025
Abstract
This paper investigates, applies, and evaluates state-of-the-art Large Language Models (LLMs) for the classification of posts from a dark web hackers' forum into four cyber-security categories. The LLMs applied were Mistral-7B-Instruct-v0.2, Gemma-1.1-7B, Llama-3-8B-Instruct, and Llama-2-7B, with zero-shot learning, few-shot learning, and fine-tuning. The four cyber-security categories were "Access Control and Management", "Availability Protection and Security by Design Mechanisms", "Software and Firmware Flaws", and "not relevant". The hackers' posts were also classified and labelled by a human cyber-security expert, allowing a detailed evaluation of the classification accuracy for each LLM and customization/learning method. We verified LLM fine-tuning as the most effective mechanism for enhancing the accuracy and reliability of the classifications. Alongside the results, we provide the methodology applied and the labelled dataset of hackers' posts.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
15 pages, 1473 KiB  
Article
XGBoost-Based Detection of DDoS Attacks in Named Data Networking
by Liang Liu, Weiqing Yu, Zhijun Wu and Silin Peng
Future Internet 2025, 17(5), 206; https://doi.org/10.3390/fi17050206 - 4 May 2025
Abstract
Named Data Networking (NDN) is highly susceptible to Distributed Denial of Service (DDoS) attacks, such as the Interest Flooding Attack (IFA) and the Cache Pollution Attack (CPA). These attacks exploit the inherent data retrieval and caching mechanisms of NDN, leading to severe disruptions in data availability and network efficiency, thereby undermining the overall performance and reliability of the system. In this paper, an attack detection method based on an improved XGBoost is proposed and applied to the hybrid attack pattern of IFA and CPA. Through experiments, the performance of the new attacks and the efficacy of the detection algorithm are analyzed. Comparison with other algorithms demonstrates the advantages of the proposed classifier, as confirmed by its AUC score.
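A baseline sketch of XGBoost-based detection evaluated by AUC on synthetic stand-in features; the paper's specific improvements to XGBoost are not described in the abstract.

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for per-interface NDN traffic features
# (e.g., Interest rates, cache-hit ratios); real features come from NDN traces.
X, y = make_classification(n_samples=5000, n_features=12, weights=[0.9, 0.1])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                        eval_metric="auc")
clf.fit(X_tr, y_tr, eval_set=[(X_te, y_te)], verbose=False)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```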
36 pages, 889 KiB  
Review
Securing Blockchain Systems: A Layer-Oriented Survey of Threats, Vulnerability Taxonomy, and Detection Methods
by Mohammad Jaminur Islam, Saminur Islam, Mahmud Hossain, Shahid Noor and S. M. Riazul Islam
Future Internet 2025, 17(5), 205; https://doi.org/10.3390/fi17050205 - 3 May 2025
Abstract
Blockchain technology is emerging as a pivotal framework to enhance the security of internet-based systems, especially as advancements in machine learning (ML), artificial intelligence (AI), and cyber–physical systems such as smart grids and IoT applications in healthcare continue to accelerate. Although these innovations promise significant improvements, security remains a critical challenge. Blockchain offers a secure foundation for integrating diverse technologies; however, vulnerabilities—including adversarial exploits—can undermine performance and compromise application reliability. To address these risks effectively, it is essential to comprehensively analyze the vulnerability landscape of blockchain systems. This paper contributes in two key ways. First, it presents a unique layer-based framework for analyzing and illustrating security attacks within blockchain architectures. Second, it introduces a novel taxonomy that classifies existing research on blockchain vulnerability detection. Our analysis reveals that while ML and deep learning offer promising approaches for detecting vulnerabilities, their effectiveness often depends on access to extensive and high-quality datasets. Additionally, the layer-based framework demonstrates that vulnerabilities span all layers of a blockchain system, with attacks frequently targeting the consensus process, network integrity, and smart contract code. Overall, this paper provides a comprehensive overview of blockchain security threats and detection methods, emphasizing the need for a multifaceted approach to safeguard these evolving systems.
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT—3rd Edition)
27 pages, 3296 KiB  
Article
AI-Powered Stroke Diagnosis System: Methodological Framework and Implementation
by Marta Narigina, Agris Vindecs, Dušanka Bošković, Yuri Merkuryev and Andrejs Romanovs
Future Internet 2025, 17(5), 204; https://doi.org/10.3390/fi17050204 - 2 May 2025
Abstract
This study introduces an AI-based framework for stroke diagnosis that merges clinical data and curated imaging data. The system utilizes traditional machine learning and advanced deep learning techniques to tackle dataset imbalances and variability in stroke presentations. Our approach involves rigorous data preprocessing, feature engineering, and ensemble techniques to optimize predictive performance. Comprehensive evaluations demonstrate that gradient-boosted models lead in accuracy, while CNNs enhance stroke detection rates. Calibration and threshold optimization are used to align predictions with clinical requirements, ensuring diagnostic reliability. This multi-modal framework highlights the capacity of AI to accelerate stroke diagnosis and aid clinical decision making, ultimately enhancing patient outcomes in critical care.
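A minimal sketch of the threshold-optimization step: choose the probability cut-off that maximizes F1 on validation predictions (the values here are toy data):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_threshold(y_true, y_prob):
    """Probability cut-off maximizing F1 on held-out predictions."""
    prec, rec, thr = precision_recall_curve(y_true, y_prob)
    f1 = 2 * prec * rec / (prec + rec + 1e-9)
    return thr[np.argmax(f1[:-1])]   # thr is one element shorter than prec/rec

y_true = np.array([0, 0, 1, 1, 1, 0])
y_prob = np.array([0.10, 0.40, 0.35, 0.80, 0.70, 0.20])
print(best_threshold(y_true, y_prob))
```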
18 pages, 935 KiB  
Article
C6EnPLS: A High-Performance Computing Job Dataset for the Analysis of Linear Solvers’ Power Consumption
by Marcello Artioli, Andrea Borghesi, Marta Chinnici, Anna Ciampolini, Michele Colonna, Davide De Chiara and Daniela Loreti
Future Internet 2025, 17(5), 203; https://doi.org/10.3390/fi17050203 - 30 Apr 2025
Abstract
In recent decades, driven by global efforts towards sustainability, the priorities of HPC facilities have changed to include maximising energy efficiency alongside computing performance. In this regard, a crucial open question is how to accurately predict the contribution of each parallel job to the system's energy consumption. Accurate estimations in this sense could offer an initial insight into the overall power requirements of the system and provide meaningful information for, e.g., power-aware scheduling, load balancing, and infrastructure design. While ML-based attempts employing large training datasets of past executions may suffer from the high variability of HPC workloads, more specific knowledge of the nature of the jobs can improve prediction accuracy. In this work, we restrict our attention to the rather pervasive task of linear system resolution. We propose a methodology to build a large dataset of runs (including measurements coming from physical sensors deployed on a large HPC cluster), and we report a statistical analysis and a preliminary evaluation of the efficacy of the obtained dataset when employed to train well-established ML methods aiming to predict the energy footprint of specific software.
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
55 pages, 3433 KiB  
Review
SoK: Delegated Security in the Internet of Things
by Emiliia Geloczi, Felix Klement, Patrick Struck and Stefan Katzenbeisser
Future Internet 2025, 17(5), 202; https://doi.org/10.3390/fi17050202 - 30 Apr 2025
Abstract
The increased use of electronic devices in the Internet of Things (IoT) leads not only to improved comfort of living but also to an increased risk of attacks. IoT security has thus become an important research field. However, due to limits on performance and bandwidth, IoT devices are often not powerful enough to execute costly cryptographic algorithms or protocols. This limitation can be addressed through a delegation concept: by delegating certain operations to devices with sufficient resources, it is possible to achieve a high level of security without overloading the device that needs protection. In this paper, we give an overview of current approaches for security delegation in the context of the IoT, formalise security notions, discuss the security of existing approaches, and identify further research questions. Furthermore, a mathematical formalisation of the CIA triad (confidentiality, integrity, and availability) is proposed for the predefined application areas in order to evaluate the different approaches.
(This article belongs to the Special Issue Cybersecurity in the IoT)
32 pages, 3449 KiB  
Article
Optimizing Internet of Things Services Placement in Fog Computing Using Hybrid Recommendation System
by Hanen Ben Rjeb, Layth Sliman, Hela Zorgati, Raoudha Ben Djemaa and Amine Dhraief
Future Internet 2025, 17(5), 201; https://doi.org/10.3390/fi17050201 - 30 Apr 2025
Abstract
Fog Computing extends Cloud computing capabilities by providing computational resources closer to end users. Fog Computing has gained considerable popularity in various domains such as drones, autonomous vehicles, and smart cities. In this context, the careful selection of suitable Fog resources and the optimal assignment of services to these resources, known as the service placement problem (SPP), are essential. Numerous studies have attempted to tackle this problem. However, to the best of our knowledge, none of the previously proposed works took into consideration dynamic context awareness and user preferences for IoT service placement. To address this, we propose a hybrid recommendation system for service placement that combines two techniques: collaborative filtering and content-based recommendation. By considering user and service context, user preferences, service needs, and resource availability, the proposed recommendation system provides optimal placement suggestions for each IoT service. To assess the efficiency of the proposed system, a validation scenario based on the Internet of Drones (IoD) was simulated and tested. The results show that the proposed approach leads to a considerable reduction in waiting time and a substantial improvement in resource utilization and the number of executed services.
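A minimal sketch of blending the two recommendation signals into one placement score per fog node; the vectors, rating matrix, and weight are illustrative.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def hybrid_score(service_req, node_profile, ratings, node_idx, alpha=0.6):
    """alpha weights content-based fit against collaborative feedback."""
    content = cosine(service_req, node_profile)      # needs vs. capabilities
    collab = float(ratings[:, node_idx].mean())      # past placement scores
    return alpha * content + (1 - alpha) * collab

service = np.array([0.8, 0.2, 0.5])                   # CPU, bandwidth, latency
nodes = np.array([[0.9, 0.3, 0.4], [0.2, 0.9, 0.9]])  # two candidate fog nodes
ratings = np.array([[0.7, 0.4], [0.9, 0.5]])          # services x nodes history
scores = [hybrid_score(service, n, ratings, i) for i, n in enumerate(nodes)]
print(int(np.argmax(scores)))                         # recommended node index
```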
17 pages, 12426 KiB  
Article
Implementation and Performance Analysis of an Industrial Robot’s Vision System Based on Cloud Vision Services
by Ioana-Livia Stefan, Andrei Mateescu, Ionut Lentoiu, Silviu Raileanu, Florin Daniel Anton, Dragos Constantin Popescu and Ioan Stefan Sacala
Future Internet 2025, 17(5), 200; https://doi.org/10.3390/fi17050200 - 30 Apr 2025
Abstract
With its fast advancements, cloud computing opens many opportunities for research in various applications in the robotics field. In this paper, we further explore the prospect of integrating Cloud AI object recognition services into an industrial robotic sorting task. Starting from our previously implemented solution on a digital twin, we now put our proposed architecture to the test in the real world, on an industrial robot, where factors such as illumination, shadows, and the different colors and textures of the materials influence the performance of the vision system. We compare the results of our suggested method with those from industrial machine vision software, indicating promising performance and opening additional application perspectives in the robotics field as Cloud and AI technology continue to improve.
(This article belongs to the Special Issue Artificial Intelligence and Control Systems for Industry 4.0 and 5.0)
35 pages, 1503 KiB  
Systematic Review
Integrating AIoT Technologies in Aquaculture: A Systematic Review
by Fahmida Wazed Tina, Nasrin Afsarimanesh, Anindya Nag and Md Eshrat E. Alahi
Future Internet 2025, 17(5), 199; https://doi.org/10.3390/fi17050199 - 30 Apr 2025
Abstract
The increasing global demand for seafood underscores the necessity for sustainable aquaculture practices. However, several challenges, including rising operational costs, variable environmental conditions, and the threat of disease outbreaks, impede progress in this field. This review explores the transformative role of the Artificial Intelligence of Things (AIoT) in mitigating these challenges. We analyse current research on AIoT applications in aquaculture, with a strong emphasis on the use of IoT sensors for real-time data collection and AI algorithms for effective data analysis. Our focus areas include monitoring water quality, implementing smart feeding strategies, detecting diseases, analysing fish behaviour, and employing automated counting techniques. Nevertheless, several research gaps remain, particularly regarding the integration of AI in broodstock management, the development of multimodal AI systems, and challenges regarding model generalization. Future advancements in AIoT should prioritise real-time adaptability, cost-effectiveness, and sustainability while emphasizing the importance of multimodal systems, advanced biosensing capabilities, and digital twin technologies. In conclusion, while AIoT presents substantial opportunities for enhancing aquaculture practices, successful implementation will depend on overcoming challenges related to scalability, cost, and technical expertise, improving models' adaptability, and ensuring environmental sustainability.
(This article belongs to the Special Issue Internet of Things (IoT) in Smart City)
30 pages, 18616 KiB  
Article
Leveraging Retrieval-Augmented Generation for Automated Smart Home Orchestration
by Negin Jahanbakhsh, Mario Vega-Barbas, Iván Pau, Lucas Elvira-Martín, Hirad Moosavi and Carolina García-Vázquez
Future Internet 2025, 17(5), 198; https://doi.org/10.3390/fi17050198 - 29 Apr 2025
Abstract
The rapid growth of smart home technologies, driven by the expansion of the Internet of Things (IoT), has introduced both opportunities and challenges in automating daily routines and orchestrating device interactions. Traditional rule-based automation systems often fall short in adapting to dynamic conditions, integrating heterogeneous devices, and responding to evolving user needs. To address these limitations, this study introduces a novel smart home orchestration framework that combines generative Artificial Intelligence (AI), Retrieval-Augmented Generation (RAG), and the modular OSGi framework. The proposed system allows users to express requirements in natural language, which are then interpreted and transformed into executable service bundles by large language models (LLMs) enhanced with contextual knowledge retrieved from vector databases. These AI-generated service bundles are dynamically deployed via OSGi, enabling real-time service adaptation without system downtime. Manufacturer-provided device capabilities are seamlessly integrated into the orchestration pipeline, ensuring compatibility and extensibility. The framework was validated through multiple use-case scenarios involving dynamic device discovery, on-demand code generation, and adaptive orchestration based on user preferences. Results highlight the system's ability to enhance automation efficiency, personalization, and resilience. This work demonstrates the feasibility and advantages of AI-driven orchestration in realising intelligent, flexible, and scalable smart home environments.
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)
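A minimal sketch of the retrieval step: score the user request against stored device-capability embeddings and assemble the LLM prompt. Embeddings are assumed precomputed, and the prompt wording is hypothetical.

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=3):
    """Top-k documents by cosine similarity to the request embedding."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

def build_prompt(user_request, context_docs):
    context = "\n".join(context_docs)
    return (f"Device capabilities:\n{context}\n\n"
            f"User request: {user_request}\n"
            "Generate an OSGi service bundle plan that fulfils the request.")
```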
31 pages, 1997 KiB  
Article
Leveraging Blockchain Technology for Secure 5G Offloading Processes
by Cristina Regueiro, Santiago de Diego and Borja Urkizu
Future Internet 2025, 17(5), 197; https://doi.org/10.3390/fi17050197 - 29 Apr 2025
Abstract
This paper presents a secure 5G offloading mechanism leveraging Blockchain technology and Self-Sovereign Identity (SSI). The advent of 5G has significantly enhanced the capabilities of all sectors, enabling innovative applications and improving security and efficiency. However, challenges such as limited infrastructure, signal interference, and high upgrade costs persist. Offloading processes already address these issues, but they require more transparency and security. This paper proposes a Blockchain-based marketplace using Hyperledger Fabric to optimize resource allocation and enhance security. This marketplace facilitates the exchange of services and resources among operators, promoting competition and flexibility. Additionally, the paper introduces an SSI-based authentication system to ensure privacy and security during the offloading process. The architecture and components of the marketplace and authentication system are detailed, along with their data models and operations. Performance evaluations indicate that the proposed solutions do not significantly degrade offloading times, making them suitable for everyday applications. As a result, the integration of Blockchain and SSI technologies enhances the security and efficiency of 5G offloading.
(This article belongs to the Special Issue 5G Security: Challenges, Opportunities, and the Road Ahead)
27 pages, 960 KiB  
Article
Ephemeral Node Identifiers for Enhanced Flow Privacy
by Gregor Tamati Haywood and Saleem Noel Bhatti
Future Internet 2025, 17(5), 196; https://doi.org/10.3390/fi17050196 - 28 Apr 2025
Abstract
The Internet Protocol (IP) uses numerical address values carried in IP packets at the network layer to allow correct forwarding of packets between source and destination. Those address values must be kept visible in all parts of the network, and by definition they must carry enough information to identify the source and destination of the communication. This means that successive flows of IP packets can be correlated: it is possible for an observer of the flows to easily link them to an individual source and so, potentially, to an individual user. To alleviate this privacy concern, it is desirable to have ephemeral address values, values with a limited lifespan that make flow correlation more difficult for an attacker. However, the IP address is also used in the end-to-end communication state for transport layer flows, so it must remain consistent to allow correct operation at the transport layer. We resolve this tension in requirements by using ephemeral Node Identifier (eNID) values in IP packets as part of the address value. We have implemented our approach as an extension to IPv6 in the FreeBSD 14 operating system kernel. We have evaluated the implementation with existing applications over both a testbed network in a controlled environment and global IPv6 network connectivity. Our results show that eNIDs work with existing applications and over existing IPv6 networks. Our analyses show that using eNIDs disrupts the correlation of flows and so effectively perturbs linkability. As our approach is a network layer (layer 3) mechanism, it is usable by any transport layer (layer 4) protocol, improving privacy for all applications and all users.
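A minimal user-space sketch of the address-splitting idea; the paper's eNID generation lives in the FreeBSD kernel, so the prefix and derivation below are purely illustrative.

```python
import ipaddress
import os

def ephemeral_address(prefix="2001:db8:1:2::/64"):
    """Pair the stable routing prefix with a freshly drawn 64-bit identifier.

    The kernel would pin one such address to a transport flow and rotate to
    a new identifier for later flows, breaking cross-flow correlation while
    the prefix keeps packets routable.
    """
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(os.urandom(8), "big")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(ephemeral_address())   # different low 64 bits on every call
```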
28 pages, 11862 KiB  
Article
An Improved Reference Paper Collection System Using Web Scraping with Three Enhancements
by Tresna Maulana Fahrudin, Nobuo Funabiki, Komang Candra Brata, Inzali Naing, Soe Thandar Aung, Amri Muhaimin and Dwi Arman Prasetya
Future Internet 2025, 17(5), 195; https://doi.org/10.3390/fi17050195 - 28 Apr 2025
Abstract
Nowadays, accessibility to academic papers has been significantly improved by electronic publication on the internet, where open access has become common. At the same time, this has increased the workload of literature surveys for researchers, who usually manually download PDF files and check their contents. To address this drawback, we previously proposed a reference paper collection system using web scraping and natural language models. However, that system often finds only a limited number of relevant reference papers after taking a long time, since it relies on one paper-search website and runs on a single thread of a multi-core CPU. In this paper, we present an improved reference paper collection system with three enhancements: (1) integrating the APIs of multiple paper-search websites, namely the bulk search endpoint of the Semantic Scholar API, the article search endpoint of the DOAJ API, and the search and fetch endpoints of the PubMed API, to retrieve article metadata; (2) running the program on multiple threads of a multi-core CPU; and (3) implementing Dynamic URL Redirection, Regex-based URL Parsing, and HTML Scraping with URL Extraction for fast checking of PDF file accessibility, along with sentence embedding to assess relevance based on semantic similarity. For evaluation, we compare the number of obtained reference papers and the response time between the proposal, our previous work, and common literature search tools on five reference paper queries. The results show that the proposal increases the number of relevant reference papers by 64.38% and reduces the time by 59.78% on average compared to our previous work, while also outperforming common literature search tools in the number of reference papers found. Thus, the effectiveness of the proposed system has been demonstrated in our experiments.
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
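A minimal sketch of enhancements (1) and (2) together: querying the Semantic Scholar bulk search endpoint from a thread pool (the field list and worker count are illustrative):

```python
import requests
from concurrent.futures import ThreadPoolExecutor

def search_semantic_scholar(query):
    """One of the integrated APIs; DOAJ and PubMed would be queried alike."""
    r = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search/bulk",
        params={"query": query, "fields": "title,externalIds,openAccessPdf"},
        timeout=30)
    r.raise_for_status()
    return r.json().get("data", [])

queries = ["federated learning intrusion detection", "DoH tunneling detection"]
with ThreadPoolExecutor(max_workers=8) as pool:   # enhancement (2): threads
    results = list(pool.map(search_semantic_scholar, queries))
```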
20 pages, 1267 KiB  
Article
BPDM-GCN: Backup Path Design Method Based on Graph Convolutional Neural Network
by Wanwei Huang, Huicong Yu, Yingying Li, Xi He and Rui Chen
Future Internet 2025, 17(5), 194; https://doi.org/10.3390/fi17050194 - 27 Apr 2025
Abstract
To address the poor applicability of existing fault link recovery algorithms under network topology migration and the congestion of backup paths, this paper proposes a backup path algorithm based on a graph convolutional network-improved deep deterministic policy gradient. First, the BPDM-GCN backup path algorithm is constructed within a deep deterministic policy gradient training framework. It uses graph convolutional networks to detect changes in network topology, aiming to optimize data transmission delay and bandwidth occupancy within the network topology. After iterative training, the BPDM-GCN algorithm generates the comprehensive link weights within the network topology. Then, according to the comprehensive link weights and taking the shortest path as the optimization objective, a backup path implementation method based on an incremental shortest path tree is designed to reduce the phasor data transmission delay along the backup path. The experimental results show that the backup path formulated by this algorithm exhibits reduced data transmission delay, minimal path extension, and a high success rate in recovering failed links. Compared to the better-performing NRLF-RL algorithm, the BPDM-GCN algorithm reduces the average failed-link recovery delay by approximately 14.29% and increases the failed-link recovery success rate by approximately 5.24%.
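A minimal sketch of rerouting around a failed link on a weighted graph, with networkx standing in for the incremental shortest-path-tree machinery; the topology and weights are toy stand-ins for the learned composite link weights.

```python
import networkx as nx

# Toy topology; "weight" stands in for the learned composite link weight.
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 0.3), ("B", "C", 0.5),
                           ("A", "C", 1.2), ("C", "D", 0.2)])

def backup_path(G, src, dst, failed_edge):
    H = G.copy()
    H.remove_edge(*failed_edge)               # exclude the failed link
    return nx.shortest_path(H, src, dst, weight="weight")

print(backup_path(G, "A", "D", ("B", "C")))   # reroute around B-C
```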
17 pages, 3936 KiB  
Article
Developing Quantum Trusted Platform Module (QTPM) to Advance IoT Security
by Guobin Xu, Oluwole Adetifa, Jianzhou Mao, Eric Sakk and Shuangbao Wang
Future Internet 2025, 17(5), 193; https://doi.org/10.3390/fi17050193 - 26 Apr 2025
Abstract
Randomness is integral to computer security, influencing fields such as cryptography and machine learning. In the context of cybersecurity, particularly for the Internet of Things (IoT), high levels of randomness are essential to secure cryptographic protocols. Quantum computing introduces significant risks to traditional encryption methods. To address these challenges, we investigate a quantum-safe solution for IoT trusted computing. Specifically, we implement the first lightweight, practical integration of a quantum random number generator (QRNG) with a software-based trusted platform module (TPM) to create a deployable quantum trusted platform module (QTPM) prototype that improves the cryptographic capabilities of IoT systems. The proposed quantum entropy as a service (QEaaS) framework further extends quantum entropy access to legacy and resource-constrained devices. In our evaluation, we compare the performance of the QRNG with traditional pseudo-random number generators (PRNGs), demonstrating the effectiveness of the quantum TPM. Our paper highlights the transformative potential of integrating quantum technology to bolster IoT security.
30 pages, 5336 KiB  
Article
Railway Cloud Resource Management as a Service
by Ivaylo Atanasov, Dragomira Dimitrova, Evelina Pencheva and Ventsislav Trifonov
Future Internet 2025, 17(5), 192; https://doi.org/10.3390/fi17050192 - 24 Apr 2025
Abstract
Cloud computing has the potential to accelerate the digital journey of railways. Railway systems are large and complex, involving many components, such as trains, tracks, signaling systems, and control systems, among others. The application of cloud computing technologies in the railway industry has the potential to enhance operational efficiency, data management, and overall system performance. Cloud management is essential for complex systems, and the automation of management services can speed up the provisioning, deployment, and maintenance of cloud infrastructure and applications by enabling visibility across the environment. It can provide consistent and unified management of resource allocation, streamline security processes, and automate the monitoring of key performance indicators. Key railway cloud management challenges include the lack of open interfaces and standardization, which relates to the vendor lock-in problem. In this paper, we propose an approach to designing railway cloud resource management as a service. Based on typical use cases, the requirements for fault and performance management of railway cloud resources are identified. The main functionality is designed as RESTful services. The feasibility of the approach is proved by formal verification of the cloud resource management models supported by the cloud management application and services. The proposed approach is open, in contrast to proprietary solutions, and features scalability and interoperability.
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)
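A minimal Flask sketch of what one RESTful fault-management resource could look like; the URI paths and payload fields are hypothetical, not the paper's interface definition.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
alarms = {}   # in-memory stand-in for the fault-management backend

@app.post("/railway-cloud/v1/alarms")            # raise a resource alarm
def raise_alarm():
    alarm = request.get_json()
    alarms[alarm["alarmId"]] = alarm
    return jsonify(alarm), 201

@app.get("/railway-cloud/v1/alarms/<alarm_id>")  # query alarm state
def get_alarm(alarm_id):
    if alarm_id not in alarms:
        return jsonify({"error": "not found"}), 404
    return jsonify(alarms[alarm_id])

if __name__ == "__main__":
    app.run(port=8080)
```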
38 pages, 4044 KiB  
Article
Trustworthy AI and Federated Learning for Intrusion Detection in 6G-Connected Smart Buildings
by Rosario G. Garroppo, Pietro Giuseppe Giardina, Giada Landi and Marco Ruta
Future Internet 2025, 17(5), 191; https://doi.org/10.3390/fi17050191 - 23 Apr 2025
Abstract
Smart building applications require robust security measures to ensure system functionality, privacy, and security. To this end, this paper proposes a Federated Learning Intrusion Detection System (FL-IDS) composed of two convolutional neural network (CNN) models to detect network and IoT device attacks simultaneously. Collaborative training across multiple cooperative smart buildings enables model development without direct data sharing, ensuring privacy by design. Furthermore, the design of the proposed method considers three key principles: sustainability, adaptability, and trustworthiness. The proposed data pre-processing and engineering system significantly reduces the amount of data to be processed by the CNN, helping to limit the processing load and associated energy consumption towards more sustainable Artificial Intelligence (AI) techniques. Furthermore, the data engineering process, which includes sampling, feature extraction, and transformation of data into images, is designed considering its adaptability to integrate new sensor data and to fit seamlessly into a zero-touch system, following the principles of Machine Learning Operations (MLOps). The designed CNNs allow for the investigation of AI reasoning, implementing eXplainable AI (XAI) techniques such as the correlation map analyzed in this paper. Using the ToN-IoT dataset, the results show that the proposed FL-IDS achieves performance comparable to that of its centralized counterpart. To address the specific vulnerabilities of FL, a secure and robust aggregation method is introduced, making the system resistant to poisoning attacks from up to 20% of the participating clients.
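The abstract does not name the aggregation rule; a minimal sketch of one standard choice, the coordinate-wise trimmed mean, which bounds the influence of a bounded fraction of poisoned clients:

```python
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean over flattened client weight updates."""
    stacked = np.stack(client_updates)        # (n_clients, n_params)
    k = int(len(client_updates) * trim_ratio)
    ordered = np.sort(stacked, axis=0)        # sort each coordinate separately
    # Drop the k smallest and k largest values per coordinate before averaging,
    # so up to k malicious clients cannot drag the aggregate arbitrarily.
    return ordered[k:len(client_updates) - k].mean(axis=0)

updates = [np.random.randn(10) for _ in range(10)]
updates.append(np.full(10, 100.0))            # a poisoned, outlier update
print(trimmed_mean_aggregate(updates).round(2))
```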