Future Internet, Volume 17, Issue 6 (June 2025) – 42 articles

Cover Story: Intrusion detection in Internet of Things (IoT) environments is increasingly critical due to the rapid proliferation of connected devices and the growing sophistication of cyber threats. Traditional detection methods often fall short in identifying multi-class attacks. The featured paper proposes a novel hybrid intrusion detection framework that integrates transformer networks with generative adversarial networks (GANs), aiming to enhance both detection accuracy and robustness. Experimental results show that the hybrid framework consistently outperforms baseline methods in both binary and multi-class intrusion detection tasks. The superiority of the proposed model was further validated through statistically significant t-test results, indicating both efficiency and stability.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
27 pages, 594 KiB  
Article
A C-Based Framework for Low-Cost Real-Time Embedded Systems
by Ivan Cibrario Bertolotti
Future Internet 2025, 17(6), 269; https://doi.org/10.3390/fi17060269 - 19 Jun 2025
Viewed by 153
Abstract
This paper presents a framework that enables programmers to deploy embedded real-time firmware for Internet of Things (IoT) devices more conveniently than plain C/C++-language programming, by abstracting away low-level details and the ad hoc management of multiple, diverse network technologies. Moreover, unlike other proposals, the framework is able to accommodate both time- and event-driven applications. Experimental results show that for Modbus-CAN communication, the worst-case time overhead of the framework is less than 6% of the total combined processing and communication time. Its memory requirements are less than 5% of the Flash memory and 4% of the RAM available on a typical IoT microcontroller. The framework also compares favorably with two other approaches in terms of the sustainable minimum cycle time, memory overhead, and level of programming abstraction when tested on a simple real-time algorithm. Full article

19 pages, 1823 KiB  
Review
A Bibliometric Analysis and Visualization of In-Vehicle Communication Protocols
by Iftikhar Hussain, Manuel J. C. S. Reis, Carlos Serôdio and Frederico Branco
Future Internet 2025, 17(6), 268; https://doi.org/10.3390/fi17060268 - 19 Jun 2025
Viewed by 515
Abstract
This research examined the domain of intelligent transportation systems (ITS) by analyzing the impact of scholarly work and thematic prevalence, focusing on vehicles, their technologies, cybersecurity, and related scholarly technologies. This was performed by examining the scientific literature indexed in the Scopus database. This study analyzed 2919 documents published between 2018 and 2025. The findings indicated that the most influential journal was IEEE Transactions on Vehicular Technology, which has contributed substantially to the growth of vehicular communication and computing, including edge computing and AI optimization of vehicular systems. In addition, major PST research conferences highlighted the growing academic interest in cybersecurity for vehicle networks. Sensor networks, pose forensics, and privacy-preserving communication frameworks were among the significant contributing fields, underscoring the interdisciplinary nature of this research. Through bibliometric and citation analysis, the literature illustrated the multiple channels through which knowledge creation and innovation are integrated in ITS. The outcomes suggested an increasingly sophisticated research area that balances technical progress with growing concern about security and privacy measures. Further studies should investigate edge computing integrated with AI, advanced privacy-preserving communication protocols, and new vehicular network intrusion detection systems. Full article

35 pages, 1485 KiB  
Article
Detecting Cyber Threats in UWF-ZeekDataFall22 Using K-Means Clustering in the Big Data Environment
by Sikha S. Bagui, Germano Correa Silva De Carvalho, Asmi Mishra, Dustin Mink, Subhash C. Bagui and Stephanie Eager
Future Internet 2025, 17(6), 267; https://doi.org/10.3390/fi17060267 - 18 Jun 2025
Viewed by 240
Abstract
In an era marked by the rapid growth of the Internet of Things (IoT), network security has become increasingly critical. Traditional Intrusion Detection Systems, particularly signature-based methods, struggle to identify evolving cyber threats such as Advanced Persistent Threats (APTs) and zero-day attacks. Such threats often go undetected by supervised machine-learning methods. In this paper, we apply K-means clustering, an unsupervised clustering technique, to a newly created modern network attack dataset, UWF-ZeekDataFall22. Since this dataset contains labeled Zeek logs, it was de-labeled before being used for K-means clustering. The labels, however, were used in the evaluation phase to determine the attack clusters post-clustering. To identify APT as well as zero-day attack clusters, three different labeling heuristics were evaluated. To address the challenges posed by Big Data, the Big Data framework Apache Spark, with PySpark, was used as our development environment. A further unique aspect of this work is its use of connection-based features. Using these features, an in-depth study was performed to determine the effect of the number of clusters, seeds, and features for each of the labeling heuristics. If the objective is to detect every single attack, the results indicate that 325 clusters with a seed of 200, using an optimal set of features, would correctly place 99% of attacks. Full article
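As a rough illustration of the pipeline described above, the following PySpark sketch de-labels the records, clusters them with K-means, and brings the labels back in to characterize the clusters. The file path and column names are hypothetical, not the authors' code.

```python
# Minimal sketch: de-label Zeek-derived records, cluster with K-means in PySpark,
# then use the held-back labels to tag clusters (e.g., majority-label heuristic).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("zeek-kmeans").getOrCreate()
df = spark.read.parquet("uwf_zeekdatafall22.parquet")    # hypothetical path

feature_cols = [c for c in df.columns if c != "label"]   # de-label before clustering
assembled = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)

model = KMeans(k=325, seed=200, featuresCol="features").fit(assembled)
clustered = model.transform(assembled)                   # adds a "prediction" column

# Evaluation phase: labels come back in to characterize each cluster.
summary = (clustered.groupBy("prediction", "label").count()
           .orderBy("prediction", F.desc("count")))
summary.show()
```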

26 pages, 623 KiB  
Article
Significance of Machine Learning-Driven Algorithms for Effective Discrimination of DDoS Traffic Within IoT Systems
by Mohammed N. Alenezi
Future Internet 2025, 17(6), 266; https://doi.org/10.3390/fi17060266 - 18 Jun 2025
Viewed by 325
Abstract
As digital infrastructure continues to expand, networks, web services, and Internet of Things (IoT) devices become increasingly vulnerable to distributed denial of service (DDoS) attacks. Notably, IoT devices have become attractive targets for DDoS attacks due to their widespread deployment and limited applied security measures. Attackers therefore take advantage of the growing number of unsecured IoT devices to reflect massive traffic that overwhelms networks and disrupts necessary services, making protection of IoT devices against DDoS attacks a major concern for organizations and administrators. In this paper, the effectiveness of supervised machine learning (ML) classification and deep learning (DL) algorithms in detecting DDoS attacks on IoT networks was investigated through an extensive analysis of a network traffic dataset containing both legitimate and malicious traffic. Model performance and data quality improved when feature selection and data pre-processing approaches were applied. Five models were evaluated on the Edge-IIoTset dataset: Random Forest (RF), Support Vector Machine (SVM), Long Short-Term Memory (LSTM), K-Nearest Neighbors (KNN) with multiple K values, and Convolutional Neural Network (CNN). Findings revealed that the RF model outperformed the other models by delivering optimal detection speed and remarkable performance across all evaluation metrics, while KNN (K = 7) emerged as the most efficient model in terms of training time. Full article
(This article belongs to the Special Issue Cybersecurity in the IoT)
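A minimal sketch of the kind of classifier comparison the abstract reports, using scikit-learn on synthetic stand-in data rather than the actual Edge-IIoTset features:

```python
# Illustrative only: compare supervised classifiers for DDoS detection.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for pre-processed traffic features (legitimate vs. malicious).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.7, 0.3])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

models = {
    "RF": RandomForestClassifier(n_estimators=100),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN (K=7)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, classification_report(y_test, model.predict(X_test)))
```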

21 pages, 931 KiB  
Article
JorGPT: Instructor-Aided Grading of Programming Assignments with Large Language Models (LLMs)
by Jorge Cisneros-González, Natalia Gordo-Herrera, Iván Barcia-Santos and Javier Sánchez-Soriano
Future Internet 2025, 17(6), 265; https://doi.org/10.3390/fi17060265 - 18 Jun 2025
Viewed by 358
Abstract
This paper explores the application of large language models (LLMs) to automate the evaluation of programming assignments in an undergraduate “Introduction to Programming” course. This study addresses the challenges of manual grading, including time constraints and potential inconsistencies, by proposing a system that integrates several LLMs to streamline the assessment process. The system uses a graphical interface to process student submissions, allowing instructors to select an LLM and customize the grading rubric. A comparative analysis, using LLMs from OpenAI, Google, DeepSeek, and Alibaba to evaluate student code submissions, revealed a strong correlation between LLM-generated grades and those assigned by human instructors. Specifically, the reduced model using statistically significant variables demonstrates high explanatory power, with an adjusted R² of 0.9156 and a Mean Absolute Error of 0.4579, indicating that LLMs can effectively replicate human grading. The findings suggest that LLMs can automate grading when paired with human oversight, drastically reducing instructor workload: a task estimated to take more than 300 h of manual work becomes less than 15 min of automated processing, improving the efficiency and consistency of assessment in computer science education. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence in Smart Societies)
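The agreement statistics quoted above (adjusted R² and Mean Absolute Error) can be computed on any pair of grade vectors; the sketch below uses hypothetical grades, not the study's data:

```python
# Regress human grades on LLM grades, then compute adjusted R^2 and MAE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_error

llm = np.array([[8.5], [6.0], [9.0], [4.5], [7.0]])   # hypothetical LLM grades
human = np.array([8.0, 6.5, 9.5, 4.0, 7.5])           # hypothetical instructor grades

pred = LinearRegression().fit(llm, human).predict(llm)
r2 = r2_score(human, pred)
n, p = llm.shape                                      # observations, predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)         # adjusted R^2
print(adj_r2, mean_absolute_error(human, pred))
```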

24 pages, 1082 KiB  
Article
An Explainable Machine Learning Approach for IoT-Supported Shaft Power Estimation and Performance Analysis for Marine Vessels
by Yiannis Kiouvrekis, Katerina Gkirtzou, Sotiris Zikas, Dimitris Kalatzis, Theodor Panagiotakopoulos, Zoran Lajic, Dimitris Papathanasiou and Ioannis Filippopoulos
Future Internet 2025, 17(6), 264; https://doi.org/10.3390/fi17060264 - 17 Jun 2025
Cited by 1 | Viewed by 255
Abstract
In the evolving landscape of green shipping, the accurate estimation of shaft power is critical for reducing fuel consumption and greenhouse gas emissions. This study presents an explainable machine learning framework for shaft power prediction, utilising real-world Internet of Things (IoT) sensor data collected from nine (9) Very Large Crude Carriers (VLCCs) over a 36-month period. A diverse set of models—ranging from traditional algorithms such as Decision Trees and Support Vector Machines to advanced ensemble methods like XGBoost and LightGBM—were developed and evaluated. Model performance was assessed using the coefficient of determination (R²) and root mean square error (RMSE), with XGBoost achieving the highest accuracy (R² = 0.9490, RMSE = 888) and LightGBM close behind (R² = 0.9474, RMSE = 902), both substantially exceeding the industry baseline model (R² = 0.9028, RMSE = 1500). Explainability was integrated through SHapley Additive exPlanations (SHAP), offering detailed insights into the influence of each input variable. Features such as draft, GPS speed, and time since last dry dock consistently emerged as key predictors. The results demonstrate the robustness and interpretability of tree-based methods, offering a data-driven alternative to traditional performance estimation techniques and supporting the maritime industry’s transition toward more efficient and sustainable operations. Full article
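A hedged sketch of the modeling-plus-explainability pipeline (gradient boosting regression with SHAP attributions); the feature names and synthetic data below are assumptions, not the paper's schema:

```python
# XGBoost regressor for shaft power plus SHAP feature attributions (toy data).
import numpy as np
import xgboost
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))          # stand-ins for draft, GPS speed, days since dry dock
y = 5000 + 900 * X[:, 1] + 300 * X[:, 0] + rng.normal(scale=200, size=1000)  # synthetic power

model = xgboost.XGBRegressor(n_estimators=300, max_depth=4).fit(X, y)
explainer = shap.Explainer(model)       # a tree explainer is selected automatically
shap_values = explainer(X)
shap.summary_plot(shap_values, X, feature_names=["draft", "gps_speed", "days_since_dry_dock"])
```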

14 pages, 263 KiB  
Article
A Grover Search-Based Quantum Key Agreement Protocol for Secure Internet of Medical Things Communication
by Tzung-Her Chen
Future Internet 2025, 17(6), 263; https://doi.org/10.3390/fi17060263 - 17 Jun 2025
Viewed by 170
Abstract
The rapid integration of the Internet of Medical Things (IoMT) into healthcare systems raises urgent demands for secure communication mechanisms capable of protecting sensitive patient data. Quantum key agreement (QKA), a collaborative approach to key generation based on quantum principles, provides an attractive alternative to traditional quantum key distribution (QKD): it eliminates dependence on a trusted authority and ensures equal participation from all users. This makes QKA particularly suitable for IoMT’s decentralized medical networks, where centralized trust models introduce vulnerabilities and asymmetric access patterns that compromise the egalitarian principles essential for medical data sharing. However, practical QKA applications in IoMT remain limited, particularly for schemes that avoid complex entanglement operations and authenticated classical channels. Among the few QKA protocols employing Grover’s search algorithm (GSA), existing proposals potentially suffer from limitations in fairness and security. In this paper, the author proposes an improved GSA-based QKA protocol that ensures fairness, security, and correctness without requiring an authenticated classical communication channel. The proposed scheme guarantees that each participant’s input contributes equally to the final key, preventing manipulation by any user subgroup, and combines Grover’s algorithm with the decoy-photon technique to ensure secure quantum transmission. Security analysis confirms resistance to external attacks, including intercept-resend, entanglement probes, and device-level exploits, as well as insider threats such as parameter manipulation. Fairness is achieved through a symmetric protocol design rooted in quantum mechanical principles. Efficiency evaluation shows a theoretical efficiency of approximately 25%, while eliminating the need for quantum memory. These results position the proposed protocol as a practical and scalable solution for future secure quantum communication systems, particularly within distributed IoMT environments. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things, 3rd Edition)
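To make the underlying search primitive concrete, the toy NumPy simulation below runs one Grover iteration on two qubits; it illustrates the amplitude amplification the protocol builds on, not the QKA protocol itself:

```python
# State-vector simulation of one Grover iteration on 2 qubits, marked item |11>.
import numpy as np

n_states = 4
state = np.full(n_states, 1 / np.sqrt(n_states))    # uniform superposition |s>

oracle = np.eye(n_states)
oracle[3, 3] = -1                                   # phase-flip the marked state |11>

s = np.full(n_states, 1 / np.sqrt(n_states))
diffusion = 2 * np.outer(s, s) - np.eye(n_states)   # inversion about the mean

state = diffusion @ (oracle @ state)
print(np.abs(state) ** 2)                           # probability 1.0 on the marked state
```

For two qubits, a single iteration already concentrates all probability on the marked state, which is why small Grover instances are a convenient building block.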
30 pages, 1317 KiB  
Article
Enhancing Healthcare Security: A Unified RBAC and ABAC Risk-Aware Access Control Approach
by Hany F. Atlam and Yifu Yang
Future Internet 2025, 17(6), 262; https://doi.org/10.3390/fi17060262 - 14 Jun 2025
Viewed by 385
Abstract
Healthcare systems are increasingly vulnerable to security threats due to their reliance on digital platforms. Traditional access control models such as Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC), despite their unique features, face limitations in mitigating evolving risks in healthcare systems: they are primarily oriented towards allocating permissions according to predetermined roles or policies, which makes it challenging to adapt to the dynamic complexities of modern healthcare ecosystems. Therefore, this paper proposes a novel risk-aware RBAC and ABAC access control model to address the flexibility, adaptability, and security issues associated with healthcare systems. The proposed model integrates RBAC for role-based categorization, ABAC for fine-grained control based on user attributes and environmental factors, and Risk-Based Access Control (RiBAC) for dynamic risk assessment. The model dynamically adjusts access permissions based on risk values, ensuring accurate and adaptable access control decisions. The experimental results demonstrate the feasibility and effectiveness of the proposed model in granting access to authorized users while denying access to unauthorized users. Through a series of 43 experiments simulating various scenarios of access control operations in a healthcare system, the proposed model demonstrates significant improvement in the accuracy, precision, and recall of access control decisions compared to traditional models. Its ability to dynamically assess risk and adjust access permissions based on contextual factors significantly enhances its capacity to mitigate threats and protect sensitive medical data. Full article
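A conceptual sketch of a unified decision of this kind follows; the roles, attributes, and risk threshold are illustrative assumptions rather than the paper's rules:

```python
# Toy unified decision: RBAC role check + ABAC attribute check + risk threshold.
from dataclasses import dataclass

ROLE_PERMISSIONS = {"nurse": {"read_record"}, "doctor": {"read_record", "write_record"}}

@dataclass
class Request:
    role: str
    action: str
    department: str
    resource_department: str
    off_hours: bool
    failed_logins: int

def risk_score(req: Request) -> float:
    # Toy risk model: contextual factors raise the score.
    return (0.4 * req.off_hours + 0.1 * min(req.failed_logins, 5)
            + 0.3 * (req.department != req.resource_department))

def decide(req: Request, risk_threshold: float = 0.5) -> bool:
    rbac_ok = req.action in ROLE_PERMISSIONS.get(req.role, set())
    abac_ok = req.department == req.resource_department or req.role == "doctor"
    return rbac_ok and abac_ok and risk_score(req) < risk_threshold

print(decide(Request("doctor", "write_record", "cardiology", "cardiology", False, 0)))  # True
print(decide(Request("nurse", "write_record", "cardiology", "cardiology", False, 0)))   # False
```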

30 pages, 1687 KiB  
Article
Network-, Cost-, and Renewable-Aware Ant Colony Optimization for Energy-Efficient Virtual Machine Placement in Cloud Datacenters
by Ali Mohammad Baydoun and Ahmed Sherif Zekri
Future Internet 2025, 17(6), 261; https://doi.org/10.3390/fi17060261 - 14 Jun 2025
Viewed by 339
Abstract
Virtual machine (VM) placement in cloud datacenters is a complex multi-objective challenge involving trade-offs among energy efficiency, carbon emissions, and network performance. This paper proposes NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Ant Colony Optimization with Dynamic Power Usage Effectiveness (PUE)), a bio-inspired metaheuristic that optimizes VM placement across geographically distributed datacenters. The approach integrates real-time solar energy availability, dynamic PUE modeling, and multi-criteria decision-making to enable environmentally and cost-efficient resource allocation. The experimental results show that NCRA-DP-ACO reduces power consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% compared to state-of-the-art methods while maintaining Service Level Agreement (SLA) compliance. These results indicate the algorithm’s potential to support more environmentally and cost-efficient cloud management across dynamic infrastructure scenarios. Full article
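The placement step can be sketched as a standard ACO loop in which hosts are sampled in proportion to pheromone and heuristic desirability; the heuristic here is a toy stand-in for the paper's network, cost, and renewable criteria:

```python
# Simplified ACO placement: ants sample hosts ~ pheromone^alpha * heuristic^beta.
import numpy as np

rng = np.random.default_rng(1)
n_hosts, n_ants, alpha, beta, rho = 5, 10, 1.0, 2.0, 0.1
pheromone = np.ones(n_hosts)
heuristic = rng.uniform(0.1, 1.0, n_hosts)   # e.g., solar availability / (PUE * cost)

def placement_cost(host: int) -> float:
    return 1.0 / heuristic[host]             # toy objective: better heuristic, lower cost

for _ in range(20):                          # ACO iterations
    probs = pheromone**alpha * heuristic**beta
    probs /= probs.sum()
    choices = rng.choice(n_hosts, size=n_ants, p=probs)
    pheromone *= (1 - rho)                   # evaporation
    for host in choices:                     # deposit inversely proportional to cost
        pheromone[host] += 1.0 / placement_cost(host)

print("preferred host:", int(np.argmax(pheromone)))
```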

24 pages, 4250 KiB  
Article
Joint Exploitation of Physical-Layer and Artificial Features for Privacy-Preserving Distributed Source Camera Identification
by Hui Tian, Haibao Chen, Yuyan Zhao and Jiawei Zhang
Future Internet 2025, 17(6), 260; https://doi.org/10.3390/fi17060260 - 13 Jun 2025
Cited by 1 | Viewed by 257
Abstract
Identifying the source camera of a digital image is a critical task for ensuring image authenticity. In this paper, we propose a novel privacy-preserving distributed source camera identification scheme that jointly exploits both physical-layer fingerprint features and a carefully designed artificial tag. Specifically, we build a hybrid fingerprint model by combining sensor-level hardware fingerprints with artificial tag features to characterize the unique identity of the camera in a digital image. To address privacy concerns, the proposed scheme incorporates a privacy-preserving strategy that encrypts not only the hybrid fingerprint parameters but also the image content itself. Furthermore, within the distributed framework, the identification task performed by a single secondary user is formulated as a binary hypothesis testing problem. Experimental results demonstrated the effectiveness of the proposed scheme in accurately identifying source cameras, particularly under complex conditions such as those involving images processed by social media platforms. Notably, for social media platform identification, our method achieved average accuracy improvements of 7.19% on the Vision dataset and 8.87% on the Forchheim dataset compared to a representative baseline. Full article
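A toy version of the binary hypothesis test at the core of the detection step, with synthetic fingerprints standing in for real sensor noise residuals:

```python
# Decide H0 (fingerprint absent) vs. H1 (present) by normalized cross-correlation.
import numpy as np

rng = np.random.default_rng(0)
fingerprint = rng.normal(size=(64, 64))                       # reference camera fingerprint
residual_h1 = 0.2 * fingerprint + rng.normal(size=(64, 64))   # image from the same camera
residual_h0 = rng.normal(size=(64, 64))                       # image from another camera

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

tau = 0.05                                                    # decision threshold
for name, res in [("H1 sample", residual_h1), ("H0 sample", residual_h0)]:
    print(name, "-> same camera" if ncc(fingerprint, res) > tau else "-> different camera")
```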

36 pages, 2493 KiB  
Systematic Review
A Systematic Review of Cyber Range Taxonomies: Trends, Gaps, and a Proposed Taxonomy
by Pilleriin Lillemets, Nabaa Bashir Jawad, Joseph Kashi, Ahmad Sabah and Nicola Dragoni
Future Internet 2025, 17(6), 259; https://doi.org/10.3390/fi17060259 - 12 Jun 2025
Viewed by 356
Abstract
Cyber ranges have become essential platforms for realistic cybersecurity training, research, and development. Existing taxonomies often describe the functional aspects of cyber ranges—scenario design, team configurations, and evaluation metrics—focusing less on the underlying technologies that enable modern training. In this paper, we highlight the current trends and persistent gaps in the existing literature and propose a taxonomy that decouples functional capabilities from the enabling infrastructure, adding a dedicated Technology dimension. We derived and refined this taxonomy through an iterative literature mapping process, culminating in a proposed taxonomy that highlights key emerging trends such as cyber–physical integration, federation across multiple sites, and AI-driven orchestration. Key findings include the identification of clear convergences and divergences in existing taxonomies and concrete recommendations for future research directions, such as integrating socio-technical considerations and conducting systematic empirical validation. Our framework aims to guide researchers, developers, and practitioners in designing, implementing, and comparing cyber range solutions. An online Taxonomy Mapping Toolkit is developed to allow the cyber ranges research community to take advantage of the proposed taxonomy and build upon it as new advancements emerge. Full article
(This article belongs to the Special Issue Security of Computer System and Network)

36 pages, 1717 KiB  
Article
Generative Adversarial and Transformer Network Synergy for Robust Intrusion Detection in IoT Environments
by Pardis Sadatian Moghaddam, Ali Vaziri, Sarvenaz Sadat Khatami, Francisco Hernando-Gallego and Diego Martín
Future Internet 2025, 17(6), 258; https://doi.org/10.3390/fi17060258 - 12 Jun 2025
Viewed by 403
Abstract
Intrusion detection in Internet of Things (IoT) environments is increasingly critical due to the rapid proliferation of connected devices and the growing sophistication of cyber threats. Traditional detection methods often fall short in identifying multi-class attacks, particularly in the presence of high-dimensional and imbalanced IoT traffic. To address these challenges, this paper proposes a novel hybrid intrusion detection framework that integrates transformer networks with generative adversarial networks (GANs), aiming to enhance both detection accuracy and robustness. In the proposed architecture, the transformer component effectively models temporal and contextual dependencies within traffic sequences, while the GAN component generates synthetic data to improve feature diversity and mitigate class imbalance. Additionally, an improved non-dominated sorting biogeography-based optimization (INSBBO) algorithm is employed to fine-tune the hyper-parameters of the hybrid model, further enhancing learning stability and detection performance. The model is trained and evaluated on the CIC-IoT-2023 and TON_IoT datasets, which contain a diverse range of real-world IoT traffic and attack scenarios. Experimental results show that our hybrid framework consistently outperforms baseline methods in both binary and multi-class intrusion detection tasks. The transformer-GAN achieves a multi-class classification accuracy of 99.67%, an F1-score of 99.61%, and an area under the curve (AUC) of 99.80% on the CIC-IoT-2023 dataset, and 98.84% accuracy, 98.79% F1-score, and 99.12% AUC on the TON_IoT dataset. The superiority of the proposed model was further validated through statistically significant t-test results, lower execution time compared to baselines, and minimal standard deviation across runs, indicating both efficiency and stability. The proposed framework offers a promising approach for enhancing the security and resilience of next-generation IoT systems. Full article
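A skeleton of the two components, with illustrative shapes and sizes rather than the paper's architecture: a transformer encoder classifies traffic sequences, and a GAN generator synthesizes minority-class sequences to rebalance training data:

```python
# Transformer-based traffic classifier plus a GAN-style generator (PyTorch sketch).
import torch
import torch.nn as nn

seq_len, n_feats, n_classes, z_dim = 20, 16, 8, 32

class TrafficTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=n_feats, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(n_feats, n_classes)
    def forward(self, x):                           # x: (batch, seq_len, n_feats)
        return self.head(self.encoder(x).mean(dim=1))  # pool over time, then classify

generator = nn.Sequential(                          # maps noise to a synthetic sequence
    nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, seq_len * n_feats)
)

x = torch.randn(4, seq_len, n_feats)
print(TrafficTransformer()(x).shape)                # torch.Size([4, 8])
fake = generator(torch.randn(4, z_dim)).view(4, seq_len, n_feats)
print(fake.shape)                                   # torch.Size([4, 20, 16])
```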

23 pages, 650 KiB  
Review
Advancing TinyML in IoT: A Holistic System-Level Perspective for Resource-Constrained AI
by Leandro Antonio Pazmiño Ortiz, Ivonne Fernanda Maldonado Soliz and Vanessa Katherine Guevara Balarezo
Future Internet 2025, 17(6), 257; https://doi.org/10.3390/fi17060257 - 11 Jun 2025
Viewed by 455
Abstract
Resource-constrained devices, including low-power Internet of Things (IoT) nodes, microcontrollers, and edge computing platforms, have increasingly become the focal point for deploying on-device intelligence. By integrating artificial intelligence (AI) closer to data sources, these systems aim to achieve faster responses, reduce bandwidth usage, and preserve privacy. Nevertheless, implementing AI in limited hardware environments poses substantial challenges in terms of computation, energy efficiency, model complexity, and reliability. This paper provides a comprehensive review of state-of-the-art methodologies, examining how recent advances in model compression, TinyML frameworks, and federated learning paradigms are enabling AI in tightly constrained devices. We highlight both established and emergent techniques for optimizing resource usage while addressing security, privacy, and ethical concerns. We then illustrate opportunities in key application domains—such as healthcare, smart cities, agriculture, and environmental monitoring—where localized intelligence on resource-limited devices can have broad societal impact. By exploring architectural co-design strategies, algorithmic innovations, and pressing research gaps, this paper offers a roadmap for future investigations and industrial applications of AI in resource-constrained devices. Full article

21 pages, 791 KiB  
Article
Building Equi-Width Histograms on Homomorphically Encrypted Data
by Dragoș Lazea, Anca Hangan and Tudor Cioara
Future Internet 2025, 17(6), 256; https://doi.org/10.3390/fi17060256 - 10 Jun 2025
Viewed by 266
Abstract
Histograms are widely used for summarizing data distributions, detecting anomalies, and improving machine learning models’ accuracy. However, traditional histogram-based methods require access to raw data, raising privacy concerns, particularly in sensitive IoT applications. Encryption-based techniques offer potential solutions; however, they typically secure data only in transit or at rest, requiring decryption during analysis, which exposes raw data to potential privacy risks. In this paper, we propose a method for constructing privacy-preserving histograms directly on homomorphically encrypted IoT data, leveraging the Fast Fully Homomorphic Encryption over the Torus (TFHE) scheme implemented in the Concrete framework. To overcome the challenges posed by homomorphic encryption, we redesign the traditional histogram construction algorithm, optimizing it for secure computation by addressing constraints related to nested loops and conditional statements. As an evaluation use case, we consider an outlier detection mechanism based on histogram frequency counts, ensuring that all data and computations remain encrypted throughout the process. Our method achieves results consistent with plaintext-based outlier detection while maintaining reasonable computational overhead compared to that reported in the existing literature. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
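The redesign can be illustrated on plaintext: the NumPy sketch below computes equi-width bin counts without data-dependent branches (comparisons become arithmetic), which is the kind of structure that can then be evaluated over TFHE ciphertexts:

```python
# Branch-free equi-width histogram: comparisons produce 0/1 indicators that are summed.
import numpy as np

def branch_free_histogram(values, lo, hi, n_bins):
    edges = lo + (hi - lo) * np.arange(n_bins + 1) / n_bins
    # For each (value, bin) pair: 1 if left_edge <= value < right_edge, else 0.
    in_bin = (values[:, None] >= edges[None, :-1]) & (values[:, None] < edges[None, 1:])
    return in_bin.astype(np.int64).sum(axis=0)       # per-bin frequency counts

data = np.array([1.0, 2.5, 2.7, 9.9, 4.2, 2.9])
counts = branch_free_histogram(data, lo=0.0, hi=10.0, n_bins=5)
print(counts)   # [1 3 1 0 1] -> low-count bins flag candidate outliers
```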

20 pages, 1102 KiB  
Article
Exact and Approximation Algorithms for Task Offloading with Service Caching and Dependency in Mobile Edge Computing
by Bowen Cui and Jianwei Zhang
Future Internet 2025, 17(6), 255; https://doi.org/10.3390/fi17060255 - 10 Jun 2025
Viewed by 226
Abstract
With the continuous development of the Internet of Things (IoT) and communication technologies, the demand for low latency in practical applications is becoming increasingly significant. Mobile edge computing, as a promising computational model, is receiving growing attention. However, most existing studies fail to consider two critical factors: task dependency and service caching. Additionally, the majority of proposed solutions offer no guarantee relative to the optimal solution. We investigate the task offloading problem in mobile edge computing. Considering the requirements of applications for service caching and task dependency, we define an optimization problem to minimize the delay under the constraint of maximum completion cost and present a (1+ϵ)-approximation algorithm and an exact algorithm. Specifically, the offloading scheme is determined based on the relationships between tasks as well as the cost and delay incurred by data transmission and task execution. Simulation results demonstrate that in all cases, the offloading schemes obtained by our algorithms consistently outperform those of other algorithms. Moreover, the approximation ratio of the approximation algorithm to the optimal solution is validated to be less than (1+ϵ), and the exact algorithm consistently produces the optimal solution. Full article

29 pages, 9734 KiB  
Article
Internet of Things (IoT)-Based Solutions for Uneven Roads and Balanced Vehicle Systems Using YOLOv8
by Momotaz Begum, Abm Kamrul Islam Riad, Abdullah Al Mamun, Thofazzol Hossen, Salah Uddin, Md Nurul Absur and Hossain Shahriar
Future Internet 2025, 17(6), 254; https://doi.org/10.3390/fi17060254 - 9 Jun 2025
Viewed by 473
Abstract
Uneven roads pose significant challenges to vehicle stability, passenger comfort, and safety, especially in snowy and mountainous regions. These problems are often complex and challenging to resolve with traditional detection and stabilization methods. This paper presents a dual-method approach to improving vehicle stability by identifying road irregularities and dynamically adjusting the balance. The proposed solution combines YOLOv8 for real-time road anomaly detection with a GY-521 sensor to track the speed of the servo motors, facilitating immediate stabilization. YOLOv8 achieves a peak precision of 0.99 at a confidence threshold of 1.0 in surface recognition, surpassing conventional sensor-based detection. The vehicle design is divided into two sections: an upper passenger seating area and a lower section that contains the engine and wheels. The GY-521 sensor is strategically placed to monitor road conditions, while the servo motor stabilizes the upper section, ensuring passenger comfort and reducing the risk of accidents. This setup maintains stability even on uneven terrain. Furthermore, the proposed solution significantly reduces collision risk, vehicle wear, and maintenance costs while improving operational efficiency. Its compatibility with various vehicles and capabilities makes it an excellent candidate for enhancing road safety and driving experience in challenging environments. In addition, this work marks a crucial step towards a safer, more sustainable, and more comfortable transportation system. Full article
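A hedged sketch of the detection side using the ultralytics package; the checkpoint, input frame, and class names are assumptions, since the paper fine-tunes its own road-anomaly model:

```python
# YOLOv8 inference on a single road frame (illustrative, not the paper's weights).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                            # a pretrained checkpoint
results = model.predict("road_frame.jpg", conf=0.5)   # hypothetical input frame
for box in results[0].boxes:
    cls_id = int(box.cls[0])
    print(model.names[cls_id], float(box.conf[0]), box.xyxy[0].tolist())
```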

18 pages, 1289 KiB  
Article
Topology-Aware Anchor Node Selection Optimization for Enhanced DV-Hop Localization in IoT
by Haixu Niu, Yonghai Li, Shuaixin Hou, Tianfei Chen, Lijun Sun, Mingyang Gu and Muhammad Irsyad Abdullah
Future Internet 2025, 17(6), 253; https://doi.org/10.3390/fi17060253 - 8 Jun 2025
Viewed by 243
Abstract
Node localization is a critical challenge in Internet of Things (IoT) applications. The DV-Hop algorithm, which relies on hop counts for localization, assumes that network nodes are uniformly distributed. It estimates actual distances between nodes based on the number of hops. However, in practical IoT networks, node distribution is often non-uniform, leading to complex and irregular topologies that significantly reduce the localization accuracy of the original DV-Hop algorithm. To improve localization performance in non-uniform topologies, we propose an enhanced DV-Hop algorithm using Grey Wolf Optimization (GWO). First, the impact of non-uniform node distribution on hop count and average hop distance is analyzed. A binary Grey Wolf Optimization algorithm (BGWO) is then applied to develop an optimal anchor node selection strategy. This strategy eliminates anchor nodes with high estimation errors and selects a subset of high-quality anchors to improve the localization of unknown nodes. Second, in the multilateration stage, the traditional least square method is replaced by a continuous GWO algorithm to solve the distance equations with higher precision. Simulation results show that the proposed GWO-enhanced DV-Hop algorithm significantly improves localization accuracy in non-uniform topologies. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
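For reference, the baseline DV-Hop arithmetic the paper improves on looks like this in NumPy: hop counts are scaled by an average hop distance, and the position is solved by least squares multilateration (all values here are illustrative):

```python
# Baseline DV-Hop: distance = hops * avg hop distance, then linearized least squares.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # known anchor positions
hops_to_unknown = np.array([3, 4, 4])                         # hop counts to the unknown node
avg_hop_dist = 1.6                                            # simplified single estimate
d = hops_to_unknown * avg_hop_dist                            # estimated distances

# Linearize ||x - a_i||^2 = d_i^2 against the last anchor and solve A x = b.
A = 2 * (anchors[-1] - anchors[:-1])
b = (d[:-1] ** 2 - d[-1] ** 2
     - (anchors[:-1] ** 2).sum(axis=1) + (anchors[-1] ** 2).sum())
x = np.linalg.lstsq(A, b, rcond=None)[0]
print("estimated position:", x)
```

The paper's enhancement replaces this closed-form least squares step with a continuous GWO search and filters the anchors beforehand with BGWO.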

37 pages, 2907 KiB  
Review
LLM4Rec: A Comprehensive Survey on the Integration of Large Language Models in Recommender Systems—Approaches, Applications and Challenges
by Sarama Shehmir and Rasha Kashef
Future Internet 2025, 17(6), 252; https://doi.org/10.3390/fi17060252 - 4 Jun 2025
Viewed by 873
Abstract
The synthesis of large language models (LLMs) and recommender systems has been a game-changer in tailored content delivery, with applications ranging from e-commerce, social media, and education to health care. This survey covers the usage of LLMs for content recommendations (LLM4Rec). LLM4Rec has opened up a whole set of challenges in terms of scale, real-time processing, and data privacy, all of which we touch upon, along with potential future directions for research in areas such as multimodal recommendations and reinforcement learning for long-term engagement. This survey combines existing developments and outlines possible future developments, thus becoming a point of reference for researchers and practitioners in developing the future of LLM-based recommendation systems. Full article
(This article belongs to the Special Issue Deep Learning in Recommender Systems)

23 pages, 750 KiB  
Article
Hybrid Model for Novel Attack Detection Using a Cluster-Based Machine Learning Classification Approach for the Internet of Things (IoT)
by Naveed Ahmed, Md Asri Ngadi, Abdulaleem Ali Almazroi and Nouf Atiahallah Alghanmi
Future Internet 2025, 17(6), 251; https://doi.org/10.3390/fi17060251 - 31 May 2025
Viewed by 375
Abstract
To combat the growing danger of zero-day attacks on IoT networks, this study introduces a Cluster-Based Classification (CBC) method. Security vulnerabilities have become more apparent with the growth of IoT devices, calling for new approaches to identify unique threats quickly. The hybrid CBC approach uses optimized k-means clustering to find commonalities across different abnormalities, intending to quickly identify and classify unknown harmful attacks in a varied IoT network. The technique is fine-tuned for eight-class and two-class classifications, supporting different attacks using the IoTCIC2023 dataset and SelectKBest feature selection. Robust analysis is achieved by evaluating and aggregating the performance of machine learning classifiers such as XGBoost, AdaBoost, KNN, and Random Forest. In two-class classification, Random Forest achieves 95.11% accuracy, while in eight-class classification, KNN tops the charts with 88.24%. These results demonstrate noteworthy accuracy. The suggested CBC technique is effective, as shown by comparisons with state-of-the-art approaches. Despite several caveats and dataset specifications, this study provides a useful tool for academics and practitioners in the ever-changing field of cybersecurity by suggesting a method to strengthen the security of IoT networks against new threats. Full article
(This article belongs to the Special Issue Privacy and Security Issues in IoT Systems)
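One plausible reading of the cluster-based classification idea, sketched with scikit-learn on synthetic data (not the authors' exact pipeline): select features, cluster the traffic, then train one classifier per cluster:

```python
# Cluster-Based Classification sketch: SelectKBest -> K-means -> per-cluster RF.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=3000, n_features=30, n_informative=10)
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)

clusters = KMeans(n_clusters=4, n_init=10).fit_predict(X_sel)
per_cluster = {}
for c in range(4):                                  # one local model per cluster
    mask = clusters == c
    per_cluster[c] = RandomForestClassifier().fit(X_sel[mask], y[mask])
print({c: m.score(X_sel[clusters == c], y[clusters == c]) for c, m in per_cluster.items()})
```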

25 pages, 5629 KiB  
Article
Signal Preprocessing for Enhanced IoT Device Identification Using Support Vector Machine
by Rene Francisco Santana-Cruz, Martin Moreno, Daniel Aguilar-Torres, Román Arturo Valverde-Domínguez and Rubén Vázquez-Medina
Future Internet 2025, 17(6), 250; https://doi.org/10.3390/fi17060250 - 31 May 2025
Viewed by 316
Abstract
Device identification based on radio frequency fingerprinting is widely used to improve the security of Internet of Things systems. However, noise and acquisition inconsistencies in raw radio frequency signals can affect the effectiveness of the classification, identification, and authentication algorithms used to distinguish Bluetooth devices. This study investigates how RF signal preprocessing techniques affect the performance of a support vector machine classifier based on radio frequency fingerprinting. Four options derived from an RF signal preprocessing technique are evaluated, each of which is applied to the raw radio frequency signals in an attempt to improve the consistency between signals emitted by the same Bluetooth device. Experiments conducted on raw Bluetooth signals from twenty-four smartphone radios from two public databases of RF signals show that selecting an appropriate RF signal preprocessing approach can significantly improve the effectiveness of a support vector machine classifier-based algorithm used to discriminate Bluetooth devices. Full article
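An illustrative preprocessing-plus-SVM chain on synthetic stand-in signals: align each capture on its energy peak, normalize, and classify with an RBF SVM. The paper's actual preprocessing options differ in detail.

```python
# Toy RF fingerprinting: peak-aligned, normalized segments fed to an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def preprocess(sig, width=256):
    center = int(np.argmax(np.abs(sig)))            # align on the energy peak
    seg = sig[max(0, center - width // 2): center + width // 2]
    seg = np.pad(seg, (0, width - len(seg)))
    return seg / (np.linalg.norm(seg) + 1e-12)      # amplitude normalization

def capture(device):                                # two fake "devices" with
    t = np.linspace(0, 1, 400)                      # slightly different transients
    pulse = np.exp(-((t - 0.5) ** 2) / (0.01 + 0.002 * device))
    return pulse * np.sin(2 * np.pi * (50 + device) * t) + 0.1 * rng.normal(size=400)

X = np.array([preprocess(capture(d)) for d in (0, 1) for _ in range(100)])
y = np.repeat([0, 1], 100)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
print("accuracy:", SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte))
```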

54 pages, 17044 KiB  
Review
Perspectives and Research Challenges in Wireless Communications Hardware for the Future Internet and Its Applications Services
by Dimitrios G. Arnaoutoglou, Tzichat M. Empliouk, Theodoros N. F. Kaifas, Constantinos L. Zekios and George A. Kyriacou
Future Internet 2025, 17(6), 249; https://doi.org/10.3390/fi17060249 - 31 May 2025
Viewed by 581
Abstract
The transition from 5G to 6G wireless systems introduces new challenges at the physical layer, including the need for higher frequency operations, massive MIMO deployment, advanced beamforming techniques, and sustainable energy harvesting mechanisms. A plethora of feature articles, review papers, white papers, and roadmaps elaborate on the perspectives and research challenges of wireless systems in general, including both unified physical and cyber space. Hence, this paper presents a comprehensive review of the technological challenges and recent advancements in wireless communication hardware that underpin the development of next-generation networks, particularly 6G. Emphasizing the physical layer, the study explores critical enabling technologies, including beamforming, massive MIMO, reconfigurable intelligent surfaces (RIS), millimeter-wave (mmWave) and terahertz (THz) communications, wireless power transfer, and energy harvesting. These technologies are analyzed in terms of their functional roles, implementation challenges, and integration into future wireless infrastructure. Beyond traditional physical-layer components, the paper also discusses the role of reconfigurable RF front-ends, innovative antenna architectures, and user-end devices that contribute to the adaptability and efficiency of emerging communication systems. In addition, the inclusion of application-driven paradigms such as digital twins highlights how new use cases are shaping design requirements and pushing the boundaries of hardware capabilities. By linking foundational physical-layer technologies with evolving application demands, this work provides a holistic perspective aimed at guiding future research directions and informing the design of scalable, energy-efficient, and resilient wireless communication platforms for the Future Internet. Specifically, we first identify the demands and, in turn, explore existing or emerging technologies that have the potential to meet these needs. In particular, extended attention is given to state-of-the-art antennas for massive MIMO terrestrial and non-terrestrial networks. Full article
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems)

31 pages, 1011 KiB  
Article
A Tale of Many Networks: Splitting and Merging of Chord-like Overlays in Partitioned Networks
by Tobias Amft and Kalman Graffi
Future Internet 2025, 17(6), 248; https://doi.org/10.3390/fi17060248 - 31 May 2025
Viewed by 317
Abstract
Peer-to-peer overlays define an approach to operating data management platforms that is robust against censorship attempts by countries or large enterprises. The robustness of such overlays is endangered in the presence of national Internet isolations, as has been the case in recent years during political revolutions. In this paper, we focus on splits and, with stronger emphasis, on the merging of ring-based overlays in the presence of network partitioning in the underlying Internet. We present a new merging algorithm named the Ring Reunion Algorithm and highlight a method for reducing the number of messages in both separated and united overlay states. The algorithm is parallelized for accelerated merging and is able to automatically detect overlay partitioning and start the corresponding merging processes. Through simulations, we evaluate the new Ring Reunion Algorithm in its simple and parallelized forms against a plain Chord algorithm, the Chord–Zip algorithm, and two versions of the Ring-Unification Algorithm. The evaluation shows that only our parallelized Ring Reunion Algorithm allows the merging of two, three, and more isolated overlay networks in parallel. Our approach quickly merges the overlays, even under churn, and stabilizes the node contacts in the overlay with small traffic overhead. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)

27 pages, 1766 KiB  
Article
Enhanced Peer-to-Peer Botnet Detection Using Differential Evolution for Optimized Feature Selection
by Sangita Baruah, Vaskar Deka, Dulumani Das, Utpal Barman and Manob Jyoti Saikia
Future Internet 2025, 17(6), 247; https://doi.org/10.3390/fi17060247 - 30 May 2025
Viewed by 386
Abstract
With the growing prevalence of cybercrime, botnets have emerged as a significant threat, infiltrating an increasing number of legitimate computers annually. Challenges arising for organizations, educational institutions, and individuals as a result of botnet attacks include distributed denial of service (DDoS) attacks, phishing attacks, extortion attacks, spam generation, and identity theft. The stealthy nature of botnets, characterized by constant alterations in network structures, attack methodologies, and data transmission patterns, makes their detection increasingly difficult. This paper introduces an innovative strategy for mitigating botnet threats. Employing differential evolution, we propose a feature selection approach that enhances the ability to discern peer-to-peer (P2P) botnet traffic amidst evolving cyber threats. Differential evolution is a population-based meta-heuristic technique that can be applied to nonlinear and non-differentiable optimization problems owing to its fast convergence and use of few control parameters. In addition, an ensemble learning algorithm is employed to support and enhance the detection phase, providing a robust defense against the dynamic and sophisticated nature of modern P2P botnets. The results demonstrate that our model achieves 99.99% accuracy, 99.49% precision, 98.98% recall, and a 99.23% F1-score, which outperform state-of-the-art P2P detection approaches. Full article
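A sketch of DE-driven feature selection with SciPy: a continuous genome is thresholded into a binary feature mask, and the objective is cross-validated classification error. The dataset and fitness function are toy stand-ins:

```python
# Differential evolution over a continuous genome -> binary feature mask.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=20, n_informative=6, random_state=0)

def fitness(v):
    mask = v > 0.5                                   # continuous genome -> binary mask
    if not mask.any():
        return 1.0                                   # penalize empty selections
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return 1.0 - acc                                 # DE minimizes, so use error

result = differential_evolution(fitness, bounds=[(0, 1)] * 20, maxiter=10, seed=0, polish=False)
print("selected features:", np.flatnonzero(result.x > 0.5))
```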

22 pages, 698 KiB  
Article
An AI-Driven Framework for Integrated Security and Privacy in Internet of Things Using Quantum-Resistant Blockchain
by Mahmoud Elkhodr
Future Internet 2025, 17(6), 246; https://doi.org/10.3390/fi17060246 - 30 May 2025
Viewed by 422
Abstract
The growing deployment of the Internet of Things (IoT) across various sectors introduces significant security and privacy challenges. Although numerous individual solutions exist, comprehensive frameworks that effectively combine advanced technologies to address evolving threats are lacking. This paper presents the Integrated Adaptive Security Framework for IoT (IASF-IoT), which integrates artificial intelligence, blockchain technology, and quantum-resistant cryptography into a unified solution tailored for IoT environments. Central to the framework is an adaptive AI-driven security orchestration mechanism, complemented by blockchain-based identity management, lightweight quantum-resistant protocols, and Digital Twins to predict and proactively mitigate threats. A theoretical performance model and large-scale simulation involving 1000 heterogeneous IoT devices were used to evaluate the framework. Results showed that IASF-IoT achieved detection accuracy between 85% and 99%, with simulated energy consumption remaining below 1.5 mAh per day and response times averaging around 2 s. These findings suggest that the framework offers strong potential for scalable, low-overhead security in resource-constrained IoT environments. Full article
(This article belongs to the Special Issue Security and Privacy in AI-Powered Systems)

27 pages, 5632 KiB  
Article
Semantic Fusion of Health Data: Implementing a Federated Virtualized Knowledge Graph Framework Leveraging Ontop System
by Abid Ali Fareedi, Stephane Gagnon, Ahmad Ghazawneh and Raul Valverde
Future Internet 2025, 17(6), 245; https://doi.org/10.3390/fi17060245 - 30 May 2025
Viewed by 338
Abstract
Data integration (DI) and semantic interoperability (SI) are critical in healthcare, enabling seamless, patient-centric data sharing across systems to meet the demand for instant, unambiguous access to health information. Federated information systems (FIS) pose significant challenges for seamless DI and SI, stemming from diverse data sources and models. We present a hybrid ontology-based design science research engineering (ODSRE) methodology that combines design science activities with ontology engineering principles to address these issues. The ODSRE constructs a systematic mechanism leveraging the Ontop virtual paradigm to establish a state-of-the-art federated virtual knowledge graph framework (FVKG) that embeds a virtualized knowledge graph approach to mitigate the aforementioned challenges effectively. The proposed FVKG helps construct a virtualized data federation leveraging the Ontop semantic query engine, which effectively resolves data bottlenecks. Using a virtualized technique, the FVKG helps to reduce data migration, ensures low latency and dynamic freshness, and facilitates real-time access while upholding integrity and coherence throughout the federation system. As a result, we suggest a customized framework for constructing ontological monolithic semantic artifacts, especially in FIS. The proposed FVKG incorporates ontology-based data access (OBDA) to build a monolithic virtualized repository that integrates various ontology-driven artifacts and ensures semantic alignment using schema mapping techniques. Full article

27 pages, 4256 KiB  
Article
A Robust Conformal Framework for IoT-Based Predictive Maintenance
by Alberto Moccardi, Claudia Conte, Rajib Chandra Ghosh and Francesco Moscato
Future Internet 2025, 17(6), 244; https://doi.org/10.3390/fi17060244 - 30 May 2025
Viewed by 480
Abstract
This study, set within the vast and varied research field of industrial Internet of Things (IoT) systems, proposes a methodology to address uncertainty quantification (UQ) issues in predictive maintenance (PdM) practices. At its core, this paper leverages the commercial modular aero-propulsion system simulation (CMAPSS) dataset to evaluate different artificial intelligence (AI) prognostic algorithms for remaining useful life (RUL) forecasting while supporting the estimation of a robust confidence interval (CI). The methodology primarily involves the comparison of statistical learning (SL), machine learning (ML), and deep learning (DL) techniques for each scenario of the CMAPSS, evaluating performance through a tailored metric, the S-score, and then benchmarking diverse conformal-based uncertainty estimation techniques, namely naive, weighted, and bootstrapping, offering a more suitable and reliable alternative to classical RUL prediction. The results highlight the peculiarities and benefits of the conformal approach (probabilistic models remain preferable when the machine operates under multiple operating conditions), suggest the use of weighted conformal practices in non-exchangeability conditions, and recommend bootstrapping alternatives for contexts with a more substantial presence of noise in the data. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Internet of Things (IoT))
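For readers unfamiliar with the conformal machinery the abstract benchmarks, the following is a minimal sketch of naive split conformal prediction for RUL intervals, assuming an illustrative regressor and synthetic stand-in data rather than the paper's CMAPSS pipeline, S-score model selection, or weighted and bootstrapped variants.

```python
# Minimal sketch of naive split conformal prediction for RUL intervals.
# Regressor choice, calibration split, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))        # stand-in for sensor features
y = rng.uniform(0, 300, size=1000)     # stand-in for RUL labels

X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

alpha = 0.1                            # target ~90% coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

x_new = rng.normal(size=(1, 10))
pred = model.predict(x_new)[0]
print(f"RUL in [{pred - q:.1f}, {pred + q:.1f}] at ~{1 - alpha:.0%} coverage")
```

Under exchangeability of calibration and test points, this interval carries a finite-sample coverage guarantee, which is what makes the conformal approach attractive when probabilistic model assumptions are hard to justify.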
18 pages, 546 KiB  
Article
Resource Allocation for Federated Learning with Heterogeneous Computing Capability in Cloud–Edge–Client IoT Architecture
by Xubo Zhang and Yang Luo
Future Internet 2025, 17(6), 243; https://doi.org/10.3390/fi17060243 - 30 May 2025
Viewed by 280
Abstract
A federated learning (FL) framework for cloud–edge–client collaboration performs local aggregation of model parameters at the edge, reducing communication overhead from clients to the cloud. This framework is particularly suitable for Internet of Things (IoT)-based secure computing scenarios that require extensive computation and frequent parameter updates, as it leverages the distributed nature of IoT devices to enhance data privacy and reduce latency. Under heterogeneous device conditions, however, clients with high computing capability are left waiting for slower ones. To address this, this paper proposes an improved resource allocation scheme based on a three-layer FL framework. The scheme optimizes the volume of parameters communicated from clients to the edge through random dropout before communication and parameter completion after it, ensuring that local models reach the edge simultaneously regardless of differing computation times and thereby eliminating the long waits experienced by high-computation-capability clients. Additionally, it optimizes the similarity pairing method, the Shapley Value (SV) aggregation strategy, and the client selection method to better accommodate the heterogeneous computing capabilities found in IoT environments. Experiments demonstrate that the improved scheme is better suited to heterogeneous IoT client scenarios, reducing system latency and energy consumption while enhancing model performance. Full article
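A minimal sketch of the random-dropout-and-completion idea follows, under the assumption (for illustration only) that a slow client drops a fraction of its parameter vector before uploading and the edge fills the dropped entries from the previous global model. The paper's full scheme also covers similarity pairing, SV aggregation, and client selection, which are not shown here.

```python
# Illustrative sketch: a client randomly drops parameters to shorten its
# upload; the edge completes missing entries from the previous global model
# so uploads from fast and slow clients align in time. All names and the
# completion rule are assumptions for this sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(42)

def client_upload(local_params: np.ndarray, drop_rate: float):
    """Randomly drop a fraction of parameters; send kept values + indices."""
    keep = rng.random(local_params.shape) >= drop_rate
    idx = np.flatnonzero(keep)
    return idx, local_params[idx]

def edge_complete(idx: np.ndarray, values: np.ndarray,
                  prev_global: np.ndarray) -> np.ndarray:
    """Rebuild a full vector, filling dropped entries from the old global."""
    completed = prev_global.copy()
    completed[idx] = values
    return completed

prev_global = np.zeros(8)
local = rng.normal(size=8)

# A slow client drops 50% of its entries to shorten the upload.
idx, vals = client_upload(local, drop_rate=0.5)
print(edge_complete(idx, vals, prev_global))
```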
22 pages, 2851 KiB  
Review
Inter-Data Center RDMA: Challenges, Status, and Future Directions
by Xiaoying Huang and Jingwei Wang
Future Internet 2025, 17(6), 242; https://doi.org/10.3390/fi17060242 - 29 May 2025
Viewed by 359
Abstract
Remote Direct Memory Access (RDMA) has been widely deployed in data centers (DCs) thanks to its high-bandwidth, low-latency, and low-overhead characteristics. In recent years, as applications relying on inter-DC interconnection have continued to emerge, the demand for deploying RDMA across DCs has risen. Numerous studies have focused on intra-DC RDMA, whereas research on inter-DC RDMA remains relatively scarce, though it is growing. Motivated by this trend, this article identifies and discusses the specific challenges of inter-DC RDMA deployment, such as congestion control and load balancing, surveys recent progress in improving its applicability, and presents future research directions and opportunities. As the first review article focusing on inter-DC RDMA, it aims to provide valuable insights and guidance for future research in this emerging field, establishing a foundation to inspire and direct subsequent studies. Full article
21 pages, 14175 KiB  
Article
Navigating Data Corruption in Machine Learning: Balancing Quality, Quantity, and Imputation Strategies
by Qi Liu and Wanjing Ma
Future Internet 2025, 17(6), 241; https://doi.org/10.3390/fi17060241 - 29 May 2025
Viewed by 248
Abstract
Data corruption, including missing and noisy entries, is a common challenge in real-world machine learning. This paper examines its impact and mitigation strategies through two experimental setups: supervised NLP tasks (NLP-SL) and deep reinforcement learning for traffic signal control (Signal-RL). The study analyzes how varying corruption levels affect model performance, evaluates imputation strategies, and assesses whether expanding datasets can counteract corruption effects. The results indicate that performance degradation follows a diminishing-return pattern that is well modeled by an exponential function. Noisy data harm performance more than missing data, especially in sequential tasks like Signal-RL, where errors may compound. Imputation helps recover missing data but can introduce noise, with its effectiveness depending on corruption severity and imputation accuracy. The study identifies clear boundaries between when imputation is beneficial and when it is harmful, and classifies tasks as either noise-sensitive or noise-insensitive. Larger datasets reduce corruption effects but offer diminishing gains at high corruption levels. These insights guide the design of robust systems, emphasizing smart data collection, imputation decisions, and preprocessing strategies in noisy environments. Full article
(This article belongs to the Special Issue Smart Technology: Artificial Intelligence, Robotics and Algorithms)
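The exponential diminishing-return pattern reported in the abstract can be sketched as a simple curve fit. In the snippet below the data points are synthetic stand-ins, not the paper's measurements, and the functional form P(c) = a·exp(−b·c) + d is an assumption consistent with the abstract's description.

```python
# Illustrative fit of an exponential degradation curve: performance P(c)
# as corruption level c grows. Data are synthetic stand-ins; the form
# a*exp(-b*c) + d is assumed per the abstract, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def perf(c, a, b, d):
    return a * np.exp(-b * c) + d

corruption = np.linspace(0.0, 1.0, 11)        # fraction of corrupted entries
observed = perf(corruption, 0.6, 3.0, 0.3)    # synthetic "measurements"
observed += np.random.default_rng(1).normal(0, 0.01, corruption.size)

(a, b, d), _ = curve_fit(perf, corruption, observed, p0=(0.5, 1.0, 0.3))
print(f"fitted P(c) = {a:.2f}*exp(-{b:.2f}*c) + {d:.2f}")
```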
26 pages, 1761 KiB  
Article
Enhancing Customer Quality of Experience Through Omnichannel Digital Strategies: Evidence from a Service Environment in an Emerging Context
by Fabricio Miguel Moreno-Menéndez, Victoriano Eusebio Zacarías-Rodríguez, Sara Ricardina Zacarías-Vallejos, Vicente González-Prida, Pedro Emil Torres-Quillatupa, Hilario Romero-Girón, José Francisco Vía y Rada-Vittes and Luis Ángel Huaynate-Espejo
Future Internet 2025, 17(6), 240; https://doi.org/10.3390/fi17060240 - 29 May 2025
Viewed by 387
Abstract
The proliferation of digital platforms and interactive technologies has transformed the way service providers engage with their customers, particularly in emerging economies, where digital inclusion is an ongoing process. This study explores the relationship between omnichannel strategies and customer satisfaction, conceptualized here as a proxy for Quality of Experience (QoE), within a smart service station located in a digitally underserved region. Grounded in customer journey theory and the expectancy–disconfirmation paradigm, the study investigates how data integration, digital payment systems, and logistical flexibility, key components of intelligent e-service systems, influence user perceptions and satisfaction. Based on a correlational design with a non-probabilistic sample of 108 customers, the findings reveal a moderate association between overall omnichannel integration and satisfaction (ρ = 0.555, p < 0.01). However, a multiple regression analysis indicates that no individual dimension significantly predicts satisfaction (adjusted R² = 0.002). These results suggest that while users value system integration and interaction flexibility, no single technical feature drives satisfaction independently. The study contributes to the growing field of intelligent human-centric service systems by contextualizing QoE and digital inclusion within emerging markets and by emphasizing the importance of perceptual factors in ICT-enabled environments. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
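As a hedged sketch of the reported analysis pipeline, the snippet below computes a Spearman rank correlation and a multiple regression on synthetic data. The dimension names, data, and coefficients are invented; only the sample size (n = 108) comes from the abstract.

```python
# Illustrative reproduction of the analysis pattern: Spearman correlation
# between an overall integration score and satisfaction, then a multiple
# regression over the individual dimensions. All data are synthetic.
import numpy as np
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 108                                  # sample size reported in the study
dims = rng.normal(size=(n, 3))           # data integration, payments, logistics
satisfaction = 0.3 * dims.sum(axis=1) + rng.normal(size=n)

overall = dims.mean(axis=1)              # overall omnichannel integration
rho, p = spearmanr(overall, satisfaction)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")

model = sm.OLS(satisfaction, sm.add_constant(dims)).fit()
print(f"adjusted R^2 = {model.rsquared_adj:.3f}")
```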