Future Internet, Volume 17, Issue 7 (July 2025) – 33 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 4661 KiB  
Article
The Investigation of Queuing Models to Calculate Journey Times to Develop an Intelligent Transport System for Smart Cities
by Vatsal Mehta, Glenford Mapp and Vaibhav Gandhi
Future Internet 2025, 17(7), 302; https://doi.org/10.3390/fi17070302 - 7 Jul 2025
Viewed by 206
Abstract
Intelligent transport systems are a major component of smart cities because their deployment should result in reduced journey times, less traffic congestion and a significant reduction in road deaths, which will greatly improve the quality of life of their citizens. New technologies such as vehicular networks allow more information to be available in real time, and this information can be used with new analytical models to obtain more accurate estimates of journey times. This would be extremely useful to drivers and would also enable transport authorities to optimise the transport network. This paper addresses these issues using a model-based approach to provide a new way of estimating the delay along specified routes. A journey is defined as the traversal of several road links and junctions from source to destination. The delay at the junctions is analysed using the zero-server Markov chain technique. This is then combined with the Jackson network to analyse the delay across multiple junctions. The delay at road links is analysed using an M/M/K/K model. The results were validated using two simulators: SUMO and VISSIM. A real scenario is also examined to determine the best route. The preliminary results of this model-based analysis look promising but more work is needed to make it useful for wide-scale deployment. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
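
The M/M/K/K model used for road links in the article above is the classic Erlang loss system, so its blocking probability can be computed directly with the standard Erlang-B recurrence. Below is a minimal, self-contained sketch (not the authors' code); the arrival rate, traversal time, and link capacity are invented for illustration.

```python
def erlang_b(k: int, offered_load: float) -> float:
    """Blocking probability of an M/M/K/K (Erlang loss) system, via the
    numerically stable recurrence B(0)=1, B(j) = a*B(j-1)/(j + a*B(j-1))."""
    b = 1.0
    for j in range(1, k + 1):
        b = offered_load * b / (j + offered_load * b)
    return b

# Hypothetical road link: arrivals at 0.8 veh/s, mean traversal time 50 s,
# and room for K = 50 vehicles on the link at once.
lam, mean_traversal, K = 0.8, 50.0, 50
a = lam * mean_traversal          # offered load in Erlangs
pb = erlang_b(K, a)               # probability an arriving vehicle is blocked
print(f"blocking={pb:.4f}, mean vehicles on link={a * (1 - pb):.1f}")
```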

20 pages, 1179 KiB  
Article
Conv1D-GRU-Self Attention: An Efficient Deep Learning Framework for Detecting Intrusions in Wireless Sensor Networks
by Kenan Honore Robacky Mbongo, Kanwal Ahmed, Orken Mamyrbayev, Guanghui Wang, Fang Zuo, Ainur Akhmediyarova, Nurzhan Mukazhanov and Assem Ayapbergenova
Future Internet 2025, 17(7), 301; https://doi.org/10.3390/fi17070301 - 4 Jul 2025
Viewed by 247
Abstract
Wireless Sensor Networks (WSNs) consist of distributed sensor nodes that collect and transmit environmental data, often in resource-constrained and unsecured environments. These characteristics make WSNs highly vulnerable to various security threats. To address this, the objective of this research is to design and evaluate a deep learning-based Intrusion Detection System (IDS) that is both accurate and efficient for real-time threat detection in WSNs. This study proposes a hybrid IDS model combining one-dimensional Convolutional Neural Networks (Conv1Ds), Gated Recurrent Units (GRUs), and Self-Attention mechanisms. A Conv1D extracts spatial features from network traffic, GRU captures temporal dependencies, and Self-Attention emphasizes critical sequence components, collectively enhancing detection of subtle and complex intrusion patterns. The model was evaluated using the WSN-DS dataset and demonstrated superior performance compared to traditional machine learning and simpler deep learning models. It achieved an accuracy of 98.6%, precision of 98.63%, recall of 98.6%, F1-score of 98.6%, and an ROC-AUC of 0.9994, indicating strong predictive capability even with imbalanced data. In addition to centralized training, the model was tested under cooperative, node-based learning conditions, where each node independently detects anomalies and contributes to a collective decision-making framework. This distributed approach improves detection efficiency and robustness. The proposed IDS offers a scalable and resilient solution tailored to the unique challenges of WSN security. Full article
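
As a rough illustration of the Conv1D → GRU → self-attention pipeline described above, here is a minimal PyTorch sketch. The layer sizes, window length, and feature/class counts are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class Conv1DGRUAttention(nn.Module):
    """Conv1D extracts spatial features, GRU captures temporal dependencies,
    and self-attention emphasizes critical steps of the traffic window."""
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=3, padding=1)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):  # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2)).relu().transpose(1, 2)
        z, _ = self.gru(z)
        z, _ = self.attn(z, z, z)
        return self.head(z.mean(dim=1))  # one prediction per traffic window

model = Conv1DGRUAttention(n_features=18, n_classes=5)
print(model(torch.randn(8, 32, 18)).shape)  # torch.Size([8, 5])
```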

19 pages, 1514 KiB  
Article
A UAV Trajectory Optimization and Task Offloading Strategy Based on Hybrid Metaheuristic Algorithm in Mobile Edge Computing
by Yeqiang Zheng, An Li, Yihu Wen and Gaocai Wang
Future Internet 2025, 17(7), 300; https://doi.org/10.3390/fi17070300 - 3 Jul 2025
Viewed by 197
Abstract
In a UAV-assisted mobile edge computing (MEC) communication system, the UAV acts as an aerial base station that receives the data offloaded by multiple ground user devices. Because the battery storage of a UAV is limited, energy saving is a key issue in a UAV-assisted MEC system. For a low-altitude flying UAV, successful obstacle avoidance is also essential. This paper aims to maximize the system energy efficiency (defined as the ratio of the total amount of offloaded data to the energy consumption of the UAV) while meeting the maneuverability and three-dimensional obstacle avoidance constraints of a UAV. A joint optimization strategy that maximizes energy efficiency over the UAV flight trajectory and the user device task offloading rate is proposed. To solve this problem, hybrid alternating metaheuristics for energy optimization are given. Due to the non-convexity and fractional structure of the optimization problem, it is transformed into an equivalent parametric optimization problem using the Dinkelbach method and then divided into two sub-problems that are alternately optimized using metaheuristic algorithms. The experimental results show that the proposed strategy enables a UAV to avoid obstacles during flight by detouring or crossing, with a trajectory that does not overlap with obstacles, effectively achieving two-dimensional and three-dimensional obstacle avoidance. In addition, the proposed solving method has a significantly higher success rate than traditional algorithms, and compared with related optimization strategies it effectively reduces the overall energy consumption of the UAV. Full article
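
The Dinkelbach step mentioned above is a standard device for fractional objectives: maximizing f(x)/g(x) is replaced by a sequence of parametric problems maximizing f(x) - q*g(x), updating q to the current ratio until the parametric optimum reaches zero. A toy sketch follows, with a grid search standing in for the paper's metaheuristic inner solver; the objective functions are invented.

```python
import numpy as np

def dinkelbach(f, g, solve_inner, tol=1e-6, max_iter=50):
    """Maximize f(x)/g(x) with g(x) > 0. `solve_inner(q)` must return an x
    maximizing f(x) - q*g(x); the paper splits and solves this subproblem
    with alternating metaheuristics."""
    q = 0.0
    for _ in range(max_iter):
        x = solve_inner(q)
        if abs(f(x) - q * g(x)) < tol:  # parametric optimum near 0 => q is optimal
            break
        q = f(x) / g(x)                 # Dinkelbach update
    return x, q

# Toy fractional problem: maximize (1 + 2x - x^2) / (1 + x) on [0, 2].
xs = np.linspace(0.0, 2.0, 2001)
f = lambda x: 1 + 2 * x - x ** 2
g = lambda x: 1 + x
solve = lambda q: xs[np.argmax(f(xs) - q * g(xs))]  # grid search as inner solver
x_opt, ratio = dinkelbach(f, g, solve)
print(f"x*={x_opt:.3f}, optimal ratio={ratio:.4f}")
```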

25 pages, 1524 KiB  
Article
Detecting Emerging DGA Malware in Federated Environments via Variational Autoencoder-Based Clustering and Resource-Aware Client Selection
by Ma Viet Duc, Pham Minh Dang, Tran Thu Phuong, Truong Duc Truong, Vu Hai and Nguyen Huu Thanh
Future Internet 2025, 17(7), 299; https://doi.org/10.3390/fi17070299 - 3 Jul 2025
Viewed by 229
Abstract
Domain Generation Algorithms (DGAs) remain a persistent technique used by modern malware to establish stealthy command-and-control (C&C) channels, thereby evading traditional blacklist-based defenses. Detecting such evolving threats is especially challenging in decentralized environments where raw traffic data cannot be aggregated due to privacy or policy constraints. To address this, we present FedSAGE, a security-aware federated intrusion detection framework that combines Variational Autoencoder (VAE)-based latent representation learning with unsupervised clustering and resource-efficient client selection. Each client encodes its local domain traffic into a semantic latent space using a shared, pre-trained VAE trained solely on benign domains. These embeddings are clustered via affinity propagation to group clients with similar data distributions and identify outliers indicative of novel threats without requiring any labeled DGA samples. Within each cluster, FedSAGE selects only the fastest clients for training, balancing computational constraints with threat visibility. Experimental results from the multi-zone DGA dataset show that FedSAGE improves detection accuracy by up to 11.6% and reduces energy consumption by up to 93.8% compared to standard FedAvg under non-IID conditions. Notably, the latent clustering perfectly recovers ground-truth DGA family zones, enabling effective anomaly detection in a fully unsupervised manner while remaining privacy-preserving. These findings demonstrate that FedSAGE is a practical and lightweight approach for decentralized detection of evasive malware, offering a viable solution for secure and adaptive defense in resource-constrained edge environments. Full article
(This article belongs to the Special Issue Security of Computer System and Network)
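
To make the clustering step concrete, the sketch below groups per-client latent embeddings with scikit-learn's affinity propagation, which chooses the number of clusters itself, a useful property when the number of DGA families is unknown. The embeddings are synthetic stand-ins for the VAE outputs; the VAE training and FedSAGE's client selection are not reproduced.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Each row stands in for one client's mean VAE embedding of its domain traffic.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.3, size=(8, 16))   # clients seeing similar benign traffic
novel = rng.normal(3.0, 0.3, size=(2, 16))    # outlier clients seeing a new DGA family
client_embeddings = np.vstack([benign, novel])

ap = AffinityPropagation(random_state=0).fit(client_embeddings)
print("cluster label per client:", ap.labels_)  # outliers land in their own cluster
```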

24 pages, 4350 KiB  
Article
HECS4MQTT: A Multi-Layer Security Framework for Lightweight and Robust Encryption in Healthcare IoT Communications
by Saud Alharbi, Wasan Awad and David Bell
Future Internet 2025, 17(7), 298; https://doi.org/10.3390/fi17070298 - 30 Jun 2025
Viewed by 223
Abstract
Internet of Things (IoT) technology in healthcare has enabled innovative services that enhance patient monitoring, diagnostics and medical data management. However, securing sensitive health data while maintaining the system efficiency of resource-constrained IoT devices remains a critical challenge. This work presents a comprehensive end-to-end IoT security framework for healthcare environments, addressing encryption at two key levels: lightweight encryption at the edge for resource-constrained devices and robust end-to-end encryption when transmitting data to the cloud via MQTT cloud brokers. The proposed system leverages a multi-broker MQTT architecture to optimize resource utilization and enhance message reliability. At the edge, lightweight cryptographic techniques ensure low-latency encryption before transmitting data via a secure MQTT broker hosted within the hospital infrastructure. To safeguard data as it moves beyond the hospital to the cloud, stronger end-to-end encryption, such as AES-256 and TLS 1.3, is applied to ensure confidentiality and resilience over untrusted networks. A proof-of-concept Python 3.10-based MQTT implementation is developed using open-source technologies. Security and performance evaluations demonstrate the feasibility of the multi-layer encryption approach, effectively balancing computational overhead with data protection, and show that our novel HECS4MQTT (Health Edge Cloud Security for MQTT) framework achieves a unique balance between efficiency and security. Unlike existing solutions that either impose high computational overhead at the edge or rely solely on transport-layer protection, HECS4MQTT introduces a layered encryption strategy that decouples edge and cloud security requirements. This design minimizes processing delays on constrained devices while maintaining strong cryptographic protection when data crosses trust boundaries. The framework also introduces a lightweight bridge component for re-encryption and integrity enforcement, thereby reducing broker compromise risk and supporting compliance with healthcare security regulations. Our HECS4MQTT framework offers a scalable, adaptable, and trust-separated security model, ensuring enhanced confidentiality, integrity, and availability of healthcare data while remaining suitable for deployment in real-world, latency-sensitive, and resource-limited medical environments. Full article
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)
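
As an illustration of the payload protection the framework layers on top of MQTT, the sketch below encrypts a reading with AES-256-GCM via the `cryptography` library before it would leave the hospital broker for the cloud. The topic, payload, and key handling are hypothetical; this is not the HECS4MQTT bridge itself.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # AES-256 key; distribution out of scope
aead = AESGCM(key)
nonce = os.urandom(12)                       # 96-bit nonce, unique per message
reading = b'{"patient":"anon-17","spo2":97}'
ciphertext = aead.encrypt(nonce, reading, b"ward-7/vitals")  # topic as associated data

# The bridge would then publish nonce + ciphertext over TLS 1.3, e.g. with
# paho-mqtt: client.publish("ward-7/vitals", nonce + ciphertext)
assert aead.decrypt(nonce, ciphertext, b"ward-7/vitals") == reading  # cloud side
```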

25 pages, 2065 KiB  
Article
Lower-Latency Screen Updates over QUIC with Forward Error Correction
by Nooshin Eghbal and Paul Lu
Future Internet 2025, 17(7), 297; https://doi.org/10.3390/fi17070297 - 30 Jun 2025
Viewed by 148
Abstract
There are workloads that do not need the total data ordering enforced by the Transmission Control Protocol (TCP). For example, Virtual Network Computing (VNC) has a sequence of pixel-based updates in which the order of rectangles can be relaxed. However, VNC runs over TCP and can have higher latency due to unnecessary blocking to ensure total ordering. By using Quick UDP Internet Connections (QUIC) as the underlying protocol, we are able to implement a partial order delivery approach, which can be combined with Forward Error Correction (FEC) to reduce data latency. Our earlier work on consistency fences provides a mechanism and semantic foundation for partial ordering. Our new evaluation on the Emulab testbed, with two different synthetic workloads for streaming and non-streaming updates, shows that our partial order and FEC strategy can reduce the blocking time and inter-delivery time of rectangles compared to totally ordered delivery. For one workload, partially ordered data with FEC can reduce the 99-percentile message-blocking time to 0.4 ms versus 230 ms with totally ordered data. That workload was with 0.5% packet loss, 100 ms Round-Trip Time (RTT), and 100 Mbps bandwidth. We study the impact of varying the packet-loss rate, RTT, bandwidth, and congestion control algorithm (CCA) and demonstrate that partial order and FEC latency improvements grow as we increase packet loss and RTT, especially with the emerging Bottleneck Bandwidth and Round-Trip propagation time (BBR) congestion control algorithm. Full article
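
The latency gain from combining partial ordering with FEC comes from repairing a lost packet locally instead of blocking for a retransmission round trip. The sketch below shows the simplest form of the idea, a single XOR parity packet per group of equal-sized rectangles; it illustrates the principle only and is not the paper's coding scheme.

```python
from functools import reduce

def xor_parity(packets):
    """XOR of equal-length packets; acts as one repair symbol for the group."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity, group_size):
    """Rebuild at most one missing packet without waiting for retransmission."""
    missing = [i for i in range(group_size) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = xor_parity(list(received.values()) + [parity])
    return received

rects = [bytes([i]) * 8 for i in range(4)]     # four same-size screen rectangles
parity = xor_parity(rects)
got = {0: rects[0], 1: rects[1], 3: rects[3]}  # rectangle 2 lost in transit
print(recover(got, parity, 4)[2] == rects[2])  # True: repaired from parity alone
```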

21 pages, 2134 KiB  
Article
Optimizing Trajectories for Rechargeable Agricultural Robots in Greenhouse Climatic Sensing Using Deep Reinforcement Learning with Proximal Policy Optimization Algorithm
by Ashraf Sharifi, Sara Migliorini and Davide Quaglia
Future Internet 2025, 17(7), 296; https://doi.org/10.3390/fi17070296 - 30 Jun 2025
Viewed by 139
Abstract
The experimentation of agricultural robots has been increasing in recent years, both in greenhouses and open fields. While agricultural robots are inherently useful for automating various farming tasks, their presence can also be leveraged to collect measurements along their paths. This approach enables the creation of a complete and detailed picture of the climate conditions inside a greenhouse, reducing the need to distribute a large number of physical devices among the crops. In this regard, choosing the best visiting sequence of the Points of Interest (PoIs) at which to perform the measurements deserves particular attention. This trajectory planning has to carefully combine the amount and significance of the collected data with the energy requirements of the robot. In this paper, we propose a method based on Deep Reinforcement Learning with the Proximal Policy Optimization (PPO) algorithm for determining the best trajectory an agricultural robot should follow to adequately balance the number of measurements and its autonomy. The proposed approach has been applied to a real-world case study regarding a greenhouse in Verona (Italy) and compared with other existing state-of-the-art approaches. Full article
(This article belongs to the Special Issue Smart Technology: Artificial Intelligence, Robotics and Algorithms)
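
For readers unfamiliar with the setup, the sketch below wires a toy PoI-visiting environment into the PPO implementation from stable-baselines3. The state, reward shaping, and battery cost are invented placeholders; the paper's greenhouse model is far richer.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class PoIEnv(gym.Env):
    """Toy greenhouse: visit 5 PoIs, trading measurements against battery."""
    def __init__(self):
        self.observation_space = spaces.MultiBinary(5)  # which PoIs are visited
        self.action_space = spaces.Discrete(5)          # next PoI to visit

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.visited = np.zeros(5, dtype=np.int8)
        self.battery = 1.0
        return self.visited.copy(), {}

    def step(self, action):
        self.battery -= 0.15                            # hypothetical move cost
        reward = 1.0 if not self.visited[action] else -0.2
        self.visited[action] = 1
        done = bool(self.visited.all() or self.battery <= 0)
        return self.visited.copy(), reward, done, False, {}

model = PPO("MlpPolicy", PoIEnv(), verbose=0).learn(total_timesteps=5_000)
```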

22 pages, 1158 KiB  
Article
FODIT: A Filter-Based Module for Optimizing Data Storage in B5G IoT Environments
by Bruno Ramos-Cruz, Francisco J. Quesada-Real, Javier Andreu-Pérez and Jessica Zaqueros-Martinez
Future Internet 2025, 17(7), 295; https://doi.org/10.3390/fi17070295 - 30 Jun 2025
Viewed by 151
Abstract
In the rapidly evolving landscape of the Internet of Things (IoT), managing the vast volumes of data generated by connected devices presents significant challenges, particularly in B5G IoT environments. One key issue is data redundancy, where identical data is stored several times because it is captured by multiple sensors. To address this, we introduce “FODIT”, a filter-based module designed to optimize data storage in IoT systems. FODIT leverages probabilistic data structures, specifically filters, to improve storage efficiency and query performance. We hypothesize that applying these structures can significantly reduce redundancy and accelerate data access in resource-constrained IoT deployments. We validate our hypothesis through targeted simulations under a specific and rare configuration: high-frequency and high-redundancy environments, with controlled duplication rates between 4% and 8%. These experiments involve data storage in local databases, cloud-based systems, and distributed ledger technologies (DLTs). The results demonstrate FODIT’s ability to reduce storage requirements and improve query responsiveness under these stress-test conditions. Furthermore, the proposed approach has broader applicability, particularly in DLT-based environments such as blockchain, where efficient querying remains a critical challenge. Nonetheless, some limitations remain, especially regarding the current data structure used to maintain consistency with the DLT, and the need for further adaptation to real-world contexts with dynamic workloads. This research highlights the potential of filter-based techniques to improve data management in IoT and blockchain systems, contributing to the development of more scalable and responsive infrastructures. Full article
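
The paper's filters are probabilistic membership structures; a textbook Bloom filter illustrates the mechanism, with the caveat that FODIT's actual design is not reproduced here. A membership test never misses a stored item, so a reading already "seen" can be skipped instead of stored again, at the cost of a tunable false-positive rate.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for duplicate suppression of sensor readings."""
    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):  # k independent positions via salted SHA-256
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

seen = BloomFilter()
reading = b"sensor-12:2025-07-01T10:00:00Z:23.5C"
if reading not in seen:  # first copy: store it and remember it
    seen.add(reading)
print(reading in seen)   # duplicate captured by a second sensor: skip storage
```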

24 pages, 798 KiB  
Article
ICRSSD: Identification and Classification for Railway Structured Sensitive Data
by Yage Jin, Hongming Chen, Rui Ma, Yanhua Wu and Qingxin Li
Future Internet 2025, 17(7), 294; https://doi.org/10.3390/fi17070294 - 30 Jun 2025
Viewed by 187
Abstract
The rapid growth of the railway industry has resulted in the accumulation of large volumes of structured data, making data security a critical component of reliable railway system operations. However, existing methods for identifying and classifying sensitive data often suffer from limitations such as overly coarse identification granularity and insufficient flexibility in classification. To address these issues, we propose ICRSSD, a two-stage method for sensitive data identification and classification in the railway domain. The identification stage focuses on obtaining the sensitivity of all attributes. We first divide structured data into canonical data and semi-canonical data at a finer granularity to improve identification accuracy. For canonical data, we use information entropy to calculate the initial sensitivity. We then update the attribute sensitivities through cluster analysis and association rule mining. For semi-canonical data, we calculate attribute sensitivity using a combination of regular expressions and keyword lists. In the classification stage, to further enhance accuracy, we adopt a dynamic, multi-granularity classification strategy. It considers the relative sensitivity of attributes across different scenarios and classifies them into three levels based on the sensitivity values obtained during the identification stage. Additionally, we design a rule base specifically for the identification and classification of sensitive data in the railway domain. This rule base enables effective data identification and classification, while also supporting the expiry management of sensitive attribute labels. To improve the efficiency of regular expression generation, we developed an auxiliary tool with the help of large language models and a well-designed prompt framework. We conducted experiments on a real-world dataset from the railway domain. The results demonstrate that ICRSSD significantly improves the accuracy and adaptability of sensitive data identification and classification in the railway domain. Full article
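
The initial sensitivity signal for canonical data is plain information entropy, which is easy to reproduce: a column whose values are nearly unique (such as an ID) carries more identifying information than a low-cardinality status column. A minimal sketch with invented railway-flavored values:

```python
import math
from collections import Counter

def attribute_entropy(values):
    """Shannon entropy (bits) of one structured-data column, used here as an
    initial sensitivity score before refinement by clustering and rule mining."""
    counts, n = Counter(values), len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(attribute_entropy(["T101", "T102", "T103", "T104"]))           # 2.0 bits
print(attribute_entropy(["on-time", "on-time", "late", "on-time"]))  # ~0.81 bits
```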

22 pages, 5161 KiB  
Article
AUV Trajectory Planning for Optimized Sensor Data Collection in Internet of Underwater Things
by Talal S. Almuzaini and Andrey V. Savkin
Future Internet 2025, 17(7), 293; https://doi.org/10.3390/fi17070293 - 30 Jun 2025
Viewed by 140
Abstract
Efficient and timely data collection in Underwater Acoustic Sensor Networks (UASNs) for Internet of Underwater Things (IoUT) applications remains a significant challenge due to the inherent limitations of the underwater environment. This paper presents a Value of Information (VoI)-based trajectory planning framework for a single Autonomous Underwater Vehicle (AUV) operating in coordination with an Unmanned Surface Vehicle (USV) to collect data from multiple Cluster Heads (CHs) deployed across an uneven seafloor. The proposed approach employs a VoI model that captures both the importance and timeliness of sensed data, guiding the AUV to collect and deliver critical information before its value significantly degrades. A forward Dynamic Programming (DP) algorithm is used to jointly optimize the AUV’s trajectory and the USV’s start and end positions, with the objective of maximizing the total residual VoI upon mission completion. The trajectory design incorporates the AUV’s kinematic constraints into travel time estimation, enabling accurate VoI evaluation throughout the mission. Simulation results show that the proposed strategy consistently outperforms conventional baselines in terms of residual VoI and overall system efficiency. These findings highlight the advantages of VoI-aware planning and AUV–USV collaboration for effective data collection in challenging underwater environments. Full article
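
The crux of VoI-aware planning is that data collected early keeps decaying until the mission ends, so the visiting order matters. The toy sketch below scores tours by residual VoI under exponential decay; exhaustive enumeration stands in for the paper's forward DP at this tiny size, and all travel times, values, and the decay rate are invented.

```python
import itertools, math

T = [[0, 40, 90, 60],   # symmetric travel times between 4 cluster heads (s)
     [40, 0, 50, 80],
     [90, 50, 0, 30],
     [60, 80, 30, 0]]
V0 = [10.0, 8.0, 12.0, 6.0]   # initial VoI at each cluster head
DECAY = 0.005                 # hypothetical per-second decay rate

def residual_voi(order):
    """Total VoI remaining at tour end; earlier pickups decay for longer."""
    t, pos, arrival = 0.0, order[0], {}
    for nxt in order:
        t += T[pos][nxt]
        arrival[nxt] = t
        pos = nxt
    return sum(V0[ch] * math.exp(-DECAY * (t - arrival[ch])) for ch in order)

best = max(itertools.permutations(range(4)), key=residual_voi)
print(best, round(residual_voi(best), 2))
```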

28 pages, 4804 KiB  
Article
Towards Automatic Detection of Pneumothorax in Emergency Care with Deep Learning Using Multi-Source Chest X-ray Data
by Santiago Ibañez Caturla, Juan de Dios Berná Mestre and Oscar Martinez Mozos
Future Internet 2025, 17(7), 292; https://doi.org/10.3390/fi17070292 - 29 Jun 2025
Viewed by 195
Abstract
Pneumothorax is a potentially life-threatening condition defined as the collapse of the lung due to air leakage into the chest cavity. Delays in the diagnosis of pneumothorax can lead to severe complications and even mortality. A significant challenge in pneumothorax diagnosis is the shortage of radiologists, resulting in the absence of written reports in plain X-rays and, consequently, impacting patient care. In this paper, we propose an automatic triage system for pneumothorax detection in X-ray images based on deep learning. We address this problem from the perspective of multi-source domain adaptation where different datasets available on the Internet are used for training and testing. In particular, we use datasets which contain chest X-ray images corresponding to different conditions (including pneumothorax). A convolutional neural network (CNN) with an EfficientNet architecture is trained and optimized to identify radiographic signs of pneumothorax using those public datasets. We present the results using cross-dataset validation, demonstrating the robustness and generalization capabilities of our multi-source solution across different datasets. The experimental results demonstrate the model’s potential to assist clinicians in prioritizing and correctly detecting urgent cases of pneumothorax using different integrated deployment strategies. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Smart Healthcare)
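
A common way to build such a triage model is to put a new classification head on a pretrained EfficientNet, as sketched below with torchvision. The variant (B0), input size, and two-class head are assumptions for illustration; the paper's exact architecture and training protocol are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone, new binary head for pneumothorax triage.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

# Cross-dataset validation idea: train on some public datasets, test on another.
x = torch.randn(4, 3, 224, 224)      # batch of preprocessed chest X-rays
print(model(x).softmax(dim=1))       # per-class probabilities
```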

20 pages, 2579 KiB  
Article
ERA-MADDPG: An Elastic Routing Algorithm Based on Multi-Agent Deep Deterministic Policy Gradient in SDN
by Wanwei Huang, Hongchang Liu, Yingying Li and Linlin Ma
Future Internet 2025, 17(7), 291; https://doi.org/10.3390/fi17070291 - 29 Jun 2025
Viewed by 227
Abstract
To address the impact of changes in network topology on routing performance, this paper proposes an Elastic Routing Algorithm based on Multi-Agent Deep Deterministic Policy Gradient (ERA-MADDPG), implemented within the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) framework of deep reinforcement learning. The algorithm first builds a three-layer architecture based on Software-Defined Networking (SDN). From top to bottom, the layers are the multi-agent layer, the controller layer, and the data layer. The architecture’s processing flow, including real-time data layer information collection and dynamic policy generation, enables the ERA-MADDPG algorithm to exhibit strong elasticity by quickly adjusting routing decisions in response to topology changes. The actor-critic framework, combined with Convolutional Neural Networks (CNNs), effectively improves training efficiency, enhances learning stability, facilitates collaboration, and improves the algorithm’s generalization and applicability. Finally, simulation experiments demonstrate that the convergence speed of the ERA-MADDPG routing algorithm outperforms that of the Multi-Agent Deep Q-Network (MADQN) algorithm and the Smart Routing based on Deep Reinforcement Learning (SR-DRL) algorithm, with initial-phase training speed improved by approximately 20.9% and 39.1% over the MADQN and SR-DRL algorithms, respectively. The elasticity of ERA-MADDPG is quantified by re-convergence speed: under 5–15% topology node/link changes, its re-convergence speed is over 25% faster than that of MADQN and SR-DRL, demonstrating superior capability to maintain routing efficiency in dynamic environments. Full article

24 pages, 649 KiB  
Systematic Review
Algorithms for Load Balancing in Next-Generation Mobile Networks: A Systematic Literature Review
by Juan Ochoa-Aldeán, Carlos Silva-Cárdenas, Renato Torres, Jorge Ivan Gonzalez and Sergio Fortes
Future Internet 2025, 17(7), 290; https://doi.org/10.3390/fi17070290 - 28 Jun 2025
Viewed by 236
Abstract
Background: Machine learning methods are increasingly being used in mobile network optimization systems, especially next-generation mobile networks. The need for enhanced radio resource allocation schemes, improved user mobility and increased throughput, driven by a rising demand for data, has necessitated the development of diverse algorithms that optimize output values based on varied input parameters. In this context, we identify the main topics related to cellular networks and machine learning algorithms in order to pinpoint areas where the optimization of parameters is crucial. Furthermore, the wide range of available algorithms often leads to confusion and disorder during classification processes. It is crucial to note that next-generation networks are expected to require reduced latency times, especially for sensitive applications such as Industry 4.0. Research Question: An analysis of the existing literature on mobile network load balancing methods was conducted to identify systems that operate using semi-automatic, automatic and hybrid algorithms. Our research question is as follows: What are the automatic, semi-automatic and hybrid load balancing algorithms that can be applied to next-generation mobile networks? Contribution: This paper aims to present a comprehensive analysis and classification of the algorithms used in this area of study; in order to identify the most suitable for load balancing optimization in next-generation mobile networks, we have organized the classification into three categories, automatic, semi-automatic and hybrid, which will allow for a clear and concise idea of both theoretical and field studies that relate these three types of algorithms with next-generation networks. Figures and tables illustrate the number of algorithms classified by type. In addition, the most important articles related to this topic from five different scientific databases are summarized. Methodology: For this research, we employed the PRISMA method to conduct a systematic literature review of the aforementioned study areas. Findings: The results show that, despite the scarce literature on the subject, the use of load balancing algorithms significantly influences the deployment and performance of next-generation mobile networks. This study highlights the critical role that algorithm selection should play in 5G network optimization, in particular to address latency reduction, dynamic resource allocation and scalability in dense user environments, key challenges for applications such as industrial automation and real-time communications. Our classification framework provides a basis for operators to evaluate algorithmic trade-offs in scenarios such as network fragmentation or edge computing. To fill existing gaps, we propose further research on AI-driven hybrid models that integrate real-time data analytics with predictive algorithms, enabling proactive load management in ultra-reliable 5G/6G architectures. Given this background, it is crucial to conduct further research on the effects of technologies used for load balancing optimization. This line of research is worthy of consideration. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)

33 pages, 8285 KiB  
Article
TrustShare: Secure and Trusted Blockchain Framework for Threat Intelligence Sharing
by Hisham Ali, William J. Buchanan, Jawad Ahmad, Marwan Abubakar, Muhammad Shahbaz Khan and Isam Wadhaj
Future Internet 2025, 17(7), 289; https://doi.org/10.3390/fi17070289 - 27 Jun 2025
Viewed by 279
Abstract
We introduce TrustShare, a novel blockchain-based framework designed to enable secure, privacy-preserving, and trust-aware cyber threat intelligence (CTI) sharing across organizational boundaries. Leveraging Hyperledger Fabric, the architecture supports fine-grained access control and immutability through smart contract-enforced trust policies. The system combines Ciphertext-Policy Attribute-Based Encryption (CP-ABE) with temporal, spatial, and controlled revelation constraints to grant data owners precise control over shared intelligence. To ensure scalable decentralized storage, encrypted CTI is distributed via IPFS, with blockchain-anchored references ensuring verifiability and traceability. Using STIX for structuring and TAXII for exchange, the framework complies with GDPR requirements, embedding revocation and the right to be forgotten through certificate authorities. The experimental validation demonstrates that TrustShare achieves low-latency retrieval, efficient encryption performance, and robust scalability in containerized deployments. By unifying decentralized technologies with cryptographic enforcement and regulatory compliance, TrustShare sets a foundation for the next generation of sovereign and trustworthy threat intelligence collaboration. Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)

27 pages, 8848 KiB  
Article
Empirical Investigation on Practical Robustness of Keystroke Recognition Using WiFi Sensing for Future IoT Applications
by Haoming Wang, Aryan Sharma, Deepak Mishra, Aruna Seneviratne and Eliathamby Ambikairajah
Future Internet 2025, 17(7), 288; https://doi.org/10.3390/fi17070288 - 27 Jun 2025
Viewed by 158
Abstract
The widespread use of WiFi Internet-of-Things (IoT) devices has rendered them valuable tools for detecting information about the physical environment. Recent studies have demonstrated that WiFi Channel State Information (CSI) can detect physical events like movement, occupancy increases, and gestures. This paper empirically investigates the conditions under which WiFi sensing technology remains effective for keystroke detection. To assess in a timely manner whether it can raise any privacy concerns, experiments are conducted using commodity hardware to evaluate the accuracy of WiFi CSI in detecting keys pressed on a keyboard. Our novel results show that, in an ideal setting with a robotic arm, the position of a specific key can be predicted with 99% accuracy using a simple machine learning classifier. Furthermore, human finger localisation over a key and actual key-press recognition are also successfully achieved, with reduced accuracies of 94% and 89%, respectively. Moreover, our detailed investigation reveals that to ensure high accuracy, the gap distance between each test object must be substantial, while the size of the test group should be limited. Finally, we show WiFi sensing technology has limitations in small-scale gesture recognition for generic settings where proper device positioning is crucial. Specifically, detecting keyed words achieves an overall accuracy of 94% for the forefinger and 87% for multiple fingers when only the right hand is used. Accuracy drops to 56% when using both hands. We conclude WiFi sensing is effective in controlled indoor environments, but it has limitations due to the device location and the limited granularity of sensing objects. Full article

27 pages, 2004 KiB  
Article
Cross-Lingual Cross-Domain Transfer Learning for Rumor Detection
by Eliana Providel, Marcelo Mendoza and Mauricio Solar
Future Internet 2025, 17(7), 287; https://doi.org/10.3390/fi17070287 - 26 Jun 2025
Viewed by 176
Abstract
This study introduces a novel method that merges propagation-based transfer learning with word embeddings for rumor detection. This approach aims to use data from languages with abundant resources to enhance performance in languages with limited availability of annotated corpora in this task. Furthermore, we augment our rumor detection framework with two supplementary tasks—stance classification and bot detection—to reinforce the primary task of rumor detection. Utilizing our proposed multi-task system, which incorporates cascade learning models, we generate several pre-trained models that are subsequently fine-tuned for rumor detection in English and Spanish. The results show improvements over the baselines, thus empirically validating the efficacy of our proposed approach. A Macro-F1 of 0.783 is achieved for the Spanish language, and a Macro-F1 of 0.945 is achieved for the English language. Full article

18 pages, 1059 KiB  
Article
Exponential Backoff and Its Security Implications for Safety-Critical OT Protocols over TCP/IP Networks
by Matthew Boeding, Paul Scalise, Michael Hempel, Hamid Sharif and Juan Lopez, Jr.
Future Internet 2025, 17(7), 286; https://doi.org/10.3390/fi17070286 - 26 Jun 2025
Viewed by 214
Abstract
The convergence of Operational Technology (OT) and Information Technology (IT) networks has become increasingly prevalent with the growth of Industrial Internet of Things (IIoT) applications. This shift, while enabling enhanced automation, remote monitoring, and data sharing, also introduces new challenges related to communication latency and cybersecurity. Oftentimes, legacy OT protocols were adapted to the TCP/IP stack without an extensive review of the ramifications to their robustness, performance, or safety objectives. To further accommodate the IT/OT convergence, protocol gateways were introduced to facilitate the migration from serial protocols to TCP/IP protocol stacks within modern IT/OT infrastructure. However, they often introduce additional vulnerabilities by exposing traditionally isolated protocols to external threats. This study investigates the security and reliability implications of migrating serial protocols to TCP/IP stacks and the impact of protocol gateways, utilizing two widely used OT protocols: Modbus TCP and DNP3. Our protocol analysis finds a significant safety-critical vulnerability resulting from this migration, and our subsequent tests clearly demonstrate its presence and impact. A multi-tiered testbed, consisting of both physical and emulated components, is used to evaluate protocol performance and the effects of device-specific implementation flaws. Through this analysis of specifications and behaviors during communication interruptions, we identify critical differences in fault handling and the impact on time-sensitive data delivery. The findings highlight how reliance on lower-level IT protocols can undermine OT system resilience, and they inform the development of mitigation strategies to enhance the robustness of industrial communication networks. Full article
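
The safety-relevant mechanism is TCP's exponential backoff of the retransmission timeout: each failed retransmission doubles the wait, so a brief interruption can silently delay Modbus TCP or DNP3 traffic far beyond its deadlines. A minimal sketch of the timer growth (initial RTO of 1 s per RFC 6298; the 60 s cap is a common choice, not universal):

```python
RTO_INITIAL = 1.0   # seconds; RFC 6298 recommends an initial RTO of 1 s
RTO_MAX = 60.0      # typical implementation cap

elapsed, rto = 0.0, RTO_INITIAL
for attempt in range(1, 8):
    elapsed += rto  # time spent waiting before this retransmission fires
    print(f"retry {attempt}: t={elapsed:6.1f}s, next wait {min(rto * 2, RTO_MAX):.0f}s")
    rto = min(rto * 2.0, RTO_MAX)  # exponential backoff, capped
```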

22 pages, 2732 KiB  
Article
AI-Based Learning Recommendations: Use in Higher Education
by Prabin Dahal, Saptadi Nugroho, Claudia Schmidt and Volker Sänger
Future Internet 2025, 17(7), 285; https://doi.org/10.3390/fi17070285 - 26 Jun 2025
Viewed by 267
Abstract
We propose an extension for Artificial Intelligence (AI)-supported learning recommendations within higher education, focusing on enhancing the widely used Moodle Learning Management System (LMS) and extending it to a Learning eXperience Platform (LXP). The proposed LXP is an enhancement of Moodle, with an emphasis on learning support and learner motivation, incorporating various recommendation types such as content-based, collaborative, and session-based recommendations to suggest the next learning resources, drawn both from materials provided by lecturers and from the content curation of Open Educational Resources (OER). In addition, we integrated a chatbot using Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) with AI-based recommendations to provide an effective learning experience. Full article
(This article belongs to the Special Issue Deep Learning in Recommender Systems)

30 pages, 9859 KiB  
Article
Strategies and Challenges in Detecting XSS Vulnerabilities Using an Innovative Cookie Collector
by Germán Rodríguez-Galán, Eduardo Benavides-Astudillo, Daniel Nuñez-Agurto, Pablo Puente-Ponce, Sonia Cárdenas-Delgado and Mauricio Loachamín-Valencia
Future Internet 2025, 17(7), 284; https://doi.org/10.3390/fi17070284 - 26 Jun 2025
Viewed by 255
Abstract
This study presents a system for automatic cookie collection using bots that simulate user browsing behavior. Five bots were deployed, one for each of the most commonly used university browsers, enabling comprehensive data collection across multiple platforms. The infrastructure included an Ubuntu server with PiHole and Tshark services, facilitating cookie classification and association with third-party advertising and tracking networks. The BotSoul algorithm automated navigation, analyzing 440,000 URLs over 10.9 days with uninterrupted bot operation. The collected data established relationships between visited domains, generated cookies, and captured traffic, providing a solid foundation for security and privacy analysis. Machine learning models were developed to classify suspicious web domains and predict their vulnerability to XSS attacks. Additionally, clustering algorithms enabled user segmentation based on cookie data, identification of behavioral patterns, enhanced personalized web recommendations, and browsing experience optimization. The results highlight the system’s effectiveness in detecting security threats and improving navigation through adaptive recommendations. This research marks a significant advancement in web security and privacy, laying the groundwork for future improvements in protecting user information. Full article
(This article belongs to the Section Cybersecurity)

33 pages, 1292 KiB  
Systematic Review
Metaverse Architectures: A Comprehensive Systematic Review of Definitions and Frameworks
by Cemile Boztas, Essam Ghadafi and Rasha Ibrahim
Future Internet 2025, 17(7), 283; https://doi.org/10.3390/fi17070283 - 26 Jun 2025
Viewed by 295
Abstract
The metaverse is a multidimensional reality space that represents the evolving interface between humans, digital systems, and spatial computing. This research aims to provide a comprehensive synthesis of metaverse definitions and architectural frameworks through the analysis of academic literature. A systematic literature review of 103 peer-reviewed studies was conducted, from which 33 explicit definitions and 25 architectural models were selected for detailed analysis. A mixed-methods qualitative approach was employed, combining a systematic literature analysis, word frequency analysis, and a thematic analysis to develop an expanded conceptual and architectural understanding. The analysis of 33 definitions revealed eight recurring conceptual themes, which were synthesised into five overarching categories: conceptual and linguistic constructs, spatial-environmental design, technological orientation, economic and governance systems, and social and human interaction. A total of 25 studies proposing architectural models are categorised into three thematic groups: technology-based, domain-based and layered architectures. A significant contribution of this study is the categorisation and comparative evaluation of foundational architectural models, specifically the extensively referenced three-, four-, and five-layer frameworks. The findings are structured around two primary areas of focus: the conceptual definition and the comparative analysis of established metaverse architecture models, focusing on a set of well-established models that exhibit conceptual clarity, structural coherence, and widespread acceptance in the literature. This integrated methodological approach not only offers a dual-perspective analysis that bridges conceptualisation and system design but also introduces an analytical framework that supports future standardisation and governance discussions. Full article

24 pages, 7080 KiB  
Review
Responsible Resilience in Cyber–Physical–Social Systems: A New Paradigm for Emergent Cyber Risk Modeling
by Theresa Sobb, Nour Moustafa and Benjamin Turnbull
Future Internet 2025, 17(7), 282; https://doi.org/10.3390/fi17070282 - 25 Jun 2025
Viewed by 241
Abstract
As cyber systems increasingly converge with physical infrastructure and social processes, they give rise to Complex Cyber–Physical–Social Systems (C-CPSS), whose emergent behaviors pose unique risks to security and mission assurance. Traditional cyber–physical system models often fail to address the unpredictability arising from human and organizational dynamics, leaving critical gaps in how cyber risks are assessed and managed across interconnected domains. The challenge lies in building resilient systems that not only resist disruption, but also absorb, recover, and adapt, especially in the face of complex, nonlinear, and often unintentionally emergent threats. This paper introduces the concept of ‘responsible resilience’, defined as the capacity of systems to adapt to cyber risks using trustworthy, transparent agent-based models that operate within socio-technical contexts. We identify a fundamental research gap in the treatment of social complexity and emergence in the existing cyber–physical system literature. To address this, we propose the E3R modeling paradigm, a novel framework for conceptualizing Emergent, Risk-Relevant Resilience in C-CPSS. This paradigm synthesizes human-in-the-loop diagrams, agent-based Artificial Intelligence simulations, and ontology-driven representations to model the interdependencies and feedback loops driving unpredictable cyber risk propagation more effectively. Compared to conventional cyber–physical system models, E3R accounts for adaptive risks across social, cyber, and physical layers, enabling a more accurate and ethically grounded foundation for cyber defence and mission assurance. Our analysis reveals the underrepresentation of socio-emergent risk modeling in the literature, and our results indicate that existing models, especially those in industrial and healthcare applications of cyber–physical systems, lack the generalizability and robustness necessary for complex, cross-domain environments. The E3R framework thus marks a significant step forward in understanding and mitigating emergent threats in future digital ecosystems. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems, 3rd Edition)

23 pages, 1454 KiB  
Article
The Internet of Things, Fog, and Cloud Continuum: Integration Challenges and Opportunities for Smart Cities
by Rodger Lea, Toni Adame, Alexandre Berne and Selma Azaiez
Future Internet 2025, 17(7), 281; https://doi.org/10.3390/fi17070281 - 25 Jun 2025
Viewed by 305
Abstract
This paper explores the broad area of Smart City services and how the evolving Cloud-Edge-IoT continuum can support application deployment in Smart Cities. We initially introduce a range of Smart City services and highlight their computational needs. We then discuss the role of the Cloud-Edge-IoT continuum as a technological platform to meet those needs. To validate this approach, we present the COGNIFOG platform, a Cloud-Edge-IoT platform developed to support city-centric use cases, and an initial technology trial that shows the early benefits of using the platform. We conclude with plans for improvements to COGNIFOG based on the trials and with a broader set of observations on the future of the Cloud-Edge-IoT continuum in Smart City services and applications. Full article

31 pages, 1086 KiB  
Article
Measurement of the Functional Size of Web Analytics Implementation: A COSMIC-Based Case Study Using Machine Learning
by Ammar Abdallah, Alain Abran, Munthir Qasaimeh, Malik Qasaimeh and Bashar Abdallah
Future Internet 2025, 17(7), 280; https://doi.org/10.3390/fi17070280 - 25 Jun 2025
Viewed by 291
Abstract
To fully leverage Google Analytics and derive actionable insights, web analytics practitioners must go beyond standard implementation and customize the setup for specific functional requirements, which involves additional web development efforts. Previous studies have not provided solutions for estimating web analytics development efforts, and practitioners must rely on ad hoc practices for time and budget estimation. This study presents a COSMIC-based measurement framework to measure the functional size of Google Analytics implementations, including two examples. Next, a set of 50 web analytics projects were sized in COSMIC Function Points and used as inputs to various machine learning (ML) effort estimation models. A comparison of predicted effort values with actual values indicated that Linear Regression, Extra Trees, and Random Forest ML models performed well in terms of low Root Mean Square Error (RMSE), high Testing Accuracy, and strong Standard Accuracy (SA) scores. These results demonstrate the feasibility of applying functional size for web analytics and its usefulness in predicting web analytics project efforts. This study contributes to enhancing rigor in web analytics project management, thereby enabling more effective resource planning and allocation. Full article
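
To indicate how functional size feeds effort estimation, the sketch below trains one of the well-performing model families (Random Forest) on a synthetic stand-in for the 50-project dataset and reports RMSE on held-out projects. All data values are invented; only the workflow mirrors the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
cfp = rng.uniform(5, 80, size=(50, 1))                   # size in COSMIC Function Points
effort = 3.0 * cfp[:, 0] + rng.normal(0, 10, size=50)    # assumed hours-per-CFP trend

X_tr, X_te, y_tr, y_te = train_test_split(cfp, effort, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE on held-out projects: {rmse:.1f} hours")
```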

34 pages, 2216 KiB  
Article
An Optimized Transformer–GAN–AE for Intrusion Detection in Edge and IIoT Systems: Experimental Insights from WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT Datasets
by Ahmad Salehiyan, Pardis Sadatian Moghaddam and Masoud Kaveh
Future Internet 2025, 17(7), 279; https://doi.org/10.3390/fi17070279 - 24 Jun 2025
Viewed by 261
Abstract
The rapid expansion of Edge and Industrial Internet of Things (IIoT) systems has intensified the risk and complexity of cyberattacks. Detecting advanced intrusions in these heterogeneous and high-dimensional environments remains challenging. As the IIoT becomes integral to critical infrastructure, ensuring security is crucial to prevent disruptions and data breaches. Traditional IDS approaches often fall short against evolving threats, highlighting the need for intelligent and adaptive solutions. While deep learning (DL) offers strong capabilities for pattern recognition, single-model architectures often lack robustness. Thus, hybrid and optimized DL models are increasingly necessary to improve detection performance and address data imbalance and noise. In this study, we propose an optimized hybrid DL framework that combines a transformer, generative adversarial network (GAN), and autoencoder (AE) components, referred to as Transformer–GAN–AE, for robust intrusion detection in Edge and IIoT environments. To enhance the training and convergence of the GAN component, we integrate an improved chimp optimization algorithm (IChOA) for hyperparameter tuning and feature refinement. The proposed method is evaluated using three recent and comprehensive benchmark datasets, WUSTL-IIoT-2021, EdgeIIoTset, and TON_IoT, widely recognized as standard testbeds for IIoT intrusion detection research. Extensive experiments are conducted to assess the model’s performance compared to several state-of-the-art techniques, including standard GAN, convolutional neural network (CNN), deep belief network (DBN), time-series transformer (TST), bidirectional encoder representations from transformers (BERT), and extreme gradient boosting (XGBoost). Evaluation metrics include accuracy, recall, AUC, and run time. Results demonstrate that the proposed Transformer–GAN–AE framework outperforms all baseline methods, achieving a best accuracy of 98.92%, along with superior recall and AUC values. The integration of IChOA enhances GAN stability and accelerates training by optimizing hyperparameters. Together with the transformer for temporal feature extraction and the AE for denoising, the hybrid architecture effectively addresses complex, imbalanced intrusion data. The proposed optimized Transformer–GAN–AE model demonstrates high accuracy and robustness, offering a scalable solution for real-world Edge and IIoT intrusion detection. Full article

23 pages, 2431 KiB  
Article
SatScope: A Data-Driven Simulator for Low-Earth-Orbit Satellite Internet
by Qichen Wang, Guozheng Yang, Yongyu Liang, Chiyu Chen, Qingsong Zhao and Sugai Chen
Future Internet 2025, 17(7), 278; https://doi.org/10.3390/fi17070278 - 24 Jun 2025
Viewed by 229
Abstract
The rapid development of low-Earth-orbit (LEO) satellite constellations has not only provided global users with low-latency, unrestricted high-speed data services but has also presented researchers with the challenge of understanding dynamic changes in global network behavior. Unlike geostationary satellites and terrestrial internet infrastructure, LEO satellites move at a relative velocity of 7.6 km/s, leading to frequent changes in their connectivity status with ground stations. Given the complexity of the space environment, current research on LEO satellite internet relies primarily on modeling and simulation. However, existing LEO satellite network simulators often overlook the global network characteristics of these systems. We present SatScope, a data-driven simulator for LEO satellite internet. SatScope consists of three main components: space segment modeling, ground segment modeling, and network simulation configuration. It provides researchers with an interface for interacting with these models. Utilizing both the space and ground segment models, SatScope can configure various network topology models, routing algorithms, and load balancing schemes, thereby enabling the evaluation of optimization algorithms for LEO satellite communication systems. We also compare SatScope’s fidelity, lightweight design, scalability, and openness against those of other simulators. Based on our simulation results using SatScope, we propose two metrics, the ground node IP coverage rate and the number of satellite service IPs, to assess the service performance of single-layer satellite networks. Our findings reveal that during each network handover, on average, 38.94% of nodes and 83.66% of links change. Full article
(This article belongs to the Section Internet of Things)
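
The reported handover-churn figures suggest a simple set-based computation; the sketch below, with an invented three-link toy topology, shows one plausible way to measure the fraction of nodes or links that change between two snapshots (our reading, not SatScope's code).

```python
# Hedged sketch of the handover-churn idea: given two topology snapshots
# (sets of nodes or links), compute the fraction that changed.
def churn(old: set, new: set) -> float:
    """Fraction of elements present in exactly one snapshot."""
    changed = old ^ new  # symmetric difference: appeared or disappeared
    return len(changed) / max(len(old | new), 1)

links_t0 = {("sat1", "gs_a"), ("sat1", "sat2"), ("sat2", "gs_b")}
links_t1 = {("sat1", "sat2"), ("sat3", "gs_a"), ("sat3", "gs_b")}
print(f"link churn: {churn(links_t0, links_t1):.0%}")  # 80% in this toy case
```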

35 pages, 2010 KiB  
Article
Intelligent Transmission Control Scheme for 5G mmWave Networks Employing Hybrid Beamforming
by Hazem (Moh’d Said) Hatamleh, As’ad Mahmoud As’ad Alnaser, Roba Mahmoud Ali Aloglah, Tomader Jamil Bani Ata, Awad Mohamed Ramadan and Omar Radhi Aqeel Alzoubi
Future Internet 2025, 17(7), 277; https://doi.org/10.3390/fi17070277 - 24 Jun 2025
Viewed by 243
Abstract
Hybrid beamforming plays a critical role in advancing wireless communication technology, particularly for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) communication, and several hybrid beamforming systems have been investigated for this setting. The deployment of massive grant-free transmission in the mmWave band is required due to the growing demand for spectrum resources in upcoming massive machine-type communication applications. The development of 5G mmWave networks promises ultra-high data rates, reduced latency, and improved connectivity. Yet, due to severe path loss and directional communication requirements, there are substantial obstacles to transmission reliability and energy efficiency. To address these limitations, this research presents an intelligent transmission control scheme tailored to 5G mmWave networks. Transmission Control Protocol (TCP) performance over mmWave links can be enhanced at the network protocol level by utilizing mmWave scalable (mmS)-TCP. To ensure that users have the strongest average power, we propose a novel method, the row compression two-stage learning-based accurate multi-path processing network with a received signal strength indicator-based association strategy (RCTS-AMP-RSSI-AS), for estimating both direct and indirect channels. To adapt to changing user scenarios and constantly maintain effective communication, we utilize a multi-user scenario-based MATD3 (Mu-MATD3) method. To improve performance, we introduce digital and analog beam training with long short-term memory (DAH-BT-LSTM). Finally, since optimizing network performance requires bottleneck-aware congestion reduction, low-latency congestion control schemes (LLCCS) are proposed. Together, the proposed methods improve the performance of 5G mmWave networks. Full article
(This article belongs to the Special Issue Advances in Wireless and Mobile Networking—2nd Edition)
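
For readers unfamiliar with the two-stage structure that hybrid beamforming implies, here is a toy numerical sketch of the signal model y = H F_RF F_BB s, in which the analog precoder is constant-modulus (phase shifters) and the digital precoder is unconstrained; all dimensions, the random precoders, and the SNR are illustrative assumptions, not the paper's scheme.

```python
# Toy hybrid-beamforming sketch: effective channel and spectral efficiency
# with an analog (RF) stage of phase shifters and a digital (baseband) stage.
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rf, n_streams, n_rx = 16, 4, 2, 4
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

# Analog precoder: phase shifters only, so entries have constant modulus.
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_tx, n_rf))) / np.sqrt(n_tx)
# Digital precoder: unconstrained complex weights, normalized to unit power.
F_bb = rng.normal(size=(n_rf, n_streams)) + 1j * rng.normal(size=(n_rf, n_streams))
F_bb /= np.linalg.norm(F_rf @ F_bb, "fro")

snr = 10.0                                   # linear SNR, assumed
H_eff = H @ F_rf @ F_bb                      # effective channel seen by streams
rate = np.log2(np.linalg.det(np.eye(n_rx) +
               (snr / n_streams) * H_eff @ H_eff.conj().T)).real
print(f"spectral efficiency ≈ {rate:.2f} bit/s/Hz")
```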

30 pages, 4112 KiB  
Article
Tourism Sentiment Chain Representation Model and Construction from Tourist Reviews
by Bosen Li, Rui Li, Junhao Wang and Aihong Song
Future Internet 2025, 17(7), 276; https://doi.org/10.3390/fi17070276 - 23 Jun 2025
Viewed by 215
Abstract
Current tourism route recommendation systems often overemphasize popular destinations, thereby overlooking the geographical accessibility between attractions and the experiential coherence of the journey. Leveraging multidimensional attribute perceptions derived from tourist reviews, this study proposes a Spatial–Semantic Integrated Model for Tourist Attraction Representation (SSIM-TAR), which holistically encodes the composite attributes and multifaceted evaluations of attractions. Integrating these multidimensional features with inter-attraction relationships, three relational metrics are defined and fused: spatial proximity, resonance correlation, and thematic-sentiment similarity. These form a Tourist Attraction Multidimensional Association Network (MAN-SRT), which enables precise characterization of complex inter-attraction dependencies. Building upon MAN-SRT, the Tourism Sentiment Chain (TSC) model is proposed, which incorporates geographical accessibility, associative resonance, and thematic-sentiment synergy to optimize the selection and sequential arrangement of attractions in personalized route planning. The results demonstrate that SSIM-TAR effectively captures the integrated attributes and experiential quality of tourist attractions, while MAN-SRT reveals distinct multidimensional association patterns. Compared with popular platforms such as “Qunar” and “Mafengwo”, the TSC approach yields routes with greater spatial efficiency and thematic-sentiment coherence. This study advances tourism route modeling by jointly analyzing multidimensional experiential quality through spatial–semantic feature fusion and by integrating geographical accessibility and experiential coherence in route design. Full article
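
One plausible reading of the metric-fusion step is sketched below under the assumption of a simple weighted sum; the weights, the normalization to [0, 1], and the toy values are ours, not the paper's.

```python
# Hedged sketch of fusing the three relational metrics in MAN-SRT into a
# single association score per attraction pair (weights are assumptions).
def association(spatial: float, resonance: float, sentiment: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    """All inputs normalized to [0, 1]; returns the fused edge weight."""
    w_s, w_r, w_t = weights
    return w_s * spatial + w_r * resonance + w_t * sentiment

# Toy pair: close together, moderately co-visited, similar review sentiment.
print(association(spatial=0.9, resonance=0.5, sentiment=0.7))  # 0.72
```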

35 pages, 8431 KiB  
Article
Integrating Physical Unclonable Functions with Machine Learning for the Authentication of Edge Devices in IoT Networks
by Abdul Manan Sheikh, Md. Rafiqul Islam, Mohamed Hadi Habaebi, Suriza Ahmad Zabidi, Athaur Rahman Bin Najeeb and Adnan Kabbani
Future Internet 2025, 17(7), 275; https://doi.org/10.3390/fi17070275 - 21 Jun 2025
Viewed by 320
Abstract
Edge computing (EC) faces unique security threats due to its distributed architecture, resource-constrained devices, and diverse applications, making it vulnerable to data breaches, malware infiltration, and device compromise. Mitigation strategies against EC data security threats include encryption, secure authentication, regular updates, tamper-resistant hardware, and lightweight security protocols. Physical Unclonable Functions (PUFs) serve as digital fingerprints for device authentication, enhancing the security of interconnected devices through their cryptographic characteristics. PUFs produce output responses to challenge inputs based on the physical structure and intrinsic manufacturing variations of an integrated circuit (IC); these challenge-response pairs (CRPs) enable secure and reliable device authentication. Our work implements the Arbiter PUF (APUF) on Altera Cyclone IV FPGAs installed on the ALINX AX4010 board. The proposed APUF achieved performance metrics of 49.28% uniqueness, 38.6% uniformity, and 89.19% reliability. Its robustness against machine learning (ML)-based modeling attacks was tested using supervised Support Vector Machines (SVMs), logistic regression (LR), and an ensemble of gradient boosting (GB) models. These ML models were trained on more than 19K CRPs and achieved prediction accuracies of only 61.1%, 63.5%, and 63%, respectively, demonstrating the device’s resilience against these modeling attacks. However, the proposed APUF proved vulnerable to Multi-Layer Perceptron (MLP) and random forest (RF) modeling attacks, which reached 95.4% and 95.9% prediction accuracies, respectively, and could therefore pass authentication. APUFs are well suited for device authentication because of their lightweight design and their ability to produce a vast number of CRPs, even in resource-constrained environments. Our findings confirm that our approach effectively resists several widely recognized PUF modeling attack methods. Full article
(This article belongs to the Special Issue Distributed Machine Learning and Federated Edge Computing for IoT)
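
The modeling-attack experiments can be reproduced in spirit on a simulated APUF, since the textbook additive delay model makes responses linear in the parity-transformed challenge; the sketch below uses that standard model with roughly the paper's CRP count, not the authors' FPGA data.

```python
# Illustrative modeling attack on a simulated Arbiter PUF (standard additive
# delay model): responses are linear in the parity-transformed challenge,
# so logistic regression learns them with high accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_stages, n_crps = 64, 19000
challenges = rng.integers(0, 2, size=(n_crps, n_stages))

# Parity feature map: phi_i = product of (1 - 2*c_j) for all j >= i.
phi = np.cumprod((1 - 2 * challenges)[:, ::-1], axis=1)[:, ::-1]
w = rng.normal(size=n_stages)          # per-stage delay differences
responses = (phi @ w > 0).astype(int)  # APUF response bit

X_tr, X_te, y_tr, y_te = train_test_split(phi, responses,
                                          test_size=0.2, random_state=0)
attack = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"attack accuracy: {attack.score(X_te, y_te):.1%}")  # typically > 95%
```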

28 pages, 4256 KiB  
Article
Accessible IoT Dashboard Design with AI-Enhanced Descriptions for Visually Impaired Users
by George Alex Stelea, Livia Sangeorzan and Nicoleta Enache-David
Future Internet 2025, 17(7), 274; https://doi.org/10.3390/fi17070274 - 21 Jun 2025
Viewed by 757
Abstract
The proliferation of the Internet of Things (IoT) has led to an abundance of data streams and real-time dashboards in domains such as smart cities, healthcare, manufacturing, and agriculture. However, many current IoT dashboards emphasize complex visualizations with minimal textual cues, posing significant barriers to users with visual impairments who rely on screen readers or other assistive technologies. This paper presents AccessiDashboard, a web-based IoT dashboard platform that prioritizes accessible design from the ground up. The system uses semantic HTML5 and WAI-ARIA compliance to ensure that screen readers can accurately interpret and navigate the interface. In addition to standard chart presentations, AccessiDashboard automatically generates long descriptions of graphs and visual elements, offering a text-first alternative interface for non-visual data exploration. The platform supports multi-modal data consumption (visual charts, bullet lists, tables, and narrative descriptions) and leverages Large Language Models (LLMs) to produce context-aware textual representations of sensor data. A privacy-by-design approach is adopted for the AI integration to address ethical and regulatory concerns. Early evaluation suggests that AccessiDashboard reduces cognitive and navigational load for users with vision disabilities, demonstrating its potential as a blueprint for future inclusive IoT monitoring solutions. Full article
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
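
A minimal sketch of the text-first idea follows, assuming a simple rule-based summarizer in place of the platform's LLM pipeline; the function name, signature, and output phrasing are ours, not AccessiDashboard's code.

```python
# Hedged sketch: turn a sensor series into a screen-reader-friendly
# narrative description instead of (or alongside) a chart.
def describe(name: str, unit: str, values: list[float]) -> str:
    lo, hi, last = min(values), max(values), values[-1]
    trend = "rising" if values[-1] > values[0] else "falling or stable"
    return (f"{name}: {len(values)} readings between {lo} and {hi} {unit}, "
            f"currently {last} {unit} and {trend}.")

print(describe("Room temperature", "°C", [20.5, 21.0, 21.8, 22.4]))
# Room temperature: 4 readings between 20.5 and 22.4 °C, currently 22.4 °C and rising.
```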

34 pages, 15255 KiB  
Article
An Experimental Tethered UAV-Based Communication System with Continuous Power Supply
by Veronica Rodriguez, Christian Tipantuña, Diego Reinoso, Jorge Carvajal-Rodriguez, Carlos Egas Acosta, Pablo Proaño and Xavier Hesselbach
Future Internet 2025, 17(7), 273; https://doi.org/10.3390/fi17070273 - 20 Jun 2025
Viewed by 228
Abstract
Ensuring reliable communication in remote or disaster-affected areas is a technical challenge: conventional telecommunications infrastructure is difficult to deploy and mobilize in such settings and costly to operate. To address this problem, unmanned aerial vehicles (UAVs) have emerged as an excellent alternative for providing quick connectivity in remote or disaster-affected regions at a reasonable cost. However, the limited battery autonomy of UAVs restricts their flight time. This paper proposes a communication system based on a tethered UAV (T-UAV) capable of continuous operation through a wired power link to a ground station. The communication system is built from low-cost devices, such as Raspberry Pi platforms, and offers wireless IP telephony with high-quality, reliable communication. Experimental tests assessed power consumption, UAV stability, and data transmission performance. Our results show that the T-UAV, based on a quadcopter drone, operates stably at 16 V and 20 A, sustaining VoIP communication at a height of 10 m with low latency. These findings underscore the potential of T-UAVs as cost-effective alternatives for extending or providing communication networks in remote regions, emergency scenarios, or underserved areas. Full article
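
The reported 16 V / 20 A operating point implies a 320 W draw; the back-of-the-envelope sketch below adds a hypothetical tether resistance to show why the ground station must supply a higher voltage than the UAV sees. The cable resistance value is an assumption for illustration, not a figure from the paper.

```python
# Back-of-the-envelope check of the reported operating point (16 V, 20 A),
# with an assumed tether resistance to expose the cable voltage drop.
V_UAV, I_UAV = 16.0, 20.0       # reported operating point at the UAV
P_UAV = V_UAV * I_UAV           # 320 W drawn by the quadcopter
R_TETHER = 0.15                 # ohms, hypothetical 10 m cable (round trip)
V_DROP = I_UAV * R_TETHER       # 3.0 V lost along the tether
V_SUPPLY = V_UAV + V_DROP       # ground station must supply ~19 V
P_LOSS = I_UAV ** 2 * R_TETHER  # 60 W dissipated in the cable
print(f"UAV power {P_UAV:.0f} W, supply {V_SUPPLY:.1f} V, cable loss {P_LOSS:.0f} W")
```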
