Future Internet, Volume 17, Issue 8 (August 2025) – 58 articles

Cover Story: Named Data Networking (NDN) represents a promising Information-Centric Networking architecture that emphasizes content names rather than host addresses for communication. Interest Flooding Attacks (IFAs) exploit the Pending Interest Table (PIT) by injecting malicious Interest packets for non-existent or unsatisfiable content, leading to resource exhaustion and denial-of-service attacks. We examine detection and mitigation strategies from 2013 to 2024 to provide current insights into this evolving security challenge. This survey establishes a taxonomy of attack variants, extends to domain-specific implementations in resource-constrained environments, examines adaptations for Internet of Things (IoT) and high-mobility vehicular scenarios, and highlights five critical research directions.
21 pages, 4917 KB  
Article
A High-Capacity Reversible Data Hiding Scheme for Encrypted Hyperspectral Images Using Multi-Layer MSB Block Labeling and ERLE Compression
by Yijie Lin, Chia-Chen Lin, Zhe-Min Yeh, Ching-Chun Chang and Chin-Chen Chang
Future Internet 2025, 17(8), 378; https://doi.org/10.3390/fi17080378 - 21 Aug 2025
Viewed by 226
Abstract
In the context of secure and efficient data transmission over the future Internet, particularly for remote sensing and geospatial applications, reversible data hiding (RDH) in encrypted hyperspectral images (HSIs) has emerged as a critical technology. This paper proposes a novel RDH scheme specifically designed for encrypted HSIs, offering enhanced embedding capacity without compromising data security or reversibility. The approach introduces a multi-layer block labeling mechanism that leverages the similarity of most significant bits (MSBs) to accurately locate embeddable regions. To minimize auxiliary information overhead, we incorporate an Extended Run-Length Encoding (ERLE) algorithm for effective label map compression. The proposed method achieves embedding rates of up to 3.79 bits per pixel per band (bpppb), while ensuring high-fidelity reconstruction, as validated by strong PSNR metrics. Comprehensive security evaluations using NPCR, UACI, and entropy confirm the robustness of the encryption. Extensive experiments across six standard hyperspectral datasets demonstrate the superiority of our method over existing RDH techniques in terms of capacity, embedding rate, and reconstruction quality. These results underline the method’s potential for secure data embedding in next-generation Internet-based geospatial and remote sensing systems.
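The abstract does not spell out the ERLE algorithm itself, but a plain run-length encoding of a binary MSB label map illustrates why this family of codecs compresses such maps so well, since smooth hyperspectral bands produce long runs of identical labels. The sketch below is illustrative only; the function names and data are invented.

```python
# Minimal sketch of run-length compressing a binary MSB label map.
# This is generic RLE, not the paper's ERLE variant.

def rle_encode(labels: list[int]) -> list[tuple[int, int]]:
    """Encode a 0/1 label sequence as (value, run_length) pairs."""
    runs = []
    for v in labels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs: list[tuple[int, int]]) -> list[int]:
    return [v for v, n in runs for _ in range(n)]

label_map = [1] * 120 + [0] * 8 + [1] * 72
runs = rle_encode(label_map)
assert rle_decode(runs) == label_map   # lossless, as RDH requires
print(f"{len(label_map)} labels -> {len(runs)} runs")
```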
30 pages, 3080 KB  
Article
A High-Acceptance-Rate VxWorks Fuzzing Framework Based on Protocol Feature Fusion and Memory Extraction
by Yichuan Wang, Jiazhao Han, Xi Deng and Xinhong Hei
Future Internet 2025, 17(8), 377; https://doi.org/10.3390/fi17080377 - 21 Aug 2025
Viewed by 315
Abstract
With the widespread application of Internet of Things (IoT) devices, the security of embedded systems faces severe challenges. VxWorks is an embedded operating system widely used in mission-critical scenarios, and the security of its TCP stack directly affects system reliability. However, existing protocol fuzzing methods based on network communication struggle to adapt to the complex state machine and grammatical rules of TCP. Additionally, the lack of a runtime feedback mechanism for closed-source VxWorks systems leads to low testing efficiency. This paper proposes the vxTcpFuzzer framework, which generates structured test cases by integrating the field features of TCP. Innovatively, it uses the memory data changes of VxWorks network protocol processing tasks as a coverage metric and combines a dual anomaly detection mechanism (WDB detection and heartbeat detection) to achieve precise anomaly capture. We conducted experimental evaluations on three VxWorks system devices, where vxTcpFuzzer successfully triggered multiple potential vulnerabilities, verifying the framework’s effectiveness. Compared with three existing classic fuzzing schemes, vxTcpFuzzer demonstrates significant advantages in test case acceptance rates (44.94–54.92%) and test system abnormal rates (23.79–34.70%) across the three VxWorks devices. The study confirms that protocol feature fusion and memory feedback mechanisms can effectively enhance the depth and efficiency of protocol fuzzing for VxWorks systems. Furthermore, this approach offers a practical and effective solution for uncovering TCP vulnerabilities in black-box environments.
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)
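vxTcpFuzzer's generator is not published in this listing; the hedged sketch below only illustrates the general idea behind grammar-aware test-case generation for TCP: keep the header structurally valid so the target's stack accepts the case, while pushing individual fields toward boundary values. The field choices and value ranges are assumptions, not taken from the paper.

```python
import random
import struct

def build_tcp_header(sport, dport, seq, ack, flags, window):
    offset_reserved = 5 << 4          # 20-byte header, no options
    return struct.pack("!HHLLBBHHH",
                       sport, dport, seq, ack,
                       offset_reserved, flags, window,
                       0,              # checksum (computed by a real sender)
                       0)              # urgent pointer

def mutate_case(rng: random.Random) -> bytes:
    # Push selected fields toward boundary values; the rest stay sane.
    flags = rng.choice([0x02, 0x10, 0x12, 0x04, 0x29])  # SYN, ACK, SYN+ACK, RST, odd combo
    window = rng.choice([0, 1, 0xFFFF, rng.randrange(0x10000)])
    seq = rng.choice([0, 0xFFFFFFFF, rng.randrange(1 << 32)])
    return build_tcp_header(12345, 80, seq, 0, flags, window)

rng = random.Random(1)
cases = [mutate_case(rng) for _ in range(5)]
print(len(cases[0]), "bytes per header")
```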
33 pages, 2477 KB  
Systematic Review
Patient-Oriented Smart Applications to Support the Diagnosis, Rehabilitation, and Care of Patients with Parkinson’s: An Umbrella Review
by Rute Bastardo, João Pavão, Ana Isabel Martins, Anabela G. Silva and Nelson Pacheco Rocha
Future Internet 2025, 17(8), 376; https://doi.org/10.3390/fi17080376 - 19 Aug 2025
Viewed by 365
Abstract
This umbrella review aimed to identify, analyze, and synthesize the results of existing literature reviews related to patient-oriented smart applications to support healthcare provision for patients with Parkinson’s. An electronic search was conducted on Scopus, Web of Science, and PubMed, and, after screening using predefined eligibility criteria, 85 reviews were included in the umbrella review. The included studies reported on smart applications integrating wearable devices, smartphones, serious computerized games, and other technologies (e.g., ambient intelligence, computer-based objective assessments, or online platforms) to support the diagnosis and monitoring of patients with Parkinson’s, improve physical and cognitive rehabilitation, and support disease management. Numerous smart applications are potentially useful for patients with Parkinson’s, although their full clinical potential has not yet been demonstrated. This is because the quality of their clinical assessments, as well as aspects related to their acceptability and compliance with requirements from regulatory bodies, has not yet been adequately studied. Future research requires more aligned methods and procedures for experimental assessments, as well as collaborative efforts to avoid duplication of effort and promote advances on the topic.
34 pages, 639 KB  
Systematic Review
Federated Learning for Anomaly Detection: A Systematic Review on Scalability, Adaptability, and Benchmarking Framework
by Le-Hang Lim, Lee-Yeng Ong and Meng-Chew Leow
Future Internet 2025, 17(8), 375; https://doi.org/10.3390/fi17080375 - 18 Aug 2025
Viewed by 375
Abstract
Anomaly detection plays an increasingly important role in maintaining the stability and reliability of modern distributed systems. Federated Learning (FL) is an emerging method that shows strong potential in enabling anomaly detection across decentralised environments. However, several crucial research challenges remain unresolved, such as ensuring scalability, adapting to dynamic server clusters, and developing standardised evaluation frameworks for FL. This review aims to address these research gaps through a comprehensive analysis of existing studies. In this paper, a systematic review is conducted covering three main aspects of the application of FL in anomaly detection: the impact of communication overhead on scalability and real-time performance, the adaptability of FL frameworks to dynamic server clusters, and the key components required for a standardised benchmarking framework for FL-based anomaly detection. A total of 43 relevant articles, published between 2020 and 2025, were selected from IEEE Xplore, Scopus, and ArXiv. The research findings highlight the potential of asynchronous updates and selective update mechanisms in improving FL’s real-time performance and scalability. This review primarily focuses on anomaly detection tasks in distributed system environments, such as network traffic analysis, IoT devices, and industrial monitoring, rather than domains like computer vision or financial fraud detection. While FL frameworks can handle dynamic client changes, data heterogeneity among clients remains a significant obstacle that affects model convergence speed. Moreover, the lack of a unified benchmarking framework to evaluate the performance of FL in anomaly detection poses a challenge to fair comparisons among experimental results.
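As a concrete illustration of the selective-update mechanisms the review highlights, the sketch below has each client upload its model delta only when the change exceeds a norm threshold, then averages the received deltas FedAvg-style. The threshold, dimensions, and gradients are arbitrary stand-ins.

```python
import numpy as np

def client_update(weights, local_grad, lr=0.1):
    return weights - lr * local_grad

def selective_round(global_w, client_grads, tau=0.05):
    deltas = []
    for g in client_grads:
        local_w = client_update(global_w, g)
        delta = local_w - global_w
        if np.linalg.norm(delta) > tau:      # selective upload: skip tiny changes
            deltas.append(delta)
    if deltas:                               # FedAvg over the received deltas
        global_w = global_w + np.mean(deltas, axis=0)
    return global_w, len(deltas)

w = np.zeros(4)
grads = [np.random.default_rng(i).normal(size=4) for i in range(10)]
w, sent = selective_round(w, grads)
print(f"{sent}/10 clients uploaded this round")
```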
39 pages, 5376 KB  
Article
Efficient Charging Station Selection for Minimizing Total Travel Time of Electric Vehicles
by Yaqoob Al-Zuhairi, Prashanth Kannan, Alberto Bazán Guillén, Luis J. de la Cruz Llopis and Mónica Aguilar Igartua
Future Internet 2025, 17(8), 374; https://doi.org/10.3390/fi17080374 - 18 Aug 2025
Viewed by 383
Abstract
Electric vehicles (EVs) have gained significant attention in recent decades for their environmental benefits. However, their widespread adoption poses challenges due to limited charging infrastructure and long charging times, often resulting in underutilized charging stations (CSs) and unnecessary queues that complicate travel planning. Therefore, selecting the appropriate CS is essential for minimizing the total travel time of EVs, as it depends on both driving time and the required charging duration. This selection process requires estimating the energy required to reach each candidate CS and then continue to the destination, while also checking if the EV’s battery level is sufficient for a direct trip. To address this gap, we propose an integrated platform that leverages two ensemble machine learning models: Bi-LSTM + XGBoost to predict energy consumption, and FFNN + XGBoost for identifying the most suitable CS by considering required energy, waiting time at CS, charging speed, and driving time based on varying traffic conditions. This integration forms the core novelty of our system to optimize CS selection to minimize the total trip duration. This approach was validated with SUMO simulations and OpenStreetMap data, demonstrating a mean absolute error (MAE) ranging from 2.29 to 4.5 min, depending on traffic conditions, outperforming conventional approaches that rely on SUMO functions and mathematical calculations, which typically yielded MAEs between 5.1 and 10 min. These findings highlight the proposed system’s effectiveness in reducing total travel time, improving charging infrastructure utilization, and enhancing the overall experience for EV drivers.
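The selection objective reduces to an argmin over candidate stations of drive time to the station, plus expected wait, plus charging time, plus onward drive time. The toy sketch below hard-codes the terms that the paper predicts with its ensemble models, purely to show the structure of the decision.

```python
# Total trip time per candidate station; all numbers are illustrative.

def charge_time_min(needed_kwh, charger_kw):
    return 60.0 * needed_kwh / charger_kw

def total_trip_minutes(cs):
    return (cs["drive_to_cs_min"]
            + cs["wait_min"]
            + charge_time_min(cs["needed_kwh"], cs["charger_kw"])
            + cs["drive_to_dest_min"])

candidates = [
    {"name": "CS-A", "drive_to_cs_min": 12, "wait_min": 18, "needed_kwh": 30, "charger_kw": 50,  "drive_to_dest_min": 25},
    {"name": "CS-B", "drive_to_cs_min": 20, "wait_min": 0,  "needed_kwh": 32, "charger_kw": 150, "drive_to_dest_min": 24},
]
best = min(candidates, key=total_trip_minutes)
print(best["name"], round(total_trip_minutes(best), 1), "min")  # CS-B wins despite the longer drive
```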
13 pages, 857 KB  
Article
Software Agents as Information-Sharing Enhancers in Security-Sensitive Organizations
by Yonit Rusho, Daphne Ruth Raban, Michal Chalamish and Vered Pnueli
Future Internet 2025, 17(8), 373; https://doi.org/10.3390/fi17080373 - 18 Aug 2025
Viewed by 376
Abstract
This study examines the influence of software agents on information-sharing behavior within security-sensitive organizations, where confidentiality and hierarchical culture often limit the flow of knowledge. While such organizations aim to collect, analyze, and disseminate information for security purposes, internal sharing dynamics are shaped by competing norms of secrecy and collaboration. To explore this tension, we developed a digital simulation game in which participants from security-sensitive organizations engaged in collaborative tasks over three rounds. In rounds two and three, software agents were introduced to interact with participants by sharing public and classified information. A total of 28 participants took part, generating 1626 text-based interactions. Findings indicate that (1) information-sharing patterns in security-sensitive contexts differ significantly from those in non-sensitive environments; (2) when software agents share classified information, participants are more likely to share sensitive data in return; (3) when participants are aware of the agents’ presence, they reduce classified sharing and increase public sharing; and (4) agents that share both public and classified information lead to decreased public and increased classified sharing. These results provide insight into the role of artificial agents in shaping communication behaviors in secure environments and inform strategies for training and design in knowledge-sensitive organizational settings.
29 pages, 1386 KB  
Article
A Hybrid Zero Trust Deployment Model for Securing O-RAN Architecture in 6G Networks
by Max Hashem Eiza, Brian Akwirry, Alessandro Raschella, Michael Mackay and Mukesh Kumar Maheshwari
Future Internet 2025, 17(8), 372; https://doi.org/10.3390/fi17080372 - 18 Aug 2025
Viewed by 341
Abstract
The evolution toward sixth generation (6G) wireless networks promises higher performance, greater flexibility, and enhanced intelligence. However, it also introduces a substantially enlarged attack surface driven by the open, disaggregated, and multi-vendor Open RAN (O-RAN) architectures that will be utilised in 6G networks. This paper addresses the urgent need for a practical Zero Trust (ZT) deployment model tailored to the O-RAN specification. To do so, we introduce a novel hybrid ZT deployment model that establishes the trusted foundation for AI/ML-driven security in O-RAN, integrating macro-level enclave segmentation with micro-level application sandboxing for xApps/rApps. In our model, the Policy Decision Point (PDP) centrally manages dynamic policies, while distributed Policy Enforcement Points (PEPs) reside in logical enclaves, agents, and gateways to enable per-session, least-privilege access control across all O-RAN interfaces. We demonstrate feasibility via a Proof of Concept (PoC) implemented with Kubernetes and Istio and based on the NIST Policy Machine (PM). The PoC illustrates how pods can represent enclaves and sidecar proxies can embody combined agent/gateway functions. The performance discussion indicates that enclave-based deployment adds 1–10 ms of additional per-connection latency, while CPU/memory overhead from running a sidecar proxy per enclave is approximately 5–10% extra utilisation, with each proxy consuming roughly 100–200 MB of RAM.
(This article belongs to the Special Issue Secure and Trustworthy Next Generation O-RAN Optimisation)
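To make the PDP/PEP split concrete, here is a toy per-session check: the enforcement point forwards request attributes, and the decision point applies default-deny, least-privilege rules. The enclave names, operations, and policy table are invented for illustration and are not taken from the O-RAN specification.

```python
POLICIES = {
    # (source enclave, target enclave, operation) -> allowed
    ("near-rt-ric", "o-du", "e2-subscribe"): True,
    ("xapp-sandbox", "o-du", "e2-subscribe"): False,
}

def pdp_decide(src, dst, op, authenticated: bool) -> bool:
    if not authenticated:            # never trust, always verify
        return False
    return POLICIES.get((src, dst, op), False)   # default deny

class PEP:
    """Enforcement point: forwards attributes, enforces the PDP's verdict per session."""
    def forward(self, src, dst, op, token_valid):
        if pdp_decide(src, dst, op, token_valid):
            return f"ALLOW {src} -> {dst}: {op}"
        return f"DENY  {src} -> {dst}: {op}"

pep = PEP()
print(pep.forward("near-rt-ric", "o-du", "e2-subscribe", True))
print(pep.forward("xapp-sandbox", "o-du", "e2-subscribe", True))
```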
24 pages, 2709 KB  
Article
Unsupervised Person Re-Identification via Deep Attribute Learning
by Shun Zhang, Yaohui Xu, Xuebin Zhang, Boyang Cheng and Ke Wang
Future Internet 2025, 17(8), 371; https://doi.org/10.3390/fi17080371 - 15 Aug 2025
Viewed by 328
Abstract
Driven by growing public security demands and the advancement of intelligent surveillance systems, person re-identification (ReID) has emerged as a prominent research focus in the field of computer vision. However, this task presents challenges due to its high sensitivity to variations in visual appearance caused by factors such as body pose and camera parameters. Although deep learning-based methods have achieved marked progress in ReID, the high cost of annotation remains a challenge that cannot be overlooked. To address this, we propose an unsupervised attribute learning framework that eliminates the need for costly manual annotations while maintaining high accuracy. The framework learns mid-level human attributes (such as clothing type and gender) that are robust to substantial visual appearance variations, and whose recognition accuracy can hence be boosted with only a small amount of labeled data. To realize our framework, we present a part-based convolutional neural network (CNN) architecture with two components: image and body attribute learning at the global level, and upper- and lower-body image and attribute learning at the local level. The proposed architecture is trained to learn attribute-semantic and identity-discriminative feature representations simultaneously. For model learning, we first train our part-based network using a supervised approach on a labeled attribute dataset. Then, we apply an unsupervised clustering method to assign pseudo-labels to unlabeled images in a target dataset using our trained network. To improve feature compatibility, we introduce an attribute consistency scheme for unsupervised domain adaptation on this unlabeled target data. During training on the target dataset, we alternately perform three steps: extracting features with the updated model, assigning pseudo-labels to unlabeled images, and fine-tuning the model. Through a unified framework that fuses complementary attribute-label and identity-label information, our approach achieves considerable improvements of 10.6% and 3.91% mAP on the Market-1501→DukeMTMC-ReID and DukeMTMC-ReID→Market-1501 unsupervised domain adaptation tasks, respectively.
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)
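The alternating scheme described above can be skeletonized as follows. KMeans stands in for the paper's unnamed clustering method, and the encoder and fine-tuning step are placeholders; only the extract/cluster/fine-tune loop structure is faithful to the description.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_features(model, images):
    return model(images)                      # (N, D) embeddings

def assign_pseudo_labels(feats, n_ids=8):
    return KMeans(n_clusters=n_ids, n_init=10).fit_predict(feats)

def fine_tune(model, images, pseudo_labels):
    return model                              # placeholder for a training step

model = lambda x: x                           # identity "encoder" for the demo
images = np.random.default_rng(0).normal(size=(200, 16))
for it in range(3):                           # alternate the three steps
    feats = extract_features(model, images)
    labels = assign_pseudo_labels(feats)
    model = fine_tune(model, images, labels)
print(np.bincount(labels))                    # pseudo-identity cluster sizes
```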
17 pages, 3211 KB  
Article
Adaptive and User-Friendly Framework for Image Classification with Transfer Learning Models
by Manan Khatri, Manmita Sahoo, Sameer Sayyad and Javed Sayyad
Future Internet 2025, 17(8), 370; https://doi.org/10.3390/fi17080370 - 15 Aug 2025
Viewed by 298
Abstract
The increasing demand for accessible and efficient machine learning solutions has led to the development of the Adaptive Learning Framework (ALF) for multi-class, single-label image classification. Unlike existing low-code tools, ALF integrates multiple transfer learning backbones with a guided, adaptive workflow that empowers non-technical users to create custom classification models without specialized expertise. It employs pre-trained models from TensorFlow Hub to significantly reduce computational costs and training times while maintaining high accuracy. The platform’s User Interface (UI), built using Streamlit, enables intuitive operations, such as dataset upload, class definition, and model training, without coding requirements. This research focuses on small-scale image datasets to demonstrate ALF’s accessibility and ease of use. Evaluation metrics highlight the superior performance of transfer learning approaches, with the InceptionV2 model architecture achieving the highest accuracy. By bridging the gap between complex deep learning methods and real-world usability, ALF addresses practical needs across fields like education and industry.
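The transfer-learning pattern ALF builds on is the standard freeze-the-backbone recipe sketched below. MobileNetV2 from tf.keras.applications stands in for the TensorFlow Hub backbones the platform actually wraps, and the class count is an arbitrary assumption.

```python
import tensorflow as tf

NUM_CLASSES = 5  # e.g., user-defined classes uploaded through the UI

# Frozen pre-trained backbone: reuse ImageNet features, train only the head.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg",
    weights="imagenet")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # with a user dataset
```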
17 pages, 2245 KB  
Article
An Energy-Efficient Scheme for Waking Co-Channel TDMA in LoRa Networks via the Integration of Bidirectional Timestamp Correction and Address Recognition
by Zongliang Xu, Guicai Yu, Yingcong Luo and Hao Jiang
Future Internet 2025, 17(8), 369; https://doi.org/10.3390/fi17080369 - 14 Aug 2025
Viewed by 258
Abstract
To address the issues of high energy consumption, data collisions, and invalid wake-ups of nontarget nodes in large-scale node-deployment scenarios of long-range (LoRa) star networks, this paper proposes an energy-saving wake-up scheme that combines (i) time-division multiple access (TDMA) slot allocation based on bidirectional timestamp correction with (ii) an integrated sensing and communication (ISAC) scheme for physical address identification of LoRa nodes operating on the same channel. The scheme incorporates parameter estimation of the LoRa channel, which effectively enhances identification accuracy and improves the system’s robustness. The proposed scheme consists of two parts: First, in case nodes briefly lose power, a bidirectional timestamp calibration algorithm and GPS-assisted timing are used to synchronize the gateway and each node with high precision, ensuring the accurate scheduling of the TDMA mechanism. Second, based on time synchronization, a “slot–LoRa module address” mapping table is constructed to set the communication time points between the gateway and each node. The gateway can wake the target nodes at specific, precise communication time points. Experimental results show that the proposed method maintains the error range within ±1 ms. By significantly decreasing the rate of unnecessary node wake-ups, the scheme reduces data collisions and energy waste in the same-channel environment. Energy savings scale with network size, thereby significantly extending the network life cycle.
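The abstract does not give the calibration formula, but bidirectional timestamp exchanges of this kind usually reduce to the standard two-way time-transfer estimate shown below, where t1 and t4 are node clock readings and t2 and t3 are gateway clock readings; this is a generic sketch, not the paper's exact algorithm.

```python
def offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated gateway-minus-node clock offset
    delay = (t4 - t1) - (t3 - t2)          # round trip minus gateway turnaround
    return offset, delay

# Node sends at t1, gateway stamps arrival t2 and reply t3, node stamps arrival t4.
t1, t2, t3, t4 = 1000.000, 1000.512, 1000.514, 1000.030
offset, delay = offset_and_delay(t1, t2, t3, t4)
print(f"clock offset ~ {offset*1000:.1f} ms, path delay ~ {delay*1000:.1f} ms")
```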
29 pages, 1615 KB  
Review
Internet of Things Driven Digital Twin for Intelligent Manufacturing in Shipbuilding Workshops
by Caiping Liang, Xiang Li, Wenxu Niu and Yansong Zhang
Future Internet 2025, 17(8), 368; https://doi.org/10.3390/fi17080368 - 14 Aug 2025
Viewed by 493
Abstract
Intelligent manufacturing research has focused on digital twins (DTs) due to the growing integration of physical and cyber systems. This study thoroughly explores the Internet of Things (IoT) as a cornerstone of DTs, showing its promise and limitations in the digital transformation of intelligent shipbuilding workshops. We analyze the progress of IoT protocols, digital twin frameworks, and intelligent ship manufacturing. A unique bidirectional digital twin system for shipbuilding workshops uses the Internet of Things to communicate data between real and virtual workshops. This research uses a steel-cutting workshop to demonstrate the digital transformation of the production line, including data collection, transmission, storage, and simulation analysis. Critical barriers to DT deployment in shipbuilding environments are then systematically analyzed, including technical standard unification, communication security, real-time performance guarantees, cross-workshop collaboration mechanisms, and the deep integration of artificial intelligence. Adaptive solutions include hybrid edge-cloud computing architectures for latency-sensitive tasks and reinforcement learning-based smart scheduling algorithms. The findings suggest that IoT-driven digital transformation may modernize shipbuilding workshops in new ways.
29 pages, 919 KB  
Article
DDoS Defense Strategy Based on Blockchain and Unsupervised Learning Techniques in SDN
by Shengmin Peng, Jialin Tian, Xiangyu Zheng, Shuwu Chen and Zhaogang Shu
Future Internet 2025, 17(8), 367; https://doi.org/10.3390/fi17080367 - 13 Aug 2025
Viewed by 423
Abstract
With the rapid development of technologies such as cloud computing, big data, and the Internet of Things (IoT), Software-Defined Networking (SDN) is emerging as a new network architecture for the modern Internet. SDN separates the control plane from the data plane, allowing a central controller, the SDN controller, to quickly direct the routing devices within the topology to forward data packets, thus providing flexible traffic management for communication between information sources. However, traditional Distributed Denial of Service (DDoS) attacks still significantly impact SDN systems. This paper proposes a novel dual-layer strategy capable of detecting and mitigating DDoS attacks in an SDN network environment. The first layer of the strategy enhances security by using blockchain technology to replace the SDN flow table storage container in the northbound interface of the SDN controller. Smart contracts are then used to process the stored flow table information. We employ the time window algorithm and the token bucket algorithm to construct the first-layer strategy to defend against obvious DDoS attacks. To detect and mitigate less obvious DDoS attacks, we design a second-layer strategy that uses a composite data feature correlation coefficient calculation method and the Isolation Forest algorithm from unsupervised learning techniques to perform binary classification, thereby identifying abnormal traffic. We conduct experimental validation using the publicly available DDoS dataset CIC-DDoS2019. The results show that using this strategy in the SDN network reduces the average deviation of round-trip time (RTT) by approximately 38.86% compared with the original SDN network without this strategy. Furthermore, DDoS attack detection achieves an accuracy of 97.66% and an F1 score of 92.2%. Compared with other similar methods, under comparable detection accuracy, the deployment of our strategy in small-scale SDN network topologies provides faster detection speeds for DDoS attacks and exhibits less fluctuation in detection time. This indicates that implementing this strategy can effectively identify DDoS attacks without affecting the stability of data transmission in the SDN network environment.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
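The second-layer classifier can be illustrated with scikit-learn's IsolationForest on per-flow features. The feature set (packet rate, mean packet size, flow duration) and the synthetic traffic below are assumptions for the demo, not the paper's composite-feature pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic flows: [packets/s, mean packet size (bytes), duration (s)]
normal = rng.normal(loc=[100, 500, 2.0], scale=[20, 80, 0.5], size=(950, 3))
attack = rng.normal(loc=[900, 60, 0.1], scale=[50, 10, 0.05], size=(50, 3))
flows = np.vstack([normal, attack])

clf = IsolationForest(contamination=0.05, random_state=0).fit(flows)
pred = clf.predict(flows)                 # -1 = anomaly, 1 = normal
print("flagged:", int((pred == -1).sum()), "of", len(flows))
```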
22 pages, 1780 KB  
Systematic Review
The Future of Education: A Systematic Literature Review of Self-Directed Learning with AI
by Carmen del Rosario Navas Bonilla, Luis Miguel Viñan Carrasco, Jhoanna Carolina Gaibor Pupiales and Daniel Eduardo Murillo Noriega
Future Internet 2025, 17(8), 366; https://doi.org/10.3390/fi17080366 - 13 Aug 2025
Viewed by 704
Abstract
As digital transformation continues to redefine education, understanding how emerging technologies can enhance self-directed learning (SDL) becomes essential for learners, educators, instructional designers, and policymakers, as this approach supports personalized learning, strengthens student autonomy, and responds to the demands of more flexible and dynamic educational environments. This systematic review examines how artificial intelligence (AI) tools enhance SDL by offering personalized, adaptive, and real-time support for learners in online environments. Following the PRISMA 2020 methodology, a literature search was conducted to identify relevant studies published between 2020 and 2025. After applying inclusion, exclusion, and quality criteria, 77 studies were selected for in-depth analysis. The findings indicate that AI-powered tools such as intelligent tutoring systems, chatbots, conversational agents, and natural language processing applications promote learner autonomy, enable self-regulation, provide real-time feedback, and support individualized learning paths. However, several challenges persist, including overreliance on technology, cognitive overload, and diminished human interaction. These insights suggest that, while AI plays a transformative role in the evolution of education, its integration must be guided by thoughtful pedagogical design, ethical considerations, and a learner-centered approach to fully support the future of education through the internet.
41 pages, 1857 KB  
Review
The Adaptive Ecosystem of MaaS-Driven Cookie Theft: Dynamics, Anticipatory Analysis Concepts, and Proactive Defenses
by Leandro Antonio Pazmiño Ortiz, Ivonne Fernanda Maldonado Soliz and Vanessa Katherine Guevara Balarezo
Future Internet 2025, 17(8), 365; https://doi.org/10.3390/fi17080365 - 11 Aug 2025
Viewed by 559
Abstract
The industrialization of cybercrime, principally through Malware-as-a-Service (MaaS), has elevated HTTP cookie theft to a critical cybersecurity challenge, enabling attackers to bypass multi-factor authentication and perpetrate large-scale account takeovers. Employing a Holistic and Integrative Review methodology, this paper dissects the intricate, adaptive ecosystem of MaaS-driven cookie theft. We systematically characterize the co-evolving arms race between offensive and defensive strategies (2020–2025), revealing a critical strategic asymmetry where attackers optimize for speed and low cost, while effective defenses demand significant resources. To shift security from a reactive to an anticipatory posture, a multi-dimensional predictive framework is not only proposed but is also detailed as a formalized, testable algorithm, integrating technical, economic, and behavioral indicators to forecast emerging threat trajectories. Our findings conclude that long-term security hinges on disrupting the underlying cybercriminal economic model; we therefore reframe proactive countermeasures like Zero-Trust principles and ephemeral tokens as economic weapons designed to devalue the stolen asset. Finally, the paper provides a prioritized, multi-year research roadmap and a practical decision-tree framework to guide the implementation of these advanced, collaborative cybersecurity strategies to counter this pervasive and evolving threat.
21 pages, 394 KB  
Article
Enabling Horizontal Collaboration in Logistics Through Secure Multi-Party Computation
by Gabriele Spini, Stephan Krenn, Erich Teppan, Christina Petschnigg and Elena Wiegelmann
Future Internet 2025, 17(8), 364; https://doi.org/10.3390/fi17080364 - 8 Aug 2025
Viewed by 423
Abstract
The road transport sector is currently facing significant challenges, due in part to CO2 emissions, high fuel prices, and a shortage of staff. These issues are partially caused by more than 40% of truck journeys being “empty runs” in some member states of the European Union, as well as by heavy under-utilization of deck space on non-empty runs. To overcome this inefficiency, this paper proposes a decentralized platform to facilitate collaborative transport networks (CTNs), i.e., to enable horizontal collaboration to increase load factors and reduce costs and CO2 emissions. Our solution leverages secure multi-party computation (MPC) to guarantee that no sensitive business information is leaked to competing hauliers. The system optimizes truck assignments by modeling logistics as a weighted graph that considers orders and truck capacities while maintaining strict confidentiality. Our approach addresses key barriers to CTN adoption, such as lack of trust and data privacy. Implemented using MPyC without extensive optimizations, our prototype demonstrates efficiency and effectiveness in increasing the average load factor, while achieving acceptable running times (on the order of hours) for arguably meaningful instance sizes (up to 1000 orders). Having started from a rather simplistic model inspired by previous work, we finally give an outlook on possible extensions toward more realistic models and estimate their impact on efficiency.
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
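Since the implementation uses MPyC, the flavor of the computation can be shown with MPyC's canonical input/aggregate/output pattern. The secure sum of private capacities below is a stand-in for the paper's graph-based assignment, not its actual circuit, and the capacity figures are invented.

```python
from mpyc.runtime import mpc

# Each haulier inputs a private value (e.g., free deck capacity); only the
# aggregate is revealed, so no individual business figure leaks.
# Run with e.g.:  python secure_sum.py -M3 -I0   (and -I1, -I2 elsewhere)

secint = mpc.SecInt(32)

async def main():
    await mpc.start()
    my_capacity = 40 + 10 * mpc.pid              # stand-in for real input data
    capacities = mpc.input(secint(my_capacity))  # one secret-shared value per party
    total = await mpc.output(mpc.sum(capacities))
    print("total spare capacity:", total)
    await mpc.shutdown()

mpc.run(main())
```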
27 pages, 1120 KB  
Article
Beyond Prompt Chaining: The TB-CSPN Architecture for Agentic AI
by Uwe M. Borghoff, Paolo Bottoni and Remo Pareschi
Future Internet 2025, 17(8), 363; https://doi.org/10.3390/fi17080363 - 8 Aug 2025
Viewed by 401
Abstract
Current agentic AI frameworks such as LangGraph and AutoGen simulate autonomy via sequential prompt chaining but lack true multi-agent coordination architectures. These systems conflate semantic reasoning with orchestration, requiring LLMs at every coordination step and limiting scalability. By contrast, TB-CSPN (Topic-Based Communication Space Petri Net) is a hybrid formal architecture that fundamentally separates semantic processing from coordination logic. Unlike traditional Petri net applications, where the entire system state is encoded within the network structure, TB-CSPN uses Petri nets exclusively for coordination workflow modeling, while communication and interaction between agents are driven by semantically rich, topic-based representations. At the same time, unlike first-generation agentic frameworks, LLMs are here confined to topic extraction, with business-logic coordination implemented through structured token communication. This hybrid architectural separation preserves human strategic oversight (as supervisors) while delegating consultant and worker roles to LLMs and specialized AI agents, avoiding the state-space explosion typical of monolithic formal systems. Our empirical evaluation shows that TB-CSPN achieves 62.5% faster processing, 66.7% fewer LLM API calls, and 167% higher throughput compared to LangGraph-style orchestration, without sacrificing reliability. Scaling experiments with 10–100 agents reveal sub-linear memory growth (a 10× efficiency improvement), directly contradicting traditional Petri net scalability concerns through our semantic-coordination-based architectural separation. These performance gains arise from the hybrid design, where coordination patterns remain constant while semantic spaces scale independently. TB-CSPN demonstrates that efficient agentic AI emerges not by over-relying on modern AI components but by embedding them strategically within a hybrid architecture that combines formal coordination guarantees with semantic flexibility. Our implementation and evaluation methodology are openly available, inviting community validation and extension of these principles.
(This article belongs to the Special Issue Intelligent Agents and Their Application)
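A minimal Petri-net firing rule makes the separation concrete: the net below only routes tokens between supervisor, consultant, and worker roles, while what a token means (its topics) would come from an LLM outside the net. Place and transition names are illustrative, not TB-CSPN's actual model.

```python
marking = {"inbox": 1, "topics_extracted": 0, "approved": 0, "done": 0}

TRANSITIONS = {
    # name: (preconditions, postconditions) as place -> token counts
    "extract_topics": ({"inbox": 1}, {"topics_extracted": 1}),      # consultant (LLM)
    "supervise":      ({"topics_extracted": 1}, {"approved": 1}),   # human supervisor
    "execute":        ({"approved": 1}, {"done": 1}),               # worker agent
}

def enabled(name):
    pre, _ = TRANSITIONS[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = TRANSITIONS[name]
    assert enabled(name), name
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

for t in ["extract_topics", "supervise", "execute"]:
    fire(t)
print(marking)   # token ends in 'done' after the full workflow
```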
26 pages, 11496 KB  
Article
Parallel Algorithm for NP-Hard Problem of Channel Resource Allocation Optimization in Ad Hoc and Sensor Networks
by Valeriy Ivanov and Maxim Tereshonok
Future Internet 2025, 17(8), 362; https://doi.org/10.3390/fi17080362 - 8 Aug 2025
Viewed by 285
Abstract
This paper proposes a technique to estimate the minimal quantity of orthogonal channel resources required for ad hoc and sensor network connectivity. Simultaneously, the resource allocation to each specific line is conducted by grouping lines into concurrent transmission sets. Our proposed technique uses the physical-based interference model assumption, where each node interferes with every other node, turning ad hoc and sensor network performance optimization problems into NP-hard ones. In contrast to most other works with the physical-based interference model assumption, we mitigate the combinatorial explosion of concurrently transmitting line sets by considering the global interference instead of localizing the interference with line or space partitioning. We perform the mitigation, firstly, using pairwise mutually acceptable line sets for each line. Then, based on the limitations of pairwise sets, we construct the tree of mutually acceptable interfering line sets. From the created tree, we find the minimal set cover of concurrently transmitting line sets. The tree construction has exponential worst-case time and space complexity if all lines in the network can transmit together. By randomly pruning the tree and using a genetic algorithm to find the pruned tree that gives the same minimal set cover as the full tree, we reduce the worst-case space and time complexities to polynomial ones. We devised our technique with parallelism in mind to speed up the resource allocation optimization even more. Based on an extensive simulation study with random network topologies of sizes up to 250 nodes and an average number of lines up to 2000, we estimated the time and space complexity for different tree pruning and optimization techniques and found the effective ones.
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
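The final stage, finding a minimal set cover of concurrently transmitting line sets, can be approximated greedily as below. The candidate sets are hand-picked here, whereas the paper derives them from its interference tree and searches with a genetic algorithm rather than this simple heuristic.

```python
def greedy_set_cover(universe, candidate_sets):
    """Cover all lines with as few concurrent-transmission sets as possible."""
    uncovered, cover = set(universe), []
    while uncovered:
        best = max(candidate_sets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("lines cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover

lines = range(1, 7)
concurrent_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {2, 6}, {5}]
slots = greedy_set_cover(lines, concurrent_sets)
print(f"{len(slots)} channel resources:", slots)   # each set gets one resource
```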
30 pages, 2062 KB  
Article
A Multi-Layer Secure Sharing Framework for Aviation Big Data Based on Blockchain
by Qing Wang, Zhijun Wu and Yanrong Lu
Future Internet 2025, 17(8), 361; https://doi.org/10.3390/fi17080361 - 8 Aug 2025
Viewed by 353
Abstract
As a new type of production factor, data possesses multidimensional application value, and its pivotal role is becoming increasingly prominent in the aviation sector. Data sharing can significantly enhance the utilization efficiency of data resources and serves as one of the key tasks in building smart civil aviation. However, currently, data silos are pervasive, with vast amounts of data only being utilized and analyzed within limited scopes, leaving their full potential untapped. The challenges in data sharing primarily stem from three aspects: (1) Data owners harbor concerns regarding data security and privacy. (2) The highly dynamic and real-time nature of aviation operations imposes stringent requirements on the timeliness, stability, and reliability of data sharing, thereby constraining its scope and extent. (3) The lack of reasonable incentive mechanisms results in insufficient motivation for data owners to share. Consequently, addressing the issue of aviation big data sharing holds significant importance. Since the release of the Bitcoin whitepaper in 2008, blockchain technology has achieved continuous breakthroughs in the fields of data security and collaborative computing. Its unique characteristics—decentralization, tamper-proofing, traceability, and scalability—lay the foundation for its integration with aviation. Blockchain can deeply integrate with air traffic management (ATM) operations, effectively resolving trust, efficiency, and collaboration challenges in distributed scenarios for ATM data. To address the heterogeneous data usage requirements of different ATM stakeholders, this paper constructs a blockchain-based multi-level data security sharing architecture, enabling fine-grained management and secure collaboration. Furthermore, to meet the stringent timeliness demands of aviation operations and the storage pressure posed by massive data, this paper optimizes blockchain storage deployment and consensus mechanisms, thereby enhancing system scalability and processing efficiency. Additionally, a dual-mode data-sharing solution combining raw data sharing and model sharing is proposed, offering a novel approach to aviation big data sharing. Security and formal analyses demonstrate that the proposed solution is both secure and effective.
21 pages, 655 KB  
Article
A Novel Framework Leveraging Large Language Models to Enhance Cold-Start Advertising Systems
by Albin Uruqi, Iosif Viktoratos and Athanasios Tsadiras
Future Internet 2025, 17(8), 360; https://doi.org/10.3390/fi17080360 - 8 Aug 2025
Viewed by 639
Abstract
The cold-start problem remains a critical challenge in personalized advertising, where users with limited or no interaction history often receive suboptimal recommendations. This study introduces a novel, three-stage framework that systematically integrates transformer architectures and large language models (LLMs) to improve recommendation accuracy, transparency, and user experience throughout the entire advertising pipeline. The proposed approach begins with transformer-enhanced feature extraction, leveraging self-attention and learned positional encodings to capture deep semantic relationships among users, ads, and context. It then employs an ensemble integration strategy combining enhanced state-of-the-art models with optimized aggregation for robust prediction. Finally, an LLM-driven enhancement module performs semantic reranking, personalized message refinement, and natural language explanation generation while also addressing cold-start scenarios through pre-trained knowledge. The LLM component further supports diversification, fairness-aware ranking, and sentiment sensitivity in order to ensure more relevant, diverse, and ethically grounded recommendations. Extensive experiments on the DigiX and Avazu datasets demonstrate notable gains in click-through rate (CTR) prediction, while an in-depth real-user evaluation showcases improvements in perceived ad relevance, message quality, transparency, and trust. This work advances the state of the art by combining CTR models with interpretability and contextual reasoning. The strengths of the proposed method, such as its innovative integration of components, empirical validation, multifaceted LLM application, and ethical alignment, highlight its potential as a robust, future-ready solution for personalized advertising.
(This article belongs to the Special Issue Information Networks with Human-Centric LLMs)
44 pages, 1716 KB  
Article
Creating Automated Microsoft Bicep Application Infrastructure from GitHub in the Azure Cloud
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2025, 17(8), 359; https://doi.org/10.3390/fi17080359 - 7 Aug 2025
Viewed by 444
Abstract
Infrastructure as code (IaC) is essential for modern cloud development, enabling teams to define, deploy, and manage infrastructure in a consistent and repeatable manner. As organizations migrate to Azure, selecting the right approach is crucial for managing complexity, minimizing errors, and supporting DevOps practices. This paper examines the use of Azure Bicep with GitHub Actions to automate infrastructure deployment for an application in the Azure cloud. It explains how Bicep improves readability, modularity, and integration compared to traditional ARM templates and other automation tools. The solution utilizes a modular Bicep design to deploy resources, including virtual networks, managed identities, container apps, databases, and AI services, with environment-specific parameters for development, QA, and production. GitHub Actions workflows automate the building, deployment, and tearing down of infrastructure, ensuring consistent deployments across environments. Security considerations include managed identities, private networking, and secret management in continuous integration (CI) and continuous delivery (CD) pipelines. This paper provides a detailed architectural overview, workflow analysis, and implementation guidance to help teams adopt a robust, automated approach to Azure infrastructure deployment. By leveraging automation tooling and modern DevOps practices, organizations can streamline delivery and maintain secure, maintainable cloud environments.
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities, 2nd Edition)
38 pages, 3566 KB  
Review
Enhancing Industrial Processes Through Augmented Reality: A Scoping Review
by Alba Miranda, Aracely M. Vallejo, Paulina Ayala, Marcelo V. Garcia and Jose E. Naranjo
Future Internet 2025, 17(8), 358; https://doi.org/10.3390/fi17080358 - 7 Aug 2025
Viewed by 570
Abstract
Augmented reality (AR) in industry improves training and technical assistance by overlaying digital information on real environments, facilitating the visualisation and understanding of complex processes. It also enables more effective remote collaboration, optimising problem solving and decision making in real time. This paper presents a scoping review, following PRISMA guidelines, of the optimisation of industrial processes through the application of AR. The objectives of this study included characterising successful implementations of AR in various industrial processes; comparing different hardware, graphics engines, and associated costs; and determining the percentage of optimisation achieved through AR. The databases included were Scopus, SpringerLink, IEEExplore, and MDPI. Eligibility criteria were defined as English-language articles published between 2019 and 2024 that provide significant contributions to AR applications in engineering. The Cochrane method was used to assess bias. The rigorous selection process resulted in the inclusion of 38 articles. Key findings indicate that AR reduces errors and execution times, improves efficiency and productivity, and optimises training and maintenance processes, leading to cost savings and quality improvement. Unity 3D is the most widely used graphics engine for AR applications. The main applications of AR are in maintenance, assembly, training, and inspection, with maintenance being the most researched area. Challenges include the learning curve, high initial costs, and hardware limitations.
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
21 pages, 559 KB  
Review
Interest Flooding Attacks in Named Data Networking and Mitigations: Recent Advances and Challenges
by Simeon Ogunbunmi, Yu Chen, Qi Zhao, Deeraj Nagothu, Sixiao Wei, Genshe Chen and Erik Blasch
Future Internet 2025, 17(8), 357; https://doi.org/10.3390/fi17080357 - 6 Aug 2025
Viewed by 440
Abstract
Named Data Networking (NDN) represents a promising Information-Centric Networking architecture that addresses limitations of traditional host-centric Internet protocols by emphasizing content names rather than host addresses for communication. While NDN offers advantages in content distribution, mobility support, and built-in security features, its stateful forwarding plane introduces significant vulnerabilities, particularly Interest Flooding Attacks (IFAs). These attacks exploit the Pending Interest Table (PIT) by injecting malicious Interest packets for non-existent or unsatisfiable content, leading to resource exhaustion and denial-of-service attacks against legitimate users. This survey examines research advances in IFA detection and mitigation from 2013 to 2024, analyzing seven relevant published detection and mitigation strategies to provide current insights into this evolving security challenge. We establish a taxonomy of attack variants, including Fake Interest, Unsatisfiable Interest, Interest Loop, and Collusive models, while examining their operational characteristics and network performance impacts. Our analysis categorizes defense mechanisms into five primary approaches: rate-limiting strategies, PIT management techniques, machine learning and artificial intelligence methods, reputation-based systems, and blockchain-enabled solutions. These approaches are evaluated for their effectiveness, computational requirements, and deployment feasibility. The survey extends to domain-specific implementations in resource-constrained environments, examining adaptations for Internet of Things deployments, wireless sensor networks, and high-mobility vehicular scenarios. Five critical research directions are proposed: adaptive defense mechanisms against sophisticated attackers, privacy-preserving detection techniques, real-time optimization for edge computing environments, standardized evaluation frameworks, and hybrid approaches combining multiple mitigation strategies.
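A toy version of the rate-limiting family of defenses surveyed here: track, per name prefix, how many Interests expire unsatisfied in the PIT, and throttle prefixes whose unsatisfied ratio grows too high. The thresholds and the omitted sliding-window bookkeeping are simplified assumptions, not a specific published scheme.

```python
from collections import defaultdict

class PrefixMonitor:
    def __init__(self, max_unsatisfied_ratio=0.5, min_samples=20):
        self.sent = defaultdict(int)
        self.expired = defaultdict(int)
        self.ratio = max_unsatisfied_ratio
        self.min_samples = min_samples

    def on_interest(self, prefix):
        self.sent[prefix] += 1

    def on_pit_timeout(self, prefix):
        self.expired[prefix] += 1

    def should_throttle(self, prefix):
        n = self.sent[prefix]
        return n >= self.min_samples and self.expired[prefix] / n > self.ratio

mon = PrefixMonitor()
for _ in range(30):
    mon.on_interest("/attack/bogus")
    mon.on_pit_timeout("/attack/bogus")   # Interests for bogus names never return Data
for _ in range(30):
    mon.on_interest("/cdn/video")         # legitimate, mostly satisfied prefix
print(mon.should_throttle("/attack/bogus"), mon.should_throttle("/cdn/video"))
```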
21 pages, 787 KB  
Article
Rethinking Modbus-UDP for Real-Time IIoT Systems
by Ivan Cibrario Bertolotti
Future Internet 2025, 17(8), 356; https://doi.org/10.3390/fi17080356 - 5 Aug 2025
Viewed by 414
Abstract
The original Modbus specification for RS-485 and RS-232 buses supported broadcast transmission. As the protocol evolved into Modbus-TCP to use the TCP transport, this useful feature was lost, likely due to the point-to-point nature of TCP connections. Later proposals did not restore the broadcast transmission capability, although they used UDP as the transport and UDP, by itself, would have supported it. Moreover, they did not address the inherent lack of reliable delivery of UDP, leaving datagram loss detection and recovery to the application layer. This paper describes a novel redesign of Modbus-UDP that addresses the aforementioned shortcomings. It achieves a mean round-trip time of only 38% of that of Modbus-TCP and seamlessly supports a previously published protocol based on Modbus broadcast. In addition, the built-in retransmission of Modbus-UDP reacts more efficiently than the equivalent Modbus-TCP mechanism, exhibiting 50% of its round-trip standard deviation when subjected to a 1% two-way IP datagram loss probability. Combined with the lower overhead of UDP versus TCP, this makes the redesigned Modbus-UDP protocol better suited for a variety of Industrial Internet of Things systems with limited computing and communication resources.
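The kind of built-in retransmission such a redesign needs on the client side can be sketched with a plain UDP socket: send, wait with a timeout, resend up to a retry budget. The framing below is a placeholder, not the paper's actual Modbus-UDP ADU layout; a real client would also match transaction identifiers so that late duplicate replies are discarded.

```python
import socket

def request_with_retry(payload: bytes, addr, timeout=0.2, retries=3) -> bytes:
    """Send a datagram and retransmit on timeout, since UDP gives no delivery guarantee."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        for attempt in range(1 + retries):
            sock.sendto(payload, addr)
            try:
                data, _ = sock.recvfrom(512)
                return data               # response received
            except socket.timeout:
                continue                  # request or reply lost: resend
    raise TimeoutError(f"no response after {1 + retries} attempts")

# Hypothetical usage against a device at 192.0.2.10; payload framing is illustrative.
# reply = request_with_retry(b"\x01\x03\x00\x00\x00\x02", ("192.0.2.10", 502))
```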
31 pages, 8580 KB  
Article
TSA-GRU: A Novel Hybrid Deep Learning Module for Learner Behavior Analytics in MOOCs
by Soundes Oumaima Boufaida, Abdelmadjid Benmachiche, Makhlouf Derdour, Majda Maatallah, Moustafa Sadek Kahil and Mohamed Chahine Ghanem
Future Internet 2025, 17(8), 355; https://doi.org/10.3390/fi17080355 - 5 Aug 2025
Viewed by 299
Abstract
E-learning is an increasingly dominant phenomenon in education, making the development of robust models that can accurately represent the dynamic behavior of learners in MOOCs ever more critical. In this article, we propose the Temporal Sparse Attention-Gated Recurrent Unit (TSA-GRU), a novel deep learning framework that combines TSA with a sequential encoder based on the GRU. This hybrid model effectively reconstructs student response times and learning trajectories with high fidelity by leveraging the temporal embeddings of instructional and feedback activities. By dynamically filtering noise from student interactions, TSA-GRU generates context-aware representations that seamlessly integrate both short-term fluctuations and long-term learning patterns. Empirical evaluation on the 2009–2010 ASSISTments dataset demonstrates that TSA-GRU achieved a test accuracy of 95.60% and a test loss of 0.0209, outperforming the Modular Sparse Attention-Gated Recurrent Unit (MSA-GRU), Bayesian Knowledge Tracing (BKT), Performance Factors Analysis (PFA), and TSA in the same experimental design. TSA-GRU converged within five training epochs; however, while TSA-GRU demonstrates strong predictive performance for knowledge tracing tasks, these findings are specific to this dataset and should not be regarded as conclusive for all data. Further statistical validation through five-fold cross-validation, confidence intervals, and paired t-tests confirmed the robustness, consistency, and statistically significant superiority of TSA-GRU over the baseline model MSA-GRU. TSA-GRU’s scalability and capacity to incorporate a temporal dimension of knowledge make it well-positioned to analyze complex learner behaviors and plan interventions for adaptive learning in computerized learning systems.
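For orientation, a backbone-only sketch of GRU-based knowledge tracing is shown below: a GRU encoder over a student's interaction sequence predicting correctness of the next response. The temporal sparse attention module, which is the paper's actual contribution, is not reproduced here, and all sizes are arbitrary.

```python
import tensorflow as tf

VOCAB, DIM, STEPS = 200, 64, 50   # interaction vocabulary, embedding size, sequence length

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, DIM),          # embed (skill, correctness) interaction ids
    tf.keras.layers.GRU(DIM),                       # sequential encoder over the trajectory
    tf.keras.layers.Dense(1, activation="sigmoid")  # P(next answer correct)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.build((None, STEPS))
model.summary()
```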
3 pages, 150 KB  
Editorial
Editorial for the Special Issue “Virtual Reality and Metaverse: Impact on the Digital Transformation of Society—2nd Edition”
by Diego Vergara
Future Internet 2025, 17(8), 354; https://doi.org/10.3390/fi17080354 - 4 Aug 2025
Viewed by 335
Abstract
Virtual reality (VR) is transforming society by enabling the development of diverse applications across a wide range of fields [...] Full article
38 pages, 1194 KB  
Review
Transforming Data Annotation with AI Agents: A Review of Architectures, Reasoning, Applications, and Impact
by Md Monjurul Karim, Sangeen Khan, Dong Hoang Van, Xinyue Liu, Chunhui Wang and Qiang Qu
Future Internet 2025, 17(8), 353; https://doi.org/10.3390/fi17080353 - 2 Aug 2025
Viewed by 1858
Abstract
Data annotation serves as a critical foundation for artificial intelligence (AI) and machine learning (ML). Recently, AI agents powered by large language models (LLMs) have emerged as effective solutions to longstanding challenges in data annotation, such as scalability, consistency, cost, and limitations in domain expertise. These agents facilitate intelligent automation and adaptive decision-making, thereby enhancing the efficiency and reliability of annotation workflows across various fields. Despite the growing interest in this area, a systematic understanding of the role and capabilities of AI agents in annotation is still underexplored. This paper seeks to fill that gap by providing a comprehensive review of how LLM-driven agents support advanced reasoning strategies, adaptive learning, and collaborative annotation efforts. We analyze agent architectures, integration patterns within workflows, and evaluation methods, along with real-world applications in sectors such as healthcare, finance, technology, and media. Furthermore, we evaluate current tools and platforms that support agent-based annotation, addressing key challenges such as quality assurance, bias mitigation, transparency, and scalability. Lastly, we outline future research directions, highlighting the importance of federated learning, cross-modal reasoning, and responsible system design to advance the development of next-generation annotation ecosystems. Full article
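As a hedged illustration of the annotation loop such agent-based systems implement, the sketch below labels texts with an LLM, reads back a self-reported confidence, and escalates uncertain items for human review. `call_llm` is a hypothetical stand-in for any chat-completion client, and the JSON contract and threshold are assumptions for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real chat-completion client; returns
    # a canned response so the sketch runs without an API key.
    return '{"label": "spam", "confidence": 0.65}'

def annotate(texts, labels, threshold=0.8):
    accepted, escalated = [], []
    for text in texts:
        prompt = (f"Classify the text into one of {labels}. "
                  'Reply as JSON: {"label": ..., "confidence": 0-1}.\n'
                  f"Text: {text}")
        result = json.loads(call_llm(prompt))
        if result["confidence"] >= threshold:
            accepted.append((text, result["label"]))
        else:
            escalated.append(text)   # human-in-the-loop review
    return accepted, escalated

print(annotate(["win a free prize now"], ["spam", "ham"]))
```

The confidence gate is the simplest form of the quality-assurance pattern such reviews discuss; richer designs layer cross-agent agreement checks and bias audits on top.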
24 pages, 1294 KB  
Article
Confidential Smart Contracts and Blockchain to Implement a Watermarking Protocol
by Franco Frattolillo
Future Internet 2025, 17(8), 352; https://doi.org/10.3390/fi17080352 - 1 Aug 2025
Viewed by 422
Abstract
Watermarking protocols represent a possible solution to the problem of digital copyright protection of content distributed on the Internet. Their implementation, however, remains a complex problem due to the difficulties researchers encounter in proposing solutions that are secure, easy to use, and, at the same time, free of “trusted third parties” (TTPs). In this regard, implementations based on blockchain and smart contracts are among the most advanced and promising, even if they are affected by problems regarding performance and the privacy of the information exchanged and processed by smart contracts and managed by blockchains. This paper presents a watermarking protocol implemented by smart contracts and blockchain. The protocol uses a “layer-2” blockchain execution model and performs its computation in “trusted execution environments” (TEEs). Its implementation can therefore guarantee efficient and confidential execution without compromising ease of use or resorting to TTPs. The protocol and its implementation can thus be considered a valid answer to the “trilemma” that afflicts the use of blockchains, managing to guarantee decentralization, security, and scalability. Full article
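To give a flavor of the buyer-tracing idea underlying such protocols, here is a deliberately simplified sketch: a per-transaction watermark is generated, embedded (here by naive concatenation), and only a hash commitment is stored on a ledger, represented by a plain dict standing in for the smart contract's on-chain state. The actual protocol performs its computation confidentially inside a TEE and uses a real embedding algorithm; everything below is an assumption for illustration.

```python
import hashlib
import secrets

LEDGER = {}  # stand-in for a smart contract's on-chain state

def purchase(content: bytes, buyer_id: str) -> bytes:
    watermark = secrets.token_bytes(16)          # per-transaction mark
    marked = content + watermark                 # placeholder embedding
    commitment = hashlib.sha256(watermark + buyer_id.encode()).hexdigest()
    LEDGER[commitment] = buyer_id                # public but non-revealing
    return marked

def trace(suspect: bytes, buyer_id: str) -> bool:
    watermark = suspect[-16:]                    # recover the naive mark
    commitment = hashlib.sha256(watermark + buyer_id.encode()).hexdigest()
    return LEDGER.get(commitment) == buyer_id

copy = purchase(b"some digital content", "buyer-42")
assert trace(copy, "buyer-42")
```

Storing only the commitment keeps both the watermark and the content off-chain, which is the kind of privacy property the confidential, TEE-backed execution is meant to preserve at scale.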
26 pages, 1033 KB  
Article
Internet of Things Platform for Assessment and Research on Cybersecurity of Smart Rural Environments
by Daniel Sernández-Iglesias, Llanos Tobarra, Rafael Pastor-Vargas, Antonio Robles-Gómez, Pedro Vidal-Balboa and João Sarraipa
Future Internet 2025, 17(8), 351; https://doi.org/10.3390/fi17080351 - 1 Aug 2025
Viewed by 390
Abstract
Rural regions face significant barriers to adopting IoT technologies due to limited connectivity, energy constraints, and poor technical infrastructure. While urban environments benefit from advanced digital systems and cloud services, rural areas often lack the conditions necessary to deploy and evaluate secure and autonomous IoT solutions. To help overcome this gap, this paper presents the Smart Rural IoT Lab, a modular and reproducible testbed designed to replicate rural deployment conditions using open-source tools and affordable hardware. The laboratory integrates long-range and short-range communication technologies in six experimental scenarios, implementing protocols such as MQTT, HTTP, UDP, and CoAP. These scenarios simulate realistic rural use cases, including environmental monitoring, livestock tracking, infrastructure access control, and heritage site protection. Local data processing is achieved through containerized services such as Node-RED, InfluxDB, MongoDB, and Grafana, ensuring complete autonomy without dependence on cloud services. A key contribution of the laboratory is the generation of structured datasets from real network traffic captured with Tcpdump and preprocessed using Zeek. Unlike simulated datasets, the collected data reflect communication patterns generated by real devices. Although the current dataset includes only benign traffic, the platform is prepared for the future incorporation of adversarial scenarios (spoofing, DoS) to support AI-based cybersecurity research. While the experiments were conducted in a controlled indoor environment, the testbed architecture is portable and suitable for future outdoor deployment. The Smart Rural IoT Lab addresses a critical gap in current research infrastructure, providing a realistic and flexible foundation for developing secure, cloud-independent IoT solutions and contributing to the digital transformation of rural regions. Full article
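As an illustration of the benign traffic such a testbed captures, the sketch below publishes environmental readings to a local MQTT broker with paho-mqtt. The broker address, topic, and payload fields are assumptions, not the lab's actual configuration.

```python
import json
import time
import paho.mqtt.client as mqtt

# Sensor node publishing to a local broker, with no cloud dependency.
# Note: with paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION2
# as the first argument to Client().
client = mqtt.Client()
client.connect("192.168.1.10", 1883)   # assumed local broker address
client.loop_start()

for _ in range(3):
    reading = {"temp_c": 21.4, "humidity": 63, "ts": int(time.time())}
    client.publish("farm/field1/env", json.dumps(reading), qos=1)
    time.sleep(5)

client.loop_stop()
client.disconnect()
```

Captured with Tcpdump and parsed by Zeek, a handful of nodes running loops like this one yields the kind of labeled, real-device traffic dataset the laboratory contributes.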
28 pages, 1328 KB  
Review
Security Issues in IoT-Based Wireless Sensor Networks: Classifications and Solutions
by Dung T. Nguyen, Mien L. Trinh, Minh T. Nguyen, Thang C. Vu, Tao V. Nguyen, Long Q. Dinh and Mui D. Nguyen
Future Internet 2025, 17(8), 350; https://doi.org/10.3390/fi17080350 - 1 Aug 2025
Viewed by 585
Abstract
In recent years, the Internet of Things (IoT) has experienced considerable development and has played an important role in various domains such as industry, agriculture, healthcare, transportation, and the environment, especially for smart cities. Alongside this, wireless sensor networks (WSNs) are considered important components of IoT systems (WSN-IoT) used to create smart applications and automate processes. As the number of connected IoT devices increases, privacy and security issues become more complicated due to their exposed operating environments and limited resources. Hence, solutions need to be updated to ensure that data and user privacy are protected from threats and attacks. To support the safety and reliability of such systems, this paper addresses and classifies security issues in WSN-IoT, identifying security challenges and requirements for different kinds of attacks on either WSNs or IoT systems. In addition, security solutions corresponding to different types of attacks are provided, analyzed, and evaluated. We provide comparisons and classifications based on specific goals and applications, which can hopefully suggest suitable solutions for specific purposes in practice. We also suggest some research directions to support new security mechanisms. Full article
23 pages, 539 KB  
Article
AI-Supported EUD for Data Visualization: An Exploratory Case Study
by Sara Beschi, Daniela Fogli, Luigi Gargioni and Angela Locoro
Future Internet 2025, 17(8), 349; https://doi.org/10.3390/fi17080349 - 1 Aug 2025
Viewed by 345
Abstract
Data visualization is a key activity in data-driven decision making and is gaining momentum in many organizational contexts. However, the role and contribution of both end-user development (EUD) and artificial intelligence (AI) technologies for data visualization and analytics are still not clear or systematically studied. This work investigates how effectively AI-supported EUD tools may assist visual analytics tasks in organizations. An exploratory case study based on eight interviews with key informants allowed a deep understanding of data analysis and visualization practices in a large Italian company. It aimed to identify the various professional roles and competencies necessary in the business context, understand the data sources and data formats exploited in daily activities, and formulate suitable hypotheses to guide the design of AI-supported EUD tools for data analysis and visualization. In particular, the results of the interviews with key informants informed the development of a prototype of an LLM-based EUD environment, which was then used with selected target users to collect their opinions and expectations about this type of intervention in their work practice and organization. All the data collected during the exploratory case study finally led to the definition of a set of design guidelines for AI-supported EUD for data visualization. Full article
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)