Journal Description
Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q1 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17 days after submission; acceptance to publication takes 3.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.6 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Unsupervised Person Re-Identification via Deep Attribute Learning
Future Internet 2025, 17(8), 371; https://doi.org/10.3390/fi17080371 - 15 Aug 2025
Abstract
Driven by growing public security demands and the advancement of intelligent surveillance systems, person re-identification (ReID) has emerged as a prominent research focus in the field of computer vision. The primary objective of person ReID is to retrieve individuals with the same identity across different camera views. However, this task presents challenges due to its high sensitivity to variations in visual appearance caused by factors such as body pose and camera parameters. Although deep learning-based methods have achieved marked progress in ReID, the high cost of annotation remains a challenge that cannot be overlooked. To address this, we propose an unsupervised attribute learning framework that eliminates the need for costly manual annotations while maintaining high accuracy. The framework learns mid-level human attributes (such as clothing type and gender) that are robust to substantial visual appearance variations and can hence boost attribute accuracy with a small amount of labeled data. To implement our framework, we present a part-based convolutional neural network (CNN) architecture that consists of two components: whole-image and body-attribute learning at a global level, and upper- and lower-body image and attribute learning at a local level. The proposed architecture is trained to learn attribute-semantic and identity-discriminative feature representations simultaneously. For model learning, we first train our part-based network in a supervised manner on a labeled attribute dataset. Then, we apply an unsupervised clustering method to assign pseudo-labels to unlabeled images in a target dataset using our trained network. To improve feature compatibility, we introduce an attribute consistency scheme for unsupervised domain adaptation on this unlabeled target data. During training on the target dataset, we alternately perform three steps: extracting features with the updated model, assigning pseudo-labels to unlabeled images, and fine-tuning the model. Through a unified framework that fuses complementary attribute-label and identity-label information, our approach achieves considerable improvements of 10.6% and 3.91% mAP on the Market-1501→DukeMTMC-ReID and DukeMTMC-ReID→Market-1501 unsupervised domain adaptation tasks, respectively.
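The alternating extract–cluster–fine-tune loop described above is a common pattern in unsupervised ReID adaptation. As a rough illustration only (not the authors' code), a minimal sketch assuming a hypothetical feature-extractor wrapper and scikit-learn's DBSCAN for pseudo-labeling might look like this:

```python
# Illustrative sketch of an alternating pseudo-label loop; `model` is a
# hypothetical wrapper exposing extract() and fine_tune(), and all
# hyperparameters are assumptions, not the paper's configuration.
import numpy as np
from sklearn.cluster import DBSCAN

def adapt_to_target(model, target_images, rounds=5):
    """Alternate feature extraction, pseudo-labeling, and fine-tuning."""
    for _ in range(rounds):
        # Step 1: extract features with the current model.
        feats = np.stack([model.extract(img) for img in target_images])
        # Step 2: cluster features to assign pseudo-identities (-1 = outlier).
        labels = DBSCAN(eps=0.6, min_samples=4, metric="cosine").fit_predict(feats)
        kept = [(img, lab) for img, lab in zip(target_images, labels) if lab != -1]
        # Step 3: fine-tune the model on the pseudo-labeled subset only.
        model.fine_tune([img for img, _ in kept], [lab for _, lab in kept])
    return model
```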
Full article
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)
Open Access Article
Adaptive and User-Friendly Framework for Image Classification with Transfer Learning Models
by Manan Khatri, Manmita Sahoo, Sameer Sayyad and Javed Sayyad
Future Internet 2025, 17(8), 370; https://doi.org/10.3390/fi17080370 - 15 Aug 2025
Abstract
The increasing demand for accessible and efficient machine learning solutions has led to the development of the Adaptive Learning Framework (ALF) for multi-class, single-label image classification. Unlike existing low-code tools, ALF integrates multiple transfer learning backbones with a guided, adaptive workflow that empowers non-technical users to create custom classification models without specialized expertise. It employs pre-trained models from TensorFlow Hub to significantly reduce computational costs and training times while maintaining high accuracy. The platform’s User Interface (UI), built using Streamlit, enables intuitive operations, such as dataset upload, class definition, and model training, without coding requirements. This research focuses on small-scale image datasets to demonstrate ALF’s accessibility and ease of use. Evaluation metrics highlight the superior performance of transfer learning approaches, with the InceptionV2 model architecture achieving the highest accuracy. By bridging the gap between complex deep learning methods and real-world usability, ALF addresses practical needs across fields like education and industry.
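For readers unfamiliar with the transfer-learning setup that ALF automates, the following is a minimal sketch of a frozen TensorFlow Hub backbone with a new classification head; the module handle, input size, and hyperparameters are illustrative assumptions, not ALF's actual configuration:

```python
# Minimal transfer-learning classifier sketch (not ALF itself); the TF Hub
# handle and all hyperparameters are assumptions for illustration.
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5  # defined by the user's uploaded dataset

backbone = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/inception_v3/feature_vector/5",
    trainable=False)  # frozen pre-trained feature extractor

model = tf.keras.Sequential([
    tf.keras.Input(shape=(299, 299, 3)),  # expects RGB pixels scaled to [0, 1]
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```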
Full article
Open Access Article
An Energy-Efficient Scheme for Waking Co-Channel TDMA in LoRa Networks via the Integration of Bidirectional Timestamp Correction and Address Recognition
by Zongliang Xu, Guicai Yu, Yingcong Luo and Hao Jiang
Future Internet 2025, 17(8), 369; https://doi.org/10.3390/fi17080369 - 14 Aug 2025
Abstract
To address the issues of high energy consumption, data collisions, and invalid wake-ups of non-target nodes in large-scale node-deployment scenarios of long-range (LoRa) star networks, this paper proposes an energy-saving wake-up scheme that combines (i) time-division multiple access (TDMA) slot allocation based on bidirectional timestamp correction with (ii) an integrated sensing and communication (ISAC) scheme for physical address identification of LoRa nodes operating on the same channel. The scheme incorporates parameter estimation of the LoRa channel, which effectively enhances identification accuracy and improves the system’s robustness. The proposed scheme consists of two parts: First, in case nodes briefly lose power, a bidirectional timestamp calibration algorithm and GPS-assisted timing are used to synchronize the gateway and each node with high precision, ensuring accurate scheduling of the TDMA mechanism. Second, based on time synchronization, a “slot–LoRa module address” mapping table is constructed to set the communication time points between the gateway and each node, so that the gateway can wake the target nodes at specific, precise communication time points. Experimental results show that the proposed method keeps the timing error within ±1 ms. The significant reduction in unnecessary node wake-ups decreases data collisions and energy waste in the same-channel environment, and energy savings scale with network size, thereby significantly extending the network life cycle.
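The bidirectional timestamp correction resembles classic two-way time transfer; a minimal sketch under that assumption (the paper's exact formulas may differ) is shown below:

```python
# Two-way time-transfer estimates, a common basis for the kind of
# bidirectional timestamp correction the abstract describes. This is an
# assumption for illustration, not the paper's exact algorithm.
def clock_offset(t1, t2, t3, t4):
    """t1: node send, t2: gateway receive, t3: gateway send, t4: node receive.
    Returns the estimated gateway-minus-node clock offset."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

def propagation_delay(t1, t2, t3, t4):
    """One-way propagation delay, assuming symmetric links."""
    return ((t4 - t1) - (t3 - t2)) / 2.0

# Example: node clock running 5 ms ahead of the gateway, 2 ms one-way delay.
print(clock_offset(100.0, 97.0, 98.0, 105.0))       # -> -5.0 (node is ahead)
print(propagation_delay(100.0, 97.0, 98.0, 105.0))  # -> 2.0
```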
Full article
(This article belongs to the Topic Advanced Propagation Channel Estimation Techniques for Sixth-Generation (6G) Wireless Communications)
Open Access Review
Internet of Things Driven Digital Twin for Intelligent Manufacturing in Shipbuilding Workshops
by Caiping Liang, Xiang Li, Wenxu Niu and Yansong Zhang
Future Internet 2025, 17(8), 368; https://doi.org/10.3390/fi17080368 - 14 Aug 2025
Abstract
Intelligent manufacturing research has focused on digital twins (DTs) due to the growing integration of physical and cyber systems. This study thoroughly explores the Internet of Things (IoT) as a cornerstone of DTs, showing its promise and limitations for the digital transformation of intelligent shipbuilding workshops. We analyze the progress of IoT protocols, digital twin frameworks, and intelligent ship manufacturing. A novel bidirectional digital twin system for shipbuilding workshops uses the IoT to communicate data between real and virtual workshops. This research uses a steel-cutting workshop to demonstrate the digital transformation of the production line, including data collection, transmission, storage, and simulation analysis. Critical barriers to DT deployment in shipbuilding environments are then systematically analyzed, including technical standard unification, communication security, real-time performance guarantees, cross-workshop collaboration mechanisms, and the deep integration of artificial intelligence. Adaptive solutions include hybrid edge-cloud computing architectures for latency-sensitive tasks and reinforcement learning-based smart scheduling algorithms. The findings suggest that IoT-driven digital transformation may modernize shipbuilding workshops in new ways.
Full article
Open Access Article
DDoS Defense Strategy Based on Blockchain and Unsupervised Learning Techniques in SDN
by Shengmin Peng, Jialin Tian, Xiangyu Zheng, Shuwu Chen and Zhaogang Shu
Future Internet 2025, 17(8), 367; https://doi.org/10.3390/fi17080367 - 13 Aug 2025
Abstract
With the rapid development of technologies such as cloud computing, big data, and the Internet of Things (IoT), Software-Defined Networking (SDN) is emerging as a new network architecture for the modern Internet. SDN separates the control plane from the data plane, allowing a central controller, the SDN controller, to quickly direct the routing devices within the topology to forward data packets, thus providing flexible traffic management for communication between information sources. However, traditional Distributed Denial of Service (DDoS) attacks still significantly impact SDN systems. This paper proposes a novel dual-layer strategy capable of detecting and mitigating DDoS attacks in an SDN network environment. The first layer of the strategy enhances security by using blockchain technology to replace the SDN flow table storage container in the northbound interface of the SDN controller, with smart contracts used to process the stored flow table information. In this first layer, we employ the time window algorithm and the token bucket algorithm to defend against obvious DDoS attacks. To detect and mitigate less obvious DDoS attacks, we design a second-layer strategy that uses a composite data-feature correlation coefficient calculation method and the Isolation Forest algorithm from unsupervised learning to perform binary classification, thereby identifying abnormal traffic. We conduct experimental validation using the publicly available DDoS dataset CIC-DDoS2019. The results show that using this strategy in the SDN network reduces the average deviation of round-trip time (RTT) by approximately 38.86% compared with the original SDN network without it. Furthermore, DDoS attack detection reaches an accuracy of 97.66% and an F1 score of 92.2%. Compared with other similar methods, under comparable detection accuracy, deploying our strategy in small-scale SDN network topologies provides faster detection of DDoS attacks and exhibits less fluctuation in detection time. This indicates that the strategy can effectively identify DDoS attacks without affecting the stability of data transmission in the SDN network environment.
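As a rough illustration of the second-layer detector, the sketch below fits scikit-learn's Isolation Forest to two synthetic per-flow features; the feature choice, contamination rate, and data are assumptions, not the paper's configuration (CIC-DDoS2019 provides far richer features):

```python
# Isolation Forest anomaly detection sketch over synthetic flow features
# (packets/s and a composite correlation score); all values are invented
# for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal([50, 0.2], [10, 0.05], size=(500, 2))   # benign flows
attack = rng.normal([900, 0.9], [50, 0.05], size=(25, 2))   # flood-like flows

clf = IsolationForest(contamination=0.05, random_state=0).fit(normal)
pred = clf.predict(np.vstack([normal[:5], attack[:5]]))  # +1 normal, -1 anomaly
print(pred)  # attack rows are flagged as -1
```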
Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
Open Access Systematic Review
The Future of Education: A Systematic Literature Review of Self-Directed Learning with AI
by Carmen del Rosario Navas Bonilla, Luis Miguel Viñan Carrasco, Jhoanna Carolina Gaibor Pupiales and Daniel Eduardo Murillo Noriega
Future Internet 2025, 17(8), 366; https://doi.org/10.3390/fi17080366 - 13 Aug 2025
Abstract
As digital transformation continues to redefine education, understanding how emerging technologies can enhance self-directed learning (SDL) becomes essential for learners, educators, instructional designers, and policymakers, as this approach supports personalized learning, strengthens student autonomy, and responds to the demands of more flexible and dynamic educational environments. This systematic review examines how artificial intelligence (AI) tools enhance SDL by offering personalized, adaptive, and real-time support for learners in online environments. Following the PRISMA 2020 methodology, a literature search was conducted to identify relevant studies published between 2020 and 2025. After applying inclusion, exclusion, and quality criteria, 77 studies were selected for in-depth analysis. The findings indicate that AI-powered tools such as intelligent tutoring systems, chatbots, conversational agents, and natural language processing applications promote learner autonomy, enable self-regulation, provide real-time feedback, and support individualized learning paths. However, several challenges persist, including overreliance on technology, cognitive overload, and diminished human interaction. These insights suggest that, while AI plays a transformative role in the evolution of education, its integration must be guided by thoughtful pedagogical design, ethical considerations, and a learner-centered approach to fully support the future of education through the internet.
Full article
Open Access Review
The Adaptive Ecosystem of MaaS-Driven Cookie Theft: Dynamics, Anticipatory Analysis Concepts, and Proactive Defenses
by Leandro Antonio Pazmiño Ortiz, Ivonne Fernanda Maldonado Soliz and Vanessa Katherine Guevara Balarezo
Future Internet 2025, 17(8), 365; https://doi.org/10.3390/fi17080365 - 11 Aug 2025
Abstract
The industrialization of cybercrime, principally through Malware-as-a-Service (MaaS), has elevated HTTP cookie theft to a critical cybersecurity challenge, enabling attackers to bypass multi-factor authentication and perpetrate large-scale account takeovers. Employing a Holistic and Integrative Review methodology, this paper dissects the intricate, adaptive ecosystem of MaaS-driven cookie theft. We systematically characterize the co-evolving arms race between offensive and defensive strategies (2020–2025), revealing a critical strategic asymmetry where attackers optimize for speed and low cost, while effective defenses demand significant resources. To shift security from a reactive to an anticipatory posture, a multi-dimensional predictive framework is not only proposed but is also detailed as a formalized, testable algorithm, integrating technical, economic, and behavioral indicators to forecast emerging threat trajectories. Our findings conclude that long-term security hinges on disrupting the underlying cybercriminal economic model; we therefore reframe proactive countermeasures like Zero-Trust principles and ephemeral tokens as economic weapons designed to devalue the stolen asset. Finally, the paper provides a prioritized, multi-year research roadmap and a practical decision-tree framework to guide the implementation of these advanced, collaborative cybersecurity strategies to counter this pervasive and evolving threat.
Full article
Open Access Article
Enabling Horizontal Collaboration in Logistics Through Secure Multi-Party Computation
by Gabriele Spini, Stephan Krenn, Erich Teppan, Christina Petschnigg and Elena Wiegelmann
Future Internet 2025, 17(8), 364; https://doi.org/10.3390/fi17080364 - 8 Aug 2025
Abstract
The road transport sector is currently facing significant challenges, due in part to CO2 emissions, high fuel prices, and a shortage of staff. These issues are partially caused by more than 40% of truck journeys being “empty runs” in some member states of the European Union and by heavy under-utilization of deck space on non-empty runs. To overcome this inefficiency, this paper proposes a decentralized platform to facilitate collaborative transport networks (CTNs), i.e., to enable horizontal collaboration that increases load factors and reduces costs and CO2 emissions. Our solution leverages secure multi-party computation (MPC) to guarantee that no sensitive business information is leaked to competing hauliers. The system optimizes truck assignments by modeling logistics as a weighted graph that considers orders and truck capacities while maintaining strict confidentiality. Our approach addresses key barriers to CTN adoption, such as lack of trust and data privacy. Implemented using MPyC without extensive optimizations, our solution demonstrably increases the average load factor while achieving acceptable running times (on the order of hours) for arguably meaningful instance sizes (up to 1000 orders). Having used a rather simplistic model inspired by previous work, we conclude with an outlook on possible extensions toward more realistic models and estimate their impact on efficiency.
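Since the implementation uses MPyC, a minimal sketch of the underlying building block, privacy-preserving aggregation across parties, may help; it illustrates basic MPyC usage only, not the paper's full graph-based assignment protocol, and the input value is a placeholder:

```python
# Minimal MPyC sketch: hauliers jointly compute an aggregate without
# revealing their individual inputs. Run with, e.g.:
#   python demo.py -M 3 -I 0   (and -I 1, -I 2 in other terminals)
from mpyc.runtime import mpc

secint = mpc.SecInt(32)  # 32-bit secure integers

async def main():
    await mpc.start()
    my_free_capacity = 7  # each haulier's private input (placeholder value)
    caps = mpc.input(secint(my_free_capacity))  # secret-shared by all parties
    total = mpc.sum(caps)  # computed on shares; no single input is revealed
    print("combined free deck capacity:", await mpc.output(total))
    await mpc.shutdown()

mpc.run(main())
```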
Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
Open Access Article
Beyond Prompt Chaining: The TB-CSPN Architecture for Agentic AI
by Uwe M. Borghoff, Paolo Bottoni and Remo Pareschi
Future Internet 2025, 17(8), 363; https://doi.org/10.3390/fi17080363 - 8 Aug 2025
Abstract
Current agentic AI frameworks such as LangGraph and AutoGen simulate autonomy via sequential prompt chaining but lack true multi-agent coordination architectures. These systems conflate semantic reasoning with orchestration, requiring LLMs at every coordination step and limiting scalability. By contrast, TB-CSPN (Topic-Based Communication Space Petri Net) is a hybrid formal architecture that fundamentally separates semantic processing from coordination logic. Unlike traditional Petri net applications, where the entire system state is encoded within the network structure, TB-CSPN uses Petri nets exclusively for coordination workflow modeling, letting communication and interaction between agents be driven by semantically rich, topic-based representations. At the same time, unlike first-generation agentic frameworks, LLMs are here confined to topic extraction, with business-logic coordination implemented through structured token communication. This hybrid architectural separation preserves human strategic oversight (as supervisors) while delegating consultant and worker roles to LLMs and specialized AI agents, avoiding the state-space explosion typical of monolithic formal systems. Our empirical evaluation shows that TB-CSPN achieves 62.5% faster processing, 66.7% fewer LLM API calls, and 167% higher throughput compared to LangGraph-style orchestration, without sacrificing reliability. Scaling experiments with 10–100 agents reveal sub-linear memory growth (a 10× efficiency improvement), directly contradicting traditional Petri net scalability concerns through our architectural separation of semantics and coordination. These performance gains arise from the hybrid design, where coordination patterns remain constant while semantic spaces scale independently. TB-CSPN demonstrates that efficient agentic AI emerges not from over-relying on modern AI components but from embedding them strategically within a hybrid architecture that combines formal coordination guarantees with semantic flexibility. Our implementation and evaluation methodology are openly available, inviting community validation and extension of these principles.
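To make the separation of concerns concrete, here is a toy sketch of Petri-net-style token routing in which the coordination step needs no LLM call; all names and thresholds are illustrative assumptions, not TB-CSPN's actual implementation:

```python
# Toy sketch in the spirit of TB-CSPN's separation of concerns: the net only
# inspects and routes tokens; topic extraction (the LLM "consultant" role)
# happens elsewhere and is assumed to have annotated the token already.
from dataclasses import dataclass

@dataclass
class Token:
    topics: dict  # e.g. {"supply_risk": 0.85}, produced upstream by an LLM

@dataclass
class Transition:
    name: str
    guard_topic: str
    threshold: float
    def enabled(self, tok: Token) -> bool:
        # Guard: fire only if the token carries the topic strongly enough.
        return tok.topics.get(self.guard_topic, 0.0) >= self.threshold

def route(tok: Token, transitions):
    # Pure coordination logic: no LLM call, just structured token inspection.
    return [t.name for t in transitions if t.enabled(tok)]

tok = Token(topics={"supply_risk": 0.85})
print(route(tok, [Transition("alert_supervisor", "supply_risk", 0.8),
                  Transition("archive", "routine", 0.5)]))
# -> ['alert_supervisor']
```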
Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
Open Access Article
Parallel Algorithm for NP-Hard Problem of Channel Resource Allocation Optimization in Ad Hoc and Sensor Networks
by Valeriy Ivanov and Maxim Tereshonok
Future Internet 2025, 17(8), 362; https://doi.org/10.3390/fi17080362 - 8 Aug 2025
Abstract
This paper proposes a technique to estimate the minimal quantity of orthogonal channel resources required for ad hoc and sensor network connectivity. Simultaneously, resources are allocated to each specific line by grouping lines into concurrent transmission sets. Our proposed technique uses the physical interference model assumption, where each node interferes with every other node, turning ad hoc and sensor network performance optimization problems into NP-hard ones. In contrast to most other works with the physical interference model assumption, we mitigate the combinatorial explosion of concurrently transmitting line sets by considering the global interference instead of localizing the interference with line or space partitioning. We perform the mitigation, first, using pairwise mutually acceptable line sets for each line. Then, based on the limitations of the pairwise sets, we construct the tree of mutually acceptable interfering line sets. Finally, from the constructed tree, we find the minimal set cover of concurrently transmitting line sets. The tree construction has exponential worst-case time and space complexity if all lines in the network can transmit together. By randomly pruning the tree and using a genetic algorithm to find a pruned tree that gives the same minimal set cover as the full tree, we have reduced the worst-case space and time complexities to polynomial ones. We have devised our technique with parallelism in mind to speed up the resource allocation optimization even further. Based on an extensive simulation study with random network topologies of sizes up to 250 nodes and an average number of lines up to 2000, we estimated the time and space complexity of different tree-pruning and optimization techniques and identified the effective ones.
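The final stage, finding a minimal set cover of concurrently transmitting line sets, can be approximated greedily; the sketch below shows only that standard step, not the paper's tree construction or genetic pruning:

```python
# Greedy set-cover approximation over candidate concurrent-transmission
# line sets; a standard textbook step, shown for illustration only.
def greedy_set_cover(lines, candidate_sets):
    uncovered, cover = set(lines), []
    while uncovered:
        # Pick the candidate set covering the most still-uncovered lines.
        best = max(candidate_sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("remaining lines cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover  # each chosen set gets one orthogonal channel resource

sets = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5, 6})]
print(greedy_set_cover(range(1, 7), sets))
# -> two concurrent-transmission sets, i.e., two channel resources suffice
```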
Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
Open Access Article
A Multi-Layer Secure Sharing Framework for Aviation Big Data Based on Blockchain
by Qing Wang, Zhijun Wu and Yanrong Lu
Future Internet 2025, 17(8), 361; https://doi.org/10.3390/fi17080361 - 8 Aug 2025
Abstract
As a new type of production factor, data possesses multidimensional application value, and its pivotal role is becoming increasingly prominent in the aviation sector. Data sharing can significantly enhance the utilization efficiency of data resources and serves as one of the key tasks in building smart civil aviation. However, currently, data silos are pervasive, with vast amounts of data only being utilized and analyzed within limited scopes, leaving their full potential untapped. The challenges in data sharing primarily stem from three aspects: (1) Data owners harbor concerns regarding data security and privacy. (2) The highly dynamic and real-time nature of aviation operations imposes stringent requirements on the timeliness, stability, and reliability of data sharing, thereby constraining its scope and extent. (3) The lack of reasonable incentive mechanisms results in insufficient motivation for data owners to share. Consequently, addressing the issue of aviation big data sharing holds significant importance. Since the release of the Bitcoin whitepaper in 2008, blockchain technology has achieved continuous breakthroughs in the fields of data security and collaborative computing. Its unique characteristics—decentralization, tamper-proofing, traceability, and scalability—lay the foundation for its integration with aviation. Blockchain can deeply integrate with air traffic management (ATM) operations, effectively resolving trust, efficiency, and collaboration challenges in distributed scenarios for ATM data. To address the heterogeneous data usage requirements of different ATM stakeholders, this paper constructs a blockchain-based multi-level data security sharing architecture, enabling fine-grained management and secure collaboration. Furthermore, to meet the stringent timeliness demands of aviation operations and the storage pressure posed by massive data, this paper optimizes blockchain storage deployment and consensus mechanisms, thereby enhancing system scalability and processing efficiency. Additionally, a dual-mode data-sharing solution combining raw data sharing and model sharing is proposed, offering a novel approach to aviation big data sharing. Security and formal analyses demonstrate that the proposed solution is both secure and effective.
Full article
Open Access Article
A Novel Framework Leveraging Large Language Models to Enhance Cold-Start Advertising Systems
by Albin Uruqi, Iosif Viktoratos and Athanasios Tsadiras
Future Internet 2025, 17(8), 360; https://doi.org/10.3390/fi17080360 - 8 Aug 2025
Abstract
The cold-start problem remains a critical challenge in personalized advertising, where users with limited or no interaction history often receive suboptimal recommendations. This study introduces a novel, three-stage framework that systematically integrates transformer architectures and large language models (LLMs) to improve recommendation accuracy, transparency, and user experience throughout the entire advertising pipeline. The proposed approach begins with transformer-enhanced feature extraction, leveraging self-attention and learned positional encodings to capture deep semantic relationships among users, ads, and context. It then employs an ensemble integration strategy combining enhanced state-of-the-art models with optimized aggregation for robust prediction. Finally, an LLM-driven enhancement module performs semantic reranking, personalized message refinement, and natural language explanation generation, while also addressing cold-start scenarios through pre-trained knowledge. The LLM component further supports diversification, fairness-aware ranking, and sentiment sensitivity to ensure more relevant, diverse, and ethically grounded recommendations. Extensive experiments on the DigiX and Avazu datasets demonstrate notable gains in click-through rate (CTR) prediction, while an in-depth evaluation with real users shows improvements in perceived ad relevance, message quality, transparency, and trust. This work advances the state of the art by combining CTR models with interpretability and contextual reasoning. The strengths of the proposed method, such as its innovative integration of components, empirical validation, multifaceted LLM application, and ethical alignment, highlight its potential as a robust, future-ready solution for personalized advertising.
Full article
(This article belongs to the Special Issue Information Networks with Human-Centric LLMs)
Open Access Article
Creating Automated Microsoft Bicep Application Infrastructure from GitHub in the Azure Cloud
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2025, 17(8), 359; https://doi.org/10.3390/fi17080359 - 7 Aug 2025
Abstract
Infrastructure as code (IaC) is essential for modern cloud development, enabling teams to define, deploy, and manage infrastructure in a consistent and repeatable manner. As organizations migrate to Azure, selecting the right approach is crucial for managing complexity, minimizing errors, and supporting DevOps practices. This paper examines the use of Azure Bicep with GitHub Actions to automate infrastructure deployment for an application in the Azure cloud. It explains how Bicep improves readability, modularity, and integration compared to traditional ARM templates and other automation tools. The solution utilizes a modular Bicep design to deploy resources, including virtual networks, managed identities, container apps, databases, and AI services, with environment-specific parameters for development, QA, and production. GitHub Actions workflows automate the building, deployment, and tearing down of infrastructure, ensuring consistent deployments across environments. Security considerations include managed identities, private networking, and secret management in continuous integration (CI) and continuous delivery (CD) pipelines. This paper provides a detailed architectural overview, workflow analysis, and implementation guidance to help teams adopt a robust, automated approach to Azure infrastructure deployment. By leveraging automation tooling and modern DevOps practices, organizations can streamline delivery and maintain secure, maintainable cloud environments.
Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities, 2nd Edition)
Open Access Review
Enhancing Industrial Processes Through Augmented Reality: A Scoping Review
by Alba Miranda, Aracely M. Vallejo, Paulina Ayala, Marcelo V. Garcia and Jose E. Naranjo
Future Internet 2025, 17(8), 358; https://doi.org/10.3390/fi17080358 - 7 Aug 2025
Abstract
Augmented reality (AR) in industry improves training and technical assistance by overlaying digital information on real environments, facilitating the visualisation and understanding of complex processes. It also enables more effective remote collaboration, optimising problem solving and decision making in real time. This paper presents a scoping review, following PRISMA guidelines, on the optimisation of industrial processes through the application of AR. The objectives of this study included characterising successful implementations of AR in various industrial processes, comparing different hardware, graphics engines, and associated costs, and determining the percentage of optimisation achieved through AR. The databases included were Scopus, SpringerLink, IEEE Xplore, and MDPI. Eligibility criteria were defined as English-language articles published between 2019 and 2024 that provide significant contributions to AR applications in engineering. The Cochrane method was used to assess bias. The rigorous selection process resulted in the inclusion of 38 articles. Key findings indicate that AR reduces errors and execution times, improves efficiency and productivity, and optimises training and maintenance processes, leading to cost savings and quality improvement. Unity 3D is the most widely used graphics engine for AR applications. The main applications of AR are in maintenance, assembly, training, and inspection, with maintenance being the most researched area. Challenges include the learning curve, high initial costs, and hardware limitations.
Full article
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
Open Access Review
Interest Flooding Attacks in Named Data Networking and Mitigations: Recent Advances and Challenges
by Simeon Ogunbunmi, Yu Chen, Qi Zhao, Deeraj Nagothu, Sixiao Wei, Genshe Chen and Erik Blasch
Future Internet 2025, 17(8), 357; https://doi.org/10.3390/fi17080357 - 6 Aug 2025
Abstract
Named Data Networking (NDN) is a promising Information-Centric Networking architecture that addresses limitations of traditional host-centric Internet protocols by emphasizing content names rather than host addresses for communication. While NDN offers advantages in content distribution, mobility support, and built-in security features, its stateful forwarding plane introduces significant vulnerabilities, particularly Interest Flooding Attacks (IFAs). These attacks exploit the Pending Interest Table (PIT) by injecting malicious Interest packets for non-existent or unsatisfiable content, leading to resource exhaustion and denial of service against legitimate users. This survey examines research advances in IFA detection and mitigation from 2013 to 2024, analyzing seven relevant published detection and mitigation strategies to provide current insights into this evolving security challenge. We establish a taxonomy of attack variants, including Fake Interest, Unsatisfiable Interest, Interest Loop, and Collusive models, while examining their operational characteristics and network performance impacts. Our analysis categorizes defense mechanisms into five primary approaches: rate-limiting strategies, PIT management techniques, machine learning and artificial intelligence methods, reputation-based systems, and blockchain-enabled solutions. These approaches are evaluated for their effectiveness, computational requirements, and deployment feasibility. The survey extends to domain-specific implementations in resource-constrained environments, examining adaptations for Internet of Things deployments, wireless sensor networks, and high-mobility vehicular scenarios. Five critical research directions are proposed: adaptive defense mechanisms against sophisticated attackers, privacy-preserving detection techniques, real-time optimization for edge computing environments, standardized evaluation frameworks, and hybrid approaches combining multiple mitigation strategies.
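As a concrete illustration of the rate-limiting family of defenses, here is a minimal per-prefix token-bucket sketch applied before an Interest is allowed to occupy a PIT entry; the parameters are assumptions, not values from any surveyed paper:

```python
# Per-prefix token-bucket rate limiter at the PIT ingress; a minimal sketch
# of the rate-limiting defense family, with illustrative parameters.
import time

class PrefixLimiter:
    def __init__(self, rate=100.0, burst=50.0):
        self.rate, self.burst = rate, burst  # tokens per second, bucket depth
        self.buckets = {}  # prefix -> (tokens, last_refill_time)

    def admit(self, prefix: str) -> bool:
        tokens, last = self.buckets.get(prefix, (self.burst, time.monotonic()))
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)  # refill
        if tokens < 1.0:
            self.buckets[prefix] = (tokens, now)
            return False  # drop Interest before it consumes a PIT entry
        self.buckets[prefix] = (tokens - 1.0, now)
        return True

limiter = PrefixLimiter(rate=10, burst=5)
print([limiter.admit("/video/stream") for _ in range(8)])
# -> roughly five admissions, then drops until the bucket refills
```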
Full article
(This article belongs to the Special Issue Intrusion Detection and Resiliency in Cyber-Physical Systems and Networks)
Open Access Article
Rethinking Modbus-UDP for Real-Time IIoT Systems
by Ivan Cibrario Bertolotti
Future Internet 2025, 17(8), 356; https://doi.org/10.3390/fi17080356 - 5 Aug 2025
Abstract
The original Modbus specification for RS-485 and RS-232 buses supported broadcast transmission. As the protocol evolved into Modbus-TCP to use the TCP transport, this useful feature was lost, likely due to the point-to-point nature of TCP connections. Later proposals did not restore the broadcast transmission capability, although they used UDP as the transport and UDP, by itself, would have supported it. Moreover, they did not address the inherent lack of reliable delivery in UDP, leaving datagram loss detection and recovery to the application layer. This paper describes a novel redesign of Modbus-UDP that addresses these shortcomings. It achieves a mean round-trip time of only 38% of that of Modbus-TCP and seamlessly supports a previously published protocol based on Modbus broadcast. In addition, the built-in retransmission of Modbus-UDP reacts more efficiently than the equivalent Modbus-TCP mechanism, exhibiting 50% of its round-trip standard deviation when subject to a 1% two-way IP datagram loss probability. Combined with the lower overhead of UDP versus TCP, this makes the redesigned Modbus-UDP protocol better suited for a variety of Industrial Internet of Things systems with limited computing and communication resources.
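To illustrate what application-level reliability over UDP involves, here is a minimal client-side sketch using standard MBAP framing with timeout-based retransmission; the timeout and retry values are assumptions, and the code is not the paper's implementation:

```python
# Minimal Modbus-over-UDP client sketch with retransmission; MBAP framing
# follows the standard layout, while timeouts/retries are assumed values.
import socket, struct, time

def modbus_udp_request(host, port, unit, pdu, retries=3, timeout=0.2):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    tid = 1  # transaction id lets us match a late reply to its request
    # MBAP header: transaction id, protocol id (0), length, unit id.
    header = struct.pack(">HHHB", tid, 0, len(pdu) + 1, unit)
    for _ in range(retries):
        start = time.monotonic()
        sock.sendto(header + pdu, (host, port))
        try:
            data, _ = sock.recvfrom(260)
            rtt = time.monotonic() - start
            if struct.unpack(">H", data[:2])[0] == tid:  # our transaction?
                return data, rtt
        except socket.timeout:
            continue  # datagram lost in either direction: retransmit
    raise TimeoutError("no reply after %d attempts" % retries)

# Example: read 2 holding registers from address 0 (function 0x03).
# reply, rtt = modbus_udp_request("192.0.2.10", 502, 1,
#                                 struct.pack(">BHH", 3, 0, 2))
```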
Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies—2nd Edition)
Open Access Article
TSA-GRU: A Novel Hybrid Deep Learning Module for Learner Behavior Analytics in MOOCs
by Soundes Oumaima Boufaida, Abdelmadjid Benmachiche, Makhlouf Derdour, Majda Maatallah, Moustafa Sadek Kahil and Mohamed Chahine Ghanem
Future Internet 2025, 17(8), 355; https://doi.org/10.3390/fi17080355 - 5 Aug 2025
Abstract
E-learning is an emerging dominant phenomenon in education, making the development of robust models that can accurately represent the dynamic behavior of learners in MOOCs even more critical. In this article, we propose the Temporal Sparse Attention-Gated Recurrent Unit (TSA-GRU), a novel deep learning framework that combines TSA with a sequential encoder based on the GRU. This hybrid model effectively reconstructs student response times and learning trajectories with high fidelity by leveraging the temporal embeddings of instructional and feedback activities. By dynamically filtering noise from student interactions, TSA-GRU generates context-aware representations that seamlessly integrate both short-term fluctuations and long-term learning patterns. Empirical evaluation on the 2009–2010 ASSISTments dataset demonstrates that TSA-GRU achieved a test accuracy of 95.60% and a test loss of 0.0209, outperforming the Modular Sparse Attention-Gated Recurrent Unit (MSA-GRU), Bayesian Knowledge Tracing (BKT), Performance Factors Analysis (PFA), and TSA in the same experimental design. TSA-GRU converged in five training epochs; thus, while TSA-GRU demonstrates strong predictive performance for knowledge tracing tasks, these findings are specific to this dataset and should not be regarded as conclusive for all data. Further statistical validation through five-fold cross-validation, confidence intervals, and paired t-tests has confirmed the robustness, consistency, and statistically significant superiority of TSA-GRU over the baseline model MSA-GRU. TSA-GRU’s scalability and capacity to incorporate a temporal dimension of knowledge make it well positioned to analyze complex learner behaviors and plan interventions for adaptive learning in computerized learning systems.
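For context, a minimal GRU-based knowledge-tracing baseline is sketched below; TSA-GRU's temporal sparse attention stage is deliberately omitted, and all shapes and hyperparameters are assumptions rather than the paper's settings:

```python
# Minimal GRU knowledge-tracing baseline (DKT-style input encoding); shown
# only to ground the GRU component, not TSA-GRU itself.
import tensorflow as tf

SEQ_LEN, N_SKILLS = 50, 110  # ASSISTments-style skill ids (assumed sizes)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    # Each step encodes (skill id, correctness) as one integer in [0, 2N).
    tf.keras.layers.Embedding(input_dim=2 * N_SKILLS, output_dim=64),
    tf.keras.layers.GRU(128, return_sequences=True),
    # Per-step probability of answering each skill correctly next.
    tf.keras.layers.Dense(N_SKILLS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```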
Full article
Open Access Editorial
Editorial for the Special Issue “Virtual Reality and Metaverse: Impact on the Digital Transformation of Society—2nd Edition”
by Diego Vergara
Future Internet 2025, 17(8), 354; https://doi.org/10.3390/fi17080354 - 4 Aug 2025
Abstract
Virtual reality (VR) is transforming society by enabling the development of diverse applications across a wide range of fields [...]
Full article
(This article belongs to the Special Issue Virtual Reality and Metaverse: Impact on the Digital Transformation of Society II)
Open Access Review
Transforming Data Annotation with AI Agents: A Review of Architectures, Reasoning, Applications, and Impact
by Md Monjurul Karim, Sangeen Khan, Dong Hoang Van, Xinyue Liu, Chunhui Wang and Qiang Qu
Future Internet 2025, 17(8), 353; https://doi.org/10.3390/fi17080353 - 2 Aug 2025
Abstract
Data annotation serves as a critical foundation for artificial intelligence (AI) and machine learning (ML). Recently, AI agents powered by large language models (LLMs) have emerged as effective solutions to longstanding challenges in data annotation, such as scalability, consistency, cost, and limitations in domain expertise. These agents facilitate intelligent automation and adaptive decision-making, thereby enhancing the efficiency and reliability of annotation workflows across various fields. Despite the growing interest in this area, a systematic understanding of the role and capabilities of AI agents in annotation is still underexplored. This paper seeks to fill that gap by providing a comprehensive review of how LLM-driven agents support advanced reasoning strategies, adaptive learning, and collaborative annotation efforts. We analyze agent architectures, integration patterns within workflows, and evaluation methods, along with real-world applications in sectors such as healthcare, finance, technology, and media. Furthermore, we evaluate current tools and platforms that support agent-based annotation, addressing key challenges such as quality assurance, bias mitigation, transparency, and scalability. Lastly, we outline future research directions, highlighting the importance of federated learning, cross-modal reasoning, and responsible system design to advance the development of next-generation annotation ecosystems.
Full article
(This article belongs to the Special Issue 2024 and 2025 Feature Papers from Future Internet’s Editorial Board Members)
Open Access Article
Confidential Smart Contracts and Blockchain to Implement a Watermarking Protocol
by Franco Frattolillo
Future Internet 2025, 17(8), 352; https://doi.org/10.3390/fi17080352 - 1 Aug 2025
Abstract
Watermarking protocols represent a possible solution to the problem of digital copyright protection for content distributed on the Internet. Their implementation, however, remains a complex problem due to the difficulties researchers encounter in proposing solutions that are secure, easy to use, and, at the same time, free of “trusted third parties” (TTPs). In this regard, implementations based on blockchain and smart contracts are among the most advanced and promising, even if they are affected by problems regarding the performance and privacy of the information exchanged and processed by smart contracts and managed by blockchains. This paper presents a watermarking protocol implemented with smart contracts and blockchain. The protocol uses a “layer-2” blockchain execution model and performs its computation in “trusted execution environments” (TEEs). Its implementation can therefore guarantee efficient and confidential execution without compromising ease of use or resorting to TTPs. The protocol and its implementation can thus be considered a valid answer to the “trilemma” that afflicts the use of blockchains, managing to guarantee decentralization, security, and scalability.
Full article
(This article belongs to the Special Issue Blockchain and Web3: Applications, Challenges and Future Trends—2nd Edition)

Topics
Topic in
Algorithms, Future Internet, Information, Mathematics, Symmetry
Research on Data Mining of Electronic Health Records Using Deep Learning Methods
Topic Editors: Dawei Yang, Yu Zhu, Hongyi Xin. Deadline: 31 August 2025
Topic in
Algorithms, Applied Sciences, Future Internet, Information, Mathematics
Soft Computing and Machine Learning
Topic Editors: Rui Araújo, António Pedro Aguiar, Nuno Lau, Rodrigo Ventura, João Fabro. Deadline: 30 September 2025
Topic in
Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt. Deadline: 31 December 2025
Topic in
Applied Sciences, Electronics, Future Internet, IoT, Technologies, Inventions, Sensors, Vehicles
Next-Generation IoT and Smart Systems for Communication and Sensing
Topic Editors: Dinh-Thuan Do, Vitor Fialho, Luis Pires, Francisco Rego, Ricardo Santos, Vasco Velez. Deadline: 31 January 2026

Special Issues
Special Issue in
Future Internet
Intrusion Detection and Resiliency in Cyber-Physical Systems and Networks
Guest Editors: Olusola Tolulope Odeyomi, Temitayo Olowu. Deadline: 20 August 2025
Special Issue in
Future Internet
Artificial Intelligence and Control Systems for Industry 4.0 and 5.0
Guest Editors: Filipe Pereira, Paulo Leitao. Deadline: 20 August 2025
Special Issue in
Future Internet
Intelligent Agents and Their Application
Guest Editors: Stanimir Stoyanov, Asya Stoyanova Doycheva, Veneta Tabakova-Komsalova. Deadline: 31 August 2025
Special Issue in
Future Internet
Moving Towards 6G Wireless Technologies—2nd Edition
Guest Editors: Alessandro Raschellà, Michael Mackay. Deadline: 31 August 2025
Topical Collections
Topical Collection in
Future Internet
Innovative People-Centered Solutions Applied to Industries, Cities and Societies
Collection Editors: Dino Giuli, Filipe Portela
Topical Collection in
Future Internet
Featured Reviews of Future Internet Research
Collection Editor: Dino Giuli
Topical Collection in
Future Internet
Computer Vision, Deep Learning and Machine Learning with Applications
Collection Editors: Remus Brad, Arpad Gellert
Topical Collection in
Future Internet
Machine Learning Approaches for User Identity
Collection Editors: Kaushik Roy, Mustafa Atay, Ajita Rattani