Journal Description
Future Internet
Future Internet is an international, peer-reviewed, open access journal on internet technologies and the information society, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, Inspec, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q1 (Computer Networks and Communications)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17 days after submission; acceptance to publication takes 3.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.6 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Enabling Horizontal Collaboration in Logistics Through Secure Multi-Party Computation
Future Internet 2025, 17(8), 364; https://doi.org/10.3390/fi17080364 - 8 Aug 2025
Abstract
The road transport sector is currently facing significant challenges, due in part to CO2 emissions, high fuel prices, and a shortage of staff. These issues are partially caused by more than 40% of truck journeys being “empty runs” in some member states of the European Union and by heavy under-utilization of deck space on non-empty runs. To overcome these inefficiencies, this paper proposes a decentralized platform to facilitate collaborative transport networks (CTNs), i.e., to enable horizontal collaboration that increases load factors and reduces costs and CO2 emissions. Our solution leverages secure multi-party computation (MPC) to guarantee that no sensitive business information is leaked to competing hauliers. The system optimizes truck assignments by modeling logistics as a weighted graph that considers orders and truck capacities while maintaining strict confidentiality. Our approach addresses key barriers to CTN adoption, such as lack of trust and data privacy concerns. Using an implementation in MPyC without extensive optimizations, we demonstrate the approach's efficiency and effectiveness in increasing the average load factor, while achieving acceptable running times (on the order of hours) for arguably meaningful instance sizes (up to 1,000 orders). Building on a deliberately simple model inspired by previous work, we conclude with an outlook on extensions toward more realistic models and estimate their impact on efficiency.
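The privacy guarantee described in the abstract rests on standard MPC building blocks. As a minimal, purely illustrative sketch (not the paper's MPyC implementation), additive secret sharing lets competing hauliers learn an aggregate, such as total free deck capacity, without any party revealing its individual figure; all names and values below are hypothetical:

```python
import random

P = 2**61 - 1  # prime modulus; all arithmetic is mod P

def share(secret: int, n: int):
    """Split a secret into n additive shares that sum to the secret mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(secrets):
    """Each party shares its secret; each party then sums the shares it holds.

    Only the final total is revealed; no single party's input leaks,
    provided the parties do not collude.
    """
    n = len(secrets)
    all_shares = [share(s, n) for s in secrets]
    # Party i receives the i-th share of every secret and publishes the sum.
    partial = [sum(all_shares[j][i] for j in range(n)) % P for i in range(n)]
    return sum(partial) % P

# Free capacities (in pallets) of three competing hauliers -- hypothetical values.
capacities = [12, 7, 23]
total = secure_sum(capacities)
```

Each `partial[i]` is uniformly random on its own; only their sum carries information, which is what makes the aggregate computable without exposing the inputs.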
Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
Open Access Article
Beyond Prompt Chaining: The TB-CSPN Architecture for Agentic AI
by Uwe M. Borghoff, Paolo Bottoni and Remo Pareschi
Future Internet 2025, 17(8), 363; https://doi.org/10.3390/fi17080363 - 8 Aug 2025
Abstract
Current agentic AI frameworks such as LangGraph and AutoGen simulate autonomy via sequential prompt chaining but lack true multi-agent coordination architectures. These systems conflate semantic reasoning with orchestration, requiring LLMs at every coordination step and limiting scalability. By contrast, TB-CSPN (Topic-Based Communication Space Petri Net) is a hybrid formal architecture that fundamentally separates semantic processing from coordination logic. Unlike traditional Petri net applications, where the entire system state is encoded within the network structure, TB-CSPN uses Petri nets exclusively for coordination workflow modeling, letting communication and interaction between agents drive semantically rich, topic-based representations. At the same time, unlike first-generation agentic frameworks, here LLMs are confined to topic extraction, with business logic coordination implemented by structured token communication. This hybrid architectural separation preserves human strategic oversight (as supervisors) while delegating consultant and worker roles to LLMs and specialized AI agents, avoiding the state-space explosion typical of monolithic formal systems. Our empirical evaluation shows that TB-CSPN achieves 62.5% faster processing, 66.7% fewer LLM API calls, and 167% higher throughput compared to LangGraph-style orchestration, without sacrificing reliability. Scaling experiments with 10–100 agents reveal sub-linear memory growth (10× efficiency improvement), directly contradicting traditional Petri Net scalability concerns through our semantic-coordination-based architectural separation. These performance gains arise from the hybrid design, where coordination patterns remain constant while semantic spaces scale independently. 
TB-CSPN demonstrates that efficient agentic AI emerges not by over-relying on modern AI components but by embedding them strategically within a hybrid architecture that combines formal coordination guarantees with semantic flexibility. Our implementation and evaluation methodology are openly available, inviting community validation and extension of these principles.
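The separation the abstract describes, Petri-net transitions for coordination and token payloads for semantics, can be illustrated with a toy model. This is a hedged sketch of the general idea, not TB-CSPN's actual implementation; the place names, guard, and "risk" topic are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    topics: dict  # e.g. {"risk": 0.8} -- semantic payload, kept out of the net structure

@dataclass
class PetriNet:
    places: dict = field(default_factory=dict)  # place name -> list of tokens

    def fire(self, inputs, outputs, guard=lambda toks: True):
        """Fire a transition: consume one token per input place, test the
        guard on the consumed tokens, and deposit them on the output places."""
        if not all(self.places.get(p) for p in inputs):
            return False  # transition not enabled
        toks = [self.places[p].pop(0) for p in inputs]
        if not guard(toks):
            for p, t in zip(inputs, toks):
                self.places[p].insert(0, t)  # guard failed: roll back
            return False
        for p in outputs:
            self.places.setdefault(p, []).extend(toks)
        return True

net = PetriNet({"inbox": [Token({"risk": 0.8})], "escalated": []})
# Route to a supervisor only when the (LLM-extracted) risk topic is high;
# the coordination step itself needs no LLM call.
fired = net.fire(["inbox"], ["escalated"], guard=lambda toks: toks[0].topics["risk"] > 0.5)
```

The point of the sketch is that the guard reads a topic value already extracted upstream, so coordination stays a cheap structural operation while semantic processing happens elsewhere.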
Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
Open Access Article
Parallel Algorithm for NP-Hard Problem of Channel Resource Allocation Optimization in Ad Hoc and Sensor Networks
by Valeriy Ivanov and Maxim Tereshonok
Future Internet 2025, 17(8), 362; https://doi.org/10.3390/fi17080362 - 8 Aug 2025
Abstract
This paper proposes a technique to estimate the minimal quantity of orthogonal channel resources required for ad hoc and sensor network connectivity. Simultaneously, the resource allocation to each specific line is conducted by grouping lines into concurrent transmission sets. Our proposed technique uses the physical-based interference model assumption, where each node interferes with every other node, which turns ad hoc and sensor network performance optimization problems into NP-hard ones. In contrast to most other works using the physical-based interference model assumption, we mitigate the combinatorial explosion of concurrently transmitting line sets by considering the global interference instead of localizing the interference with line or space partitioning. We perform the mitigation, first, using pairwise mutually acceptable line sets for each line. Then, based on the limitations of the pairwise sets, we construct the tree of mutually acceptable interfering line sets. Finally, from the constructed tree, we find the minimal set cover of concurrently transmitting line sets. The tree construction has exponential worst-case time and space complexity if all lines in the network can transmit together. By randomly pruning the tree and using a genetic algorithm to find a pruned tree that yields the same minimal set cover as the full tree, we reduce the worst-case space and time complexities to polynomial ones. We have devised our technique with parallelism in mind to speed up the resource allocation optimization even further. Based on an extensive simulation study with random network topologies of sizes up to 250 nodes and an average number of lines up to 2000, we estimated the time and space complexity of different tree pruning and optimization techniques and identified the most effective ones.
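The paper's pruned-tree and genetic-algorithm machinery is not reproduced here, but the underlying objective, covering all lines with as few concurrent-transmission sets (i.e., channel resources) as possible, can be illustrated with a simple greedy set-cover baseline over hypothetical line sets:

```python
def greedy_channel_cover(lines, concurrent_sets):
    """Greedy set cover: at each step, pick the concurrent-transmission set
    that covers the most still-unscheduled lines; each pick costs one
    orthogonal channel resource."""
    uncovered = set(lines)
    schedule = []
    while uncovered:
        best = max(concurrent_sets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("some lines appear in no concurrent set")
        schedule.append(best)
        uncovered -= best
    return schedule

# Hypothetical 5-line network: each set lists lines that can transmit
# together without exceeding the global interference budget.
lines = {1, 2, 3, 4, 5}
concurrent_sets = [{1, 3}, {2, 4}, {5}, {1, 4}, {2, 3, 5}]
channels = greedy_channel_cover(lines, concurrent_sets)
```

Greedy gives a logarithmic approximation to the NP-hard minimum cover; the paper's contribution lies in how the candidate concurrent sets are generated and pruned, which this baseline takes as given.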
Full article
(This article belongs to the Special Issue Wireless Sensor Networks and Internet of Things)
Open Access Article
A Multi-Layer Secure Sharing Framework for Aviation Big Data Based on Blockchain
by Qing Wang, Zhijun Wu and Yanrong Lu
Future Internet 2025, 17(8), 361; https://doi.org/10.3390/fi17080361 - 8 Aug 2025
Abstract
As a new type of production factor, data possesses multidimensional application value, and its pivotal role is becoming increasingly prominent in the aviation sector. Data sharing can significantly enhance the utilization efficiency of data resources and serves as one of the key tasks in building smart civil aviation. However, currently, data silos are pervasive, with vast amounts of data only being utilized and analyzed within limited scopes, leaving their full potential untapped. The challenges in data sharing primarily stem from three aspects: (1) Data owners harbor concerns regarding data security and privacy. (2) The highly dynamic and real-time nature of aviation operations imposes stringent requirements on the timeliness, stability, and reliability of data sharing, thereby constraining its scope and extent. (3) The lack of reasonable incentive mechanisms results in insufficient motivation for data owners to share. Consequently, addressing the issue of aviation big data sharing holds significant importance. Since the release of the Bitcoin whitepaper in 2008, blockchain technology has achieved continuous breakthroughs in the fields of data security and collaborative computing. Its unique characteristics—decentralization, tamper-proofing, traceability, and scalability—lay the foundation for its integration with aviation. Blockchain can deeply integrate with air traffic management (ATM) operations, effectively resolving trust, efficiency, and collaboration challenges in distributed scenarios for ATM data. To address the heterogeneous data usage requirements of different ATM stakeholders, this paper constructs a blockchain-based multi-level data security sharing architecture, enabling fine-grained management and secure collaboration. 
Furthermore, to meet the stringent timeliness demands of aviation operations and the storage pressure posed by massive data, this paper optimizes blockchain storage deployment and consensus mechanisms, thereby enhancing system scalability and processing efficiency. Additionally, a dual-mode data-sharing solution combining raw data sharing and model sharing is proposed, offering a novel approach to aviation big data sharing. Security and formal analyses demonstrate that the proposed solution is both secure and effective.
Full article

Open Access Article
A Novel Framework Leveraging Large Language Models to Enhance Cold-Start Advertising Systems
by Albin Uruqi, Iosif Viktoratos and Athanasios Tsadiras
Future Internet 2025, 17(8), 360; https://doi.org/10.3390/fi17080360 - 8 Aug 2025
Abstract
The cold-start problem remains a critical challenge in personalized advertising, where users with limited or no interaction history often receive suboptimal recommendations. This study introduces a novel, three-stage framework that systematically integrates transformer architectures and large language models (LLMs) to improve recommendation accuracy, transparency, and user experience throughout the entire advertising pipeline. The proposed approach begins with transformer-enhanced feature extraction, leveraging self-attention and learned positional encodings to capture deep semantic relationships among users, ads, and context. It then employs an ensemble integration strategy combining enhanced state-of-the-art models with optimized aggregation for robust prediction. Finally, an LLM-driven enhancement module performs semantic reranking, personalized message refinement, and natural language explanation generation, while also addressing cold-start scenarios through pre-trained knowledge. The LLM component further supports diversification, fairness-aware ranking, and sentiment sensitivity to ensure more relevant, diverse, and ethically grounded recommendations. Extensive experiments on the DigiX and Avazu datasets demonstrate notable gains in click-through rate (CTR) prediction, while an in-depth evaluation with real users shows improvements in perceived ad relevance, message quality, transparency, and trust. This work advances the state of the art by combining CTR models with interpretability and contextual reasoning. The strengths of the proposed method, such as its innovative integration of components, empirical validation, multifaceted LLM application, and ethical alignment, highlight its potential as a robust, future-ready solution for personalized advertising.
Full article
(This article belongs to the Special Issue Information Networks with Human-Centric LLMs)
Open Access Article
Creating Automated Microsoft Bicep Application Infrastructure from GitHub in the Azure Cloud
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2025, 17(8), 359; https://doi.org/10.3390/fi17080359 - 7 Aug 2025
Abstract
Infrastructure as code (IaC) is essential for modern cloud development, enabling teams to define, deploy, and manage infrastructure in a consistent and repeatable manner. As organizations migrate to Azure, selecting the right approach is crucial for managing complexity, minimizing errors, and supporting DevOps practices. This paper examines the use of Azure Bicep with GitHub Actions to automate infrastructure deployment for an application in the Azure cloud. It explains how Bicep improves readability, modularity, and integration compared to traditional ARM templates and other automation tools. The solution utilizes a modular Bicep design to deploy resources, including virtual networks, managed identities, container apps, databases, and AI services, with environment-specific parameters for development, QA, and production. GitHub Actions workflows automate the building, deployment, and tearing down of infrastructure, ensuring consistent deployments across environments. Security considerations include managed identities, private networking, and secret management in continuous integration (CI) and continuous delivery (CD) pipelines. This paper provides a detailed architectural overview, workflow analysis, and implementation guidance to help teams adopt a robust, automated approach to Azure infrastructure deployment. By leveraging automation tooling and modern DevOps practices, organizations can streamline delivery and maintain secure, maintainable cloud environments.
Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities, 2nd Edition)
Open Access Review
Enhancing Industrial Processes Through Augmented Reality: A Scoping Review
by Alba Miranda, Aracely M. Vallejo, Paulina Ayala, Marcelo V. Garcia and Jose E. Naranjo
Future Internet 2025, 17(8), 358; https://doi.org/10.3390/fi17080358 - 7 Aug 2025
Abstract
Augmented reality (AR) in industry improves training and technical assistance by overlaying digital information on real environments, facilitating the visualisation and understanding of complex processes. It also enables more effective remote collaboration, optimising problem solving and decision making in real time. This paper presents a scoping review, conducted according to PRISMA guidelines, of the optimisation of industrial processes through the application of AR. The objectives of this study included characterising successful implementations of AR in various industrial processes, comparing different hardware, graphics engines, and associated costs, and determining the percentage of optimisation achieved through AR. The databases included were Scopus, SpringerLink, IEEE Xplore, and MDPI. Eligibility criteria were defined as English-language articles published between 2019 and 2024 that provide significant contributions to AR applications in engineering. The Cochrane method was used to assess bias. The rigorous selection process resulted in the inclusion of 38 articles. Key findings indicate that AR reduces errors and execution times, improves efficiency and productivity, and optimises training and maintenance processes, leading to cost savings and quality improvement. Unity 3D is the most widely used graphics engine for AR applications. The main applications of AR are in maintenance, assembly, training, and inspection, with maintenance being the most researched area. Challenges include the learning curve, high initial costs, and hardware limitations.
Full article
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
Open Access Review
Interest Flooding Attacks in Named Data Networking and Mitigations: Recent Advances and Challenges
by Simeon Ogunbunmi, Yu Chen, Qi Zhao, Deeraj Nagothu, Sixiao Wei, Genshe Chen and Erik Blasch
Future Internet 2025, 17(8), 357; https://doi.org/10.3390/fi17080357 - 6 Aug 2025
Abstract
Named Data Networking (NDN) represents a promising Information-Centric Networking architecture that addresses limitations of traditional host-centric Internet protocols by emphasizing content names rather than host addresses for communication. While NDN offers advantages in content distribution, mobility support, and built-in security features, its stateful forwarding plane introduces significant vulnerabilities, particularly Interest Flooding Attacks (IFAs). These attacks exploit the Pending Interest Table (PIT) by injecting malicious Interest packets for non-existent or unsatisfiable content, leading to resource exhaustion and denial of service against legitimate users. This survey examines research advances in IFA detection and mitigation from 2013 to 2024, analyzing seven relevant published detection and mitigation strategies to provide current insights into this evolving security challenge. We establish a taxonomy of attack variants, including Fake Interest, Unsatisfiable Interest, Interest Loop, and Collusive models, while examining their operational characteristics and network performance impacts. Our analysis categorizes defense mechanisms into five primary approaches: rate-limiting strategies, PIT management techniques, machine learning and artificial intelligence methods, reputation-based systems, and blockchain-enabled solutions. These approaches are evaluated for their effectiveness, computational requirements, and deployment feasibility. The survey extends to domain-specific implementations in resource-constrained environments, examining adaptations for Internet of Things deployments, wireless sensor networks, and high-mobility vehicular scenarios.
Five critical research directions are proposed: adaptive defense mechanisms against sophisticated attackers, privacy-preserving detection techniques, real-time optimization for edge computing environments, standardized evaluation frameworks, and hybrid approaches combining multiple mitigation strategies.
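Of the five defense categories surveyed above, rate limiting is the simplest to sketch. The following is an illustrative token-bucket limiter keyed by name prefix, admitting an Interest into the PIT only while the prefix has budget left; the parameters and prefix are hypothetical and not drawn from any specific surveyed scheme:

```python
class PrefixRateLimiter:
    """Token-bucket rate limiter with one bucket per name prefix: a router
    admits an Interest into the PIT only if its prefix still has tokens."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst   # tokens/second, bucket capacity
        self.buckets = {}                     # prefix -> (tokens, last_update)

    def admit(self, prefix: str, now: float) -> bool:
        tokens, last = self.buckets.get(prefix, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)  # refill
        if tokens >= 1.0:
            self.buckets[prefix] = (tokens - 1.0, now)
            return True
        self.buckets[prefix] = (tokens, now)  # drop: flooding suspected
        return False

limiter = PrefixRateLimiter(rate=10.0, burst=5.0)  # hypothetical parameters
# A burst of 20 Interests for one prefix arriving at the same instant:
admitted = sum(limiter.admit("/video/movie", now=0.0) for _ in range(20))
```

Per-prefix buckets bound the PIT state any single attacked prefix can consume, which is exactly the resource an IFA tries to exhaust; the survey's more sophisticated schemes adapt the rate from observed Interest-satisfaction ratios.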
Full article
(This article belongs to the Special Issue Intrusion Detection and Resiliency in Cyber-Physical Systems and Networks)
Open Access Article
Rethinking Modbus-UDP for Real-Time IIoT Systems
by Ivan Cibrario Bertolotti
Future Internet 2025, 17(8), 356; https://doi.org/10.3390/fi17080356 - 5 Aug 2025
Abstract
The original Modbus specification for RS-485 and RS-232 buses supported broadcast transmission. As the protocol evolved into Modbus-TCP to use the TCP transport, this useful feature was lost, likely due to the point-to-point nature of TCP connections. Later proposals did not restore the broadcast transmission capability, although they used UDP as the transport and UDP by itself would have supported it. Moreover, they did not address UDP's inherent lack of reliable delivery, leaving datagram loss detection and recovery to the application layer. This paper describes a novel redesign of Modbus-UDP that addresses these shortcomings. It achieves a mean round-trip time that is only 38% of Modbus-TCP's and seamlessly supports a previously published protocol based on Modbus broadcast. In addition, the built-in retransmission of Modbus-UDP reacts more efficiently than the equivalent Modbus-TCP mechanism, exhibiting 50% of its round-trip standard deviation when subject to a 1% two-way IP datagram loss probability. Combined with the lower overhead of UDP versus TCP, this makes the redesigned Modbus-UDP protocol better suited for a variety of Industrial Internet of Things systems with limited computing and communication resources.
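Why an application-layer retransmission mechanism matters over an unreliable transport can be illustrated with a small simulation. This sketch is not the paper's Modbus-UDP mechanism; it merely models stop-and-wait retries under the same 1% two-way datagram loss probability used in the evaluation:

```python
import random

def send_with_retransmit(loss_prob: float, max_tries: int, rng: random.Random):
    """Simulate stop-and-wait over a lossy datagram link: a try fails if
    either the request or the reply is lost; retry until success or give up.
    Returns the number of tries used, or None if all tries failed."""
    for attempt in range(1, max_tries + 1):
        request_lost = rng.random() < loss_prob
        reply_lost = rng.random() < loss_prob
        if not request_lost and not reply_lost:
            return attempt
    return None

rng = random.Random(42)  # fixed seed for reproducibility
tries = [send_with_retransmit(0.01, max_tries=5, rng=rng) for _ in range(10_000)]
success_rate = sum(t is not None for t in tries) / len(tries)
```

With 1% loss each way, a single try fails about 2% of the time, so a handful of retransmissions pushes the end-to-end failure probability to a negligible level, which is the reliability gap the redesigned protocol closes without TCP's overhead.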
Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies—2nd Edition)
Open Access Article
TSA-GRU: A Novel Hybrid Deep Learning Module for Learner Behavior Analytics in MOOCs
by Soundes Oumaima Boufaida, Abdelmadjid Benmachiche, Makhlouf Derdour, Majda Maatallah, Moustafa Sadek Kahil and Mohamed Chahine Ghanem
Future Internet 2025, 17(8), 355; https://doi.org/10.3390/fi17080355 - 5 Aug 2025
Abstract
E-Learning is an emerging dominant phenomenon in education, making the development of robust models that can accurately represent the dynamic behavior of learners in MOOCs even more critical. In this article, we propose the Temporal Sparse Attention-Gated Recurrent Unit (TSA-GRU), a novel deep learning framework that combines TSA with a sequential encoder based on the GRU. This hybrid model effectively reconstructs student response times and learning trajectories with high fidelity by leveraging the temporal embeddings of instructional and feedback activities. By dynamically filtering noise from student interactions, TSA-GRU generates context-aware representations that seamlessly integrate both short-term fluctuations and long-term learning patterns. Empirical evaluation on the 2009–2010 ASSISTments dataset demonstrates that TSA-GRU achieved a test accuracy of 95.60% and a test loss of 0.0209, outperforming the Modular Sparse Attention-Gated Recurrent Unit (MSA-GRU), Bayesian Knowledge Tracing (BKT), Performance Factors Analysis (PFA), and TSA in the same experimental design. TSA-GRU converged within five training epochs; thus, while TSA-GRU demonstrates strong predictive performance for knowledge tracing tasks, these findings are specific to this dataset and should not be regarded as conclusive for all data. Further statistical validation through five-fold cross-validation, confidence intervals, and paired t-tests has confirmed the robustness, consistency, and statistically significant superiority of TSA-GRU over the baseline model MSA-GRU. TSA-GRU's scalability and capacity to incorporate a temporal dimension of knowledge make it well-positioned to analyze complex learner behaviors and plan interventions for adaptive learning in computerized learning systems.
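The "sparse attention" ingredient can be illustrated schematically: keep only the k largest attention scores and renormalize, so that weak interactions contribute exactly zero weight. This toy function is an assumption-laden illustration of that general idea, not the TSA mechanism from the paper:

```python
import math

def sparse_attention_weights(scores, k):
    """Sparse attention, schematically: keep the k largest attention scores,
    mask the rest, then softmax over the survivors so weights sum to 1."""
    keep = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = [math.exp(s) if i in keep else 0.0 for i, s in enumerate(scores)]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical per-step relevance scores over a learner's last 6 interactions.
scores = [0.1, 2.0, -1.0, 1.5, 0.0, 3.0]
weights = sparse_attention_weights(scores, k=3)
```

Masking before the softmax is what lets the model "dynamically filter noise": low-relevance interactions receive zero weight rather than a small positive one.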
Full article

Open Access Editorial
Editorial for the Special Issue “Virtual Reality and Metaverse: Impact on the Digital Transformation of Society—2nd Edition”
by Diego Vergara
Future Internet 2025, 17(8), 354; https://doi.org/10.3390/fi17080354 - 4 Aug 2025
Abstract
Virtual reality (VR) is transforming society by enabling the development of diverse applications across a wide range of fields [...]
Full article
(This article belongs to the Special Issue Virtual Reality and Metaverse: Impact on the Digital Transformation of Society II)
Open Access Review
Transforming Data Annotation with AI Agents: A Review of Architectures, Reasoning, Applications, and Impact
by Md Monjurul Karim, Sangeen Khan, Dong Hoang Van, Xinyue Liu, Chunhui Wang and Qiang Qu
Future Internet 2025, 17(8), 353; https://doi.org/10.3390/fi17080353 - 2 Aug 2025
Abstract
Data annotation serves as a critical foundation for artificial intelligence (AI) and machine learning (ML). Recently, AI agents powered by large language models (LLMs) have emerged as effective solutions to longstanding challenges in data annotation, such as scalability, consistency, cost, and limitations in domain expertise. These agents facilitate intelligent automation and adaptive decision-making, thereby enhancing the efficiency and reliability of annotation workflows across various fields. Despite the growing interest in this area, a systematic understanding of the role and capabilities of AI agents in annotation is still underexplored. This paper seeks to fill that gap by providing a comprehensive review of how LLM-driven agents support advanced reasoning strategies, adaptive learning, and collaborative annotation efforts. We analyze agent architectures, integration patterns within workflows, and evaluation methods, along with real-world applications in sectors such as healthcare, finance, technology, and media. Furthermore, we evaluate current tools and platforms that support agent-based annotation, addressing key challenges such as quality assurance, bias mitigation, transparency, and scalability. Lastly, we outline future research directions, highlighting the importance of federated learning, cross-modal reasoning, and responsible system design to advance the development of next-generation annotation ecosystems.
Full article
(This article belongs to the Special Issue 2024 and 2025 Feature Papers from Future Internet’s Editorial Board Members)
Open Access Article
Confidential Smart Contracts and Blockchain to Implement a Watermarking Protocol
by Franco Frattolillo
Future Internet 2025, 17(8), 352; https://doi.org/10.3390/fi17080352 - 1 Aug 2025
Abstract
Watermarking protocols represent a possible solution to the problem of digital copyright protection of content distributed on the Internet. Their implementation, however, remains a complex problem due to the difficulties researchers encounter in proposing solutions that are secure, easy to use, and, at the same time, free of “trusted third parties” (TTPs). In this regard, implementations based on blockchain and smart contracts are among the most advanced and promising, even if they are affected by problems regarding performance and the privacy of the information exchanged and processed by smart contracts and managed by blockchains. This paper presents a watermarking protocol implemented with smart contracts and blockchain. The protocol uses a “layer-2” blockchain execution model and performs its computation in “trusted execution environments” (TEEs). Its implementation can therefore guarantee efficient and confidential execution without compromising ease of use or resorting to TTPs. The protocol and its implementation can thus be considered a valid answer to the “trilemma” that afflicts the use of blockchains, managing to guarantee decentralization, security, and scalability.
Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends—Volume II)
Open Access Article
Internet of Things Platform for Assessment and Research on Cybersecurity of Smart Rural Environments
by Daniel Sernández-Iglesias, Llanos Tobarra, Rafael Pastor-Vargas, Antonio Robles-Gómez, Pedro Vidal-Balboa and João Sarraipa
Future Internet 2025, 17(8), 351; https://doi.org/10.3390/fi17080351 - 1 Aug 2025
Abstract
Rural regions face significant barriers to adopting IoT technologies, due to limited connectivity, energy constraints, and poor technical infrastructure. While urban environments benefit from advanced digital systems and cloud services, rural areas often lack the necessary conditions to deploy and evaluate secure and autonomous IoT solutions. To help overcome this gap, this paper presents the Smart Rural IoT Lab, a modular and reproducible testbed designed to replicate the deployment conditions in rural areas using open-source tools and affordable hardware. The laboratory integrates long-range and short-range communication technologies in six experimental scenarios, implementing protocols such as MQTT, HTTP, UDP, and CoAP. These scenarios simulate realistic rural use cases, including environmental monitoring, livestock tracking, infrastructure access control, and heritage site protection. Local data processing is achieved through containerized services like Node-RED, InfluxDB, MongoDB, and Grafana, ensuring complete autonomy, without dependence on cloud services. A key contribution of the laboratory is the generation of structured datasets from real network traffic captured with Tcpdump and preprocessed using Zeek. Unlike simulated datasets, the collected data reflect communication patterns generated from real devices. Although the current dataset only includes benign traffic, the platform is prepared for future incorporation of adversarial scenarios (spoofing, DoS) to support AI-based cybersecurity research. While experiments were conducted in an indoor controlled environment, the testbed architecture is portable and suitable for future outdoor deployment. The Smart Rural IoT Lab addresses a critical gap in current research infrastructure, providing a realistic and flexible foundation for developing secure, cloud-independent IoT solutions, contributing to the digital transformation of rural regions.
(This article belongs to the Special Issue IoT Architecture for Smart Environments: Mechanisms, Approaches, and Applications)
Open Access Review
Security Issues in IoT-Based Wireless Sensor Networks: Classifications and Solutions
by Dung T. Nguyen, Mien L. Trinh, Minh T. Nguyen, Thang C. Vu, Tao V. Nguyen, Long Q. Dinh and Mui D. Nguyen
Future Internet 2025, 17(8), 350; https://doi.org/10.3390/fi17080350 - 1 Aug 2025
Abstract
In recent years, the Internet of Things (IoT) has developed considerably and now plays an important role in domains such as industry, agriculture, healthcare, transportation, and the environment, especially in smart cities. Wireless sensor networks (WSNs) are key components of IoT systems (WSN-IoT), enabling smart applications and process automation. As the number of connected IoT devices increases, privacy and security issues become more complicated due to their exposed operating environments and limited resources. Hence, solutions must be continually updated to protect data and user privacy from threats and attacks. To support the safety and reliability of such systems, this paper addresses and classifies security issues in WSN-IoT, identifying the security challenges and requirements posed by different kinds of attacks on WSNs and IoT systems. In addition, security solutions corresponding to different types of attacks are presented, analyzed, and evaluated. We provide comparisons and classifications based on specific goals and applications, which we hope can suggest suitable solutions for specific purposes in practice, and we outline research directions for new security mechanisms.
Open Access Article
AI-Supported EUD for Data Visualization: An Exploratory Case Study
by Sara Beschi, Daniela Fogli, Luigi Gargioni and Angela Locoro
Future Internet 2025, 17(8), 349; https://doi.org/10.3390/fi17080349 - 1 Aug 2025
Abstract
Data visualization is a key activity in data-driven decision making and is gaining momentum in many organizational contexts. However, the role and contribution of end-user development (EUD) and artificial intelligence (AI) technologies in data visualization and analytics have not yet been clearly or systematically studied. This work investigates how effectively AI-supported EUD tools may assist visual analytics tasks in organizations. An exploratory case study based on eight interviews with key informants provided a deep understanding of data analysis and visualization practices in a large Italian company. It aimed at identifying the professional roles and competencies needed in the business context, understanding the data sources and formats used in daily activities, and formulating suitable hypotheses to guide the design of AI-supported EUD tools for data analysis and visualization. In particular, the interview results led to the development of a prototype LLM-based EUD environment, which was then used with selected target users to collect their opinions and expectations about this type of intervention in their work practice and organization. All the data collected during the exploratory case study finally led to a set of design guidelines for AI-supported EUD for data visualization.
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)
Open Access Article
A Trusted Multi-Cloud Brokerage System for Validating Cloud Services Using Ranking Heuristics
by Rajganesh Nagarajan, Vinothiyalakshmi Palanichamy, Ramkumar Thirunavukarasu and J. Arun Pandian
Future Internet 2025, 17(8), 348; https://doi.org/10.3390/fi17080348 - 31 Jul 2025
Abstract
Cloud computing offers a broad spectrum of services to users, particularly in multi-cloud environments where service-centric features support users from multiple endpoints. To improve service availability and optimize the utilization of required services, cloud brokerage has been integrated into multi-cloud systems. The primary objective of a cloud broker is to ensure the quality and outcomes of services offered to customers. However, traditional cloud brokers face limitations in measuring service trust, ensuring validity, and anticipating future enhancements of services across different cloud platforms. To address these challenges, the proposed intelligent cloud broker (ICB) integrates an intelligence mechanism that enhances decision-making within a multi-cloud environment. This broker performs comprehensive validation and verification of service trustworthiness by analyzing various trust factors, including service response time, sustainability, suitability, accuracy, transparency, interoperability, availability, reliability, stability, cost, throughput, efficiency, and scalability. Customer feedback is also incorporated to assess these trust factors prior to service recommendation. The proposed model calculates service ranking (SR) values for available cloud services and dynamically includes newly introduced services during the validation process by mapping them against existing entries in the Service Collection Repository (SCR). Performance evaluation using the Google cluster-usage traces dataset demonstrates that the ICB outperforms existing approaches such as the Clustering-Based Trust Degree Computation (CBTDC) algorithm and the Service Context-Aware QoS Prediction and Recommendation (SCAQPR) model. The results confirm that the ICB significantly enhances the effectiveness and reliability of cloud service recommendations for users.
Open Access Article
DigiHortiRobot: An AI-Driven Digital Twin Architecture for Hydroponic Greenhouse Horticulture with Dual-Arm Robotic Automation
by Roemi Fernández, Eduardo Navas, Daniel Rodríguez-Nieto, Alain Antonio Rodríguez-González and Luis Emmi
Future Internet 2025, 17(8), 347; https://doi.org/10.3390/fi17080347 - 31 Jul 2025
Abstract
The integration of digital twin technology with robotic automation holds significant promise for advancing sustainable horticulture in controlled environment agriculture. This article presents DigiHortiRobot, a novel AI-driven digital twin architecture tailored for hydroponic greenhouse systems. The proposed framework integrates real-time sensing, predictive modeling, task planning, and dual-arm robotic execution within a modular, IoT-enabled infrastructure. DigiHortiRobot is structured into three progressive implementation phases: (i) monitoring and data acquisition through a multimodal perception system; (ii) decision support and virtual simulation for scenario analysis and intervention planning; and (iii) autonomous execution with feedback-based model refinement. The physical layer encompasses crops, infrastructure, and a mobile dual-arm robot; the virtual layer incorporates semantic modeling and simulation environments; and the synchronization layer enables continuous bi-directional communication via a nine-tier IoT architecture inspired by FIWARE standards. A robot task assignment algorithm is introduced to support operational autonomy while maintaining human oversight. The system is designed to optimize horticultural workflows such as seeding and harvesting while allowing farmers to interact remotely through cloud-based interfaces. Compared to previous digital agriculture approaches, DigiHortiRobot enables closed-loop coordination among perception, simulation, and action, supporting real-time task adaptation in dynamic environments. Experimental validation in a hydroponic greenhouse confirmed robust performance in both seeding and harvesting operations, achieving over 90% accuracy in localizing target elements and successfully executing planned tasks. The platform thus provides a strong foundation for future research in predictive control, semantic environment modeling, and scalable deployment of autonomous systems for high-value crop production.
(This article belongs to the Special Issue Advances in Smart Environments and Digital Twin Technologies)
Open Access Article
RestRho: A JSON-Based Domain-Specific Language for Designing and Developing RESTful APIs to Validate RhoArchitecture
by Enrique Chavarriaga, Luis Rojas, Francy D. Rodríguez, Kat Sorbello and Francisco Jurado
Future Internet 2025, 17(8), 346; https://doi.org/10.3390/fi17080346 - 31 Jul 2025
Abstract
Domain-Specific Languages with JSON grammar (JSON-DSLs) are specialized programming languages tailored to specific problem domains, offering higher abstraction levels and simplifying software implementation through the JSON standard. RhoArchitecture is an approach for designing and executing JSON-DSLs, incorporating a modular programming model, a JSON-based evaluation engine, and an integrated web development environment. This paper presents RestRho, a RESTful NodeJS server developed using two JSON-DSLs designed with RhoArchitecture: SQLRho and DBRestRho. These languages enable declarative specification of database operations and HTTP requests, respectively, supporting modularity, reuse, and template-based transformations. We validate the RestRho implementation through a dual approach. First, we apply software metrics to assess code quality, maintainability, and complexity. Second, we conduct an empirical study involving 39 final-year computer engineering students, who completed 18 structured tasks and provided feedback via questionnaires. The results demonstrate the tool’s usability, development efficiency, and potential for adoption in web application development.
Open Access Article
Privacy-Aware Hierarchical Federated Learning in Healthcare: Integrating Differential Privacy and Secure Multi-Party Computation
by Jatinder Pal Singh, Aqsa Aqsa, Imran Ghani, Raj Sonani and Vijay Govindarajan
Future Internet 2025, 17(8), 345; https://doi.org/10.3390/fi17080345 - 31 Jul 2025
Abstract
The development of big data analytics in healthcare has created demand for privacy-conscious and scalable machine learning algorithms that allow patient information to be used across different healthcare organizations. This study discusses the difficulties traditional federated learning frameworks face in the healthcare sector, such as scalability, computational efficiency, and preserving patient privacy across numerous healthcare systems. A new conceptual model, Hierarchical Federated Learning (HFL), is proposed for large, integrated healthcare organizations comprising several institutions. The first level of aggregation forms regional centers where local updates are collected before being sent to the second level of aggregation to form the global update, thus reducing message-passing traffic and improving the scalability of the HFL architecture. Furthermore, the HFL framework leverages robust privacy mechanisms such as Local Differential Privacy (LDP), Gaussian Differential Privacy (GDP), Secure Multi-Party Computation (SMPC), and Homomorphic Encryption (HE). In addition, a novel aggregated gradient perturbation mechanism is presented to alleviate noise in model updates while maintaining both privacy and utility. The proposed HFL framework is evaluated on real-life healthcare datasets and an artificial dataset created using Generative Adversarial Networks (GANs), showing that it outperforms other methods: our approach achieved an accuracy of around 97% and 30% less privacy leakage compared to the existing FLBM-IoT and PPFLB models. The proposed HFL approach can help find the optimal balance between privacy and model performance, which is crucial for scalable and secure healthcare applications.
(This article belongs to the Special Issue Security and Privacy in AI-Powered Systems)
Topics
Topic in
Algorithms, Future Internet, Information, Mathematics, Symmetry
Research on Data Mining of Electronic Health Records Using Deep Learning Methods
Topic Editors: Dawei Yang, Yu Zhu, Hongyi Xin
Deadline: 31 August 2025
Topic in
Algorithms, Applied Sciences, Future Internet, Information, Mathematics
Soft Computing and Machine Learning
Topic Editors: Rui Araújo, António Pedro Aguiar, Nuno Lau, Rodrigo Ventura, João Fabro
Deadline: 30 September 2025
Topic in
Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025
Topic in
Applied Sciences, Electronics, Future Internet, IoT, Technologies, Inventions, Sensors, Vehicles
Next-Generation IoT and Smart Systems for Communication and Sensing
Topic Editors: Dinh-Thuan Do, Vitor Fialho, Luis Pires, Francisco Rego, Ricardo Santos, Vasco Velez
Deadline: 31 January 2026

Special Issues
Special Issue in
Future Internet
Intrusion Detection and Resiliency in Cyber-Physical Systems and Networks
Guest Editors: Olusola Tolulope Odeyomi, Temitayo Olowu
Deadline: 20 August 2025
Special Issue in
Future Internet
Artificial Intelligence and Control Systems for Industry 4.0 and 5.0
Guest Editors: Filipe Pereira, Paulo Leitao
Deadline: 20 August 2025
Special Issue in
Future Internet
Intelligent Agents and Their Application
Guest Editors: Stanimir Stoyanov, Asya Stoyanova Doycheva, Veneta Tabakova-Komsalova
Deadline: 31 August 2025
Special Issue in
Future Internet
Moving Towards 6G Wireless Technologies—2nd Edition
Guest Editors: Alessandro Raschellà, Michael Mackay
Deadline: 31 August 2025
Topical Collections
Topical Collection in
Future Internet
Innovative People-Centered Solutions Applied to Industries, Cities and Societies
Collection Editors: Dino Giuli, Filipe Portela
Topical Collection in
Future Internet
Featured Reviews of Future Internet Research
Collection Editor: Dino Giuli
Topical Collection in
Future Internet
Computer Vision, Deep Learning and Machine Learning with Applications
Collection Editors: Remus Brad, Arpad Gellert
Topical Collection in
Future Internet
Machine Learning Approaches for User Identity
Collection Editors: Kaushik Roy, Mustafa Atay, Ajita Rattani