Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

55 pages, 4454 KB  
Article
The Future of Education: A Multi-Layered Metaverse Classroom Model for Immersive and Inclusive Learning
by Leyli Nouraei Yeganeh, Nicole Scarlett Fenty, Yu Chen, Amber Simpson and Mohsen Hatami
Future Internet 2025, 17(2), 63; https://doi.org/10.3390/fi17020063 - 4 Feb 2025
Cited by 61 | Viewed by 15963
Abstract
Modern education faces persistent challenges, including disengagement, inequitable access to learning resources, and the lack of personalized instruction, particularly in virtual environments. In this perspective, we envision a transformative Metaverse classroom model, the Multi-layered Immersive Learning Environment (Meta-MILE), to address these critical issues. The Meta-MILE framework integrates essential components such as immersive infrastructure, personalized interactions, social collaboration, and advanced assessment techniques to enhance student engagement and inclusivity. By leveraging three-dimensional (3D) virtual environments, artificial intelligence (AI)-driven personalization, gamified learning pathways, and scenario-based evaluations, the Meta-MILE model offers tailored learning experiences that traditional virtual classrooms often struggle to achieve. Acknowledging potential challenges such as accessibility, infrastructure demands, and data security, this study proposes practical strategies to ensure equitable access and safe interactions within the Metaverse. Empirical findings from our pilot experiment demonstrated the framework’s effectiveness in improving engagement and skill acquisition, with broader implications for educational policy and competency-based, experiential learning approaches. Looking ahead, we advocate for ongoing research to validate long-term learning outcomes and technological advancements to make immersive learning more accessible and secure. Our perspective underscores the transformative potential of the Metaverse classroom in shaping inclusive, future-ready educational environments capable of meeting the diverse needs of learners worldwide.
(This article belongs to the Special Issue Human-Centered Artificial Intelligence)

30 pages, 3400 KB  
Review
AI Agents Meet Blockchain: A Survey on Secure and Scalable Collaboration for Multi-Agents
by Md Monjurul Karim, Dong Hoang Van, Sangeen Khan, Qiang Qu and Yaroslav Kholodov
Future Internet 2025, 17(2), 57; https://doi.org/10.3390/fi17020057 - 2 Feb 2025
Cited by 36 | Viewed by 19268
Abstract
In recent years, the interplay between AI agents and blockchain has enabled secure and scalable collaboration among multi-agent systems, promoting unprecedented levels of autonomy and interoperability. AI agents play a vital role in facilitating complex decision making and improving operational efficiency in blockchain systems. This collaborative synergy is particularly evident in how multi-agent systems collectively tackle complex tasks to ensure seamless integration within these frameworks. While significant efforts have been made to integrate AI agents and blockchain, most studies overlook the broader potential of AI agents in addressing challenges such as interoperability, scalability, and privacy issues. In this paper, we bridge these gaps by illustrating the interplay between AI agents and blockchain. Specifically, we explore how AI agents enhance decentralized systems and examine blockchain’s role in enabling secure and scalable collaboration. Furthermore, we categorize practical applications across domains, such as Web3, decentralized finance (DeFi), asset management, and autonomous systems, providing practical insights and real-world use cases. Additionally, we identify key research challenges, including the complexities of multi-agent coordination, interoperability across diverse systems, and privacy maintenance in decentralized frameworks. Finally, we offer future directions in terms of governance, sovereignty, computation, and interpretability to promote a secure and responsible ecosystem.

25 pages, 6644 KB  
Review
Intelligent Virtual Reality and Augmented Reality Technologies: An Overview
by Georgios Lampropoulos
Future Internet 2025, 17(2), 58; https://doi.org/10.3390/fi17020058 - 2 Feb 2025
Cited by 26 | Viewed by 7003
Abstract
The research into artificial intelligence (AI), the metaverse, and extended reality (XR) technologies, such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), has been expanding over recent years. This study aims to provide an overview regarding the combination of AI with XR technologies and the metaverse through the examination of 880 articles using different approaches. The field has experienced a 91.29% increase in its annual growth rate, and although it is still in its infancy, the outcomes of this study highlight the potential of these technologies to be effectively combined and applied in various domains, transforming and enriching them. Through content analysis and topic modeling, the main topics and areas in which this combination is mostly being researched and applied are as follows: (1) “Education/Learning/Training”, (2) “Healthcare and Medicine”, (3) “Generative artificial intelligence/Large language models”, (4) “Virtual worlds/Virtual avatars/Virtual assistants”, (5) “Human-computer interaction”, (6) “Machine learning/Deep learning/Neural networks”, (7) “Communication networks”, (8) “Industry”, (9) “Manufacturing”, (10) “E-commerce”, (11) “Entertainment”, (12) “Smart cities”, and (13) “New technologies” (e.g., digital twins, blockchain, internet of things, etc.). The study explores the documents through various dimensions and concludes by presenting the existing limitations, identifying key challenges, and providing suggestions for future research.

35 pages, 550 KB  
Article
Decentralized Identity Management for Internet of Things (IoT) Devices Using IOTA Blockchain Technology
by Tamai Ramírez-Gordillo, Antonio Maciá-Lillo, Francisco A. Pujol, Nahuel García-D’Urso, Jorge Azorín-López and Higinio Mora
Future Internet 2025, 17(1), 49; https://doi.org/10.3390/fi17010049 - 20 Jan 2025
Cited by 21 | Viewed by 7873
Abstract
The exponential growth of the Internet of Things (IoT) necessitates robust, scalable, and secure identity management solutions to handle the vast number of interconnected devices. Traditional centralized identity systems are increasingly inadequate due to their vulnerabilities, such as single points of failure, scalability issues, and limited user control over data. This study explores a decentralized identity management model leveraging the IOTA Tangle, a Directed Acyclic Graph (DAG)-based distributed ledger technology, to address these challenges. By integrating Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and IOTA-specific technologies like IOTA Identity, IOTA Streams, and IOTA Stronghold, we propose a proof-of-concept framework that enhances security, scalability, and privacy in IoT ecosystems. Our implementation on resource-constrained IoT devices demonstrates the feasibility of this approach, highlighting significant improvements in transaction efficiency, real-time data exchange, and cryptographic key management. Furthermore, this research aligns with Web 3.0 principles, emphasizing decentralization, user autonomy, and data sovereignty. The findings suggest that IOTA-based solutions can effectively advance secure and user-centric identity management in IoT, paving the way for broader applications in various domains, including smart cities and healthcare.
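As a rough, stdlib-only illustration of what a device identity looks like at the data level, the sketch below assembles a minimal DID document following the generic W3C DID Core data model. The `did:iota:...` identifier and key value are placeholders, and the actual IOTA Identity library exposes its own typed API rather than raw dictionaries:

```python
import json

def build_did_document(did: str, public_key_multibase: str) -> dict:
    """Assemble a minimal W3C-style DID document for an IoT device.

    Illustrative sketch of the generic DID data model only; the IOTA
    Identity library defines its own document types and methods.
    """
    verification_method = {
        "id": f"{did}#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,
        "publicKeyMultibase": public_key_multibase,
    }
    return {
        "@context": "https://www.w3.org/ns/did/v1",
        "id": did,
        "verificationMethod": [verification_method],
        # Reference the key by id for the authentication relationship.
        "authentication": [verification_method["id"]],
    }

# Placeholder identifier and key, for illustration only.
doc = build_did_document("did:iota:example123", "z6MkExampleKey")
print(json.dumps(doc, indent=2))
```

In a deployment, the document would be anchored on the Tangle and resolved by verifiers rather than printed locally.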

16 pages, 1545 KB  
Article
Digital Twins: Strategic Guide to Utilize Digital Twins to Improve Operational Efficiency in Industry 4.0
by Italo Cesidio Fantozzi, Annalisa Santolamazza, Giancarlo Loy and Massimiliano Maria Schiraldi
Future Internet 2025, 17(1), 41; https://doi.org/10.3390/fi17010041 - 17 Jan 2025
Cited by 29 | Viewed by 8465
Abstract
The Fourth Industrial Revolution, known as Industry 4.0, has transformed the manufacturing landscape by integrating advanced digital technologies, fostering automation, interconnectivity, and data-driven decision-making. Among these innovations, Digital Twins (DTs) have emerged as a pivotal tool, enabling real-time monitoring, simulation, and optimization of production processes. This paper provides a comprehensive exploration of DT technology, offering a strategic framework for its effective implementation within Industry 4.0 environments to enhance operational efficiency. The proposed methodology integrates key enabling technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning to create accurate digital replicas of manufacturing systems. Through a detailed case study, this work demonstrates how DTs can optimize production processes, reduce downtime, and improve maintenance strategies. The findings highlight DTs’ transformative potential in achieving continuous improvement, competitiveness, and operational excellence. This research aims to provide organizations with actionable insights and a roadmap to leverage DT technology for sustainable industrial innovation.
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)

17 pages, 12972 KB  
Article
Wireless Accelerometer Architecture for Bridge SHM: From Sensor Design to System Deployment
by Francesco Morgan Bono, Alessio Polinelli, Luca Radicioni, Lorenzo Benedetti, Francesco Castelli-Dezza, Simone Cinquemani and Marco Belloli
Future Internet 2025, 17(1), 29; https://doi.org/10.3390/fi17010029 - 10 Jan 2025
Cited by 35 | Viewed by 4035
Abstract
This paper introduces a framework to perform operational modal analysis (OMA) for structural health monitoring (SHM) by presenting the development and validation of a low-power, solar-powered wireless sensor network (WSN) tailored for bridge structures. The system integrates accelerometers and temperature sensors for dynamic structural assessment, all interconnected through the energy-efficient message queuing telemetry transport (MQTT) messaging protocol. Moreover, it delves into the details of sensor selection, calibration, and the design considerations necessary to address the unique challenges associated with bridge structures. Special attention is given to the solar-powered aspect, allowing for extended deployment periods without the need for frequent maintenance or battery replacements. To validate the proposed system, a comprehensive field deployment was conducted on an actual bridge structure. The collected data were transmitted through MQTT messages and analyzed by means of OMA. Comparative studies with traditional wired systems underscore the advantages of the solar-powered wireless solution in terms of sustainability, scalability, and ease of deployment. Results from the validation phase demonstrate the system’s capability to provide accurate and real-time data needed to assess the health state of the monitored asset. This paper concludes with insights into the practical implications of adopting such a solar-powered WSN, emphasizing its potential to revolutionize bridge health monitoring by offering a cost-effective and energy-efficient solution for long-term infrastructure resilience.
(This article belongs to the Section Internet of Things)
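The paper's wire format is not given here, so purely as an assumption, the sketch below packs accelerometer triplets into the kind of compact binary payload a low-power node might publish over MQTT (the node-id field and sample layout are hypothetical):

```python
import struct

# Hypothetical payload layout: node id (uint16), unix timestamp (uint32),
# then N little-endian float32 triplets (ax, ay, az) in g.
HEADER = struct.Struct("<HI")
SAMPLE = struct.Struct("<3f")

def pack_payload(node_id: int, timestamp: int, samples) -> bytes:
    """Serialize a batch of accelerometer samples into one binary message."""
    body = b"".join(SAMPLE.pack(*s) for s in samples)
    return HEADER.pack(node_id, timestamp) + body

def unpack_payload(payload: bytes):
    """Inverse of pack_payload, as a gateway-side subscriber would use."""
    node_id, timestamp = HEADER.unpack_from(payload)
    samples = [SAMPLE.unpack_from(payload, o)
               for o in range(HEADER.size, len(payload), SAMPLE.size)]
    return node_id, timestamp, samples

payload = pack_payload(7, 1700000000, [(0.01, -0.02, 0.98)])
# The payload would then be published on a topic such as "bridge/acc/7"
# via an MQTT client; the broker, topic, and QoS are deployment choices.
```

A binary layout like this keeps per-sample overhead at 12 bytes, which matters on battery- and solar-budgeted links.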

28 pages, 1559 KB  
Article
XI2S-IDS: An Explainable Intelligent 2-Stage Intrusion Detection System
by Maiada M. Mahmoud, Yasser Omar Youssef and Ayman A. Abdel-Hamid
Future Internet 2025, 17(1), 25; https://doi.org/10.3390/fi17010025 - 8 Jan 2025
Cited by 16 | Viewed by 4575
Abstract
The rapid evolution of technologies such as the Internet of Things (IoT), 5G, and cloud computing has exponentially increased the complexity of cyber attacks. Modern Intrusion Detection Systems (IDSs) must be capable of identifying not only frequent, well-known attacks but also low-frequency, subtle intrusions that are often missed by traditional systems. The challenge is further compounded by the fact that most IDSs rely on black-box machine learning (ML) and deep learning (DL) models, making it difficult for security teams to interpret their decisions. This lack of transparency is particularly problematic in environments where quick and informed responses are crucial. To address these challenges, we introduce the XI2S-IDS framework—an Explainable, Intelligent 2-Stage Intrusion Detection System. The XI2S-IDS framework uniquely combines a two-stage approach with SHAP-based explanations, offering improved detection and interpretability for low-frequency attacks. Binary classification is conducted in the first stage, followed by multi-class classification in the second stage. By leveraging SHAP values, XI2S-IDS enhances transparency in decision-making, allowing security analysts to gain clear insights into feature importance and the model’s rationale. Experiments conducted on the UNSW-NB15 and CICIDS2017 datasets demonstrate significant improvements in detection performance, with a notable reduction in false negative rates for low-frequency attacks, while maintaining high precision, recall, and F1-scores.
(This article belongs to the Section Cybersecurity)
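The two-stage flow described above (binary screening first, then attack-class labelling of flagged flows only) can be sketched independently of the paper's actual models. The toy nearest-centroid classifier below is a stand-in; the paper's classifiers and their SHAP explanations are far richer, and only the composition of the two stages is illustrated:

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in classifier: predict the class of the closest centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def two_stage_predict(binary_clf, attack_clf, X):
    """Stage 1 flags traffic as benign (0) or malicious (1);
    stage 2 assigns an attack class only to the flagged flows."""
    out = np.array(["benign"] * len(X), dtype=object)
    flagged = binary_clf.predict(X) == 1
    if flagged.any():
        out[flagged] = attack_clf.predict(X[flagged])
    return out

# Toy data: benign traffic near the origin, two attack classes elsewhere.
X = np.array([[0.1, 0.0], [5.1, 4.9], [-5.0, 5.2], [0.0, 0.2]])
y_bin = np.array([0, 1, 1, 0])
bin_clf = NearestCentroid().fit(X, y_bin)
atk_clf = NearestCentroid().fit(X[y_bin == 1], np.array(["dos", "recon"]))
preds = two_stage_predict(bin_clf, atk_clf, X)
```

Splitting detection this way lets the second-stage model specialise on malicious traffic only, which is one route to better recall on low-frequency classes.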

23 pages, 2052 KB  
Article
On Edge-Fog-Cloud Collaboration and Reaping Its Benefits: A Heterogeneous Multi-Tier Edge Computing Architecture
by Niroshinie Fernando, Samir Shrestha, Seng W. Loke and Kevin Lee
Future Internet 2025, 17(1), 22; https://doi.org/10.3390/fi17010022 - 7 Jan 2025
Cited by 15 | Viewed by 7920
Abstract
Edge, fog, and cloud computing provide complementary capabilities to enable distributed processing of IoT data. This requires offloading mechanisms, decision-making mechanisms, support for the dynamic availability of resources, and the cooperation of available nodes. This paper proposes a novel 3-tier architecture that integrates edge, fog, and cloud computing to harness their collective strengths, facilitating optimised data processing across these tiers. Our approach optimises performance, reduces energy consumption, and lowers costs. We evaluate our architecture through a series of experiments conducted on a purpose-built testbed. The results demonstrate significant improvements, with speedups of up to 7.5 times and energy savings reaching 80%, underlining the effectiveness and practical benefits of our cooperative edge-fog-cloud model in supporting the dynamic computational needs of IoT ecosystems. We argue that multi-tier (e.g., edge-fog-cloud) dynamic task offloading and management of heterogeneous devices will be key to flexible edge computing, and that the advantage of task relocation and offloading is not straightforward but depends on the configuration of devices and relative device capabilities.
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)
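The underlying offloading trade-off, shipping data to a faster tier versus computing closer to the source, can be illustrated with a deliberately simple latency model. The bandwidth and compute figures below are invented for illustration and are not taken from the paper's testbed:

```python
# Hypothetical tier parameters: bandwidth to reach the tier (MB/s) and
# effective compute rate (Mcycles/s). Illustrative values only.
TIERS = {
    "edge":  {"bandwidth": 50.0, "compute": 1_000.0},
    "fog":   {"bandwidth": 20.0, "compute": 10_000.0},
    "cloud": {"bandwidth": 5.0,  "compute": 100_000.0},
}

def estimated_latency(tier: dict, data_mb: float, mcycles: float) -> float:
    """Total latency = time to ship the input + time to compute (seconds)."""
    return data_mb / tier["bandwidth"] + mcycles / tier["compute"]

def choose_tier(data_mb: float, mcycles: float) -> str:
    """Offload each task to whichever tier minimises estimated completion time."""
    return min(TIERS, key=lambda t: estimated_latency(TIERS[t], data_mb, mcycles))
```

Even this toy model reproduces the paper's qualitative point: compute-heavy, data-light tasks favour the cloud, data-heavy tasks favour the edge, and the fog wins in between, so the benefit of offloading depends entirely on the device and link configuration.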

20 pages, 1896 KB  
Article
A Blockchain-Assisted Federated Learning Framework for Secure and Self-Optimizing Digital Twins in Industrial IoT
by Innocent Boakye Ababio, Jan Bieniek, Mohamed Rahouti, Thaier Hayajneh, Mohammed Aledhari, Dinesh C. Verma and Abdellah Chehri
Future Internet 2025, 17(1), 13; https://doi.org/10.3390/fi17010013 - 3 Jan 2025
Cited by 37 | Viewed by 6760
Abstract
Optimizing digital twins in the Industrial Internet of Things (IIoT) requires secure and adaptable AI models. The IIoT enables digital twins, virtual replicas of physical assets, to improve real-time decision-making, but challenges remain in trust, data security, and model accuracy. This paper presents a novel framework combining blockchain technology and federated learning (FL) to address these issues. By deploying AI models on edge devices and using FL, data privacy is maintained while enabling collaboration across industrial assets. Blockchain ensures secure data management and transparency, while explainable AI (XAI) enhances interpretability. The framework improves transparency, control, security, privacy, and scalability for self-optimizing digital twins in IIoT. A real-world evaluation demonstrates the framework’s effectiveness in enhancing security, explainability, and optimization, offering improved efficiency and reliability for industrial operations.
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)

74 pages, 2233 KB  
Article
Advanced Hybrid Transformer-CNN Deep Learning Model for Effective Intrusion Detection Systems with Class Imbalance Mitigation Using Resampling Techniques
by Hesham Kamal and Maggie Mashaly
Future Internet 2024, 16(12), 481; https://doi.org/10.3390/fi16120481 - 23 Dec 2024
Cited by 35 | Viewed by 7045
Abstract
Network and cloud environments must be fortified against a dynamic array of threats, and intrusion detection systems (IDSs) are critical tools for identifying and thwarting hostile activities. IDSs, classified as anomaly-based or signature-based, have increasingly incorporated deep learning models into their framework. Recently, significant advancements have been made in anomaly-based IDSs, particularly those using machine learning, where attack detection accuracy has been notably high. Our proposed method demonstrates that deep learning models can achieve unprecedented success in identifying both known and unknown threats within cloud environments. However, existing benchmark datasets for intrusion detection typically contain more normal traffic samples than attack samples to reflect real-world network traffic. This imbalance in the training data makes it more challenging for IDSs to accurately detect specific types of attacks. Thus, our challenges arise from two key factors: unbalanced training data and the emergence of new, unidentified threats. To address these issues, we present a hybrid transformer-convolutional neural network (Transformer-CNN) deep learning model, which leverages data resampling techniques such as adaptive synthetic (ADASYN), synthetic minority oversampling technique (SMOTE), edited nearest neighbors (ENN), and class weights to overcome class imbalance. The transformer component of our model is employed for contextual feature extraction, enabling the system to analyze relationships and patterns in the data effectively. In contrast, the CNN is responsible for final classification, processing the extracted features to accurately identify specific attack types. The Transformer-CNN model focuses on three primary objectives to enhance detection accuracy and performance: (1) reducing false positives and false negatives, (2) enabling real-time intrusion detection in high-speed networks, and (3) detecting zero-day attacks. We evaluate our proposed model, Transformer-CNN, using the NF-UNSW-NB15-v2 and CICIDS2017 benchmark datasets, and assess its performance with metrics such as accuracy, precision, recall, and F1-score. The results demonstrate that our method achieves an impressive 99.71% accuracy in binary classification and 99.02% in multi-class classification on the NF-UNSW-NB15-v2 dataset, while for the CICIDS2017 dataset, it reaches 99.93% in binary classification and 99.13% in multi-class classification, significantly outperforming existing models. This confirms the enhanced capability of our IDS in defending cloud environments against intrusions, including zero-day attacks.
(This article belongs to the Section Cybersecurity)
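Of the resampling techniques named above, SMOTE's core rule (synthesise a minority sample by interpolating between a minority point and one of its nearest minority neighbours) fits in a few lines of NumPy. The paper presumably uses library implementations; this sketch shows only the underlying idea:

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    moving a random fraction of the way from a minority point toward one
    of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise distances within the minority class; a point is never its
    # own neighbour.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]      # k nearest per sample
    base = rng.integers(0, n, size=n_new)          # sample to start from
    nb = neighbours[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                   # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nb] - X_min[base])
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled set stays inside the minority class's convex hull, unlike naive duplication, which only repeats existing points.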

24 pages, 1273 KB  
Article
Flexible Hyper-Distributed IoT–Edge–Cloud Platform for Real-Time Digital Twin Applications on 6G-Intended Testbeds for Logistics and Industry
by Maria Crespo-Aguado, Raul Lozano, Fernando Hernandez-Gobertti, Nuria Molner and David Gomez-Barquero
Future Internet 2024, 16(11), 431; https://doi.org/10.3390/fi16110431 - 20 Nov 2024
Cited by 22 | Viewed by 6455
Abstract
This paper presents the design and development of a flexible hyper-distributed IoT–Edge–Cloud computing platform for real-time Digital Twins in real logistics and industrial environments, intended as a novel living lab and testbed for future 6G applications. It expands the limited capabilities of IoT devices with extended Cloud and Edge computing functionalities, creating an IoT–Edge–Cloud continuum platform composed of multiple stakeholder solutions, in which vertical application developers can take full advantage of the computing resources of the infrastructure. The platform is built together with a private 5G network to connect machines and sensors on a large scale. Artificial intelligence and machine learning are used to allocate computing resources for real-time services by an end-to-end intelligent orchestrator, and real-time distributed analytic tools leverage Edge computing platforms to support different types of Digital Twin applications for logistics and industry, such as immersive remote driving, with specific characteristics and features. Performance evaluations demonstrated the platform’s capability to support the high-throughput communications required for Digital Twins, achieving user-experienced rates close to the maximum theoretical values, up to 552 Mb/s for the downlink and 87.3 Mb/s for the uplink in the n78 frequency band. Moreover, the platform’s support for Digital Twins was validated via QoE assessments conducted on an immersive remote driving prototype, which demonstrated high levels of user satisfaction in key dimensions such as presence, engagement, control, sensory integration, and cognitive load.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

41 pages, 438 KB  
Review
Recent Advancements in Federated Learning: State of the Art, Fundamentals, Principles, IoT Applications and Future Trends
by Christos Papadopoulos, Konstantinos-Filippos Kollias and George F. Fragulis
Future Internet 2024, 16(11), 415; https://doi.org/10.3390/fi16110415 - 9 Nov 2024
Cited by 23 | Viewed by 10963
Abstract
Federated learning (FL) is creating a paradigm shift in machine learning by directing the focus of model training to where the data actually exist. Instead of drawing all data into a central location, which raises concerns about privacy, costs, and delays, FL allows learning to take place directly on the device, keeping the data safe and minimizing the need for transfer. This approach is especially important in areas like healthcare, where protecting patient privacy is critical, and in industrial IoT settings, where moving large volumes of data is not practical. What makes FL even more compelling is its ability to reduce the bias that can occur when all data are centralized, leading to fairer and more inclusive machine learning outcomes. However, it is not without its challenges—particularly with regard to keeping the models secure from attacks. Nonetheless, the potential benefits are clear: FL can lower the costs associated with data storage and processing, while also helping organizations to meet strict privacy regulations like GDPR. As edge computing continues to grow, FL’s decentralized approach could play a key role in shaping how we handle data in the future, moving toward a more privacy-conscious world. This study identifies ongoing challenges in ensuring model security against adversarial attacks, pointing to the need for further research in this area.
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
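The central aggregation step in FL, a FedAvg-style weighted mean of client model parameters so that raw data never leaves the device, can be sketched generically (this is the textbook rule, not any particular framework's API):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client parameter lists into a
    global model as a mean weighted by each client's local sample count.
    Only parameters travel to the server; the training data stay local."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two clients with a one-layer "model"; client 2 holds three times the data,
# so its parameters dominate the average.
w1 = [np.array([0.0, 4.0])]
w2 = [np.array([4.0, 0.0])]
global_w = fedavg([w1, w2], client_sizes=[1, 3])
```

Weighting by sample count is what keeps the global model from being skewed toward clients with little data, though it is also one of the levers adversarial clients can abuse, which connects to the security challenges the review highlights.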

42 pages, 9475 KB  
Review
Machine Learning and IoT-Based Solutions in Industrial Applications for Smart Manufacturing: A Critical Review
by Paolo Visconti, Giuseppe Rausa, Carolina Del-Valle-Soto, Ramiro Velázquez, Donato Cafagna and Roberto De Fazio
Future Internet 2024, 16(11), 394; https://doi.org/10.3390/fi16110394 - 26 Oct 2024
Cited by 35 | Viewed by 15144
Abstract
The Internet of Things (IoT) has radically changed the industrial world, enabling the integration of numerous systems and devices into the industrial ecosystem. There are many areas of the manufacturing industry in which IoT has contributed, including plants’ remote monitoring and control, energy efficiency, more efficient resource management, and cost reduction, paving the way for smart manufacturing in the framework of Industry 4.0. This review article provides an up-to-date overview of IoT systems and machine learning (ML) algorithms applied to smart manufacturing (SM), analyzing four main application fields: security, predictive maintenance, process control, and additive manufacturing. In addition, the paper presents a descriptive and comparative overview of ML algorithms mainly used in smart manufacturing. Furthermore, for each discussed topic, a deep comparative analysis of the recent IoT solutions reported in the scientific literature is introduced, dwelling on the architectural aspects, sensing solutions, implemented data analysis strategies, communication tools, performance, and other characteristic parameters. This comparison highlights the strengths and weaknesses of each discussed solution. Finally, the presented work outlines the features and functionalities of future IoT-based systems for smart industry applications.
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)

33 pages, 1577 KB  
Review
Health IoT Threats: Survey of Risks and Vulnerabilities
by Samaneh Madanian, Tserendorj Chinbat, Maduka Subasinghage, David Airehrour, Farkhondeh Hassandoust and Sira Yongchareon
Future Internet 2024, 16(11), 389; https://doi.org/10.3390/fi16110389 - 23 Oct 2024
Cited by 34 | Viewed by 11159
Abstract
The secure and efficient collection of patients’ vital information is a challenge faced by the healthcare industry. Through the adoption and application of the Internet of Things (IoT), the healthcare industry has seen an improvement in the quality of delivered services and patient safety. However, IoT utilization in healthcare is challenging due to the sensitive nature of patients’ clinical information and communicating this across heterogeneous networks and among IoT devices. We conducted a semi-systematic literature review to provide an overview of IoT security and privacy challenges in the healthcare sector over time. We collected 279 studies from 5 scientific databases, of which 69 articles met the requirements for inclusion. We performed thematic and qualitative content analysis to extract trends and information. According to our analysis, the vulnerabilities in IoT in healthcare are classified into three main layers: perception, network, and application. We comprehensively reviewed IoT privacy and security threats on each layer. Different technological advancements were suggested to address the identified vulnerabilities in healthcare. This review has practical implications, emphasizing that healthcare organizations, software developers, and device manufacturers must prioritize healthcare IoT security and privacy. A comprehensive, multilayered security approach, security-by-design principles, and training for staff and end-users must be adopted. Regulators and policy makers must also establish and enforce standards and regulations that promote the security and privacy of healthcare IoT. Overall, this study underscores the importance of ensuring the security and privacy of healthcare IoT, with stakeholders’ coordinated efforts to address the complex and evolving security and privacy threats in this field. This can enhance healthcare IoT trust and reliability, reduce the risks of security and privacy issues and attacks, and ultimately improve healthcare delivery quality and safety.
(This article belongs to the Special Issue Cybersecurity in the IoT)

52 pages, 18006 KB  
Review
A Survey of the Real-Time Metaverse: Challenges and Opportunities
by Mohsen Hatami, Qian Qu, Yu Chen, Hisham Kholidy, Erik Blasch and Erika Ardiles-Cruz
Future Internet 2024, 16(10), 379; https://doi.org/10.3390/fi16100379 - 18 Oct 2024
Cited by 83 | Viewed by 15876
Abstract
The metaverse concept has been evolving from static, pre-rendered virtual environments to a new frontier: the real-time metaverse. This survey paper explores the emerging field of real-time metaverse technologies, which enable the continuous integration of dynamic, real-world data into immersive virtual environments. We examine the key technologies driving this evolution, including advanced sensor systems (LiDAR, radar, cameras), artificial intelligence (AI) models for data interpretation, fast data fusion algorithms, and edge computing with 5G networks for low-latency data transmission. This paper reveals how these technologies are orchestrated to achieve near-instantaneous synchronization between physical and virtual worlds, a defining characteristic that distinguishes the real-time metaverse from its traditional counterparts. The survey provides a comprehensive insight into the technical challenges and discusses solutions to realize responsive dynamic virtual environments. The potential applications and impact of real-time metaverse technologies across various fields are considered, including live entertainment, remote collaboration, dynamic simulations, and urban planning with digital twins. By synthesizing current research and identifying future directions, this survey provides a foundation for understanding and advancing the rapidly evolving landscape of real-time metaverse technologies, contributing to the growing body of knowledge on immersive digital experiences and setting the stage for further innovations in this transformative field. Full article

37 pages, 2626 KB  
Article
A Survey of Security Strategies in Federated Learning: Defending Models, Data, and Privacy
by Habib Ullah Manzoor, Attia Shabbir, Ao Chen, David Flynn and Ahmed Zoha
Future Internet 2024, 16(10), 374; https://doi.org/10.3390/fi16100374 - 15 Oct 2024
Cited by 41 | Viewed by 15889
Abstract
Federated Learning (FL) has emerged as a transformative paradigm in machine learning, enabling decentralized model training across multiple devices while preserving data privacy. However, the decentralized nature of FL introduces significant security challenges, making it vulnerable to various attacks targeting models, data, and privacy. This survey provides a comprehensive overview of defense strategies against these attacks, categorizing them into defenses for models, data, and privacy. We explore pre-aggregation, in-aggregation, and post-aggregation defenses, highlighting their methodologies and effectiveness. Additionally, the survey delves into advanced techniques such as homomorphic encryption and differential privacy to safeguard sensitive information. The integration of blockchain technology for enhancing security in FL environments is also discussed, along with incentive mechanisms to promote active participation among clients. Through this detailed examination, the survey aims to inform and guide future research in developing robust defense frameworks for FL systems. Full article
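Among the pre-aggregation defenses surveyed, robust statistics are a common building block. The sketch below is illustrative only (not an implementation from any surveyed paper): a coordinate-wise median aggregator, which bounds the influence of a single poisoned update in a way a plain mean does not.

```python
import statistics

def median_aggregate(client_updates):
    """Aggregate client model updates coordinate-wise by the median.

    Unlike a plain mean, a single malicious client cannot drag the
    median arbitrarily far, which is why median-style rules appear
    among pre-aggregation defenses.
    """
    return [statistics.median(coords) for coords in zip(*client_updates)]

# Three honest clients and one poisoned update on a 3-parameter model.
honest = [[0.9, 1.1, 1.0], [1.0, 0.9, 1.1], [1.1, 1.0, 0.9]]
poisoned = [100.0, -100.0, 100.0]
robust = median_aggregate(honest + [poisoned])  # stays near 1.0 per coordinate
```

With a plain mean, the same poisoned update would shift every coordinate by roughly 25; the median keeps each aggregated coordinate between the honest values.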
(This article belongs to the Special Issue Privacy and Security Issues in IoT Systems)

17 pages, 1040 KB  
Article
Enhancing Heart Disease Prediction with Federated Learning and Blockchain Integration
by Yazan Otoum, Chaosheng Hu, Eyad Haj Said and Amiya Nayak
Future Internet 2024, 16(10), 372; https://doi.org/10.3390/fi16100372 - 14 Oct 2024
Cited by 23 | Viewed by 4881
Abstract
Federated learning offers a framework for developing local models across institutions while safeguarding sensitive data. This paper introduces a novel approach for heart disease prediction using the TabNet model, which combines the strengths of tree-based models and deep neural networks. Our study utilizes the Comprehensive Heart Disease and UCI Heart Disease datasets, leveraging TabNet’s architecture to enhance data handling in federated environments. Horizontal federated learning was implemented using the federated averaging algorithm to securely aggregate model updates across participants. Blockchain technology was integrated to enhance transparency and accountability, with smart contracts automating governance. The experimental results demonstrate that TabNet achieved the highest balanced metrics score of 1.594 after 50 epochs, with an accuracy of 0.822 and an epsilon value of 6.855, effectively balancing privacy and performance. The model also demonstrated strong accuracy with only 10 iterations on aggregated data, highlighting the benefits of multi-source data integration. This work presents a scalable, privacy-preserving solution for heart disease prediction, combining TabNet and blockchain to address key healthcare challenges while ensuring data integrity. Full article
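The federated averaging step described above can be sketched in a few lines; this toy version (variable names are ours, not from the paper) weights each participant's model parameters by its local sample count.

```python
def fed_avg(client_weights, sample_counts):
    """Federated averaging: combine local model parameters into a
    global model, weighting each client by its number of samples."""
    total = sum(sample_counts)
    num_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, sample_counts)) / total
        for i in range(num_params)
    ]

# Two institutions with 300 and 100 local records, a 2-parameter model.
global_model = fed_avg([[0.2, 0.4], [0.6, 0.8]], [300, 100])  # ≈ [0.3, 0.5]
```

In the paper's setting, each round of this aggregation would additionally be recorded via smart contracts for transparency; only the weighted average itself is shown here.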

31 pages, 5936 KB  
Article
Advanced Optimization Techniques for Federated Learning on Non-IID Data
by Filippos Efthymiadis, Aristeidis Karras, Christos Karras and Spyros Sioutas
Future Internet 2024, 16(10), 370; https://doi.org/10.3390/fi16100370 - 13 Oct 2024
Cited by 17 | Viewed by 8742
Abstract
Federated learning enables model training on multiple clients locally, without the need to transfer their data to a central server, thus ensuring data privacy. In this paper, we investigate the impact of Non-Independent and Identically Distributed (non-IID) data on the performance of federated training, where we find a reduction in accuracy of up to 29% for neural networks trained in environments with skewed non-IID data. Two optimization strategies are presented to address this issue. The first strategy applies a cyclical learning rate to set the learning rate during federated training, while the second develops a sharing and pre-training method on augmented data to improve the algorithm’s efficiency on non-IID data. By combining these two methods, experiments show that the accuracy on the CIFAR-10 dataset increased by about 36% while achieving faster convergence by reducing the number of required communication rounds by a factor of 5.33. The proposed techniques lead to improved accuracy and faster model convergence, thus representing a significant advance in the field of federated learning and facilitating its application to real-world scenarios. Full article
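A cyclical learning rate, the first strategy above, is commonly implemented as a triangular schedule; the sketch below assumes that standard triangular policy (the exact schedule and hyperparameter values used in the paper may differ).

```python
def triangular_clr(step, base_lr=0.001, max_lr=0.006, step_size=10):
    """Triangular cyclical learning rate: the LR climbs linearly from
    base_lr to max_lr over step_size steps, then descends back,
    repeating every 2 * step_size steps."""
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)  # position in cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * (1.0 - x)

# One full cycle: low -> mid -> peak -> mid -> low.
schedule = [triangular_clr(s) for s in (0, 5, 10, 15, 20)]
```

Cycling the rate lets training periodically take larger steps, which can help escape the poor local minima that skewed non-IID client data tends to produce.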
(This article belongs to the Special Issue Distributed Storage of Large Knowledge Graphs with Mobility Data)

29 pages, 778 KB  
Review
Large Language Models Meet Next-Generation Networking Technologies: A Review
by Ching-Nam Hang, Pei-Duo Yu, Roberto Morabito and Chee-Wei Tan
Future Internet 2024, 16(10), 365; https://doi.org/10.3390/fi16100365 - 7 Oct 2024
Cited by 53 | Viewed by 22071
Abstract
The evolution of network technologies has significantly transformed global communication, information sharing, and connectivity. Traditional networks, relying on static configurations and manual interventions, face substantial challenges such as complex management, inefficiency, and susceptibility to human error. The rise of artificial intelligence (AI) has begun to address these issues by automating tasks like network configuration, traffic optimization, and security enhancements. Despite their potential, integrating AI models in network engineering encounters practical obstacles including complex configurations, heterogeneous infrastructure, unstructured data, and dynamic environments. Generative AI, particularly large language models (LLMs), represents a promising advancement in AI, with capabilities extending to natural language processing tasks like translation, summarization, and sentiment analysis. This paper aims to provide a comprehensive review exploring the transformative role of LLMs in modern network engineering. In particular, it addresses gaps in the existing literature by focusing on LLM applications in network design and planning, implementation, analytics, and management. It also discusses current research efforts, challenges, and future opportunities, aiming to provide a comprehensive guide for networking professionals and researchers. The main goal is to facilitate the adoption and advancement of AI and LLMs in networking, promoting more efficient, resilient, and intelligent network systems. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)

19 pages, 756 KB  
Article
AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities
by Chuhao Wu, He Zhang and John M. Carroll
Future Internet 2024, 16(10), 354; https://doi.org/10.3390/fi16100354 - 28 Sep 2024
Cited by 28 | Viewed by 23787
Abstract
Generative AI has drawn significant attention from stakeholders in higher education. As it introduces new opportunities for personalized learning and tutoring support, it simultaneously poses challenges to academic integrity and leads to ethical issues. Consequently, governing responsible AI usage within higher education institutions (HEIs) becomes increasingly important. Leading universities have already published guidelines on Generative AI, with most attempting to embrace this technology responsibly. This study provides a new perspective by focusing on strategies for responsible AI governance as demonstrated in these guidelines. Through a case study of 14 prestigious universities in the United States, we identified the multi-unit governance of AI, the role-specific governance of AI, and the academic characteristics of AI governance from their AI guidelines. The strengths and potential limitations of these strategies and characteristics are discussed. The findings offer practical implications for guiding responsible AI usage in HEIs and beyond. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

28 pages, 3973 KB  
Systematic Review
Edge Computing in Healthcare: Innovations, Opportunities, and Challenges
by Alexandru Rancea, Ionut Anghel and Tudor Cioara
Future Internet 2024, 16(9), 329; https://doi.org/10.3390/fi16090329 - 10 Sep 2024
Cited by 125 | Viewed by 28695
Abstract
Edge computing, which promises to process data close to its generation point while reducing latency and bandwidth usage compared with traditional cloud computing architectures, has attracted significant attention lately. The integration of edge computing in modern systems takes advantage of Internet of Things (IoT) devices and can potentially improve the systems’ performance, scalability, privacy, and security, with applications in different domains. In the healthcare domain, modern IoT devices can nowadays be used to gather vital parameters and information that can be fed to edge Artificial Intelligence (AI) techniques able to offer precious insights and support to healthcare professionals. However, issues regarding data privacy and security, AI optimization, and computational offloading at the edge pose challenges to the adoption of edge AI. This paper aims to explore the current state of the art of edge AI in healthcare by using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology and analyzing more than 70 Web of Science articles. We have defined the relevant research questions and clear inclusion and exclusion criteria, and classified the research works in three main directions: privacy and security, AI-based optimization methods, and edge offloading techniques. The findings highlight the many advantages of integrating edge computing in a wide range of healthcare use cases requiring data privacy and security, near real-time decision-making, and efficient communication links, with the potential to transform future healthcare services and eHealth applications. However, further research is needed to develop new security-preserving methods and to better orchestrate and coordinate the load in distributed and decentralized scenarios. Full article
(This article belongs to the Special Issue Privacy and Security Issues in IoT Systems)

34 pages, 2225 KB  
Review
Graph Attention Networks: A Comprehensive Review of Methods and Applications
by Aristidis G. Vrahatis, Konstantinos Lazaros and Sotiris Kotsiantis
Future Internet 2024, 16(9), 318; https://doi.org/10.3390/fi16090318 - 3 Sep 2024
Cited by 173 | Viewed by 38052
Abstract
Real-world problems often exhibit complex relationships and dependencies, which can be effectively captured by graph learning systems. Graph attention networks (GATs) have emerged as a powerful and versatile framework in this direction, inspiring numerous extensions and applications in several areas. In this review, we present a thorough examination of GATs, covering both diverse approaches and a wide range of applications. We examine the principal GAT-based categories, including Global Attention Networks, Multi-Layer Architectures, graph-embedding techniques, Spatial Approaches, and Variational Models. Furthermore, we delve into the diverse applications of GATs in various systems such as recommendation systems, image analysis, medical domain, sentiment analysis, and anomaly detection. This review seeks to act as a navigational reference for researchers and practitioners aiming to emphasize the capabilities and prospects of GATs. Full article
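At the core of a GAT layer, each node softmax-normalizes attention scores over its neighbors and aggregates their features with the resulting coefficients. The simplified sketch below omits the learned linear transform and LeakyReLU scoring of the full architecture, showing only the normalization-and-aggregation step.

```python
import math

def attention_coefficients(scores):
    """Softmax-normalize raw attention scores over a node's neighbors:
    alpha_ij = exp(e_ij) / sum_k exp(e_ik)."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(neighbor_features, scores):
    """Attention-weighted sum of neighbor feature vectors."""
    alphas = attention_coefficients(scores)
    dim = len(neighbor_features[0])
    return [sum(a * f[d] for a, f in zip(alphas, neighbor_features))
            for d in range(dim)]

# Two neighbors; equal scores reduce to a plain average of their features.
pooled = aggregate([[1.0, 0.0], [0.0, 1.0]], scores=[0.3, 0.3])
```

Because the coefficients are learned per edge, a GAT can weight informative neighbors more heavily than a plain graph convolution, which averages uniformly.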
(This article belongs to the Special Issue State-of-the-Art Future Internet Technologies in Greece 2024–2025)

16 pages, 456 KB  
Review
A Survey on Data Availability in Layer 2 Blockchain Rollups: Open Challenges and Future Improvements
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(9), 315; https://doi.org/10.3390/fi16090315 - 29 Aug 2024
Cited by 21 | Viewed by 10250
Abstract
Layer 2 solutions have emerged in recent years as a valuable alternative to increase the throughput and scalability of blockchain-based architectures. The three primary types of Layer 2 solutions are state channels, sidechains, and rollups. Rollups are particularly promising, allowing significant improvements in transaction throughput, security, and efficiency, and have been adopted by many real-world projects, such as Polygon and Optimism. However, the adoption of Layer 2 solutions has led to other challenges, such as the data availability problem, where transaction data processed off-chain must be posted back on the main chain. This is crucial to prevent data withholding attacks and ensure all participants can independently verify the blockchain state. This paper provides a comprehensive survey of existing rollup-based Layer 2 solutions with a focus on the data availability problem and discusses their major advantages and disadvantages. Finally, an analysis of open challenges and future research directions is provided. Full article

32 pages, 1667 KB  
Review
Artificial Intelligence Applications in Smart Healthcare: A Survey
by Xian Gao, Peixiong He, Yi Zhou and Xiao Qin
Future Internet 2024, 16(9), 308; https://doi.org/10.3390/fi16090308 - 27 Aug 2024
Cited by 36 | Viewed by 18052
Abstract
The rapid development of AI technology in recent years has led to its widespread use in daily life, where it plays an increasingly important role. In healthcare, AI has been integrated into the field to develop the new domain of smart healthcare. In smart healthcare, opportunities and challenges coexist. This article provides a comprehensive overview of past developments and recent progress in this area. First, we summarize the definition and characteristics of smart healthcare. Second, we explore the opportunities that AI technology brings to the smart healthcare field from a macro perspective. Third, we categorize specific AI applications in smart healthcare into ten domains and discuss their technological foundations individually. Finally, we identify ten key challenges these applications face and discuss the existing solutions for each. Full article
(This article belongs to the Special Issue eHealth and mHealth)

29 pages, 521 KB  
Review
A Survey on the Use of Large Language Models (LLMs) in Fake News
by Eleftheria Papageorgiou, Christos Chronis, Iraklis Varlamis and Yassine Himeur
Future Internet 2024, 16(8), 298; https://doi.org/10.3390/fi16080298 - 19 Aug 2024
Cited by 49 | Viewed by 24260
Abstract
The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The article delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as both a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact-checking, verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the article covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, LLMs face challenges such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based fake news and fake profile detection, underscoring the importance of continued innovation to safeguard the authenticity of online information. Full article

37 pages, 1164 KB  
Article
Early Ransomware Detection with Deep Learning Models
by Matan Davidian, Michael Kiperberg and Natalia Vanetik
Future Internet 2024, 16(8), 291; https://doi.org/10.3390/fi16080291 - 11 Aug 2024
Cited by 8 | Viewed by 7158
Abstract
Ransomware is an increasingly popular type of malware that restricts access to the victim’s system or data until a ransom is paid. Traditional detection methods rely on analyzing the malware’s content, but these methods are ineffective against unknown or zero-day malware. Therefore, zero-day malware detection typically involves observing the malware’s behavior, specifically the sequence of application programming interface (API) calls it makes, such as reading and writing files or enumerating directories. While previous studies have used machine learning (ML) techniques to classify API call sequences, they have only considered the API call name. This paper systematically compares various subsets of API call features, different ML techniques, and context-window sizes to identify the optimal ransomware classifier. Our findings indicate that a context-window size of 7 is ideal, and the most effective ML techniques are CNN and LSTM. Additionally, augmenting the API call name with the operation result significantly enhances the classifier’s precision. Performance analysis suggests that this classifier can be effectively applied in real-time scenarios. Full article
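The context-window idea is straightforward to illustrate: the API call trace is cut into overlapping fixed-size windows, each of which becomes one sample for the sequence classifier. The call names below are illustrative; the window size of 7 is the value the paper found optimal.

```python
def context_windows(api_calls, size=7):
    """Slice an API call trace into overlapping windows of `size`
    consecutive calls; each window is one classifier sample."""
    return [tuple(api_calls[i:i + size])
            for i in range(len(api_calls) - size + 1)]

# A toy trace of 9 calls yields 9 - 7 + 1 = 3 windows of length 7.
trace = ["NtOpenFile", "NtReadFile", "NtWriteFile"] * 3
samples = context_windows(trace)
```

In the paper's best configuration, each element of a window would also carry the operation result alongside the call name before being fed to the CNN or LSTM.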
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)

25 pages, 3302 KB  
Article
Multi-Class Intrusion Detection Based on Transformer for IoT Networks Using CIC-IoT-2023 Dataset
by Shu-Ming Tseng, Yan-Qi Wang and Yung-Chung Wang
Future Internet 2024, 16(8), 284; https://doi.org/10.3390/fi16080284 - 8 Aug 2024
Cited by 68 | Viewed by 17975
Abstract
This study uses deep learning methods to explore Internet of Things (IoT) network intrusion detection based on the CIC-IoT-2023 dataset, which contains extensive data on real-life IoT environments. Building on this dataset, this study proposes an effective intrusion detection method. We apply seven deep learning models, including a Transformer, to analyze network traffic characteristics and identify abnormal behavior and potential intrusions through binary and multi-class classification. Unlike prior work, we not only use a Transformer model but also evaluate its performance in multi-class classification. Although the accuracy of the Transformer model in binary classification is lower than that of the DNN and CNN + LSTM hybrid models, it achieves better results in multi-class classification. The binary classification accuracy of our model is 0.74% higher than that of papers that also use a Transformer on TON-IoT. In multi-class classification, our best-performing model is the Transformer, which reaches 99.40% accuracy. Its accuracy is 3.8%, 0.65%, and 0.29% higher than the 95.60%, 98.75%, and 99.11% figures recorded in papers using the same dataset, respectively. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)

25 pages, 3477 KB  
Article
Overlay and Virtual Private Networks Security Performances Analysis with Open Source Infrastructure Deployment
by Antonio Francesco Gentile, Davide Macrì, Emilio Greco and Peppino Fazio
Future Internet 2024, 16(8), 283; https://doi.org/10.3390/fi16080283 - 7 Aug 2024
Cited by 9 | Viewed by 6160
Abstract
Nowadays, some of the most widely deployed infrastructures are Virtual Private Networks (VPNs) and Overlay Networks (ONs). They consist of hardware and software components designed to build private/secure channels, typically over the Internet, and are currently among the most reliable technologies for achieving this objective. VPNs are well-established and can be patched to address security vulnerabilities, while overlay networks represent the next-generation solution for secure communication. In this paper, for both VPNs and ONs, we analyze two important network performance components (RTT and bandwidth) while varying the type of overlay network used to interconnect traffic between two or more hosts (in the same data center, in different data centers in the same building, or over the Internet). These networks establish connections between KVM (Kernel-based Virtual Machine) instances rather than the typical Docker/LXC/Podman containers. The first analysis assesses network performance as is, without any overlay channels; the second establishes various channels without encryption; and the final analysis encapsulates overlay traffic via IPsec (transport mode), where encrypted channels like VTI are not already available. An extensive set of traffic simulation campaigns shows the obtained performance. Full article

32 pages, 15790 KB  
Review
Human–AI Collaboration for Remote Sighted Assistance: Perspectives from the LLM Era
by Rui Yu, Sooyeon Lee, Jingyi Xie, Syed Masum Billah and John M. Carroll
Future Internet 2024, 16(7), 254; https://doi.org/10.3390/fi16070254 - 18 Jul 2024
Cited by 12 | Viewed by 8186
Abstract
Remote sighted assistance (RSA) has emerged as a conversational technology aiding people with visual impairments (VI) through real-time video chat communication with sighted agents. We conducted a literature review and interviewed 12 RSA users to understand the technical and navigational challenges faced by both agents and users. The technical challenges were categorized into four groups: agents’ difficulties in orienting and localizing users, acquiring and interpreting users’ surroundings and obstacles, delivering information specific to user situations, and coping with poor network connections. We also presented 15 real-world navigational challenges, including 8 outdoor and 7 indoor scenarios. Given the spatial and visual nature of these challenges, we identified relevant computer vision problems that could potentially provide solutions. We then formulated 10 emerging problems that neither human agents nor computer vision can fully address alone. For each emerging problem, we discussed solutions grounded in human–AI collaboration. Additionally, with the advent of large language models (LLMs), we outlined how RSA can integrate with LLMs within a human–AI collaborative framework, envisioning the future of visual prosthetics. Full article

23 pages, 714 KB  
Review
Smart Irrigation Systems from Cyber–Physical Perspective: State of Art and Future Directions
by Mian Qian, Cheng Qian, Guobin Xu, Pu Tian and Wei Yu
Future Internet 2024, 16(7), 234; https://doi.org/10.3390/fi16070234 - 29 Jun 2024
Cited by 34 | Viewed by 7694
Abstract
Irrigation refers to supplying water to soil through pipes, pumps, and spraying systems to ensure even distribution across the field. In traditional farming or gardening, the setup and usage of an agricultural irrigation system rely solely on the personal experience of farmers. The Food and Agriculture Organization of the United Nations (UN) has projected that by 2030, developing countries will expand their irrigated areas by 34%, while water consumption will increase by only 14%. This discrepancy highlights the importance of accurately monitoring water flow and volume rather than relying on rough estimates. Smart irrigation systems, a key subsystem of smart agriculture known as the cyber–physical system (CPS) in the agriculture domain, automate the administration of water flow, volume, and timing by using cutting-edge technologies, especially Internet of Things (IoT) technology, to address these challenges. This study explores a comprehensive three-dimensional problem space to thoroughly analyze the IoT’s applications in irrigation systems. Our framework encompasses several critical domains in smart irrigation systems, including soil science, sensor technology, communication protocols, data analysis techniques, and the practical implementations of automated irrigation systems, such as remote monitoring, autonomous operation, and intelligent decision-making processes. Finally, we discuss a few challenges and outline future research directions in this promising field. Full article
36 pages, 3662 KB  
Article
Enhancing Network Slicing Security: Machine Learning, Software-Defined Networking, and Network Functions Virtualization-Driven Strategies
by José Cunha, Pedro Ferreira, Eva M. Castro, Paula Cristina Oliveira, Maria João Nicolau, Iván Núñez, Xosé Ramon Sousa and Carlos Serôdio
Future Internet 2024, 16(7), 226; https://doi.org/10.3390/fi16070226 - 27 Jun 2024
Cited by 52 | Viewed by 10291
Abstract
The rapid development of 5G networks and the anticipation of 6G technologies have ushered in an era of highly customizable network environments facilitated by the innovative concept of network slicing. This technology allows the creation of multiple virtual networks on the same physical infrastructure, each optimized for specific service requirements. Despite its numerous benefits, network slicing introduces significant security vulnerabilities that must be addressed to prevent exploitation by increasingly sophisticated cyber threats. This review explores the application of cutting-edge technologies—Artificial Intelligence (AI), specifically Machine Learning (ML), Software-Defined Networking (SDN), and Network Functions Virtualization (NFV)—in crafting advanced security solutions tailored for network slicing. AI’s predictive threat detection and automated response capabilities are analysed, highlighting its role in maintaining service integrity and resilience. Meanwhile, SDN and NFV are scrutinized for their ability to enforce flexible security policies and manage network functionalities dynamically, thereby enhancing the adaptability of security measures to meet evolving network demands. Thoroughly examining the current literature and industry practices, this paper identifies critical research gaps in security frameworks and proposes innovative solutions. We advocate for a holistic security strategy integrating ML, SDN, and NFV to enhance data confidentiality, integrity, and availability across network slices. The paper concludes with future research directions to develop robust, scalable, and efficient security frameworks capable of supporting the safe deployment of network slicing in next-generation networks. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
12 pages, 1053 KB  
Article
Adapting Self-Regulated Learning in an Age of Generative Artificial Intelligence Chatbots
by Joel Weijia Lai
Future Internet 2024, 16(6), 218; https://doi.org/10.3390/fi16060218 - 20 Jun 2024
Cited by 34 | Viewed by 12813
Abstract
The increasing use of generative artificial intelligence (GenAI) has led to a rise in conversations about how teachers and students should adopt these tools to enhance the learning process. Self-regulated learning (SRL) research is important for addressing this question. A popular form of GenAI is the large language model chatbot, which allows users to seek answers to their queries. This article seeks to adapt current SRL models to understand student learning with these chatbots. This is achieved by classifying the prompts supplied by a learner to an educational chatbot into learning actions and processes using the process–action library. Subsequently, through process mining, we can analyze these data to provide valuable insights for learners, educators, instructional designers, and researchers into the possible applications of chatbots for SRL. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
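The pipeline the abstract outlines, classifying learner prompts into SRL processes and then mining the resulting traces, can be sketched in miniature. The keyword rules and category names below are illustrative stand-ins for the paper's process–action library, which is not reproduced here:

```python
# Toy mapping of learner prompts to coarse SRL process labels, followed
# by transition counting (the raw material for process mining). Rules
# and labels are invented stand-ins for the paper's process-action library.
from collections import Counter

RULES = [
    ("plan", ["goal", "plan", "where do i start"]),
    ("monitor", ["am i right", "check my", "is this correct"]),
    ("elaborate", ["explain", "why", "example"]),
]

def classify(prompt):
    """Return the first SRL process whose keywords match the prompt."""
    text = prompt.lower()
    for process, keywords in RULES:
        if any(k in text for k in keywords):
            return process
    return "other"

prompts = [
    "What is my goal for this topic?",
    "Explain why the proof works.",
    "Is this correct: F = ma?",
]
trace = [classify(p) for p in prompts]
transitions = Counter(zip(trace, trace[1:]))
print(trace, dict(transitions))
```

In practice, the classification step would use the full process–action library rather than keyword matching, but the trace-then-mine structure is the same.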
40 pages, 5898 KB  
Article
Authentication and Key Agreement Protocol in Hybrid Edge–Fog–Cloud Computing Enhanced by 5G Networks
by Jiayi Zhang, Abdelkader Ouda and Raafat Abu-Rukba
Future Internet 2024, 16(6), 209; https://doi.org/10.3390/fi16060209 - 14 Jun 2024
Cited by 15 | Viewed by 4267
Abstract
The Internet of Things (IoT) has revolutionized connected devices, with applications in healthcare, data analytics, and smart cities. For time-sensitive applications, 5G wireless networks provide ultra-reliable low-latency communication (URLLC), and fog computing offloads IoT processing. Integrating 5G and fog computing can address cloud computing’s deficiencies, but security challenges remain, especially in Authentication and Key Agreement, due to the distributed and dynamic nature of fog computing. This study presents an innovative mutual Authentication and Key Agreement protocol specifically tailored to the security needs of fog computing in the edge–fog–cloud three-tier architecture, enhanced by the incorporation of the 5G network. It improves security in the edge–fog–cloud context by introducing a stateless authentication mechanism and includes a comparative analysis of the proposed protocol against well-known alternatives, such as TLS 1.3, 5G-AKA, and various handover protocols. The suggested approach has a total transmission cost of only 1280 bits in the authentication phase, approximately 30% lower than that of other protocols. In addition, the suggested handover protocol involves only two signaling costs. The computational cost of handover authentication for the edge user is significantly low, measuring 0.243 ms, under 10% of the computing costs of other authentication protocols. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks)
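Transmission-cost figures like the 1280-bit total quoted in the abstract are typically obtained by itemizing every field in every authentication-phase message. The message and field names and sizes below are hypothetical, chosen only to show how such a budget might be tallied, and are not the paper's actual protocol format:

```python
# Itemize a protocol's authentication-phase transmission cost in bits.
# Message/field names and sizes are hypothetical illustrations, not the
# paper's message format.
MESSAGES = {
    "edge_hello":   {"device_id": 128, "nonce": 128, "mac": 256},
    "fog_reply":    {"nonce": 128, "session_token": 256, "mac": 256},
    "edge_confirm": {"mac": 128},
}

def total_bits(messages):
    """Sum field sizes over all messages in the phase."""
    return sum(sum(fields.values()) for fields in messages.values())

print(total_bits(MESSAGES))  # 1280 bits with these assumed sizes
```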
22 pages, 2903 KB  
Article
Implementation of Lightweight Machine Learning-Based Intrusion Detection System on IoT Devices of Smart Homes
by Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi and Hadi Larijani
Future Internet 2024, 16(6), 200; https://doi.org/10.3390/fi16060200 - 5 Jun 2024
Cited by 36 | Viewed by 9610
Abstract
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not been embedded in IoT devices. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for the classification of attacks. The proposed IDS can detect threats with an accuracy of 99.50% at the cloud level (multiclassification). For real-time testing, we implemented a Raspberry Pi 4-based adversary to generate a dataset of man-in-the-middle (MITM) and denial-of-service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
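The two-layer split the abstract describes can be sketched as a cheap on-device screen that forwards only suspicious traffic to a heavier cloud-side multiclass step. The features, thresholds, and centroids below are made up, and the toy nearest-centroid step merely stands in for the paper's actual models (e.g., XGBoost):

```python
# Sketch of a two-layered IDS: layer 1 is a cheap on-device screen,
# layer 2 a cloud-side multiclass step run only on flagged samples.
# Features, thresholds, and centroids are invented illustrations.
def device_screen(sample, rate_limit=100.0):
    """Layer 1 (on the thermostat): flag abnormally high packet rates."""
    return sample["pkts_per_s"] > rate_limit

CENTROIDS = {  # layer 2: toy per-attack centroids (pkts/s, dup_acks)
    "dos": (900.0, 2.0),
    "mitm": (150.0, 40.0),
}

def cloud_classify(sample):
    """Layer 2 (in the cloud): nearest centroid over two features."""
    x = (sample["pkts_per_s"], sample["dup_acks"])
    return min(CENTROIDS,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(CENTROIDS[k], x)))

flow = {"pkts_per_s": 880.0, "dup_acks": 1.0}
label = cloud_classify(flow) if device_screen(flow) else "benign"
print(label)  # "dos"
```

The design point is that the microcontroller only runs the cheap screen, so the expensive classification cost is paid solely for traffic that already looks anomalous.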
32 pages, 1109 KB  
Article
Impact, Compliance, and Countermeasures in Relation to Data Breaches in Publicly Traded U.S. Companies
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara, Robson de Oliveira Albuquerque and Georges Daniel Amvame Nze
Future Internet 2024, 16(6), 201; https://doi.org/10.3390/fi16060201 - 5 Jun 2024
Cited by 20 | Viewed by 13587
Abstract
A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ, augmented with additional information regarding each targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences for the affected companies and the individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impact of data breaches on stock market prices. We also review guidelines for appropriately responding to data leaks in the U.S. to achieve compliance and reduce costs. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to jointly address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide. Full article
(This article belongs to the Collection Information Systems Security)
21 pages, 718 KB  
Review
Using ChatGPT in Software Requirements Engineering: A Comprehensive Review
by Nuno Marques, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2024, 16(6), 180; https://doi.org/10.3390/fi16060180 - 21 May 2024
Cited by 86 | Viewed by 19321
Abstract
Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still emerging and remains in its early stages. This paper analyzes the role of large language models (LLMs), such as ChatGPT-3.5, in software requirements engineering, a critical area in software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and provides strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process. Full article
30 pages, 5255 KB  
Article
Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection
by Muhammad Imran, Annalisa Appice and Donato Malerba
Future Internet 2024, 16(5), 168; https://doi.org/10.3390/fi16050168 - 12 May 2024
Cited by 23 | Viewed by 8814
Abstract
During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm for recognising malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating samples at test time to violate model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files. LGBM is a Gradient-Boosted Decision Tree model learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing machine learning models through the adversarial training strategy. The main contributions of this article are as follows: (1) We extend existing machine learning studies, which commonly consider small datasets, by exploring the evasion ability of state-of-the-art Windows PE attack methods on a larger evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study explaining how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. (3) We explore the performance of the adversarial training strategy as a means to secure effective decision models against adversarial Windows PE malware files generated with the considered attack methods. The study shows that GAMMA is the most effective evasion method in the performed comparative analysis, and that adversarial training can help in recognising adversarial PE malware generated with GAMMA, while also explaining how it changes model decisions. Full article
(This article belongs to the Collection Information Systems Security)
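The intuition behind padding-style evasion attacks such as those studied here can be shown with a toy byte-level detector: appending benign-looking bytes that never execute dilutes the malicious signal the model relies on. The scorer below is an invented stand-in, not MalConv or LGBM, and the byte values are arbitrary:

```python
# Toy illustration of padding-style evasion: a byte-level detector scores
# a file by the fraction of "suspicious" bytes, and appending benign
# padding (which does not execute) pushes the score below threshold.
# The scorer is an invented stand-in, not MalConv/LGBM.
SUSPICIOUS = {0x90, 0xCC}  # bytes our toy detector treats as malicious-ish

def score(data: bytes) -> float:
    """Fraction of suspicious bytes; >= 0.5 means 'malware' here."""
    return sum(b in SUSPICIOUS for b in data) / len(data)

malware = bytes([0x90] * 6 + [0x00] * 4)   # score 0.6 -> detected
padded = malware + bytes([0x41] * 10)      # append 'A' padding bytes
print(score(malware), score(padded))       # 0.6 0.3 -> evades threshold
```

Real attacks like GAMMA are far more constrained (the file must remain a valid, functional PE), but the mechanism of injecting content outside the executed code path is the same.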
20 pages, 7164 KB  
Review
A Comprehensive Review of Machine Learning Approaches for Anomaly Detection in Smart Homes: Experimental Analysis and Future Directions
by Md Motiur Rahman, Deepti Gupta, Smriti Bhatt, Shiva Shokouhmand and Miad Faezipour
Future Internet 2024, 16(4), 139; https://doi.org/10.3390/fi16040139 - 19 Apr 2024
Cited by 17 | Viewed by 6388
Abstract
Detecting anomalies in human activities is increasingly crucial today, particularly in nuclear family settings, where individuals’ health, especially that of the elderly, may not be constantly monitored during critical periods. Early anomaly detection can prevent attack scenarios and life-threatening situations. This task becomes notably more complex when multiple ambient sensors are deployed in homes with multiple residents, as opposed to single-resident environments. Additionally, the availability of datasets containing anomalies representing the full spectrum of abnormalities is limited. In our experimental study, we employed eight widely used machine learning and two deep learning classifiers to identify anomalies in human activities. We meticulously generated anomalies, considering all conceivable scenarios. Our findings reveal that the Gated Recurrent Unit (GRU) excels in accurately classifying normal and anomalous activities, while the naïve Bayes classifier demonstrates relatively poor performance among the ten classifiers considered. We conducted various experiments to assess the impact of different training–test splitting ratios, along with a five-fold cross-validation technique, on performance. Notably, the GRU model consistently outperformed all other classifiers under both conditions. Furthermore, we offer insights into the computational costs associated with these classifiers, encompassing the training and prediction phases. Extensive ablation experiments conducted in this study underscore that all these classifiers can effectively be deployed for anomaly detection in two-resident homes. Full article
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)
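The five-fold cross-validation protocol mentioned in the abstract follows a standard loop, sketched below. The one-feature threshold "classifier" and the synthetic data are illustrative only; they stand in for the paper's ten classifiers and its ambient-sensor dataset:

```python
# Minimal 5-fold cross-validation loop of the kind used to compare
# classifiers. The threshold rule and synthetic data are illustrative,
# not the paper's GRU or sensor dataset.
def k_fold(n, k=5):
    """Yield (train_idx, test_idx) splits over range(n)."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, test

X = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95, 0.12, 0.88, 0.18, 0.92]
y = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]  # 1 = anomalous activity

def predict(x, thr=0.5):
    """Toy classifier: flag anomalous when the feature exceeds thr."""
    return int(x > thr)

accs = []
for train, test in k_fold(len(X), k=5):
    correct = sum(predict(X[i]) == y[i] for i in test)
    accs.append(correct / len(test))
print(sum(accs) / len(accs))  # 1.0 on this perfectly separable toy data
```

A real comparison would fit each classifier on the train indices before scoring the test fold; the splitting and averaging logic is unchanged.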
21 pages, 991 KB  
Article
Metaverse Meets Smart Cities—Applications, Benefits, and Challenges
by Florian Maier and Markus Weinberger
Future Internet 2024, 16(4), 126; https://doi.org/10.3390/fi16040126 - 8 Apr 2024
Cited by 26 | Viewed by 6534
Abstract
The metaverse aims to merge the virtual and real worlds. The target is to generate a virtual community where social components play a crucial role, combining different areas such as entertainment, work, shopping, and services. This idea is especially appealing in the context of smart cities. The metaverse offers digitalization approaches and can strengthen citizens’ social community. While the existing literature covers the exemplary potential of smart city metaverse applications, this study aims to provide a comprehensive overview of potential and already implemented metaverse applications in the context of cities and municipalities. In addition, challenges related to these applications are identified. The study combines literature reviews and expert interviews to ensure a broad overview. Forty-eight smart city metaverse applications from eleven areas were identified, and actual projects from eleven cities demonstrate the current state of development. Further research should evaluate the benefits of the various applications and find strategies to overcome the identified challenges. Full article
13 pages, 395 KB  
Article
Efficient and Secure Distributed Data Storage and Retrieval Using Interplanetary File System and Blockchain
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(3), 98; https://doi.org/10.3390/fi16030098 - 15 Mar 2024
Cited by 25 | Viewed by 7906
Abstract
Blockchain technology has been successfully applied in recent years to promote the immutability, traceability, and authenticity of previously collected and stored data. However, the amount of data stored in the blockchain is usually limited for economic and technological reasons. Namely, the blockchain usually stores only a fingerprint of the data, such as its hash, while the full, raw information is stored off-chain. This is generally enough to guarantee immutability and traceability, but it fails to support another important property, namely data availability. This is particularly true when a traditional, centralized database is chosen for off-chain storage. For this reason, many proposals try to properly combine blockchain with decentralized IPFS storage. However, the storage of data on IPFS can pose privacy problems. This paper proposes a solution that combines blockchain, IPFS, and encryption techniques to guarantee immutability, traceability, availability, and data privacy. Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends)
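The store-and-verify flow the abstract describes can be sketched end to end: encrypt the data, keep the ciphertext in content-addressed off-chain storage, and anchor only its hash on-chain. Below, a dict stands in for IPFS, a list for the blockchain, and the SHA-256 keystream XOR is only an illustration of "encrypt before storing", not a real cipher:

```python
# Sketch of combined blockchain + IPFS + encryption: store ciphertext in
# a content-addressed store, anchor only its hash on-chain, and verify
# integrity on retrieval. Mocks throughout: dict = IPFS, list = chain,
# and the XOR keystream is NOT a secure cipher.
import hashlib

ipfs = {}   # mock content-addressed store (cid -> ciphertext)
chain = []  # mock on-chain log of fingerprints

def keystream_xor(data: bytes, key: bytes) -> bytes:
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(d ^ s for d, s in zip(data, stream))

def store(data: bytes, key: bytes) -> str:
    ciphertext = keystream_xor(data, key)
    cid = hashlib.sha256(ciphertext).hexdigest()  # content address
    ipfs[cid] = ciphertext
    chain.append(cid)                             # only the hash on-chain
    return cid

def retrieve(cid: str, key: bytes) -> bytes:
    ciphertext = ipfs[cid]
    assert hashlib.sha256(ciphertext).hexdigest() == cid, "tampered"
    assert cid in chain, "not anchored on-chain"
    return keystream_xor(ciphertext, key)

cid = store(b"sensor reading: 42", b"secret")
print(retrieve(cid, b"secret"))  # b'sensor reading: 42'
```

Immutability and traceability come from the on-chain hash, availability from the replicated off-chain store, and privacy from encrypting before pinning, which is the combination the paper argues for.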
17 pages, 2344 KB  
Article
An Advanced Path Planning and UAV Relay System: Enhancing Connectivity in Rural Environments
by Mostafa El Debeiki, Saba Al-Rubaye, Adolfo Perrusquía, Christopher Conrad and Juan Alejandro Flores-Campos
Future Internet 2024, 16(3), 89; https://doi.org/10.3390/fi16030089 - 6 Mar 2024
Cited by 31 | Viewed by 5034
Abstract
The use of unmanned aerial vehicles (UAVs) is increasing in transportation applications due to their high versatility and maneuverability in complex environments. Search and rescue is one of the most challenging applications of UAVs due to the non-homogeneous nature of the environmental and communication landscapes. In particular, mountainous areas pose difficulties due to the loss of connectivity caused by large valleys and the volumes of hazardous weather. In this paper, the connectivity issue in mountainous areas is addressed using a path planning algorithm for UAV relay. The approach is based on two main phases: (1) the detection of areas of interest where the connectivity signal is poor, and (2) an energy-aware and resilient path planning algorithm that maximizes the coverage links. The approach uses a viewshed analysis to identify areas of visibility between the areas of interest and the cell-towers. This allows the construction of a blockage map that prevents the UAV from passing through areas with no coverage, whilst maximizing the coverage area under energy constraints and hazardous weather. The proposed approach is validated under open-access datasets of mountainous zones, and the obtained results confirm the benefits of the proposed approach for communication networks in remote and challenging environments. Full article
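The blockage-map idea in the abstract, in which cells that the viewshed analysis marks as having no cell-tower coverage are forbidden to the UAV, can be sketched as shortest-path search over a coverage grid. The grid, start, and goal below are invented, and plain BFS stands in for the paper's energy-aware planner:

```python
# Toy blockage-map path planning: 1 marks grid cells with no coverage
# (per a viewshed analysis), and BFS finds a shortest relay path that
# never leaves coverage. Grid and endpoints are illustrative; the
# paper's energy- and weather-aware planner is not reproduced.
from collections import deque

def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct path by walking parents back
            path = []
            while cell:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable without losing coverage

blockage = [
    [0, 0, 0],
    [1, 1, 0],  # a valley with no cell-tower coverage
    [0, 0, 0],
]
path = plan(blockage, (0, 0), (2, 0))
print(path)
```

The planner detours around the blocked valley rather than crossing it, which is exactly the behaviour the blockage map is meant to enforce; energy constraints would be added as edge costs on top of this search.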
19 pages, 3172 KB  
Article
Multi-Level Split Federated Learning for Large-Scale AIoT System Based on Smart Cities
by Hanyue Xu, Kah Phooi Seng, Jeremy Smith and Li Minn Ang
Future Internet 2024, 16(3), 82; https://doi.org/10.3390/fi16030082 - 28 Feb 2024
Cited by 16 | Viewed by 6807
Abstract
In the context of smart cities, the integration of artificial intelligence (AI) and the Internet of Things (IoT) has led to the proliferation of AIoT systems, which handle vast amounts of data to enhance urban infrastructure and services. However, the collaborative training of deep learning models within these systems encounters significant challenges, chiefly due to data privacy concerns and dealing with communication latency from large-scale IoT devices. To address these issues, multi-level split federated learning (multi-level SFL) has been proposed, merging the benefits of split learning (SL) and federated learning (FL). This framework introduces a novel multi-level aggregation architecture that reduces communication delays, enhances scalability, and addresses system and statistical heterogeneity inherent in large AIoT systems with non-IID data distributions. The architecture leverages the Message Queuing Telemetry Transport (MQTT) protocol to cluster IoT devices geographically and employs edge and fog computing layers for initial model parameter aggregation. Simulation experiments validate that the multi-level SFL outperforms traditional SFL by improving model accuracy and convergence speed in large-scale, non-IID environments. This paper delineates the proposed architecture, its workflow, and its advantages in enhancing the robustness and scalability of AIoT systems in smart cities while preserving data privacy. Full article
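The multi-level aggregation architecture can be sketched as sample-weighted averaging (FedAvg-style) applied level by level: device updates are averaged at edge nodes, edge results at a fog node, and so on upward. The cluster layout, parameter vectors, and sample counts below are invented for illustration:

```python
# Sketch of multi-level aggregation: sample-weighted (FedAvg-style)
# averaging applied at each tier of an edge/fog hierarchy. Cluster
# layout and numbers are illustrative, not from the paper.
def aggregate(updates):
    """updates: list of (param_vector, n_samples) -> (avg_vector, n_total)."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    avg = [sum(w[i] * n for w, n in updates) / total for i in range(dim)]
    return avg, total

# Two edge clusters of IoT devices (toy parameter vectors of length 2).
edge_a = aggregate([([1.0, 0.0], 10), ([3.0, 2.0], 30)])
edge_b = aggregate([([5.0, 4.0], 60)])
fog = aggregate([edge_a, edge_b])  # next aggregation level up
print(fog)  # ([4.0, 3.0], 100)
```

Because each level carries its sample count forward, the hierarchical result equals a flat FedAvg over all devices; the gain is that only the compact edge and fog aggregates travel upward, which is what reduces communication delay in large AIoT deployments.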
15 pages, 2529 KB  
Article
A Lightweight Neural Network Model for Disease Risk Prediction in Edge Intelligent Computing Architecture
by Feng Zhou, Shijing Hu, Xin Du, Xiaoli Wan and Jie Wu
Future Internet 2024, 16(3), 75; https://doi.org/10.3390/fi16030075 - 26 Feb 2024
Cited by 16 | Viewed by 4295
Abstract
In the current field of disease risk prediction research, many methods use servers for centralized computing to train and infer prediction models. However, this centralized approach increases storage requirements, the load on network bandwidth, and the computing pressure on the central server. In this article, we design an image preprocessing method and propose a lightweight neural network model called Linge (Lightweight Neural Network Models for the Edge). We propose a distributed intelligent edge computing technology based on the federated learning algorithm for disease risk prediction. The proposed method performs prediction model training and inference directly at the edge without increasing storage space, while reducing the load on network bandwidth and the computing pressure on the server. The lightweight neural network model we designed has only 7.63 MB of parameters and takes up only 155.28 MB of memory. In experiments comparing the Linge model with the EfficientNetV2 model, the accuracy and precision increased by 2%, the recall rate increased by 1%, the specificity increased by 4%, the F1 score increased by 3%, and the AUC (Area Under the Curve) value increased by 2%. Full article
16 pages, 2560 KB  
Article
Deep Learning for Intrusion Detection Systems (IDSs) in Time Series Data
by Konstantinos Psychogyios, Andreas Papadakis, Stavroula Bourou, Nikolaos Nikolaou, Apostolos Maniatis and Theodore Zahariadis
Future Internet 2024, 16(3), 73; https://doi.org/10.3390/fi16030073 - 23 Feb 2024
Cited by 27 | Viewed by 6774
Abstract
The advent of computer networks and the internet has drastically altered the means by which we share information and interact with each other. However, this technological advancement has also created opportunities for malevolent behavior, with individuals exploiting vulnerabilities to gain access to confidential data, obstruct activity, etc. To this end, intrusion detection systems (IDSs) are needed to filter malicious traffic and prevent common attacks. In the past, these systems relied on a fixed set of rules or comparisons with previous attacks. However, with the increased availability of computational power and data, machine learning has emerged as a promising solution for this task. While many systems now use this methodology in real-time for a reactive approach to mitigation, we explore the potential of configuring it as a proactive time series prediction. In this work, we delve into this possibility further. More specifically, we convert a classic IDS dataset to a time series format and use predictive models to forecast forthcoming malign packets. We propose a new architecture combining convolutional neural networks, long short-term memory networks, and attention. The findings indicate that our model performs strongly, exhibiting an F1 score and AUC that are within margins of 1% and 3%, respectively, when compared to conventional real-time detection. Also, our architecture achieves an ∼8% F1 score improvement compared to an LSTM (long short-term memory) model. Full article
(This article belongs to the Special Issue Security in the Internet of Things (IoT))
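The conversion step the abstract describes, recasting a classic IDS dataset as a time series so that forthcoming malign packets can be forecast, boils down to sliding-window supervision. The event stream and window length below are invented for illustration; the paper's CNN–LSTM–attention model would consume windows like these:

```python
# Minimal sliding-window conversion of an IDS event stream into a
# supervised forecasting task: predict the next packet's label from the
# previous `window` labels. Data and window size are illustrative.
def windows(series, window=3):
    """Return (X, y): each X row is `window` past values, y the next one."""
    X = [series[i:i + window] for i in range(len(series) - window)]
    y = [series[i + window] for i in range(len(series) - window)]
    return X, y

traffic = [0, 0, 1, 1, 1, 0, 0, 1]  # 1 = malign packet
X, y = windows(traffic, window=3)
print(X[0], "->", y[0])  # [0, 0, 1] -> 1
```

This reframing is what turns reactive detection into proactive prediction: the model is trained to emit the label of a packet that has not arrived yet.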
26 pages, 1791 KB  
Article
The Future of Healthcare with Industry 5.0: Preliminary Interview-Based Qualitative Analysis
by Juliana Basulo-Ribeiro and Leonor Teixeira
Future Internet 2024, 16(3), 68; https://doi.org/10.3390/fi16030068 - 22 Feb 2024
Cited by 66 | Viewed by 14938
Abstract
With the advent of Industry 5.0 (I5.0), healthcare is undergoing a profound transformation, integrating human capabilities with advanced technologies to promote a patient-centered, efficient, and empathetic healthcare ecosystem. This study aims to examine the effects of Industry 5.0 on healthcare, emphasizing the synergy between human experience and technology. To this end, six specific objectives were defined and addressed through an empirical study based on interviews with 11 healthcare professionals. This article thus outlines strategic and policy guidelines for the integration of I5.0 in healthcare, advocating policy-driven change, and contributes to the literature by offering a solid theoretical basis on I5.0 and its impact on the healthcare sector. Full article
(This article belongs to the Special Issue eHealth and mHealth)

38 pages, 1021 KB  
Review
A Systematic Survey on 5G and 6G Security Considerations, Challenges, Trends, and Research Areas
by Paul Scalise, Matthew Boeding, Michael Hempel, Hamid Sharif, Joseph Delloiacovo and John Reed
Future Internet 2024, 16(3), 67; https://doi.org/10.3390/fi16030067 - 20 Feb 2024
Cited by 64 | Viewed by 15656
Abstract
With the rapid rollout and growing adoption of 3GPP 5th Generation (5G) cellular services, including in critical infrastructure sectors, it is important to review security mechanisms, risks, and potential vulnerabilities within this vital technology. Numerous security capabilities need to work together to ensure and maintain a sufficiently secure 5G environment that places user privacy and security at the forefront. Confidentiality, integrity, and availability are all pillars of a privacy and security framework that define major aspects of 5G operations. They are incorporated and considered in the design of the 5G standard by the 3rd Generation Partnership Project (3GPP) with the goal of providing a highly reliable network operation for all. Through a comprehensive review, we aim to analyze the ever-evolving landscape of 5G, including any potential attack vectors and proposed measures to mitigate or prevent these threats. This paper presents a comprehensive survey of the state-of-the-art research that has been conducted in recent years regarding 5G systems, focusing on the main components in a systematic approach: the Core Network (CN), Radio Access Network (RAN), and User Equipment (UE). Additionally, we investigate the utilization of 5G in time-dependent, ultra-confidential, and private communications built around a Zero Trust approach. In today’s world, where everything is more connected than ever, Zero Trust policies and architectures can be highly valuable in operations containing sensitive data. Realizing a Zero Trust Architecture entails continuous verification of all devices, users, and requests, regardless of their location within the network, and grants permission only to authorized entities. Finally, developments and proposed methods of new 5G and future 6G security approaches, such as Blockchain technology, post-quantum cryptography (PQC), and Artificial Intelligence (AI) schemes, are also discussed to better understand the full landscape of current and future research within this telecommunications domain. Full article
(This article belongs to the Special Issue 5G Security: Challenges, Opportunities, and the Road Ahead)

18 pages, 6477 KB  
Article
The Microverse: A Task-Oriented Edge-Scale Metaverse
by Qian Qu, Mohsen Hatami, Ronghua Xu, Deeraj Nagothu, Yu Chen, Xiaohua Li, Erik Blasch, Erika Ardiles-Cruz and Genshe Chen
Future Internet 2024, 16(2), 60; https://doi.org/10.3390/fi16020060 - 13 Feb 2024
Cited by 31 | Viewed by 6175
Abstract
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse. Full article
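A minimal sketch of the microverse idea as the abstract presents it: each instance is a task-oriented digital twin of a single network slice, performing on-site/near-site processing, information fusion, and real-time decision-making at the edge. All class, field, and threshold names below are our own illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Microverse:
    """A task-oriented digital twin of one network slice (hypothetical sketch)."""
    task: str
    slice_id: str
    process: Callable[[Dict], Dict]            # on-site/near-site processing logic
    observations: List[Dict] = field(default_factory=list)

    def ingest(self, event: Dict) -> Dict:
        """Fuse a sensor event at the edge and return a real-time decision."""
        decision = self.process(event)
        self.observations.append({"event": event, "decision": decision})
        return decision

# A smart public safety surveillance (SPSS) instance: flag crowded camera frames.
spss = Microverse(
    task="public-safety",
    slice_id="slice-07",
    process=lambda e: {"alert": e["person_count"] > 50},
)
decision = spss.ingest({"camera": "cam-3", "person_count": 64})
```

The contrast with an all-encompassing metaverse is visible in the scoping: one instance, one task, one slice, with decisions made where the data arrives rather than in a central cloud.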
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

24 pages, 8449 KB  
Article
A Secure Opportunistic Network with Efficient Routing for Enhanced Efficiency and Sustainability
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(2), 56; https://doi.org/10.3390/fi16020056 - 8 Feb 2024
Cited by 7 | Viewed by 4355
Abstract
The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, and sustainable networking solutions has never been more pressing. Opportunistic networks, characterized by intermittent connectivity and dynamic network conditions, present unique challenges that necessitate innovative approaches for optimal performance and sustainability. This paper introduces a groundbreaking paradigm that integrates the principles of cybersecurity with opportunistic networks. At its core, this study presents a novel routing protocol meticulously designed to significantly outperform existing solutions concerning key metrics such as delivery probability, overhead ratio, and communication delay. Leveraging cybersecurity’s inherent strengths, our protocol not only fortifies the network’s security posture but also provides a foundation for enhancing efficiency and sustainability in opportunistic networks. The overarching goal of this paper is to address the inherent limitations of conventional opportunistic network protocols. By proposing an innovative routing protocol, we aim to optimize data delivery, minimize overhead, and reduce communication latency. These objectives are crucial for ensuring seamless and timely information exchange, especially in scenarios where traditional networking infrastructures fall short. Through large-scale simulations, the new model proves its effectiveness across different scenarios, especially in terms of message delivery probability, while maintaining reasonable overhead and latency. Full article
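The abstract does not detail the proposed routing protocol itself, so as a hedged illustration of how opportunistic protocols track the delivery probability metric it evaluates, the sketch below implements the update rules of PRoPHET, a classic baseline in this literature: predictability rises on each encounter, ages while nodes are apart, and propagates transitively through intermediate nodes. The constants are PRoPHET's customary defaults, not values from this paper:

```python
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25  # standard PRoPHET defaults

def on_encounter(p_old):
    """Raise delivery predictability when two nodes meet."""
    return p_old + (1.0 - p_old) * P_INIT

def age(p_old, elapsed_units):
    """Decay predictability for nodes not encountered recently."""
    return p_old * (GAMMA ** elapsed_units)

def transitive(p_ab, p_bc, p_ac_old):
    """If A meets B often and B meets C often, A can likely deliver to C."""
    return p_ac_old + (1.0 - p_ac_old) * p_ab * p_bc * BETA

p = on_encounter(0.0)         # first meeting: 0.75
p = on_encounter(p)           # repeated contact pushes p toward 1.0
p = age(p, elapsed_units=10)  # then p decays while the nodes are apart
```

A router forwards a message to an encountered node only if that node's predictability toward the destination exceeds its own, which is how such protocols trade overhead ratio against delivery probability.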

14 pages, 3418 KB  
Article
Enhancing Smart City Safety and Utilizing AI Expert Systems for Violence Detection
by Pradeep Kumar, Guo-Liang Shih, Bo-Lin Guo, Siva Kumar Nagi, Yibeltal Chanie Manie, Cheng-Kai Yao, Michael Augustine Arockiyadoss and Peng-Chun Peng
Future Internet 2024, 16(2), 50; https://doi.org/10.3390/fi16020050 - 31 Jan 2024
Cited by 16 | Viewed by 6919
Abstract
Violent attacks have been one of the hot issues in recent years. Despite the presence of closed-circuit televisions (CCTVs) in smart cities, apprehending criminals remains an emerging challenge, leading to a need for innovative solutions. In this paper, we propose a model aimed at enhancing real-time emergency response capabilities and swiftly identifying criminals. This initiative aims to foster a safer environment and better manage criminal activity within smart cities. The proposed architecture combines an image-to-image stable diffusion model with violence detection and pose estimation approaches. The diffusion model generates synthetic data, while the object detection approach uses YOLO v7 to identify violent objects like baseball bats, knives, and pistols, complemented by MediaPipe for action detection. Further, a long short-term memory (LSTM) network classifies the attack actions involving violent objects. Subsequently, the entire proposed model is deployed onto an edge device for real-time data testing using a dash camera. Thus, the system can detect violent attacks and send alerts in emergencies. As a result, our proposed YOLO model achieves a mean average precision (mAP) of 89.5% for violent attack detection, and the LSTM classifier model achieves an accuracy of 88.33% for violent action classification. The results highlight the model’s enhanced capability to accurately detect violent objects, particularly in effectively identifying violence through the implemented artificial intelligence system. Full article
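The pipeline described pairs per-frame object detection (YOLO v7) with sequence-level action classification (an LSTM over MediaPipe poses). The abstract does not state how the two signals are fused into an alert, so the rule below (violent object seen anywhere plus violent action in a majority of frames) is purely our illustrative assumption, with the detectors replaced by precomputed per-frame flags:

```python
def fuse_alerts(frames, min_ratio=0.5):
    """Raise an alert when a violent object is detected and the action
    classifier flags violence in at least `min_ratio` of the frames.
    Each frame: {"objects": [detected labels], "violent_action": bool}."""
    VIOLENT_OBJECTS = {"baseball bat", "knife", "pistol"}
    object_hit = any(VIOLENT_OBJECTS & set(f["objects"]) for f in frames)
    action_ratio = sum(f["violent_action"] for f in frames) / len(frames)
    return object_hit and action_ratio >= min_ratio

# Hypothetical dash-camera clip: a knife appears and the pose-sequence
# classifier marks two of three frames as violent.
frames = [
    {"objects": ["person"], "violent_action": False},
    {"objects": ["person", "knife"], "violent_action": True},
    {"objects": ["person", "knife"], "violent_action": True},
]
alert = fuse_alerts(frames)
```

Requiring both signals is one way to keep false alarms down: a knife in a kitchen scene or an energetic but unarmed gesture alone would not trigger an alert under this rule.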
(This article belongs to the Special Issue Challenges in Real-Time Intelligent Systems)

44 pages, 38595 KB  
Article
Enhancing Urban Resilience: Smart City Data Analyses, Forecasts, and Digital Twin Techniques at the Neighborhood Level
by Andreas F. Gkontzis, Sotiris Kotsiantis, Georgios Feretzakis and Vassilios S. Verykios
Future Internet 2024, 16(2), 47; https://doi.org/10.3390/fi16020047 - 30 Jan 2024
Cited by 71 | Viewed by 11873
Abstract
Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of the urban environment, fostering real-time monitoring, simulation, and analysis of urban systems. These capabilities support test scenarios that identify bottlenecks and enhance smart city efficiency. This paper delves into the crucial roles of citizen report analytics, prediction, and digital twin technologies at the neighborhood level. The study integrates extract, transform, load (ETL) processes, artificial intelligence (AI) techniques, and a digital twin methodology to process and interpret urban data streams derived from citizen interactions with the city’s coordinate-based problem mapping platform. Using an interactive GeoDataFrame within the digital twin methodology, dynamic entities facilitate simulations based on various scenarios, allowing users to visualize, analyze, and predict the response of the urban system at the neighborhood level. This approach reveals antecedent and predictive patterns, trends, and correlations at the physical level of each city area, leading to improvements in urban functionality, resilience, and resident quality of life. Full article
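The ETL-then-predict flow this abstract outlines can be sketched end to end: raw citizen reports are grouped into per-neighborhood time series, then a forecast is made for the next period. The field names and the moving-average predictor below are our own stand-ins (the study applies AI techniques it does not enumerate in the abstract), kept dependency-free for clarity:

```python
from collections import defaultdict

def etl(raw_reports):
    """Extract-transform-load: aggregate raw citizen reports into weekly
    counts per neighborhood (field names are illustrative)."""
    counts = defaultdict(lambda: defaultdict(int))
    for r in raw_reports:
        counts[r["neighborhood"]][r["week"]] += 1
    return {n: [weeks[w] for w in sorted(weeks)] for n, weeks in counts.items()}

def forecast_next_week(series, window=3):
    """Predict next week's report volume with a moving average, a simple
    stand-in for the study's predictive models."""
    recent = series[-window:]
    return sum(recent) / len(recent)

raw = [{"neighborhood": "Centre", "week": w} for w in (1, 1, 2, 2, 2, 3)]
series = etl(raw)["Centre"]          # weekly counts: [2, 3, 1]
prediction = forecast_next_week(series)
```

In a digital-twin setting, the same per-neighborhood series would also drive what-if simulations, e.g. replaying a scenario with doubled report volume to find bottlenecks before they occur.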
