Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

74 pages, 2233 KB  
Article
Advanced Hybrid Transformer-CNN Deep Learning Model for Effective Intrusion Detection Systems with Class Imbalance Mitigation Using Resampling Techniques
by Hesham Kamal and Maggie Mashaly
Future Internet 2024, 16(12), 481; https://doi.org/10.3390/fi16120481 - 23 Dec 2024
Cited by 20 | Viewed by 5624
Abstract
Network and cloud environments must be fortified against a dynamic array of threats, and intrusion detection systems (IDSs) are critical tools for identifying and thwarting hostile activities. IDSs, classified as anomaly-based or signature-based, have increasingly incorporated deep learning models into their framework. Recently, significant advancements have been made in anomaly-based IDSs, particularly those using machine learning, where attack detection accuracy has been notably high. Our proposed method demonstrates that deep learning models can achieve unprecedented success in identifying both known and unknown threats within cloud environments. However, existing benchmark datasets for intrusion detection typically contain more normal traffic samples than attack samples to reflect real-world network traffic. This imbalance in the training data makes it more challenging for IDSs to accurately detect specific types of attacks. Thus, our challenges arise from two key factors: unbalanced training data and the emergence of new, unidentified threats. To address these issues, we present a hybrid transformer-convolutional neural network (Transformer-CNN) deep learning model, which leverages data resampling techniques such as adaptive synthetic sampling (ADASYN), the synthetic minority oversampling technique (SMOTE), edited nearest neighbors (ENN), and class weights to overcome class imbalance. The transformer component of our model is employed for contextual feature extraction, enabling the system to analyze relationships and patterns in the data effectively. In contrast, the CNN is responsible for final classification, processing the extracted features to accurately identify specific attack types. The Transformer-CNN model focuses on three primary objectives to enhance detection accuracy and performance: (1) reducing false positives and false negatives, (2) enabling real-time intrusion detection in high-speed networks, and (3) detecting zero-day attacks. We evaluate our proposed model, Transformer-CNN, using the NF-UNSW-NB15-v2 and CICIDS2017 benchmark datasets, and assess its performance with metrics such as accuracy, precision, recall, and F1-score. The results demonstrate that our method achieves 99.71% accuracy in binary classification and 99.02% in multi-class classification on the NF-UNSW-NB15-v2 dataset, while for the CICIDS2017 dataset it reaches 99.93% in binary classification and 99.13% in multi-class classification, significantly outperforming existing models. This demonstrates the enhanced capability of our IDS in defending cloud environments against intrusions, including zero-day attacks.
(This article belongs to the Section Cybersecurity)
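
A minimal sketch of the resampling step described above, using the imbalanced-learn library: SMOTE oversamples the minority (attack) class and ENN then cleans noisy borderline samples. The feature shape and class ratio are illustrative stand-ins, not the NF-UNSW-NB15-v2 or CICIDS2017 data.

```python
# Sketch: SMOTE oversampling followed by ENN cleaning (SMOTE-ENN),
# the class-imbalance mitigation combination named in the abstract.
import numpy as np
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import EditedNearestNeighbours

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # stand-in flow features
y = (rng.random(1000) < 0.05).astype(int)  # ~5% attack samples (imbalanced)

# SMOTE synthesizes minority samples; ENN then removes samples whose
# neighbors disagree with them, sharpening the boundary after oversampling.
resampler = SMOTEENN(smote=SMOTE(k_neighbors=5),
                     enn=EditedNearestNeighbours(n_neighbors=3))
X_res, y_res = resampler.fit_resample(X, y)
print("before:", np.bincount(y), "after:", np.bincount(y_res))
```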

24 pages, 1273 KB  
Article
Flexible Hyper-Distributed IoT–Edge–Cloud Platform for Real-Time Digital Twin Applications on 6G-Intended Testbeds for Logistics and Industry
by Maria Crespo-Aguado, Raul Lozano, Fernando Hernandez-Gobertti, Nuria Molner and David Gomez-Barquero
Future Internet 2024, 16(11), 431; https://doi.org/10.3390/fi16110431 - 20 Nov 2024
Cited by 11 | Viewed by 4624
Abstract
This paper presents the design and development of a flexible hyper-distributed IoT–Edge–Cloud computing platform for real-time Digital Twins in real logistics and industrial environments, intended as a novel living lab and testbed for future 6G applications. It expands the limited capabilities of IoT devices with extended Cloud and Edge computing functionalities, creating an IoT–Edge–Cloud continuum platform composed of multiple stakeholder solutions, in which vertical application developers can take full advantage of the computing resources of the infrastructure. The platform is built together with a private 5G network to connect machines and sensors on a large scale. Artificial intelligence and machine learning are used to allocate computing resources for real-time services by an end-to-end intelligent orchestrator, and real-time distributed analytic tools leverage Edge computing platforms to support different types of Digital Twin applications for logistics and industry, such as immersive remote driving, with specific characteristics and features. Performance evaluations demonstrated the platform’s capability to support the high-throughput communications required for Digital Twins, achieving user-experienced rates close to the maximum theoretical values, up to 552 Mb/s for the downlink and 87.3 Mb/s for the uplink in the n78 frequency band. Moreover, the platform’s support for Digital Twins was validated via QoE assessments conducted on an immersive remote driving prototype, which demonstrated high levels of user satisfaction in key dimensions such as presence, engagement, control, sensory integration, and cognitive load.
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)

41 pages, 438 KB  
Review
Recent Advancements in Federated Learning: State of the Art, Fundamentals, Principles, IoT Applications and Future Trends
by Christos Papadopoulos, Konstantinos-Filippos Kollias and George F. Fragulis
Future Internet 2024, 16(11), 415; https://doi.org/10.3390/fi16110415 - 9 Nov 2024
Cited by 15 | Viewed by 9272
Abstract
Federated learning (FL) is creating a paradigm shift in machine learning by directing the focus of model training to where the data actually exist. Instead of drawing all data into a central location, which raises concerns about privacy, costs, and delays, FL allows learning to take place directly on the device, keeping the data safe and minimizing the need for transfer. This approach is especially important in areas like healthcare, where protecting patient privacy is critical, and in industrial IoT settings, where moving large volumes of data is not practical. What makes FL even more compelling is its ability to reduce the bias that can occur when all data are centralized, leading to fairer and more inclusive machine learning outcomes. However, it is not without its challenges, particularly with regard to keeping the models secure from attacks. Nonetheless, the potential benefits are clear: FL can lower the costs associated with data storage and processing, while also helping organizations to meet strict privacy regulations like GDPR. As edge computing continues to grow, FL’s decentralized approach could play a key role in shaping how we handle data in the future, moving toward a more privacy-conscious world. This study identifies ongoing challenges in ensuring model security against adversarial attacks, pointing to the need for further research in this area.
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
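
To make the on-device training loop concrete, below is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation step: each client trains locally and the server averages the resulting weights, weighted by local dataset size. The model shapes and client sizes are illustrative assumptions.

```python
# Sketch: server-side federated averaging (FedAvg) over client model weights.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average per-layer weights across clients, weighted by dataset size."""
    total = sum(client_sizes)
    avg = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights):
            avg[i] += (n / total) * layer
    return avg

# Three simulated clients, each holding the same 2-layer model structure.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[100, 400, 500])
print([w.shape for w in global_model])  # [(4, 2), (2,)]
```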

42 pages, 9475 KB  
Review
Machine Learning and IoT-Based Solutions in Industrial Applications for Smart Manufacturing: A Critical Review
by Paolo Visconti, Giuseppe Rausa, Carolina Del-Valle-Soto, Ramiro Velázquez, Donato Cafagna and Roberto De Fazio
Future Internet 2024, 16(11), 394; https://doi.org/10.3390/fi16110394 - 26 Oct 2024
Cited by 22 | Viewed by 12900
Abstract
The Internet of Things (IoT) has radically changed the industrial world, enabling the integration of numerous systems and devices into the industrial ecosystem. IoT has contributed to many areas of the manufacturing industry, including remote monitoring and control of plants, energy efficiency, more efficient resource management, and cost reduction, paving the way for smart manufacturing in the framework of Industry 4.0. This review article provides an up-to-date overview of IoT systems and machine learning (ML) algorithms applied to smart manufacturing (SM), analyzing four main application fields: security, predictive maintenance, process control, and additive manufacturing. In addition, the paper presents a descriptive and comparative overview of the ML algorithms mainly used in smart manufacturing. Furthermore, for each discussed topic, a deep comparative analysis of the recent IoT solutions reported in the scientific literature is introduced, dwelling on the architectural aspects, sensing solutions, implemented data analysis strategies, communication tools, performance, and other characteristic parameters. This comparison highlights the strengths and weaknesses of each discussed solution. Finally, the presented work outlines the features and functionalities of future IoT-based systems for smart industry applications.
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)

33 pages, 1577 KB  
Review
Health IoT Threats: Survey of Risks and Vulnerabilities
by Samaneh Madanian, Tserendorj Chinbat, Maduka Subasinghage, David Airehrour, Farkhondeh Hassandoust and Sira Yongchareon
Future Internet 2024, 16(11), 389; https://doi.org/10.3390/fi16110389 - 23 Oct 2024
Cited by 17 | Viewed by 8906
Abstract
The secure and efficient collection of patients’ vital information is a challenge faced by the healthcare industry. Through the adoption and application of the Internet of Things (IoT), the healthcare industry has seen an improvement in the quality of delivered services and patient safety. However, IoT utilization in healthcare is challenging due to the sensitive nature of patients’ clinical information and the need to communicate it across heterogeneous networks and among IoT devices. We conducted a semi-systematic literature review to provide an overview of IoT security and privacy challenges in the healthcare sector over time. We collected 279 studies from 5 scientific databases, of which 69 articles met the requirements for inclusion. We performed thematic and qualitative content analysis to extract trends and information. According to our analysis, the vulnerabilities of IoT in healthcare are classified into three main layers: perception, network, and application. We comprehensively reviewed IoT privacy and security threats at each layer. Different technological advancements were suggested to address the identified vulnerabilities in healthcare. This review has practical implications, emphasizing that healthcare organizations, software developers, and device manufacturers must prioritize healthcare IoT security and privacy. A comprehensive, multilayered security approach, security-by-design principles, and training for staff and end-users must be adopted. Regulators and policy makers must also establish and enforce standards and regulations that promote the security and privacy of healthcare IoT. Overall, this study underscores the importance of ensuring the security and privacy of healthcare IoT, with stakeholders’ coordinated efforts to address the complex and evolving security and privacy threats in this field. This can enhance healthcare IoT trust and reliability, reduce the risks of security and privacy issues and attacks, and ultimately improve healthcare delivery quality and safety.
(This article belongs to the Special Issue Cybersecurity in the IoT)

52 pages, 18006 KB  
Review
A Survey of the Real-Time Metaverse: Challenges and Opportunities
by Mohsen Hatami, Qian Qu, Yu Chen, Hisham Kholidy, Erik Blasch and Erika Ardiles-Cruz
Future Internet 2024, 16(10), 379; https://doi.org/10.3390/fi16100379 - 18 Oct 2024
Cited by 54 | Viewed by 12822
Abstract
The metaverse concept has been evolving from static, pre-rendered virtual environments to a new frontier: the real-time metaverse. This survey paper explores the emerging field of real-time metaverse technologies, which enable the continuous integration of dynamic, real-world data into immersive virtual environments. We examine the key technologies driving this evolution, including advanced sensor systems (LiDAR, radar, cameras), artificial intelligence (AI) models for data interpretation, fast data fusion algorithms, and edge computing with 5G networks for low-latency data transmission. This paper reveals how these technologies are orchestrated to achieve near-instantaneous synchronization between physical and virtual worlds, a defining characteristic that distinguishes the real-time metaverse from its traditional counterparts. The survey provides comprehensive insight into the technical challenges and discusses solutions to realize responsive dynamic virtual environments. The potential applications and impact of real-time metaverse technologies across various fields are considered, including live entertainment, remote collaboration, dynamic simulations, and urban planning with digital twins. By synthesizing current research and identifying future directions, this survey provides a foundation for understanding and advancing the rapidly evolving landscape of real-time metaverse technologies, contributing to the growing body of knowledge on immersive digital experiences and setting the stage for further innovations in this transformative field.

37 pages, 2626 KB  
Article
A Survey of Security Strategies in Federated Learning: Defending Models, Data, and Privacy
by Habib Ullah Manzoor, Attia Shabbir, Ao Chen, David Flynn and Ahmed Zoha
Future Internet 2024, 16(10), 374; https://doi.org/10.3390/fi16100374 - 15 Oct 2024
Cited by 21 | Viewed by 13063
Abstract
Federated Learning (FL) has emerged as a transformative paradigm in machine learning, enabling decentralized model training across multiple devices while preserving data privacy. However, the decentralized nature of FL introduces significant security challenges, making it vulnerable to various attacks targeting models, data, and privacy. This survey provides a comprehensive overview of the defense strategies against these attacks, categorizing them into defenses for data and models and defenses against privacy attacks. We explore pre-aggregation, in-aggregation, and post-aggregation defenses, highlighting their methodologies and effectiveness. Additionally, the survey delves into advanced techniques such as homomorphic encryption and differential privacy to safeguard sensitive information. The integration of blockchain technology for enhancing security in FL environments is also discussed, along with incentive mechanisms to promote active participation among clients. Through this detailed examination, the survey aims to inform and guide future research in developing robust defense frameworks for FL systems.
(This article belongs to the Special Issue Privacy and Security Issues in IoT Systems)
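
As a small illustration of the in-aggregation defense family surveyed here, the sketch below combines per-client update clipping with Gaussian noise, in the spirit of differential privacy. The clip norm, noise scale, and update shapes are illustrative assumptions, not values from the paper.

```python
# Sketch: clip each client update to bound its influence, then add
# Gaussian noise before averaging (a differential-privacy-style defense).
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_std=0.01, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound influence
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

rng = np.random.default_rng(42)
client_updates = [rng.normal(size=8) for _ in range(5)]
defended = np.mean([clip_and_noise(u, rng=rng) for u in client_updates], axis=0)
print(defended.round(3))
```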

17 pages, 1040 KB  
Article
Enhancing Heart Disease Prediction with Federated Learning and Blockchain Integration
by Yazan Otoum, Chaosheng Hu, Eyad Haj Said and Amiya Nayak
Future Internet 2024, 16(10), 372; https://doi.org/10.3390/fi16100372 - 14 Oct 2024
Cited by 16 | Viewed by 4012
Abstract
Federated learning offers a framework for developing local models across institutions while safeguarding sensitive data. This paper introduces a novel approach for heart disease prediction using the TabNet model, which combines the strengths of tree-based models and deep neural networks. Our study utilizes the Comprehensive Heart Disease and UCI Heart Disease datasets, leveraging TabNet’s architecture to enhance data handling in federated environments. Horizontal federated learning was implemented using the federated averaging algorithm to securely aggregate model updates across participants. Blockchain technology was integrated to enhance transparency and accountability, with smart contracts automating governance. The experimental results demonstrate that TabNet achieved the highest balanced metrics score of 1.594 after 50 epochs, with an accuracy of 0.822 and an epsilon value of 6.855, effectively balancing privacy and performance. The model also demonstrated strong accuracy with only 10 iterations on aggregated data, highlighting the benefits of multi-source data integration. This work presents a scalable, privacy-preserving solution for heart disease prediction, combining TabNet and blockchain to address key healthcare challenges while ensuring data integrity.

31 pages, 5936 KB  
Article
Advanced Optimization Techniques for Federated Learning on Non-IID Data
by Filippos Efthymiadis, Aristeidis Karras, Christos Karras and Spyros Sioutas
Future Internet 2024, 16(10), 370; https://doi.org/10.3390/fi16100370 - 13 Oct 2024
Cited by 10 | Viewed by 6458
Abstract
Federated learning enables model training on multiple clients locally, without the need to transfer their data to a central server, thus ensuring data privacy. In this paper, we investigate the impact of Non-Independent and Identically Distributed (non-IID) data on the performance of federated training, where we find a reduction in accuracy of up to 29% for neural networks trained in environments with skewed non-IID data. Two optimization strategies are presented to address this issue. The first strategy applies a cyclical learning rate to set the learning rate during federated training, while the second develops a sharing and pre-training method on augmented data in order to improve the efficiency of the algorithm in the case of non-IID data. By combining these two methods, experiments show that the accuracy on the CIFAR-10 dataset increased by about 36% while achieving faster convergence by reducing the number of required communication rounds by a factor of 5.33. The proposed techniques lead to improved accuracy and faster model convergence, thus representing a significant advance in the field of federated learning and facilitating its application to real-world scenarios.
(This article belongs to the Special Issue Distributed Storage of Large Knowledge Graphs with Mobility Data)
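
For reference, a minimal sketch of a triangular cyclical learning rate of the kind the first strategy applies (following Smith’s CLR formulation); the bounds and step size below are illustrative, not the paper’s settings.

```python
# Sketch: triangular cyclical learning rate; lr sweeps linearly between
# base_lr and max_lr, completing one full cycle every 2 * step_size steps.
import math

def cyclical_lr(step, base_lr=1e-4, max_lr=1e-2, step_size=200):
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

for step in range(0, 801, 200):  # lr at a few local-training steps
    print(step, round(cyclical_lr(step), 5))
```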

29 pages, 778 KB  
Review
Large Language Models Meet Next-Generation Networking Technologies: A Review
by Ching-Nam Hang, Pei-Duo Yu, Roberto Morabito and Chee-Wei Tan
Future Internet 2024, 16(10), 365; https://doi.org/10.3390/fi16100365 - 7 Oct 2024
Cited by 36 | Viewed by 19256
Abstract
The evolution of network technologies has significantly transformed global communication, information sharing, and connectivity. Traditional networks, relying on static configurations and manual interventions, face substantial challenges such as complex management, inefficiency, and susceptibility to human error. The rise of artificial intelligence (AI) has begun to address these issues by automating tasks like network configuration, traffic optimization, and security enhancements. Despite their potential, integrating AI models in network engineering encounters practical obstacles, including complex configurations, heterogeneous infrastructure, unstructured data, and dynamic environments. Generative AI, particularly large language models (LLMs), represents a promising advancement in AI, with capabilities extending to natural language processing tasks like translation, summarization, and sentiment analysis. This paper provides a comprehensive review of the transformative role of LLMs in modern network engineering. In particular, it addresses gaps in the existing literature by focusing on LLM applications in network design and planning, implementation, analytics, and management. It also discusses current research efforts, challenges, and future opportunities, aiming to serve as a practical guide for networking professionals and researchers. The main goal is to facilitate the adoption and advancement of AI and LLMs in networking, promoting more efficient, resilient, and intelligent network systems.
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)

19 pages, 756 KB  
Article
AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities
by Chuhao Wu, He Zhang and John M. Carroll
Future Internet 2024, 16(10), 354; https://doi.org/10.3390/fi16100354 - 28 Sep 2024
Cited by 17 | Viewed by 19762
Abstract
Generative AI has drawn significant attention from stakeholders in higher education. As it introduces new opportunities for personalized learning and tutoring support, it simultaneously poses challenges to academic integrity and leads to ethical issues. Consequently, governing responsible AI usage within higher education institutions (HEIs) becomes increasingly important. Leading universities have already published guidelines on Generative AI, with most attempting to embrace this technology responsibly. This study provides a new perspective by focusing on strategies for responsible AI governance as demonstrated in these guidelines. Through a case study of 14 prestigious universities in the United States, we identified the multi-unit governance of AI, the role-specific governance of AI, and the academic characteristics of AI governance from their AI guidelines. The strengths and potential limitations of these strategies and characteristics are discussed. The findings offer practical implications for guiding responsible AI usage in HEIs and beyond.
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

28 pages, 3973 KB  
Systematic Review
Edge Computing in Healthcare: Innovations, Opportunities, and Challenges
by Alexandru Rancea, Ionut Anghel and Tudor Cioara
Future Internet 2024, 16(9), 329; https://doi.org/10.3390/fi16090329 - 10 Sep 2024
Cited by 52 | Viewed by 22758
Abstract
Edge computing, which promises to process data close to its generation point, reducing latency and bandwidth usage compared with traditional cloud computing architectures, has attracted significant attention lately. The integration of edge computing in modern systems takes advantage of Internet of Things (IoT) devices and can potentially improve the systems’ performance, scalability, privacy, and security, with applications in different domains. In the healthcare domain, modern IoT devices can nowadays be used to gather vital parameters and information that can be fed to edge Artificial Intelligence (AI) techniques able to offer precious insights and support to healthcare professionals. However, issues regarding data privacy and security, AI optimization, and computational offloading at the edge pose challenges to the adoption of edge AI. This paper aims to explore the current state of the art of edge AI in healthcare by using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology and analyzing more than 70 Web of Science articles. We have defined the relevant research questions and clear inclusion and exclusion criteria, and classified the research works along three main directions: privacy and security, AI-based optimization methods, and edge offloading techniques. The findings highlight the many advantages of integrating edge computing in a wide range of healthcare use cases requiring data privacy and security, near real-time decision-making, and efficient communication links, with the potential to transform future healthcare services and eHealth applications. However, further research is needed on new security-preserving methods and on better orchestrating and coordinating the load in distributed and decentralized scenarios.
(This article belongs to the Special Issue Privacy and Security Issues in IoT Systems)

34 pages, 2225 KB  
Review
Graph Attention Networks: A Comprehensive Review of Methods and Applications
by Aristidis G. Vrahatis, Konstantinos Lazaros and Sotiris Kotsiantis
Future Internet 2024, 16(9), 318; https://doi.org/10.3390/fi16090318 - 3 Sep 2024
Cited by 80 | Viewed by 30371
Abstract
Real-world problems often exhibit complex relationships and dependencies, which can be effectively captured by graph learning systems. Graph attention networks (GATs) have emerged as a powerful and versatile framework in this direction, inspiring numerous extensions and applications in several areas. In this review, we present a thorough examination of GATs, covering both diverse approaches and a wide range of applications. We examine the principal GAT-based categories, including Global Attention Networks, Multi-Layer Architectures, graph-embedding techniques, Spatial Approaches, and Variational Models. Furthermore, we delve into the diverse applications of GATs in various systems such as recommendation systems, image analysis, the medical domain, sentiment analysis, and anomaly detection. This review seeks to act as a navigational reference for researchers and practitioners, highlighting the capabilities and prospects of GATs.
(This article belongs to the Special Issue State-of-the-Art Future Internet Technologies in Greece 2024–2025)
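
For orientation, the attention mechanism at the core of the GAT family (in the formulation of Veličković et al., 2018) computes, for a node i and each neighbor j:

```latex
e_{ij} = \mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}\,[\mathbf{W}\mathbf{h}_i \,\Vert\, \mathbf{W}\mathbf{h}_j]\right), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)} \exp(e_{ik})}, \qquad
\mathbf{h}_i' = \sigma\!\Big(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, \mathbf{W}\mathbf{h}_j\Big)
```

where W is a shared linear projection, a is the learnable attention vector, and || denotes concatenation; multi-head variants concatenate or average several such updates.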

16 pages, 456 KB  
Review
A Survey on Data Availability in Layer 2 Blockchain Rollups: Open Challenges and Future Improvements
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(9), 315; https://doi.org/10.3390/fi16090315 - 29 Aug 2024
Cited by 14 | Viewed by 7437
Abstract
Layer 2 solutions have emerged in recent years as a valuable alternative to increase the throughput and scalability of blockchain-based architectures. The three primary types of Layer 2 solutions are state channels, sidechains, and rollups. Rollups are particularly promising, allowing significant improvements in transaction throughput, security, and efficiency, and have been adopted by many real-world projects, such as Polygon and Optimism. However, the adoption of Layer 2 solutions has led to other challenges, such as the data availability problem, where transaction data processed off-chain must be posted back on the main chain. This is crucial to prevent data withholding attacks and ensure all participants can independently verify the blockchain state. This paper provides a comprehensive survey of existing rollup-based Layer 2 solutions with a focus on the data availability problem and discusses their major advantages and disadvantages. Finally, an analysis of open challenges and future research directions is provided.

32 pages, 1667 KB  
Review
Artificial Intelligence Applications in Smart Healthcare: A Survey
by Xian Gao, Peixiong He, Yi Zhou and Xiao Qin
Future Internet 2024, 16(9), 308; https://doi.org/10.3390/fi16090308 - 27 Aug 2024
Cited by 20 | Viewed by 15297
Abstract
The rapid development of AI technology in recent years has led to its widespread use in daily life, where it plays an increasingly important role. In healthcare, AI has been integrated into the field to develop the new domain of smart healthcare. In smart healthcare, opportunities and challenges coexist. This article provides a comprehensive overview of past developments and recent progress in this area. First, we summarize the definition and characteristics of smart healthcare. Second, we explore the opportunities that AI technology brings to the smart healthcare field from a macro perspective. Third, we categorize specific AI applications in smart healthcare into ten domains and discuss their technological foundations individually. Finally, we identify ten key challenges these applications face and discuss the existing solutions for each.
(This article belongs to the Special Issue eHealth and mHealth)

29 pages, 521 KB  
Review
A Survey on the Use of Large Language Models (LLMs) in Fake News
by Eleftheria Papageorgiou, Christos Chronis, Iraklis Varlamis and Yassine Himeur
Future Internet 2024, 16(8), 298; https://doi.org/10.3390/fi16080298 - 19 Aug 2024
Cited by 33 | Viewed by 20529
Abstract
The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The article delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as both a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact-checking, verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the article covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, LLMs face challenges such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based fake news and fake profile detection, underscoring the importance of continued innovation to safeguard the authenticity of online information.

37 pages, 1164 KB  
Article
Early Ransomware Detection with Deep Learning Models
by Matan Davidian, Michael Kiperberg and Natalia Vanetik
Future Internet 2024, 16(8), 291; https://doi.org/10.3390/fi16080291 - 11 Aug 2024
Cited by 6 | Viewed by 5135
Abstract
Ransomware is an increasingly prevalent type of malware that restricts access to the victim’s system or data until a ransom is paid. Traditional detection methods rely on analyzing the malware’s content, but these methods are ineffective against unknown or zero-day malware. Therefore, zero-day malware detection typically involves observing the malware’s behavior, specifically the sequence of application programming interface (API) calls it makes, such as reading and writing files or enumerating directories. While previous studies have used machine learning (ML) techniques to classify API call sequences, they have only considered the API call name. This paper systematically compares various subsets of API call features, different ML techniques, and context-window sizes to identify the optimal ransomware classifier. Our findings indicate that a context-window size of 7 is ideal, and the most effective ML techniques are CNN and LSTM. Additionally, augmenting the API call name with the operation result significantly enhances the classifier’s precision. Performance analysis suggests that this classifier can be effectively applied in real-time scenarios.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
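
To make the feature construction concrete, here is a minimal sketch of slicing an API-call trace into size-7 context windows, with each call name augmented by its operation result as the abstract describes. The trace contents and integer encoding are illustrative stand-ins, not the paper’s dataset.

```python
# Sketch: fixed-size context windows over an API-call trace, where each
# event is the pair (API call name, operation result).

def context_windows(trace, size=7):
    """Yield overlapping windows of `size` consecutive API-call events."""
    for i in range(len(trace) - size + 1):
        yield trace[i:i + size]

trace = [("NtOpenFile", "SUCCESS"), ("NtReadFile", "SUCCESS"),
         ("NtWriteFile", "SUCCESS"), ("NtQueryDirectoryFile", "SUCCESS"),
         ("NtCreateFile", "ACCESS_DENIED"), ("NtWriteFile", "SUCCESS"),
         ("NtClose", "SUCCESS"), ("NtOpenFile", "SUCCESS")]

# Integer-encode "name:result" tokens so a CNN/LSTM classifier can embed them.
vocab = {tok: i for i, tok in enumerate(sorted({f"{n}:{r}" for n, r in trace}))}
for window in context_windows(trace, size=7):
    print([vocab[f"{n}:{r}"] for n, r in window])
```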

25 pages, 3302 KB  
Article
Multi-Class Intrusion Detection Based on Transformer for IoT Networks Using CIC-IoT-2023 Dataset
by Shu-Ming Tseng, Yan-Qi Wang and Yung-Chung Wang
Future Internet 2024, 16(8), 284; https://doi.org/10.3390/fi16080284 - 8 Aug 2024
Cited by 34 | Viewed by 14845
Abstract
This study uses deep learning methods to explore Internet of Things (IoT) network intrusion detection based on the CIC-IoT-2023 dataset, which contains extensive data on real-life IoT environments. Based on this, this study proposes an effective intrusion detection method. We apply seven deep learning models, including a Transformer, to analyze network traffic characteristics and identify abnormal behavior and potential intrusions through binary and multi-class classification. Compared with other papers, we not only use a Transformer model but also consider the model’s performance in multi-class classification. Although the accuracy of the Transformer model in binary classification is lower than that of the DNN and CNN + LSTM hybrid models, it achieves better results in multi-class classification. The binary classification accuracy of our model is 0.74% higher than that of papers that also use a Transformer on TON-IoT. In multi-class classification, our best-performing model is the Transformer, which reaches 99.40% accuracy. Its accuracy is 3.8%, 0.65%, and 0.29% higher than the 95.60%, 98.75%, and 99.11% figures recorded in papers using the same dataset, respectively.
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
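
As a rough illustration of this class of model, the following is a minimal Transformer-encoder traffic classifier in PyTorch. The feature count, class count, and architecture sizes are assumptions chosen for illustration, not the paper’s exact configuration.

```python
# Sketch: treat each tabular flow feature as a token, encode with a small
# Transformer encoder, mean-pool, and classify into attack categories.
import torch
import torch.nn as nn

class TrafficTransformer(nn.Module):
    def __init__(self, n_features=46, n_classes=34, d_model=64):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # one token per scalar feature
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                          # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1))       # (batch, n_features, d_model)
        pooled = self.encoder(tokens).mean(dim=1)  # pool over feature tokens
        return self.head(pooled)

logits = TrafficTransformer()(torch.randn(8, 46))
print(logits.shape)  # torch.Size([8, 34])
```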

25 pages, 3477 KB  
Article
Overlay and Virtual Private Networks Security Performances Analysis with Open Source Infrastructure Deployment
by Antonio Francesco Gentile, Davide Macrì, Emilio Greco and Peppino Fazio
Future Internet 2024, 16(8), 283; https://doi.org/10.3390/fi16080283 - 7 Aug 2024
Cited by 6 | Viewed by 4283
Abstract
Nowadays, some of the most widely deployed infrastructures are Virtual Private Networks (VPNs) and Overlay Networks (ONs). They consist of hardware and software components designed to build private/secure channels, typically over the Internet, and are currently among the most reliable technologies for achieving this objective. VPNs are well-established and can be patched to address security vulnerabilities, while overlay networks represent the next-generation solution for secure communication. In this paper, for both VPNs and ONs, we analyze some important network performance components (RTT and bandwidth) while varying the type of overlay network used to interconnect traffic between two or more hosts (in the same data center, in different data centers in the same building, or over the Internet). These networks establish connections between KVM (Kernel-based Virtual Machine) instances rather than the typical Docker/LXC/Podman containers. The first analysis assesses network performance as it is, without any overlay channels; the second establishes various channels without encryption; and the final analysis encapsulates overlay traffic via IPsec (Transport mode), where encrypted channels like VTI are not already available for use. An extensive set of traffic simulation campaigns shows the obtained performance.
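
For context, a sketch of the two underlying measurements (RTT via ping, throughput via iperf3) driven from Python. The target address is a placeholder, an iperf3 server is assumed to already be listening on it, and this is a generic illustration rather than the authors’ actual tooling.

```python
# Sketch: collect RTT and bandwidth samples between two hosts.
import json
import subprocess

TARGET = "192.0.2.10"  # placeholder address (documentation range)

def measure_rtt(host, count=10):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()[-1]  # e.g. "rtt min/avg/max/mdev = ..."

def measure_bandwidth(host):
    out = subprocess.run(["iperf3", "-c", host, "-J"],  # -J: JSON output
                         capture_output=True, text=True, check=True)
    bps = json.loads(out.stdout)["end"]["sum_received"]["bits_per_second"]
    return bps / 1e6  # Mb/s

print(measure_rtt(TARGET))
print(f"{measure_bandwidth(TARGET):.1f} Mb/s")
```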

32 pages, 15790 KB  
Review
Human–AI Collaboration for Remote Sighted Assistance: Perspectives from the LLM Era
by Rui Yu, Sooyeon Lee, Jingyi Xie, Syed Masum Billah and John M. Carroll
Future Internet 2024, 16(7), 254; https://doi.org/10.3390/fi16070254 - 18 Jul 2024
Cited by 9 | Viewed by 6396
Abstract
Remote sighted assistance (RSA) has emerged as a conversational technology aiding people with visual impairments (VI) through real-time video chat communication with sighted agents. We conducted a literature review and interviewed 12 RSA users to understand the technical and navigational challenges faced by both agents and users. The technical challenges were categorized into four groups: agents’ difficulties in orienting and localizing users, acquiring and interpreting users’ surroundings and obstacles, delivering information specific to user situations, and coping with poor network connections. We also presented 15 real-world navigational challenges, including 8 outdoor and 7 indoor scenarios. Given the spatial and visual nature of these challenges, we identified relevant computer vision problems that could potentially provide solutions. We then formulated 10 emerging problems that neither human agents nor computer vision can fully address alone. For each emerging problem, we discussed solutions grounded in human–AI collaboration. Additionally, with the advent of large language models (LLMs), we outlined how RSA can integrate with LLMs within a human–AI collaborative framework, envisioning the future of visual prosthetics.

23 pages, 714 KB  
Review
Smart Irrigation Systems from Cyber–Physical Perspective: State of Art and Future Directions
by Mian Qian, Cheng Qian, Guobin Xu, Pu Tian and Wei Yu
Future Internet 2024, 16(7), 234; https://doi.org/10.3390/fi16070234 - 29 Jun 2024
Cited by 21 | Viewed by 6023
Abstract
Irrigation refers to supplying water to soil through pipes, pumps, and spraying systems to ensure even distribution across the field. In traditional farming or gardening, the setup and usage of an agricultural irrigation system rely solely on the personal experience of farmers. The Food and Agriculture Organization of the United Nations (UN) has projected that by 2030, developing countries will expand their irrigated areas by 34%, while water consumption will increase by only 14%. This discrepancy highlights the importance of accurately monitoring water flow and volume rather than relying on rough estimates. Smart irrigation systems, a key subsystem of smart agriculture (the cyber–physical system (CPS) of the agriculture domain), automate the administration of water flow, volume, and timing by using cutting-edge technologies, especially Internet of Things (IoT) technology, to address these challenges. This study explores a comprehensive three-dimensional problem space to thoroughly analyze the IoT’s applications in irrigation systems. Our framework encompasses several critical domains in smart irrigation systems: soil science, sensor technology, communication protocols, data analysis techniques, and the practical implementations of automated irrigation systems, such as remote monitoring, autonomous operation, and intelligent decision-making processes. Finally, we discuss a few challenges and outline future research directions in this promising field.

36 pages, 3662 KB  
Article
Enhancing Network Slicing Security: Machine Learning, Software-Defined Networking, and Network Functions Virtualization-Driven Strategies
by José Cunha, Pedro Ferreira, Eva M. Castro, Paula Cristina Oliveira, Maria João Nicolau, Iván Núñez, Xosé Ramon Sousa and Carlos Serôdio
Future Internet 2024, 16(7), 226; https://doi.org/10.3390/fi16070226 - 27 Jun 2024
Cited by 28 | Viewed by 8961
Abstract
The rapid development of 5G networks and the anticipation of 6G technologies have ushered in an era of highly customizable network environments facilitated by the innovative concept of network slicing. This technology allows the creation of multiple virtual networks on the same physical infrastructure, each optimized for specific service requirements. Despite its numerous benefits, network slicing introduces significant security vulnerabilities that must be addressed to prevent exploitation by increasingly sophisticated cyber threats. This review explores the application of cutting-edge technologies—Artificial Intelligence (AI), specifically Machine Learning (ML), Software-Defined Networking (SDN), and Network Functions Virtualization (NFV)—in crafting advanced security solutions tailored for network slicing. AI’s predictive threat detection and automated response capabilities are analysed, highlighting its role in maintaining service integrity and resilience. Meanwhile, SDN and NFV are scrutinized for their ability to enforce flexible security policies and manage network functionalities dynamically, thereby enhancing the adaptability of security measures to meet evolving network demands. Thoroughly examining the current literature and industry practices, this paper identifies critical research gaps in security frameworks and proposes innovative solutions. We advocate for a holistic security strategy integrating ML, SDN, and NFV to enhance data confidentiality, integrity, and availability across network slices. The paper concludes with future research directions to develop robust, scalable, and efficient security frameworks capable of supporting the safe deployment of network slicing in next-generation networks.
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)

12 pages, 1053 KB  
Article
Adapting Self-Regulated Learning in an Age of Generative Artificial Intelligence Chatbots
by Joel Weijia Lai
Future Internet 2024, 16(6), 218; https://doi.org/10.3390/fi16060218 - 20 Jun 2024
Cited by 20 | Viewed by 10650
Abstract
The increasing use of generative artificial intelligence (GenAI) has led to a rise in conversations about how teachers and students should adopt these tools to enhance the learning process. Self-regulated learning (SRL) research is important for addressing this question. A popular form of GenAI is the large language model chatbot, which allows users to seek answers to their queries. This article seeks to adapt current SRL models to understand student learning with these chatbots. This is achieved by classifying the prompts supplied by a learner to an educational chatbot into learning actions and processes using the process–action library. Subsequently, through process mining, we can analyze these data to provide valuable insights for learners, educators, instructional designers, and researchers into the possible applications of chatbots for SRL.
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)

40 pages, 5898 KB  
Article
Authentication and Key Agreement Protocol in Hybrid Edge–Fog–Cloud Computing Enhanced by 5G Networks
by Jiayi Zhang, Abdelkader Ouda and Raafat Abu-Rukba
Future Internet 2024, 16(6), 209; https://doi.org/10.3390/fi16060209 - 14 Jun 2024
Cited by 13 | Viewed by 3458
Abstract
The Internet of Things (IoT) has revolutionized connected devices, with applications in healthcare, data analytics, and smart cities. For time-sensitive applications, 5G wireless networks provide ultra-reliable low-latency communication (URLLC), and fog computing offloads IoT processing. Integrating 5G and fog computing can address cloud computing’s deficiencies, but security challenges remain, especially in Authentication and Key Agreement, due to the distributed and dynamic nature of fog computing. This study presents an innovative mutual Authentication and Key Agreement protocol that is specifically tailored to meet the security needs of fog computing in the context of the edge–fog–cloud three-tier architecture, enhanced by the incorporation of the 5G network. This study improves security in the edge–fog–cloud context by introducing a stateless authentication mechanism and conducting a comparative analysis of the proposed protocol with well-known alternatives, such as TLS 1.3, 5G-AKA, and various handover protocols. The suggested approach has a total transmission cost of only 1280 bits in the authentication phase, approximately 30% lower than that of other protocols. In addition, the suggested handover protocol involves only two signaling costs. The computational cost of handover authentication for the edge user is significantly low, measuring 0.243 ms, which is under 10% of the computing cost of other authentication protocols.
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks)

22 pages, 2903 KB  
Article
Implementation of Lightweight Machine Learning-Based Intrusion Detection System on IoT Devices of Smart Homes
by Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi and Hadi Larijani
Future Internet 2024, 16(6), 200; https://doi.org/10.3390/fi16060200 - 5 Jun 2024
Cited by 26 | Viewed by 7673
Abstract
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not been embedded in IoT devices. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for classification of attacks. The proposed IDS can detect threats with an accuracy of 99.50% at the cloud level (multiclass classification). For real-time testing, we implemented a Raspberry Pi 4-based adversary to generate a dataset for man-in-the-middle (MITM) and denial-of-service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%.
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
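
As a small illustration of the cloud-side stage, here is a sketch of training a multi-class XGBoost attack classifier. The feature shapes, labels, and hyperparameters are synthetic stand-ins, not the smart-thermostat dataset or the paper’s tuned settings.

```python
# Sketch: multi-class attack classification with XGBoost (benign/MITM/DoS).
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 12))    # stand-in packet/flow features
y = rng.integers(0, 3, size=2000)  # 0 = benign, 1 = MITM, 2 = DoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
clf = XGBClassifier(n_estimators=100, max_depth=4, objective="multi:softprob")
clf.fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```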

32 pages, 1109 KB  
Article
Impact, Compliance, and Countermeasures in Relation to Data Breaches in Publicly Traded U.S. Companies
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara, Robson de Oliveira Albuquerque and Georges Daniel Amvame Nze
Future Internet 2024, 16(6), 201; https://doi.org/10.3390/fi16060201 - 5 Jun 2024
Cited by 14 | Viewed by 11410
Abstract
A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ. This dataset has been augmented with additional information regarding the targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences on the affected companies and individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the relevant laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impacts of data breaches on stock market prices. We also review guidelines for appropriately responding to data leaks in the U.S., for compliance achievement and cost reduction. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide.
(This article belongs to the Collection Information Systems Security)
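
As a toy illustration of the stock-price analysis mentioned in the abstract, the sketch below compares a company's mean daily return before and after a breach disclosure date. The CSV, column names, event date, and window length are invented for the example, not the paper's data.

```python
# Toy before/after event comparison of daily stock returns (assumed data).
import pandas as pd

prices = pd.read_csv("prices.csv", parse_dates=["date"])  # columns: date, close
prices["ret"] = prices["close"].pct_change()

event = pd.Timestamp("2023-05-01")     # hypothetical breach disclosure date
window = 10                            # trading days on each side of the event
pre = prices[prices["date"] < event].tail(window)["ret"]
post = prices[prices["date"] >= event].head(window)["ret"]
print(f"mean return before: {pre.mean():+.4%}, after: {post.mean():+.4%}")
```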
21 pages, 718 KB  
Review
Using ChatGPT in Software Requirements Engineering: A Comprehensive Review
by Nuno Marques, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2024, 16(6), 180; https://doi.org/10.3390/fi16060180 - 21 May 2024
Cited by 57 | Viewed by 16662
Abstract
Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still emerging and remains in its early stages. This paper analyzes the role of large language models (LLMs), such as ChatGPT-3.5, in software requirements engineering, a critical area in software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and provides strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process. Full article
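
To make the reviewed use case concrete, here is a hedged example of a requirements-elicitation prompt of the kind such studies evaluate. The prompt wording is our own; the call assumes the official openai Python client (v1+) with an API key in the environment, not any setup described in the review.

```python
# Illustrative requirements-elicitation prompt (wording is an assumption).
from openai import OpenAI   # pip install openai

client = OpenAI()           # reads OPENAI_API_KEY from the environment

prompt = (
    "Act as a requirements engineer. From the following stakeholder note, "
    "derive three functional and two non-functional requirements, each "
    "written as a testable 'shall' statement:\n\n"
    "Note: Nurses need to see a patient's medication history on a tablet, "
    "even when the hospital Wi-Fi briefly drops."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```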
30 pages, 5255 KB  
Article
Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection
by Muhammad Imran, Annalisa Appice and Donato Malerba
Future Internet 2024, 16(5), 168; https://doi.org/10.3390/fi16050168 - 12 May 2024
Cited by 17 | Viewed by 7423
Abstract
During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating samples at test time to violate model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files. LGBM is a Gradient-Boosted Decision Tree model that is learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing machine learning models through the adversarial training strategy. Therefore, the main contributions of this article are as follows: (1) We extend existing machine learning studies that commonly consider small datasets to explore the evasion ability of state-of-the-art Windows PE attack methods by increasing the size of the evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study to explain how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. (3) We explore the performance of the adversarial training strategy as a means to secure effective decision models against adversarial Windows PE malware files generated with the considered attack methods. Hence, the study explains why GAMMA is the most effective evasion method in the performed comparative analysis. On the other hand, the study shows that the adversarial training strategy can help in recognising adversarial PE malware generated with GAMMA, and explains how it changes model decisions. Full article
(This article belongs to the Collection Information Systems Security)
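
The sketch below illustrates the general overlay/padding idea behind several of the evaluated attacks: bytes appended after the end of a PE file do not change its runtime behavior but can shift a detector's score. The score function here is a random stand-in, not MalConv or LGBM, and the greedy search is a simplification.

```python
# Generic padding-attack sketch with a stand-in detector (not the paper's code).
import numpy as np

def score(pe_bytes: bytes) -> float:
    """Stand-in malware detector: returns a pseudo P(malicious)."""
    rng = np.random.default_rng(abs(hash(pe_bytes)) % 2**32)
    return float(rng.random())

def padding_attack(pe_bytes: bytes, payload: bytes, max_chunks: int = 64) -> bytes:
    """Greedily append 1 KB payload chunks while the malicious score drops."""
    best, best_score = pe_bytes, score(pe_bytes)
    for _ in range(max_chunks):
        candidate = best + payload[:1024]   # overlay bytes are never executed
        s = score(candidate)
        if s < best_score:
            best, best_score = candidate, s
    return best

adv = padding_attack(b"MZ...original PE bytes...", b"\x00" * 1024)
```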
20 pages, 7164 KB  
Review
A Comprehensive Review of Machine Learning Approaches for Anomaly Detection in Smart Homes: Experimental Analysis and Future Directions
by Md Motiur Rahman, Deepti Gupta, Smriti Bhatt, Shiva Shokouhmand and Miad Faezipour
Future Internet 2024, 16(4), 139; https://doi.org/10.3390/fi16040139 - 19 Apr 2024
Cited by 11 | Viewed by 5588
Abstract
Detecting anomalies in human activities is increasingly crucial today, particularly in nuclear family settings, where there may not be constant monitoring of individuals’ health, especially the elderly, during critical periods. Early anomaly detection can prevent attack scenarios and life-threatening situations. This task becomes notably more complex when multiple ambient sensors are deployed in homes with multiple residents, as opposed to single-resident environments. Additionally, the availability of datasets containing anomalies representing the full spectrum of abnormalities is limited. In our experimental study, we employed eight widely used machine learning and two deep learning classifiers to identify anomalies in human activities. We meticulously generated anomalies, considering all conceivable scenarios. Our findings reveal that the Gated Recurrent Unit (GRU) excels in accurately classifying normal and anomalous activities, while the naïve Bayes classifier demonstrates relatively poor performance among the ten classifiers considered. We conducted various experiments to assess the impact of different training–test splitting ratios, along with a five-fold cross-validation technique, on the performance. Notably, the GRU model consistently outperformed all other classifiers under both conditions. Furthermore, we offer insights into the computational costs associated with these classifiers, encompassing training and prediction phases. Extensive ablation experiments conducted in this study underscore that all these classifiers can effectively be deployed for anomaly detection in two-resident homes. Full article
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)
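
A minimal PyTorch sketch of a GRU classifier of the kind the study found strongest: it consumes a window of ambient-sensor readings and emits normal/anomalous logits. All dimensions are illustrative assumptions, not the study's configuration.

```python
# GRU sequence classifier for sensor windows (illustrative dimensions).
import torch
import torch.nn as nn

class GRUAnomalyNet(nn.Module):
    def __init__(self, n_sensors=32, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_sensors)
        _, h = self.gru(x)                # h: (1, batch, hidden)
        return self.head(h.squeeze(0))    # logits: (batch, n_classes)

model = GRUAnomalyNet()
logits = model(torch.randn(8, 50, 32))    # 8 windows of 50 time steps
```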
21 pages, 991 KB  
Article
Metaverse Meets Smart Cities—Applications, Benefits, and Challenges
by Florian Maier and Markus Weinberger
Future Internet 2024, 16(4), 126; https://doi.org/10.3390/fi16040126 - 8 Apr 2024
Cited by 18 | Viewed by 5716
Abstract
The metaverse aims to merge the virtual and real worlds. The target is to generate a virtual community where social components play a crucial role and combine different areas such as entertainment, work, shopping, and services. This idea is especially appealing in the context of smart cities. The metaverse offers digitalization approaches and can strengthen citizens’ social community. While the existing literature covers the exemplary potential of smart city metaverse applications, this study aims to provide a comprehensive overview of the potential and already implemented metaverse applications in the context of cities and municipalities. In addition, challenges related to these applications are identified. The study combines literature reviews and expert interviews to ensure a broad overview. Forty-eight smart city metaverse applications from eleven areas were identified, and actual projects from eleven cities demonstrate the current state of development. Still, further research should evaluate the benefits of the various applications and find strategies to overcome the identified challenges. Full article
13 pages, 395 KB  
Article
Efficient and Secure Distributed Data Storage and Retrieval Using Interplanetary File System and Blockchain
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(3), 98; https://doi.org/10.3390/fi16030098 - 15 Mar 2024
Cited by 19 | Viewed by 6770
Abstract
Blockchain technology has been successfully applied in recent years to promote the immutability, traceability, and authenticity of previously collected and stored data. However, the amount of data stored in the blockchain is usually limited due to economic and technological constraints. Namely, the blockchain usually stores only a fingerprint of the data, such as its hash, while the full, raw information is stored off-chain. This is generally enough to guarantee immutability and traceability, but it fails to support another important property: data availability. This is particularly true when a traditional, centralized database is chosen for off-chain storage. For this reason, many proposals try to properly combine blockchain with decentralized IPFS storage. However, the storage of data on IPFS could pose some privacy problems. This paper proposes a solution that properly combines blockchain, IPFS, and encryption techniques to guarantee immutability, traceability, availability, and data privacy. Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends)
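
The encrypt-then-store pattern the paper combines can be sketched as follows: symmetric encryption for privacy, IPFS for availability, and only the content identifier anchored on-chain. A local IPFS daemon is assumed, and the blockchain write is stubbed; this is not the paper's implementation.

```python
# Encrypt data, store ciphertext on IPFS, anchor only the CID on-chain (sketch).
import hashlib
import ipfshttpclient                      # pip install ipfshttpclient
from cryptography.fernet import Fernet     # pip install cryptography

key = Fernet.generate_key()                # must be stored and shared securely
ciphertext = Fernet(key).encrypt(b"raw sensor record")

client = ipfshttpclient.connect()          # assumes a local IPFS daemon
cid = client.add_bytes(ciphertext)         # off-chain, content-addressed storage

def store_on_chain(cid: str, digest: str):
    """Stub: a real system would make a smart-contract call here."""
    print("anchoring", cid, digest)

store_on_chain(cid, hashlib.sha256(ciphertext).hexdigest())
```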
17 pages, 2344 KB  
Article
An Advanced Path Planning and UAV Relay System: Enhancing Connectivity in Rural Environments
by Mostafa El Debeiki, Saba Al-Rubaye, Adolfo Perrusquía, Christopher Conrad and Juan Alejandro Flores-Campos
Future Internet 2024, 16(3), 89; https://doi.org/10.3390/fi16030089 - 6 Mar 2024
Cited by 23 | Viewed by 4150
Abstract
The use of unmanned aerial vehicles (UAVs) is increasing in transportation applications due to their high versatility and maneuverability in complex environments. Search and rescue is one of the most challenging applications of UAVs due to the non-homogeneous nature of the environmental and communication landscapes. In particular, mountainous areas pose difficulties due to the loss of connectivity caused by large valleys and the volumes of hazardous weather. In this paper, the connectivity issue in mountainous areas is addressed using a path planning algorithm for UAV relay. The approach is based on two main phases: (1) the detection of areas of interest where the connectivity signal is poor, and (2) an energy-aware and resilient path planning algorithm that maximizes the coverage links. The approach uses a viewshed analysis to identify areas of visibility between the areas of interest and the cell-towers. This allows the construction of a blockage map that prevents the UAV from passing through areas with no coverage, whilst maximizing the coverage area under energy constraints and hazardous weather. The proposed approach is validated under open-access datasets of mountainous zones, and the obtained results confirm the benefits of the proposed approach for communication networks in remote and challenging environments. Full article
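
As a simplified illustration of planning over a blockage map, the sketch below runs A* on a small grid where no-coverage cells are impassable. A real system would derive the blocked set from viewshed analysis, as the abstract describes; here it is hand-made.

```python
# A* shortest path over a grid with impassable "no coverage" cells (toy example).
import heapq

blocked = {(1, 1), (1, 2), (2, 1)}                 # hypothetical no-coverage cells
W = H = 5

def astar(start, goal):
    frontier = [(0, start)]
    g = {start: 0}
    came = {}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                            # reconstruct the path
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if not (0 <= nxt[0] < W and 0 <= nxt[1] < H) or nxt in blocked:
                continue                           # off-grid or blocked cell
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt], came[nxt] = ng, cur
                h = abs(nxt[0] - goal[0]) + abs(nxt[1] - goal[1])  # Manhattan heuristic
                heapq.heappush(frontier, (ng + h, nxt))
    return None

print(astar((0, 0), (4, 4)))
```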
19 pages, 3172 KB  
Article
Multi-Level Split Federated Learning for Large-Scale AIoT System Based on Smart Cities
by Hanyue Xu, Kah Phooi Seng, Jeremy Smith and Li Minn Ang
Future Internet 2024, 16(3), 82; https://doi.org/10.3390/fi16030082 - 28 Feb 2024
Cited by 13 | Viewed by 6036
Abstract
In the context of smart cities, the integration of artificial intelligence (AI) and the Internet of Things (IoT) has led to the proliferation of AIoT systems, which handle vast amounts of data to enhance urban infrastructure and services. However, the collaborative training of deep learning models within these systems encounters significant challenges, chiefly due to data privacy concerns and dealing with communication latency from large-scale IoT devices. To address these issues, multi-level split federated learning (multi-level SFL) has been proposed, merging the benefits of split learning (SL) and federated learning (FL). This framework introduces a novel multi-level aggregation architecture that reduces communication delays, enhances scalability, and addresses system and statistical heterogeneity inherent in large AIoT systems with non-IID data distributions. The architecture leverages the Message Queuing Telemetry Transport (MQTT) protocol to cluster IoT devices geographically and employs edge and fog computing layers for initial model parameter aggregation. Simulation experiments validate that the multi-level SFL outperforms traditional SFL by improving model accuracy and convergence speed in large-scale, non-IID environments. This paper delineates the proposed architecture, its workflow, and its advantages in enhancing the robustness and scalability of AIoT systems in smart cities while preserving data privacy. Full article
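
The multi-level aggregation idea can be sketched with plain weighted averaging (FedAvg): client updates are combined at edge nodes first, and the edge results are combined again at the fog/cloud level. The two-level tree, weight vectors, and client sizes are illustrative assumptions.

```python
# Hierarchical FedAvg sketch: edge-level then fog/cloud-level aggregation.
import numpy as np

def fedavg(weight_sets, sizes):
    """Weighted average of model weight vectors (FedAvg)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_sets, sizes))

# Level 1: each edge node aggregates its own IoT clients.
edge_a = fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], sizes=[100, 300])
edge_b = fedavg([np.array([0.0, 1.0])], sizes=[200])

# Level 2: the fog/cloud layer aggregates the edge-level models.
global_w = fedavg([edge_a, edge_b], sizes=[400, 200])
print(global_w)
```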
15 pages, 2529 KB  
Article
A Lightweight Neural Network Model for Disease Risk Prediction in Edge Intelligent Computing Architecture
by Feng Zhou, Shijing Hu, Xin Du, Xiaoli Wan and Jie Wu
Future Internet 2024, 16(3), 75; https://doi.org/10.3390/fi16030075 - 26 Feb 2024
Cited by 13 | Viewed by 3710
Abstract
In the current field of disease risk prediction research, many methods rely on centralized server computing to train and run prediction models. However, this centralized approach increases storage requirements, the load on network bandwidth, and the computing pressure on the central server. In this article, we design an image preprocessing method and propose a lightweight neural network model called Linge (Lightweight Neural Network Models for the Edge). We propose a distributed intelligent edge computing technology based on the federated learning algorithm for disease risk prediction. The proposed intelligent edge computing method performs prediction model training and inference directly at the edge without increasing storage space; it also reduces the load on network bandwidth and the computing pressure on the server. The lightweight neural network model we designed has only 7.63 MB of parameters and takes up only 155.28 MB of memory. In experiments comparing the Linge model with the EfficientNetV2 model, accuracy and precision increased by 2%, recall increased by 1%, specificity increased by 4%, the F1 score increased by 3%, and the AUC (Area Under the Curve) value increased by 2%. Full article
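
The sketch below shows how an edge-sized classifier's parameter budget can be checked in PyTorch. The architecture is a generic lightweight CNN chosen for illustration, not the published Linge design.

```python
# Parameter-budget check for a small edge classifier (illustrative architecture).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                       # assumed output: low/high risk
)

n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params:,} (~{n_params * 4 / 1e6:.2f} MB as float32)")
```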
16 pages, 2560 KB  
Article
Deep Learning for Intrusion Detection Systems (IDSs) in Time Series Data
by Konstantinos Psychogyios, Andreas Papadakis, Stavroula Bourou, Nikolaos Nikolaou, Apostolos Maniatis and Theodore Zahariadis
Future Internet 2024, 16(3), 73; https://doi.org/10.3390/fi16030073 - 23 Feb 2024
Cited by 18 | Viewed by 5731
Abstract
The advent of computer networks and the internet has drastically altered the means by which we share information and interact with each other. However, this technological advancement has also created opportunities for malevolent behavior, with individuals exploiting vulnerabilities to gain access to confidential data, obstruct activity, etc. To this end, intrusion detection systems (IDSs) are needed to filter malicious traffic and prevent common attacks. In the past, these systems relied on a fixed set of rules or comparisons with previous attacks. However, with the increased availability of computational power and data, machine learning has emerged as a promising solution for this task. While many systems now use this methodology in real-time for a reactive approach to mitigation, we explore the potential of configuring it as a proactive time series prediction. In this work, we delve into this possibility further. More specifically, we convert a classic IDS dataset to a time series format and use predictive models to forecast forthcoming malign packets. We propose a new architecture combining convolutional neural networks, long short-term memory networks, and attention. The findings indicate that our model performs strongly, exhibiting an F1 score and AUC that are within margins of 1% and 3%, respectively, when compared to conventional real-time detection. Also, our architecture achieves an ∼8% F1 score improvement compared to an LSTM (long short-term memory) model. Full article
(This article belongs to the Special Issue Security in the Internet of Things (IoT))
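
A minimal sketch of the time-series reformulation described in the abstract: flow records become sliding windows, and a CNN-LSTM head scores whether the next step is malign. All shapes are assumptions, and the attention component of the proposed architecture is omitted here for brevity.

```python
# Sliding-window forecasting sketch with a CNN-LSTM (attention omitted).
import torch
import torch.nn as nn

def make_windows(series, width=20):
    """series: (T, features) -> (T - width, width, features) sliding windows;
    the step after each window would supply the training label."""
    return torch.stack([series[i:i + width] for i in range(len(series) - width)])

class CNNLSTMForecaster(nn.Module):
    def __init__(self, n_feat=10, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_feat, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # P(next step is malign)

    def forward(self, x):                      # x: (batch, time, n_feat)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.lstm(z)
        return torch.sigmoid(self.head(h.squeeze(0)))

probs = CNNLSTMForecaster()(make_windows(torch.randn(100, 10)))
```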
26 pages, 1791 KB  
Article
The Future of Healthcare with Industry 5.0: Preliminary Interview-Based Qualitative Analysis
by Juliana Basulo-Ribeiro and Leonor Teixeira
Future Internet 2024, 16(3), 68; https://doi.org/10.3390/fi16030068 - 22 Feb 2024
Cited by 39 | Viewed by 13199
Abstract
With the advent of Industry 5.0 (I5.0), healthcare is undergoing a profound transformation, integrating human capabilities with advanced technologies to promote a patient-centered, efficient, and empathetic healthcare ecosystem. This study aims to examine the effects of Industry 5.0 on healthcare, emphasizing the synergy between human experience and technology. To this end, six specific objectives were defined and addressed through an empirical study based on interviews with 11 healthcare professionals. This article thus outlines strategic and policy guidelines for the integration of I5.0 in healthcare, advocating policy-driven change, and contributes to the literature by offering a solid theoretical basis on I5.0 and its impact on the healthcare sector. Full article
(This article belongs to the Special Issue eHealth and mHealth)
38 pages, 1021 KB  
Review
A Systematic Survey on 5G and 6G Security Considerations, Challenges, Trends, and Research Areas
by Paul Scalise, Matthew Boeding, Michael Hempel, Hamid Sharif, Joseph Delloiacovo and John Reed
Future Internet 2024, 16(3), 67; https://doi.org/10.3390/fi16030067 - 20 Feb 2024
Cited by 38 | Viewed by 13828
Abstract
With the rapid rollout and growing adoption of 3GPP 5th Generation (5G) cellular services, including in critical infrastructure sectors, it is important to review security mechanisms, risks, and potential vulnerabilities within this vital technology. Numerous security capabilities need to work together to ensure and maintain a sufficiently secure 5G environment that places user privacy and security at the forefront. Confidentiality, integrity, and availability are all pillars of a privacy and security framework that define major aspects of 5G operations. They are incorporated and considered in the design of the 5G standard by the 3rd Generation Partnership Project (3GPP) with the goal of providing a highly reliable network operation for all. Through a comprehensive review, we aim to analyze the ever-evolving landscape of 5G, including any potential attack vectors and proposed measures to mitigate or prevent these threats. This paper presents a comprehensive survey of the state-of-the-art research that has been conducted in recent years regarding 5G systems, focusing on the main components in a systematic approach: the Core Network (CN), Radio Access Network (RAN), and User Equipment (UE). Additionally, we investigate the utilization of 5G in time-dependent, ultra-confidential, and private communications built around a Zero Trust approach. In today’s world, where everything is more connected than ever, Zero Trust policies and architectures can be highly valuable in operations containing sensitive data. Realizing a Zero Trust Architecture entails continuous verification of all devices, users, and requests, regardless of their location within the network, and grants permission only to authorized entities. Finally, developments and proposed methods of new 5G and future 6G security approaches, such as Blockchain technology, post-quantum cryptography (PQC), and Artificial Intelligence (AI) schemes, are also discussed to understand better the full landscape of current and future research within this telecommunications domain. Full article
(This article belongs to the Special Issue 5G Security: Challenges, Opportunities, and the Road Ahead)
18 pages, 6477 KB  
Article
The Microverse: A Task-Oriented Edge-Scale Metaverse
by Qian Qu, Mohsen Hatami, Ronghua Xu, Deeraj Nagothu, Yu Chen, Xiaohua Li, Erik Blasch, Erika Ardiles-Cruz and Genshe Chen
Future Internet 2024, 16(2), 60; https://doi.org/10.3390/fi16020060 - 13 Feb 2024
Cited by 28 | Viewed by 5336
Abstract
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)
24 pages, 8449 KB  
Article
A Secure Opportunistic Network with Efficient Routing for Enhanced Efficiency and Sustainability
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(2), 56; https://doi.org/10.3390/fi16020056 - 8 Feb 2024
Cited by 6 | Viewed by 3850
Abstract
The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, and sustainable networking solutions has never been more pressing. Opportunistic networks, characterized by intermittent connectivity and dynamic network conditions, present unique challenges that necessitate innovative approaches for optimal performance and sustainability. This paper introduces a groundbreaking paradigm that integrates the principles of cybersecurity with opportunistic networks. At its core, this study presents a novel routing protocol meticulously designed to significantly outperform existing solutions concerning key metrics such as delivery probability, overhead ratio, and communication delay. Leveraging cybersecurity’s inherent strengths, our protocol not only fortifies the network’s security posture but also provides a foundation for enhancing efficiency and sustainability in opportunistic networks. The overarching goal of this paper is to address the inherent limitations of conventional opportunistic network protocols. By proposing an innovative routing protocol, we aim to optimize data delivery, minimize overhead, and reduce communication latency. These objectives are crucial for ensuring seamless and timely information exchange, especially in scenarios where traditional networking infrastructures fall short. Through large-scale simulations, the new model proves its effectiveness in different scenarios, especially in terms of message delivery probability, while ensuring reasonable overhead and latency. Full article
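
Delivery probability, the paper's headline metric, can be illustrated with a toy epidemic-forwarding simulation: nodes meet at random, message copies spread on contact, and we count how often the destination is reached in time. The contact model and parameters are assumptions, not the proposed protocol.

```python
# Toy opportunistic-network simulation estimating delivery probability.
import random

def delivery_probability(n_nodes=30, contacts_per_step=20, steps=50, trials=500):
    delivered = 0
    for _ in range(trials):
        carriers = {0}                                  # node 0 holds the message
        for _ in range(steps):
            for _ in range(contacts_per_step):
                a, b = random.sample(range(n_nodes), 2)
                if a in carriers or b in carriers:
                    carriers |= {a, b}                  # forward a copy on contact
            if n_nodes - 1 in carriers:                 # destination reached
                delivered += 1
                break
    return delivered / trials

print(f"estimated delivery probability: {delivery_probability():.2f}")
```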
14 pages, 3418 KB  
Article
Enhancing Smart City Safety and Utilizing AI Expert Systems for Violence Detection
by Pradeep Kumar, Guo-Liang Shih, Bo-Lin Guo, Siva Kumar Nagi, Yibeltal Chanie Manie, Cheng-Kai Yao, Michael Augustine Arockiyadoss and Peng-Chun Peng
Future Internet 2024, 16(2), 50; https://doi.org/10.3390/fi16020050 - 31 Jan 2024
Cited by 10 | Viewed by 5985
Abstract
Violent attacks have been one of the most pressing issues in recent years. In the presence of closed-circuit televisions (CCTVs) in smart cities, there is an emerging challenge in apprehending criminals, leading to a need for innovative solutions. In this paper, we propose a model aimed at enhancing real-time emergency response capabilities and swiftly identifying criminals. This initiative aims to foster a safer environment and better manage criminal activity within smart cities. The proposed architecture combines an image-to-image stable diffusion model with violence detection and pose estimation approaches. The diffusion model generates synthetic data, while the object detection approach uses YOLO v7 to identify violent objects like baseball bats, knives, and pistols, complemented by MediaPipe for action detection. Further, a long short-term memory (LSTM) network classifies attack actions involving violent objects. Subsequently, the entire proposed model is deployed onto an edge device for real-time data testing using a dash camera. Thus, the system can handle violent attacks and send alerts in emergencies. As a result, our proposed YOLO model achieves a mean average precision (mAP) of 89.5% for violent attack detection, and the LSTM classifier model achieves an accuracy of 88.33% for violent action classification. The results highlight the model’s enhanced capability to accurately detect violent objects, particularly in effectively identifying violence through the implemented artificial intelligence system. Full article
(This article belongs to the Special Issue Challenges in Real-Time Intelligent Systems)
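
A minimal PyTorch sketch of the action-classification stage: sequences of pose keypoints (e.g., 33 MediaPipe landmarks with two coordinates each) are classified by an LSTM. Dimensions, frame count, and class count are assumptions, not the published configuration.

```python
# LSTM classifier over pose-keypoint sequences (illustrative dimensions).
import torch
import torch.nn as nn

class ViolenceLSTM(nn.Module):
    def __init__(self, n_keypoints=66, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_keypoints, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                   # x: (batch, frames, n_keypoints)
        _, (h, _) = self.lstm(x)
        return self.head(h.squeeze(0))      # logits: (batch, n_classes)

clips = torch.randn(4, 30, 66)              # 4 clips of 30 frames each
print(ViolenceLSTM()(clips).shape)          # torch.Size([4, 2])
```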
44 pages, 38595 KB  
Article
Enhancing Urban Resilience: Smart City Data Analyses, Forecasts, and Digital Twin Techniques at the Neighborhood Level
by Andreas F. Gkontzis, Sotiris Kotsiantis, Georgios Feretzakis and Vassilios S. Verykios
Future Internet 2024, 16(2), 47; https://doi.org/10.3390/fi16020047 - 30 Jan 2024
Cited by 54 | Viewed by 10587
Abstract
Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of the urban environment, fostering real-time monitoring, simulation, and analysis of urban systems. This study underscores the significance of real-time monitoring, simulation, and analysis of urban systems to support test scenarios that identify bottlenecks and enhance smart city efficiency. This paper delves into the crucial roles of citizen report analytics, prediction, and digital twin technologies at the neighborhood level. The study integrates extract, transform, load (ETL) processes, artificial intelligence (AI) techniques, and a digital twin methodology to process and interpret urban data streams derived from citizen interactions with the city’s coordinate-based problem mapping platform. Using an interactive GeoDataFrame within the digital twin methodology, dynamic entities facilitate simulations based on various scenarios, allowing users to visualize, analyze, and predict the response of the urban system at the neighborhood level. This approach reveals antecedent and predictive patterns, trends, and correlations at the physical level of each city area, leading to improvements in urban functionality, resilience, and resident quality of life. Full article
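
A small geopandas sketch of the neighborhood-level aggregation such a pipeline performs on coordinate-based citizen reports; the records, coordinates, and column names are invented for illustration, not the study's data.

```python
# Aggregate coordinate-based citizen reports per neighborhood (invented data).
import geopandas as gpd
from shapely.geometry import Point

reports = gpd.GeoDataFrame(
    {"issue": ["pothole", "lighting", "pothole"],
     "neighborhood": ["Center", "Center", "Harbor"]},
    geometry=[Point(21.73, 38.24), Point(21.74, 38.25), Point(21.72, 38.23)],
    crs="EPSG:4326",
)

counts = reports.groupby(["neighborhood", "issue"]).size().unstack(fill_value=0)
print(counts)   # report counts per neighborhood and issue type
```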
29 pages, 743 KB  
Article
TinyML Algorithms for Big Data Management in Large-Scale IoT Systems
by Aristeidis Karras, Anastasios Giannaros, Christos Karras, Leonidas Theodorakopoulos, Constantinos S. Mammassis, George A. Krimpas and Spyros Sioutas
Future Internet 2024, 16(2), 42; https://doi.org/10.3390/fi16020042 - 25 Jan 2024
Cited by 38 | Viewed by 7522
Abstract
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms on varying numbers of Raspberry Pi devices, ranging from one to ten. The experimental results are promising, as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)
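
In the spirit of the edge-side cleaning step (though not the authors' TinyCleanEDF algorithm), the sketch below flags sensor readings with a robust modified z-score before they are stored or forwarded; the threshold of 3.5 is the conventional Iglewicz–Hoaglin cutoff.

```python
# Robust outlier filtering for edge sensor streams (not the paper's algorithm).
import numpy as np

def clean(readings: np.ndarray, thresh: float = 3.5):
    """Split readings into (kept, anomalies) using the modified z-score,
    which is based on the median and MAD and thus robust to outliers."""
    med = np.median(readings)
    mad = np.median(np.abs(readings - med))          # assumes MAD > 0
    z = 0.6745 * np.abs(readings - med) / mad
    return readings[z <= thresh], readings[z > thresh]

kept, anomalies = clean(np.array([21.0, 21.2, 20.9, 58.0, 21.1]))
print("kept:", kept, "anomalies:", anomalies)        # 58.0 is flagged
```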
57 pages, 2070 KB  
Review
A Holistic Analysis of Internet of Things (IoT) Security: Principles, Practices, and New Perspectives
by Mahmud Hossain, Golam Kayas, Ragib Hasan, Anthony Skjellum, Shahid Noor and S. M. Riazul Islam
Future Internet 2024, 16(2), 40; https://doi.org/10.3390/fi16020040 - 24 Jan 2024
Cited by 34 | Viewed by 12453
Abstract
Driven by the rapid escalation of their utilization, as well as ramping commercialization, Internet of Things (IoT) devices increasingly face security threats. Apart from denial of service, privacy, and safety concerns, compromised devices can be used as enablers for committing a variety of crimes and e-crimes. Despite ongoing research and study, there remains a significant gap in the thorough analysis of security challenges, feasible solutions, and open security problems for IoT. To bridge this gap, we provide a comprehensive overview of the state of the art in IoT security with a critical investigation-based approach. This includes a detailed analysis of vulnerabilities in IoT-based systems and potential attacks. We present a holistic review of the security properties required to be adopted by IoT devices, applications, and services to mitigate IoT vulnerabilities and, thus, successful attacks. Moreover, we identify challenges to the design of security protocols for IoT systems in which constituent devices vary markedly in capability (such as storage, computation speed, hardware architecture, and communication interfaces). Next, we review existing research and feasible solutions for IoT security. We highlight a set of open problems not yet addressed among existing security solutions. We provide a set of new perspectives for future research on such issues including secure service discovery, on-device credential security, and network anomaly detection. We also provide directions for designing a forensic investigation framework for IoT infrastructures to inspect relevant criminal cases, execute a cyber forensic process, and determine the facts about a given incident. This framework offers a means to better capture information on successful attacks as part of a feedback mechanism to thwart future vulnerabilities and threats. This systematic holistic review will both inform on current challenges in IoT security and ideally motivate their future resolution. Full article
(This article belongs to the Special Issue Cyber Security in the New "Edge Computing + IoT" World)
27 pages, 2022 KB  
Review
Overview of Protocols and Standards for Wireless Sensor Networks in Critical Infrastructures
by Spyridon Daousis, Nikolaos Peladarinos, Vasileios Cheimaras, Panagiotis Papageorgas, Dimitrios D. Piromalis and Radu Adrian Munteanu
Future Internet 2024, 16(1), 33; https://doi.org/10.3390/fi16010033 - 21 Jan 2024
Cited by 39 | Viewed by 7111
Abstract
This paper highlights the crucial role of wireless sensor networks (WSNs) in the surveillance and administration of critical infrastructures (CIs), contributing to their reliability, security, and operational efficiency. It starts by detailing the international significance and structural aspects of these infrastructures, notes the growing market attention in recent years to the gradual development of wireless networks for industrial applications, and proceeds to categorize WSNs and examine the protocols and standards of WSNs in demanding environments like critical infrastructures, drawing on the recent literature. This review concentrates on the protocols and standards utilized in WSNs for critical infrastructures, and it concludes by identifying a notable gap in the literature concerning quality standards for equipment used in such infrastructures. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)
42 pages, 2733 KB  
Review
A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks
by Hassan Khazane, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Future Internet 2024, 16(1), 32; https://doi.org/10.3390/fi16010032 - 19 Jan 2024
Cited by 38 | Viewed by 10368
Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
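
The canonical gradient-based evasion attack covered by such taxonomies is the fast gradient sign method (FGSM), sketched below in PyTorch: the input is nudged in the direction of the sign of the loss gradient. The model and data here are placeholders, not an IoT security system from the review.

```python
# FGSM adversarial-example sketch with a placeholder model.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the gradient, keeping pixels in [0, 1].
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm(model, x, y)
```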
17 pages, 3053 KB  
Article
Proximal Policy Optimization for Efficient D2D-Assisted Computation Offloading and Resource Allocation in Multi-Access Edge Computing
by Chen Zhang, Celimuge Wu, Min Lin, Yangfei Lin and William Liu
Future Internet 2024, 16(1), 19; https://doi.org/10.3390/fi16010019 - 2 Jan 2024
Cited by 18 | Viewed by 5759
Abstract
In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. However, the inherent complexity and variability of MEC networks pose significant challenges in computational offloading decisions. To tackle this problem, we propose a proximal policy optimization (PPO)-based Device-to-Device (D2D)-assisted computation offloading and resource allocation scheme. We construct a realistic MEC network environment and develop a Markov decision process (MDP) model that minimizes time loss and energy consumption. The integration of a D2D communication-based offloading framework allows for collaborative task offloading between end devices and MEC servers, enhancing both resource utilization and computational efficiency. The MDP model is solved using the PPO algorithm in deep reinforcement learning to derive an optimal policy for offloading and resource allocation. Extensive comparative analysis with three benchmarked approaches has confirmed our scheme’s superior performance in latency, energy consumption, and algorithmic convergence, demonstrating its potential to improve MEC network operations in the context of emerging 5G and beyond technologies. Full article
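
At the core of PPO is the clipped surrogate objective, sketched below; the log-probabilities and advantages are random placeholders rather than values from an MEC offloading environment, and the full actor-critic training loop is omitted.

```python
# PPO clipped surrogate loss (placeholder inputs, not an MEC environment).
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    ratio = torch.exp(new_logp - old_logp)          # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()    # negate to maximize

new_logp = torch.randn(64, requires_grad=True)      # stand-in policy outputs
loss = ppo_clip_loss(new_logp, torch.randn(64), torch.randn(64))
loss.backward()                                     # gradient for the policy update
```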
34 pages, 2309 KB  
Review
Edge AI for Early Detection of Chronic Diseases and the Spread of Infectious Diseases: Opportunities, Challenges, and Future Directions
by Elarbi Badidi
Future Internet 2023, 15(11), 370; https://doi.org/10.3390/fi15110370 - 18 Nov 2023
Cited by 48 | Viewed by 22319
Abstract
Edge AI, an interdisciplinary technology that enables distributed intelligence with edge devices, is quickly becoming a critical component in early health prediction. Edge AI encompasses data analytics and artificial intelligence (AI) using machine learning, deep learning, and federated learning models deployed and executed at the edge of the network, far from centralized data centers. AI enables the careful analysis of large datasets derived from multiple sources, including electronic health records, wearable devices, and demographic information, making it possible to identify intricate patterns and predict a person’s future health. Federated learning, a novel approach in AI, further enhances this prediction by enabling collaborative training of AI models on distributed edge devices while maintaining privacy. Using edge computing, data can be processed and analyzed locally, reducing latency and enabling instant decision making. This article reviews the role of Edge AI in early health prediction and highlights its potential to improve public health. Topics covered include the use of AI algorithms for early detection of chronic diseases such as diabetes and cancer and the use of edge computing in wearable devices to detect the spread of infectious diseases. In addition to discussing the challenges and limitations of Edge AI in early health prediction, this article emphasizes future research directions to address these concerns, integrate with existing healthcare systems, and explore the full potential of these technologies in improving public health. Full article
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)
23 pages, 14269 KB  
Article
Implementation and Evaluation of a Federated Learning Framework on Raspberry PI Platforms for IoT 6G Applications
by Lorenzo Ridolfi, David Naseh, Swapnil Sadashiv Shinde and Daniele Tarchi
Future Internet 2023, 15(11), 358; https://doi.org/10.3390/fi15110358 - 31 Oct 2023
Cited by 10 | Viewed by 4821
Abstract
With the advent of 6G technology, the proliferation of interconnected devices necessitates a robust, fully connected intelligence network. Federated Learning (FL) stands as a key distributed learning technique, showing promise in recent advancements. However, the integration of novel Internet of Things (IoT) applications and virtualization technologies has introduced diverse and heterogeneous devices into wireless networks. This diversity encompasses variations in computation, communication, storage resources, training data, and communication modes among connected nodes. In this context, our study presents a pivotal contribution by analyzing and implementing FL processes tailored for 6G standards. Our work defines a practical FL platform, employing Raspberry Pi devices and virtual machines as client nodes, with a Windows PC serving as a parameter server. We tackle the image classification challenge, implementing the FL model via PyTorch, augmented by the specialized FL library, Flower. Notably, our analysis delves into the impact of computational resources, data availability, and heating issues across heterogeneous device sets. Additionally, we address knowledge transfer and employ pre-trained networks in our FL performance evaluation. This research underscores the indispensable role of artificial intelligence in IoT scenarios within the 6G landscape, providing a comprehensive framework for FL implementation across diverse and heterogeneous devices. Full article
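
A skeletal Flower client matching the platform described above: a PyTorch model's weights are exchanged with a parameter server over the NumPyClient interface. The server address and model are placeholders, local training is elided, and the API shown is the classic Flower 1.x entry point.

```python
# Skeletal Flower federated-learning client (placeholders throughout).
import flwr as fl
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                           # stand-in for the real model

class PiClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [p.detach().numpy() for p in model.parameters()]

    def set_parameters(self, parameters):
        for p, new in zip(model.parameters(), parameters):
            p.data = torch.tensor(new)

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        # ... local training on this device's data would go here ...
        return self.get_parameters(config), 1, {}

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        return 0.0, 1, {"accuracy": 0.0}           # placeholder metrics

fl.client.start_numpy_client(server_address="192.168.1.10:8080", client=PiClient())
```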
32 pages, 419 KB  
Article
The 6G Ecosystem as Support for IoE and Private Networks: Vision, Requirements, and Challenges
by Carlos Serôdio, José Cunha, Guillermo Candela, Santiago Rodriguez, Xosé Ramón Sousa and Frederico Branco
Future Internet 2023, 15(11), 348; https://doi.org/10.3390/fi15110348 - 25 Oct 2023
Cited by 65 | Viewed by 6888
Abstract
The emergence of the sixth generation of cellular systems (6G) signals a transformative era and ecosystem for mobile communications, driven by demands from technologies like the internet of everything (IoE), V2X communications, and factory automation. To support this connectivity, mission-critical applications are emerging with challenging network requirements. The primary goals of 6G include providing sophisticated and high-quality services, further-enhanced mobile broadband (feMBB), extremely reliable low-latency communication (ERLLC), long-distance and high-mobility communications (LDHMC), ultra-massive machine-type communications (umMTC), extremely low-power communications (ELPC), holographic communications, and quality of experience (QoE), grounded in incorporating massive broad-bandwidth machine-type (mBBMT), mobile broad-bandwidth and low-latency (MBBLL), and massive low-latency machine-type (mLLMT) communications. In attaining its objectives, 6G faces challenges that demand inventive solutions, incorporating AI, softwarization, cloudification, virtualization, and slicing features. Technologies like network function virtualization (NFV), network slicing, and software-defined networking (SDN) play pivotal roles in this integration, which facilitates efficient resource utilization, responsive service provisioning, expanded coverage, enhanced network reliability, increased capacity, densification, heightened availability, safety, security, and reduced energy consumption. The paper presents innovative network infrastructure concepts, such as resource-as-a-service (RaaS) and infrastructure-as-a-service (IaaS), featuring management and service orchestration mechanisms. This includes nomadic networks, AI-aware networking strategies, and dynamic management of diverse network resources. This paper provides an in-depth survey of the wireless evolution leading to 6G networks, addressing future issues and challenges associated with 6G technology to support V2X environments, considering challenges in architecture, spectrum, air interface, reliability, availability, density, flexibility, mobility, and security. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)
26 pages, 4052 KB  
Article
Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities
by Edisa Lozić and Benjamin Štular
Future Internet 2023, 15(10), 336; https://doi.org/10.3390/fi15100336 - 13 Oct 2023
Cited by 52 | Viewed by 15872
Abstract
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software. Full article
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)