Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

23 pages, 714 KiB  
Review
Smart Irrigation Systems from Cyber–Physical Perspective: State of Art and Future Directions
by Mian Qian, Cheng Qian, Guobin Xu, Pu Tian and Wei Yu
Future Internet 2024, 16(7), 234; https://doi.org/10.3390/fi16070234 - 29 Jun 2024
Cited by 9 | Viewed by 3628
Abstract
Irrigation refers to supplying water to soil through pipes, pumps, and spraying systems to ensure even distribution across the field. In traditional farming or gardening, the setup and use of an agricultural irrigation system rely solely on the personal experience of farmers. The Food and Agriculture Organization of the United Nations (UN) has projected that by 2030, developing countries will expand their irrigated areas by 34%, while water consumption will rise by only 14%. This discrepancy highlights the importance of accurately monitoring water flow and volume rather than relying on rough estimates. Smart irrigation systems, a key subsystem of smart agriculture and a cyber–physical system (CPS) in the agricultural domain, automate the administration of water flow, volume, and timing using cutting-edge technologies, especially the Internet of Things (IoT), to address these challenges. This study explores a comprehensive three-dimensional problem space to thoroughly analyze the IoT's applications in irrigation systems. Our framework encompasses several critical domains in smart irrigation systems: soil science, sensor technology, communication protocols, data analysis techniques, and the practical implementations of automated irrigation systems, such as remote monitoring, autonomous operation, and intelligent decision-making processes. Finally, we discuss a few challenges and outline future research directions in this promising field. Full article
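To make the control problem concrete, here is a minimal sketch of the kind of sensor-driven decision loop a CPS-based irrigation system automates: a hysteresis controller opens or closes a valve from soil-moisture readings. The thresholds, sensor stub, and flow accounting are illustrative assumptions, not the systems surveyed in the article.

```python
import random
import time

# Hypothetical thresholds (volumetric water content, %); real values are crop- and soil-specific.
MOISTURE_LOW = 25.0   # start irrigating below this
MOISTURE_HIGH = 35.0  # stop irrigating above this (hysteresis avoids valve chatter)

def read_soil_moisture() -> float:
    """Stand-in for an IoT soil-moisture sensor reading."""
    return random.uniform(15.0, 45.0)

def control_step(valve_open: bool, moisture: float) -> bool:
    """Simple hysteresis controller: decide whether the valve should be open."""
    if moisture < MOISTURE_LOW:
        return True
    if moisture > MOISTURE_HIGH:
        return False
    return valve_open  # inside the band: keep the current state

if __name__ == "__main__":
    valve_open = False
    litres_delivered = 0.0
    for _ in range(10):               # ten control cycles
        moisture = read_soil_moisture()
        valve_open = control_step(valve_open, moisture)
        if valve_open:
            litres_delivered += 2.5   # assumed flow per cycle
        print(f"moisture={moisture:5.1f}%  valve={'OPEN' if valve_open else 'closed'}")
        time.sleep(0.01)
    print(f"total water delivered (simulated): {litres_delivered:.1f} L")
```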

36 pages, 3662 KiB  
Article
Enhancing Network Slicing Security: Machine Learning, Software-Defined Networking, and Network Functions Virtualization-Driven Strategies
by José Cunha, Pedro Ferreira, Eva M. Castro, Paula Cristina Oliveira, Maria João Nicolau, Iván Núñez, Xosé Ramon Sousa and Carlos Serôdio
Future Internet 2024, 16(7), 226; https://doi.org/10.3390/fi16070226 - 27 Jun 2024
Cited by 11 | Viewed by 6502
Abstract
The rapid development of 5G networks and the anticipation of 6G technologies have ushered in an era of highly customizable network environments facilitated by the innovative concept of network slicing. This technology allows the creation of multiple virtual networks on the same physical infrastructure, each optimized for specific service requirements. Despite its numerous benefits, network slicing introduces significant security vulnerabilities that must be addressed to prevent exploitation by increasingly sophisticated cyber threats. This review explores the application of cutting-edge technologies—Artificial Intelligence (AI), specifically Machine Learning (ML), Software-Defined Networking (SDN), and Network Functions Virtualization (NFV)—in crafting advanced security solutions tailored for network slicing. AI’s predictive threat detection and automated response capabilities are analysed, highlighting its role in maintaining service integrity and resilience. Meanwhile, SDN and NFV are scrutinized for their ability to enforce flexible security policies and manage network functionalities dynamically, thereby enhancing the adaptability of security measures to meet evolving network demands. Thoroughly examining the current literature and industry practices, this paper identifies critical research gaps in security frameworks and proposes innovative solutions. We advocate for a holistic security strategy integrating ML, SDN, and NFV to enhance data confidentiality, integrity, and availability across network slices. The paper concludes with future research directions to develop robust, scalable, and efficient security frameworks capable of supporting the safe deployment of network slicing in next-generation networks. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
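As a deliberately simplified illustration of the ML-driven monitoring the review surveys, the sketch below fits an unsupervised anomaly detector to synthetic per-slice traffic statistics; the feature set and values are assumptions for illustration, not a method proposed by the authors.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic per-slice telemetry: [throughput (Mbps), latency (ms), packet loss (%)]
normal = np.column_stack([
    rng.normal(100, 10, 500),   # throughput
    rng.normal(20, 3, 500),     # latency
    rng.normal(0.5, 0.2, 500),  # loss
])
anomalous = np.array([[30.0, 120.0, 6.0],    # e.g., a flooded or misconfigured slice
                      [250.0, 5.0, 0.1]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for sample in np.vstack([normal[:3], anomalous]):
    label = detector.predict(sample.reshape(1, -1))[0]   # +1 = normal, -1 = anomaly
    print(sample, "->", "anomaly" if label == -1 else "normal")
```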

12 pages, 1053 KiB  
Article
Adapting Self-Regulated Learning in an Age of Generative Artificial Intelligence Chatbots
by Joel Weijia Lai
Future Internet 2024, 16(6), 218; https://doi.org/10.3390/fi16060218 - 20 Jun 2024
Cited by 10 | Viewed by 6356
Abstract
The increasing use of generative artificial intelligence (GenAI) has led to a rise in conversations about how teachers and students should adopt these tools to enhance the learning process. Self-regulated learning (SRL) research is important for addressing this question. A popular form of GenAI is the large language model chatbot, which allows users to seek answers to their queries. This article seeks to adapt current SRL models to understand student learning with these chatbots. This is achieved by classifying the prompts supplied by a learner to an educational chatbot into learning actions and processes using the process–action library. Subsequently, through process mining, we can analyze these data to provide valuable insights for learners, educators, instructional designers, and researchers into the possible applications of chatbots for SRL. Full article
(This article belongs to the Special Issue ICT and AI in Intelligent E-systems)
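A minimal sketch of the classification step described above, assuming a hypothetical keyword-based mapping from learner prompts to SRL actions; the action labels and keyword lists are illustrative stand-ins for a process–action library, and the resulting event log is what a process-mining tool would consume.

```python
# Hypothetical SRL action labels and trigger keywords, for illustration only.
SRL_ACTIONS = {
    "orientation": ["what is", "explain", "define"],
    "planning":    ["how should i", "plan", "outline", "steps to"],
    "elaboration": ["example", "why does", "compare"],
    "monitoring":  ["check my", "is this correct", "did i"],
    "evaluation":  ["grade", "feedback on", "how well"],
}

def classify_prompt(prompt: str) -> str:
    """Assign a learner prompt to one SRL learning action (falls back to 'other')."""
    text = prompt.lower()
    for action, keywords in SRL_ACTIONS.items():
        if any(k in text for k in keywords):
            return action
    return "other"

chat_log = [
    "What is self-regulated learning?",
    "How should I plan my revision for the physics exam?",
    "Is this correct: F = ma applies only to constant mass?",
]
for prompt in chat_log:
    print(classify_prompt(prompt), "<-", prompt)
```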

40 pages, 5898 KiB  
Article
Authentication and Key Agreement Protocol in Hybrid Edge–Fog–Cloud Computing Enhanced by 5G Networks
by Jiayi Zhang, Abdelkader Ouda and Raafat Abu-Rukba
Future Internet 2024, 16(6), 209; https://doi.org/10.3390/fi16060209 - 14 Jun 2024
Cited by 8 | Viewed by 1990
Abstract
The Internet of Things (IoT) has revolutionized connected devices, with applications in healthcare, data analytics, and smart cities. For time-sensitive applications, 5G wireless networks provide ultra-reliable low-latency communication (URLLC) and fog computing offloads IoT processing. Integrating 5G and fog computing can address cloud computing’s deficiencies, but security challenges remain, especially in Authentication and Key Agreement aspects due to the distributed and dynamic nature of fog computing. This study presents an innovative mutual Authentication and Key Agreement protocol that is specifically tailored to meet the security needs of fog computing in the context of the edge–fog–cloud three-tier architecture, enhanced by the incorporation of the 5G network. This study improves security in the edge–fog–cloud context by introducing a stateless authentication mechanism and conducting a comparative analysis of the proposed protocol with well-known alternatives, such as TLS 1.3, 5G-AKA, and various handover protocols. The suggested approach has a total transmission cost of only 1280 bits in the authentication phase, which is approximately 30% lower than other protocols. In addition, the suggested handover protocol only involves two signaling expenses. The computational cost for handover authentication for the edge user is significantly low, measuring 0.243 ms, which is under 10% of the computing costs of other authentication protocols. Full article
(This article belongs to the Special Issue Key Enabling Technologies for Beyond 5G Networks)
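The paper's protocol is not reproduced here, but the sketch below shows the generic building block such schemes rely on: an ephemeral Diffie–Hellman exchange followed by key derivation, leaving an edge node and a fog node with the same session key. It is a generic illustration using the `cryptography` package, not the proposed protocol, and it omits the authentication of the exchanged public keys that the actual protocol provides.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_session_key(my_private, peer_public, context: bytes) -> bytes:
    """ECDH exchange followed by HKDF, yielding a 256-bit session key."""
    shared_secret = my_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=context).derive(shared_secret)

# Each party generates an ephemeral key pair. In a real protocol these would be
# authenticated (signatures or pre-shared credentials) to prevent man-in-the-middle attacks.
edge_priv = X25519PrivateKey.generate()
fog_priv = X25519PrivateKey.generate()

edge_key = derive_session_key(edge_priv, fog_priv.public_key(), b"edge-fog session")
fog_key = derive_session_key(fog_priv, edge_priv.public_key(), b"edge-fog session")

assert edge_key == fog_key          # both sides now hold the same symmetric key
print("session key:", edge_key.hex())
```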

32 pages, 1109 KiB  
Article
Impact, Compliance, and Countermeasures in Relation to Data Breaches in Publicly Traded U.S. Companies
by Gabriel Arquelau Pimenta Rodrigues, André Luiz Marques Serrano, Guilherme Fay Vergara, Robson de Oliveira Albuquerque and Georges Daniel Amvame Nze
Future Internet 2024, 16(6), 201; https://doi.org/10.3390/fi16060201 - 5 Jun 2024
Cited by 10 | Viewed by 7474
Abstract
A data breach is the unauthorized disclosure of sensitive personal data, and it impacts millions of individuals annually in the United States, as reported by Privacy Rights Clearinghouse. These breaches jeopardize the physical safety of the individuals whose data are exposed and result in substantial economic losses for the affected companies. To diminish the frequency and severity of data breaches in the future, it is imperative to research their causes and explore preventive measures. In pursuit of this goal, this study considers a dataset of data breach incidents affecting companies listed on the New York Stock Exchange and NASDAQ. This dataset has been augmented with additional information regarding the targeted company. This paper employs statistical visualizations of the data to clarify these incidents and assess their consequences on the affected companies and individuals whose data were compromised. We then propose mitigation controls based on established frameworks such as the NIST Cybersecurity Framework. Additionally, this paper reviews the compliance scenario by examining the relevant laws and regulations applicable to each case, including SOX, HIPAA, GLBA, and PCI-DSS, and evaluates the impacts of data breaches on stock market prices. We also review guidelines for appropriately responding to data leaks in the U.S., for compliance achievement and cost reduction. By conducting this analysis, this work aims to contribute to a comprehensive understanding of data breaches and empower organizations to safeguard against them proactively, improving the technical quality of their basic services. To our knowledge, this is the first paper to address compliance with data protection regulations, security controls as countermeasures, financial impacts on stock prices, and incident response strategies. Although the discussion is focused on publicly traded companies in the United States, it may also apply to public and private companies worldwide. Full article
(This article belongs to the Collection Information Systems Security)

22 pages, 2903 KiB  
Article
Implementation of Lightweight Machine Learning-Based Intrusion Detection System on IoT Devices of Smart Homes
by Abbas Javed, Amna Ehtsham, Muhammad Jawad, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi and Hadi Larijani
Future Internet 2024, 16(6), 200; https://doi.org/10.3390/fi16060200 - 5 Jun 2024
Cited by 8 | Viewed by 4147
Abstract
Smart home devices, also known as IoT devices, provide significant convenience; however, they also present opportunities for attackers to jeopardize homeowners’ security and privacy. Securing these IoT devices is a formidable challenge because of their limited computational resources. Machine learning-based intrusion detection systems (IDSs) have been implemented on the edge and the cloud; however, IDSs have not been embedded in IoT devices. To address this, we propose a novel machine learning-based two-layered IDS for smart home IoT devices, enhancing accuracy and computational efficiency. The first layer of the proposed IDS is deployed on a microcontroller-based smart thermostat, which uploads the data to a website hosted on a cloud server. The second layer of the IDS is deployed on the cloud side for classification of attacks. The proposed IDS can detect the threats with an accuracy of 99.50% at cloud level (multiclassification). For real-time testing, we implemented the Raspberry Pi 4-based adversary to generate a dataset for man-in-the-middle (MITM) and denial of service (DoS) attacks on smart thermostats. The results show that the XGBoost-based IDS detects MITM and DoS attacks in 3.51 ms on a smart thermostat with an accuracy of 97.59%. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
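For readers unfamiliar with the cloud-side classifier, the sketch below trains an XGBoost multiclass model on synthetic traffic features standing in for benign, MITM, and DoS flows; the data and hyperparameters are illustrative assumptions, not the smart-thermostat dataset or the tuned models from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for flow features (packet rates, byte counts, ...): 3 classes
# playing the role of benign / MITM / DoS traffic. Not the dataset used in the paper.
X, y = make_classification(n_samples=3000, n_features=12, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.2,
                      objective="multi:softprob", eval_metric="mlogloss")
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```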

21 pages, 718 KiB  
Review
Using ChatGPT in Software Requirements Engineering: A Comprehensive Review
by Nuno Marques, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2024, 16(6), 180; https://doi.org/10.3390/fi16060180 - 21 May 2024
Cited by 16 | Viewed by 10276
Abstract
Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still emerging and remains in its early stages. This paper analyzes the role of large language models (LLMs), such as ChatGPT-3.5, in software requirements engineering, a critical area in software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and provides strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process. Full article

30 pages, 5255 KiB  
Article
Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection
by Muhammad Imran, Annalisa Appice and Donato Malerba
Future Internet 2024, 16(5), 168; https://doi.org/10.3390/fi16050168 - 12 May 2024
Cited by 7 | Viewed by 4129
Abstract
During the last decade, the cybersecurity literature has conferred a high-level role to machine learning as a powerful security paradigm to recognise malicious software in modern anti-malware systems. However, a non-negligible limitation of machine learning methods used to train decision models is that adversarial attacks can easily fool them. Adversarial attacks are attack samples produced by carefully manipulating the samples at the test time to violate the model integrity by causing detection mistakes. In this paper, we analyse the performance of five realistic target-based adversarial attacks, namely Extend, Full DOS, Shift, FGSM padding + slack and GAMMA, against two machine learning models, namely MalConv and LGBM, learned to recognise Windows Portable Executable (PE) malware files. Specifically, MalConv is a Convolutional Neural Network (CNN) model learned from the raw bytes of Windows PE files. LGBM is a Gradient-Boosted Decision Tree model that is learned from features extracted through the static analysis of Windows PE files. Notably, the attack methods and machine learning models considered in this study are state-of-the-art methods broadly used in the machine learning literature for Windows PE malware detection tasks. In addition, we explore the effect of accounting for adversarial attacks on securing machine learning models through the adversarial training strategy. Therefore, the main contributions of this article are as follows: (1) We extend existing machine learning studies that commonly consider small datasets to explore the evasion ability of state-of-the-art Windows PE attack methods by increasing the size of the evaluation dataset. (2) To the best of our knowledge, we are the first to carry out an exploratory study to explain how the considered adversarial attack methods change Windows PE malware to fool an effective decision model. (3) We explore the performance of the adversarial training strategy as a means to secure effective decision models against adversarial Windows PE malware files generated with the considered attack methods. Hence, the study explains how GAMMA can actually be considered the most effective evasion method for the performed comparative analysis. On the other hand, the study shows that the adversarial training strategy can actually help in recognising adversarial PE malware generated with GAMMA by also explaining how it changes model decisions. Full article
(This article belongs to the Collection Information Systems Security)
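As a generic illustration of the adversarial training strategy discussed above (not the byte-level Windows PE setting, GAMMA, or the MalConv/LGBM models from the paper), the sketch below mixes FGSM-perturbed samples into the training batches of a small classifier on synthetic feature vectors.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic binary "malware vs. benign" feature vectors; a stand-in for static features.
X = torch.randn(512, 20)
y = (X[:, :5].sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.1):
    """Generate FGSM adversarial examples against the current model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

for epoch in range(20):
    # Adversarial training: mix clean and adversarially perturbed samples each step.
    x_adv = fgsm(X, y)
    inputs, targets = torch.cat([X, x_adv]), torch.cat([y, y])
    opt.zero_grad()
    loss_fn(model(inputs), targets).backward()
    opt.step()

x_adv_eval = fgsm(X, y)
with torch.no_grad():
    clean_acc = (model(X).argmax(1) == y).float().mean().item()
    adv_acc = (model(x_adv_eval).argmax(1) == y).float().mean().item()
print(f"clean acc={clean_acc:.2f}  adversarial acc={adv_acc:.2f}")
```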

20 pages, 7164 KiB  
Review
A Comprehensive Review of Machine Learning Approaches for Anomaly Detection in Smart Homes: Experimental Analysis and Future Directions
by Md Motiur Rahman, Deepti Gupta, Smriti Bhatt, Shiva Shokouhmand and Miad Faezipour
Future Internet 2024, 16(4), 139; https://doi.org/10.3390/fi16040139 - 19 Apr 2024
Cited by 7 | Viewed by 2976
Abstract
Detecting anomalies in human activities is increasingly crucial today, particularly in nuclear family settings, where there may not be constant monitoring of individuals’ health, especially the elderly, during critical periods. Early anomaly detection can help prevent attack scenarios and life-threatening situations. This task becomes notably more complex when multiple ambient sensors are deployed in homes with multiple residents, as opposed to single-resident environments. Additionally, the availability of datasets containing anomalies representing the full spectrum of abnormalities is limited. In our experimental study, we employed eight widely used machine learning and two deep learning classifiers to identify anomalies in human activities. We meticulously generated anomalies, considering all conceivable scenarios. Our findings reveal that the Gated Recurrent Unit (GRU) excels in accurately classifying normal and anomalous activities, while the naïve Bayes classifier demonstrates relatively poor performance among the ten classifiers considered. We conducted various experiments to assess the impact of different training–test splitting ratios, along with a five-fold cross-validation technique, on the performance. Notably, the GRU model consistently outperformed all other classifiers under both conditions. Furthermore, we offer insights into the computational costs associated with these classifiers, encompassing training and prediction phases. Extensive ablation experiments conducted in this study underscore that all these classifiers can effectively be deployed for anomaly detection in two-resident homes. Full article
(This article belongs to the Special Issue Machine Learning for Blockchain and IoT Systems in Smart City)
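A minimal sketch of a GRU-based activity classifier of the kind compared in the study, trained on synthetic windows of ambient-sensor activations; the data, labelling rule, and hyperparameters are illustrative assumptions rather than the datasets or models evaluated in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for windows of ambient-sensor activations:
# 256 sequences, 30 time steps, 8 binary sensors; label 1 = anomalous activity.
X = torch.randint(0, 2, (256, 30, 8)).float()
y = (X.sum(dim=(1, 2)) > 120).long()   # toy labelling rule, for illustration only

class GRUClassifier(nn.Module):
    def __init__(self, n_sensors=8, hidden=32, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h_n = self.gru(x)            # h_n: last hidden state, shape (1, batch, hidden)
        return self.head(h_n.squeeze(0))

model = GRUClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(30):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

with torch.no_grad():
    acc = (model(X).argmax(1) == y).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```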

21 pages, 991 KiB  
Article
Metaverse Meets Smart Cities—Applications, Benefits, and Challenges
by Florian Maier and Markus Weinberger
Future Internet 2024, 16(4), 126; https://doi.org/10.3390/fi16040126 - 8 Apr 2024
Cited by 10 | Viewed by 3432
Abstract
The metaverse aims to merge the virtual and real worlds. The goal is to create a virtual community in which social components play a crucial role and which combines different areas such as entertainment, work, shopping, and services. This idea is particularly appealing in the context of smart cities. The metaverse offers digitalization approaches and can strengthen citizens’ social community. While the existing literature illustrates the potential of individual smart city metaverse applications, this study aims to provide a comprehensive overview of potential and already implemented metaverse applications in the context of cities and municipalities. In addition, challenges related to these applications are identified. The study combines literature reviews and expert interviews to ensure a broad overview. Forty-eight smart city metaverse applications from eleven areas were identified, and actual projects from eleven cities demonstrate the current state of development. Still, further research should evaluate the benefits of the various applications and find strategies to overcome the identified challenges. Full article

13 pages, 395 KiB  
Article
Efficient and Secure Distributed Data Storage and Retrieval Using Interplanetary File System and Blockchain
by Muhammad Bin Saif, Sara Migliorini and Fausto Spoto
Future Internet 2024, 16(3), 98; https://doi.org/10.3390/fi16030098 - 15 Mar 2024
Cited by 12 | Viewed by 4188
Abstract
Blockchain technology has been successfully applied in recent years to promote the immutability, traceability, and authenticity of previously collected and stored data. However, the amount of data stored in the blockchain is usually limited due to economic and technological constraints. Namely, the blockchain usually stores only a fingerprint of the data, such as its hash, while the full, raw information is stored off-chain. This is generally enough to guarantee immutability and traceability, but it fails to support another important property, namely data availability. This is particularly true when a traditional, centralized database is chosen for off-chain storage. For this reason, many proposals try to properly combine blockchain with decentralized IPFS storage. However, the storage of data on IPFS could pose some privacy problems. This paper proposes a solution that properly combines blockchain, IPFS, and encryption techniques to guarantee immutability, traceability, availability, and data privacy. Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends)
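The sketch below illustrates the general pattern the paper builds on: encrypt the payload, keep the ciphertext off-chain, and anchor only its digest on-chain so that integrity can be verified on retrieval. The in-memory stores stand in for IPFS and the blockchain, and the single symmetric key is a simplifying assumption, not the paper's key-management scheme.

```python
import hashlib
from cryptography.fernet import Fernet

# Stand-ins for the two storage tiers: a list of on-chain records (fingerprints only)
# and a dict playing the role of IPFS content-addressed storage.
chain: list[dict] = []
offchain_store: dict[str, bytes] = {}

key = Fernet.generate_key()          # in practice, managed per data owner / access policy
cipher = Fernet(key)

def store(raw: bytes) -> str:
    """Encrypt the payload, keep ciphertext off-chain, anchor its digest on-chain."""
    ciphertext = cipher.encrypt(raw)
    digest = hashlib.sha256(ciphertext).hexdigest()   # content address / fingerprint
    offchain_store[digest] = ciphertext
    chain.append({"cid": digest})                     # only the fingerprint goes on-chain
    return digest

def retrieve(cid: str) -> bytes:
    """Fetch the ciphertext off-chain, verify it against the on-chain digest, decrypt."""
    ciphertext = offchain_store[cid]
    assert hashlib.sha256(ciphertext).hexdigest() == cid, "tampering detected"
    return cipher.decrypt(ciphertext)

cid = store(b"patient record #42")
print("on-chain fingerprint:", cid)
print("recovered:", retrieve(cid))
```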

17 pages, 2344 KiB  
Article
An Advanced Path Planning and UAV Relay System: Enhancing Connectivity in Rural Environments
by Mostafa El Debeiki, Saba Al-Rubaye, Adolfo Perrusquía, Christopher Conrad and Juan Alejandro Flores-Campos
Future Internet 2024, 16(3), 89; https://doi.org/10.3390/fi16030089 - 6 Mar 2024
Cited by 18 | Viewed by 2474
Abstract
The use of unmanned aerial vehicles (UAVs) is increasing in transportation applications due to their high versatility and maneuverability in complex environments. Search and rescue is one of the most challenging applications of UAVs due to the non-homogeneous nature of the environmental and communication landscapes. In particular, mountainous areas pose difficulties due to the loss of connectivity caused by large valleys and the volumes of hazardous weather. In this paper, the connectivity issue in mountainous areas is addressed using a path planning algorithm for UAV relay. The approach is based on two main phases: (1) the detection of areas of interest where the connectivity signal is poor, and (2) an energy-aware and resilient path planning algorithm that maximizes the coverage links. The approach uses a viewshed analysis to identify areas of visibility between the areas of interest and the cell-towers. This allows the construction of a blockage map that prevents the UAV from passing through areas with no coverage, whilst maximizing the coverage area under energy constraints and hazardous weather. The proposed approach is validated under open-access datasets of mountainous zones, and the obtained results confirm the benefits of the proposed approach for communication networks in remote and challenging environments. Full article
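As a toy illustration of planning over a blockage map (not the energy-aware algorithm or viewshed analysis from the paper), the sketch below finds a shortest grid path that never enters a cell marked as having no coverage.

```python
from collections import deque

# 0 = cell with cell-tower coverage, 1 = blocked (no coverage / hazardous volume).
# A toy blockage map standing in for the viewshed-derived map in the paper.
blockage = [
    [0, 0, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
]

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest path that never enters a blocked cell."""
    rows, cols = len(grid), len(grid[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no coverage-preserving path exists

print(plan_path(blockage, (0, 0), (4, 4)))
```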

19 pages, 3172 KiB  
Article
Multi-Level Split Federated Learning for Large-Scale AIoT System Based on Smart Cities
by Hanyue Xu, Kah Phooi Seng, Jeremy Smith and Li Minn Ang
Future Internet 2024, 16(3), 82; https://doi.org/10.3390/fi16030082 - 28 Feb 2024
Cited by 10 | Viewed by 4236
Abstract
In the context of smart cities, the integration of artificial intelligence (AI) and the Internet of Things (IoT) has led to the proliferation of AIoT systems, which handle vast amounts of data to enhance urban infrastructure and services. However, the collaborative training of deep learning models within these systems encounters significant challenges, chiefly due to data privacy concerns and dealing with communication latency from large-scale IoT devices. To address these issues, multi-level split federated learning (multi-level SFL) has been proposed, merging the benefits of split learning (SL) and federated learning (FL). This framework introduces a novel multi-level aggregation architecture that reduces communication delays, enhances scalability, and addresses system and statistical heterogeneity inherent in large AIoT systems with non-IID data distributions. The architecture leverages the Message Queuing Telemetry Transport (MQTT) protocol to cluster IoT devices geographically and employs edge and fog computing layers for initial model parameter aggregation. Simulation experiments validate that the multi-level SFL outperforms traditional SFL by improving model accuracy and convergence speed in large-scale, non-IID environments. This paper delineates the proposed architecture, its workflow, and its advantages in enhancing the robustness and scalability of AIoT systems in smart cities while preserving data privacy. Full article
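A minimal sketch of the multi-level aggregation idea, assuming FedAvg-style weighted averaging: device updates are first aggregated per edge cluster, and the cluster models are then aggregated globally. The clusters, sample counts, and parameter vectors are illustrative, not the MQTT-based system described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(updates, weights):
    """Weighted average of model parameter vectors (FedAvg-style aggregation)."""
    return np.average(np.stack(updates), axis=0, weights=np.asarray(weights, dtype=float))

# Toy setup: 3 edge clusters, each with a few devices holding a 5-parameter model.
device_updates = {
    "edge-A": [rng.normal(0.0, 0.1, 5) for _ in range(3)],
    "edge-B": [rng.normal(0.5, 0.1, 5) for _ in range(2)],
    "edge-C": [rng.normal(-0.3, 0.1, 5) for _ in range(4)],
}

# Level 1: each edge/fog node aggregates its own devices (weighted by local sample counts).
edge_models, edge_sizes = [], []
for cluster, updates in device_updates.items():
    sizes = [100] * len(updates)          # assumed equal data per device
    edge_models.append(aggregate(updates, sizes))
    edge_sizes.append(sum(sizes))

# Level 2: the cloud aggregates the edge-level models into the global model.
global_model = aggregate(edge_models, edge_sizes)
print("global model parameters:", np.round(global_model, 3))
```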

15 pages, 2529 KiB  
Article
A Lightweight Neural Network Model for Disease Risk Prediction in Edge Intelligent Computing Architecture
by Feng Zhou, Shijing Hu, Xin Du, Xiaoli Wan and Jie Wu
Future Internet 2024, 16(3), 75; https://doi.org/10.3390/fi16030075 - 26 Feb 2024
Cited by 10 | Viewed by 2254
Abstract
In the current field of disease risk prediction research, there are many methods of using servers for centralized computing to train and infer prediction models. However, this centralized computing method increases storage space, the load on network bandwidth, and the computing pressure on the central server. In this article, we design an image preprocessing method and propose a lightweight neural network model called Linge (Lightweight Neural Network Models for the Edge). We propose a distributed intelligent edge computing technology based on the federated learning algorithm for disease risk prediction. The intelligent edge computing method we proposed for disease risk prediction directly performs prediction model training and inference at the edge without increasing storage space. It also reduces the load on network bandwidth and reduces the computing pressure on the server. The lightweight neural network model we designed has only 7.63 MB of parameters and only takes up 155.28 MB of memory. In the experiment with the Linge model compared with the EfficientNetV2 model, the accuracy and precision increased by 2%, the recall rate increased by 1%, the specificity increased by 4%, the F1 score increased by 3%, and the AUC (Area Under the Curve) value increased by 2%. Full article

16 pages, 2560 KiB  
Article
Deep Learning for Intrusion Detection Systems (IDSs) in Time Series Data
by Konstantinos Psychogyios, Andreas Papadakis, Stavroula Bourou, Nikolaos Nikolaou, Apostolos Maniatis and Theodore Zahariadis
Future Internet 2024, 16(3), 73; https://doi.org/10.3390/fi16030073 - 23 Feb 2024
Cited by 11 | Viewed by 3758
Abstract
The advent of computer networks and the internet has drastically altered the means by which we share information and interact with each other. However, this technological advancement has also created opportunities for malevolent behavior, with individuals exploiting vulnerabilities to gain access to confidential data, obstruct activity, etc. To this end, intrusion detection systems (IDSs) are needed to filter malicious traffic and prevent common attacks. In the past, these systems relied on a fixed set of rules or comparisons with previous attacks. However, with the increased availability of computational power and data, machine learning has emerged as a promising solution for this task. While many systems now use this methodology in real-time for a reactive approach to mitigation, we explore the potential of configuring it as a proactive time series prediction. In this work, we delve into this possibility further. More specifically, we convert a classic IDS dataset to a time series format and use predictive models to forecast forthcoming malign packets. We propose a new architecture combining convolutional neural networks, long short-term memory networks, and attention. The findings indicate that our model performs strongly, exhibiting an F1 score and AUC that are within margins of 1% and 3%, respectively, when compared to conventional real-time detection. Also, our architecture achieves an ∼8% F1 score improvement compared to an LSTM (long short-term memory) model. Full article
(This article belongs to the Special Issue Security in the Internet of Things (IoT))
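A compact sketch of the kind of architecture the paper proposes, combining a 1D convolution, an LSTM, and additive attention over time steps; layer sizes, the attention form, and the synthetic input are assumptions for illustration rather than the exact model or dataset used in the paper.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    """1D CNN -> LSTM -> attention over time -> binary classifier."""
    def __init__(self, n_features=10, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                              # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        out, _ = self.lstm(h)                           # (batch, time, hidden)
        scores = torch.softmax(self.attn(out), dim=1)   # attention weights over time
        context = (scores * out).sum(dim=1)             # weighted sum of hidden states
        return self.head(context)

# Toy forward pass on synthetic traffic windows (32 windows, 20 steps, 10 features).
model = CNNLSTMAttention()
x = torch.randn(32, 20, 10)
print(model(x).shape)   # torch.Size([32, 2])
```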

26 pages, 1791 KiB  
Article
The Future of Healthcare with Industry 5.0: Preliminary Interview-Based Qualitative Analysis
by Juliana Basulo-Ribeiro and Leonor Teixeira
Future Internet 2024, 16(3), 68; https://doi.org/10.3390/fi16030068 - 22 Feb 2024
Cited by 21 | Viewed by 8065
Abstract
With the advent of Industry 5.0 (I5.0), healthcare is undergoing a profound transformation, integrating human capabilities with advanced technologies to promote a patient-centered, efficient, and empathetic healthcare ecosystem. This study aims to examine the effects of Industry 5.0 on healthcare, emphasizing the synergy between human experience and technology. To this end, six specific objectives were defined and addressed in the results through an empirical study based on interviews with 11 healthcare professionals. This article thus outlines strategic and policy guidelines for the integration of I5.0 in healthcare, advocating policy-driven change, and contributes to the literature by offering a solid theoretical basis on I5.0 and its impact on the healthcare sector. Full article
(This article belongs to the Special Issue eHealth and mHealth)

38 pages, 1021 KiB  
Review
A Systematic Survey on 5G and 6G Security Considerations, Challenges, Trends, and Research Areas
by Paul Scalise, Matthew Boeding, Michael Hempel, Hamid Sharif, Joseph Delloiacovo and John Reed
Future Internet 2024, 16(3), 67; https://doi.org/10.3390/fi16030067 - 20 Feb 2024
Cited by 23 | Viewed by 8691
Abstract
With the rapid rollout and growing adoption of 3GPP 5th Generation (5G) cellular services, including in critical infrastructure sectors, it is important to review security mechanisms, risks, and potential vulnerabilities within this vital technology. Numerous security capabilities need to work together to ensure and maintain a sufficiently secure 5G environment that places user privacy and security at the forefront. Confidentiality, integrity, and availability are all pillars of a privacy and security framework that define major aspects of 5G operations. They are incorporated and considered in the design of the 5G standard by the 3rd Generation Partnership Project (3GPP) with the goal of providing a highly reliable network operation for all. Through a comprehensive review, we aim to analyze the ever-evolving landscape of 5G, including any potential attack vectors and proposed measures to mitigate or prevent these threats. This paper presents a comprehensive survey of the state-of-the-art research that has been conducted in recent years regarding 5G systems, focusing on the main components in a systematic approach: the Core Network (CN), Radio Access Network (RAN), and User Equipment (UE). Additionally, we investigate the utilization of 5G in time-dependent, ultra-confidential, and private communications built around a Zero Trust approach. In today’s world, where everything is more connected than ever, Zero Trust policies and architectures can be highly valuable in operations containing sensitive data. Realizing a Zero Trust Architecture entails continuous verification of all devices, users, and requests, regardless of their location within the network, and grants permission only to authorized entities. Finally, developments and proposed methods of new 5G and future 6G security approaches, such as Blockchain technology, post-quantum cryptography (PQC), and Artificial Intelligence (AI) schemes, are also discussed to better understand the full landscape of current and future research within this telecommunications domain. Full article
(This article belongs to the Special Issue 5G Security: Challenges, Opportunities, and the Road Ahead)

18 pages, 6477 KiB  
Article
The Microverse: A Task-Oriented Edge-Scale Metaverse
by Qian Qu, Mohsen Hatami, Ronghua Xu, Deeraj Nagothu, Yu Chen, Xiaohua Li, Erik Blasch, Erika Ardiles-Cruz and Genshe Chen
Future Internet 2024, 16(2), 60; https://doi.org/10.3390/fi16020060 - 13 Feb 2024
Cited by 16 | Viewed by 3319
Abstract
Over the past decade, there has been a remarkable acceleration in the evolution of smart cities and intelligent spaces, driven by breakthroughs in technologies such as the Internet of Things (IoT), edge–fog–cloud computing, and machine learning (ML)/artificial intelligence (AI). As society begins to harness the full potential of these smart environments, the horizon brightens with the promise of an immersive, interconnected 3D world. The forthcoming paradigm shift in how we live, work, and interact owes much to groundbreaking innovations in augmented reality (AR), virtual reality (VR), extended reality (XR), blockchain, and digital twins (DTs). However, realizing the expansive digital vista in our daily lives is challenging. Current limitations include an incomplete integration of pivotal techniques, daunting bandwidth requirements, and the critical need for near-instantaneous data transmission, all impeding the digital VR metaverse from fully manifesting as envisioned by its proponents. This paper seeks to delve deeply into the intricacies of the immersive, interconnected 3D realm, particularly in applications demanding high levels of intelligence. Specifically, this paper introduces the microverse, a task-oriented, edge-scale, pragmatic solution for smart cities. Unlike all-encompassing metaverses, each microverse instance serves a specific task as a manageable digital twin of an individual network slice. Each microverse enables on-site/near-site data processing, information fusion, and real-time decision-making within the edge–fog–cloud computing framework. The microverse concept is verified using smart public safety surveillance (SPSS) for smart communities as a case study, demonstrating its feasibility in practical smart city applications. The aim is to stimulate discussions and inspire fresh ideas in our community, guiding us as we navigate the evolving digital landscape of smart cities to embrace the potential of the metaverse. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

24 pages, 8449 KiB  
Article
A Secure Opportunistic Network with Efficient Routing for Enhanced Efficiency and Sustainability
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(2), 56; https://doi.org/10.3390/fi16020056 - 8 Feb 2024
Cited by 5 | Viewed by 2136
Abstract
The intersection of cybersecurity and opportunistic networks has ushered in a new era of innovation in the realm of wireless communications. In an increasingly interconnected world, where seamless data exchange is pivotal for both individual users and organizations, the need for efficient, reliable, and sustainable networking solutions has never been more pressing. Opportunistic networks, characterized by intermittent connectivity and dynamic network conditions, present unique challenges that necessitate innovative approaches for optimal performance and sustainability. This paper introduces a groundbreaking paradigm that integrates the principles of cybersecurity with opportunistic networks. At its core, this study presents a novel routing protocol meticulously designed to significantly outperform existing solutions concerning key metrics such as delivery probability, overhead ratio, and communication delay. Leveraging cybersecurity’s inherent strengths, our protocol not only fortifies the network’s security posture but also provides a foundation for enhancing efficiency and sustainability in opportunistic networks. The overarching goal of this paper is to address the inherent limitations of conventional opportunistic network protocols. By proposing an innovative routing protocol, we aim to optimize data delivery, minimize overhead, and reduce communication latency. These objectives are crucial for ensuring seamless and timely information exchange, especially in scenarios where traditional networking infrastructures fall short. By large-scale simulations, the new model proves its effectiveness in the different scenarios, especially in terms of message delivery probability, while ensuring reasonable overhead and latency. Full article

14 pages, 3418 KiB  
Article
Enhancing Smart City Safety and Utilizing AI Expert Systems for Violence Detection
by Pradeep Kumar, Guo-Liang Shih, Bo-Lin Guo, Siva Kumar Nagi, Yibeltal Chanie Manie, Cheng-Kai Yao, Michael Augustine Arockiyadoss and Peng-Chun Peng
Future Internet 2024, 16(2), 50; https://doi.org/10.3390/fi16020050 - 31 Jan 2024
Cited by 7 | Viewed by 3864
Abstract
Violent attacks have been one of the hot issues in recent years. In the presence of closed-circuit televisions (CCTVs) in smart cities, there is an emerging challenge in apprehending criminals, leading to a need for innovative solutions. In this paper, we propose a model aimed at enhancing real-time emergency response capabilities and swiftly identifying criminals. This initiative aims to foster a safer environment and better manage criminal activity within smart cities. The proposed architecture combines an image-to-image stable diffusion model with violence detection and pose estimation approaches. The diffusion model generates synthetic data, while the object detection approach uses YOLO v7 to identify violent objects like baseball bats, knives, and pistols, complemented by MediaPipe for action detection. Further, a long short-term memory (LSTM) network classifies the action attacks involving violent objects. Subsequently, the entire proposed model is deployed onto an edge device for real-time data testing using a dash camera. Thus, this study can handle violent attacks and send alerts in emergencies. As a result, our proposed YOLO model achieves a mean average precision (MAP) of 89.5% for violent attack detection, and the LSTM classifier model achieves an accuracy of 88.33% for violent action classification. The results highlight the model’s enhanced capability to accurately detect violent objects, particularly in effectively identifying violence through the implemented artificial intelligence system. Full article
(This article belongs to the Special Issue Challenges in Real-Time Intelligent Systems)

44 pages, 38595 KiB  
Article
Enhancing Urban Resilience: Smart City Data Analyses, Forecasts, and Digital Twin Techniques at the Neighborhood Level
by Andreas F. Gkontzis, Sotiris Kotsiantis, Georgios Feretzakis and Vassilios S. Verykios
Future Internet 2024, 16(2), 47; https://doi.org/10.3390/fi16020047 - 30 Jan 2024
Cited by 28 | Viewed by 7987
Abstract
Smart cities, leveraging advanced data analytics, predictive models, and digital twin techniques, offer a transformative model for sustainable urban development. Predictive analytics is critical to proactive planning, enabling cities to adapt to evolving challenges. Concurrently, digital twin techniques provide a virtual replica of the urban environment, fostering real-time monitoring, simulation, and analysis of urban systems. This study underscores the significance of real-time monitoring, simulation, and analysis of urban systems to support test scenarios that identify bottlenecks and enhance smart city efficiency. This paper delves into the crucial roles of citizen report analytics, prediction, and digital twin technologies at the neighborhood level. The study integrates extract, transform, load (ETL) processes, artificial intelligence (AI) techniques, and a digital twin methodology to process and interpret urban data streams derived from citizen interactions with the city’s coordinate-based problem mapping platform. Using an interactive GeoDataFrame within the digital twin methodology, dynamic entities facilitate simulations based on various scenarios, allowing users to visualize, analyze, and predict the response of the urban system at the neighborhood level. This approach reveals antecedent and predictive patterns, trends, and correlations at the physical level of each city area, leading to improvements in urban functionality, resilience, and resident quality of life. Full article

29 pages, 743 KiB  
Article
TinyML Algorithms for Big Data Management in Large-Scale IoT Systems
by Aristeidis Karras, Anastasios Giannaros, Christos Karras, Leonidas Theodorakopoulos, Constantinos S. Mammassis, George A. Krimpas and Spyros Sioutas
Future Internet 2024, 16(2), 42; https://doi.org/10.3390/fi16020042 - 25 Jan 2024
Cited by 19 | Viewed by 4923
Abstract
In the context of the Internet of Things (IoT), Tiny Machine Learning (TinyML) and Big Data, enhanced by Edge Artificial Intelligence, are essential for effectively managing the extensive data produced by numerous connected devices. Our study introduces a set of TinyML algorithms designed and developed to improve Big Data management in large-scale IoT systems. These algorithms, named TinyCleanEDF, EdgeClusterML, CompressEdgeML, CacheEdgeML, and TinyHybridSenseQ, operate together to enhance data processing, storage, and quality control in IoT networks, utilizing the capabilities of Edge AI. In particular, TinyCleanEDF applies federated learning for Edge-based data cleaning and anomaly detection. EdgeClusterML combines reinforcement learning with self-organizing maps for effective data clustering. CompressEdgeML uses neural networks for adaptive data compression. CacheEdgeML employs predictive analytics for smart data caching, and TinyHybridSenseQ concentrates on data quality evaluation and hybrid storage strategies. Our experimental evaluation of the proposed techniques includes executing all the algorithms on clusters of one to ten Raspberry Pi devices. The experimental results are promising, as we outperform similar methods across various evaluation metrics. Ultimately, we anticipate that the proposed algorithms offer a comprehensive and efficient approach to managing the complexities of IoT, Big Data, and Edge AI. Full article
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems II)

57 pages, 2070 KiB  
Review
A Holistic Analysis of Internet of Things (IoT) Security: Principles, Practices, and New Perspectives
by Mahmud Hossain, Golam Kayas, Ragib Hasan, Anthony Skjellum, Shahid Noor and S. M. Riazul Islam
Future Internet 2024, 16(2), 40; https://doi.org/10.3390/fi16020040 - 24 Jan 2024
Cited by 22 | Viewed by 9726
Abstract
Driven by the rapid escalation of their utilization, as well as ramping commercialization, Internet of Things (IoT) devices increasingly face security threats. Apart from denial of service, privacy, and safety concerns, compromised devices can be used as enablers for committing a variety of crimes and e-crimes. Despite ongoing research and study, there remains a significant gap in the thorough analysis of security challenges, feasible solutions, and open security problems for IoT. To bridge this gap, we provide a comprehensive overview of the state of the art in IoT security with a critical investigation-based approach. This includes a detailed analysis of vulnerabilities in IoT-based systems and potential attacks. We present a holistic review of the security properties required to be adopted by IoT devices, applications, and services to mitigate IoT vulnerabilities and, thus, successful attacks. Moreover, we identify challenges to the design of security protocols for IoT systems in which constituent devices vary markedly in capability (such as storage, computation speed, hardware architecture, and communication interfaces). Next, we review existing research and feasible solutions for IoT security. We highlight a set of open problems not yet addressed among existing security solutions. We provide a set of new perspectives for future research on such issues, including secure service discovery, on-device credential security, and network anomaly detection. We also provide directions for designing a forensic investigation framework for IoT infrastructures to inspect relevant criminal cases, execute a cyber forensic process, and determine the facts about a given incident. This framework offers a means to better capture information on successful attacks as part of a feedback mechanism to thwart future vulnerabilities and threats. This systematic holistic review will both inform on current challenges in IoT security and ideally motivate their future resolution. Full article
(This article belongs to the Special Issue Cyber Security in the New "Edge Computing + IoT" World)

27 pages, 2022 KiB  
Review
Overview of Protocols and Standards for Wireless Sensor Networks in Critical Infrastructures
by Spyridon Daousis, Nikolaos Peladarinos, Vasileios Cheimaras, Panagiotis Papageorgas, Dimitrios D. Piromalis and Radu Adrian Munteanu
Future Internet 2024, 16(1), 33; https://doi.org/10.3390/fi16010033 - 21 Jan 2024
Cited by 16 | Viewed by 5026
Abstract
This paper highlights the crucial role of wireless sensor networks (WSNs) in the surveillance and administration of critical infrastructures (CIs), contributing to their reliability, security, and operational efficiency. It starts by detailing the international significance and structural aspects of these infrastructures, notes the market pressure in recent years toward the gradual adoption of wireless networks for industrial applications, and proceeds to categorize WSNs and examine their protocols and standards in demanding environments such as critical infrastructures, drawing on the recent literature. This review concentrates on the protocols and standards utilized in WSNs for critical infrastructures, and it concludes by identifying a notable gap in the literature concerning quality standards for equipment used in such infrastructures. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)

42 pages, 2733 KiB  
Review
A Holistic Review of Machine Learning Adversarial Attacks in IoT Networks
by Hassan Khazane, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Future Internet 2024, 16(1), 32; https://doi.org/10.3390/fi16010032 - 19 Jan 2024
Cited by 21 | Viewed by 6855
Abstract
With the rapid advancements and notable achievements across various application domains, Machine Learning (ML) has become a vital element within the Internet of Things (IoT) ecosystem. Among these use cases is IoT security, where numerous systems are deployed to identify or thwart attacks, including intrusion detection systems (IDSs), malware detection systems (MDSs), and device identification systems (DISs). Machine Learning-based (ML-based) IoT security systems can fulfill several security objectives, including detecting attacks, authenticating users before they gain access to the system, and categorizing suspicious activities. Nevertheless, ML faces numerous challenges, such as those resulting from the emergence of adversarial attacks crafted to mislead classifiers. This paper provides a comprehensive review of the body of knowledge about adversarial attacks and defense mechanisms, with a particular focus on three prominent IoT security systems: IDSs, MDSs, and DISs. The paper starts by establishing a taxonomy of adversarial attacks within the context of IoT. Then, various methodologies employed in the generation of adversarial attacks are described and classified within a two-dimensional framework. Additionally, we describe existing countermeasures for enhancing IoT security against adversarial attacks. Finally, we explore the most recent literature on the vulnerability of three ML-based IoT security systems to adversarial attacks. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)

17 pages, 3053 KiB  
Article
Proximal Policy Optimization for Efficient D2D-Assisted Computation Offloading and Resource Allocation in Multi-Access Edge Computing
by Chen Zhang, Celimuge Wu, Min Lin, Yangfei Lin and William Liu
Future Internet 2024, 16(1), 19; https://doi.org/10.3390/fi16010019 - 2 Jan 2024
Cited by 8 | Viewed by 4343
Abstract
In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. [...] Read more.
In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. However, the inherent complexity and variability of MEC networks pose significant challenges in computational offloading decisions. To tackle this problem, we propose a proximal policy optimization (PPO)-based Device-to-Device (D2D)-assisted computation offloading and resource allocation scheme. We construct a realistic MEC network environment and develop a Markov decision process (MDP) model that minimizes time loss and energy consumption. The integration of a D2D communication-based offloading framework allows for collaborative task offloading between end devices and MEC servers, enhancing both resource utilization and computational efficiency. The MDP model is solved using the PPO algorithm in deep reinforcement learning to derive an optimal policy for offloading and resource allocation. Extensive comparative analysis with three benchmarked approaches has confirmed our scheme’s superior performance in latency, energy consumption, and algorithmic convergence, demonstrating its potential to improve MEC network operations in the context of emerging 5G and beyond technologies. Full article
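To make the problem formulation concrete, a minimal Python sketch of an offloading MDP is shown below; the state variables, action set, and the delay/energy weights are illustrative assumptions, not the authors' environment or their PPO implementation:

```python
import numpy as np

class ToyOffloadingEnv:
    """Minimal MDP: choose where to run a task; reward penalizes delay and energy.
    Actions: 0 = local execution, 1 = D2D neighbour, 2 = MEC server (illustrative)."""

    def __init__(self, alpha=0.6, beta=0.4, seed=0):
        self.alpha, self.beta = alpha, beta        # weights on delay vs. energy
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # State: [task size (Mbits), own CPU load, neighbour load, channel gain]
        self.state = self.rng.uniform(0.1, 1.0, size=4)
        return self.state

    def step(self, action):
        size, own_load, peer_load, gain = self.state
        if action == 0:                            # local execution
            delay, energy = size * (1 + own_load), 0.9 * size
        elif action == 1:                          # D2D-assisted offloading
            delay, energy = size * (0.6 + peer_load) / gain, 0.5 * size
        else:                                      # offload to MEC server
            delay, energy = size * 0.4 / gain, 0.3 * size
        reward = -(self.alpha * delay + self.beta * energy)
        self.state = self.rng.uniform(0.1, 1.0, size=4)   # i.i.d. next state (toy)
        return self.state, reward, False, {}

env = ToyOffloadingEnv()
state = env.reset()
state, reward, done, _ = env.step(2)
print("reward:", reward)
```

A PPO agent would then be trained against this interface to learn the offloading and resource-allocation policy.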
34 pages, 2309 KiB  
Review
Edge AI for Early Detection of Chronic Diseases and the Spread of Infectious Diseases: Opportunities, Challenges, and Future Directions
by Elarbi Badidi
Future Internet 2023, 15(11), 370; https://doi.org/10.3390/fi15110370 - 18 Nov 2023
Cited by 25 | Viewed by 17470
Abstract
Edge AI, an interdisciplinary technology that enables distributed intelligence with edge devices, is quickly becoming a critical component in early health prediction. Edge AI encompasses data analytics and artificial intelligence (AI) using machine learning, deep learning, and federated learning models deployed and executed [...] Read more.
Edge AI, an interdisciplinary technology that enables distributed intelligence with edge devices, is quickly becoming a critical component in early health prediction. Edge AI encompasses data analytics and artificial intelligence (AI) using machine learning, deep learning, and federated learning models deployed and executed at the edge of the network, far from centralized data centers. AI enables the careful analysis of large datasets derived from multiple sources, including electronic health records, wearable devices, and demographic information, making it possible to identify intricate patterns and predict a person’s future health. Federated learning, a novel approach in AI, further enhances this prediction by enabling collaborative training of AI models on distributed edge devices while maintaining privacy. Using edge computing, data can be processed and analyzed locally, reducing latency and enabling instant decision making. This article reviews the role of Edge AI in early health prediction and highlights its potential to improve public health. Topics covered include the use of AI algorithms for early detection of chronic diseases such as diabetes and cancer and the use of edge computing in wearable devices to detect the spread of infectious diseases. In addition to discussing the challenges and limitations of Edge AI in early health prediction, this article emphasizes future research directions that address these concerns, integrate Edge AI with existing healthcare systems, and explore the full potential of these technologies in improving public health. Full article
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)
23 pages, 14269 KiB  
Article
Implementation and Evaluation of a Federated Learning Framework on Raspberry PI Platforms for IoT 6G Applications
by Lorenzo Ridolfi, David Naseh, Swapnil Sadashiv Shinde and Daniele Tarchi
Future Internet 2023, 15(11), 358; https://doi.org/10.3390/fi15110358 - 31 Oct 2023
Cited by 6 | Viewed by 3627
Abstract
With the advent of 6G technology, the proliferation of interconnected devices necessitates a robust, fully connected intelligence network. Federated Learning (FL) stands as a key distributed learning technique, showing promise in recent advancements. However, the integration of novel Internet of Things (IoT) applications [...] Read more.
With the advent of 6G technology, the proliferation of interconnected devices necessitates a robust, fully connected intelligence network. Federated Learning (FL) stands as a key distributed learning technique, showing promise in recent advancements. However, the integration of novel Internet of Things (IoT) applications and virtualization technologies has introduced diverse and heterogeneous devices into wireless networks. This diversity encompasses variations in computation, communication, storage resources, training data, and communication modes among connected nodes. In this context, our study presents a pivotal contribution by analyzing and implementing FL processes tailored for 6G standards. Our work defines a practical FL platform, employing Raspberry Pi devices and virtual machines as client nodes, with a Windows PC serving as a parameter server. We tackle the image classification challenge, implementing the FL model via PyTorch, augmented by the specialized FL library, Flower. Notably, our analysis delves into the impact of computational resources, data availability, and heating issues across heterogeneous device sets. Additionally, we address knowledge transfer and employ pre-trained networks in our FL performance evaluation. This research underscores the indispensable role of artificial intelligence in IoT scenarios within the 6G landscape, providing a comprehensive framework for FL implementation across diverse and heterogeneous devices. Full article
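For readers who want a feel for the client side of such a platform, the following minimal sketch uses the Flower (flwr) NumPyClient interface mentioned in the abstract; the toy weight vector, the server address, and the fake local update are assumptions, and the exact API may differ between Flower versions:

```python
import numpy as np
import flwr as fl

class ToyClient(fl.client.NumPyClient):
    """Stands in for one Raspberry Pi node; the 'model' is a single weight vector."""

    def __init__(self):
        self.weights = np.zeros(10, dtype=np.float32)

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        self.weights = parameters[0]
        # Pretend local training: nudge weights toward this client's local data mean.
        local_mean = np.random.default_rng().normal(size=10).astype(np.float32)
        self.weights = 0.9 * self.weights + 0.1 * local_mean
        return [self.weights], 100, {}          # (params, num_examples, metrics)

    def evaluate(self, parameters, config):
        loss = float(np.mean((parameters[0] - self.weights) ** 2))
        return loss, 100, {"mse": loss}

if __name__ == "__main__":
    # Assumes a Flower server is already listening on this address.
    fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=ToyClient())
```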
32 pages, 419 KiB  
Article
The 6G Ecosystem as Support for IoE and Private Networks: Vision, Requirements, and Challenges
by Carlos Serôdio, José Cunha, Guillermo Candela, Santiago Rodriguez, Xosé Ramón Sousa and Frederico Branco
Future Internet 2023, 15(11), 348; https://doi.org/10.3390/fi15110348 - 25 Oct 2023
Cited by 42 | Viewed by 5240
Abstract
The emergence of the sixth generation of cellular systems (6G) signals a transformative era and ecosystem for mobile communications, driven by demands from technologies like the internet of everything (IoE), V2X communications, and factory automation. To support this connectivity, mission-critical applications are emerging [...] Read more.
The emergence of the sixth generation of cellular systems (6G) signals a transformative era and ecosystem for mobile communications, driven by demands from technologies like the internet of everything (IoE), V2X communications, and factory automation. To support this connectivity, mission-critical applications are emerging with challenging network requirements. The primary goals of 6G include providing sophisticated and high-quality services, further-enhanced mobile broadband (feMBB), extremely reliable and low-latency communication (ERLLC), long-distance and high-mobility communications (LDHMC), ultra-massive machine-type communications (umMTC), extremely low-power communications (ELPC), holographic communications, and quality of experience (QoE), grounded in incorporating massive broad-bandwidth machine-type (mBBMT), mobile broad-bandwidth and low-latency (MBBLL), and massive low-latency machine-type (mLLMT) communications. In attaining its objectives, 6G faces challenges that demand inventive solutions, incorporating AI, softwarization, cloudification, virtualization, and slicing features. Technologies like network function virtualization (NFV), network slicing, and software-defined networking (SDN) play pivotal roles in this integration, which facilitates efficient resource utilization, responsive service provisioning, expanded coverage, enhanced network reliability, increased capacity, densification, heightened availability, safety, security, and reduced energy consumption. The paper also presents innovative network infrastructure concepts, such as resource-as-a-service (RaaS) and infrastructure-as-a-service (IaaS), featuring management and service orchestration mechanisms. This includes nomadic networks, AI-aware networking strategies, and dynamic management of diverse network resources. This paper provides an in-depth survey of the wireless evolution leading to 6G networks, addressing future issues and challenges associated with 6G technology to support V2X environments, considering present challenges in architecture, spectrum, air interface, reliability, availability, density, flexibility, mobility, and security. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)
26 pages, 4052 KiB  
Article
Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities
by Edisa Lozić and Benjamin Štular
Future Internet 2023, 15(10), 336; https://doi.org/10.3390/fi15100336 - 13 Oct 2023
Cited by 34 | Viewed by 12340
Abstract
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots [...] Read more.
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software. Full article
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
27 pages, 312 KiB  
Article
A New Approach to Web Application Security: Utilizing GPT Language Models for Source Code Inspection
by Zoltán Szabó and Vilmos Bilicki
Future Internet 2023, 15(10), 326; https://doi.org/10.3390/fi15100326 - 28 Sep 2023
Cited by 17 | Viewed by 5701
Abstract
Due to the proliferation of large language models (LLMs) and their widespread use in applications such as ChatGPT, there has been a significant increase in interest in AI over the past year. Multiple researchers have raised the question: how will AI be applied [...] Read more.
Due to the proliferation of large language models (LLMs) and their widespread use in applications such as ChatGPT, there has been a significant increase in interest in AI over the past year. Multiple researchers have raised the question: how will AI be applied and in what areas? Programming, including the generation, interpretation, analysis, and documentation of static program code based on prompts, is one of the most promising fields. With the GPT API, we have explored a new aspect of this: static analysis of the source code of front-end applications at the endpoints of the data path. Our focus was the detection of the CWE-653 vulnerability—inadequately isolated sensitive code segments that could lead to unauthorized access or data leakage. This type of vulnerability detection consists of the detection of code segments dealing with sensitive data and the categorization of the isolation and protection levels of those segments, which was previously not feasible without human intervention. However, we believed that the interpretive capabilities of GPT models could be explored to create a set of prompts to detect these cases on a file-by-file basis for the applications under study, and the efficiency of the method could pave the way for additional analysis tasks that were previously unavailable for automation. In the introduction to our paper, we characterize in detail the problem space of vulnerability and weakness detection, the challenges of the domain, and the advances that have been achieved in similarly complex areas using GPT or other LLMs. Then, we present our methodology, which includes our classification of sensitive data and protection levels. This is followed by the process of preprocessing, analyzing, and evaluating static code. This was achieved through a series of GPT prompts containing parts of static source code, utilizing few-shot examples and chain-of-thought techniques that detected sensitive code segments and mapped the complex code base into manageable JSON structures. Finally, we present our findings and evaluation of the open source project analysis, comparing the results of the GPT-based pipelines with manual evaluations and highlighting that the field yields a high research value. The results show, among other findings, a vulnerability detection rate of 88.76% for this particular type of model. Full article
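A minimal sketch of this kind of prompt-driven inspection is given below, using the OpenAI Python SDK (v1.x-style chat completions). The model name, the JSON schema, and the audited file are hypothetical, and the paper's actual prompts (few-shot examples, chain-of-thought) are more elaborate than this single system message:

```python
from openai import OpenAI   # assumes the OpenAI Python SDK v1.x and an API key in the environment

client = OpenAI()

SYSTEM = ("You are a code auditor. Identify code segments that handle sensitive data "
          "and rate how well they are isolated. Answer as JSON: "
          '{"sensitive_segments": [{"lines": "...", "data_type": "...", "protection": "none|partial|full"}]}')

source_file = open("LoginForm.jsx").read()   # hypothetical front-end file under audit

response = client.chat.completions.create(
    model="gpt-4o-mini",                     # illustrative model choice, not the paper's
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"Analyse this file for CWE-653 style issues:\n\n{source_file}"},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```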
18 pages, 1957 KiB  
Article
An Enhanced Minimax Loss Function Technique in Generative Adversarial Network for Ransomware Behavior Prediction
by Mazen Gazzan and Frederick T. Sheldon
Future Internet 2023, 15(10), 318; https://doi.org/10.3390/fi15100318 - 22 Sep 2023
Cited by 12 | Viewed by 3974
Abstract
Recent ransomware attacks threaten not only personal files but also critical infrastructure like smart grids, necessitating early detection before encryption occurs. Current methods, reliant on pre-encryption data, suffer from insufficient and rapidly outdated attack patterns, despite efforts to focus on select features. Such [...] Read more.
Recent ransomware attacks threaten not only personal files but also critical infrastructure like smart grids, necessitating early detection before encryption occurs. Current methods, reliant on pre-encryption data, suffer from insufficient and rapidly outdated attack patterns, despite efforts to focus on select features. Such an approach assumes that the same features remain unchanged. This approach proves ineffective due to the polymorphic and metamorphic characteristics of ransomware, which generate unique attack patterns for each new target, particularly in the pre-encryption phase where evasiveness is prioritized. As a result, the selected features quickly become obsolete. Therefore, this study proposes an enhanced Bi-Gradual Minimax (BGM) loss function for the Generative Adversarial Network (GAN) algorithm that compensates for the insufficiency of attack patterns in representing the polymorphic behavior at the earlier phases of the ransomware lifecycle. Unlike existing GAN-based models, the BGM-GAN gradually minimizes the maximum loss of the generator and discriminator in the network. This allows the generator to create artificial patterns that resemble the pre-encryption data distribution. The generator is used to craft evasive adversarial patterns and add them to the original data. Then, the generator and discriminator compete to optimize their weights during the training phase such that the generator produces realistic attack patterns, while the discriminator endeavors to distinguish between the real and crafted patterns. The experimental results show that the proposed BGM-GAN reached a maximum accuracy of 0.98, a recall of 0.96, and a minimum false positive rate of 0.14, all of which outperform the results obtained by existing works. The application of BGM-GAN can be extended to the early detection of malware and other types of attacks. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
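The BGM loss itself is specific to the paper, but the underlying minimax game can be illustrated with a standard GAN training step in PyTorch; the layer sizes, the batch of placeholder "pre-encryption" features, and the non-saturating generator loss below are illustrative assumptions, not the authors' model:

```python
import torch
import torch.nn as nn

# Standard (vanilla) minimax GAN step over tabular "pre-encryption" feature vectors.
FEAT = 32
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(128, FEAT)                 # placeholder for real pre-encryption features

# Discriminator update: maximize log D(x) + log(1 - D(G(z)))
z = torch.randn(128, 16)
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update (non-saturating form): make D label generated patterns as real.
fake = G(torch.randn(128, 16))
loss_g = bce(D(fake), torch.ones(128, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```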
16 pages, 1050 KiB  
Review
Enhancing E-Learning with Blockchain: Characteristics, Projects, and Emerging Trends
by Mahmoud Bidry, Abdellah Ouaguid and Mohamed Hanine
Future Internet 2023, 15(9), 293; https://doi.org/10.3390/fi15090293 - 28 Aug 2023
Cited by 16 | Viewed by 5240
Abstract
Blockchain represents a decentralized and distributed ledger technology, ensuring transparent and secure transaction recording across networks. This innovative technology offers several benefits, including increased security, trust, and transparency, making it suitable for a wide range of applications. In the last few years, there [...] Read more.
Blockchain represents a decentralized and distributed ledger technology, ensuring transparent and secure transaction recording across networks. This innovative technology offers several benefits, including increased security, trust, and transparency, making it suitable for a wide range of applications. In the last few years, there has been a growing interest in investigating the potential of Blockchain technology to enhance diverse fields, such as e-learning. In this research, we undertook a systematic literature review to explore the potential of Blockchain technology in enhancing the e-learning domain. Our research focused on four main questions: (1) What potential characteristics of Blockchain can contribute to enhancing e-learning? (2) What are the existing Blockchain projects dedicated to e-learning? (3) What are the limitations of existing projects? (4) What are the future trends in Blockchain-related research that will impact e-learning? The results showed that Blockchain technology has several characteristics that could benefit e-learning, including immutability, transparency, decentralization, security, and traceability. We also identified several existing Blockchain projects dedicated to e-learning and discussed their potential to revolutionize learning by providing more transparency, security, and effectiveness. However, our research also revealed many limitations and challenges that must be addressed to realize Blockchain technology’s potential in e-learning. Full article
(This article belongs to the Special Issue Future Prospects and Advancements in Blockchain Technology)
15 pages, 644 KiB  
Review
Generative AI in Medicine and Healthcare: Promises, Opportunities and Challenges
by Peng Zhang and Maged N. Kamel Boulos
Future Internet 2023, 15(9), 286; https://doi.org/10.3390/fi15090286 - 24 Aug 2023
Cited by 164 | Viewed by 49293
Abstract
Generative AI (artificial intelligence) refers to algorithms and models, such as OpenAI’s ChatGPT, that can be prompted to generate various types of content. In this narrative review, we present a selection of representative examples of generative AI applications in medicine and healthcare. We [...] Read more.
Generative AI (artificial intelligence) refers to algorithms and models, such as OpenAI’s ChatGPT, that can be prompted to generate various types of content. In this narrative review, we present a selection of representative examples of generative AI applications in medicine and healthcare. We then briefly discuss some associated issues, such as trust, veracity, clinical safety and reliability, privacy, copyrights, ownership, and opportunities, e.g., AI-driven conversational user interfaces for friendlier human-computer interaction. We conclude that generative AI will play an increasingly important role in medicine and healthcare as it further evolves and gets better tailored to the unique settings and requirements of the medical domain and as the laws, policies and regulatory frameworks surrounding its use start taking shape. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things II)
21 pages, 538 KiB  
Article
Prospects of Cybersecurity in Smart Cities
by Fernando Almeida
Future Internet 2023, 15(9), 285; https://doi.org/10.3390/fi15090285 - 23 Aug 2023
Cited by 15 | Viewed by 8499
Abstract
The complex and interconnected infrastructure of smart cities offers several opportunities for attackers to exploit vulnerabilities and carry out cyberattacks that can have serious consequences for the functioning of cities’ critical infrastructures. This study aims to address this phenomenon and characterize the dimensions [...] Read more.
The complex and interconnected infrastructure of smart cities offers several opportunities for attackers to exploit vulnerabilities and carry out cyberattacks that can have serious consequences for the functioning of cities’ critical infrastructures. This study aims to address this phenomenon and characterize the dimensions of security risks in smart cities and present mitigation proposals to address these risks. The study adopts a qualitative methodology through the identification of 62 European research projects in the field of cybersecurity in smart cities, which are underway during the period from 2022 to 2027. Compared to previous studies, this work provides a comprehensive view of security risks from the perspective of multiple universities, research centers, and companies participating in European projects. The findings of this study offer relevant scientific contributions by identifying 7 dimensions and 31 sub-dimensions of cybersecurity risks in smart cities and proposing 24 mitigation strategies to face these security challenges. Furthermore, this study explores emerging cybersecurity issues to which smart cities are exposed by the increasing proliferation of new technologies and standards. Full article
(This article belongs to the Special Issue Cyber Security Challenges in the New Smart Worlds)
38 pages, 7280 KiB  
Article
SEDIA: A Platform for Semantically Enriched IoT Data Integration and Development of Smart City Applications
by Dimitrios Lymperis and Christos Goumopoulos
Future Internet 2023, 15(8), 276; https://doi.org/10.3390/fi15080276 - 18 Aug 2023
Cited by 8 | Viewed by 4272
Abstract
The development of smart city applications often encounters a variety of challenges. These include the need to address complex requirements such as integrating diverse data sources and incorporating geographical data that reflect the physical urban environment. Platforms designed for smart cities hold a [...] Read more.
The development of smart city applications often encounters a variety of challenges. These include the need to address complex requirements such as integrating diverse data sources and incorporating geographical data that reflect the physical urban environment. Platforms designed for smart cities hold a pivotal position in materializing these applications, given that they offer a suite of high-level services, which can be repurposed by developers. Although a variety of platforms are available to aid the creation of smart city applications, most fail to couple their services with geographical data, do not offer the ability to execute semantic queries on the available data, and possess restrictions that could impede the development process. This paper introduces SEDIA, a platform for developing smart applications based on diverse data sources, including geographical information, to support a semantically enriched data model for effective data analysis and integration. It also discusses the efficacy of SEDIA in a proof-of-concept smart city application related to air quality monitoring. The platform utilizes ontology classes and properties to semantically annotate collected data, and the Neo4j graph database facilitates the recognition of patterns and relationships within the data. This research also offers empirical data demonstrating the performance evaluation of SEDIA. These contributions collectively advance our understanding of semantically enriched data integration within the realm of smart city applications. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
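As a rough sketch of semantically annotated ingestion into a graph store (not SEDIA's actual schema; the node labels, relationship types, and connection details are assumptions), the official Neo4j Python driver could be used as follows:

```python
from neo4j import GraphDatabase   # official Neo4j Python driver

# Connection details and labels are illustrative only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_observation(tx, sensor_id, pollutant, value, lat, lon):
    # Annotate the raw reading with ontology-style classes so it can be queried semantically.
    tx.run(
        """
        MERGE (s:Sensor {id: $sensor_id})
        MERGE (p:ObservableProperty {name: $pollutant})
        CREATE (o:Observation {value: $value, lat: $lat, lon: $lon, ts: timestamp()})
        MERGE (s)-[:OBSERVES]->(p)
        CREATE (s)-[:MADE]->(o)-[:OF_PROPERTY]->(p)
        """,
        sensor_id=sensor_id, pollutant=pollutant, value=value, lat=lat, lon=lon,
    )

with driver.session() as session:
    session.execute_write(store_observation, "aq-042", "PM2.5", 18.3, 38.98, 24.39)
driver.close()
```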
26 pages, 1726 KiB  
Article
Towards Efficient Resource Allocation for Federated Learning in Virtualized Managed Environments
by Fotis Nikolaidis, Moysis Symeonides and Demetris Trihinas
Future Internet 2023, 15(8), 261; https://doi.org/10.3390/fi15080261 - 31 Jul 2023
Cited by 16 | Viewed by 4878
Abstract
Federated learning (FL) is a transformative approach to Machine Learning that enables the training of a shared model without transferring private data to a central location. This decentralized training paradigm has found particular applicability in edge computing, where IoT devices and edge nodes [...] Read more.
Federated learning (FL) is a transformative approach to Machine Learning that enables the training of a shared model without transferring private data to a central location. This decentralized training paradigm has found particular applicability in edge computing, where IoT devices and edge nodes often possess limited computational power, network bandwidth, and energy resources. While various techniques have been developed to optimize the FL training process, an important question remains unanswered: how should resources be allocated in the training workflow? To address this question, it is crucial to understand the nature of these resources. In physical environments, the allocation is typically performed at the node level, with the entire node dedicated to executing a single workload. In contrast, virtualized environments allow for the dynamic partitioning of a node into containerized units that can adapt to changing workloads. Consequently, the new question that arises is: how can a physical node be partitioned into virtual resources to maximize the efficiency of the FL process? To answer this, we investigate various resource allocation methods that consider factors such as computational and network capabilities, the complexity of datasets, as well as the specific characteristics of the FL workflow and ML backend. We explore two scenarios: (i) running FL over a finite number of testbed nodes and (ii) hosting multiple parallel FL workflows on the same set of testbed nodes. Our findings reveal that the default configurations of state-of-the-art cloud orchestrators are sub-optimal when orchestrating FL workflows. Additionally, we demonstrate that different libraries and ML models exhibit diverse computational footprints. Building upon these insights, we discuss methods to mitigate computational interferences and enhance the overall performance of the FL pipeline execution. Full article
60 pages, 14922 KiB  
Review
The Power of Generative AI: A Review of Requirements, Models, Input–Output Formats, Evaluation Metrics, and Challenges
by Ajay Bandi, Pydi Venkata Satya Ramesh Adapa and Yudu Eswar Vinay Pratap Kumar Kuchi
Future Internet 2023, 15(8), 260; https://doi.org/10.3390/fi15080260 - 31 Jul 2023
Cited by 244 | Viewed by 63841
Abstract
Generative artificial intelligence (AI) has emerged as a powerful technology with numerous applications in various domains. There is a need to identify the requirements and evaluation metrics for generative AI models designed for specific tasks. The purpose of this research is to investigate [...] Read more.
Generative artificial intelligence (AI) has emerged as a powerful technology with numerous applications in various domains. There is a need to identify the requirements and evaluation metrics for generative AI models designed for specific tasks. The purpose of this research is to investigate the fundamental aspects of generative AI systems, including their requirements, models, input–output formats, and evaluation metrics. The study addresses key research questions and presents comprehensive insights to guide researchers, developers, and practitioners in the field. Firstly, the requirements necessary for implementing generative AI systems are examined and categorized into three distinct categories: hardware, software, and user experience. Furthermore, the study explores the different types of generative AI models described in the literature by presenting a taxonomy based on architectural characteristics, such as variational autoencoders (VAEs), generative adversarial networks (GANs), diffusion models, transformers, language models, normalizing flow models, and hybrid models. A comprehensive classification of input and output formats used in generative AI systems is also provided. Moreover, the research proposes a classification system based on output types and discusses commonly used evaluation metrics in generative AI. The findings contribute to advancements in the field, enabling researchers, developers, and practitioners to effectively implement and evaluate generative AI models for various applications. The significance of the research lies in understanding that generative AI system requirements are crucial for effective planning, design, and optimal performance. A taxonomy of models aids in selecting suitable options and driving advancements. Classifying input–output formats enables leveraging diverse formats for customized systems, while evaluation metrics establish standardized methods to assess model quality and performance. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)
31 pages, 602 KiB  
Review
A Review of ARIMA vs. Machine Learning Approaches for Time Series Forecasting in Data Driven Networks
by Vaia I. Kontopoulou, Athanasios D. Panagopoulos, Ioannis Kakkos and George K. Matsopoulos
Future Internet 2023, 15(8), 255; https://doi.org/10.3390/fi15080255 - 30 Jul 2023
Cited by 111 | Viewed by 34683
Abstract
In the broad scientific field of time series forecasting, the ARIMA models and their variants have been widely applied for half a century now due to their mathematical simplicity and flexibility in application. However, with the recent advances in the development and efficient [...] Read more.
In the broad scientific field of time series forecasting, the ARIMA models and their variants have been widely applied for half a century now due to their mathematical simplicity and flexibility in application. However, with the recent advances in the development and efficient deployment of artificial intelligence models and techniques, the view is rapidly changing, with a shift towards machine and deep learning approaches becoming apparent, even without a complete evaluation of the superiority of the new approach over the classic statistical algorithms. Our work constitutes an extensive review of the published scientific literature regarding the comparison of ARIMA and machine learning algorithms applied to time series forecasting problems, as well as the combination of these two approaches in hybrid statistical-AI models in a wide variety of data applications (finance, health, weather, utilities, and network traffic prediction). Our review has shown that the AI algorithms display better prediction performance in most applications, with a few notable exceptions analyzed in our Discussion and Conclusions sections, while the hybrid statistical-AI models steadily outperform their individual parts, utilizing the best algorithmic features of both worlds. Full article
(This article belongs to the Special Issue Smart Data and Systems for the Internet of Things)
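For orientation, a minimal ARIMA baseline of the kind compared in such studies can be set up with statsmodels as follows; the synthetic series and the (2, 1, 2) order are purely illustrative, not drawn from the review:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic network-traffic-like series (trend + weekly seasonality + noise).
rng = np.random.default_rng(1)
t = np.arange(200)
series = pd.Series(0.05 * t + 2 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.5, 200))

train, test = series[:180], series[180:]
model = ARIMA(train, order=(2, 1, 2)).fit()     # (p, d, q) chosen for illustration only
forecast = model.forecast(steps=len(test))

mae = np.mean(np.abs(forecast.values - test.values))
print(f"ARIMA(2,1,2) MAE on hold-out: {mae:.3f}")
```

An ML or hybrid competitor would be evaluated on the same hold-out split to make the comparison the review describes.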
30 pages, 583 KiB  
Review
Task Allocation Methods and Optimization Techniques in Edge Computing: A Systematic Review of the Literature
by Vasilios Patsias, Petros Amanatidis, Dimitris Karampatzakis, Thomas Lagkas, Kalliopi Michalakopoulou and Alexandros Nikitas
Future Internet 2023, 15(8), 254; https://doi.org/10.3390/fi15080254 - 28 Jul 2023
Cited by 27 | Viewed by 7044
Abstract
Task allocation in edge computing refers to the process of distributing tasks among the various nodes in an edge computing network. The main challenges in task allocation include determining the optimal location for each task based on the requirements such as processing power, [...] Read more.
Task allocation in edge computing refers to the process of distributing tasks among the various nodes in an edge computing network. The main challenges in task allocation include determining the optimal location for each task based on the requirements such as processing power, storage, and network bandwidth, and adapting to the dynamic nature of the network. Different approaches for task allocation include centralized, decentralized, hybrid, and machine learning algorithms. Each approach has its strengths and weaknesses, and the choice of approach will depend on the specific requirements of the application. In more detail, the selection of the optimal task allocation method depends on the edge computing architecture and configuration type, like mobile edge computing (MEC), cloud-edge, fog computing, peer-to-peer edge computing, etc. Thus, task allocation in edge computing is a complex, diverse, and challenging problem that requires a balance of trade-offs between multiple conflicting objectives such as energy efficiency, data privacy, security, latency, and quality of service (QoS). Recently, an increased number of research studies have emerged regarding the performance evaluation and optimization of task allocation on edge devices. While several survey articles have described the current state-of-the-art task allocation methods, this work focuses on comparing and contrasting different task allocation methods, optimization algorithms, as well as the network types that are most frequently used in edge computing systems. Full article
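As a toy illustration of one centralized allocation policy (not a method proposed in the survey; the tasks, node capacities, and greedy rule are invented), consider:

```python
# Toy centralized allocator: place each task on the feasible edge node with the
# most remaining capacity (latency, privacy and QoS constraints are ignored here).
tasks = [{"id": "t1", "cpu": 2.0}, {"id": "t2", "cpu": 1.0}, {"id": "t3", "cpu": 3.5}]
nodes = [{"id": "edge-a", "cpu_free": 4.0}, {"id": "edge-b", "cpu_free": 3.0}]

assignment = {}
for task in sorted(tasks, key=lambda t: -t["cpu"]):          # largest tasks first
    candidates = [n for n in nodes if n["cpu_free"] >= task["cpu"]]
    if not candidates:
        assignment[task["id"]] = None                        # would be offloaded to cloud
        continue
    best = max(candidates, key=lambda n: n["cpu_free"])      # most head-room
    best["cpu_free"] -= task["cpu"]
    assignment[task["id"]] = best["id"]

print(assignment)   # {'t3': 'edge-a', 't1': 'edge-b', 't2': 'edge-b'}
```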
31 pages, 2565 KiB  
Article
The Meta-Metaverse: Ideation and Future Directions
by Mohammad (Behdad) Jamshidi, Arash Dehghaniyan Serej, Alireza Jamshidi and Omid Moztarzadeh
Future Internet 2023, 15(8), 252; https://doi.org/10.3390/fi15080252 - 27 Jul 2023
Cited by 26 | Viewed by 5602
Abstract
In the era of digitalization and artificial intelligence (AI), the utilization of Metaverse technology has become increasingly crucial. As the world becomes more digitized, there is a pressing need to effectively transfer real-world assets into the digital realm and establish meaningful relationships between [...] Read more.
In the era of digitalization and artificial intelligence (AI), the utilization of Metaverse technology has become increasingly crucial. As the world becomes more digitized, there is a pressing need to effectively transfer real-world assets into the digital realm and establish meaningful relationships between them. However, existing approaches have shown significant limitations in achieving this goal comprehensively. To address this, this research introduces an innovative methodology called the Meta-Metaverse, which aims to enhance the immersive experience and create realistic digital twins across various domains such as biology, genetics, economy, medicine, environment, gaming, digital twins, Internet of Things, artificial intelligence, machine learning, psychology, supply chain, social networking, smart manufacturing, and politics. The multi-layered structure of Metaverse platforms and digital twins allows for greater flexibility and scalability, offering valuable insights into the potential impact of advancing science, technology, and the internet. This article presents a detailed description of the proposed methodology and its applications, highlighting its potential to transform scientific research and inspire groundbreaking ideas in science, medicine, and technology. Full article
18 pages, 5882 KiB  
Article
A Novel Approach for Fraud Detection in Blockchain-Based Healthcare Networks Using Machine Learning
by Mohammed A. Mohammed, Manel Boujelben and Mohamed Abid
Future Internet 2023, 15(8), 250; https://doi.org/10.3390/fi15080250 - 26 Jul 2023
Cited by 23 | Viewed by 5097
Abstract
Recently, the advent of blockchain (BC) has sparked a digital revolution in different fields, such as finance, healthcare, and supply chain. It is used by smart healthcare systems to provide transparency and control for personal medical records. However, BC and healthcare integration still [...] Read more.
Recently, the advent of blockchain (BC) has sparked a digital revolution in different fields, such as finance, healthcare, and supply chain. It is used by smart healthcare systems to provide transparency and control for personal medical records. However, BC and healthcare integration still face many challenges, such as storing patient data and privacy and security issues. In the context of security, new attacks target different parts of the BC network, such as nodes, consensus algorithms, Smart Contracts (SC), and wallets. Fraudulent data insertion can have serious consequences on the integrity and reliability of the BC, as it can compromise the trustworthiness of the information stored on it and lead to incorrect or misleading transactions. Detecting and preventing fraudulent data insertion is crucial for maintaining the credibility of the BC as a secure and transparent system for recording and verifying transactions. SCs control the transfer of assets, which is why they may be subject to several adversarial attacks. Therefore, many efforts have been proposed to detect vulnerabilities and attacks in SCs, such as utilizing programming tools. However, these proposals are inadequate against newly emerging vulnerabilities and attacks. Artificial Intelligence technology is robust in analyzing and detecting new attacks in every part of the BC network. Therefore, this article proposes a system architecture for detecting fraudulent transactions and attacks in the BC network based on Machine Learning (ML). It is composed of two stages: (1) using ML to check medical data from sensors and block abnormal data from entering the blockchain network; (2) using the same ML to check transactions in the blockchain, storing normal transactions, and marking abnormal ones as novel attacks in the attacks database. To build our system, we utilized two datasets and six machine learning algorithms (Logistic Regression, Decision Tree, KNN, Naive Bayes, SVM, and Random Forest). The results demonstrate that the Random Forest algorithm outperformed the others, achieving the highest accuracy together with the best execution time and scalability. It was therefore considered the best solution among the evaluated algorithms for tackling the research problem. Moreover, the security analysis of the proposed system proves its robustness against several attacks which threaten the functioning of the blockchain-based healthcare application. Full article
(This article belongs to the Section Cybersecurity)
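A minimal sketch of the Random Forest gatekeeper idea (stage 1) is shown below with scikit-learn; the synthetic records and the admit/flag rule are illustrative assumptions, not the paper's datasets or pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled sensor/transaction records (0 = normal, 1 = fraudulent).
rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 2000) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
clf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Gatekeeper step: only records classified as normal would be written to the chain,
# while flagged ones would be logged to an attacks database (stage 2 in the paper).
new_record = rng.normal(size=(1, 12))
print("admit to blockchain:", bool(clf.predict(new_record)[0] == 0))
```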
42 pages, 4766 KiB  
Review
Self-Healing in Cyber–Physical Systems Using Machine Learning: A Critical Analysis of Theories and Tools
by Obinna Johnphill, Ali Safaa Sadiq, Feras Al-Obeidat, Haider Al-Khateeb, Mohammed Adam Taheir, Omprakash Kaiwartya and Mohammed Ali
Future Internet 2023, 15(7), 244; https://doi.org/10.3390/fi15070244 - 17 Jul 2023
Cited by 18 | Viewed by 7765
Abstract
The rapid advancement of networking, computing, sensing, and control systems has introduced a wide range of cyber threats, including those from new devices deployed during the development of scenarios. With recent advancements in automobiles, medical devices, smart industrial systems, and other technologies, system [...] Read more.
The rapid advancement of networking, computing, sensing, and control systems has introduced a wide range of cyber threats, including those from new devices deployed during the development of scenarios. With recent advancements in automobiles, medical devices, smart industrial systems, and other technologies, system failures resulting from external attacks or internal process malfunctions are increasingly common. Restoring the system’s stable state requires autonomous intervention through the self-healing process to maintain service quality. This paper, therefore, aims to analyse the state of the art and identify where self-healing using machine learning can be applied to cyber–physical systems to enhance security and prevent failures within the system. The paper describes three key components of self-healing functionality in computer systems: anomaly detection, fault alert, and fault auto-remediation. The significance of these components is that self-healing functionality cannot be practical without considering all three. Understanding the self-healing theories that form the guiding principles for implementing these functionalities with real-life implications is crucial. There are strong indications that self-healing functionality in the cyber–physical system is an emerging area of research that holds great promise for the future of computing technology. It has the potential to provide seamless self-organising and self-restoration functionality to cyber–physical systems, leading to increased security of systems and improved user experience. For instance, a functional self-healing system implemented on a power grid will react autonomously when a threat or fault occurs, without requiring human intervention to restore power to communities and preserve critical services after power outages or defects. This paper presents the existing vulnerabilities, threats, and challenges and critically analyses the current self-healing theories and methods that use machine learning for cyber–physical systems. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
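The three components can be sketched in a few lines of Python; the Isolation Forest detector, the telemetry ranges, and the remediation stub below are illustrative assumptions rather than any of the surveyed methods:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# 1) Anomaly detection on telemetry (e.g., voltage/frequency readings from a grid node).
rng = np.random.default_rng(3)
normal_telemetry = rng.normal(loc=[230.0, 50.0], scale=[2.0, 0.05], size=(500, 2))
detector = IsolationForest(contamination=0.01, random_state=3).fit(normal_telemetry)

def self_heal(sample):
    # 2) Fault alert and 3) auto-remediation stub; a real CPS would call actuators/orchestrators.
    if detector.predict(sample.reshape(1, -1))[0] == -1:
        print("ALERT: anomalous reading", sample)
        print("remediation: isolating feeder and switching to backup supply (placeholder)")
    else:
        print("nominal:", sample)

self_heal(np.array([229.5, 50.01]))   # normal reading
self_heal(np.array([180.0, 47.5]))    # anomalous reading
```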
17 pages, 1160 KiB  
Article
Machine Learning for Network Intrusion Detection—A Comparative Study
by Mustafa Al Lail, Alejandro Garcia and Saul Olivo
Future Internet 2023, 15(7), 243; https://doi.org/10.3390/fi15070243 - 16 Jul 2023
Cited by 27 | Viewed by 7275
Abstract
Modern society has quickly evolved to utilize communication and data-sharing media with the advent of the internet and electronic technologies. However, these technologies have created new opportunities for attackers to gain access to confidential electronic resources. As a result, data breaches have significantly [...] Read more.
Modern society has quickly evolved to utilize communication and data-sharing media with the advent of the internet and electronic technologies. However, these technologies have created new opportunities for attackers to gain access to confidential electronic resources. As a result, data breaches have significantly impacted our society in multiple ways. To mitigate this situation, researchers have developed multiple security countermeasure techniques known as Network Intrusion Detection Systems (NIDS). Despite these techniques, attackers have developed new strategies to gain unauthorized access to resources. In this work, we propose using machine learning (ML) to develop a NIDS system capable of detecting modern attack types with a very high detection rate. To this end, we implement and evaluate several ML algorithms and compare their effectiveness using a state-of-the-art dataset containing modern attack types. The results show that the random forest model outperforms other models, with a detection rate of 97 percent for modern network attacks. This study shows that not only is accurate prediction possible, but a high attack detection rate can also be achieved. These results indicate that ML has the potential to create very effective NIDS systems. Full article
(This article belongs to the Special Issue Anomaly Detection in Modern Networks)
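A compact version of such a comparative evaluation can be written with scikit-learn as below; the synthetic dataset stands in for a modern intrusion dataset, and the three models and their settings are illustrative rather than the paper's exact configuration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import recall_score

# Synthetic stand-in for a labelled intrusion dataset (class 1 = attack flow).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Detection rate = recall on the attack class.
    print(name, "detection rate:", round(recall_score(y_te, model.predict(X_te)), 3))
```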
28 pages, 22171 KiB  
Article
A Cyber-Physical System for Wildfire Detection and Firefighting
by Pietro Battistoni, Andrea Antonio Cantone, Gerardo Martino, Valerio Passamano, Marco Romano, Monica Sebillo and Giuliana Vitiello
Future Internet 2023, 15(7), 237; https://doi.org/10.3390/fi15070237 - 6 Jul 2023
Cited by 16 | Viewed by 5614
Abstract
The increasing frequency and severity of forest fires necessitate early detection and rapid response to mitigate their impact. This project aims to design a cyber-physical system for early detection and rapid response to forest fires using advanced technologies. The system incorporates Internet of [...] Read more.
The increasing frequency and severity of forest fires necessitate early detection and rapid response to mitigate their impact. This project aims to design a cyber-physical system for early detection and rapid response to forest fires using advanced technologies. The system incorporates Internet of Things sensors and autonomous unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) controlled by the Robot Operating System (ROS). An IoT-based wildfire detection node continuously monitors environmental conditions, enabling early fire detection. Upon fire detection, a UAV autonomously surveys the area to precisely locate the fire and can deploy an extinguishing payload or provide data for decision-making. The UAV communicates the fire’s precise location to a collaborative UGV, which autonomously reaches the designated area to support ground-based firefighters. The CPS includes a ground control station with web-based dashboards for real-time monitoring of system parameters and telemetry data from UAVs and UGVs. The article demonstrates the real-time fire detection capabilities of the proposed system using simulated forest fire scenarios. The objective is to provide a practical approach using open-source technologies for early detection and extinguishing of forest fires, with potential applications in various industries, surveillance, and precision agriculture. Full article
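As a rough sketch of the detection-node side (the thresholds, topic name, and broker address are assumptions, not the system's actual configuration), an IoT node could publish fire alerts over MQTT like this:

```python
import json
import paho.mqtt.publish as publish   # assumes the paho-mqtt package and a reachable broker

# Illustrative thresholds; a real node would fuse several sensors and debounce readings.
TEMP_C_THRESHOLD = 55.0
SMOKE_PPM_THRESHOLD = 300.0

def check_and_report(node_id, temp_c, smoke_ppm, lat, lon):
    if temp_c > TEMP_C_THRESHOLD or smoke_ppm > SMOKE_PPM_THRESHOLD:
        alert = {"node": node_id, "lat": lat, "lon": lon,
                 "temp_c": temp_c, "smoke_ppm": smoke_ppm, "event": "possible_fire"}
        # The ground control station (or a UAV dispatcher) would subscribe to this topic.
        publish.single("wildfire/alerts", json.dumps(alert), hostname="gcs.local")

check_and_report("node-17", temp_c=61.2, smoke_ppm=410.0, lat=40.77, lon=14.79)
```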
21 pages, 1044 KiB  
Article
Enhancing Collaborative Filtering-Based Recommender System Using Sentiment Analysis
by Ikram Karabila, Nossayba Darraz, Anas El-Ansari, Nabil Alami and Mostafa El Mallahi
Future Internet 2023, 15(7), 235; https://doi.org/10.3390/fi15070235 - 5 Jul 2023
Cited by 35 | Viewed by 8157
Abstract
Recommendation systems (RSs) are widely used in e-commerce to improve conversion rates by aligning product offerings with customer preferences and interests. While traditional RSs rely solely on numerical ratings to generate recommendations, these ratings alone may not be sufficient to offer personalized and [...] Read more.
Recommendation systems (RSs) are widely used in e-commerce to improve conversion rates by aligning product offerings with customer preferences and interests. While traditional RSs rely solely on numerical ratings to generate recommendations, these ratings alone may not be sufficient to offer personalized and accurate suggestions. To overcome this limitation, additional sources of information, such as reviews, can be utilized. However, analyzing and understanding the information contained within reviews, which are often unstructured data, is a challenging task. To address this issue, sentiment analysis (SA) has attracted considerable attention as a tool to better comprehend a user’s opinions, emotions, and attitudes. In this study, we propose a novel RS that leverages ensemble learning by integrating sentiment analysis of textual data with collaborative filtering techniques to provide users with more precise and individualized recommendations. Our system was developed in three main steps. Firstly, we used unsupervised “GloVe” vectorization for better classification performance and built a sentiment model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Secondly, we developed a recommendation model based on collaborative filtering techniques. Lastly, we integrated our sentiment analysis model into the RS. Our proposed model of SA achieved an accuracy score of 93%, which is superior to other models. The results of our study indicate that our approach enhances the accuracy of the recommendation system. Overall, our proposed system offers customers a more reliable and personalized recommendation service in e-commerce. Full article
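A minimal sketch of the two building blocks is given below: a Bi-LSTM sentiment classifier in Keras and a simple blend of its output with a collaborative-filtering score. The layer sizes and the 0.7/0.3 weighting are assumptions, not the paper's configuration, and the model would first need training on labelled reviews (optionally with GloVe weights loaded into the embedding layer):

```python
import tensorflow as tf

VOCAB = 20000

# Bi-LSTM sentiment classifier over tokenized review text.
sentiment_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 100),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
sentiment_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def hybrid_score(cf_score, review_tokens, w=0.7):
    """Blend a collaborative-filtering prediction (0-1) with review sentiment (0-1).
    The weighting is illustrative only."""
    sentiment = float(sentiment_model.predict(review_tokens, verbose=0)[0, 0])
    return w * cf_score + (1 - w) * sentiment
```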
27 pages, 5489 KiB  
Article
A New AI-Based Semantic Cyber Intelligence Agent
by Fahim Sufi
Future Internet 2023, 15(7), 231; https://doi.org/10.3390/fi15070231 - 29 Jun 2023
Cited by 12 | Viewed by 5306
Abstract
The surge in cybercrime has emerged as a pressing concern in contemporary society due to its far-reaching financial, social, and psychological repercussions on individuals. Beyond inflicting monetary losses, cyber-attacks exert adverse effects on the social fabric and psychological well-being of the affected individuals. [...] Read more.
The surge in cybercrime has emerged as a pressing concern in contemporary society due to its far-reaching financial, social, and psychological repercussions on individuals. Beyond inflicting monetary losses, cyber-attacks exert adverse effects on the social fabric and psychological well-being of the affected individuals. In order to mitigate the deleterious consequences of cyber threats, adoption of an intelligent agent-based solution to enhance the speed and comprehensiveness of cyber intelligence is advocated. In this paper, a novel cyber intelligence solution is proposed, employing four semantic agents that interact autonomously to acquire crucial cyber intelligence pertaining to any given country. The solution leverages a combination of techniques, including a convolutional neural network (CNN), sentiment analysis, exponential smoothing, latent Dirichlet allocation (LDA), term frequency-inverse document frequency (TF-IDF), Porter stemming, and others, to analyse data from both social media and web sources. The proposed method underwent evaluation from 13 October 2022 to 6 April 2023, utilizing a dataset comprising 37,386 tweets generated by 30,706 users across 54 languages. To address non-English content, a total of 8199 HTTP requests were made to facilitate translation. Additionally, the system processed 238,220 cyber threat data from the web. Within a remarkably brief duration of 6 s, the system autonomously generated a comprehensive cyber intelligence report encompassing 7 critical dimensions of cyber intelligence for countries such as Russia, Ukraine, China, Iran, India, and Australia. Full article
(This article belongs to the Special Issue Semantic Web Services for Multi-Agent Systems)
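Two of the listed techniques, TF-IDF weighting and LDA topic extraction, can be illustrated with scikit-learn as follows; the example posts and the number of topics are invented for demonstration and are unrelated to the system's actual feeds:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "ransomware attack hits hospital network overnight",
    "phishing campaign targets bank customers with fake login pages",
    "new botnet exploits unpatched routers for ddos attacks",
    "data breach exposes millions of customer records",
]

# TF-IDF for keyword weighting (e.g., ranking alert-worthy terms in a country feed).
tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)

# LDA over raw term counts to surface latent cyber-threat topics.
counts_vec = CountVectorizer(stop_words="english")
counts = counts_vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = counts_vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", top)
```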
17 pages, 18103 KiB  
Article
RSSI and Device Pose Fusion for Fingerprinting-Based Indoor Smartphone Localization Systems
by Imran Moez Khan, Andrew Thompson, Akram Al-Hourani, Kandeepan Sithamparanathan and Wayne S. T. Rowe
Future Internet 2023, 15(6), 220; https://doi.org/10.3390/fi15060220 - 20 Jun 2023
Cited by 5 | Viewed by 3639
Abstract
Complementing RSSI measurements at anchors with onboard smartphone accelerometer measurements is a popular research direction to improve the accuracy of indoor localization systems. This can be performed at different levels; for example, many studies have used pedestrian dead reckoning (PDR) and a filtering [...] Read more.
Complementing RSSI measurements at anchors with onboard smartphone accelerometer measurements is a popular research direction to improve the accuracy of indoor localization systems. This can be performed at different levels; for example, many studies have used pedestrian dead reckoning (PDR) and a filtering method at the algorithm level for sensor fusion. In this study, a novel conceptual framework was developed and applied at the data level that first utilizes accelerometer measurements to classify the smartphone’s device pose and then combines this with RSSI measurements. The framework was explored using neural networks with room-scale experimental data obtained from a Bluetooth low-energy (BLE) setup. Consistent accuracy improvement was obtained for the output localization classes (zones), with an average overall accuracy improvement of 10.7 percentage points for the RSSI-and-device-pose framework over that of RSSI-only localization. Full article
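A data-level fusion of RSSI and device pose can be sketched as below; the synthetic fingerprints, the one-hot pose encoding, and the MLP settings are assumptions intended only to show the feature construction, not the paper's network or data (with random labels the accuracy here is near chance):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
N, ANCHORS, POSES, ZONES = 1200, 6, 3, 4

# Synthetic fingerprints: RSSI from 6 BLE anchors plus a device-pose class
# (e.g., 0 = in hand, 1 = in pocket, 2 = against ear), fused at the data level.
rssi = rng.normal(-70, 8, size=(N, ANCHORS))
pose = rng.integers(0, POSES, size=N)
zone = rng.integers(0, ZONES, size=N)                 # localization class labels

pose_onehot = np.eye(POSES)[pose]
X_fused = np.hstack([rssi, pose_onehot])              # RSSI-and-device-pose feature vector

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=5)
clf.fit(X_fused[:1000], zone[:1000])
print("zone accuracy on synthetic data:", clf.score(X_fused[1000:], zone[1000:]))
```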
19 pages, 3172 KiB  
Article
BERT4Loc: BERT for Location—POI Recommender System
by Syed Raza Bashir, Shaina Raza and Vojislav B. Misic
Future Internet 2023, 15(6), 213; https://doi.org/10.3390/fi15060213 - 12 Jun 2023
Cited by 10 | Viewed by 4631
Abstract
Recommending points of interest (POI) is a challenging task that requires extracting comprehensive location data from location-based social media platforms. To provide effective location-based recommendations, it is important to analyze users’ historical behavior and preferences. In this study, we present a sophisticated location-aware [...] Read more.
Recommending points of interest (POI) is a challenging task that requires extracting comprehensive location data from location-based social media platforms. To provide effective location-based recommendations, it is important to analyze users’ historical behavior and preferences. In this study, we present a sophisticated location-aware recommendation system that uses Bidirectional Encoder Representations from Transformers (BERT) to offer personalized location-based suggestions. Our model combines location information and user preferences to provide more relevant recommendations compared to models that predict the next POI in a sequence. Based on our experiments conducted on two benchmark datasets, we have observed that our BERT-based model surpasses baseline models in terms of HR by a significant margin of 6% over the second-best performing baseline. Furthermore, our model demonstrates a gain of 1–2% in NDCG compared to the second-best baseline. These results indicate the superior performance and effectiveness of our BERT-based approach in comparison to other models when evaluating HR and NDCG metrics. Additional experiments further confirm the quality and effectiveness of the proposed model. Full article
(This article belongs to the Section Techno-Social Smart Systems)
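For illustration, BERT embeddings can be used to score candidate POIs against a user's textual history as below, using Hugging Face Transformers and a pretrained bert-base-uncased checkpoint (downloaded on first use); BERT4Loc itself models check-in sequences, which this sketch does not reproduce:

```python
import torch
from transformers import AutoTokenizer, AutoModel   # Hugging Face Transformers

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch).last_hidden_state[:, 0]   # [CLS] token embeddings
    return torch.nn.functional.normalize(out, dim=1)

# Hypothetical user history and candidate POIs, scored by cosine similarity.
user_history = embed(["italian restaurant downtown", "espresso bar near the park"]).mean(0)
candidates = ["sushi place by the river", "trattoria in the old town", "24h gym"]
scores = embed(candidates) @ user_history
print(sorted(zip(candidates, scores.tolist()), key=lambda p: -p[1]))
```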
45 pages, 2869 KiB  
Review
Securing Wireless Sensor Networks Using Machine Learning and Blockchain: A Review
by Shereen Ismail, Diana W. Dawoud and Hassan Reza
Future Internet 2023, 15(6), 200; https://doi.org/10.3390/fi15060200 - 30 May 2023
Cited by 37 | Viewed by 8899
Abstract
As an Internet of Things (IoT) technological key enabler, Wireless Sensor Networks (WSNs) are prone to different kinds of cyberattacks. WSNs have unique characteristics, and have several limitations which complicate the design of effective attack prevention and detection techniques. This paper aims to [...] Read more.
As an Internet of Things (IoT) technological key enabler, Wireless Sensor Networks (WSNs) are prone to different kinds of cyberattacks. WSNs have unique characteristics, and have several limitations which complicate the design of effective attack prevention and detection techniques. This paper aims to provide a comprehensive understanding of the fundamental principles underlying cybersecurity in WSNs. In addition to current and envisioned solutions that have been studied in detail, this review primarily focuses on state-of-the-art Machine Learning (ML) and Blockchain (BC) security techniques by studying and analyzing 164 up-to-date publications highlighting security aspects in WSNs. Then, the paper discusses integrating BC and ML towards developing a lightweight security framework that consists of two lines of defence, i.e., cyberattack detection and cyberattack prevention in WSNs, emphasizing the relevant design insights and challenges. The paper concludes by presenting a proposed integrated BC and ML solution highlighting potential BC and ML algorithms underpinning a less computationally demanding solution. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT II)