Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

34 pages, 2309 KiB  
Review
Edge AI for Early Detection of Chronic Diseases and the Spread of Infectious Diseases: Opportunities, Challenges, and Future Directions
by Elarbi Badidi
Future Internet 2023, 15(11), 370; https://doi.org/10.3390/fi15110370 - 18 Nov 2023
Cited by 13 | Viewed by 11943
Abstract
Edge AI, an interdisciplinary technology that enables distributed intelligence with edge devices, is quickly becoming a critical component in early health prediction. Edge AI encompasses data analytics and artificial intelligence (AI) using machine learning, deep learning, and federated learning models deployed and executed at the edge of the network, far from centralized data centers. AI enables the careful analysis of large datasets derived from multiple sources, including electronic health records, wearable devices, and demographic information, making it possible to identify intricate patterns and predict a person’s future health. Federated learning, a novel approach in AI, further enhances this prediction by enabling collaborative training of AI models on distributed edge devices while maintaining privacy. Using edge computing, data can be processed and analyzed locally, reducing latency and enabling instant decision making. This article reviews the role of Edge AI in early health prediction and highlights its potential to improve public health. Topics covered include the use of AI algorithms for early detection of chronic diseases such as diabetes and cancer and the use of edge computing in wearable devices to detect the spread of infectious diseases. In addition to discussing the challenges and limitations of Edge AI in early health prediction, this article emphasizes future research directions to address these concerns, improve integration with existing healthcare systems, and explore the full potential of these technologies in improving public health. Full article
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)

23 pages, 14269 KiB  
Article
Implementation and Evaluation of a Federated Learning Framework on Raspberry PI Platforms for IoT 6G Applications
by Lorenzo Ridolfi, David Naseh, Swapnil Sadashiv Shinde and Daniele Tarchi
Future Internet 2023, 15(11), 358; https://doi.org/10.3390/fi15110358 - 31 Oct 2023
Cited by 6 | Viewed by 2401
Abstract
With the advent of 6G technology, the proliferation of interconnected devices necessitates a robust, fully connected intelligence network. Federated Learning (FL) stands as a key distributed learning technique, showing promise in recent advancements. However, the integration of novel Internet of Things (IoT) applications and virtualization technologies has introduced diverse and heterogeneous devices into wireless networks. This diversity encompasses variations in computation, communication, storage resources, training data, and communication modes among connected nodes. In this context, our study presents a pivotal contribution by analyzing and implementing FL processes tailored for 6G standards. Our work defines a practical FL platform, employing Raspberry Pi devices and virtual machines as client nodes, with a Windows PC serving as a parameter server. We tackle the image classification challenge, implementing the FL model via PyTorch, augmented by the specialized FL library, Flower. Notably, our analysis delves into the impact of computational resources, data availability, and heating issues across heterogeneous device sets. Additionally, we address knowledge transfer and employ pre-trained networks in our FL performance evaluation. This research underscores the indispensable role of artificial intelligence in IoT scenarios within the 6G landscape, providing a comprehensive framework for FL implementation across diverse and heterogeneous devices. Full article
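For readers new to the Flower (flwr) library mentioned above, a minimal sketch of the client/server split on which such a testbed could be built might look as follows; the CNN, data loaders, and addresses are illustrative placeholders rather than the paper's exact configuration.

```python
# Hypothetical minimal Flower setup: one parameter server and one Raspberry Pi client.
# The model, data loaders, and addresses are placeholders, not the paper's exact setup.
import flwr as fl
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Tiny image classifier standing in for the paper's model."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(start_dim=1))

def get_weights(model):
    return [p.detach().cpu().numpy() for p in model.state_dict().values()]

def set_weights(model, weights):
    state_dict = dict(zip(model.state_dict().keys(), [torch.tensor(w) for w in weights]))
    model.load_state_dict(state_dict, strict=True)

class PiClient(fl.client.NumPyClient):
    def __init__(self, model, train_loader, test_loader):
        self.model, self.train_loader, self.test_loader = model, train_loader, test_loader

    def get_parameters(self, config):
        return get_weights(self.model)

    def fit(self, parameters, config):
        set_weights(self.model, parameters)
        # ... one or more local epochs of SGD over self.train_loader ...
        return get_weights(self.model), len(self.train_loader.dataset), {}

    def evaluate(self, parameters, config):
        set_weights(self.model, parameters)
        # ... compute loss/accuracy over self.test_loader (placeholders returned here) ...
        return 0.0, len(self.test_loader.dataset), {"accuracy": 0.0}

# On the PC acting as parameter server (API names per recent flwr releases):
#   fl.server.start_server(server_address="0.0.0.0:8080",
#                          config=fl.server.ServerConfig(num_rounds=3))
# On each Raspberry Pi / VM client:
#   fl.client.start_numpy_client(server_address="<server-ip>:8080",
#                                client=PiClient(SmallCNN(), train_loader, test_loader))
```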

32 pages, 419 KiB  
Article
The 6G Ecosystem as Support for IoE and Private Networks: Vision, Requirements, and Challenges
by Carlos Serôdio, José Cunha, Guillermo Candela, Santiago Rodriguez, Xosé Ramón Sousa and Frederico Branco
Future Internet 2023, 15(11), 348; https://doi.org/10.3390/fi15110348 - 25 Oct 2023
Cited by 12 | Viewed by 3789
Abstract
The emergence of the sixth generation of cellular systems (6G) signals a transformative era and ecosystem for mobile communications, driven by demands from technologies like the internet of everything (IoE), V2X communications, and factory automation. To support this connectivity, mission-critical applications are emerging with challenging network requirements. The primary goals of 6G include providing sophisticated and high-quality services, further-enhanced mobile broadband (feMBB), extremely reliable and low-latency communication (ERLLC), long-distance and high-mobility communications (LDHMC), ultra-massive machine-type communications (umMTC), extremely low-power communications (ELPC), holographic communications, and quality of experience (QoE), grounded in incorporating massive broad-bandwidth machine-type (mBBMT), mobile broad-bandwidth and low-latency (MBBLL), and massive low-latency machine-type (mLLMT) communications. In attaining its objectives, 6G faces challenges that demand inventive solutions, incorporating AI, softwarization, cloudification, virtualization, and slicing features. Technologies like network function virtualization (NFV), network slicing, and software-defined networking (SDN) play pivotal roles in this integration, which facilitates efficient resource utilization, responsive service provisioning, expanded coverage, enhanced network reliability, increased capacity, densification, heightened availability, safety, security, and reduced energy consumption. The 6G ecosystem also presents innovative network infrastructure concepts, such as resource-as-a-service (RaaS) and infrastructure-as-a-service (IaaS), featuring management and service orchestration mechanisms. This includes nomadic networks, AI-aware networking strategies, and dynamic management of diverse network resources. This paper provides an in-depth survey of the wireless evolution leading to 6G networks, addressing future issues and challenges associated with 6G technology to support V2X environments, considering present challenges in architecture, spectrum, air interface, reliability, availability, density, flexibility, mobility, and security. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)
26 pages, 4052 KiB  
Article
Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities
by Edisa Lozić and Benjamin Štular
Future Internet 2023, 15(10), 336; https://doi.org/10.3390/fi15100336 - 13 Oct 2023
Cited by 18 | Viewed by 8210
Abstract
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software. Full article
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

27 pages, 312 KiB  
Article
A New Approach to Web Application Security: Utilizing GPT Language Models for Source Code Inspection
by Zoltán Szabó and Vilmos Bilicki
Future Internet 2023, 15(10), 326; https://doi.org/10.3390/fi15100326 - 28 Sep 2023
Cited by 9 | Viewed by 3954
Abstract
Due to the proliferation of large language models (LLMs) and their widespread use in applications such as ChatGPT, there has been a significant increase in interest in AI over the past year. Multiple researchers have raised the question: how will AI be applied and in what areas? Programming, including the generation, interpretation, analysis, and documentation of static program code based on prompts, is one of the most promising fields. With the GPT API, we have explored a new aspect of this: static analysis of the source code of front-end applications at the endpoints of the data path. Our focus was the detection of the CWE-653 vulnerability—inadequately isolated sensitive code segments that could lead to unauthorized access or data leakage. This type of vulnerability detection consists of detecting code segments dealing with sensitive data and categorizing their isolation and protection levels, a task that was previously not feasible without human intervention. However, we believed that the interpretive capabilities of GPT models could be explored to create a set of prompts to detect these cases on a file-by-file basis for the applications under study, and the efficiency of the method could pave the way for additional analysis tasks that were previously unavailable for automation. In the introduction to our paper, we characterize in detail the problem space of vulnerability and weakness detection, the challenges of the domain, and the advances that have been achieved in similarly complex areas using GPT or other LLMs. Then, we present our methodology, which includes our classification of sensitive data and protection levels. This is followed by the process of preprocessing, analyzing, and evaluating static code, achieved through a series of GPT prompts containing parts of static source code and utilizing few-shot examples and chain-of-thought techniques that detected sensitive code segments and mapped the complex code base into manageable JSON structures. Finally, we present our findings and evaluation of the open source project analysis, comparing the results of the GPT-based pipelines with manual evaluations and highlighting the high research value of the field. The results show, among others, a vulnerability detection rate of 88.76% for this particular type of model. Full article
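As a rough illustration of the kind of prompting pipeline described above (not the authors' actual prompts or schema), a single file-level analysis call with a few-shot example and a JSON-only answer format could be sketched like this, assuming the official OpenAI Python client and a placeholder model name:

```python
# Illustrative sketch only: ask a GPT chat model to flag code segments handling sensitive
# data and rate their isolation level, returning JSON. Prompt wording, schema, and model
# name are assumptions, not the paper's exact pipeline.
import json
from openai import OpenAI  # assumes the official openai>=1.0 client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a static-analysis assistant. Given a source file, list code segments that "
    "handle sensitive data (credentials, health, financial, personal), and for each one "
    "rate how well it is isolated/protected (e.g., 'none', 'partial', 'adequate'). "
    "Think step by step, then answer ONLY with a JSON array of objects with the keys "
    "'lines', 'data_category', 'protection_level', 'reason'."
)

FEW_SHOT_EXAMPLE = (
    "Example file:\n"
    "function saveToken(t) { localStorage.setItem('jwt', t); }\n"
    "Example answer:\n"
    '[{"lines": "1", "data_category": "credentials", '
    '"protection_level": "none", "reason": "JWT stored in localStorage without guards"}]'
)

def analyze_file(source_code: str, model: str = "gpt-4o") -> list[dict]:
    """Send one file to the model and parse the JSON verdict."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": FEW_SHOT_EXAMPLE},
            {"role": "user", "content": f"Analyze this file:\n```\n{source_code}\n```"},
        ],
    )
    return json.loads(response.choices[0].message.content)
```
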
18 pages, 1957 KiB  
Article
An Enhanced Minimax Loss Function Technique in Generative Adversarial Network for Ransomware Behavior Prediction
by Mazen Gazzan and Frederick T. Sheldon
Future Internet 2023, 15(10), 318; https://doi.org/10.3390/fi15100318 - 22 Sep 2023
Cited by 8 | Viewed by 2055
Abstract
Recent ransomware attacks threaten not only personal files but also critical infrastructure like smart grids, necessitating early detection before encryption occurs. Current methods, reliant on pre-encryption data, suffer from insufficient and rapidly outdated attack patterns, despite efforts to focus on select features. Such an approach assumes that the same features remain unchanged. This approach proves ineffective due to the polymorphic and metamorphic characteristics of ransomware, which generate unique attack patterns for each new target, particularly in the pre-encryption phase where evasiveness is prioritized. As a result, the selected features quickly become obsolete. Therefore, this study proposes an enhanced Bi-Gradual Minimax (BGM) loss function for the Generative Adversarial Network (GAN) algorithm that compensates for the insufficiency of attack patterns to represent the polymorphic behavior at the earlier phases of the ransomware lifecycle. Unlike existing GAN-based models, the BGM-GAN gradually minimizes the maximum loss of the generator and discriminator in the network. This allows the generator to create artificial patterns that resemble the pre-encryption data distribution. The generator is used to craft evasive adversarial patterns and add them to the original data. Then, the generator and discriminator compete to optimize their weights during the training phase such that the generator produces realistic attack patterns, while the discriminator endeavors to distinguish between the real and crafted patterns. The experimental results show that the proposed BGM-GAN reached a maximum accuracy of 0.98, a recall of 0.96, and a minimum false positive rate of 0.14, all of which outperform those obtained by existing works. The application of BGM-GAN can be extended to the early detection of malware and other types of attacks. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
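Since the abstract does not spell out the BGM loss itself, the sketch below only shows the standard minimax GAN training step that such a loss would modify; the feature sizes are placeholders standing in for pre-encryption behaviour vectors.

```python
# Sketch of the standard minimax GAN objective that the proposed BGM loss builds on;
# the bi-gradual modification itself is not reproduced here.
import torch
import torch.nn as nn

FEATURES = 32        # assumed size of a pre-encryption behaviour vector
NOISE_DIM = 16

generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, FEATURES))
discriminator = nn.Sequential(nn.Linear(FEATURES, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor):
    batch = real_batch.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake = generator(noise)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_opt.zero_grad()
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step (non-saturating form): push D(fake) toward 1
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```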

16 pages, 1050 KiB  
Review
Enhancing E-Learning with Blockchain: Characteristics, Projects, and Emerging Trends
by Mahmoud Bidry, Abdellah Ouaguid and Mohamed Hanine
Future Internet 2023, 15(9), 293; https://doi.org/10.3390/fi15090293 - 28 Aug 2023
Cited by 11 | Viewed by 3649
Abstract
Blockchain represents a decentralized and distributed ledger technology, ensuring transparent and secure transaction recording across networks. This innovative technology offers several benefits, including increased security, trust, and transparency, making it suitable for a wide range of applications. In the last few years, there has been a growing interest in investigating the potential of Blockchain technology to enhance diverse fields, such as e-learning. In this research, we undertook a systematic literature review to explore the potential of Blockchain technology in enhancing the e-learning domain. Our research focused on four main questions: (1) What potential characteristics of Blockchain can contribute to enhancing e-learning? (2) What are the existing Blockchain projects dedicated to e-learning? (3) What are the limitations of existing projects? (4) What are the future trends in Blockchain-related research that will impact e-learning? The results showed that Blockchain technology has several characteristics that could benefit e-learning, and we discussed immutability, transparency, decentralization, security, and traceability. We also identified several existing Blockchain projects dedicated to e-learning and discussed their potential to revolutionize learning by providing more transparency, security, and effectiveness. However, our research also revealed many limitations and challenges that must be addressed to realize Blockchain technology’s potential in e-learning. Full article
(This article belongs to the Special Issue Future Prospects and Advancements in Blockchain Technology)

15 pages, 644 KiB  
Review
Generative AI in Medicine and Healthcare: Promises, Opportunities and Challenges
by Peng Zhang and Maged N. Kamel Boulos
Future Internet 2023, 15(9), 286; https://doi.org/10.3390/fi15090286 - 24 Aug 2023
Cited by 70 | Viewed by 40091
Abstract
Generative AI (artificial intelligence) refers to algorithms and models, such as OpenAI’s ChatGPT, that can be prompted to generate various types of content. In this narrative review, we present a selection of representative examples of generative AI applications in medicine and healthcare. We then briefly discuss some associated issues, such as trust, veracity, clinical safety and reliability, privacy, copyrights, ownership, and opportunities, e.g., AI-driven conversational user interfaces for friendlier human-computer interaction. We conclude that generative AI will play an increasingly important role in medicine and healthcare as it further evolves and gets better tailored to the unique settings and requirements of the medical domain and as the laws, policies and regulatory frameworks surrounding its use start taking shape. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things II)

21 pages, 538 KiB  
Article
Prospects of Cybersecurity in Smart Cities
by Fernando Almeida
Future Internet 2023, 15(9), 285; https://doi.org/10.3390/fi15090285 - 23 Aug 2023
Cited by 7 | Viewed by 5612
Abstract
The complex and interconnected infrastructure of smart cities offers several opportunities for attackers to exploit vulnerabilities and carry out cyberattacks that can have serious consequences for the functioning of cities’ critical infrastructures. This study aims to address this phenomenon and characterize the dimensions of security risks in smart cities and present mitigation proposals to address these risks. The study adopts a qualitative methodology through the identification of 62 European research projects in the field of cybersecurity in smart cities, which are underway during the period from 2022 to 2027. Compared to previous studies, this work provides a comprehensive view of security risks from the perspective of multiple universities, research centers, and companies participating in European projects. The findings of this study offer relevant scientific contributions by identifying 7 dimensions and 31 sub-dimensions of cybersecurity risks in smart cities and proposing 24 mitigation strategies to face these security challenges. Furthermore, this study explores emerging cybersecurity issues to which smart cities are exposed by the increasing proliferation of new technologies and standards. Full article
(This article belongs to the Special Issue Cyber Security Challenges in the New Smart Worlds)

38 pages, 7280 KiB  
Article
SEDIA: A Platform for Semantically Enriched IoT Data Integration and Development of Smart City Applications
by Dimitrios Lymperis and Christos Goumopoulos
Future Internet 2023, 15(8), 276; https://doi.org/10.3390/fi15080276 - 18 Aug 2023
Cited by 7 | Viewed by 2640
Abstract
The development of smart city applications often encounters a variety of challenges. These include the need to address complex requirements such as integrating diverse data sources and incorporating geographical data that reflect the physical urban environment. Platforms designed for smart cities hold a pivotal position in materializing these applications, given that they offer a suite of high-level services, which can be repurposed by developers. Although a variety of platforms are available to aid the creation of smart city applications, most fail to couple their services with geographical data, do not offer the ability to execute semantic queries on the available data, and possess restrictions that could impede the development process. This paper introduces SEDIA, a platform for developing smart applications based on diverse data sources, including geographical information, to support a semantically enriched data model for effective data analysis and integration. It also discusses the efficacy of SEDIA in a proof-of-concept smart city application related to air quality monitoring. The platform utilizes ontology classes and properties to semantically annotate collected data, and the Neo4j graph database facilitates the recognition of patterns and relationships within the data. This research also offers empirical data demonstrating the performance evaluation of SEDIA. These contributions collectively advance our understanding of semantically enriched data integration within the realm of smart city applications. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
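Purely as an illustration of the ontology-annotated storage and graph querying described above, a Neo4j-backed air-quality example might look like the sketch below; the node labels, properties, and Cypher patterns are assumptions, not SEDIA's actual schema.

```python
# Illustrative only: writing semantically annotated IoT readings to Neo4j and querying
# a simple pattern. Schema, ontology URI, and credentials are made-up placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_reading(tx, sensor_id, onto_class, value, lat, lon):
    # Annotate the sensor with an ontology class and attach a geo-tagged reading
    tx.run(
        "MERGE (s:Sensor {id: $sensor_id}) "
        "MERGE (c:OntologyClass {uri: $onto_class}) "
        "MERGE (s)-[:INSTANCE_OF]->(c) "
        "CREATE (r:Reading {value: $value, lat: $lat, lon: $lon, ts: timestamp()}) "
        "CREATE (s)-[:REPORTED]->(r)",
        sensor_id=sensor_id, onto_class=onto_class, value=value, lat=lat, lon=lon,
    )

def top_pm25_readings(tx, limit=10):
    # Find the highest readings reported by sensors annotated as PM2.5 sensors
    result = tx.run(
        "MATCH (s:Sensor)-[:INSTANCE_OF]->(:OntologyClass {uri: $uri}), "
        "(s)-[:REPORTED]->(r:Reading) "
        "RETURN s.id AS sensor, r.value AS pm25 ORDER BY r.value DESC LIMIT $limit",
        uri="http://example.org/aq#PM25Sensor", limit=limit,
    )
    return [record.data() for record in result]

with driver.session() as session:
    session.execute_write(store_reading, "s-17", "http://example.org/aq#PM25Sensor",
                          31.4, 37.95, 23.64)
    print(session.execute_read(top_pm25_readings))
```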

26 pages, 1726 KiB  
Article
Towards Efficient Resource Allocation for Federated Learning in Virtualized Managed Environments
by Fotis Nikolaidis, Moysis Symeonides and Demetris Trihinas
Future Internet 2023, 15(8), 261; https://doi.org/10.3390/fi15080261 - 31 Jul 2023
Cited by 11 | Viewed by 3132
Abstract
Federated learning (FL) is a transformative approach to Machine Learning that enables the training of a shared model without transferring private data to a central location. This decentralized training paradigm has found particular applicability in edge computing, where IoT devices and edge nodes often possess limited computational power, network bandwidth, and energy resources. While various techniques have been developed to optimize the FL training process, an important question remains unanswered: how should resources be allocated in the training workflow? To address this question, it is crucial to understand the nature of these resources. In physical environments, the allocation is typically performed at the node level, with the entire node dedicated to executing a single workload. In contrast, virtualized environments allow for the dynamic partitioning of a node into containerized units that can adapt to changing workloads. Consequently, the new question that arises is: how can a physical node be partitioned into virtual resources to maximize the efficiency of the FL process? To answer this, we investigate various resource allocation methods that consider factors such as computational and network capabilities, the complexity of datasets, as well as the specific characteristics of the FL workflow and ML backend. We explore two scenarios: (i) running FL over a finite number of testbed nodes and (ii) hosting multiple parallel FL workflows on the same set of testbed nodes. Our findings reveal that the default configurations of state-of-the-art cloud orchestrators are sub-optimal when orchestrating FL workflows. Additionally, we demonstrate that different libraries and ML models exhibit diverse computational footprints. Building upon these insights, we discuss methods to mitigate computational interferences and enhance the overall performance of the FL pipeline execution. Full article

60 pages, 14922 KiB  
Review
The Power of Generative AI: A Review of Requirements, Models, Input–Output Formats, Evaluation Metrics, and Challenges
by Ajay Bandi, Pydi Venkata Satya Ramesh Adapa and Yudu Eswar Vinay Pratap Kumar Kuchi
Future Internet 2023, 15(8), 260; https://doi.org/10.3390/fi15080260 - 31 Jul 2023
Cited by 93 | Viewed by 49135
Abstract
Generative artificial intelligence (AI) has emerged as a powerful technology with numerous applications in various domains. There is a need to identify the requirements and evaluation metrics for generative AI models designed for specific tasks. The research aims to investigate the fundamental aspects of generative AI systems, including their requirements, models, input–output formats, and evaluation metrics. The study addresses key research questions and presents comprehensive insights to guide researchers, developers, and practitioners in the field. Firstly, the requirements necessary for implementing generative AI systems are examined and categorized into three distinct categories: hardware, software, and user experience. Furthermore, the study explores the different types of generative AI models described in the literature by presenting a taxonomy based on architectural characteristics, such as variational autoencoders (VAEs), generative adversarial networks (GANs), diffusion models, transformers, language models, normalizing flow models, and hybrid models. A comprehensive classification of input and output formats used in generative AI systems is also provided. Moreover, the research proposes a classification system based on output types and discusses commonly used evaluation metrics in generative AI. The findings contribute to advancements in the field, enabling researchers, developers, and practitioners to effectively implement and evaluate generative AI models for various applications. The significance of the research lies in understanding that generative AI system requirements are crucial for effective planning, design, and optimal performance. A taxonomy of models aids in selecting suitable options and driving advancements. Classifying input–output formats enables leveraging diverse formats for customized systems, while evaluation metrics establish standardized methods to assess model quality and performance. Full article
(This article belongs to the Special Issue State-of-the-Art Future Internet Technology in USA 2022–2023)

31 pages, 602 KiB  
Review
A Review of ARIMA vs. Machine Learning Approaches for Time Series Forecasting in Data Driven Networks
by Vaia I. Kontopoulou, Athanasios D. Panagopoulos, Ioannis Kakkos and George K. Matsopoulos
Future Internet 2023, 15(8), 255; https://doi.org/10.3390/fi15080255 - 30 Jul 2023
Cited by 56 | Viewed by 22702
Abstract
In the broad scientific field of time series forecasting, the ARIMA models and their variants have been widely applied for half a century now due to their mathematical simplicity and flexibility in application. However, with the recent advances in the development and efficient deployment of artificial intelligence models and techniques, the view is rapidly changing, with a shift towards machine and deep learning approaches becoming apparent, even without a complete evaluation of the superiority of the new approach over the classic statistical algorithms. Our work constitutes an extensive review of the published scientific literature regarding the comparison of ARIMA and machine learning algorithms applied to time series forecasting problems, as well as the combination of these two approaches in hybrid statistical-AI models in a wide variety of data applications (finance, health, weather, utilities, and network traffic prediction). Our review has shown that the AI algorithms display better prediction performance in most applications, with a few notable exceptions analyzed in our Discussion and Conclusions sections, while the hybrid statistical-AI models steadily outperform their individual parts, utilizing the best algorithmic features of both worlds. Full article
(This article belongs to the Special Issue Smart Data and Systems for the Internet of Things)
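A generic, self-contained version of the ARIMA-versus-ML comparison surveyed in the review, using a synthetic series, arbitrary model orders, and a lag-feature random forest, could be sketched as follows:

```python
# Minimal, generic sketch: fit a classical ARIMA and a lag-feature random forest on the
# same synthetic series and compare one-step-ahead errors. Orders and lags are arbitrary.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(300)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)
train, test = y[:250], y[250:]

# Classical statistical model
arima_pred = ARIMA(train, order=(2, 1, 2)).fit().forecast(steps=len(test))

# ML model on lagged values, evaluated walk-forward one step at a time
LAGS = 24
X = np.array([y[i - LAGS:i] for i in range(LAGS, 250)])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, train[LAGS:])
history, rf_pred = list(train), []
for actual in test:
    rf_pred.append(rf.predict(np.array(history[-LAGS:]).reshape(1, -1))[0])
    history.append(actual)

print("ARIMA MAE:", mean_absolute_error(test, arima_pred))
print("RF    MAE:", mean_absolute_error(test, rf_pred))
```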

30 pages, 583 KiB  
Review
Task Allocation Methods and Optimization Techniques in Edge Computing: A Systematic Review of the Literature
by Vasilios Patsias, Petros Amanatidis, Dimitris Karampatzakis, Thomas Lagkas, Kalliopi Michalakopoulou and Alexandros Nikitas
Future Internet 2023, 15(8), 254; https://doi.org/10.3390/fi15080254 - 28 Jul 2023
Cited by 16 | Viewed by 5033
Abstract
Task allocation in edge computing refers to the process of distributing tasks among the various nodes in an edge computing network. The main challenges in task allocation include determining the optimal location for each task based on the requirements such as processing power, storage, and network bandwidth, and adapting to the dynamic nature of the network. Different approaches for task allocation include centralized, decentralized, hybrid, and machine learning algorithms. Each approach has its strengths and weaknesses and the choice of approach will depend on the specific requirements of the application. In more detail, the selection of the most optimal task allocation methods depends on the edge computing architecture and configuration type, like mobile edge computing (MEC), cloud-edge, fog computing, peer-to-peer edge computing, etc. Thus, task allocation in edge computing is a complex, diverse, and challenging problem that requires a balance of trade-offs between multiple conflicting objectives such as energy efficiency, data privacy, security, latency, and quality of service (QoS). Recently, an increased number of research studies have emerged regarding the performance evaluation and optimization of task allocation on edge devices. While several survey articles have described the current state-of-the-art task allocation methods, this work focuses on comparing and contrasting different task allocation methods, optimization algorithms, as well as the network types that are most frequently used in edge computing systems. Full article

31 pages, 2565 KiB  
Article
The Meta-Metaverse: Ideation and Future Directions
by Mohammad (Behdad) Jamshidi, Arash Dehghaniyan Serej, Alireza Jamshidi and Omid Moztarzadeh
Future Internet 2023, 15(8), 252; https://doi.org/10.3390/fi15080252 - 27 Jul 2023
Cited by 22 | Viewed by 4275
Abstract
In the era of digitalization and artificial intelligence (AI), the utilization of Metaverse technology has become increasingly crucial. As the world becomes more digitized, there is a pressing need to effectively transfer real-world assets into the digital realm and establish meaningful relationships between them. However, existing approaches have shown significant limitations in achieving this goal comprehensively. To address this, this research introduces an innovative methodology called the Meta-Metaverse, which aims to enhance the immersive experience and create realistic digital twins across various domains such as biology, genetics, economy, medicine, environment, gaming, digital twins, Internet of Things, artificial intelligence, machine learning, psychology, supply chain, social networking, smart manufacturing, and politics. The multi-layered structure of Metaverse platforms and digital twins allows for greater flexibility and scalability, offering valuable insights into the potential impact of advancing science, technology, and the internet. This article presents a detailed description of the proposed methodology and its applications, highlighting its potential to transform scientific research and inspire groundbreaking ideas in science, medicine, and technology. Full article

18 pages, 5882 KiB  
Article
A Novel Approach for Fraud Detection in Blockchain-Based Healthcare Networks Using Machine Learning
by Mohammed A. Mohammed, Manel Boujelben and Mohamed Abid
Future Internet 2023, 15(8), 250; https://doi.org/10.3390/fi15080250 - 26 Jul 2023
Cited by 11 | Viewed by 3541
Abstract
Recently, the advent of blockchain (BC) has sparked a digital revolution in different fields, such as finance, healthcare, and supply chain. It is used by smart healthcare systems to provide transparency and control for personal medical records. However, BC and healthcare integration still face many challenges, such as patient data storage and privacy and security issues. In the context of security, new attacks target different parts of the BC network, such as nodes, consensus algorithms, Smart Contracts (SC), and wallets. Fraudulent data insertion can have serious consequences on the integrity and reliability of the BC, as it can compromise the trustworthiness of the information stored on it and lead to incorrect or misleading transactions. Detecting and preventing fraudulent data insertion is crucial for maintaining the credibility of the BC as a secure and transparent system for recording and verifying transactions. SCs control the transfer of assets, which is why they may be subject to several adversarial attacks. Therefore, many efforts have been proposed to detect vulnerabilities and attacks in SCs, such as utilizing programming tools. However, their proposals are inadequate against the newly emerging vulnerabilities and attacks. Artificial Intelligence technology is robust in analyzing and detecting new attacks in every part of the BC network. Therefore, this article proposes a system architecture for detecting fraudulent transactions and attacks in the BC network based on Machine Learning (ML). It is composed of two stages: (1) Using ML to check medical data from sensors and block abnormal data from entering the blockchain network. (2) Using the same ML to check transactions in the blockchain, storing normal transactions, and marking abnormal ones as novel attacks in the attacks database. To build our system, we utilized two datasets and six machine learning algorithms (Logistic Regression, Decision Tree, KNN, Naive Bayes, SVM, and Random Forest). The results demonstrate that the Random Forest algorithm outperformed the others, achieving the best accuracy, execution time, and scalability. It was therefore considered the best of the evaluated algorithms for tackling the research problem. Moreover, the security analysis of the proposed system proves its robustness against several attacks which threaten the functioning of the blockchain-based healthcare application. Full article
(This article belongs to the Section Cybersecurity)
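The model-comparison stage can be illustrated with a generic sketch of the six classifier families named above, evaluated on placeholder tabular data rather than the paper's healthcare or transaction datasets:

```python
# Hedged sketch of the comparison step only: six classifier families on synthetic,
# imbalanced tabular data standing in for sensor/transaction features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "SVM": SVC(),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:>18}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```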

42 pages, 4766 KiB  
Review
Self-Healing in Cyber–Physical Systems Using Machine Learning: A Critical Analysis of Theories and Tools
by Obinna Johnphill, Ali Safaa Sadiq, Feras Al-Obeidat, Haider Al-Khateeb, Mohammed Adam Taheir, Omprakash Kaiwartya and Mohammed Ali
Future Internet 2023, 15(7), 244; https://doi.org/10.3390/fi15070244 - 17 Jul 2023
Cited by 11 | Viewed by 5055
Abstract
The rapid advancement of networking, computing, sensing, and control systems has introduced a wide range of cyber threats, including those from new devices deployed during the development of scenarios. With recent advancements in automobiles, medical devices, smart industrial systems, and other technologies, system failures resulting from external attacks or internal process malfunctions are increasingly common. Restoring the system’s stable state requires autonomous intervention through the self-healing process to maintain service quality. This paper, therefore, aims to analyse state of the art and identify where self-healing using machine learning can be applied to cyber–physical systems to enhance security and prevent failures within the system. The paper describes three key components of self-healing functionality in computer systems: anomaly detection, fault alert, and fault auto-remediation. The significance of these components is that self-healing functionality cannot be practical without considering all three. Understanding the self-healing theories that form the guiding principles for implementing these functionalities with real-life implications is crucial. There are strong indications that self-healing functionality in the cyber–physical system is an emerging area of research that holds great promise for the future of computing technology. It has the potential to provide seamless self-organising and self-restoration functionality to cyber–physical systems, leading to increased security of systems and improved user experience. For instance, a functional self-healing system implemented on a power grid will react autonomously when a threat or fault occurs, without requiring human intervention to restore power to communities and preserve critical services after power outages or defects. This paper presents the existing vulnerabilities, threats, and challenges and critically analyses the current self-healing theories and methods that use machine learning for cyber–physical systems. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)

17 pages, 1160 KiB  
Article
Machine Learning for Network Intrusion Detection—A Comparative Study
by Mustafa Al Lail, Alejandro Garcia and Saul Olivo
Future Internet 2023, 15(7), 243; https://doi.org/10.3390/fi15070243 - 16 Jul 2023
Cited by 14 | Viewed by 5233
Abstract
Modern society has quickly evolved to utilize communication and data-sharing media with the advent of the internet and electronic technologies. However, these technologies have created new opportunities for attackers to gain access to confidential electronic resources. As a result, data breaches have significantly impacted our society in multiple ways. To mitigate this situation, researchers have developed multiple security countermeasure techniques known as Network Intrusion Detection Systems (NIDS). Despite these techniques, attackers have developed new strategies to gain unauthorized access to resources. In this work, we propose using machine learning (ML) to develop a NIDS system capable of detecting modern attack types with a very high detection rate. To this end, we implement and evaluate several ML algorithms and compare their effectiveness using a state-of-the-art dataset containing modern attack types. The results show that the random forest model outperforms other models, with a detection rate of modern network attacks of 97 percent. This study shows that not only is accurate prediction possible but also a high detection rate of attacks can be achieved. These results indicate that ML has the potential to create very effective NIDS systems. Full article
(This article belongs to the Special Issue Anomaly Detection in Modern Networks)
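For clarity on the reported metric, the short worked example below reads "detection rate" as recall over attack classes; the labels and predictions are made up for illustration and are unrelated to the paper's dataset or trained model.

```python
# Toy illustration of the evaluation step: per-class recall and an overall detection
# rate (attacks not misclassified as benign). Labels are made up for the example.
from sklearn.metrics import classification_report

y_true = ["benign", "dos", "dos", "probe", "benign", "botnet", "dos", "probe"]
y_pred = ["benign", "dos", "dos", "probe", "dos", "botnet", "dos", "benign"]

print(classification_report(y_true, y_pred, zero_division=0))

# Overall detection rate: fraction of true attack samples not predicted as benign
attacks = [(t, p) for t, p in zip(y_true, y_pred) if t != "benign"]
detection_rate = sum(p != "benign" for _, p in attacks) / len(attacks)
print(f"Overall attack detection rate: {detection_rate:.2f}")
```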

28 pages, 22171 KiB  
Article
A Cyber-Physical System for Wildfire Detection and Firefighting
by Pietro Battistoni, Andrea Antonio Cantone, Gerardo Martino, Valerio Passamano, Marco Romano, Monica Sebillo and Giuliana Vitiello
Future Internet 2023, 15(7), 237; https://doi.org/10.3390/fi15070237 - 6 Jul 2023
Cited by 13 | Viewed by 4360
Abstract
The increasing frequency and severity of forest fires necessitate early detection and rapid response to mitigate their impact. This project aims to design a cyber-physical system for early detection and rapid response to forest fires using advanced technologies. The system incorporates Internet of Things sensors and autonomous unmanned aerial and ground vehicles controlled by the robot operating system. An IoT-based wildfire detection node continuously monitors environmental conditions, enabling early fire detection. Upon fire detection, a UAV autonomously surveys the area to precisely locate the fire and can deploy an extinguishing payload or provide data for decision-making. The UAV communicates the fire’s precise location to a collaborative UGV, which autonomously reaches the designated area to support ground-based firefighters. The CPS includes a ground control station with web-based dashboards for real-time monitoring of system parameters and telemetry data from UAVs and UGVs. The article demonstrates the real-time fire detection capabilities of the proposed system using simulated forest fire scenarios. The objective is to provide a practical approach using open-source technologies for early detection and extinguishing of forest fires, with potential applications in various industries, surveillance, and precision agriculture. Full article

21 pages, 1044 KiB  
Article
Enhancing Collaborative Filtering-Based Recommender System Using Sentiment Analysis
by Ikram Karabila, Nossayba Darraz, Anas El-Ansari, Nabil Alami and Mostafa El Mallahi
Future Internet 2023, 15(7), 235; https://doi.org/10.3390/fi15070235 - 5 Jul 2023
Cited by 17 | Viewed by 6628
Abstract
Recommendation systems (RSs) are widely used in e-commerce to improve conversion rates by aligning product offerings with customer preferences and interests. While traditional RSs rely solely on numerical ratings to generate recommendations, these ratings alone may not be sufficient to offer personalized and accurate suggestions. To overcome this limitation, additional sources of information, such as reviews, can be utilized. However, analyzing and understanding the information contained within reviews, which are often unstructured data, is a challenging task. To address this issue, sentiment analysis (SA) has attracted considerable attention as a tool to better comprehend a user’s opinions, emotions, and attitudes. In this study, we propose a novel RS that leverages ensemble learning by integrating sentiment analysis of textual data with collaborative filtering techniques to provide users with more precise and individualized recommendations. Our system was developed in three main steps. Firstly, we used unsupervised “GloVe” vectorization for better classification performance and built a sentiment model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Secondly, we developed a recommendation model based on collaborative filtering techniques. Lastly, we integrated our sentiment analysis model into the RS. Our proposed model of SA achieved an accuracy score of 93%, which is superior to other models. The results of our study indicate that our approach enhances the accuracy of the recommendation system. Overall, our proposed system offers customers a more reliable and personalized recommendation service in e-commerce. Full article
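A rough sketch of the GloVe-plus-Bi-LSTM sentiment component described above is shown below; the vocabulary size, embedding matrix, and sequence length are placeholders, and the wiring into the collaborative-filtering stage is omitted.

```python
# Sketch of a Bi-LSTM sentiment classifier over pre-trained word vectors, in the spirit
# of the GloVe + Bi-LSTM component. The embedding matrix here is random, standing in
# for loaded GloVe vectors; data loading and training are not shown.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, EMB_DIM, MAX_LEN = 20000, 100, 120
glove_matrix = np.random.normal(size=(VOCAB, EMB_DIM)).astype("float32")

model = models.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB, EMB_DIM,
                     embeddings_initializer=tf.keras.initializers.Constant(glove_matrix),
                     trainable=False),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(padded_review_ids, sentiment_labels, epochs=5)  # data pipeline not shown
```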

27 pages, 5489 KiB  
Article
A New AI-Based Semantic Cyber Intelligence Agent
by Fahim Sufi
Future Internet 2023, 15(7), 231; https://doi.org/10.3390/fi15070231 - 29 Jun 2023
Cited by 11 | Viewed by 4462
Abstract
The surge in cybercrime has emerged as a pressing concern in contemporary society due to its far-reaching financial, social, and psychological repercussions on individuals. Beyond inflicting monetary losses, cyber-attacks exert adverse effects on the social fabric and psychological well-being of the affected individuals. In order to mitigate the deleterious consequences of cyber threats, adoption of an intelligent agent-based solution to enhance the speed and comprehensiveness of cyber intelligence is advocated. In this paper, a novel cyber intelligence solution is proposed, employing four semantic agents that interact autonomously to acquire crucial cyber intelligence pertaining to any given country. The solution leverages a combination of techniques, including a convolutional neural network (CNN), sentiment analysis, exponential smoothing, latent Dirichlet allocation (LDA), term frequency-inverse document frequency (TF-IDF), Porter stemming, and others, to analyse data from both social media and web sources. The proposed method underwent evaluation from 13 October 2022 to 6 April 2023, utilizing a dataset comprising 37,386 tweets generated by 30,706 users across 54 languages. To address non-English content, a total of 8199 HTTP requests were made to facilitate translation. Additionally, the system processed 238,220 cyber threat data from the web. Within a remarkably brief duration of 6 s, the system autonomously generated a comprehensive cyber intelligence report encompassing 7 critical dimensions of cyber intelligence for countries such as Russia, Ukraine, China, Iran, India, and Australia. Full article
(This article belongs to the Special Issue Semantic Web Services for Multi-Agent Systems)
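Two of the text-analysis building blocks mentioned above, TF-IDF weighting and LDA topic extraction, can be illustrated on a handful of made-up posts; the CNN, sentiment, and forecasting components are not shown.

```python
# Minimal illustration of TF-IDF term weighting and LDA topic extraction on toy posts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

posts = [
    "ransomware campaign targets hospital networks",
    "new phishing kit spoofs banking portals",
    "ddos attack disrupts government services",
    "data breach exposes millions of patient records",
]

# TF-IDF: term weights for ranking indicative keywords in the first post
tfidf = TfidfVectorizer(stop_words="english")
weights = tfidf.fit_transform(posts)
print(dict(zip(tfidf.get_feature_names_out(), weights.toarray()[0].round(2))))

# LDA: group posts into latent cyber-threat topics
counts = CountVectorizer(stop_words="english").fit(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts.transform(posts))
print(doc_topics.round(2))   # per-post topic mixture
```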

17 pages, 18103 KiB  
Article
RSSI and Device Pose Fusion for Fingerprinting-Based Indoor Smartphone Localization Systems
by Imran Moez Khan, Andrew Thompson, Akram Al-Hourani, Kandeepan Sithamparanathan and Wayne S. T. Rowe
Future Internet 2023, 15(6), 220; https://doi.org/10.3390/fi15060220 - 20 Jun 2023
Cited by 4 | Viewed by 3164
Abstract
Complementing RSSI measurements at anchors with onboard smartphone accelerometer measurements is a popular research direction to improve the accuracy of indoor localization systems. This can be performed at different levels; for example, many studies have used pedestrian dead reckoning (PDR) and a filtering method at the algorithm level for sensor fusion. In this study, a novel conceptual framework was developed and applied at the data level that first utilizes accelerometer measurements to classify the smartphone’s device pose and then combines this with RSSI measurements. The framework was explored using neural networks with room-scale experimental data obtained from a Bluetooth low-energy (BLE) setup. Consistent accuracy improvement was obtained for the output localization classes (zones), with an average overall accuracy improvement of 10.7 percentage points for the RSSI-and-device-pose framework over that of RSSI-only localization. Full article
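The data-level fusion idea can be sketched as follows: a pose label derived from accelerometer features is one-hot encoded and concatenated with the RSSI vector before zone classification. All arrays below are synthetic placeholders, not the paper's BLE measurements.

```python
# Sketch of RSSI-and-device-pose fusion at the data level, with a small MLP as the
# zone classifier. Feature counts and data are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, n_anchors, n_poses, n_zones = 600, 6, 3, 4

rssi = rng.normal(-70, 8, size=(n, n_anchors))   # dBm readings per BLE anchor
pose = rng.integers(0, n_poses, size=n)          # e.g. hand / pocket / ear, from accelerometer
zone = rng.integers(0, n_zones, size=n)          # ground-truth location class

pose_onehot = np.eye(n_poses)[pose]              # encode the classified pose
fused = np.hstack([rssi, pose_onehot])           # RSSI-and-device-pose feature vector

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(fused[:500], zone[:500])
print("zone accuracy on held-out samples:", clf.score(fused[500:], zone[500:]))
```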

19 pages, 3172 KiB  
Article
BERT4Loc: BERT for Location—POI Recommender System
by Syed Raza Bashir, Shaina Raza and Vojislav B. Misic
Future Internet 2023, 15(6), 213; https://doi.org/10.3390/fi15060213 - 12 Jun 2023
Cited by 6 | Viewed by 3916
Abstract
Recommending points of interest (POI) is a challenging task that requires extracting comprehensive location data from location-based social media platforms. To provide effective location-based recommendations, it is important to analyze users’ historical behavior and preferences. In this study, we present a sophisticated location-aware recommendation system that uses Bidirectional Encoder Representations from Transformers (BERT) to offer personalized location-based suggestions. Our model combines location information and user preferences to provide more relevant recommendations compared to models that predict the next POI in a sequence. Based on our experiments conducted on two benchmark datasets, we have observed that our BERT-based model surpasses baseline models in terms of HR (hit ratio) by a significant margin of 6% compared to the second-best performing baseline. Furthermore, our model demonstrates a gain of 1–2% in NDCG (normalized discounted cumulative gain) compared to the second-best baseline. These results indicate the superior performance and effectiveness of our BERT-based approach in comparison to other models when evaluating the HR and NDCG metrics. Moreover, additional experiments demonstrate the effectiveness of the proposed model in terms of recommendation quality. Full article
(This article belongs to the Section Techno-Social Smart Systems)
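For readers unfamiliar with the reported metrics, the short worked example below computes HR@k and NDCG@k for a single user whose held-out POI is ranked third in a top-10 list; the numbers are purely illustrative.

```python
# Worked example of the two metrics reported above, HR@k and NDCG@k, for one user.
import math

def hit_rate_at_k(rank, k):
    """1 if the held-out item appears within the top-k recommendations, else 0."""
    return 1.0 if rank is not None and rank <= k else 0.0

def ndcg_at_k(rank, k):
    """With a single relevant item, NDCG@k reduces to 1 / log2(rank + 1)."""
    return 1.0 / math.log2(rank + 1) if rank is not None and rank <= k else 0.0

print(hit_rate_at_k(3, 10))          # 1.0  -> counted as a hit
print(round(ndcg_at_k(3, 10), 3))    # 0.5  = 1 / log2(4)
```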

45 pages, 2869 KiB  
Review
Securing Wireless Sensor Networks Using Machine Learning and Blockchain: A Review
by Shereen Ismail, Diana W. Dawoud and Hassan Reza
Future Internet 2023, 15(6), 200; https://doi.org/10.3390/fi15060200 - 30 May 2023
Cited by 21 | Viewed by 7816
Abstract
As a key technological enabler of the Internet of Things (IoT), Wireless Sensor Networks (WSNs) are prone to different kinds of cyberattacks. WSNs have unique characteristics and several limitations which complicate the design of effective attack prevention and detection techniques. This paper aims to provide a comprehensive understanding of the fundamental principles underlying cybersecurity in WSNs. In addition to current and envisioned solutions that have been studied in detail, this review primarily focuses on state-of-the-art Machine Learning (ML) and Blockchain (BC) security techniques by studying and analyzing 164 up-to-date publications highlighting security aspects in WSNs. Then, the paper discusses integrating BC and ML towards developing a lightweight security framework that consists of two lines of defence, i.e., cyberattack detection and cyberattack prevention in WSNs, emphasizing the relevant design insights and challenges. The paper concludes by presenting a proposed integrated BC and ML solution highlighting potential BC and ML algorithms underpinning a less computationally demanding solution. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT II)

17 pages, 7028 KiB  
Article
A Distributed Sensor System Based on Cloud-Edge-End Network for Industrial Internet of Things
by Mian Wang, Cong’an Xu, Yun Lin, Zhiyi Lu, Jinlong Sun and Guan Gui
Future Internet 2023, 15(5), 171; https://doi.org/10.3390/fi15050171 - 30 Apr 2023
Cited by 7 | Viewed by 3404
Abstract
The Industrial Internet of Things (IIoT) refers to the application of the IoT in the industrial field. The development of fifth-generation (5G) communication technology has accelerated the world’s entry into the era of the industrial revolution and has also promoted the overall optimization of the IIoT. In the IIoT environment, challenges such as complex operating conditions and diverse data transmission have become increasingly prominent. Therefore, studying how to collect and process a large amount of real-time data from various devices in a timely, efficient, and reasonable manner is a significant problem. To address these issues, we propose a three-level networking model based on distributed sensor self-networking and cloud server platforms for networking. This model can collect monitoring data for a variety of industrial scenarios that require data collection. It enables the processing and storage of key information in a timely manner, reduces data transmission and storage costs, and improves data transmission reliability and efficiency. Additionally, we have designed a feature fusion network to further enhance the amount of feature information and improve the accuracy of industrial data recognition. The system also includes data preprocessing and data visualization capabilities. Finally, we discuss how to further preprocess and visualize the collected dataset and provide a specific algorithm analysis process using a large manipulator dataset as an example. Full article
Show Figures

Figure 1

18 pages, 432 KiB  
Review
Opportunities for Early Detection and Prediction of Ransomware Attacks against Industrial Control Systems
by Mazen Gazzan and Frederick T. Sheldon
Future Internet 2023, 15(4), 144; https://doi.org/10.3390/fi15040144 - 7 Apr 2023
Cited by 17 | Viewed by 5482
Abstract
Industrial control systems (ICS) and supervisory control and data acquisition (SCADA) systems, which control critical infrastructure such as power plants and water treatment facilities, have unique characteristics that make them vulnerable to ransomware attacks. These systems are often outdated and run on proprietary software, making them difficult to protect with traditional cybersecurity measures. The limited visibility into these systems and the lack of effective threat intelligence pose significant challenges to the early detection and prediction of ransomware attacks. Ransomware attacks on ICS and SCADA systems have become a growing concern in recent years. These attacks can cause major disruptions to critical infrastructure and result in substantial financial losses. Despite the increasing threat, the prediction of ransomware attacks on ICS remains a significant challenge for the cybersecurity community. This is due to the unique characteristics of these systems, including the use of proprietary software and limited visibility into their operations. In this review paper, we examine the challenges associated with predicting ransomware attacks on industrial systems and the existing approaches for mitigating these risks. We also discuss the need for a multi-disciplinary approach that involves a close collaboration between the cybersecurity and ICS communities. We aim to provide a comprehensive overview of the current state of ransomware prediction on industrial systems and to identify opportunities for future research and development in this area. Full article
(This article belongs to the Special Issue Cyber Security Challenges in the New Smart Worlds)
Show Figures

Figure 1

15 pages, 5002 KiB  
Article
Future of Drug Discovery: The Synergy of Edge Computing, Internet of Medical Things, and Deep Learning
by Mohammad (Behdad) Jamshidi, Omid Moztarzadeh, Alireza Jamshidi, Ahmed Abdelgawad, Ayman S. El-Baz and Lukas Hauer
Future Internet 2023, 15(4), 142; https://doi.org/10.3390/fi15040142 - 7 Apr 2023
Cited by 20 | Viewed by 4881
Abstract
The global spread of COVID-19 highlights the urgency of quickly finding drugs and vaccines and suggests that similar challenges will arise in the future. This underscores the need for ongoing efforts to overcome the obstacles involved in the development of potential treatments. Although some progress has been made in the use of Artificial Intelligence (AI) in drug discovery, virologists, pharmaceutical companies, and investors seek more long-term solutions and greater investment in emerging technologies. One potential solution to aid in the drug-development process is to combine the capabilities of the Internet of Medical Things (IoMT), edge computing (EC), and deep learning (DL). Some practical frameworks and techniques utilizing EC, IoMT, and DL have been proposed for the monitoring and tracking of infected individuals or high-risk areas. However, these technologies have not been widely utilized in drug clinical trials. Given the time-consuming nature of traditional drug- and vaccine-development methods, there is a need for a new AI-based platform that can revolutionize the industry. One approach involves utilizing smartphones equipped with medical sensors to collect and transmit real-time physiological and healthcare information on clinical-trial participants to the nearest edge nodes (EN). This allows the verification of a vast amount of medical data for a large number of individuals in a short time frame, without latency, bandwidth, or security constraints. The collected information can be monitored by physicians and researchers to assess a vaccine’s performance. Full article
Show Figures

Figure 1

18 pages, 995 KiB  
Review
Influential Factors in the Design and Development of a Sustainable Web3/Metaverse and Its Applications
by Reza Aria, Norm Archer, Moein Khanlari and Bharat Shah
Future Internet 2023, 15(4), 131; https://doi.org/10.3390/fi15040131 - 30 Mar 2023
Cited by 16 | Viewed by 6274
Abstract
This paper summarizes the work of many different authors, industries, and countries by introducing important and influential factors that will help in the development, successful adoption, and sustainable use of the Web3/metaverse and its applications. We introduce several important factors derived from the current state-of-the-art literature, including four essential elements required for the appropriate implementation of a metaverse and its applications: (1) appropriate decentralization, (2) a good user experience, (3) appropriate translation and synchronization to the real world, and (4) a viable economy. The future of Web3 is all about decentralization, and blockchain can play a significant part in the development of the metaverse. This paper also sheds light on some of the most relevant open issues and challenges currently facing the Web3/metaverse and its applications, with the hope that this discourse will help to encourage the development of appropriate solutions. Full article
Show Figures

Figure 1

36 pages, 5618 KiB  
Review
Quantum Computing for Healthcare: A Review
by Raihan Ur Rasool, Hafiz Farooq Ahmad, Wajid Rafique, Adnan Qayyum, Junaid Qadir and Zahid Anwar
Future Internet 2023, 15(3), 94; https://doi.org/10.3390/fi15030094 - 27 Feb 2023
Cited by 62 | Viewed by 28915
Abstract
In recent years, the interdisciplinary field of quantum computing has rapidly developed and garnered substantial interest from both academia and industry due to its ability to process information in fundamentally different ways, leading to hitherto unattainable computational capabilities. However, despite its potential, the full extent of quantum computing’s impact on healthcare remains largely unexplored. This survey paper presents the first systematic analysis of the various capabilities of quantum computing in enhancing healthcare systems, with a focus on its potential to revolutionize compute-intensive healthcare tasks such as drug discovery, personalized medicine, DNA sequencing, medical imaging, and operational optimization. Through a comprehensive analysis of existing literature, we have developed taxonomies across different dimensions, including background and enabling technologies, applications, requirements, architectures, security, open issues, and future research directions, providing a panoramic view of the quantum computing paradigm for healthcare. Our survey aims to aid both new and experienced researchers in quantum computing and healthcare by helping them understand the current research landscape, identifying potential opportunities and challenges, and making informed decisions when designing new architectures and applications for quantum computing in healthcare. Full article
(This article belongs to the Special Issue Internet of Things (IoT) for Smart Living and Public Health)
Show Figures

Figure 1

34 pages, 2792 KiB  
Article
BPMNE4IoT: A Framework for Modeling, Executing and Monitoring IoT-Driven Processes
by Yusuf Kirikkayis, Florian Gallik, Michael Winter and Manfred Reichert
Future Internet 2023, 15(3), 90; https://doi.org/10.3390/fi15030090 - 22 Feb 2023
Cited by 15 | Viewed by 5611
Abstract
The Internet of Things (IoT) enables a variety of smart applications, including smart home, smart manufacturing, and smart city. By enhancing Business Process Management Systems with IoT capabilities, the execution and monitoring of business processes can be significantly improved. Providing holistic support for modeling, executing and monitoring IoT-driven processes, however, constitutes a challenge. Existing process modeling and process execution languages, such as BPMN 2.0, are unable to fully meet the IoT characteristics (e.g., asynchronicity and parallelism) of IoT-driven processes. In this article, we present BPMNE4IoT—a holistic framework for modeling, executing and monitoring IoT-driven processes. We introduce various artifacts and events based on the BPMN 2.0 metamodel that allow realizing the desired IoT awareness of business processes. The framework is evaluated along two real-world scenarios from two different domains. Moreover, we present a user study comparing BPMNE4IoT and BPMN 2.0. In particular, this study has confirmed that the BPMNE4IoT framework facilitates the support of IoT-driven processes. Full article
(This article belongs to the Special Issue IoT-Based BPM for Smart Environments)
Show Figures

Figure 1

23 pages, 2561 KiB  
Article
HealthBlock: A Framework for a Collaborative Sharing of Electronic Health Records Based on Blockchain
by Leina Abdelgalil and Mohamed Mejri
Future Internet 2023, 15(3), 87; https://doi.org/10.3390/fi15030087 - 21 Feb 2023
Cited by 14 | Viewed by 4688
Abstract
Electronic health records (EHRs) play an important role in our life. However, most of the time, they are scattered and saved in different databases belonging to distinct institutions (hospitals, laboratories, clinics, etc.) geographically distributed across one or many countries. Due to this decentralization and the heterogeneity of the different involved systems, medical staff face difficulties in correctly collaborating by sharing, protecting, and tracking their patients’ electronic health-record history to provide them with the best care. Additionally, patients have no control over their private EHRs. Blockchain has many promising future uses for the healthcare domain because it provides a better solution for sharing data while preserving the integrity, the interoperability, and the availability of the classical client–server architectures used to manage EHRs. This paper proposes a framework called HealthBlock for collaboratively sharing EHRs while preserving their privacy. Different technologies have been combined to achieve this goal. The InterPlanetary File System (IPFS) technology stores and shares patients’ EHRs in distributed off-chain storage and ensures the records’ immutability; Hyperledger Indy gives patients full control over their EHRs, and Hyperledger Fabric stores the patient-access control policy and delegations. Full article
Show Figures

Figure 1

17 pages, 2570 KiB  
Article
Machine Learning for Data Center Optimizations: Feature Selection Using Shapley Additive exPlanation (SHAP)
by Yibrah Gebreyesus, Damian Dalton, Sebastian Nixon, Davide De Chiara and Marta Chinnici
Future Internet 2023, 15(3), 88; https://doi.org/10.3390/fi15030088 - 21 Feb 2023
Cited by 26 | Viewed by 7400
Abstract
The need for artificial intelligence (AI) and machine learning (ML) models to optimize data center (DC) operations increases as the volume of operations management data upsurges tremendously. These strategies can assist operators in better understanding their DC operations and help them make informed decisions upfront to maintain service reliability and availability. The strategies include developing models that optimize energy efficiency, identifying inefficient resource utilization and scheduling policies, and predicting outages. In addition to model hyperparameter tuning, feature subset selection (FSS) is critical for identifying relevant features for effectively modeling DC operations to provide insight into the data, optimize model performance, and reduce computational expenses. Hence, this paper introduces the Shapley Additive exPlanation (SHAP) values method, a class of additive feature attribution methods for identifying relevant features that is rarely discussed in the literature. We compared its effectiveness with several commonly used, importance-based feature selection methods. The methods were tested on real DC operations data streams obtained from the ENEA CRESCO6 cluster with 20,832 cores. To demonstrate the effectiveness of SHAP compared to other methods, we selected the top ten most important features from each method, retrained the predictive models, and evaluated their performance using the MAE, RMSE, and MAPE evaluation criteria. The results presented in this paper demonstrate that the predictive models trained using features selected with the SHAP-assisted method performed well, with a lower error and a reasonable execution time compared to other methods. Full article
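A minimal sketch of the SHAP-assisted selection loop described above, using a synthetic regression dataset as a stand-in for the CRESCO6 telemetry; the model choice, feature count, and data are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical illustration of SHAP-based feature subset selection (not the authors' exact pipeline).
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for data-center telemetry: 50 features, continuous target (e.g., power draw).
X, y = make_regression(n_samples=2000, n_features=50, n_informative=12, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a tree-based model on all features, then attribute its predictions with TreeSHAP.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
shap_values = shap.TreeExplainer(model).shap_values(X_train)

# Rank features by mean absolute SHAP value and keep the top ten.
importance = np.abs(shap_values).mean(axis=0)
top10 = np.argsort(importance)[::-1][:10]

# Retrain on the reduced feature subset and compare errors, mirroring the paper's evaluation step.
reduced = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train[:, top10], y_train)
pred = reduced.predict(X_test[:, top10])
print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```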
(This article belongs to the Special Issue Machine Learning Perspective in the Convolutional Neural Network Era)
Show Figures

Figure 1

13 pages, 2344 KiB  
Article
Forest Fire Detection and Notification Method Based on AI and IoT Approaches
by Kuldoshbay Avazov, An Eui Hyun, Alabdulwahab Abrar Sami S, Azizbek Khaitov, Akmalbek Bobomirzaevich Abdusalomov and Young Im Cho
Future Internet 2023, 15(2), 61; https://doi.org/10.3390/fi15020061 - 31 Jan 2023
Cited by 39 | Viewed by 9296
Abstract
There is a high risk of bushfire in spring and autumn, when the air is dry. Do not bring any flammable substances, such as matches or cigarettes. Cooking or wood fires are permitted only in designated areas. These are some of the regulations that are enforced when hiking or going to a vegetated forest. However, humans tend to disobey or disregard guidelines and the law. Therefore, to preemptively stop people from accidentally starting a fire, we created a technique that allows early fire detection and classification to ensure the utmost safety of the living things in the forest. Some relevant studies on forest fire detection have been conducted in the past few years. However, there are still insufficient studies on early fire detection and notification systems for monitoring fire disasters in real time using advanced approaches. Therefore, we came up with a solution using the convergence of the Internet of Things (IoT) and You Only Look Once Version 5 (YOLOv5). The experimental results show that IoT devices were able to validate some of the falsely detected fires or undetected fires that YOLOv5 reported. This report is recorded and sent to the fire department for further verification and validation. Finally, we compared the performance of our method with those of recently reported fire detection approaches, employing widely used performance metrics to test the achieved fire classification results. Full article
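A hedged sketch of the vision-plus-sensor cross-check idea: a YOLOv5 model flags a possible fire in a camera frame, and the alert is escalated only when an IoT sensor agrees. The pretrained yolov5s weights, the confidence threshold, and the smoke_sensor_triggered() helper are illustrative assumptions; the paper's actual model and validation logic may differ.

```python
# Illustrative fire-alert sketch combining YOLOv5 detections with an IoT sensor check.
import torch

# Publicly available pretrained YOLOv5 weights; a fire/smoke-specific model would be used in practice.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def smoke_sensor_triggered() -> bool:
    """Hypothetical stand-in for a reading from an IoT smoke/temperature sensor."""
    return True  # replace with a real sensor query

def check_frame(image_path: str) -> None:
    results = model(image_path)                # run object detection on one camera frame
    detections = results.pandas().xyxy[0]      # detections as a pandas DataFrame
    confident = detections[detections["confidence"] > 0.5]
    if len(confident) and smoke_sensor_triggered():
        # Escalate only when the vision model and the IoT sensor agree, reducing false alarms.
        print("Possible fire detected; forwarding report for verification.")

check_frame("forest_camera_frame.jpg")  # path to a captured frame
```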
(This article belongs to the Special Issue Machine Learning Perspective in the Convolutional Neural Network Era)
Show Figures

Figure 1

18 pages, 968 KiB  
Article
The Cloud-to-Edge-to-IoT Continuum as an Enabler for Search and Rescue Operations
by Leonardo Militano, Adriana Arteaga, Giovanni Toffetti and Nathalie Mitton
Future Internet 2023, 15(2), 55; https://doi.org/10.3390/fi15020055 - 30 Jan 2023
Cited by 16 | Viewed by 5603
Abstract
When a natural or human disaster occurs, time is critical and often of vital importance. Data from the incident area containing the information to guide search and rescue (SAR) operations and improve intervention effectiveness should be collected as quickly as possible and with the highest accuracy possible. Nowadays, rescuers are assisted by different robots able to fly, climb or crawl, equipped with different sensors and wireless communication means. However, the heterogeneity of devices and data, together with the strong low-delay requirements, means that these technologies are not yet used to their full potential. Cloud and Edge technologies have shown the capability to offer support to the Internet of Things (IoT), complementing it with additional resources and functionalities. Nonetheless, building a continuum from the IoT to the edge and to the cloud is still an open challenge. SAR operations would benefit strongly from such a continuum. Distributed applications and advanced resource orchestration solutions over the continuum, in combination with proper software stacks reaching out to the edge of the network, may enhance the response time and effective intervention for SAR operations. The challenges for SAR operations, and the technologies and solutions for the cloud-to-edge-to-IoT continuum, will be discussed in this paper. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)
Show Figures

Figure 1

20 pages, 2842 KiB  
Article
Envisioning Architecture of Metaverse Intensive Learning Experience (MiLEx): Career Readiness in the 21st Century and Collective Intelligence Development Scenario
by Eman AbuKhousa, Mohamed Sami El-Tahawy and Yacine Atif
Future Internet 2023, 15(2), 53; https://doi.org/10.3390/fi15020053 - 30 Jan 2023
Cited by 26 | Viewed by 6399
Abstract
The metaverse presents a new opportunity to construct personalized learning paths and to promote practices that scale the development of future skills and collective intelligence. The attitudes, knowledge and skills that are necessary to face the challenges of the 21st century should be developed through iterative cycles of continuous learning, where learners are enabled to experience, reflect, and produce new ideas while participating in a collective creativity process. In this paper, we propose an architecture to develop a metaverse-intensive learning experience (MiLEx) platform with an illustrative scenario that reinforces the development of 21st century career practices and collective intelligence. The learning ecosystem of MiLEx integrates four key elements: (1) key players that define the main actors and their roles in the learning process; (2) a learning context that defines the learning space and the networks of expected interactions among human and non-human objects; (3) experiential learning instances that deliver education via a real-life–virtual merge; and (4) technology support for building practice communities online, developing experiential cycles and transforming knowledge between human and non-human objects within the community. The proposed MiLEx architecture incorporates sets of technological and data components to (1) discover/profile learners and design learner-centric, theoretically grounded and immersive learning experiences; (2) create elements and experiential learning scenarios; (3) analyze learners’ interactive and behavioral patterns; (4) support the emergence of collective intelligence; (5) assess learning outcomes and monitor the learner’s maturity process; and (6) evaluate experienced learning and recommend future experiences. We also present the MiLEx continuum as a cyclic flow of information to promote immersive learning. Finally, we discuss some open issues to increase the learning value and propose some future work suggestions to further shape the transformative potential of metaverse-based learning environments. Full article
(This article belongs to the Special Issue Software Engineering and Data Science II)
Show Figures

Figure 1

29 pages, 3716 KiB  
Article
Analysis of Lightweight Cryptographic Algorithms on IoT Hardware Platform
by Mohammed El-hajj, Hussien Mousawi and Ahmad Fadlallah
Future Internet 2023, 15(2), 54; https://doi.org/10.3390/fi15020054 - 30 Jan 2023
Cited by 20 | Viewed by 7926
Abstract
Highly constrained devices that are interconnected and interact to complete a task are being used in a diverse range of new fields. The Internet of Things (IoT), cyber-physical systems, distributed control systems, vehicular systems, wireless sensor networks, tele-medicine, and the smart grid are a few examples of these fields. In any of these contexts, security and privacy might be essential aspects. Research on secure communication in Internet of Things (IoT) networks is a very active topic. One method for ensuring secure data transmission is cryptography. Because IoT devices have limited resources, such as power, memory, and batteries, IoT networks have popularized the term “lightweight cryptography”. Algorithms for lightweight cryptography are designed to efficiently protect data while using minimal resources. In this research, we evaluated and benchmarked lightweight symmetric ciphers for resource-constrained devices. The evaluation is performed using two widely used platforms: Arduino and Raspberry Pi. In the first part, we implemented 39 block ciphers on an ATMEGA328p microcontroller and analyzed them in terms of speed, cost, and energy efficiency during encryption and decryption for different block and key sizes. In the second part, the 2nd-round NIST candidates (80 stream and block cipher algorithms) were added to the first-part ciphers in a comprehensive analysis for equivalent block and key sizes in terms of latency and energy efficiency. Full article
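The per-cipher timing loop behind such benchmarks can be sketched as follows for a Raspberry Pi class device; pycryptodome's AES and ChaCha20 stand in here for the lightweight ciphers actually measured in the paper (which are not part of that library), so only the methodology, not the numbers, is representative.

```python
# Illustrative encryption-throughput benchmark; AES/ChaCha20 are stand-ins for lightweight ciphers.
import os
import time
from Crypto.Cipher import AES, ChaCha20

PLAINTEXT = os.urandom(16 * 1024)  # 16 KiB of random data, a multiple of the AES block size

def bench(name, encrypt, rounds=100):
    start = time.perf_counter()
    for _ in range(rounds):
        encrypt(PLAINTEXT)
    elapsed = time.perf_counter() - start
    mbps = (len(PLAINTEXT) * rounds) / elapsed / 1e6
    print(f"{name:10s} {mbps:8.2f} MB/s")

aes_key = os.urandom(16)
chacha_key = os.urandom(32)
nonce = os.urandom(8)

# A fresh cipher object per call keeps the comparison fair for stateful stream ciphers.
bench("AES-128", lambda m: AES.new(aes_key, AES.MODE_ECB).encrypt(m))
bench("ChaCha20", lambda m: ChaCha20.new(key=chacha_key, nonce=nonce).encrypt(m))
```

Energy per byte, as measured in the paper, would additionally require an external power meter or an on-board current sensor alongside the same timing loop.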
Show Figures

Figure 1

22 pages, 3265 KiB  
Article
Engineering Resource-Efficient Data Management for Smart Cities with Apache Kafka
by Theofanis P. Raptis, Claudio Cicconetti, Manolis Falelakis, Grigorios Kalogiannis, Tassos Kanellos and Tomás Pariente Lobo
Future Internet 2023, 15(2), 43; https://doi.org/10.3390/fi15020043 - 22 Jan 2023
Cited by 10 | Viewed by 4874
Abstract
In terms of the calibre and variety of services offered to end users, smart city management is undergoing a dramatic transformation. The parties involved in delivering pervasive applications can now solve key issues in the big data value chain, including data gathering, analysis, processing, storage, curation, and real-world data visualisation. This trend is being driven by Industry 4.0, which calls for the servitisation of data and products across all industries, including the field of smart cities, where people, sensors, and technology work closely together. In order to implement reactive services such as situational awareness, video surveillance, and geo-localisation, while constantly preserving the safety and privacy of affected persons, the data generated by omnipresent devices needs to be processed quickly. This paper proposes a modular architecture to (i) leverage cutting-edge technologies for data acquisition, management, and distribution (such as Apache Kafka and Apache NiFi); (ii) develop a multi-layer engineering solution for revealing valuable and hidden societal knowledge in the context of smart cities processing multi-modal, real-time, and heterogeneous data flows; and (iii) address the key challenges in tasks involving complex data flows and offer general guidelines to solve them. In order to create an effective system for the monitoring and servitisation of smart city assets with a scalable platform that proves its usefulness in numerous smart city use cases with various needs, we deduced some guidelines from an experimental setting performed in collaboration with leading industrial technical departments. Ultimately, when deployed in production, the proposed data platform will contribute toward the goal of revealing valuable and hidden societal knowledge in the context of smart cities. Full article
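A minimal sketch of the data-acquisition edge of such a pipeline with the kafka-python client; the broker address, topic name, and payload fields are assumptions for illustration, not the project's actual configuration.

```python
# Illustrative smart-city sensor publisher for an Apache Kafka topic (kafka-python client).
import json
import random
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",              # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for _ in range(10):
    reading = {
        "sensor_id": "cam-017",
        "timestamp": time.time(),
        "people_count": random.randint(0, 40),       # stand-in for a video-analytics result
    }
    # Keyed by sensor so all readings from one device land in the same partition, preserving order.
    producer.send("city.situational-awareness", key=b"cam-017", value=reading)
    producer.flush()
    time.sleep(1.0)
```

On the consuming side, stream processors or Apache NiFi flows would subscribe to the same topic and feed the analytics and visualisation layers.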
(This article belongs to the Special Issue Network Cost Reduction in Cloud and Fog Computing Environments)
Show Figures

Figure 1

35 pages, 13088 KiB  
Review
From 5G to 6G—Challenges, Technologies, and Applications
by Ahmed I. Salameh and Mohamed El Tarhuni
Future Internet 2022, 14(4), 117; https://doi.org/10.3390/fi14040117 - 12 Apr 2022
Cited by 71 | Viewed by 12893
Abstract
As the deployment of 5G mobile radio networks gains momentum across the globe, the wireless research community is already planning the successor of 5G. In this paper, we highlight the shortcomings of 5G in meeting the needs of more data-intensive, low-latency, and ultra-high-reliability applications. We then discuss the salient characteristics of the 6G network following a hierarchical approach including the social, economic, and technological aspects. We also discuss some of the key technologies expected to support the move towards 6G. Finally, we quantify and summarize the research work related to beyond 5G and 6G networks through an extensive search of publications and research groups and present a possible timeline for 6G activities. Full article
Show Figures

Graphical abstract

28 pages, 1891 KiB  
Review
ML-Based 5G Network Slicing Security: A Comprehensive Survey
by Ramraj Dangi, Akshay Jadhav, Gaurav Choudhary, Nicola Dragoni, Manas Kumar Mishra and Praveen Lalwani
Future Internet 2022, 14(4), 116; https://doi.org/10.3390/fi14040116 - 8 Apr 2022
Cited by 52 | Viewed by 10632
Abstract
Fifth-generation networks efficiently support and fulfill the demands of mobile broadband and communication services. There has been a continuing advancement from 4G to 5G networks, with 5G mainly providing the three services of enhanced mobile broadband (eMBB), massive machine-type communication (mMTC), and ultra-reliable low-latency communication (URLLC). Since it is difficult to provide all of these services on a single physical network, the 5G network is partitioned into multiple virtual networks called “slices”. These slices customize these unique services and enable the network to be reliable and fulfill the needs of its users. This phenomenon is called network slicing. Security is a critical concern in network slicing as adversaries have evolved to become more competent and often employ new attack strategies. This study focused on the security issues that arise during the network slice lifecycle. Machine learning and deep learning algorithm solutions were applied in the planning and design, construction and deployment, monitoring, fault detection, and security phases of the slices. This paper outlines the 5G network slicing concept, its layers and architectural framework, and the prevention of attacks, threats, and issues that represent how network slicing influences the 5G network. This paper also provides a comparison of existing surveys and maps out taxonomies to illustrate various machine learning solutions for different application parameters and network functions, along with significant contributions to the field. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
Show Figures

Figure 1

18 pages, 3711 KiB  
Article
Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection
by João Vitorino, Nuno Oliveira and Isabel Praça
Future Internet 2022, 14(4), 108; https://doi.org/10.3390/fi14040108 - 29 Mar 2022
Cited by 20 | Viewed by 8882
Abstract
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks. Full article
(This article belongs to the Topic Cyber Security and Critical Infrastructures)
Show Figures

Graphical abstract

30 pages, 2168 KiB  
Review
Self-Organizing Networks for 5G and Beyond: A View from the Top
by Andreas G. Papidas and George C. Polyzos
Future Internet 2022, 14(3), 95; https://doi.org/10.3390/fi14030095 - 17 Mar 2022
Cited by 22 | Viewed by 10446
Abstract
We describe self-organizing network (SON) concepts and architectures and their potential to play a central role in 5G deployment and next-generation networks. Our focus is on the basic SON use case applied to radio access networks (RAN), which is self-optimization. We analyze SON applications’ rationale and operation, the design and dimensioning of SON systems, possible deficiencies and conflicts that occur through the parallel operation of functions, and describe the strong reliance on machine learning (ML) and artificial intelligence (AI). Moreover, we present and comment on very recent proposals for SON deployment in 5G networks. Typical examples include the binding of SON systems with techniques such as Network Function Virtualization (NFV), Cloud RAN (C-RAN), Ultra-Reliable Low Latency Communications (URLLC), massive Machine-Type Communication (mMTC) for IoT, and automated backhauling, which lead the way towards the adoption of SON techniques in Beyond 5G (B5G) networks. Full article
(This article belongs to the Special Issue 5G Enabling Technologies and Wireless Networking)
Show Figures

Figure 1

27 pages, 1227 KiB  
Article
A Survey on Intrusion Detection Systems for Fog and Cloud Computing
by Victor Chang, Lewis Golightly, Paolo Modesti, Qianwen Ariel Xu, Le Minh Thao Doan, Karl Hall, Sreeja Boddu and Anna Kobusińska
Future Internet 2022, 14(3), 89; https://doi.org/10.3390/fi14030089 - 13 Mar 2022
Cited by 53 | Viewed by 10103
Abstract
The rapid advancement of internet technologies has dramatically increased the number of connected devices. This has created a huge attack surface that requires the deployment of effective and practical countermeasures to protect network infrastructures from the harm that cyber-attacks can cause. Hence, there is an absolute need to differentiate the boundaries between personal information and cloud and fog computing globally, and to adopt specific information security policies and regulations. The goal of the security policy and framework for cloud and fog computing is to protect the end-users and their information, reduce task-based operations, aid in compliance, and create standards for expected user actions, all of which are based on the use of established rules for cloud computing. Moreover, intrusion detection systems are widely adopted solutions to monitor and analyze network traffic and detect anomalies that can help identify ongoing adversarial activities, trigger alerts, and automatically block traffic from hostile sources. This survey paper analyzes factors, including the application of technologies and techniques, which can enable the successful deployment of security policy on fog and cloud computing. The paper focuses on Software-as-a-Service (SaaS) and intrusion detection, which provides an effective and resilient system structure for users and organizations. Our survey aims to provide a framework for a cloud and fog computing security policy, while addressing the required security tools, policies, and services, particularly for cloud and fog environments, for organizational adoption. While developing the essential linkage between requirements, legal aspects, and analysis techniques and systems to reduce intrusions, we recommend strategies for cloud and fog computing security policies. The paper develops structured guidelines for ways in which organizations can adopt and audit the security of their systems, since security is an essential component of their systems, and presents an agile, current state-of-the-art review of intrusion detection systems and their principles. Functionalities and techniques for developing these defense mechanisms are considered, along with concrete products utilized in operational systems. Finally, we discuss evaluation criteria and open-ended challenges in this area. Full article
Show Figures

Figure 1

25 pages, 2331 KiB  
Review
Digital Twin—Cyber Replica of Physical Things: Architecture, Applications and Future Research Directions
by Cheng Qian, Xing Liu, Colin Ripley, Mian Qian, Fan Liang and Wei Yu
Future Internet 2022, 14(2), 64; https://doi.org/10.3390/fi14020064 - 21 Feb 2022
Cited by 78 | Viewed by 13608
Abstract
The Internet of Things (IoT) connects massive smart devices to collect big data and carry out the monitoring and control of numerous things in cyber-physical systems (CPS). By leveraging machine learning (ML) and deep learning (DL) techniques to analyze the collected data, physical systems can be monitored and controlled effectively. Along with the development of IoT and data analysis technologies, a number of CPS (smart grid, smart transportation, smart manufacturing, smart cities, etc.) adopt IoT and data analysis technologies to improve their performance and operations. Nonetheless, directly manipulating or updating the real system has inherent risks. Thus, creating a digital clone of a real physical system, denoted as a Digital Twin (DT), is a viable strategy. Generally speaking, a DT is a data-driven software and hardware emulation platform, which is a cyber replica of physical systems. Meanwhile, a DT describes a specific physical system and tends to achieve the functions and use cases of physical systems. Since DT is a complex digital system, finding a way to effectively represent a variety of things in a timely and efficient manner poses numerous challenges to the networking, computing, and data analytics for IoT. Furthermore, the design of a DT for IoT systems must consider numerous exceptional requirements (e.g., latency, reliability, safety, scalability, security, and privacy). To address such challenges, the thoughtful design of DTs offers opportunities for novel and interdisciplinary research efforts. To address the aforementioned problems and issues, in this paper, we first review the architectures of DTs, data representation, and communication protocols. We then review existing efforts on applying DT into IoT data-driven smart systems, including the smart grid, smart transportation, smart manufacturing, and smart cities. Further, we summarize the existing challenges from CPS, data science, optimization, and security and privacy perspectives. Finally, we outline possible future research directions from the perspectives of performance, new DT-driven services, model and learning, and security and privacy. Full article
(This article belongs to the Special Issue Towards Convergence of Internet of Things and Cyber-Physical Systems)
Show Figures

Graphical abstract

22 pages, 2136 KiB  
Article
Open-Source MQTT-Based End-to-End IoT System for Smart City Scenarios
by Cristian D’Ortona, Daniele Tarchi and Carla Raffaelli
Future Internet 2022, 14(2), 57; https://doi.org/10.3390/fi14020057 - 15 Feb 2022
Cited by 23 | Viewed by 10276
Abstract
Many innovative services are emerging based on the Internet of Things (IoT) technology, aiming at fostering better sustainability of our cities. New solutions integrating Information and Communications Technologies (ICTs) with sustainable transport media are encouraged by several public administrations in the so-called Smart City scenario, where heterogeneous users in city roads call for safer mobility. Among several possible applications, recently, there has been a lot of attention on the so-called Vulnerable Road Users (VRUs), such as pedestrians or bikers. They can be equipped with wearable sensors that are able to communicate their data through a chain of devices towards the cloud for agile and effective control of their mobility. This work describes a complete end-to-end IoT system implemented through the integration of different complementary technologies, whose main purpose is to monitor the information related to road users generated by wearable sensors. The system has been implemented using an ESP32 micro-controller connected to the sensors and communicating through a Bluetooth Low Energy (BLE) interface with an Android device, which is assumed to always be carried by any road user. Based on this, we use it as a gateway node, acting as a real-time asynchronous publisher of a Message Queue Telemetry Transport (MQTT) protocol chain. The MQTT broker is configured on a Raspberry PI device and collects sensor data to be sent to a web-based control panel that performs data monitoring and processing. All the architecture modules have been implemented through open-source technologies. The analysis of the BLE packet exchange has been carried out by resorting to the Wireshark packet analyzer. In addition, a feasibility analysis has been carried out by demonstrating the capability of the proposed solution to display the values gathered by the sensors on a remote dashboard. The developed system is publicly available to allow the possible integration of other modules for additional Smart City services or extension to further ICT applications. Full article
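The gateway-to-broker leg of this chain can be reduced to a few lines with the paho-mqtt client; the broker hostname, topic, and JSON payload fields below are assumptions standing in for the BLE readings actually relayed by the Android gateway.

```python
# Illustrative MQTT publisher for wearable-sensor readings (paho-mqtt client).
import json
import time
import paho.mqtt.client as mqtt

BROKER = "raspberrypi.local"   # assumed address of the Raspberry Pi running the MQTT broker
TOPIC = "vru/bike-042/telemetry"

client = mqtt.Client()         # for paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION2 here
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()            # background network loop for asynchronous publishing

sample = {"heart_rate": 96, "speed_kmh": 18.4, "timestamp": time.time()}
# QoS 1 asks the broker to acknowledge delivery, a common choice for telemetry.
client.publish(TOPIC, json.dumps(sample), qos=1)

time.sleep(1)
client.loop_stop()
client.disconnect()
```

A web dashboard would subscribe to the same topic on the broker to display the incoming values.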
(This article belongs to the Special Issue Mobility and Cyber-Physical Intelligence)
Show Figures

Graphical abstract

39 pages, 1220 KiB  
Review
Network Function Virtualization and Service Function Chaining Frameworks: A Comprehensive Review of Requirements, Objectives, Implementations, and Open Research Challenges
by Haruna Umar Adoga and Dimitrios P. Pezaros
Future Internet 2022, 14(2), 59; https://doi.org/10.3390/fi14020059 - 15 Feb 2022
Cited by 36 | Viewed by 10619
Abstract
Network slicing has become a fundamental property for next-generation networks, especially because an inherent part of 5G standardisation is the ability for service providers to migrate some or all of their network services to a virtual network infrastructure, thereby reducing both capital and operational costs. With network function virtualisation (NFV), network functions (NFs) such as firewalls, traffic load balancers, content filters, and intrusion detection systems (IDS) are either instantiated on virtual machines (VMs) or lightweight containers, often chained together to create a service function chain (SFC). In this work, we review the state-of-the-art NFV and SFC implementation frameworks and present a taxonomy of the current proposals. Our taxonomy comprises three major categories based on the primary objectives of each of the surveyed frameworks: (1) resource allocation and service orchestration, (2) performance tuning, and (3) resilience and fault recovery. We also identify some key open research challenges that require further exploration by the research community to achieve scalable, resilient, and high-performance NFV/SFC deployments in next-generation networks. Full article
(This article belongs to the Section Network Virtualization and Edge/Fog Computing)
Show Figures

Graphical abstract

24 pages, 2977 KiB  
Review
Research on Progress of Blockchain Consensus Algorithm: A Review on Recent Progress of Blockchain Consensus Algorithms
by Huanliang Xiong, Muxi Chen, Canghai Wu, Yingding Zhao and Wenlong Yi
Future Internet 2022, 14(2), 47; https://doi.org/10.3390/fi14020047 - 30 Jan 2022
Cited by 70 | Viewed by 11631
Abstract
Blockchain technology can solve the problem of trust in the open network in a decentralized way. It has broad application prospects and has attracted extensive attention from academia and industry. The blockchain consensus algorithm ensures that the nodes in the chain reach consensus in the complex network environment, and the node status ultimately remains the same. The consensus algorithm is one of the core technologies of blockchain and plays a pivotal role in the research of blockchain technology. This article gives the basic concepts of the blockchain, summarizes the key technologies of the blockchain, especially focuses on the research of the blockchain consensus algorithm, expounds the general principles of the consensus process, and classifies the mainstream consensus algorithms. Then, focusing on the improvement of consensus algorithm performance, it reviews the research progress of consensus algorithms in detail, analyzes and compares the characteristics, suitable scenarios, and possible shortcomings of different consensus algorithms, and based on this, studies the future development trend of consensus algorithms for reference. Full article
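To make one of the mainstream families surveyed in such reviews concrete, a toy Proof-of-Work loop (searching for a nonce whose hash meets a difficulty target) fits in a few lines; this is a didactic sketch of the principle only, not any production chain's consensus code.

```python
# Toy Proof-of-Work: find a nonce whose SHA-256 hash of (block data + nonce) meets the difficulty.
import hashlib
import time

def mine(block_data: str, difficulty: int = 4):
    """Return (nonce, hash) where the hex digest starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

start = time.time()
nonce, digest = mine("prev_hash|tx_merkle_root|timestamp")
print(f"nonce={nonce} hash={digest} ({time.time() - start:.2f}s)")
```

Raising the difficulty by one hex digit multiplies the expected work by sixteen, which is the knob that trades throughput for security in PoW-style chains.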
(This article belongs to the Special Issue Distributed Systems for Emerging Computing: Platform and Application)
Show Figures

Figure 1

19 pages, 3481 KiB  
Article
Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT
by Youpeng Tu, Haiming Chen, Linjie Yan and Xinyan Zhou
Future Internet 2022, 14(2), 30; https://doi.org/10.3390/fi14020030 - 18 Jan 2022
Cited by 42 | Viewed by 8267
Abstract
In IoT (Internet of Things) edge computing, task offloading can lead to additional transmission delays and transmission energy consumption. To reduce the cost of resources required for task offloading and improve the utilization of server resources, in this paper, we model the task offloading problem as a joint decision making problem for cost minimization, which integrates the processing latency, processing energy consumption, and the task throw rate of latency-sensitive tasks. The Online Predictive Offloading (OPO) algorithm based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks is proposed to solve the above task offloading decision problem. In the training phase of the model, this algorithm predicts the load of the edge server in real-time with the LSTM algorithm, which effectively improves the convergence accuracy and convergence speed of the DRL algorithm in the offloading process. In the testing phase, the LSTM network is used to predict the characteristics of the next task, and then the computational resources are allocated for the task in advance by the DRL decision model, thus further reducing the response delay of the task and enhancing the offloading performance of the system. The experimental evaluation shows that this algorithm can effectively reduce the average latency by 6.25%, the offloading cost by 25.6%, and the task throw rate by 31.7%. Full article
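The prediction half of an OPO-style scheme, forecasting near-future edge-server load from a short history window with an LSTM, can be sketched as follows in PyTorch; the window length, layer sizes, and synthetic load trace are illustrative assumptions, and the DRL offloading policy that would consume these forecasts is omitted.

```python
# Illustrative LSTM predictor for edge-server load, the forecasting component of an OPO-style scheme.
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1) past load samples
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next load value

# Synthetic periodic load trace standing in for real server measurements.
t = torch.arange(0, 200, dtype=torch.float32)
load = 0.5 + 0.4 * torch.sin(0.1 * t) + 0.05 * torch.randn_like(t)

window = 16
X = torch.stack([load[i:i + window] for i in range(len(load) - window)]).unsqueeze(-1)
y = load[window:].unsqueeze(-1)

model = LoadPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final training MSE:", float(loss))
```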
(This article belongs to the Special Issue Machine Learning for Wireless Communications)
Show Figures

Figure 1

19 pages, 2039 KiB  
Review
IoT for Smart Cities: Machine Learning Approaches in Smart Healthcare—A Review
by Taher M. Ghazal, Mohammad Kamrul Hasan, Muhammad Turki Alshurideh, Haitham M. Alzoubi, Munir Ahmad, Syed Shehryar Akbar, Barween Al Kurdi and Iman A. Akour
Future Internet 2021, 13(8), 218; https://doi.org/10.3390/fi13080218 - 23 Aug 2021
Cited by 360 | Viewed by 24044
Abstract
Smart city is a collective term for technologies and concepts that are directed toward making cities efficient, technologically more advanced, greener and more socially inclusive. These concepts include technical, economic and social innovations. This term has been used by various actors in politics, business, administration and urban planning since the 2000s to establish tech-based changes and innovations in urban areas. The idea of the smart city is used in conjunction with the utilization of digital technologies and at the same time represents a reaction to the economic, social and political challenges that post-industrial societies are confronted with at the start of the new millennium. The key focus is on dealing with challenges faced by urban society, such as environmental pollution, demographic change, population growth, healthcare, the financial crisis or scarcity of resources. In a broader sense, the term also includes non-technical innovations that make urban life more sustainable. So far, the idea of using IoT-based sensor networks for healthcare applications is a promising one with the potential to minimize inefficiencies in the existing infrastructure. A machine learning approach is key to the successful implementation of IoT-powered wireless sensor networks for this purpose, since there is a large amount of data to be handled intelligently. Throughout this paper, it will be discussed in detail how AI-powered IoT and WSNs are applied in the healthcare sector. This research will be a baseline study for understanding the role of the IoT in smart cities, in particular in the healthcare sector, for future research works. Full article
(This article belongs to the Special Issue AI and IoT technologies in Smart Cities)
Show Figures

Figure 1

26 pages, 3426 KiB  
Review
Survey of Localization for Internet of Things Nodes: Approaches, Challenges and Open Issues
by Sheetal Ghorpade, Marco Zennaro and Bharat Chaudhari
Future Internet 2021, 13(8), 210; https://doi.org/10.3390/fi13080210 - 16 Aug 2021
Cited by 47 | Viewed by 7958
Abstract
With exponential growth in the deployment of Internet of Things (IoT) devices, many new innovative and real-life applications are being developed. IoT supports such applications with the help of resource-constrained fixed as well as mobile nodes. These nodes can be placed in anything from vehicles to the human body to smart homes to smart factories. Mobility of the nodes enhances the network coverage and connectivity. One of the crucial requirements in IoT systems is the accurate and fast localization of its nodes with high energy efficiency and low cost. The localization process has several challenges. These challenges keep changing depending on the location and movement of nodes, such as outdoor or indoor settings, with or without obstacles, and so on. The performance of localization techniques greatly depends on the scenarios and conditions in which the nodes are moving. Precise localization of nodes is very much required in many unique applications. Although several localization techniques and algorithms are available, there are still many challenges for the precise and efficient localization of the nodes. This paper classifies and discusses various state-of-the-art techniques proposed for IoT node localization in detail. It includes different approaches, such as centralized, distributed, iterative, range-based, range-free, device-based, and device-free, and their subtypes. Furthermore, the different performance metrics that can be used for localization, comparison of the different techniques, some prominent applications in smart cities and future directions are also covered. Full article
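As one concrete range-based example from the families surveyed above, RSSI readings can be converted to distances with a log-distance path-loss model and then to a position fix by linear least squares; the path-loss exponent, reference power, and anchor layout below are illustrative assumptions.

```python
# Illustrative range-based localization: RSSI -> distance (log-distance model) -> least-squares fix.
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    """Log-distance path-loss model: RSSI = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Known anchor positions (metres) and RSSI values heard from the unknown node.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rssi = np.array([-52.0, -60.0, -58.0, -65.0])
dists = rssi_to_distance(rssi)

# Linearize the circle equations against the last anchor and solve A x = b in the least-squares sense.
ref = anchors[-1]
A = 2 * (anchors[:-1] - ref)
b = (dists[-1] ** 2 - dists[:-1] ** 2
     + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2))
position, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position (m):", position)
```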
Show Figures

Figure 1

18 pages, 516 KiB  
Article
Designing a Network Intrusion Detection System Based on Machine Learning for Software Defined Networks
by Abdulsalam O. Alzahrani and Mohammed J. F. Alenazi
Future Internet 2021, 13(5), 111; https://doi.org/10.3390/fi13050111 - 28 Apr 2021
Cited by 159 | Viewed by 12082
Abstract
Software-defined Networking (SDN) has recently developed and been put forward as a promising and encouraging solution for future internet architecture. The managed, centralized and controlled network has become more flexible and visible using SDN. On the other hand, these advantages bring us a more vulnerable environment and dangerous threats, causing network breakdowns, system paralysis, online banking fraud and robbery. These issues have a significantly destructive impact on organizations, companies or even economies. Accuracy, high performance and real-time operation are essential to counter these threats successfully. Extending intelligent machine learning algorithms in a network intrusion detection system (NIDS) through a software-defined network (SDN) has attracted considerable attention in the last decade. Big data availability, the diversity of data analysis techniques, and the massive improvement in machine learning algorithms enable the building of an effective, reliable and dependable system for detecting different types of attacks that frequently target networks. This study demonstrates the use of machine learning algorithms for traffic monitoring to detect malicious behavior in the network as part of NIDS in the SDN controller. Different classical and advanced tree-based machine learning techniques, namely Decision Tree, Random Forest and XGBoost, are chosen to demonstrate attack detection. The NSL-KDD dataset is used for training and testing the proposed methods; it is considered a benchmarking dataset for several state-of-the-art approaches in NIDS. Several advanced preprocessing techniques are performed on the dataset in order to extract the best form of the data, which produces outstanding results compared to other systems. Using just five out of 41 features of NSL-KDD, a multi-class classification task is conducted by detecting whether there is an attack and classifying the type of attack (DDoS, PROBE, R2L, and U2R), accomplishing an accuracy of 95.95%. Full article
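A compact sketch of the tree-based multi-class classification step described above; a synthetic five-feature dataset stands in for the selected NSL-KDD features, since reproducing the paper's preprocessing is out of scope, so the printed accuracy is not comparable to the 95.95% reported.

```python
# Illustrative multi-class attack classifier in the spirit of the SDN NIDS described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for five selected NSL-KDD features and five classes
# (normal, DDoS, PROBE, R2L, U2R); real work would load and preprocess NSL-KDD instead.
X, y = make_classification(n_samples=5000, n_features=5, n_informative=5, n_redundant=0,
                           n_classes=5, n_clusters_per_class=1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))
```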
(This article belongs to the Special Issue Mobile and Wireless Network Security and Privacy)
Show Figures

Figure 1
