Future Internet, Volume 15, Issue 10 (October 2023) – 29 articles

Cover Story: Machine learning has emerged as a significant influencer, revolutionizing various application domains, such as cybersecurity. Building machine learning solutions consists of multiple processes, including data pre-processing, model selection, and parameter optimization. Although several surveys have extensively discussed machine learning, there is a lack of surveys that describe its architecture and phases. This survey describes machine learning models, which are classified into supervised, semi-supervised, unsupervised, and reinforcement learning. Additionally, the survey explores recent advancements in data pre-processing techniques and the fine-tuning of parameters. Furthermore, the survey discusses research gaps, challenges, and potential research directions to address them.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
46 pages, 17840 KiB  
Communication
A Comprehensive Analysis and Investigation of the Public Discourse on Twitter about Exoskeletons from 2017 to 2023
by Nirmalya Thakur, Kesha A. Patel, Audrey Poon, Rishika Shah, Nazif Azizi and Changhee Han
Future Internet 2023, 15(10), 346; https://doi.org/10.3390/fi15100346 - 22 Oct 2023
Cited by 1 | Viewed by 2254
Abstract
Exoskeletons have emerged as a vital technology in the last decade and a half, with diverse use cases in different domains. Even though several works related to the analysis of Tweets about emerging technologies exist, none of those works have focused on the analysis of Tweets about exoskeletons. The work of this paper aims to address this research gap by presenting multiple novel findings from a comprehensive analysis of about 150,000 Tweets about exoskeletons posted between May 2017 and May 2023. First, findings from temporal analysis of these Tweets reveal the specific months per year when a significantly higher volume of Tweets was posted and the time windows when the highest number of Tweets, the lowest number of Tweets, Tweets with the highest number of hashtags, and Tweets with the highest number of user mentions were posted. Second, the paper shows that there are statistically significant correlations between the number of Tweets posted per hour and the different characteristics of these Tweets. Third, the paper presents a multiple linear regression model to predict the number of Tweets posted per hour in terms of these characteristics of Tweets. The R2 score of this model was observed to be 0.9540. Fourth, the paper reports that the 10 most popular hashtags were #exoskeleton, #robotics, #iot, #technology, #tech, #innovation, #ai, #sci, #construction and #news. Fifth, sentiment analysis of these Tweets was performed, and the results show that the percentages of positive, neutral, and negative Tweets were 46.8%, 33.1%, and 20.1%, respectively. To add to this, in the Tweets that did not express a neutral sentiment, the sentiment of surprise was the most common sentiment. It was followed by sentiments of joy, disgust, sadness, fear, and anger, respectively. Furthermore, hashtag-specific sentiment analysis revealed several novel insights. For instance, for almost all the months in 2022, the usage of #ai in Tweets about exoskeletons was mainly associated with a positive sentiment. Sixth, lexicon-based approaches were used to detect possibly sarcastic Tweets and Tweets that contained news, and the results are presented. Finally, a comparison of positive Tweets, negative Tweets, neutral Tweets, possibly sarcastic Tweets, and Tweets that contained news is presented in terms of the different characteristic properties of these Tweets. The findings reveal multiple novel insights related to the similarities, variations, and trends of character count, hashtag usage, and user mentions in such Tweets during this time range.
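As a rough illustration of the kind of multiple linear regression model the abstract describes (predicting Tweets posted per hour from Tweet characteristics), the following fits ordinary least squares on hypothetical stand-in data; the feature values and resulting fit are illustrative only, not the paper's:

```python
import numpy as np

# Hypothetical hourly features: (hashtags, user mentions, character count);
# the target is the number of Tweets posted in that hour. All values are
# illustrative stand-ins, not the paper's data.
X = np.array([[5.0, 2.0, 120.0],
              [3.0, 1.0, 90.0],
              [8.0, 4.0, 200.0],
              [1.0, 0.0, 60.0],
              [6.0, 3.0, 150.0],
              [4.0, 2.0, 110.0]])
y = np.array([40.0, 25.0, 70.0, 10.0, 50.0, 35.0])

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(features):
    return float(coef[0] + coef[1:] @ np.asarray(features, dtype=float))

# Coefficient of determination on the training data
# (the paper reports an R2 of 0.9540 on its own Tweet dataset).
y_hat = A @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

The same normal-equations fit scales directly to the paper's hourly Tweet features once real counts replace the stand-ins.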

28 pages, 6446 KiB  
Article
A Graph DB-Based Solution for Semantic Technologies in the Future Internet
by Stefano Ferilli, Eleonora Bernasconi, Davide Di Pierro and Domenico Redavid
Future Internet 2023, 15(10), 345; https://doi.org/10.3390/fi15100345 - 20 Oct 2023
Cited by 1 | Viewed by 1660
Abstract
With the progressive improvements in the power, effectiveness, and reliability of AI solutions, more and more critical human problems are being handled by automated AI-based tools and systems. For more complex or particularly critical applications, the level of knowledge, not just information, must be handled by systems where explicit relationships among objects are represented and processed. For this purpose, the knowledge representation branch of AI proposes Knowledge Graphs, widely used in the Semantic Web, where different online applications may interact by understanding the meaning of the data they process and exchange. This paper describes a framework and online platform for Internet-based knowledge graph definition, population, and exploitation based on the LPG graph model. Its main advantages are its efficiency and representational power and the wide range of functions that it provides to its users beyond traditional Semantic Web reasoning: network analysis, data mining, multistrategy reasoning, and knowledge browsing. Still, it can also be mapped onto the Semantic Web.
(This article belongs to the Special Issue Graph Machine Learning and Complex Networks)

22 pages, 1668 KiB  
Article
Blockchain Technology for Secure Communication and Formation Control in Smart Drone Swarms
by Athanasios Koulianos and Antonios Litke
Future Internet 2023, 15(10), 344; https://doi.org/10.3390/fi15100344 - 19 Oct 2023
Viewed by 1815
Abstract
Today, intelligent drone technology is rapidly expanding, particularly in the defense industry. A swarm of drones can communicate, share data, and make the best decisions on their own. Drone swarms can swiftly and effectively carry out missions like surveillance, reconnaissance, and rescue operations, [...] Read more.
Today, intelligent drone technology is rapidly expanding, particularly in the defense industry. A swarm of drones can communicate, share data, and make the best decisions on their own. Drone swarms can swiftly and effectively carry out missions like surveillance, reconnaissance, and rescue operations, without exposing military troops to hostile conditions. However, there are still significant problems that need to be resolved. One of them is to protect communications on these systems from threat actors. In this paper, we use blockchain technology as a defense mechanism against such issues. Drones can communicate data safely, without the need for a centralized authority (ground station), when using a blockchain to facilitate communication between them in a leader–follower hierarchy structure. Solidity has been used to create a compact, lightweight, and effective smart contract that automates the process of choosing a position in a certain swarm formation structure. Additionally, a mechanism for electing a new leader is proposed. The effectiveness of the presented model is assessed through a simulation that makes use of a DApp we created and Gazebo software. The purpose of this work is to develop a reliable and secure UAV swarm communication system that will enable widespread global adoption by numerous sectors. Full article
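The paper implements formation-slot assignment and leader election in a Solidity smart contract; as a language-neutral sketch of the election idea only, the following assumes a simple rule (lowest ID among drones still responding) that is not taken from the actual contract:

```python
# Hypothetical election rule (not the paper's contract logic): among drones
# that answered the last heartbeat, promote the one with the lowest ID.
def elect_leader(swarm, alive):
    candidates = [drone for drone in swarm if drone in alive]
    if not candidates:
        raise RuntimeError("no drone available to lead the formation")
    return min(candidates)  # deterministic: every node computes the same result

swarm = ["drone-1", "drone-2", "drone-3", "drone-4"]
# The leader "drone-1" stopped responding; the remaining drones elect anew.
new_leader = elect_leader(swarm, alive={"drone-2", "drone-3", "drone-4"})
```

Determinism matters here: because every follower evaluates the same rule over the same on-chain membership set, no extra coordination round is needed to agree on the new leader.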

31 pages, 5864 KiB  
Article
Towards an Optimal Cloud-Based Resource Management Framework for Next-Generation Internet with Multi-Slice Capabilities
by Salman Ali AlQahtani
Future Internet 2023, 15(10), 343; https://doi.org/10.3390/fi15100343 - 19 Oct 2023
Cited by 1 | Viewed by 1431
Abstract
With the advent of 5G networks, the demand for improved mobile broadband, massive machine-type communication, and ultra-reliable, low-latency communication has surged, enabling a wide array of new applications. A key enabling technology in 5G networks is network slicing, which allows the creation of multiple virtual networks to support various use cases on a unified physical network. However, the limited availability of radio resources in the 5G cloud-Radio Access Network (C-RAN) and the ever-increasing data traffic volume necessitate efficient resource allocation algorithms to ensure quality of service (QoS) for each network slice. This paper proposes an Adaptive Slice Allocation (ASA) mechanism for the 5G C-RAN, designed to dynamically allocate resources and adapt to changing network conditions and traffic delay tolerances. The ASA system incorporates slice admission control and dynamic resource allocation to maximize network resource efficiency while meeting the QoS requirements of each slice. Through extensive simulations, we evaluate the ASA system’s performance in terms of resource consumption, average waiting time, and total blocking probability. Comparative analysis with a popular static slice allocation (SSA) approach demonstrates the superiority of the ASA system in achieving a balanced utilization of system resources, maintaining slice isolation, and provisioning QoS. The results highlight the effectiveness of the proposed ASA mechanism in optimizing future internet connectivity within the context of 5G C-RAN, paving the way for enhanced network performance and improved user experiences.
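To make the slice admission-control idea concrete, here is a minimal toy sketch, not the paper's ASA algorithm: requests carry a resource demand and a delay tolerance, delay-sensitive slices are served first, and a request that does not fit waits in a queue. The capacity and all request values are hypothetical stand-ins:

```python
# Toy slice admission control: requests are (name, resource demand,
# delay tolerance in seconds). Capacity and request values are hypothetical.
CAPACITY = 100  # resource units in the shared C-RAN pool

def admit(requests, capacity=CAPACITY):
    admitted, queued, used = [], [], 0
    # Smaller delay tolerance = higher priority, so sort ascending.
    for name, demand, tolerance in sorted(requests, key=lambda r: r[2]):
        if used + demand <= capacity:
            admitted.append(name)
            used += demand
        else:
            queued.append(name)
    return admitted, queued

requests = [("eMBB", 60, 5.0), ("URLLC", 30, 0.1), ("mMTC", 40, 10.0)]
admitted, queued = admit(requests)
```

In this example the URLLC and eMBB slices fit within capacity, while the delay-tolerant mMTC request is queued rather than blocked, which is the behavior the blocking-probability metric in the abstract measures.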

12 pages, 284 KiB  
Article
Challenges of Network Forensic Investigation in Fog and Edge Computing
by Daniel Spiekermann and Jörg Keller
Future Internet 2023, 15(10), 342; https://doi.org/10.3390/fi15100342 - 18 Oct 2023
Cited by 1 | Viewed by 1898
Abstract
While network forensics has matured over the decades and has even made progress in the last 10 years when deployed in virtual networks, network forensics in fog and edge computing has still not progressed to that level despite the now widespread use of these paradigms. By using an approach similar to software testing, i.e., a mixture of systematic analysis and experience, we analyze obstacles specific to forensics in fog and edge computing, such as spatial dispersion and possibly incomplete recordings, and derive how far these obstacles can be overcome by adapting processes and techniques from other branches of network forensics, and how new solutions could look otherwise. In addition, we present a discussion of open problems of network forensics in fog and edge environments and discuss the challenges for an investigator.
(This article belongs to the Special Issue Edge and Fog Computing for the Internet of Things)

22 pages, 1122 KiB  
Article
kClusterHub: An AutoML-Driven Tool for Effortless Partition-Based Clustering over Varied Data Types
by Konstantinos Gratsos, Stefanos Ougiaroglou and Dionisis Margaris
Future Internet 2023, 15(10), 341; https://doi.org/10.3390/fi15100341 - 18 Oct 2023
Viewed by 1318
Abstract
Partition-based clustering is widely applied over diverse domains. Researchers and practitioners from various scientific disciplines engage with partition-based algorithms relying on specialized software or programming libraries. Addressing the need to bridge the knowledge gap associated with these tools, this paper introduces kClusterHub, an AutoML-driven web tool that simplifies the execution of partition-based clustering over numerical, categorical, and mixed data types, while facilitating the identification of the optimal number of clusters using the elbow method. Through automatic feature analysis, kClusterHub selects the most appropriate algorithm from the trio of k-means, k-modes, and k-prototypes. By empowering users to seamlessly upload datasets and select features, kClusterHub selects the algorithm, provides the elbow graph, recommends the optimal number of clusters, executes clustering, and presents the cluster assignment through tabular representations and exploratory plots. Therefore, kClusterHub reduces the need for specialized software and programming skills, making clustering more accessible to non-experts. To further enhance its utility, kClusterHub integrates a REST API to support the programmatic execution of cluster analysis. The paper concludes with an evaluation of kClusterHub’s usability via the System Usability Scale and CPU performance experiments. The results show that kClusterHub is a streamlined, efficient, and user-friendly AutoML-inspired tool for cluster analysis.
(This article belongs to the Section Big Data and Augmented Intelligence)
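The elbow method the abstract mentions can be sketched as follows: run k-means for a range of k, record the within-cluster sum of squares (inertia), and pick the k where the curve bends most sharply. This is a generic illustration on synthetic data, not kClusterHub's implementation; the second-difference heuristic for locating the bend is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three well-separated synthetic clusters at the corners of a triangle.
true_centers = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.7]])
data = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in true_centers])

def kmeans_inertia(points, k, iters=30):
    """Plain Lloyd's k-means with deterministic farthest-point init;
    returns the within-cluster sum of squares (inertia)."""
    centers = [points[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[dists.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    d = np.linalg.norm(points[:, None] - centers[None], axis=2)
    return float((d.min(axis=1) ** 2).sum())

inertias = [kmeans_inertia(data, k) for k in range(1, 7)]
# Elbow heuristic: the k where the inertia curve bends most sharply
# (largest second difference); the +2 maps the diff index back to k.
best_k = int(np.diff(inertias, 2).argmax()) + 2
```

On this synthetic data the inertia collapses once k reaches the true number of groups, so the heuristic recovers k = 3; real tools typically plot the full curve and let the user confirm the bend.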

20 pages, 12842 KiB  
Article
Flying Watchdog-Based Guard Patrol with Check Point Data Verification
by Endrowednes Kuantama, Avishkar Seth, Alice James and Yihao Zhang
Future Internet 2023, 15(10), 340; https://doi.org/10.3390/fi15100340 - 16 Oct 2023
Viewed by 1566
Abstract
The effectiveness of human security-based guard patrol systems often faces challenges related to the consistency of perimeter checks regarding timing and patterns. Some solutions use autonomous drones for monitoring assistance but primarily optimize their camera-based object detection capabilities for favorable lighting conditions. This research introduces an innovative approach to address these limitations—a flying watchdog designed to augment patrol operations with predetermined flight patterns, enabling checkpoint identification and position verification through vision-based methods. The system has a laser-based data transmitter to relay real-time location and timing information to a receiver. The proposed system consists of drone and ground checkpoints with distinctive shapes and colored lights, further enhanced by solar panels serving as laser data receivers. The result demonstrates the drone’s ability to detect four white dot LEDs with square configurations at distances ranging from 18 to 20 m, even under deficient light conditions based on the OpenCV detection algorithm. Notably, the study underscores the significance of achieving an even distribution of light shapes to mitigate light scattering effects on readings while also confirming that ambient light levels up to a maximum of 390 Lux have no adverse impact on the performance of the sensing device.

15 pages, 4099 KiB  
Article
Reinforcement Learning Approach for Adaptive C-V2X Resource Management
by Teguh Indra Bayu, Yung-Fa Huang and Jeang-Kuo Chen
Future Internet 2023, 15(10), 339; https://doi.org/10.3390/fi15100339 - 15 Oct 2023
Viewed by 1528
Abstract
The modulation coding scheme (MCS) index is the essential configuration parameter in cellular vehicle-to-everything (C-V2X) communication. As referenced by the 3rd Generation Partnership Project (3GPP), the MCS index will dictate the transport block size (TBS) index, which will affect the size of transport blocks and the number of physical resource blocks. These numbers are crucial in the C-V2X resource management since it is also bound to the transmission power used in the system. To the authors’ knowledge, this particular area of research has not been previously investigated. Ultimately, this research establishes the fundamental principles for future studies seeking to use the MCS adaptability in many contexts. In this work, we proposed the application of the reinforcement learning (RL) algorithm, as we used the Q-learning approach to adaptively change the MCS index according to the current environmental states. The simulation results showed that our proposed RL approach outperformed the static MCS index and was able to attain stability in a short number of events.
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
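The Q-learning idea behind the abstract can be sketched in a few lines on a toy environment. Everything here is a stand-in: three made-up channel states, three MCS indices, and a reward equal to the achieved rate (zero on a failed decode); the real paper's state space, rewards, and parameters are not reproduced:

```python
import random

random.seed(42)

# Toy environment: channel state in {0: poor, 1: fair, 2: good};
# action = MCS index in {0, 1, 2} (low to high rate). A transmission
# succeeds only if the MCS is not more aggressive than the channel allows.
STATES, ACTIONS = range(3), range(3)
RATE = [1.0, 2.0, 4.0]  # hypothetical throughput per MCS index

def step(state, action):
    reward = RATE[action] if action <= state else 0.0  # failed decode -> 0
    next_state = random.choice(STATES)                 # channel varies randomly
    return reward, next_state

alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration
Q = [[0.0] * 3 for _ in STATES]

state = 0
for _ in range(20000):
    if random.random() < eps:                      # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[state][a])
    reward, nxt = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

# Greedy policy: the MCS index chosen in each channel state after learning.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in STATES]
```

After training, the greedy policy picks the most aggressive MCS that still decodes in each state, which is the adaptive behavior the paper contrasts with a static MCS index.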

15 pages, 667 KiB  
Article
Financial Data Quality Evaluation Method Based on Multiple Linear Regression
by Meng Li, Jiqiang Liu and Yeping Yang
Future Internet 2023, 15(10), 338; https://doi.org/10.3390/fi15100338 - 14 Oct 2023
Viewed by 1313
Abstract
With the rapid growth of customer data in financial institutions, such as trusts, issues of data quality have become increasingly prominent. The main challenge lies in constructing an effective evaluation method that ensures accurate and efficient assessment of customer data quality when dealing with massive customer data. In this paper, we construct a data quality evaluation index system based on the analytic hierarchy process through a comprehensive investigation of existing research on data quality. Then, redundant features are filtered based on the Shapley value, and the multiple linear regression model is employed to adjust the weight of different indices. Finally, a case study of the customer and institution information of a trust institution is conducted. The results demonstrate that the utilization of completeness, accuracy, timeliness, consistency, uniqueness, and compliance to establish a quality evaluation index system proves instrumental in conducting extensive and in-depth research on data quality measurement dimensions. Additionally, the data quality evaluation approach based on multiple linear regression facilitates the batch scoring of data, and the incorporation of the Shapley value facilitates the elimination of invalid features. This enables the intelligent evaluation of large-scale data quality for financial data.
(This article belongs to the Section Big Data and Augmented Intelligence)
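The regression-weighted scoring step the abstract describes can be illustrated with a minimal sketch: fit per-dimension weights by least squares against expert ratings, then use the weights for batch scoring. All sub-scores and expert ratings below are hypothetical stand-ins, and the Shapley-based feature pruning the paper adds is omitted:

```python
import numpy as np

# Hypothetical per-record sub-scores (0..1) for the six quality dimensions
# the abstract lists: completeness, accuracy, timeliness, consistency,
# uniqueness, compliance. Expert ratings are stand-ins as well.
scores = np.array([
    [0.90, 0.80, 0.70, 1.00, 1.00, 0.90],
    [0.50, 0.60, 0.90, 0.80, 1.00, 0.70],
    [1.00, 0.90, 0.60, 0.90, 0.90, 1.00],
    [0.30, 0.40, 0.50, 0.60, 0.80, 0.50],
    [0.80, 0.90, 0.80, 0.70, 1.00, 0.80],
    [0.60, 0.50, 0.70, 0.90, 0.90, 0.60],
    [0.95, 0.85, 0.90, 0.95, 1.00, 0.90],
])
expert = np.array([0.88, 0.70, 0.90, 0.48, 0.82, 0.64, 0.92])

# Fit per-dimension weights by least squares against the expert ratings;
# the fitted weights then allow batch scoring of new records.
weights, *_ = np.linalg.lstsq(scores, expert, rcond=None)
batch_scores = scores @ weights
```

Once the weights are fitted on a labeled sample, `scores @ weights` scores an arbitrarily large batch of records in a single matrix product, which is what makes the approach suitable for massive customer data.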

21 pages, 3136 KiB  
Article
Edge-Computing-Based People-Counting System for Elevators Using MobileNet–Single-Stage Object Detection
by Tsu-Chuan Shen and Edward T.-H. Chu
Future Internet 2023, 15(10), 337; https://doi.org/10.3390/fi15100337 - 14 Oct 2023
Viewed by 1622
Abstract
Existing elevator systems lack the ability to display the number of people waiting on each floor and inside the elevator. This causes an inconvenience as users cannot tell if they should wait or seek alternatives, leading to unnecessary time wastage. In this work, we adopted edge computing by running the MobileNet–Single-Stage Object Detection (SSD) algorithm on edge devices to recognize the number of people inside an elevator and waiting on each floor. To ensure the accuracy of people counting, we fine-tuned the SSD parameters, such as the recognition frequency and confidence thresholds, and utilized the line of interest (LOI) counting strategy for people counting. In our experiment, we deployed four NVIDIA Jetson Nano boards in a four-floor building as edge devices to count people when they entered specific areas. The counting results, such as the number of people waiting on each floor and inside the elevator, were provided to users through a web app. Our experimental results demonstrate that the proposed method achieved an average accuracy of 85% for people counting. Furthermore, when comparing it to sending all images back to a remote server for people counting, the execution time required for edge computing was shorter, without compromising the accuracy significantly.
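The line-of-interest (LOI) counting strategy mentioned in the abstract boils down to watching tracked detections cross a fixed image line. The sketch below is a generic illustration of that idea, not the paper's code; the line position and the sample track are hypothetical:

```python
# Line-of-interest (LOI) counting: each tracked person is a sequence of
# centroid positions across frames; a count is registered when the centroid
# crosses a fixed image line. LOI_Y and the track below are hypothetical.
LOI_Y = 240  # pixel row of the line of interest

def count_crossings(track):
    """Return (entries, exits): downward vs. upward crossings of the line."""
    entries = exits = 0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        if y0 < LOI_Y <= y1:      # moved from above the line to below it
            entries += 1
        elif y1 < LOI_Y <= y0:    # moved from below the line back above it
            exits += 1
    return entries, exits

# A centroid that moves down across the line, then back up:
track = [(100, 200), (102, 230), (105, 250), (103, 235)]
```

Keeping the net count per area is then just entries minus exits summed over all tracks, which is how a waiting count per floor can be maintained on the edge device without storing any frames.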

26 pages, 4052 KiB  
Article
Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities
by Edisa Lozić and Benjamin Štular
Future Internet 2023, 15(10), 336; https://doi.org/10.3390/fi15100336 - 13 Oct 2023
Cited by 4 | Viewed by 4706
Abstract
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

16 pages, 3130 KiB  
Article
Comparison of Supervised Learning Algorithms on a 5G Dataset Reduced via Principal Component Analysis (PCA)
by Joan D. Gonzalez-Franco, Jorge E. Preciado-Velasco, Jose E. Lozano-Rizk, Raul Rivera-Rodriguez, Jorge Torres-Rodriguez and Miguel A. Alonso-Arevalo
Future Internet 2023, 15(10), 335; https://doi.org/10.3390/fi15100335 - 11 Oct 2023
Cited by 1 | Viewed by 1482
Abstract
Improving the quality of service (QoS) and meeting service level agreements (SLAs) are critical objectives in next-generation networks. This article presents a study on applying supervised learning (SL) algorithms to a 5G/B5G service dataset after it has been subjected to a principal component analysis (PCA). The study objective is to evaluate whether reducing the dimensionality of the dataset via PCA affects the predictive capacity of the SL algorithms. A machine learning (ML) scheme proposed in a previous article used the same algorithms and parameters, which allows for a fair comparison with the results obtained in this work. We searched for the best hyperparameters for each SL algorithm, and the simulation results indicate that the support vector machine (SVM) algorithm obtained a precision of 98% and an F1 score of 98.1%. We conclude that the findings of this study hold significance for research on next-generation networks, which involve a wide range of input parameters and can benefit from the application of PCA while preserving QoS performance and meeting SLAs.
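The PCA reduction step the study relies on can be sketched from first principles: centre the data, eigen-decompose the covariance matrix, and keep the leading components up to a chosen explained-variance threshold. The dataset below is a synthetic stand-in (six features mixed from two latent factors), not the 5G/B5G service dataset, and the 95% threshold is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in dataset: 200 samples, 6 features that are linear mixtures of
# 2 latent factors plus small noise (the paper's real 5G features are
# not reproduced here).
base = rng.normal(size=(200, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 4))])
X += 0.01 * rng.normal(size=X.shape)

# PCA: centre, eigen-decompose the covariance, keep the leading components
# that together explain 95% of the variance.
Xc = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(Xc.T @ Xc / (len(Xc) - 1))
order = eigval.argsort()[::-1]                 # eigh returns ascending order
eigval, eigvec = eigval[order], eigvec[:, order]
explained = np.cumsum(eigval) / eigval.sum()
n_comp = int(np.searchsorted(explained, 0.95) + 1)
X_reduced = Xc @ eigvec[:, :n_comp]            # projected, lower-dim data
```

Because the six features here are mixtures of two latent factors, at most two components survive the threshold; the reduced matrix is what a downstream SL algorithm such as an SVM would then be trained on.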

29 pages, 9294 KiB  
Article
Oceania’s 5G Multi-Tier Fixed Wireless Access Link’s Long-Term Resilience and Feasibility Analysis
by Satyanand Singh, Joanna Rosak-Szyrocka, István Drotár and Xavier Fernando
Future Internet 2023, 15(10), 334; https://doi.org/10.3390/fi15100334 - 10 Oct 2023
Cited by 2 | Viewed by 1551
Abstract
Information and communications technologies play a vital role in achieving the Sustainable Development Goals (SDGs) and bridging the gap between developed and developing countries. However, various socioeconomic factors adversely impact the deployment of digital infrastructure, such as 5G networks, in the countries of Oceania. The high-speed broadband fifth-generation cellular network (5G) will improve the quality of service for growing mobile users and the massive Internet of Things (IoT). It will also provide ultra-low-latency services required by smart city applications. This study investigates the planning process for a 5G radio access network incorporating sub-6 GHz macro-remote radio units (MRRUs) and mmWave micro-remote radio units (mRRUs). We carefully define an optimization problem for 5G network planning, considering the characteristics of urban macro-cells (UMa) and urban micro-cells (UMi) with appropriate channel models and link budgets. We determine the minimum number of MRRUs and mRRUs that can be installed in each area while meeting coverage and user traffic requirements. This will ensure adequate broadband low-latency network coverage with micro-cells instead of macro-cells. This study evaluates the technical feasibility analysis of combining terrestrial and airborne networks to provide 5G coverage in Oceania, with a special emphasis on Fiji.
(This article belongs to the Section Internet of Things)
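A link budget of the kind the abstract mentions can be sketched with the standard free-space path loss formula. Every power, gain, and sensitivity figure below is an illustrative stand-in, not a value from the paper, and real planning would add shadowing and clutter margins on top:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (standard Friis-derived formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Hypothetical budget for a 28 GHz mmWave micro-remote radio unit at 200 m;
# all power and gain figures are illustrative stand-ins.
tx_power_dbm = 30.0
tx_gain_db, rx_gain_db = 24.0, 10.0
path_loss_db = fspl_db(0.2, 28000.0)  # 0.2 km at 28,000 MHz
rx_power_dbm = tx_power_dbm + tx_gain_db + rx_gain_db - path_loss_db

sensitivity_dbm = -84.0  # assumed receiver sensitivity
link_ok = rx_power_dbm >= sensitivity_dbm
```

Evaluating this budget over a grid of candidate sites is the building block of the coverage constraint in the planning optimization: a cell covers a point only when the received power clears the sensitivity threshold there.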

18 pages, 1943 KiB  
Article
Data-Driven Safe Deliveries: The Synergy of IoT and Machine Learning in Shared Mobility
by Fatema Elwy, Raafat Aburukba, A. R. Al-Ali, Ahmad Al Nabulsi, Alaa Tarek, Ameen Ayub and Mariam Elsayeh
Future Internet 2023, 15(10), 333; https://doi.org/10.3390/fi15100333 - 10 Oct 2023
Viewed by 1387
Abstract
Shared mobility is one of the smart city applications in which traditional individually owned vehicles are transformed into shared and distributed ownership. Ensuring the safety of both drivers and riders is a fundamental requirement in shared mobility. This work aims to design and implement an adequate framework for shared mobility within the context of a smart city. The characteristics of shared mobility are identified, leading to the proposal of an effective solution for real-time data collection, tracking, and automated decisions focusing on safety. Driver and rider safety is considered by identifying dangerous driving behaviors and the prompt response to accidents. Furthermore, a trip log is recorded to identify the reasons behind the accident. A prototype implementation is presented to validate the proposed framework for a delivery service using motorbikes. The results demonstrate the scalability of the proposed design and the integration of the overall system to enhance the rider’s safety using machine learning techniques. The machine learning approach identifies dangerous driving behaviors with an accuracy of 91.59% using the decision tree approach when compared against the support vector machine and K-nearest neighbor approaches.

29 pages, 2458 KiB  
Review
Machine Learning: Models, Challenges, and Research Directions
by Tala Talaei Khoei and Naima Kaabouch
Future Internet 2023, 15(10), 332; https://doi.org/10.3390/fi15100332 - 09 Oct 2023
Cited by 1 | Viewed by 3299
Abstract
Machine learning techniques have emerged as a transformative force, revolutionizing various application domains, particularly cybersecurity. The development of optimal machine learning applications requires the integration of multiple processes, such as data pre-processing, model selection, and parameter optimization. While existing surveys have shed light on these techniques, they have mainly focused on specific application domains. A notable gap that exists in current studies is the lack of a comprehensive overview of machine learning architecture and its essential phases in the cybersecurity field. To address this gap, this survey provides a holistic review of current studies in machine learning, covering techniques applicable to any domain. Models are classified into four categories: supervised, semi-supervised, unsupervised, and reinforcement learning. Each of these categories and their models are described. In addition, the survey discusses the current progress related to data pre-processing and hyperparameter tuning techniques. Moreover, this survey identifies and reviews the research gaps and key challenges that the cybersecurity field faces. By analyzing these gaps, we propose some promising research directions for the future. Ultimately, this survey aims to serve as a valuable resource for researchers interested in learning about machine learning, providing them with insights to foster innovation and progress across diverse application domains.
(This article belongs to the Collection Machine Learning Approaches for User Identity)
21 pages, 3652 KiB  
Article
A Personalized Ontology Recommendation System to Effectively Support Ontology Development by Reuse
by Marwa Abdelreheim, Taysir Hassan A. Soliman and Friederike Klan
Future Internet 2023, 15(10), 331; https://doi.org/10.3390/fi15100331 - 07 Oct 2023
Cited by 1 | Viewed by 1334
Abstract
The profusion of existing ontologies in different domains has made reusing ontologies a best practice when developing new ontologies. The ontology reuse process reduces the expensive cost of developing a new ontology, in terms of time and effort, and supports semantic interoperability. Existing [...] Read more.
The profusion of existing ontologies in different domains has made reusing ontologies a best practice when developing new ontologies. The ontology reuse process reduces the cost of developing a new ontology, in terms of time and effort, and supports semantic interoperability. Existing ontology development tools do not assist in the recommendation of ontologies or their concepts to be reused. Also, existing ontology recommendation tools could suggest whole ontologies covering a set of input keywords without indicating which parts of them (e.g., concepts) can be reused. In this paper, we propose an effective ontology recommendation system that helps the user in the iterative development and reuse of ontologies. The system allows the user to provide explicit preferences about the new ontology, and iteratively guides the user to parts of existing ontologies that match their preferences for reuse. Finally, we developed a prototype of our ontology recommendation system and conducted a user-based evaluation to assess the effectiveness of our approach. Full article
37 pages, 12528 KiB  
Article
Leveraging Taxonomical Engineering for Security Baseline Compliance in International Regulatory Frameworks
by Šarūnas Grigaliūnas, Michael Schmidt, Rasa Brūzgienė, Panayiota Smyrli and Vladislav Bidikov
Future Internet 2023, 15(10), 330; https://doi.org/10.3390/fi15100330 - 07 Oct 2023
Viewed by 1516
Abstract
A surge in successful Information Security (IS) breaches targeting Research and Education (R&E) institutions highlights a pressing need for enhanced protection. Addressing this, a consortium of European National Research and Education Network (NREN) organizations has developed a unified IS framework. This paper aims [...] Read more.
A surge in successful Information Security (IS) breaches targeting Research and Education (R&E) institutions highlights a pressing need for enhanced protection. Addressing this, a consortium of European National Research and Education Network (NREN) organizations has developed a unified IS framework. This paper aims to introduce the Security Baseline for NRENs and a security maturity model tailored for R&E entities, derived from established security best practices to meet the specific needs of NRENs, universities, and various research institutions. The models currently in existence do not possess a system to smoothly correlate varying requirement tiers with distinct user groups or scenarios, baseline standards, and existing legislative actions. This segmentation poses a significant hurdle to the community’s capacity to guarantee consistency, congruency, and thorough compliance with a cohesive array of security standards and regulations. By employing taxonomical engineering principles, a mapping of baseline requirements to other security frameworks and regulations has been established. This reveals a correlation across most regulations impacting R&E institutions and uncovers an overlap in the high-level requirements, which is beneficial for the implementation of multiple standards. Consequently, organizations can systematically compare diverse security requirements, pinpoint gaps in their strategy, and formulate a roadmap to bolster their security initiatives. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
23 pages, 3075 KiB  
Article
End-to-End Service Availability in Heterogeneous Multi-Tier Cloud–Fog–Edge Networks
by Igor Kabashkin
Future Internet 2023, 15(10), 329; https://doi.org/10.3390/fi15100329 - 06 Oct 2023
Cited by 3 | Viewed by 1362
Abstract
With the evolution towards the interconnected future internet spanning satellites, aerial systems, terrestrial infrastructure, and oceanic networks, availability modeling becomes imperative to ensure reliable service. This paper presents a methodology to assess end-to-end availability in complex multi-tiered architectures using a Markov model tailored [...] Read more.
With the evolution towards the interconnected future internet spanning satellites, aerial systems, terrestrial infrastructure, and oceanic networks, availability modeling becomes imperative to ensure reliable service. This paper presents a methodology to assess end-to-end availability in complex multi-tiered architectures using a Markov model tailored to the unique characteristics of cloud, fog, edge, and IoT layers. By quantifying individual tier reliability and combinations thereof, the approach enables setting availability targets during the design and evaluation of operational systems. In the paper, a methodology is proposed to construct a Markov model for the reliability of discrete tiers and end-to-end service availability in heterogeneous multi-tier cloud–fog–edge networks, and the model is demonstrated through numerical examples assessing availability in multi-tier networks. The numerical examples demonstrate the adaptability of the model to various topologies from conventional three-tier to arbitrary multi-level architectures. As connectivity becomes ubiquitous across heterogeneous devices and networks, the proposed approach and availability modeling provide an effective tool for reinforcing the future internet’s fault tolerance and service quality. Full article
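The tier-level availability idea in this abstract can be illustrated with a minimal sketch (this is not the paper's model: the two-state up/down Markov chain per tier, the serial composition, and all rates below are our own illustrative assumptions):

```python
# Illustrative sketch: steady-state availability of a single repairable tier
# modeled as a two-state (up/down) Markov chain with failure rate lam and
# repair rate mu, then end-to-end availability for tiers composed in series.
def tier_availability(lam, mu):
    """Steady-state availability of the two-state chain: A = mu / (lam + mu)."""
    return mu / (lam + mu)

def end_to_end_availability(tiers):
    """Service is up only if every tier (cloud, fog, edge, ...) is up.

    tiers: list of (failure_rate, repair_rate) pairs, one per tier.
    """
    availability = 1.0
    for lam, mu in tiers:
        availability *= tier_availability(lam, mu)
    return availability

# Hypothetical per-hour rates for cloud, fog, and edge tiers.
example_tiers = [(0.001, 0.5), (0.002, 0.4), (0.005, 0.25)]
```

Serial composition multiplies tier availabilities, which is the simplest end-to-end case a multi-tier model generalizes (e.g., with redundant paths inside a tier).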
17 pages, 7744 KiB  
Article
Evaluating MPTCP Congestion Control Algorithms: Implications for Streaming in Open Internet
by Łukasz Piotr Łuczak, Przemysław Ignaciuk and Michał Morawski
Future Internet 2023, 15(10), 328; https://doi.org/10.3390/fi15100328 - 04 Oct 2023
Viewed by 1405
Abstract
In today’s digital era, the demand for uninterrupted and efficient data streaming is paramount across various sectors, from entertainment to industrial automation. While the traditional single-path solutions often fell short in ensuring rapid and consistent data transfers, Multipath TCP (MPTCP) emerges as a [...] Read more.
In today’s digital era, the demand for uninterrupted and efficient data streaming is paramount across various sectors, from entertainment to industrial automation. While the traditional single-path solutions often fell short in ensuring rapid and consistent data transfers, Multipath TCP (MPTCP) emerges as a promising alternative, enabling simultaneous data transfer across multiple network paths. The efficacy of MPTCP, however, hinges on the choice of appropriate congestion control (CC) algorithms. Addressing the present knowledge gap, this research provides a thorough evaluation of key MPTCP CC algorithms in the context of streaming applications in open Internet environments. Our findings reveal that BALIA stands out as the most suitable choice for MPTCP streaming, adeptly balancing waiting time, throughput, and Head-of-Line blocking reduction. Conversely, the wVegas algorithm, with its delay-centric approach, proves less adequate for multipath streaming. This study underscores the imperative to fine-tune MPTCP for streaming applications, at the same time offering insights for future development areas and innovations. Full article
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)
15 pages, 1932 KiB  
Article
MSEN: A Multi-Scale Evolutionary Network for Modeling the Evolution of Temporal Knowledge Graphs
by Yong Yu, Shudong Chen, Rong Du, Da Tong, Hao Xu and Shuai Chen
Future Internet 2023, 15(10), 327; https://doi.org/10.3390/fi15100327 - 30 Sep 2023
Cited by 1 | Viewed by 1257
Abstract
Temporal knowledge graphs play an increasingly prominent role in scenarios such as social networks, finance, and smart cities. As such, research on temporal knowledge graphs continues to deepen. In particular, research on temporal knowledge graph reasoning holds great significance, as it can provide [...] Read more.
Temporal knowledge graphs play an increasingly prominent role in scenarios such as social networks, finance, and smart cities. As such, research on temporal knowledge graphs continues to deepen. In particular, research on temporal knowledge graph reasoning holds great significance, as it can provide abundant knowledge for downstream tasks such as question answering and recommendation systems. Current reasoning research focuses primarily on interpolation and extrapolation. Extrapolation research aims to predict the likelihood of events occurring in future timestamps. Historical events are crucial for predicting future events. However, existing models struggle to fully capture the evolutionary characteristics of historical knowledge graphs. This paper proposes a multi-scale evolutionary network (MSEN) model that leverages a Hierarchical Transfer-aware Graph Neural Network (HT-GNN) in a local memory encoder to aggregate rich structural semantics from each timestamp’s knowledge graph. It also utilizes a Time-Related Graph Neural Network (TR-GNN) in a global memory encoder to model temporal-semantic dependencies of entities across the global knowledge graph, mining global evolutionary patterns. The model integrates information from both encoders to generate entity embeddings for predicting future events. The proposed MSEN model demonstrates strong performance compared to several baselines on typical benchmark datasets. Results show that MSEN achieves the highest prediction accuracy. Full article
27 pages, 312 KiB  
Article
A New Approach to Web Application Security: Utilizing GPT Language Models for Source Code Inspection
by Zoltán Szabó and Vilmos Bilicki
Future Internet 2023, 15(10), 326; https://doi.org/10.3390/fi15100326 - 28 Sep 2023
Cited by 1 | Viewed by 2690
Abstract
Due to the proliferation of large language models (LLMs) and their widespread use in applications such as ChatGPT, there has been a significant increase in interest in AI over the past year. Multiple researchers have raised the question: how will AI be applied [...] Read more.
Due to the proliferation of large language models (LLMs) and their widespread use in applications such as ChatGPT, there has been a significant increase in interest in AI over the past year. Multiple researchers have raised the question: how will AI be applied and in what areas? Programming, including the generation, interpretation, analysis, and documentation of static program code based on prompts, is one of the most promising fields. With the GPT API, we have explored a new aspect of this: static analysis of the source code of front-end applications at the endpoints of the data path. Our focus was the detection of the CWE-653 vulnerability—inadequately isolated sensitive code segments that could lead to unauthorized access or data leakage. This type of vulnerability detection consists of the detection of code segments dealing with sensitive data and the categorization of the isolation and protection levels of those segments, tasks that were previously not feasible without human intervention. However, we believed that the interpretive capabilities of GPT models could be explored to create a set of prompts to detect these cases on a file-by-file basis for the applications under study, and the efficiency of the method could pave the way for additional analysis tasks that were previously unavailable for automation. In the introduction to our paper, we characterize in detail the problem space of vulnerability and weakness detection, the challenges of the domain, and the advances that have been achieved in similarly complex areas using GPT or other LLMs. Then, we present our methodology, which includes our classification of sensitive data and protection levels. This is followed by the process of preprocessing, analyzing, and evaluating static code. This was achieved through a series of GPT prompts containing parts of static source code, utilizing few-shot examples and chain-of-thought techniques that detected sensitive code segments and mapped the complex code base into manageable JSON structures. Finally, we present our findings and evaluation of the open source project analysis, comparing the results of the GPT-based pipelines with manual evaluations and highlighting that the field yields a high research value. The results show, among other findings, a vulnerability detection rate of 88.76% for this particular type of model. Full article
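The prompt-based, file-by-file analysis described above can be sketched roughly as follows (hypothetical: the prompt wording, the JSON field names, and the few-shot example are our own illustrations, not the paper's actual prompts):

```python
import json

# Hypothetical sketch of the per-file analysis step: a few-shot prompt asks
# an LLM to label sensitive code segments and answer with a machine-readable
# JSON structure. Field names here are illustrative only.
FEW_SHOT = """Example input:
  const token = localStorage.getItem('authToken');
Example output:
  {"sensitive": true, "category": "credentials", "isolation": "none"}
"""

def build_prompt(source_chunk):
    """Assemble the classification prompt for one chunk of static source code."""
    return (
        "Classify the following code for sensitive-data handling and "
        "isolation level. Answer with JSON only.\n"
        + FEW_SHOT
        + "Input:\n" + source_chunk + "\nOutput:\n"
    )

def parse_verdict(llm_reply):
    """Map the model's reply into a manageable JSON structure."""
    return json.loads(llm_reply)
```

A real pipeline would send `build_prompt(...)` to the model API and aggregate the per-file verdicts; that call is omitted here.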
18 pages, 7166 KiB  
Article
Investigating IPTV Malware in the Wild
by Adam Lockett, Ioannis Chalkias, Cagatay Yucel, Jane Henriksen-Bulmer and Vasilis Katos
Future Internet 2023, 15(10), 325; https://doi.org/10.3390/fi15100325 - 28 Sep 2023
Cited by 1 | Viewed by 1804
Abstract
Technologies providing copyright-infringing IPTV content are commonly used as an illegal alternative to legal IPTV subscriptions and services, as they usually have lower monetary costs and can be more convenient for users who follow content from different sources. These infringing IPTV technologies may [...] Read more.
Technologies providing copyright-infringing IPTV content are commonly used as an illegal alternative to legal IPTV subscriptions and services, as they usually have lower monetary costs and can be more convenient for users who follow content from different sources. These infringing IPTV technologies may include websites, software, software add-ons, and physical set-top boxes. Due to the free or low cost of illegal IPTV technologies, illicit IPTV content providers will often resort to intrusive advertising, scams, and the distribution of malware to increase their revenue. We developed an automated solution for collecting and analysing malware from illegal IPTV technologies and used it to analyse a sample of illicit IPTV websites, application (app) stores, and software. Our results show that our IPTV Technologies Malware Analysis Framework (IITMAF) classified 32 of the 60 sample URLs tested as malicious compared to running the same test using publicly available online antivirus solutions, which only detected 23 of the 60 sample URLs as malicious. Moreover, the IITMAF also detected malicious URLs and files from 31 of the sample’s websites, one of which had reported ransomware behaviour. Full article
25 pages, 374 KiB  
Review
Dynamic Risk Assessment in Cybersecurity: A Systematic Literature Review
by Pavlos Cheimonidis and Konstantinos Rantos
Future Internet 2023, 15(10), 324; https://doi.org/10.3390/fi15100324 - 28 Sep 2023
Cited by 2 | Viewed by 2562
Abstract
Traditional information security risk assessment (RA) methodologies and standards, adopted by information security management systems and frameworks as a foundation stone towards robust environments, face many difficulties in modern environments where the threat landscape changes rapidly and new vulnerabilities are being discovered. In [...] Read more.
Traditional information security risk assessment (RA) methodologies and standards, adopted by information security management systems and frameworks as a foundation stone towards robust environments, face many difficulties in modern environments where the threat landscape changes rapidly and new vulnerabilities are being discovered. In order to overcome this problem, dynamic risk assessment (DRA) models have been proposed to continuously and dynamically assess risks to organisational operations in (near) real time. The aim of this work is to analyse the current state of DRA models that have been proposed for cybersecurity, through a systematic literature review. The screening process led us to study 50 DRA models, categorised based on the respective primary analysis methods they used. The study provides insights into the key characteristics of these models, including the maturity level of the examined models, the domain or application area in which these models flourish, and the information they utilise in order to produce results. The aim of this work is to answer critical research questions regarding the development of dynamic risk assessment methodologies and provide insights on the already developed methods as well as future research directions. Full article
(This article belongs to the Special Issue Privacy and Security in Computing Continuum and Data-Driven Workflows)
13 pages, 534 KiB  
Article
Temporal-Guided Knowledge Graph-Enhanced Graph Convolutional Network for Personalized Movie Recommendation Systems
by Chin-Yi Chen and Jih-Jeng Huang
Future Internet 2023, 15(10), 323; https://doi.org/10.3390/fi15100323 - 28 Sep 2023
Viewed by 1303
Abstract
Traditional movie recommendation systems are increasingly falling short in the contemporary landscape of abundant information and evolving user behaviors. This study introduced the temporal knowledge graph recommender system (TKGRS), a ground-breaking algorithm that addresses the limitations of existing models. TKGRS uniquely integrates graph [...] Read more.
Traditional movie recommendation systems are increasingly falling short in the contemporary landscape of abundant information and evolving user behaviors. This study introduced the temporal knowledge graph recommender system (TKGRS), a ground-breaking algorithm that addresses the limitations of existing models. TKGRS uniquely integrates graph convolutional networks (GCNs), matrix factorization, and temporal decay factors to offer a robust and dynamic recommendation mechanism. The algorithm’s architecture comprises an initial embedding layer for identifying the user and item, followed by a GCN layer for a nuanced understanding of the relationships and fully connected layers for prediction. A temporal decay factor is also used to give weightage to recent user–item interactions. Empirical validation using the MovieLens 100K, 1M, and Douban datasets showed that TKGRS outperformed the state-of-the-art models according to the evaluation metrics, i.e., RMSE and MAE. This innovative approach sets a new standard in movie recommendation systems and opens avenues for future research in advanced graph algorithms and machine learning techniques. Full article
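The temporal decay factor mentioned in this abstract can be illustrated with a small sketch (the exponential form, the 30-day half-life, and the weighting scheme are our own assumptions; the paper's actual decay function is not given in the abstract):

```python
import math

# Illustrative sketch of temporal-decay weighting: recent user-item
# interactions contribute more to a preference score than old ones.
def decay_weight(age_days, half_life_days=30.0):
    """Exponential decay: weight halves every half_life_days."""
    return math.exp(-math.log(2.0) * age_days / half_life_days)

def weighted_score(interactions):
    """Decay-weighted mean rating for one user-item pair.

    interactions: list of (rating, age_days) pairs.
    """
    num = sum(rating * decay_weight(age) for rating, age in interactions)
    den = sum(decay_weight(age) for _, age in interactions)
    return num / den if den else 0.0
```

In a GCN-based recommender, such weights would typically scale edge contributions or interaction terms before training rather than post-process scores.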
26 pages, 45126 KiB  
Article
Application of an Effective Hierarchical Deep-Learning-Based Object Detection Model Integrated with Image-Processing Techniques for Detecting Speed Limit Signs, Rockfalls, Potholes, and Car Crashes
by Yao-Liang Chung
Future Internet 2023, 15(10), 322; https://doi.org/10.3390/fi15100322 - 28 Sep 2023
Cited by 1 | Viewed by 1922
Abstract
Against the backdrop of rising road traffic accident rates, measures to prevent road traffic accidents have always been a pressing issue in Taiwan. Road traffic accidents are mostly caused by speeding and roadway obstacles, especially in the form of rockfalls, potholes, and car [...] Read more.
Against the backdrop of rising road traffic accident rates, measures to prevent road traffic accidents have always been a pressing issue in Taiwan. Road traffic accidents are mostly caused by speeding and roadway obstacles, especially in the form of rockfalls, potholes, and car crashes (involving damaged cars and overturned cars). To address this, it was necessary to design a real-time detection system that could detect speed limit signs, rockfalls, potholes, and car crashes, which would alert drivers to make timely decisions in the event of an emergency, thereby preventing secondary car crashes. This system would also be useful for alerting the relevant authorities, enabling a rapid response to the situation. In this study, a hierarchical deep-learning-based object detection model is proposed based on You Only Look Once v7 (YOLOv7) and mask region-based convolutional neural network (Mask R-CNN) algorithms. In the first level, YOLOv7 identifies speed limit signs, rockfalls, potholes, and car crashes. In the second level, Mask R-CNN subdivides the speed limit signs into nine categories (30, 40, 50, 60, 70, 80, 90, 100, and 110 km/h). The images used in this study consisted of screen captures of dashcam footage as well as images obtained from the Tsinghua-Tencent 100K dataset, Google Street View, and Google Images searches. During model training, we employed Gaussian noise and image rotation to simulate poor weather conditions as well as obscured, slanted, or twisted objects. Canny edge detection was used to enhance the contours of the detected objects and accentuate their features. The combined use of these image-processing techniques effectively increased the quantity and variety of images in the training set. During model testing, we evaluated the model’s performance based on its mean average precision (mAP). 
The experimental results showed that the mAP of our proposed model was 8.6 percentage points higher than that of the YOLOv7 model—a significant improvement in the overall accuracy of the model. In addition, we tested the model using videos showing different scenarios that had not been used in the training process, finding the model to have a rapid response time and a lower overall mean error rate. To summarize, the proposed model is a good candidate for road safety detection. Full article
(This article belongs to the Section Big Data and Augmented Intelligence)
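The mAP evaluation mentioned above rests on matching detections to ground truth by overlap; a minimal sketch of that building block, intersection-over-union (with our own corner-coordinate box convention, not necessarily the paper's):

```python
# Intersection-over-union (IoU) between a predicted and a ground-truth box:
# the standard criterion for deciding whether a detection counts as a true
# positive before precision/recall (and hence mAP) are computed.
def iou(box_a, box_b):
    """Boxes given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A detection is typically counted as correct when IoU with a ground-truth box exceeds a threshold such as 0.5; averaging precision over recall levels and classes yields mAP.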
14 pages, 1084 KiB  
Article
Exploring the Factors Affecting Countries’ Adoption of Blockchain-Enabled Central Bank Digital Currencies
by Medina Ayta Mohammed, Carmen De-Pablos-Heredero and José Luis Montes Botella
Future Internet 2023, 15(10), 321; https://doi.org/10.3390/fi15100321 - 28 Sep 2023
Cited by 2 | Viewed by 2065
Abstract
Central bank-issued digital currencies have sparked significant interest and are currently the subject of extensive research, owing to their potential for rapid settlement, low fees, accessibility, and automated monetary policies. However, central bank digital currencies are still in their infancy and the levels [...] Read more.
Central bank-issued digital currencies have sparked significant interest and are currently the subject of extensive research, owing to their potential for rapid settlement, low fees, accessibility, and automated monetary policies. However, central bank digital currencies are still in their infancy and the levels of adoption vary significantly between nations, with a few countries seeing widespread adoption. We used partial least squares structural equation modeling to investigate the nonlinear relationship between key national development indicators and central bank digital currency deployment across 67 countries. We explore the technological, environmental, legal, and economic factors that affect central bank digital currency adoption by country. We found a statistically significant and positive correlation between countries’ central bank digital currency adoption status and a country’s level of democracy and public confidence in governance, and a negative association between regulatory quality and income inequality. There was no significant association between countries’ central bank digital currency adoption status and their level of network readiness, foreign exchange reserves, and sustainable development goal rank. Thus, we posit that a country that is highly democratic and has good governance adopts central bank digital currencies more readily than others. Based on our findings, we suggested areas for additional research and highlighted policy considerations related to the wider adoption of central bank digital currency. Full article
(This article belongs to the Special Issue Blockchain and Web 3.0: Applications, Challenges and Future Trends)
19 pages, 979 KiB  
Article
Multi-Antenna Jammer-Assisted Secure Short Packet Communications in IoT Networks
by Dechuan Chen, Jin Li, Jianwei Hu, Xingang Zhang and Shuai Zhang
Future Internet 2023, 15(10), 320; https://doi.org/10.3390/fi15100320 - 26 Sep 2023
Cited by 1 | Viewed by 1090
Abstract
In this work, we exploit a multi-antenna cooperative jammer to enable secure short packet communications in Internet of Things (IoT) networks. Specifically, we propose three jamming schemes to combat eavesdropping, i.e., the zero forcing beamforming (ZFB) scheme, null-space artificial noise (NAN) scheme, and [...] Read more.
In this work, we exploit a multi-antenna cooperative jammer to enable secure short packet communications in Internet of Things (IoT) networks. Specifically, we propose three jamming schemes to combat eavesdropping, i.e., the zero forcing beamforming (ZFB) scheme, null-space artificial noise (NAN) scheme, and transmit antenna selection (TAS) scheme. Assuming Rayleigh fading, we derive new closed-form approximations for the secrecy throughput with finite blocklength coding. To gain further insights, we also analyze the asymptotic performance of the secrecy throughput in the case of infinite blocklength. Furthermore, we investigate the optimization problem in terms of maximizing the secrecy throughput with the latency and reliability constraints to determine the optimal blocklength. Simulation results validate the accuracy of the approximations and evaluate the impact of key parameters such as the jamming power and the number of antennas at the jammer on the secrecy throughput. Full article
13 pages, 4268 KiB  
Article
Modeling 3D NAND Flash with Nonparametric Inference on Regression Coefficients for Reliable Solid-State Storage
by Michela Borghesi, Cristian Zambelli, Rino Micheloni and Stefano Bonnini
Future Internet 2023, 15(10), 319; https://doi.org/10.3390/fi15100319 - 26 Sep 2023
Viewed by 1038
Abstract
Solid-state drives represent the preferred backbone storage solution thanks to their low latency and high throughput capabilities compared to mechanical hard disk drives. The performance of a drive is intertwined with the reliability of the memories; hence, modeling their reliability is an important [...] Read more.
Solid-state drives represent the preferred backbone storage solution thanks to their low latency and high throughput capabilities compared to mechanical hard disk drives. The performance of a drive is intertwined with the reliability of the memories; hence, modeling their reliability is an important task to be performed as a support for storage system designers. In the literature, storage developers devise dedicated parametric statistical approaches to model the evolution of the memory’s error distribution through well-known statistical frameworks. Some of these well-founded reliability models have a deep connection with the 3D NAND flash technology. In fact, the more precise and accurate the model, the lower the probability of incurring storage performance slowdowns. In this work, to avoid some limitations of the parametric methods, a non-parametric approach to test the model goodness-of-fit based on combined permutation tests is carried out. The results show that the electrical characterization of the different memory blocks and pages tested provides a fail bit count (FBC) feature that can be well modeled using a multiple regression analysis. Full article
(This article belongs to the Section Smart System Infrastructure and Applications)
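The combined permutation tests mentioned in this abstract build on a simple resampling principle that can be sketched generically (the test statistic below is a difference of means, standing in for the paper's statistic on regression coefficients, which the abstract does not specify):

```python
import random

# Generic permutation-test sketch: compare an observed statistic against
# its distribution under random relabeling of the pooled observations.
def permutation_p_value(sample_a, sample_b, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for a difference-of-means statistic."""
    rng = random.Random(seed)
    observed = abs(sum(sample_a) / len(sample_a) - sum(sample_b) / len(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of group membership
        a, b = pooled[:n_a], pooled[n_a:]
        stat = abs(sum(a) / len(a) - sum(b) / len(b))
        if stat >= observed:
            count += 1
    # Add-one correction so the p-value is never exactly zero.
    return (count + 1) / (n_perm + 1)
```

A combined test, as used in the paper, would merge several such partial p-values (e.g., one per regression coefficient) into a single global test.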
18 pages, 1957 KiB  
Article
An Enhanced Minimax Loss Function Technique in Generative Adversarial Network for Ransomware Behavior Prediction
by Mazen Gazzan and Frederick T. Sheldon
Future Internet 2023, 15(10), 318; https://doi.org/10.3390/fi15100318 - 22 Sep 2023
Cited by 1 | Viewed by 1351
Abstract
Recent ransomware attacks threaten not only personal files but also critical infrastructure like smart grids, necessitating early detection before encryption occurs. Current methods, reliant on pre-encryption data, suffer from insufficient and rapidly outdated attack patterns, despite efforts to focus on select features. Such [...] Read more.
Recent ransomware attacks threaten not only personal files but also critical infrastructure like smart grids, necessitating early detection before encryption occurs. Current methods, reliant on pre-encryption data, suffer from insufficient and rapidly outdated attack patterns, despite efforts to focus on select features. Such an approach assumes that the same features remain unchanged. This approach proves ineffective due to the polymorphic and metamorphic characteristics of ransomware, which generate unique attack patterns for each new target, particularly in the pre-encryption phase where evasiveness is prioritized. As a result, the selected features quickly become obsolete. Therefore, this study proposes an enhanced Bi-Gradual Minimax (BGM) loss function for the Generative Adversarial Network (GAN) algorithm that compensates for the insufficiency of attack patterns in representing the polymorphic behavior in the earlier phases of the ransomware lifecycle. Unlike existing GAN-based models, the BGM-GAN gradually minimizes the maximum loss of the generator and discriminator in the network. This allows the generator to create artificial patterns that resemble the pre-encryption data distribution. The generator is used to craft evasive adversarial patterns and add them to the original data. Then, the generator and discriminator compete to optimize their weights during the training phase such that the generator produces realistic attack patterns, while the discriminator endeavors to distinguish between the real and crafted patterns. The experimental results show that the proposed BGM-GAN reached a maximum accuracy of 0.98 and recall of 0.96, with a minimum false positive rate of 0.14, all of which outperform those obtained by existing works. The application of BGM-GAN can be extended to the early detection of malware and other types of attacks. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
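As context for the loss function discussed above, here is a sketch of the standard GAN minimax objectives that a modified loss such as BGM would adjust (the gradual min-max schedule itself is not specified in the abstract and is not reproduced; all function names are our own):

```python
import math

# Illustrative sketch only: the baseline GAN minimax losses. d_real and
# d_fake are batches of discriminator scores in (0, 1) for real samples
# and generated samples, respectively.
def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Negated objective: maximize mean of log D(x) + log(1 - D(G(z)))."""
    n = len(d_real)
    return -sum(math.log(r + eps) + math.log(1.0 - f + eps)
                for r, f in zip(d_real, d_fake)) / n

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating generator objective: maximize mean of log D(G(z))."""
    return -sum(math.log(f + eps) for f in d_fake) / len(d_fake)
```

During training the discriminator descends its loss while the generator descends its own, which is the competition the abstract describes; a gradual minimax variant would reschedule how aggressively each side's maximum loss is minimized over epochs.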