Harnessing Machine Learning and AI in Cybersecurity

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (15 July 2023) | Viewed by 18446

Special Issue Editors


Dr. Hisham Kholidy
Guest Editor
Department of Networks and Computer Security, SUNY Polytechnic Institute, College of Engineering, Utica, NY 13502, USA
Interests: cybersecurity; cloud computing; applied artificial intelligence; advanced machine learning

Dr. Mohammad Rahman
Guest Editor
Department of Electrical and Computer Engineering, Florida International University, Miami, FL 33174, USA
Interests: security and resiliency analysis and design; computer networks; SDN/NFV; clouds; cyber–physical systems/Internet of Things (IoT); formal verification and synthesis; autonomous and unmanned vehicles

Dr. Sherif Saad
Guest Editor
School of Computer Science, University of Windsor, Windsor, ON N9B 3P4, Canada
Interests: cybersecurity; dependability; resilient computing; applied machine learning; secure software engineering

Dr. Pratik Satam
Guest Editor
Department of Electrical and Computer Engineering, The University of Arizona, Tucson, AZ 85719, USA
Interests: network security; computer security; industrial control systems (ICS) security; Internet of Things (IoT) security

Special Issue Information

Dear Colleagues,

Coordinated cyber attacks are a credible threat: they can cause cascading failures across large areas of critical system operations. Artificial intelligence (AI)-based tools for cybersecurity have emerged to help information security teams reduce breach risk and improve their security posture efficiently and effectively.

AI and machine learning (ML) have become critical technologies in information security because they can quickly analyze millions of events and identify many different types of threats, from malware exploiting zero-day vulnerabilities to risky behavior that might lead to a phishing attack or the download of malicious code. These technologies learn over time, drawing on past data to identify new types of attacks as they emerge.

This Special Issue focuses on the practical aspects of cybersecurity, with an emphasis on interdisciplinary approaches. We welcome original contributions on novel threats, defense and security mechanisms, and autonomic security solutions. We also seek contributions motivated by real-world security and forensic problems, as well as theoretical works with clear intentions for practical application. Topics of interest include, but are not limited to, the following:

  • Novel cyber attacks and their modeling and analysis.
  • Zero Trust models and zero-day attacks.
  • Intrusion deception techniques.
  • Risk assessment approaches.
  • Attack detection and defense.
  • Network slicing security in 5G systems.
  • Trusted computing and communication for 5G and cloud computing systems.
  • Game-theoretic approaches for 5G and cloud computing systems.
  • Information-theoretic security and privacy.
  • Autonomous intrusion response systems.
  • Verification and validation of 5G and cloud computing systems security.
  • Data confidentiality and privacy.
  • Authentication and access control.
  • Design of system architectures and control for 5G and cloud computing systems security.
  • Financial damage analysis of attacks on 5G assets.
  • Interdisciplinary approaches for securing 5G and cloud computing systems.
  • Experiments, test beds, and prototyping systems for security.
  • Monitoring, tracking, and detection systems.
  • Application of efficient machine intelligence to cybercrimes and attacks.

Papers are published upon acceptance, regardless of the Special Issue submission deadline.

Dr. Hisham Kholidy
Dr. Mohammad Rahman
Dr. Sherif Saad
Dr. Pratik Satam
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

27 pages, 22523 KiB  
Article
A Malicious Code Detection Method Based on Stacked Depthwise Separable Convolutions and Attention Mechanism
by Hong Huang, Rui Du, Zhaolian Wang, Xin Li and Guotao Yuan
Sensors 2023, 23(16), 7084; https://doi.org/10.3390/s23167084 - 10 Aug 2023
Cited by 4 | Viewed by 1839
Abstract
To address the challenges of weak model generalization and limited model capacity adaptation in traditional malware detection methods, this article presents a novel malware detection approach based on stacked depthwise separable convolutions and self-attention, termed CoAtNet. This method combines the strengths of the self-attention module’s robust model adaptation and the convolutional networks’ powerful generalization abilities. The initial step involves transforming the malicious code into grayscale images. These images are subsequently processed using a detection model that employs stacked depthwise separable convolutions and an attention mechanism. This model effectively recognizes and classifies the images, automatically extracting essential features from malicious software images. The effectiveness of the method was validated through comparative experiments using both the Malimg dataset and the augmented Blended+ dataset. The approach’s performance was evaluated against popular models, including XceptionNet, EfficientNetB0, ResNet50, VGG16, DenseNet169, and InceptionResNetV2. The experimental results highlight that the model surpasses other malware detection models in terms of accuracy and generalization ability. In conclusion, the proposed method addresses the limitations of traditional malware detection approaches by leveraging stacked depthwise separable convolutions and self-attention. Comprehensive experiments demonstrate its superior performance compared to existing models. This research contributes to advancing the field of malware detection and provides a promising solution for enhanced accuracy and robustness. Full article
(This article belongs to the Special Issue Harnessing Machine Learning and AI in Cybersecurity)
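
As a rough illustration of the pipeline this abstract describes, the sketch below converts a binary's raw bytes into a grayscale image and defines a depthwise separable convolution block in PyTorch. The fixed image width and layer sizes are assumptions for illustration, not the paper's actual CoAtNet configuration.

```python
# Illustrative sketch only; the image width and layer sizes are assumptions,
# not the CoAtNet configuration used in the paper.
import numpy as np
import torch.nn as nn
from PIL import Image

def binary_to_grayscale(path: str, width: int = 256) -> Image.Image:
    """Reshape a file's raw bytes into a 2-D grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = -(-len(data) // width)                  # ceiling division
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(data)] = data                       # zero-pad the last row
    return Image.fromarray(padded.reshape(height, width), mode="L")

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (groups=in_ch) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```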

15 pages, 11775 KiB  
Article
Fooling Examples: Another Intriguing Property of Neural Networks
by Ming Zhang, Yongkang Chen and Cheng Qian
Sensors 2023, 23(14), 6378; https://doi.org/10.3390/s23146378 - 13 Jul 2023
Cited by 6 | Viewed by 1654
Abstract
Neural networks have been proven to be vulnerable to adversarial examples; these are examples that can be recognized by both humans and neural networks, although neural networks give incorrect predictions. As an intriguing property of neural networks, adversarial examples pose a serious threat to the secure application of neural networks. In this article, we present another intriguing property of neural networks: the fact that well-trained models believe some examples to be recognizable objects (often with high confidence), while humans cannot recognize such examples. We refer to these as “fooling examples”. Specifically, we take inspiration from the construction of adversarial examples and develop an iterative method for generating fooling examples. The experimental results show that fooling examples can not only be easily generated, with a success rate of nearly 100% in the white-box scenario, but also exhibit strong transferability across different models in the black-box scenario. Tests on the Google Cloud Vision API show that fooling examples can also be recognized by real-world computer vision systems. Our findings reveal a new cognitive deficit of neural networks, and we hope that these potential security threats will be addressed in future neural network applications. Full article
(This article belongs to the Special Issue Harnessing Machine Learning and AI in Cybersecurity)
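
A minimal sketch of the general idea (gradient ascent on a target-class logit, starting from random noise) follows; the paper's actual iterative method and hyperparameters may differ, and the pretrained ResNet-18 is an assumption for illustration.

```python
# Illustrative sketch, not the paper's method: ascend a target-class logit
# from random noise until the model is confident about an unrecognizable image.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def make_fooling_example(target_class: int, steps: int = 200, lr: float = 0.05):
    x = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from noise
    for _ in range(steps):
        loss = model(x)[0, target_class]                 # target-class logit
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad.sign()                      # signed-gradient step
            x.clamp_(0.0, 1.0)                           # keep pixels valid
            x.grad.zero_()
    conf = torch.softmax(model(x), dim=1)[0, target_class].item()
    return x.detach(), conf
```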

18 pages, 538 KiB  
Article
Adversarial-Aware Deep Learning System Based on a Secondary Classical Machine Learning Verification Approach
by Mohammed Alkhowaiter, Hisham Kholidy, Mnassar A. Alyami, Abdulmajeed Alghamdi and Cliff Zou
Sensors 2023, 23(14), 6287; https://doi.org/10.3390/s23146287 - 11 Jul 2023
Cited by 4 | Viewed by 1823
Abstract
Deep learning models have been used in creating various effective image classification applications. However, they are vulnerable to adversarial attacks that seek to misguide the models into predicting incorrect classes. Our study of major adversarial attack models shows that they all specifically target and exploit the neural networking structures in their designs. This understanding led us to develop a hypothesis that most classical machine learning models, such as random forest (RF), are immune to adversarial attack models because they do not rely on neural network design at all. Our experimental study of classical machine learning models against popular adversarial attacks supports this hypothesis. Based on this hypothesis, we propose a new adversarial-aware deep learning system by using a classical machine learning model as the secondary verification system to complement the primary deep learning model in image classification. Although the secondary classical machine learning model has less accurate output, it is only used for verification purposes, which does not impact the output accuracy of the primary deep learning model, and, at the same time, can effectively detect an adversarial attack when a clear mismatch occurs. Our experiments based on the CIFAR-100 dataset show that our proposed approach outperforms current state-of-the-art adversarial defense systems. Full article
(This article belongs to the Special Issue Harnessing Machine Learning and AI in Cybersecurity)
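
The verification idea can be sketched as below: a primary deep model and a secondary classical model each classify an input, and a disagreement flags a possible adversarial example. The `deep_model` wrapper and the random forest's training data are assumptions; the paper's system may differ in detail.

```python
# Minimal sketch of the secondary-verification idea, assuming a primary deep
# model exposing .predict() and flattened image features for the verifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class AdversarialAwareClassifier:
    def __init__(self, deep_model, X_train: np.ndarray, y_train: np.ndarray):
        self.deep_model = deep_model                     # hypothetical wrapper
        self.verifier = RandomForestClassifier(n_estimators=200)
        self.verifier.fit(X_train.reshape(len(X_train), -1), y_train)

    def predict(self, x: np.ndarray):
        primary = self.deep_model.predict(x)             # class labels
        secondary = self.verifier.predict(x.reshape(len(x), -1))
        suspicious = primary != secondary                # mismatch => possible attack
        return primary, suspicious
```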

39 pages, 6237 KiB  
Article
Investigating Generalized Performance of Data-Constrained Supervised Machine Learning Models on Novel, Related Samples in Intrusion Detection
by Laurens D’hooge, Miel Verkerken, Tim Wauters, Filip De Turck and Bruno Volckaert
Sensors 2023, 23(4), 1846; https://doi.org/10.3390/s23041846 - 7 Feb 2023
Cited by 10 | Viewed by 2389
Abstract
Recently proposed methods in intrusion detection are iterating on machine learning methods as a potential solution. These novel methods are validated on one or more datasets from a sparse collection of academic intrusion detection datasets. Their recognition as improvements to the state-of-the-art is largely dependent on whether they can demonstrate a reliable increase in classification metrics compared to similar works validated on the same datasets. Whether these increases are meaningful outside of the training/testing datasets is rarely asked and never investigated. This work aims to demonstrate that strong general performance does not typically follow from strong classification on the current intrusion detection datasets. Binary classification models from a range of algorithmic families are trained on the attack classes of CSE-CIC-IDS2018, a state-of-the-art intrusion detection dataset. After establishing baselines for each class at various points of data access, the same trained models are tasked with classifying samples from the corresponding attack classes in CIC-IDS2017, CIC-DoS2017 and CIC-DDoS2019. Contrary to what the baseline results would suggest, the models have rarely learned a generally applicable representation of their attack class. Stability and predictability of generalized model performance are central issues for all methods on all attack classes. Focusing only on the three best-in-class models in terms of interdataset generalization, reveals that for network-centric attack classes (brute force, denial of service and distributed denial of service), general representations can be learned with flat losses in classification performance (precision and recall) below 5%. Other attack classes vary in generalized performance from stark losses in recall (−35%) with intact precision (98+%) for botnets to total degradation of precision and moderate recall loss for Web attack and infiltration models. The core conclusion of this article is a warning to researchers in the field. Expecting results of proposed methods on the test sets of state-of-the-art intrusion detection datasets to translate to generalized performance is likely a serious overestimation. Four proposals to reduce this overestimation are set out as future work directions. Full article
(This article belongs to the Special Issue Harnessing Machine Learning and AI in Cybersecurity)
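
The evaluation protocol amounts to training on one dataset and scoring on a related one, as in the sketch below. The CSV paths and the "Label" column name are assumptions (the CIC datasets ship as flow-feature CSVs whose exact headers vary), and the random forest is one of several algorithm families the paper considers.

```python
# Sketch of interdataset generalization testing under assumed file names
# and label conventions; not the authors' exact pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

def load_flows(path: str):
    df = pd.read_csv(path)
    y = (df["Label"] != "BENIGN").astype(int)            # binary: attack vs. benign
    X = df.drop(columns=["Label"]).select_dtypes("number").fillna(0)
    return X, y

X_train, y_train = load_flows("cse-cic-ids2018_dos.csv")  # hypothetical file
X_test, y_test = load_flows("cic-ids2017_dos.csv")        # hypothetical file

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
pred = clf.predict(X_test[X_train.columns])               # align feature order
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```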

32 pages, 18936 KiB  
Article
Development of a Machine-Learning Intrusion Detection System and Testing of Its Performance Using a Generative Adversarial Network
by Andrei-Grigore Mari, Daniel Zinca and Virgil Dobrota
Sensors 2023, 23(3), 1315; https://doi.org/10.3390/s23031315 - 24 Jan 2023
Cited by 9 | Viewed by 5719
Abstract
Intrusion detection and prevention are two of the most important issues to solve in network security infrastructure. Intrusion detection systems (IDSs) protect networks by using patterns to detect malicious traffic. As attackers have tried to dissimulate traffic in order to evade the rules applied, several machine learning-based IDSs have been developed. In this study, we focused on one such model involving several algorithms and used the NSL-KDD dataset as a benchmark to train and evaluate its performance. We demonstrate a way to create adversarial instances of network traffic that can be used to evade detection by a machine learning-based IDS. Moreover, this traffic can be used for training in order to improve performance in the case of new attacks. Thus, a generative adversarial network (GAN)—i.e., an architecture based on a deep-learning algorithm capable of creating generative models—was implemented. Furthermore, we tested the IDS performance using the generated adversarial traffic. The results showed that, even in the case of the GAN-generated traffic (which could successfully evade IDS detection), by using the adversarial traffic in the testing process, we could improve the machine learning-based IDS performance. Full article
(This article belongs to the Special Issue Harnessing Machine Learning and AI in Cybersecurity)
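
A minimal GAN over tabular flow features, sketched below, illustrates the architecture at its simplest: a generator maps noise to feature vectors and a discriminator learns to separate real from generated flows. The feature dimension of 41 matches NSL-KDD's feature count, but the layer sizes are assumptions, not the paper's network.

```python
# Bare-bones GAN sketch for flow-feature vectors; illustrative only.
import torch
import torch.nn as nn

NOISE_DIM, FEAT_DIM = 32, 41                               # 41 = NSL-KDD features

G = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, FEAT_DIM), nn.Sigmoid())   # features in [0, 1]
D = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))                        # real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor):
    # Discriminator: real flows -> 1, generated flows -> 0.
    fake = G(torch.randn(len(real_batch), NOISE_DIM)).detach()
    loss_d = bce(D(real_batch), torch.ones(len(real_batch), 1)) + \
             bce(D(fake), torch.zeros(len(fake), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator into labeling fakes as real.
    fake = G(torch.randn(len(real_batch), NOISE_DIM))
    loss_g = bce(D(fake), torch.ones(len(fake), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```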

28 pages, 1036 KiB  
Article
ReinforSec: An Automatic Generator of Synthetic Malware Samples and Denial-of-Service Attacks through Reinforcement Learning
by Aldo Hernandez-Suarez, Gabriel Sanchez-Perez, Linda K. Toscano-Medina, Hector Perez-Meana, Jesus Olivares-Mercado, Jose Portillo-Portillo, Gibran Benitez-Garcia, Ana Lucila Sandoval Orozco and Luis Javier García Villalba
Sensors 2023, 23(3), 1231; https://doi.org/10.3390/s23031231 - 20 Jan 2023
Cited by 3 | Viewed by 3369
Abstract
In recent years, cybersecurity has been strengthened through the adoption of processes, mechanisms and rapid sources of indicators of compromise in critical areas. Among the most latent challenges are the detection, classification and eradication of malware and Denial of Service Cyber-Attacks (DoS). The literature has presented different ways to obtain and evaluate malware- and DoS-cyber-attack-related instances, either from a technical point of view or by offering ready-to-use datasets. However, acquiring fresh, up-to-date samples requires an arduous process of exploration, sandbox configuration and mass storage, which may ultimately result in an unbalanced or under-represented set. Synthetic sample generation has shown that the cost associated with setting up controlled environments and time spent on sample evaluation can be reduced. Nevertheless, the process is performed when the observations already belong to a characterized set, totally detached from a real environment. In order to solve the aforementioned, this work proposes a methodology for the generation of synthetic samples of malicious Portable Executable binaries and DoS cyber-attacks. The task is performed via a Reinforcement Learning engine, which learns from a baseline of different malware families and DoS cyber-attack network properties, resulting in new, mutated and highly functional samples. Experimental results demonstrate the high adaptability of the outputs as new input datasets for different Machine Learning algorithms. Full article
(This article belongs to the Special Issue Harnessing Machine Learning and AI in Cybersecurity)
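
The sketch below illustrates the reinforcement-learning loop at a bandit level of simplification: an agent picks byte-level mutations, receives a reward when the mutated sample evades a detector, and updates its action values. Every name here (the action set, the toy detector, the reward) is a hypothetical stand-in, not ReinforSec's actual interface.

```python
# Heavily simplified, hypothetical sketch of RL-driven sample mutation.
import random

ACTIONS = ["append_bytes", "pad_tail", "flip_flag_byte"]   # hypothetical mutations

def mutate(sample: bytes, action: str) -> bytes:
    if action == "append_bytes":
        return sample + bytes(random.randrange(256) for _ in range(16))
    if action == "pad_tail":
        return sample + b"\x00" * 64
    return sample[:-1] + bytes([sample[-1] ^ 0x01])        # flip_flag_byte

def detector(sample: bytes) -> bool:
    # Placeholder stand-in for the ML detector the agent would query.
    return len(sample) < 200

q = {a: 0.0 for a in ACTIONS}                              # action-value table
EPS, ALPHA = 0.2, 0.3
sample = b"MZ" + b"\x00" * 100                             # toy stand-in for a PE

for _ in range(50):
    a = random.choice(ACTIONS) if random.random() < EPS else max(q, key=q.get)
    candidate = mutate(sample, a)
    r = 0.0 if detector(candidate) else 1.0                # reward for evasion
    q[a] += ALPHA * (r - q[a])                             # bandit-style update
    if r:                                                  # keep evasive mutants
        sample = candidate

print(q)
```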
