Special Issue "Machine Learning for Cybersecurity Threats, Challenges, and Opportunities III"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 June 2023

Special Issue Editor

Prof. Dr. Luis Javier García Villalba
Department of Software Engineering and Artificial Intelligence (DISIA), Faculty of Computer Science and Engineering, Office 431, Universidad Complutense de Madrid (UCM), 28040 Madrid, Spain
Interests: computer security; cybersecurity; privacy; information security; cryptography; intrusion detection; malware; trust; anonymity

Special Issue Information

Dear Colleagues,

Cybersecurity has become a major priority for every organization. The right controls and procedures must be put in place to detect potential attacks and protect against them. However, attackers will always outnumber defenders, and new threats are discovered daily, making it increasingly difficult for current solutions to cope with the large volumes of data that must be analyzed. Machine-learning systems can be trained to recognize attacks that resemble known ones; in this way, we can detect even the first intrusions of their kind and develop better security measures.

The sophistication of threats has also increased substantially. Sophisticated zero-day attacks may go undetected for months at a time. Attack patterns may be engineered to unfold over extended periods, making them very difficult for traditional intrusion-detection technologies to detect. Even worse, new attack tools and strategies can now be developed using adversarial machine learning techniques, requiring a rapid co-evolution of defenses that matches the speed and sophistication of machine-learning-based offensive techniques. Motivated by these challenges, this Special Issue aims to provide a forum for researchers from academia and industry to communicate their latest results on theoretical advances and industrial case studies that combine machine learning techniques, such as reinforcement learning, adversarial machine learning, and deep learning, with significant problems in cybersecurity. Research papers may focus on offensive or defensive applications of machine learning to security. The potential topics of interest to this Special Issue are listed below. Submissions may present original research, substantial dataset collection and benchmarking efforts, or critical surveys.

Potential topics include, but are not limited to, the following:

  • Adversarial training and defensive distillation;
  • Attacks against machine learning;
  • Black-box attacks against machine learning;
  • Challenges of machine learning for cybersecurity;
  • Ethics of machine learning for cybersecurity applications;
  • Generative adversarial models;
  • Graph representation learning;
  • Machine-learning forensics;
  • Machine-learning threat intelligence;
  • Malware detection;
  • Neural graph learning;
  • One-shot learning; continuous learning;
  • Scalable machine learning for cybersecurity;
  • Steganography and steganalysis based on machine-learning techniques;
  • Strengths and shortcomings of machine learning for cybersecurity.

Prof. Dr. Luis Javier García Villalba
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can use the submission form to submit their manuscript. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2300 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Article
A Study on Detection of Malicious Behavior Based on Host Process Data Using Machine Learning
Appl. Sci. 2023, 13(7), 4097; https://doi.org/10.3390/app13074097 - 23 Mar 2023
Abstract
With the rapid increase in the number of cyber-attacks, detecting and preventing malicious behavior has become more important than ever before. In this study, we propose a method for detecting and classifying malicious behavior in host process data using machine learning algorithms. One of the challenges in this study is dealing with high-dimensional and imbalanced data. To address this, we first preprocessed the data using Principal Component Analysis (PCA) and Uniform Manifold Approximation and Projection (UMAP) to reduce the dimensions of the data and visualize the distribution. We then used the Adaptive Synthetic (ADASYN) and Synthetic Minority Over-sampling Technique (SMOTE) to handle the imbalanced data. We trained and evaluated the performance of the models using various machine learning algorithms, such as K-Nearest Neighbor, Naive Bayes, Random Forest, Autoencoder, and Memory-Augmented Deep Autoencoder (MemAE). Our results show that the preprocessed datasets using both ADASYN and SMOTE significantly improved the performance of all models, achieving higher precision, recall, and F1-Score values. Notably, the best performance was obtained when using the preprocessed dataset (SMOTE) with the MemAE model, yielding an F1-Score of 1.00. The evaluation was also conducted by measuring the Area Under the Receiver Operating Characteristic Curve (AUROC), which showed that all models performed well with an AUROC of over 90%. Our proposed method provides a promising approach for detecting and classifying malicious behavior in host process data using machine learning algorithms, which can be used in various fields such as anomaly detection and medical diagnosis.
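The oversampling step described in this abstract can be illustrated with a minimal sketch of the SMOTE idea: new minority-class points are synthesized by interpolating between a minority sample and one of its nearest minority neighbors. The toy numeric data below is an illustrative assumption, not the study's host process features, and the helper function is hypothetical:

```python
# Minimal sketch of the SMOTE idea on toy data (hypothetical example,
# not the study's actual preprocessing pipeline).
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Synthesize n_new minority samples by interpolating between neighbors."""
    rng = rng if rng is not None else np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # k nearest, skipping the sample itself
        j = rng.choice(neighbors)
        lam = rng.random()                   # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_major = rng.normal(0, 1, size=(100, 4))   # majority class (e.g. benign events)
X_minor = rng.normal(3, 1, size=(10, 4))    # minority class (e.g. malicious events)

X_syn = smote_oversample(X_minor, n_new=90, rng=rng)
X_balanced_minor = np.vstack([X_minor, X_syn])   # minority class now matches majority size
```

Because each synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the minority region rather than drifting into majority territory.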

Article
Leveraging Adversarial Samples for Enhanced Classification of Malicious and Evasive PDF Files
Appl. Sci. 2023, 13(6), 3472; https://doi.org/10.3390/app13063472 - 08 Mar 2023
Abstract
The Portable Document Format (PDF) is considered one of the most popular formats due to its flexibility and portability across platforms. Although people have used machine learning techniques to detect malware in PDF files, the problem with these models is their weak resistance against evasion attacks, which constitutes a major security threat. The goal of this study is to introduce three machine learning-based systems that enhance malware detection in the presence of evasion attacks by substantially relying on evasive data to train malware and evasion detection models. To evaluate the robustness of the proposed systems, we used two testing datasets, a real dataset containing around 100,000 PDF samples and an evasive dataset containing 500,000 samples that we generated. We compared the results of the proposed systems to a baseline model that was not adversarially trained. When tested against the evasive dataset, the proposed systems provided an increase of around 80% in the F1-score compared to the baseline. This demonstrates the value of the proposed approaches for dealing with evasive attacks.
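The core idea of this abstract, folding evasive samples (with their true malicious labels) into the training set so the detector learns to resist evasion, can be sketched roughly as follows. The numeric features and the perturbation model standing in for "evasive PDFs" are illustrative assumptions, not the paper's actual feature extraction:

```python
# Rough sketch of evasion-aware (adversarial) training on toy features
# (hypothetical stand-in for extracted PDF features, not the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Benign samples cluster near 0, malicious near 1 (toy feature space).
benign = rng.normal(0.0, 0.3, size=(200, 8))
malicious = rng.normal(1.0, 0.3, size=(200, 8))

# Evasive variants: malicious samples perturbed toward the benign region.
evasive = malicious - rng.uniform(0.4, 0.8, size=malicious.shape)

X_base = np.vstack([benign, malicious])
y_base = np.array([0] * 200 + [1] * 200)

# Baseline detector: never sees evasive samples during training.
baseline = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_base, y_base)

# Hardened detector: evasive samples are added with the malicious label.
X_adv = np.vstack([X_base, evasive])
y_adv = np.concatenate([y_base, np.ones(200, dtype=int)])
hardened = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_adv, y_adv)

# Fraction of evasive samples each model still flags as malicious.
base_recall = baseline.predict(evasive).mean()
hard_recall = hardened.predict(evasive).mean()
```

On this toy setup the hardened model recovers most of the evasive samples that slip past the baseline, which mirrors the qualitative effect the study reports at scale.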
