AI and Cybersecurity: Emerging Trends and Key Challenges

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2026

Special Issue Editors


Guest Editor
Prof. Dr. Savitri Bevinakoppa
School of IT, Melbourne Institute of Technology, Melbourne, VIC 3000, Australia
Interests: computer networks; network security; cybersecurity; data analytics

Guest Editor
Prof. Dr. Gang Li
School of Information Technology, Deakin University, Melbourne, VIC 3125, Australia
Interests: data management; data science; machine learning; cybersecurity and privacy

Special Issue Information

Dear Colleagues,

This Special Issue, entitled "AI and Cybersecurity: Emerging Trends and Key Challenges", aims to explore the dynamic intersection of artificial intelligence and cybersecurity, highlighting both their transformative potential and the pressing risks associated with their convergence. As AI technologies become increasingly embedded in digital infrastructures, they offer powerful tools for threat detection, risk assessment, and automated response. However, they also introduce novel vulnerabilities and ethical concerns that demand rigorous scrutiny. This Special Issue invites the submission of original research, reviews, and case studies that address key challenges such as adversarial AI, secure machine learning, privacy-preserving algorithms, and the role of AI in cyber defense and resilience. Contributions are encouraged from academia, industry, and government to foster a multidisciplinary dialogue on securing AI systems and leveraging AI for robust cybersecurity. By showcasing cutting-edge developments and critical perspectives, this Special Issue seeks to advance our understanding and guide future innovation at the nexus of AI and cybersecurity.

Suggested topics of interest for this Special Issue include the following:

  • AI-driven threat detection and response systems;
  • Adversarial machine learning and model robustness;
  • Privacy-preserving AI and federated learning;
  • Secure AI model deployment and lifecycle management;
  • AI in malware analysis and intrusion detection;
  • Ethical and regulatory challenges in AI-based cybersecurity;
  • Explainability and trust in AI for security applications;
  • Cybersecurity for AI systems and data pipelines;
  • Human–AI collaboration in cyber defense;
  • Emerging standards and frameworks for AI security.

We look forward to receiving your contributions.

Prof. Dr. Savitri Bevinakoppa
Prof. Dr. Gang Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • cybersecurity
  • adversarial machine learning
  • threat detection
  • privacy-preserving AI
  • secure AI systems
  • intrusion detection
  • ethical AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (2 papers)


Research

29 pages, 632 KB  
Article
ML-PSDFA: A Machine Learning Framework for Synthetic Log Pattern Synthesis in Digital Forensics
by Wafa Alorainy
Electronics 2025, 14(19), 3947; https://doi.org/10.3390/electronics14193947 - 6 Oct 2025
Abstract
This study introduces the Machine Learning (ML)-Driven Pattern Synthesis for Digital Forensics in Synthetic Log Analysis (ML-PSDFA) framework to address critical gaps in digital forensics, including the reliance on real-world data, limited pattern diversity, and forensic integration challenges. A key innovation is the introduction of a novel temporal forensics loss (L_TFL) in the Synthetic Attack Pattern Generator (SAPG), which enhances the preservation of temporal sequences in synthetic logs that are crucial for forensic analysis. The framework employs the SAPG with hybrid seed data (UNSW-NB15 and CICIDS2017) to create 500,000 synthetic log entries using Google Colab, achieving a realism score of 0.96, a temporal consistency score of 0.90, and an entropy of 4.0. The methodology uses a three-layer architecture that integrates data generation, pattern analysis, and forensic training, combining TimeGAN, XGBoost classification with hyperparameter tuning via Optuna, and reinforcement learning (RL) to optimize the extraction of evidence. Owing to the enhanced synthetic data quality and advanced modeling, the results show an average classification precision of 98.5% (best fold 98.7%), outperforming previously reported approaches. Feature importance analysis highlights timestamps (0.40) and event types (0.30), while the RL workflow reduces false positives by 17% over 1000 episodes, in line with RL benchmarks. The temporal forensics loss improves the realism score from 0.92 to 0.96 and introduces a temporal consistency score of 0.90, demonstrating enhanced forensic relevance. This work presents a scalable and accessible training platform for legally constrained environments, as well as a novel RL-based evidence extraction method. Limitations include the lack of real-system validation and resource constraints. Future work will explore dynamic reward tuning and simulated benchmarks to enhance precision and generalizability. Full article
(This article belongs to the Special Issue AI and Cybersecurity: Emerging Trends and Key Challenges)
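The hyperparameter-tuned classification stage the abstract mentions can be sketched in a few lines. The paper pairs XGBoost with Optuna; as an illustrative stand-in (not the authors' code), the sketch below uses scikit-learn's GradientBoostingClassifier with RandomizedSearchCV, and the "log features" (inter-event time, event-type code, payload-size score) are synthetic placeholders, not the ML-PSDFA feature set.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

rng = np.random.default_rng(42)
n = 600
# Placeholder log-derived features (hypothetical, for illustration only).
X = np.column_stack([
    rng.exponential(1.0, n),   # inter-event time (timestamp-derived feature)
    rng.integers(0, 5, n),     # event-type code
    rng.normal(0.0, 1.0, n),   # e.g. payload-size z-score
])
# Toy "attack pattern" label: short inter-event gaps of certain event types.
y = ((X[:, 0] < 0.5) & (X[:, 1] >= 3)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Randomized search over a small grid, scored on precision (the metric
# the study reports); Optuna would explore this space adaptively instead.
search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [2, 3, 4],
        "learning_rate": [0.05, 0.1, 0.2],
    },
    n_iter=8, cv=3, scoring="precision", random_state=0,
)
search.fit(X_tr, y_tr)
test_precision = search.score(X_te, y_te)
print(search.best_params_, f"held-out precision: {test_precision:.2f}")
```

With a deterministic toy label the tuned model separates the classes easily; the point is only the shape of the pipeline (feature matrix, search space, precision scoring), not the reported 98.5% figure.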

20 pages, 620 KB  
Article
Discriminative Regions and Adversarial Sensitivity in CNN-Based Malware Image Classification
by Anish Roy and Fabio Di Troia
Electronics 2025, 14(19), 3937; https://doi.org/10.3390/electronics14193937 - 4 Oct 2025
Abstract
The escalating prevalence of malware poses a significant threat to digital infrastructure, demanding robust yet efficient detection methods. In this study, we evaluate multiple Convolutional Neural Network (CNN) architectures, including a basic CNN, LeNet, AlexNet, GoogLeNet, and DenseNet, on a dataset of 11,000 malware images spanning 452 families. Our experiments demonstrate that CNN models can achieve reliable classification performance on both multiclass and binary tasks. However, we also uncover a critical weakness: even minimal image perturbations, such as modification of fewer than 1% of an image's pixels, drastically degrade accuracy, revealing the fragility of CNNs in adversarial settings. A key contribution of this work is a spatial analysis of malware images, revealing that discriminative features concentrate disproportionately in the bottom-left quadrant. This spatial bias likely reflects semantic structure, as malware payload information often resides near the end of binary files when rasterized. Notably, models trained on this region outperform those trained on other regions, underscoring the importance of spatial awareness in malware classification. Taken together, our results show that CNN-based malware classifiers are simultaneously effective and vulnerable: they learn strong representations but are sensitive to both subtle perturbations and positional bias. These findings highlight the need for future detection systems that combine robustness to noise with resilience against spatial distortions to ensure reliability in real-world adversarial environments. Full article
(This article belongs to the Special Issue AI and Cybersecurity: Emerging Trends and Key Challenges)
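The perturbation setting described above is easy to sketch. The snippet below (an illustrative stand-in, not the authors' code) rasterizes a byte string into a grayscale image — the common malware-imaging step — and then randomly alters fewer than 1% of its pixels, the scale of change the study reports as sufficient to degrade CNN accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def bytes_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Rasterize a binary into a 2-D grayscale (uint8) image."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = len(arr) // width
    return arr[: rows * width].reshape(rows, width).copy()

def perturb(img: np.ndarray, fraction: float = 0.01) -> np.ndarray:
    """Overwrite at most `fraction` of the pixels with random values."""
    out = img.copy()
    n = max(1, int(img.size * fraction))
    idx = rng.choice(img.size, size=n, replace=False)
    out.flat[idx] = rng.integers(0, 256, size=n)
    return out

# A stand-in "binary" of 4096 random bytes, in place of a real sample.
img = bytes_to_image(bytes(rng.integers(0, 256, size=4096).tolist()))
adv = perturb(img, fraction=0.01)
changed = float(np.mean(img != adv))
print(f"pixels changed: {changed:.2%}")
```

The same rasterization also makes the paper's spatial observation concrete: the bottom-left quadrant of the image is simply `img[img.shape[0] // 2:, : img.shape[1] // 2]`, i.e. the later portion of the file, where payload data tends to sit.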
