Generative Artificial Intelligence (AI) for Cybersecurity

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Cybersecurity".

Deadline for manuscript submissions: 31 December 2025

Special Issue Editors


Dr. Raha Moraffah
Guest Editor
Computer Science Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
Interests: machine learning; artificial intelligence; responsible AI; trustworthy AI; causal inference; data mining

Dr. Li Yang
Guest Editor
Faculty of Business and Information Technology, Ontario Tech University, Oshawa, ON L1G 0C5, Canada
Interests: cybersecurity; machine learning; deep learning; AutoML; model optimization; network data analytics; internet of things (IoT); 5G/6G networks; intrusion detection; anomaly detection; concept drift; online learning; continual learning; adversarial machine learning

Special Issue Information

Dear Colleagues,

In recent years, the prevalence of cyber threats and attacks has escalated dramatically, posing serious challenges to global security. These threats take many forms, including data breaches, phishing schemes, identity theft, misinformation and disinformation campaigns, financial fraud, and attacks on critical infrastructure, and they affect countless individuals and organizations. The growing sophistication and frequency of these cyber-attacks underscore the urgent need for advanced cybersecurity measures and constant vigilance in the digital landscape.

Cybersecurity encompasses the technologies and practices developed to protect computer systems, networks, and data from digital attacks, unauthorized access, and damage. It includes protective measures such as encryption, firewalls, and continuous system monitoring to maintain the safety and privacy of data. In today’s digital world, generative artificial intelligence (generative AI) is becoming a powerful tool in cybersecurity: it helps to automate complicated security tasks, assists in identifying and analyzing cyber threats, and can simulate advanced cyber-attacks to improve training and readiness. Furthermore, generative AI plays a role in creating defense strategies against new types of cyber threats, including those involving misinformation and deepfakes.

This Special Issue seeks original, unpublished articles that address recent advances in artificial intelligence, with a special focus on generative AI for cybersecurity. Authors are invited to submit manuscripts addressing the development of artificial intelligence and generative models for simulating cyber threats, designing cyber defenses, and conducting cyber forensic analysis. Technical papers, reviews, surveys, and case studies are encouraged. Topics of interest include, but are not limited to, the following:

  • Generative models for threat simulation and cyber-attack training;
  • Countermeasures and defenses against AI-generated cyber threats;
  • Generative AI in cyber forensics;
  • Integration of AI with existing cybersecurity frameworks;
  • AI-driven threat detection and analysis;
  • Generative AI in risk assessment;
  • Ethical implications of AI in security;
  • Responsible AI in cybersecurity;
  • Adversarial machine learning;
  • Generative AI for misinformation/disinformation;
  • Deepfake detection and mitigation;
  • Content authentication technologies.

Dr. Raha Moraffah
Dr. Li Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • generative AI
  • cybersecurity
  • adversarial machine learning
  • responsible AI
  • cyber-attacks
  • cyber defenses
  • risk assessment
  • misinformation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

19 pages, 3479 KiB  
Article
Generative AI-Enhanced Intelligent Tutoring System for Graduate Cybersecurity Programs
by Madhav Mukherjee, John Le and Yang-Wai Chow
Future Internet 2025, 17(4), 154; https://doi.org/10.3390/fi17040154 - 31 Mar 2025
Abstract
Due to the widespread applicability of generative artificial intelligence, we have seen it adopted across many areas of education, providing universities with new opportunities, particularly in cybersecurity education. With the industry facing a skills shortage, this paper explores the use of generative artificial intelligence in higher cybersecurity education as an intelligent tutoring system to enhance factors leading to positive student outcomes. Despite its success in content generation and assessment within cybersecurity, the field’s multidisciplinary nature presents additional challenges to scalability and generalisability. We propose a solution using agents to orchestrate specialised large language models and demonstrate its applicability in graduate-level cybersecurity topics offered at a leading Australian university. We aim to show a generalisable and scalable solution to diversified educational paradigms, highlighting its relevant features, and a method to evaluate the quality of content as well as the general effectiveness of the intelligent tutoring system on subjective factors aligned with positive student outcomes. We further explore areas for future research in model efficiency, privacy, security, and scalability.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
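
To make the orchestration idea concrete, the following Python sketch routes a student question to a specialised topic tutor. It is a minimal illustration assuming a keyword-based router and stubbed expert functions (crypto_expert, network_security_expert, digital_forensics_expert); the paper’s actual system presumably uses LLM-backed agents and is not reproduced here.

# Illustrative sketch (not the authors' implementation): an orchestrator that
# delegates a student's question to a specialised "expert" per cybersecurity
# topic. Real systems would back each expert with a fine-tuned or prompted LLM;
# here each expert is a stub function so the example runs standalone.

from typing import Callable, Dict

def crypto_expert(question: str) -> str:
    return f"[crypto tutor] Let us work through: {question}"

def network_security_expert(question: str) -> str:
    return f"[network security tutor] Consider the protocol layer involved in: {question}"

def digital_forensics_expert(question: str) -> str:
    return f"[forensics tutor] Start by preserving evidence relevant to: {question}"

# Topic registry: the orchestrator consults this to delegate questions.
EXPERTS: Dict[str, Callable[[str], str]] = {
    "cryptography": crypto_expert,
    "network security": network_security_expert,
    "digital forensics": digital_forensics_expert,
}

def route(question: str) -> str:
    """Pick the expert whose topic keywords best match the question.

    A production orchestrator would typically ask an LLM to classify the
    question; simple keyword overlap keeps this sketch self-contained.
    """
    scores = {
        topic: sum(word in question.lower() for word in topic.split())
        for topic in EXPERTS
    }
    best_topic = max(scores, key=scores.get)
    if scores[best_topic] == 0:
        return "No specialised tutor matched; falling back to a general model."
    return EXPERTS[best_topic](question)

if __name__ == "__main__":
    print(route("How does a hash collision weaken cryptography?"))
    print(route("What artifacts matter in digital forensics of a USB drive?"))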

26 pages, 587 KiB  
Article
GDPR and Large Language Models: Technical and Legal Obstacles
by Georgios Feretzakis, Evangelia Vagena, Konstantinos Kalodanis, Paraskevi Peristera, Dimitris Kalles and Athanasios Anastasiou
Future Internet 2025, 17(4), 151; https://doi.org/10.3390/fi17040151 - 28 Mar 2025
Abstract
Large Language Models (LLMs) have revolutionized natural language processing but present significant technical and legal challenges when confronted with the General Data Protection Regulation (GDPR). This paper examines the complexities involved in reconciling the design and operation of LLMs with GDPR requirements. In particular, we analyze how key GDPR provisions—including the Right to Erasure, Right of Access, Right to Rectification, and restrictions on Automated Decision-Making—are challenged by the opaque and distributed nature of LLMs. We discuss issues such as the transformation of personal data into non-interpretable model parameters, difficulties in ensuring transparency and accountability, and the risks of bias and data over-collection. Moreover, the paper explores potential technical solutions such as machine unlearning, explainable AI (XAI), differential privacy, and federated learning, alongside strategies for embedding privacy-by-design principles and automated compliance tools into LLM development. The analysis is further enriched by considering the implications of emerging regulations like the EU’s Artificial Intelligence Act. In addition, we propose a four-layer governance framework that addresses data governance, technical privacy enhancements, continuous compliance monitoring, and explainability and oversight, thereby offering a practical roadmap for GDPR alignment in LLM systems. Through this comprehensive examination, we aim to bridge the gap between the technical capabilities of LLMs and the stringent data protection standards mandated by GDPR, ultimately contributing to more responsible and ethical AI practices.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
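
As a rough illustration of how the four governance layers named in the abstract (data governance, technical privacy enhancements, continuous compliance monitoring, and explainability and oversight) might be operationalised, the Python sketch below encodes them as a machine-checkable audit list. The specific checks and the Deployment fields are illustrative assumptions, not requirements taken from the paper or from GDPR.

# A minimal sketch, not the authors' tooling: the four governance layers named
# in the abstract represented as a checklist that a team could run against an
# LLM deployment. The individual checks are assumed examples.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Deployment:
    """Toy description of an LLM deployment under review."""
    training_data_inventoried: bool
    uses_differential_privacy: bool
    erasure_requests_honoured: bool
    decisions_explained_to_users: bool

@dataclass
class Check:
    name: str
    passed: Callable[[Deployment], bool]

# Layers from the abstract: data governance, technical privacy enhancements,
# continuous compliance monitoring, explainability and oversight.
FRAMEWORK: Dict[str, List[Check]] = {
    "data governance": [
        Check("personal data in the training corpus is inventoried",
              lambda d: d.training_data_inventoried),
    ],
    "technical privacy enhancements": [
        Check("differential privacy (or a comparable technique) is applied",
              lambda d: d.uses_differential_privacy),
    ],
    "continuous compliance monitoring": [
        Check("Right-to-Erasure requests are tracked and honoured",
              lambda d: d.erasure_requests_honoured),
    ],
    "explainability and oversight": [
        Check("automated decisions are explained to affected users",
              lambda d: d.decisions_explained_to_users),
    ],
}

def audit(deployment: Deployment) -> None:
    """Print a pass/fail line for every check in every governance layer."""
    for layer, checks in FRAMEWORK.items():
        for check in checks:
            status = "PASS" if check.passed(deployment) else "FAIL"
            print(f"[{layer}] {check.name}: {status}")

if __name__ == "__main__":
    audit(Deployment(True, False, True, False))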

28 pages, 368 KiB  
Article
A CIA Triad-Based Taxonomy of Prompt Attacks on Large Language Models
by Nicholas Jones, Md Whaiduzzaman, Tony Jan, Amr Adel, Ammar Alazab and Afnan Alkreisat
Future Internet 2025, 17(3), 113; https://doi.org/10.3390/fi17030113 - 3 Mar 2025
Abstract
The rapid proliferation of Large Language Models (LLMs) across industries such as healthcare, finance, and legal services has revolutionized modern applications. However, their increasing adoption exposes critical vulnerabilities, particularly through adversarial prompt attacks that compromise LLM security. These prompt-based attacks exploit weaknesses in LLMs to manipulate outputs, leading to breaches of confidentiality, corruption of integrity, and disruption of availability. Despite their significance, existing research lacks a comprehensive framework to systematically understand and mitigate these threats. This paper addresses this gap by introducing a taxonomy of prompt attacks based on the Confidentiality, Integrity, and Availability (CIA) triad, an important cornerstone of cybersecurity. This structured taxonomy lays the foundation for a unique framework of prompt security engineering, which is essential for identifying risks, understanding their mechanisms, and devising targeted security protocols. By bridging this critical knowledge gap, the present study provides actionable insights that can enhance the resilience of LLMs and ensure their secure deployment in high-stakes and real-world environments.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
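
The sketch below shows one way a CIA-based taxonomy of prompt attacks could be represented in code. The attack classes and their mappings (for instance, system_prompt_extraction to confidentiality) are assumed examples for demonstration only; they are not the taxonomy defined in the paper.

# Illustrative sketch only: mapping hypothetical prompt-attack classes onto the
# CIA property they primarily threaten, then querying the mapping.

from enum import Enum

class CIAProperty(Enum):
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"

# Hypothetical attack classes and the property each one primarily threatens.
PROMPT_ATTACK_TAXONOMY = {
    "system_prompt_extraction": CIAProperty.CONFIDENTIALITY,
    "training_data_leakage_prompt": CIAProperty.CONFIDENTIALITY,
    "jailbreak_output_manipulation": CIAProperty.INTEGRITY,
    "indirect_prompt_injection": CIAProperty.INTEGRITY,
    "resource_exhaustion_prompt": CIAProperty.AVAILABILITY,
}

def attacks_threatening(prop: CIAProperty):
    """Return the attack classes in the toy taxonomy that target one property."""
    return [name for name, p in PROMPT_ATTACK_TAXONOMY.items() if p is prop]

if __name__ == "__main__":
    for prop in CIAProperty:
        print(prop.value, "->", attacks_threatening(prop))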

26 pages, 760 KiB  
Article
GenSQLi: A Generative Artificial Intelligence Framework for Automatically Securing Web Application Firewalls Against Structured Query Language Injection Attacks
by Vahid Babaey and Arun Ravindran
Future Internet 2025, 17(1), 8; https://doi.org/10.3390/fi17010008 - 31 Dec 2024
Abstract
The widespread adoption of web services has heightened exposure to cybersecurity threats, particularly SQL injection (SQLi) attacks that target the database layers of web applications. Traditional Web Application Firewalls (WAFs) often fail to keep pace with evolving attack techniques, necessitating adaptive defense mechanisms. This paper introduces a novel generative AI framework designed to enhance SQLi mitigation by leveraging Large Language Models (LLMs). The framework achieves two primary objectives: (1) generating diverse and validated SQLi payloads using in-context learning, thereby minimizing hallucinations, and (2) automating defense mechanisms by testing these payloads against a vulnerable web application secured by a WAF, classifying bypassing attacks, and constructing effective WAF security rules through generative AI techniques. Experimental results using the GPT-4o LLM demonstrate the framework’s efficacy: 514 new SQLi payloads were generated, 92.5% of which were validated against a MySQL database and 89% of which successfully bypassed the ModSecurity WAF equipped with the latest OWASP Core Rule Set. By applying our automated rule-generation methodology, 99% of previously successful attacks were effectively blocked with only 23 new security rules. In contrast, Google Gemini-Pro achieved a lower bypass rate of 56.6%, underscoring performance variability across LLMs. Future work could extend the proposed framework to autonomously defend against other web attacks, including Cross-Site Scripting (XSS), session hijacking, and specific Distributed Denial-of-Service (DDoS) attacks.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
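
The following Python sketch illustrates the generate-test-harden control flow described in the abstract. The waf_blocks and draft_modsecurity_rule helpers are hypothetical stubs standing in for the real WAF endpoint and the LLM rule-drafting step, so the example runs standalone but does not reproduce the GenSQLi framework itself.

# Hedged sketch of the control flow only: test candidate payloads against a
# stubbed WAF, collect those that bypass it, then hand them to a stubbed
# rule-drafting step. Both helpers are hypothetical placeholders.

from typing import List

def waf_blocks(payload: str) -> bool:
    """Stub for sending `payload` to the WAF-protected endpoint.

    A real harness would issue an HTTP request and inspect the response code;
    this stub pretends the WAF only catches the classic quote-OR pattern.
    """
    return "' or" in payload.lower()

def draft_modsecurity_rule(bypassing: List[str]) -> str:
    """Stub for the generative step that proposes a new WAF rule.

    In the paper's framework an LLM drafts the rule; here we emit a naive
    placeholder so the loop is demonstrable end to end.
    """
    return f"# proposed rule covering {len(bypassing)} bypassing payload(s)"

def harden(payloads: List[str]) -> str:
    """Run the test loop and return a drafted rule for the bypassing payloads."""
    bypassing = [p for p in payloads if not waf_blocks(p)]
    print(f"{len(bypassing)}/{len(payloads)} payloads bypassed the stub WAF")
    return draft_modsecurity_rule(bypassing)

if __name__ == "__main__":
    candidate_payloads = [
        "' OR 1=1 --",
        "1; SELECT SLEEP(5)",            # time-based variant the stub misses
        "UNION SELECT username FROM users",
    ]
    print(harden(candidate_payloads))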

20 pages, 1445 KiB  
Article
Empowering LLMs with Toolkits: An Open-Source Intelligence Acquisition Method
by Xinyang Yuan, Jiarong Wang, Haozhi Zhao, Tian Yan and Fazhi Qi
Future Internet 2024, 16(12), 461; https://doi.org/10.3390/fi16120461 - 7 Dec 2024
Abstract
The acquisition of cybersecurity threat intelligence is a critical task in the implementation of effective security defense strategies. Recently, advancements in large language model (LLM) technology have led to remarkable capabilities in natural language processing and understanding. In this paper, we introduce an LLM-based approach for open-source intelligence (OSINT) acquisition. This approach autonomously obtains OSINT based on user requirements, eliminating the need for manual scanning or querying, thus saving significant time and effort. To further address the knowledge limitations and timeliness challenges inherent in LLMs when handling threat intelligence, we propose a framework that integrates chain-of-thought techniques to assist LLMs in utilizing tools to acquire OSINT. Based on this framework, we have developed a threat intelligence acquisition agent capable of decomposing logical reasoning problems into multiple steps and gradually solving them using appropriate tools, along with a toolkit for the agent to dynamically access during the problem-solving process. To validate the effectiveness of our approach, we have designed four evaluation metrics to assess the agent’s performance and constructed a test set. Experimental results indicate that the agent achieves high accuracy rates in OSINT acquisition tasks, with a substantial improvement noted over its baseline large language model counterpart in specific intelligence acquisition scenarios.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
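
A minimal sketch of the tool-using agent pattern described in the abstract appears below: a planner decomposes a request into steps, and each step dispatches to a tool registered in a toolkit. The hard-coded plan function stands in for the LLM’s chain-of-thought reasoning, and the whois_lookup and cve_search tools are hypothetical examples rather than the authors’ toolkit.

# Minimal sketch of a tool-using OSINT agent loop: plan steps, dispatch each
# step to a registered tool, collect observations. Planner and tools are stubs.

from typing import Callable, Dict, List, Tuple

def whois_lookup(target: str) -> str:
    return f"(stub) WHOIS record for {target}"

def cve_search(target: str) -> str:
    return f"(stub) recent CVEs mentioning {target}"

# Toolkit the agent can access dynamically while solving a request.
TOOLKIT: Dict[str, Callable[[str], str]] = {
    "whois": whois_lookup,
    "cve_search": cve_search,
}

def plan(request: str) -> List[Tuple[str, str]]:
    """Stub planner: an LLM would produce this step list via chain-of-thought."""
    return [("whois", request), ("cve_search", request)]

def run_agent(request: str) -> List[str]:
    """Execute the planned steps one by one and gather the tool outputs."""
    observations = []
    for tool_name, argument in plan(request):
        result = TOOLKIT[tool_name](argument)   # dynamic tool dispatch
        observations.append(result)
    return observations

if __name__ == "__main__":
    for line in run_agent("example.org"):
        print(line)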

37 pages, 1164 KiB  
Article
Early Ransomware Detection with Deep Learning Models
by Matan Davidian, Michael Kiperberg and Natalia Vanetik
Future Internet 2024, 16(8), 291; https://doi.org/10.3390/fi16080291 - 11 Aug 2024
Cited by 3
Abstract
Ransomware is an increasingly prevalent type of malware that restricts access to the victim’s system or data until a ransom is paid. Traditional detection methods rely on analyzing the malware’s content, but these methods are ineffective against unknown or zero-day malware. Therefore, zero-day malware detection typically involves observing the malware’s behavior, specifically the sequence of application programming interface (API) calls it makes, such as reading and writing files or enumerating directories. While previous studies have used machine learning (ML) techniques to classify API call sequences, they have only considered the API call name. This paper systematically compares various subsets of API call features, different ML techniques, and context-window sizes to identify the optimal ransomware classifier. Our findings indicate that a context-window size of 7 is ideal, and the most effective ML techniques are CNN and LSTM. Additionally, augmenting the API call name with the operation result significantly enhances the classifier’s precision. Performance analysis suggests that this classifier can be effectively applied in real-time scenarios.
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
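
To illustrate the feature-windowing idea from the abstract, the Python sketch below slides a context window of size 7 over a trace of API calls and pairs each call name with its operation result. The toy trace and the token encoding are assumptions for demonstration; the CNN and LSTM classifiers that consume such windows in the paper are omitted.

# Sketch of context-window feature construction over an API-call trace.
# Window size 7 follows the abstract's finding; the trace itself is a toy.

from typing import List, Tuple

ApiEvent = Tuple[str, str]  # (api_call_name, operation_result)

def context_windows(trace: List[ApiEvent], size: int = 7) -> List[List[ApiEvent]]:
    """Return every contiguous window of `size` events from the trace."""
    if len(trace) < size:
        return []
    return [trace[i:i + size] for i in range(len(trace) - size + 1)]

def encode(window: List[ApiEvent]) -> List[str]:
    """Join name and result into one token per event, e.g. 'WriteFile|SUCCESS'."""
    return [f"{name}|{result}" for name, result in window]

if __name__ == "__main__":
    toy_trace: List[ApiEvent] = [
        ("FindFirstFile", "SUCCESS"), ("ReadFile", "SUCCESS"),
        ("CryptEncrypt", "SUCCESS"), ("WriteFile", "SUCCESS"),
        ("DeleteFile", "SUCCESS"), ("FindNextFile", "SUCCESS"),
        ("ReadFile", "ACCESS_DENIED"), ("WriteFile", "SUCCESS"),
    ]
    for w in context_windows(toy_trace, size=7):
        print(encode(w))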
