Next Article in Journal
Unplugged Activities for Teaching Decision Trees to Secondary Students—A Case Study Analysis Using the SOLO Taxonomy
Previous Article in Journal
Long Short-Term Memory Networks: A Comprehensive Survey
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Review

Large Language Models in Cybersecurity: A Survey of Applications, Vulnerabilities, and Defense Techniques

by
Niveen O. Jaffal
1,
Mohammed Alkhanafseh
1 and
David Mohaisen
2,*
1
Department of Computer Science, Birzeit University, Birzeit P.O. Box 14, Palestine
2
Department of Computer Science, University of Central Florida, Orlando, FL 32816, USA
*
Author to whom correspondence should be addressed.
AI 2025, 6(9), 216; https://doi.org/10.3390/ai6090216
Submission received: 17 July 2025 / Revised: 17 August 2025 / Accepted: 21 August 2025 / Published: 5 September 2025

Abstract

Large Language Models (LLMs) are transforming cybersecurity by enabling intelligent, adaptive, and automated approaches to threat detection, vulnerability assessment, and incident response. With their advanced language understanding and contextual reasoning, LLMs surpass traditional methods in tackling challenges across domains such as the Internet of Things (IoT), blockchain, and hardware security. This survey provides a comprehensive overview of LLM applications in cybersecurity, focusing on two core areas: (1) the integration of LLMs into key cybersecurity domains, and (2) the vulnerabilities of LLMs themselves, along with mitigation strategies. By synthesizing recent advancements and identifying key limitations, this work offers practical insights and strategic recommendations for leveraging LLMs to build secure, scalable, and future-ready cyber defense systems.

1. Introduction

Advances in machine learning and deep learning, particularly with the advent of transformer architectures [1], have driven the development of LLMs. These advanced Natural Language Processing (NLP) systems, characterized by their extensive parameterization, are trained on foundational tasks such as masked language modeling and autoregressive prediction. These training paradigms enable LLMs to process human language effectively, analyzing contextual semantics and probabilistic relationships across massive text datasets. LLMs exhibit four essential characteristics: (i) a deep understanding of the natural language context; (ii) the ability to generate human-like text; (iii) advanced contextual awareness, particularly in knowledge-intensive applications; (iv) strong instruction following capabilities that support problem solving and decision making. Prominent LLMs, including BERT [2], GPT-3.5 [3] and GPT-4 [4], PaLM [5], Claude [6], and Chinchilla [7], have demonstrated exceptional performance in various NLP tasks, such as language understanding, text generation, and reasoning. The adaptability of these models is particularly notable as they enable breakthroughs in downstream applications with minimal fine-tuning. These include open domain question answering [8], dialogue systems [9], and program synthesis [10], among others. Using rich linguistic representations and robust reasoning capabilities, LLMs are reshaping the way NLP challenges are addressed, paving the way for transformative advancements across various domains.
The increasing complexity and sophistication of cyber threats require innovative and adaptive approaches to strengthen cybersecurity mitigation, and LLMs have found many applications in the security domain. For instance, LLMs are known to generalize from human languages to other domains with minor modifications, making them ideal for automating the generation of security rules, associating cyber threats with one another, and even discovering new phenomena and threats previously unseen. These models are often deployed in adversarial and dynamic settings, where they may be subject to manipulation, misuse, or targeted attacks by both benign and malicious actors [11]. As a result, understanding the robustness of LLMs, mapping their potential attack surfaces, and developing effective mitigation strategies have become critical areas of concern for both researchers and practitioners. The current body of literature has explored various facets of LLMs in cybersecurity, ranging from specific applications and case studies to analyses of vulnerabilities and defense techniques.
Nonetheless, the existing literature remains fragmented, addressing individual domains, threats, or defense techniques in isolation. There is a clear need for a comprehensive and integrative analysis that synthesizes these diverse threads, highlights the current state of knowledge, identifies unresolved challenges, and outlines open research directions for the secure and effective integration of LLMs in modern cybersecurity practices. Motivated by these gaps, this survey conducts a thorough examination of the roles of LLMs in cybersecurity by reviewing recent literature. It synthesizes findings across various domains, evaluates the strengths and limitations of current approaches, and identifies ongoing challenges and new research directions. The analysis is structured around specific research questions and a comparative evaluation of prior work, with a focus on methodological rigor. The goal is to offer a reference for researchers and practitioners, highlighting key scientific and technical priorities for securely adopting LLMs in cybersecurity. To maintain brevity, abbreviations are defined at first mention. A comprehensive alphabetized list of all abbreviations used in this survey is provided in the “Abbreviations” section.

1.1. Research Questions

To effectively position our work in the context of previous studies, we formulate several broad research questions that we aim to address through this effort. RQ1: What are the key cybersecurity domains where LLMs address specific tasks and challenges within these domains, and how? The first research question focuses on the scope and nature of security tasks in which LLMs have been applied, with the aim of categorizing and understanding the breadth of security challenges addressed in different security domains. By analyzing previous studies, this question seeks to provide a detailed inventory of the various security tasks that utilize LLMs, offering insight into their adaptability, effectiveness, and impact within each domain. RQ2: What vulnerabilities are associated with LLMs in cybersecurity applications, and what strategies can be implemented to mitigate these risks and protect models? The second research question investigates the vulnerabilities of LLMs and explores defense techniques to improve their security. Analyzing potential attack vectors and prevention techniques provides a comprehensive understanding of the challenges and safeguards necessary to improve the resilience of these models in cybersecurity applications.

1.2. Contributions

This survey provides a comprehensive and integrative analysis of LLM-based applications, vulnerabilities, and defense techniques across eight cybersecurity domains: network security, software and system security, blockchain security, cloud security, threat intelligence, social engineering, critical infrastructure, and IoT security. Earlier studies explored individual aspects of LLMs in cybersecurity, including applications, vulnerabilities, and defenses. Our work integrates these three perspectives into a unified framework, offering a holistic understanding of how LLMs are deployed, how they can be attacked, and how they can be defended. For example, we explicitly map application scenarios such as vulnerability detection and smart contract auditing with their corresponding attack surfaces and documented defense strategies, providing practitioners with direct mapping not offered in earlier literature. To this end, the key contributions of this survey are summarized as follows:
  • Applications: We enumerate and evaluate the diverse applications of LLMs in cybersecurity, including hardware design security, intrusion detection, malware detection, and phishing prevention, while analyzing their capabilities in these different contexts to address how LLMs are used.
  • Vulnerabilities and Mitigation: We systematically enumerate and examine the vulnerabilities in LLMs with respect to their implications for security applications. Our exploration of such an attack surface encompasses aspects like prompt injection, jailbreaking attacks, data poisoning, and backdoor attacks. Moreover, we enumerate and assess the defense techniques that are in place or could be further deployed to reduce these risks and improve LLM security for those critical applications.
  • Potential Challenges: We identify the potential challenges that arise in the use of LLMs for specified cybersecurity tasks, calling for more attention and research actions from the community.

1.3. Organization

The remaining sections are structured as follows. Section 3 summarizes recent surveys on LLMs and their applications in various cybersecurity domains, highlighting previous work and positioning this survey within the literature on LLM applications and security. Section 4 explores the security domains enhanced by LLM innovations, analyzing their impact on cybersecurity applications and their effectiveness in addressing complex challenges. Section 5 investigates LLM vulnerabilities, categorizing threats and countermeasures while highlighting the need for proactive mitigation strategies to ensure secure deployment. Section 6 discusses the challenges and limitations of integrating LLMs into cybersecurity, considering both practical and theoretical aspects. Finally, Section 7 highlights the potential of LLMs in cybersecurity, summarizes key findings and defense strategies, and outlines future research directions.

2. Methodology

This survey followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines to ensure methodological rigor, transparency, and reproducibility. The article selection process followed four sequential phases: identification, screening, eligibility assessment, and inclusion with quantitative attrition data documented at each stage. Figure 1 (PRISMA flowchart) summarizes this process, and the detailed methodology is as follows:
  • Identification phase: In this phase, we systematically searched six major electronic databases and scientific repositories, including IEEE Xplore, ACM Digital Library, SpringerLink, ScienceDirect, MDPI, and arXiv. Search queries were carefully designed using keyword combinations explicitly targeting our scope: “Large Language Models,” “LLMs,” “Cybersecurity,” “Applications,” “Vulnerabilities,” and “Defense Techniques.” To reflect recent advancements, our searches were limited to studies published within the five-year period from 2021 to 2025, with particular emphasis on the most recent developments in the field. The process yielded 1021 records from databases, with no records identified from registers.
  • Screening phase: Before screening, 412 duplicate records were removed, leaving 609 records for title/abstract screening. Two independent reviewers screened each record, excluding 342 articles for the following reasons: off-topic content unrelated to cybersecurity-focused LLM applications (n = 256), publication types limited to editorials, commentaries, or book chapters (n = 48), and non-peer-reviewed or preprint-only sources lacking formal publication (n = 38). Disagreements between reviewers were resolved through discussion or consultation with a third reviewer.
  • Eligibility phase: The remaining 267 reports were sought for full-text retrieval, all of which were accessible. These were assessed against predefined inclusion and exclusion criteria:
    (a)
    Inclusion criteria: It was required that studies (1) explicitly discuss applications of LLMs in cybersecurity, (2) examine vulnerabilities or attacks targeting LLMs within cybersecurity contexts, (3) evaluate or propose defense techniques to secure LLMs, or (4) demonstrate empirical, analytical, or technical methodological rigor.
    (b)
    Exclusion criteria: This comprised (1) studies not clearly focused on the intersection of LLMs and cybersecurity, (2) purely theoretical articles without empirical validation or analytical evaluation, (3) articles lacking methodological transparency or reproducibility, and (4) short papers (<6 pages) lacking comprehensive evaluation. Based on these criteria, 44 reports were excluded for the following reasons: no empirical validation (n = 19), theoretical-only without methodological detail (n = 15), and short pages (n = 10). This phase was also conducted independently by two reviewers, with all disagreements resolved by consensus or through a third-party adjudicator to ensure consistency and precision.
  • Inclusion phase: Applying these strict inclusion criteria, a total of 223 studies were selected for comprehensive analysis and inclusion in our final review. These studies were categorized as follows: 129 studies focusing on the application of LLMs in cybersecurity domains; by timeline, including 23 studies from 2021, 38 from 2022, 63 from 2023, and 99 from 2024 to 2025; by attack type, covering backdoor (2 studies), data poisoning (6), prompt injection (2), and jailbreaking (2); by defense techniques, encompassing red teaming (5), content filtering (5), safety fine-tuning (6), model merging (6), and other defenses (17); and by publication venue, with 131 studies published in security-focused venues and 92 in artificial intelligence venues. Figure 2 presents the distribution of the 223 studies included in this survey, categorized by publication timeline, attack type, defense techniques, and publication venue. This classification highlights research trends, the variety of examined attack and defense techniques, and their distribution across both security and AI research domains, thus providing a clear and organized summary of the literature at the intersection of LLMs and cybersecurity.

3. Related Reviews

We present in this section several previously selected surveys that make significant contributions; these surveys encompass a detailed and thorough examination of key aspects related to LLMs, including the datasets utilized for training and fine-tuning these models, the vulnerabilities inherent to these models, and the strategies proposed for their mitigation. In addition, they highlight the innovative methodologies employed by LLMs to address complex security challenges, offering a comparative analysis of their efficacy in addressing issues across domains such as cloud computing, IoT, hardware, blockchain, software, and system security.
As shown in Table 1, this work provides a comprehensive analysis of LLM applications in cybersecurity, distinguishing itself from previous work by addressing critical gaps across five key security domains. Although existing surveys, such as [12,13], explore general LLM applications and vulnerabilities, they often lack detailed information on specific domains such as IoT, cloud, hardware, and blockchain security. Furthermore, ours bridges gaps in underexplored domains such as IoT and cloud environments, providing a more holistic view of LLM deployment in cybersecurity. This work advances the state-of-the-art by integrating broader perspectives with practical insights, positioning it as a critical addition to the evolving body of research. Finally, the work covers a range of recent advances and papers that are more timely and not covered in previous studies and surveys.
Table 1. A summary of the related work of LLM applications. Highlights study contribution, datasets used by the LLM for security use case, vulnerabilities associated with LLMs and defense techniques, optimized techniques for LLMs, and the following domains: ➀ Internet of things, ➁ cloud, ➂ hardware, ➃ blockchain, and ➄ software and system security.
Table 1. A summary of the related work of LLM applications. Highlights study contribution, datasets used by the LLM for security use case, vulnerabilities associated with LLMs and defense techniques, optimized techniques for LLMs, and the following domains: ➀ Internet of things, ➁ cloud, ➂ hardware, ➃ blockchain, and ➄ software and system security.
RefYearStudy AreaContributionDVODomains
[14]2023LLMs in SoCSurveys LLM potential, challenges, and outlook for SoC security.
[15]2024LLMs for blockchainCovers LLM use in auditing, anomaly detection, and vulnerability repair.
[12]2024LLM applicationsExplores use cases, dataset limitations, and mitigation strategies.
[13]2024GenAI in cybersecurityReviews GenAI attack applications in cybersecurity.
[16]2024LLM in cybersecurityHighlights LLM capabilities for solving key cybersecurity issues.
[17]2024SurveySurveys 42 models, their roles in cybersecurity, and known flaws.
Ours2025Apps, vulns., and defensesPresents LLM use across domains, identifies vulnerabilities, and proposes defenses.

4. Security Domains and Tasks Empowered by LLM-Based Innovations

The increasing complexity of cybersecurity threats has driven the adoption of LLMs across various security domains. This survey categorizes LLM applications into eight key domains: network, software and system, information and content, hardware, blockchain, cloud, incident response and threat intelligence, and IoT security. This classification provides a structural perspective on the diverse applications of LLMs in cybersecurity domains, offering deeper insights into their contributions to strengthening security robustness across real-time applications and domains. Figure 3 visualizes the distribution of LLM applications across cybersecurity domains. Moreover, Table 2 and Table 3 present LLM-based solutions for various cybersecurity tasks and use cases. LLMs have proven to be instrumental in improving efficiency, accuracy, and adaptability, marking a significant leap in modern cybersecurity defense strategies.

4.1. LLMs in Network Security

This section delves into the applications of LLMs in the field of network security. LLMs have proven to be versatile and powerful tools for addressing a wide range of tasks in this domain, including web fuzzing, traffic and intrusion detection, threat analysis, and penetration testing, which we review in the following subsections.
Takeaway LLMs enhance network security by improving intrusion and anomaly detection through in-context learning and graph-based techniques. In CTI, they automate intelligence extraction for real-time threat monitoring. Tools like GPTFuzzer refine web fuzzing by generating targeted test cases, while PentestGPT streamlines penetration testing through automated reconnaissance and exploit generation. These advancements boost efficiency, accuracy, and adaptability in network security operations.

4.1.1. Web Fuzzing

Web fuzzing is a mutation-based testing technique that incrementally generates test cases by leveraging coverage feedback obtained from instrumented web applications [12]. Given the critical importance of security in web applications, fuzzing is vital in identifying potential vulnerabilities. For example, Liang et al. [18] introduced GPTFuzzer, a tool built on an encoder-decoder architecture. GPTFuzzer effectively generated payloads targeting Web Application Firewalls (WAFs) to identify and test Structured Query Language injection (SQLi), Cross-Site Scripting (XSS), and Remote Code Execution (RCE) attacks. This was achieved through reinforcement learning, fine-tuning, and applying a KL divergence penalty, which helped overcome local optima during the payload generation process.
Similarly, Liu et al. [19] utilized an encoder–decoder architecture to design SQL injection detection test cases for web applications, translating user input into new test scenarios. Meng et al. [20] expanded the scope by employing LLMs to generate structured and sequential test inputs for network protocols that lack machine-readable formats. These advancements demonstrate the growing potential of machine learning and fuzzing in enhancing the security testing process for web applications.
Table 2. LLM-based models for cybersecurity (part I).
Table 2. LLM-based models for cybersecurity (part I).
AuthorRef.YearDatasetTaskKey ContributionsChallengesOptimized Technique
Meng et al.[20]2024PROFUZZWeb FuzzingEnhanced state transitions and vuln. discovery.Weak for closed protocols.GPT-3.5 for grammar and seed tuning.
Liu et al.[21]2023GramBeddings, MendeleyTraffic DetectionCharBERT improves URL detection.High compute, few adversarial tests.CharBERT + attention + pooling.
Moskal et al.[22]2023Sim. forensic exp.Threat AnalysisRefined ACT loop via sandboxed LLM agents.Weakness with complex net/env.FSM + chained prompts.
Temara et al.[23]2023NonePentestingMulti-tool consolidation via LLM.Pre-2021 limit, prompt sensitivity.Case-driven extraction.
Tihanyi et al.[24]2023Form AIVuln. DetectionBenchmarked form AI dataset.Limited scope, costly verification.Zero-shot + ESBMC.
Sun et al.[25]2024Solidity, Java CWEFirmware Vuln.LLM4Vuln detects misuse + zero-days.No cross-language support.Prompt + retrieval + reasoning.
Meng et al.[26]2023RISC, MIPSHW Vuln.NSPG w/HS-BERT for spec extraction.Doc scarcity, manual labels.Fine-tuned HS-BERT + MLM.
Du et al.[27]2023Synthetic SFGBug LocalizationGraph-based CodeBERT + contrastive loss.Low scale, limited eval.HMCBL w/neg. sampling.
Joyce et al.[28]2023VirusTotal (2006–2023)Malware Feature Learn.AVScan2Vec for AV task embedding.Mix of benign/malware, retrain needs.Token + label masking w/Siamese fine-tuning.
Labonne et al.[29]2023Email spam datasetsPhishing Detect.Spam-T5 outperforms baseline.Costly training, weak domain generality.Prefix-tuned Flan-T5.
Table 3. LLMs-based models for cybersecurity (part II).
Table 3. LLMs-based models for cybersecurity (part II).
AuthorRef.YearDatasetTaskKey ContributionsChallengesOptimized Technique
Malul et al.[30]2024KCFsMisconfig. DetectionGenKubeSec w/UMI for high accuracy.Unseen configs, labeled data, scaling.Fine-tuned LLM + few-shot.
Kasula et al.[31]2023NSL-KDD, cloud logsData LeakageReal-time detection in dynamic clouds.Low generalization.RF + LSTM.
Bandara et al.[32]2024PBOMsContainer Sec.DevSec-GPT tracks vulns via blockchain.LLM overhead.Llama2 + JSON schema.
Nguyen et al.[33]2024Compliance dataComplianceOlla Bench tests LLM reasoning.Scalability, adaptability.KG + SEM.
Ji et al.[34]2024Incident reportsAlert PrioritizationSEVENLLM for multi-task response.Limited multilinguality.Prompt-tuned LLM.
Gai et al.[35]2023Ethereum txnsAnomaly Detect.BLOCKGPT ranks real-time anomalies.False positives.EVM tree + transformer.
Ahmad et al.[36]2023MITRE, OpenTitanHW Bug RepairLLM repair in Verilog, beats CirFix.Multi-line bug limits.Prompt tuning + testbench.
Tseng et al.[37]2024CTI ReportsThreat IntelGPT-4 extracts IoCs + Regex rules.Extraction accuracy.Segmentation + GPT-4.
Scanlon et al.[38]2023ForensicsForensic TasksGPT-4 used in education + analysis.Output inconsistency.Prompt + expert check.
Martin et al.[39]2023OpenAI corpusBias Detect.ChatGPT shows political bias.Ethical balance.GSS-based bias reverse.

4.1.2. Traffic and Intrusion Detection

Detecting network traffic and intrusions is crucial for network security and management. LLMs have become powerful tools for intrusion detection, demonstrating versatility across traditional web applications, IoT ecosystems, and in-vehicle networks [40]. These models effectively learn complex patterns in malicious traffic, identify deviations in user behavior that signal anomalies, and interpret the intent behind intrusions and abnormal activities.
A marked improvement is the effort of Liu et al. [21], who used LLMs as a tool to extract features from malicious URLs while expanding the detection to user-level contexts. Zhang et al. [41] demonstrated that employing GPT-4 for in-context learning can achieve an intrusion detection accuracy of more than 95% with a limited amount of labeled data, thus eliminating the need for fine-tuning. Houssel et al. [7] explored the explainability of LLMs in the context of NetFlow-based Network Intrusion Detection Systems (NIDS), showing that they can augment traditional methods and improve interpretability by using tools such as Retrieval Augmented Generation (RAG). LLMs promise real-time analysis and response, enabling zero-day vulnerability discovery and identification of emerging attack patterns through continuous learning. Additionally, they scale efficiently to handle large network traffic, making them ideal for high-throughput settings. Their integration with methods such as graph-based anomaly detection and reinforcement learning increases their ability to detect complex threats [42]. These capabilities highlight the crucial role of LLMs in modern intrusion detection, providing robust, scalable, and advanced digital infrastructure protection.

4.1.3. Cyber Threat Intelligence (CTI)

CTI is now a vital element in risk management, as highlighted by recent studies [43]. The rise in CTI reports requires automated tools for efficient creation and assessment. In network threat analysis, LLMs are critical, particularly in CTI generation and analysis, enhancing decision-making processes. The CTI generation involves extracting intelligence from diverse sources, such as books, blogs, and news, and transforming it into structured reports. An example is CVEDrill by Aghaei et al. [44], which helps to formulate prioritized cybersecurity risk reports and predict their influence on systems. Similarly, Moskal et al. [22] explored the role of ChatGPT in automating responses to network threats, demonstrating its utility in handling basic attack scenarios.
LLMs improve CTI reports by providing real-time updates to monitor evolving cyber threats. Their ability to handle large datasets, identify threat patterns, and merge intelligence from multiple sources improves the efficiency and effectiveness of cybersecurity efforts [45]. With capabilities such as integrating live threat data, modeling attack scenarios, and offering predictive insights, LLMs play a vital role in proactive threat management, advancing CTI reporting, and supporting informed, data-driven decisions in cybersecurity.

4.1.4. Penetration Testing

Penetration testing is conducted through simulated attacks on computer systems to assess their defenses and remains essential for organizations to combat cyber threats. Traditionally, penetration testing involves three key stages: information gathering, payload creation, and vulnerability exploitation. Recent advances emphasize the crucial role of LLMs in automating and improving these steps. Temara et al. [23] utilized LLMs to optimize data collection by retrieving key information about the target, such as IP addresses, domain info, vendor technologies and Secure Socket Layer (SSL)/Transport Layer Security (TLS) certificates. Likewise, Sai Charan et al. [46] explored LLMs in the creation of malicious payloads, noting that models such as ChatGPT can produce more precise and sophisticated attack vectors. This illustrates the dual-use nature of LLM technology, identified in cybersecurity research as the ability of a single technological feature to be exploited for both beneficial and harmful purposes, depending on the intentions of the user [47]. In practical scenarios, the generative capabilities used for legitimate purposes such as penetration testing, vulnerability discovery, and red-team operations can equally be exploited to automate extensive attacks or develop evasive exploits. Prior research [48] highlights the importance of integrating governance frameworks, ethical standards, and technical safeguards, such as output filtering, usage tracking, and adversarial testing, to reduce malicious exploitation while maintaining the utility of LLMs for security research. Additionally, Happe et al. [49] advanced the Linux privilege escalation automation with LLM, providing practical guidance for privilege escalation in penetration tests.
PentestGPT [50] is a cutting-edge automated penetration testing tool that uses LLMs, demonstrating remarkable abilities in a benchmark with 13 scenarios and 182 subtasks. Its effectiveness comes from three self-interacting modules: inference, generation, and parsing, which work within a recursive feedback cycle, sharing intermediate results to enhance the overall reasoning and execution process. In the wider framework of autonomous agent architectures, these self-interacting modules are semi-autonomous components engineered to work together dynamically, permitting adaptive strategic changes in response to real-time task outcomes [51]. This design enhances the coordination of tasks and flexibility in complex, multi-stage testing scenarios, enabling the system to efficiently manage various penetration scenarios with higher precision. Investigations are underway into the use of LLMs for adversarial simulations and custom exploit scripts, which are valuable for practical testing. Future developments could include real-time reinforcement learning and adaptive features to enhance the simulation of dynamic threats and improve the accuracy and prediction of testing. However, the risk of misuse underscores the need for responsible use and strict ethical oversight.
Challenges and Open Directions: Despite their advantages, LLMs for network security face key challenges, including vulnerability to adversarial attacks, which raises concerns about robustness and security. Real-time adaptability requires efficient continuous learning while maintaining scalability in high-throughput environments. Ethical risks encompass more than just dual-use scenarios, where models intended for defense are turned toward offensive purposes; they also include structural biases and risks to privacy. Bias may lead to skewed detection accuracy or unfair impacts on specific groups, while privacy risks arise from unintended exposure of sensitive information. It is crucial to implement safeguards like access controls, audited datasets, output filtering, and a commitment to responsible AI frameworks. Enhancing explainability and transparency in security-critical applications is vital for trust and informed decision-making. Addressing these issues is crucial to fully harnessing LLMs for network security.

4.2. LLMs in Software and System Security

The growing complexity of software systems, coupled with the increasing number of reported vulnerabilities, requires advanced security solutions. LLMs have emerged as powerful tools for automating vulnerability detection, program fuzzing, bug repair, reverse engineering, binary analysis, and system log analysis. These tasks are important for ensuring the reliability of the software, improving automated security assessments, and mitigate security threats [12]. Furthermore, their ability to process and analyze large amounts of code and large-scale system logs outperforms traditional approaches, especially in enabling real-time anomaly detection and proactive threat mitigation. In this section, we explore and examine how LLMs improve software and security tasks by introducing advanced techniques that improve accuracy, efficiency, and scalability.
Takeaway: LLMs enhance software and system security by enabling real-time vulnerability detection, automated bug repair, precise binary analysis, adaptive fuzzing tests, reverse engineering, and system log analysis. Tools such as LATTE, ZeroLeak, and Repilot demonstrate their robustness in mitigating security threats, while fine-tuning and contrastive learning techniques enhance bug detection precision. Moreover, LLM-driven fuzz testing and binary analysis optimize vulnerability assessment methodologies, and their integration into system log analysis facilitates real-time anomaly detection and proactive cyber threat response.

4.2.1. Vulnerability Detection

The growing number of reports in Common Vulnerabilities and Exposures (CVEs) highlights the rise in software vulnerabilities, thereby increasing the risk of cybersecurity breaches and posing major economic and social threats. As such, detecting vulnerabilities has become a critical requirement for protecting software systems and ensuring societal and economic stability.
Recent advances have showcased the potential of LLMs in vulnerability detection tasks, particularly for static code analysis, where they demonstrate superior performance compared to traditional methods such as graph neural networks or rule-based approaches [52,53,54]. The GPT series of models stands out for their ability to identify vulnerabilities effectively [54,55,56,57]. However, challenges persist, as LLMs can generate false positives due to subtle variations in function names, variable usage, or library modifications. A prominent example of integrating LLMs into vulnerability detection is LATTE, proposed by Liu et al. [58], which combines LLMs with automated binary taint analysis to enhance the traditional vulnerability detection approach. LATTE addresses the limitations of traditional taint analysis methods that rely heavily on manual customization of taint propagation and vulnerability inspection rules. Impressively, LATTE identified 37 previously undiscovered vulnerabilities in real firmware. The assessment methodology involved analyzing a diverse set of firmware binaries, thus validating LATTE’s effectiveness. Despite these promising results, the generalizability of LATTE remains partially restricted to firmware environments similar to those tested, indicating a potential limitation in broader contexts. Tihany et al. [24] further demonstrated the utility of LLMs by using them to generate FormAI, a large-scale vulnerability-labeled dataset. However, their study also revealed a critical concern: more than 50% of the code generated by LLMs contained vulnerabilities, raising significant security risks for automated code generation processes. These findings underscore the dual-use nature of LLMs, highlighting both their potential to advance vulnerability detection and the need for rigorous oversight to mitigate associated risks.

4.2.2. Vulnerability Repair

The rapid increase in detected vulnerabilities, coupled with the increasing complexity of modern software systems, has made manual vulnerability remediation an extremely time-consuming and resource-intensive task for security professionals [59]. Advancements in LLMs and related architectures have demonstrated promising capabilities in automating vulnerability repair tasks. The T5 model, built on an encoder-decoder framework, has shown superior performance in generating effective fixes for vulnerabilities [60,61]. However, challenges persist in maintaining the functional correctness of repaired code [62], with LLM performance varying between programming languages, particularly with limited capabilities observed in repairing Java vulnerabilities [63]. Several innovative approaches have emerged to address these challenges. Alrashedy et al. [64] developed an automated vulnerability repair tool that integrates feedback from static analysis tools, enabling iterative improvement of fixes. Tol et al. [65] proposed ZeroLeak, a technique that uses LLMs to identify and mitigate side-channel vulnerabilities. ZeroLeak uses advanced context-sensitive predictions from LLMs to effectively pinpoint potential leakage sources, achieving notably higher precision compared to conventional heuristic-based techniques. Performance metrics indicated substantial reductions in false positives and improved detection accuracy. However, the computational complexity inherent in ZeroLeak’s context management processes significantly increases resource consumption, suggesting the need for optimized inference strategies. Charalambous et al. [66] combined LLM with Bounded Model Checking (BMC) to ensure the precision of the corrected code, mitigating functionality issues often seen after automated vulnerability fixes.

4.2.3. Bug Detection

Software and hardware failures, commonly termed bugs, can cause program malfunctions or unexpected results [67]. Beyond affecting performance, some bugs can be manipulated by attackers to create security vulnerabilities, highlighting the critical need for bug detection to maintain software system safety and reliability. LLMs have emerged as effective tools for automating bug detection. They can generate code lines relative to the original to detect potential bugs. LLMs also utilize feedback from static analysis tools, enhancing the accuracy and precision of bug identification [68,69]. Fine-tuning is crucial for adapting LLMs to bug detection, allowing error identification without test cases by using annotated datasets [70,71]. Du et al. [27] and Li et al. [72] used contrastive learning to train LLMs to differentiate between correct and faulty code, increasing error detection in complex codebases. Fang et al. [73] introduced Represent Them All (RTA), a platform-independent representation method that combines contrastive learning and custom fine-tuning. This technique excels in bug detection and predicts bug priority and severity, highlighting the potential of comprehensive representation methods for software quality.

4.2.4. Bug Repair

LLMs efficiently automate bug fixes by generating precise code, improving development speed but risking security concerns like vulnerabilities [74]. Addressing unresolved bugs is crucial and requires automated repairs in modern software engineering. LLMs are adept at creating repair patches for software defects. Architectures, such as Repilot [75], use encoder-decoder frameworks for accurate repairs, leveraging LLMs’ grasp of code semantics to produce high-quality patches on par with traditional techniques [76]. Fine-tuning enhances LLMs’ real-world repair capabilities with domain-specific datasets for better language and task handling, delivering reliable fixes. Interactive feedback systems, such as ChatGPT [77], further refine repair precision, supporting effective patch development through iterative validation and a deep understanding of software semantics.

4.2.5. Program Fuzzing

Program fuzzing, fuzz testing, or simply fuzzing, is an automated software testing technique that generates input to uncover unexpected behaviors, such as crashes or vulnerabilities. Various effective fuzzing tools have successfully detected bugs and security flaws in real systems [78]. The incorporation of LLMs into fuzzing has significantly improved the generation of test cases. Traditional methods often rely on predefined patterns, which limit their effectiveness. In contrast, LLMs can produce diverse and contextually suitable test cases for different programming languages and system features [79]. LLMs employ advanced strategies, such as repetitive and iterative querying [80], to improve the creation of test cases. These methods allow LLMs to create test cases that have the following characteristics:
  • Identify vulnerabilities: LLMs can analyze previous bug reports to create inputs that uncover similar issues in new or updated systems [81].
  • Create variations: They generate various test cases related to sample inputs, ensuring coverage of potential edge cases [82].
  • Optimize compilers: By examining compiler code, LLMs craft programs that trigger specific optimizations, revealing compilation process flaws [83].
  • Divide testing tasks: A dual-model interaction lets LLMs separate tasks like test case generation and requirements analysis for efficient parallel processing.
The adaptability of LLMs to create intelligent and tailored test inputs has transformed fuzz testing, making it more effective at finding complex bugs. As these models advance, integrating feedback loops, real-time testing, and domain-specific knowledge will further improve system security and robustness.

4.2.6. Reverse Engineering and Binary Analysis

Reverse engineering involves analyzing artifacts such as software or hardware to discern their functionality, which can be used defensively or maliciously. It is crucial for security in vulnerability analysis, malware investigation, and intellectual property protection [84]. Although LLMs excel at automating reverse engineering by identifying software functions and extracting essential data, Xu et al. [85] showcased their ability to restore variable names from binaries through iterative query propagation. Moreover, Armengol et al. [86] paired type inference engines with LLMs to disassemble executables and generate source code, simplifying the binary-to-source translation process. LLMs are crucial in binary program analysis, improving comprehension of low-level code structures and behaviors. Significant progress includes the following:
  • DexBert: Proposed by Sun et al. [87], this tool characterizes the binary bytecode of the Android system, improving the specific binary analysis of the Android ecosystem.
  • SYMC Framework: Developed by Pei et al. [88], this framework uses group theory to preserve the semantic symmetry of the code during analysis. The approach has demonstrated exceptional generalization and robustness in diverse binary analysis tasks.
  • Authorship Analysis: Song et al. [89] applied LLMs to address software authorship analysis challenges, enabling effective organization level verification of Advanced Persistent Threat (APT) malicious software.
LLMs also improve the readability and usability of decompiler outputs, helping reverse engineers interpret and understand binary files more effectively [90]. By improving decompiler-generated code, these models reduce manual effort and increase the efficiency of reverse engineering processes. As LLM technologies advance, their integration with reverse engineering tools is expected to further enhance functionality. Future directions may include real-time disassembly, automated malware deobfuscation, and dynamic binary analysis using hybrid techniques that combine LLMs with symbolic execution or formal verification methods.

4.2.7. Malware Detection

The increasing complexity and volume of malware require sophisticated detection approaches. Signature-based and heuristic methods often fall short against novel camouflaged malware because of the sophisticated evasion techniques used by attackers, such as encryption, polymorphism, and metamorphism [91]. LLMs are effective tools for detecting semantic and structural malware features, thus boosting detection. AVScan2Vec technique proposed by Joyce et al. [28] converts antivirus scan reports into vectors, enabling efficient handling of large malware datasets and excelling in tasks like classification and clustering. Using semantic patterns in antivirus data, this method improves scalability and accuracy, presenting a new approach to malware analysis. LLMs have been explored for their role in both detecting and analyzing malware development and supporting prevention. As noted by Botacin [92], LLMs can combine functionalities to create modular malware components, helping to evolve malware variants. Although LLMs cannot independently generate complete malware from prompts, their ability to create elements supports countermeasure development, highlighting the importance of responsible LLM usage to avoid misuse.

4.2.8. System Log Analysis

Examining extensive log data from software systems manually is impractical due to its complexity and volume. Deep learning techniques have been proposed for anomaly detection within logs. They face challenges such as managing high-dimensional, noisy data, tackling class imbalances, and achieving generality [93]. Recently, researchers have used the advanced language comprehension capabilities of LLMs to enhance anomaly detection in system logs. LLMs surpass conventional deep learning models in both accuracy and interpretability [94]. Their adaptability can be further optimized by fine-tuning for specific types of logs [95] or by adopting strategies based on reinforcement learning [96], which enables precise identification of anomalies specific to particular domains. LLMs prove beneficial in analyzing cloud server logs [21]. By integrating reasoning with log data, they effectively identify root causes of issues within cloud services. This demonstrates the impact of LLMs in the realm of system log analysis, providing a scalable and intelligent approach to detect and address anomalies in intricate environments.
Challenges and Open Directions: Despite their advantages, LLMs face key challenges, including a high false positive rate in detection and functional correctness in automated bug repair. Research indicates that 50% of LLM-generated code contains exploitable threats, raising significant security concerns. Additionally, scalability challenges in high-throughput environments and ethical risks associated with the dual-use of AI necessitate strict oversight and policies. Future research should focus on developing real-time security techniques, improving adversarial defenses, and integrating hybrid AI models with formal verification and reinforcement learning to enhance LLM-driven security solutions’ reliability and interpretability.

4.3. LLMs in Information and Content Security

With phishing, misinformation, manipulation, and cybercrime, LLMs are powerful tools for enhancing information and content security through context-aware learning, fine-tuned detection models, and advanced response mechanisms that enable real-time threat identification, fraud prevention, and content moderation with greater accuracy and scalability [97]. An especially influential method in this field is prompt-based learning. In this technique, a pre-trained language model is directed to complete a particular task using strategically designed natural-language prompts, thereby leveraging its existing linguistic and contextual knowledge without the need for significant parameter fine-tuning [98].
In the realm of cybersecurity, this method enables swift adaptation to emerging threats such as innovative phishing techniques or misinformation campaigns by reframing detection or analysis tasks into specific queries. This adaptability minimizes computational overhead and accelerates deployment, allowing security teams to revise detection strategies by simply altering the prompt instead of retraining the entire model. As a result, it offers a highly flexible and resource-efficient approach for rapidly evolving threat environments.
Takeaway: LLMs have transformed the security of information and content through prompt-based learning and fine-tuning models. They accurately detect phishing attempts and scams while proactively disrupting fraud through automated scam engagement. In content moderation and online safety, LLMs improve the identification of harmful misinformation. In steganography, they facilitate the embedding of covert data and advance steganalysis using few-shot learning and natural language ciphertext encoding for secure communication. LLMs also support file identification, incident response, and evidence extraction in digital forensics. Tools such as PassGPT strengthen authentication security by generating high-entropy passwords and evaluating the strength of the password.

4.3.1. Phishing and Scam Detection

Network deception involves intentionally adding false or misleading information, which threatens user privacy and property security. Typical attack methods include emails, SMS, and web ads that are used to direct users to phishing sites or harmful links [53]. LLMs can produce deceptive content on a large scale with certain prompts. However, LLM-generated phishing emails usually have lower click-through rates than manually created ones, highlighting the limitations of automated methods [99]. LLMs are highly effective at identifying phishing emails. They use prompt-based strategies with website data or fine-tuned models suited to email traits to achieve high effectiveness in phishing detection. They also excel at detecting spam, which often includes phishing. Labonne et al. [29] show that LLMs significantly outperform traditional machine learning in spam detection, confirming their prowess in this area. Beyond detection, LLMs offer innovative uses against scams. As shown in [100], LLMs can mimic human interactions with scammers automatically, wasting their time and resources. This reduces the efficiency of scammers and reduces the impact of scam emails. These capabilities show the potential of LLMs in improving phishing and scam detection, providing scalable, intelligent, and proactive cybersecurity defenses.

4.3.2. Harmful Contents Detection

Social media platforms are criticized for worsening political polarization and weakening public discourse. Harmful content, often mirroring users’ political opinions, can lead to toxic discussions and harmful behavior. LLMs help detect harmful content in three main areas: identifying extreme political positions [101], monitoring crime-related discourse [82], and identifying fake social media accounts or bots [102]. Although LLMs can identify this content, their interpretations often reflect their internal biases, highlighting the complexities of dealing with intricate social and political topics. Martin et al. [39] significantly contributed by creating a large-scale dataset of harmful and benign discourse for 13 minority groups using LLMs. Validation showed that human annotators struggled to distinguish between LLM-generated and human-authored discourse, indicating the potential of LLMs to improve harmful content detection, aiding efforts against toxic behavior.

4.3.3. Steganography

Anderson [103] described the embedding of secret data within regular information carriers to ensure that hidden content remains secure. This is crucial for secure communication. Recent advancements use LLMs to improve steganography and steganalysis techniques. Wang et al. [104] presented a novel steganalysis method using LLMs and few-shot learning. By integrating small labeled datasets with auxiliary unlabeled data, this technique addresses the scarcity of labeled samples, greatly improving detection in low-data scenarios. This marks a significant advancement in language-based steganalysis. Bauer et al. [105] concurrently showed how the GPT-2 model could encode ciphertext into natural language cover texts. This feature allows users to determine the outward appearance of the ciphertext, making it possible to discreetly transfer sensitive information across public forums. Such developments show the contributions of LLMs in contemporary steganography, offering secure techniques for embedding data, as well as improved methods for detecting potential misuse.

4.3.4. Access Control

Access control is crucial for cybersecurity, designed to limit the actions of authorized users within a system. Despite the new authentication technologies, passwords are still the primary method of enforcing access control [106]. PassGPT, an advanced password generation system that uses LLMs, presents an innovative strategy for crafting passwords that adhere to user-specified constraints. This technique excels beyond conventional methods, including those based on Generative Adversarial Networks (GANs), by generating a more diverse collection of unique passwords. Furthermore, PassGPT improves the effectiveness of password strength evaluators, underscoring the promise of LLMs to pioneer and strengthen access control measures [107].

4.3.5. Forensics

Digital forensics plays a crucial role in cybercriminal prosecution by ensuring that evidence extracted from digital devices is allowed in court [108]. This domain protects the integrity and enhances the effectiveness of cybercrime investigations. Scanlon et al. [38] conducted an evaluation of LLMs within the domain of digital forensics, focusing on tasks such as file identification and responding to incidents. The research concluded that, although LLMs should not be considered independent tools, they offer valuable assistance in particular forensic situations.
Challenges and Open Directions: Despite their advantages, LLMs face challenges in information and content security. In phishing and scam detection, they struggle to replicate human-crafted phishing strategies, requiring adversarial training and adaptive threat response. In harmful content detection, LLMs require bias mitigation and a context-aware model to improve accuracy. Their role in steganography enables covert data embedding, increasing the need for advanced steganalysis. In access control, they demand stronger authentication frameworks. In digital forensics, stronger datasets and legal frameworks are needed. Ensuring ethical, unbiased, and secure deployment remains critical in this domain.

4.4. LLMs in Hardware Security

SoC architectures, essential for modern computing, integrate multiple IP cores, but introduce security challenges, as a weakness in any single core can compromise the entire system [109]. Although software and firmware updates can address many issues, some vulnerabilities cannot be patched this way, necessitating rigorous security measures during the initial design phase. This section provides an overview of LLM applications in hardware security, with a particular focus on their role in detecting and mitigating vulnerabilities. Through these capabilities, LLMs demonstrate their potential to enhance the security of System-On-Chip (SoC) architectures.
Takeaway: LLMs enhance hardware security by automating vulnerability detection, security verification, and repair mechanisms in SoC architectures. Using NLP-driven analysis, they identify threats within hardware design documents and link weaknesses to Common Weakness Enumerations (CWEs) while enforcing security assertions. Additionally, LLMs improve hardware vulnerability repair by generating secure hardware code and leveraging large vulnerability corpora to improve patching automation. These advancements make LLMs a powerful, scalable, and proactive solution for strengthening modern SoC security.

4.4.1. Hardware Vulnerability Detection

LLMs are increasingly being used to detect hardware vulnerabilities by scrutinizing security aspects embedded in hardware development documents. In an illustration of their capabilities, Meng et al. [26] leveraged HS-BERT, a model trained on a variety of hardware architecture documents, including Reduced Instruction Set Computing (RISC-V), Open Source Reduced Instruction Set Computing (OpenRISC), and Microprocessor without Interlocked Pipeline Stages (MIPS), which facilitated the discovery of eight security vulnerabilities in the OpenTitan SoC design. This demonstrates LLMs’ proficiency in dissecting complex hardware configurations to pinpoint significant security issues. Building on this advancement, Paria et al. [110] extended the utility of LLMs by identifying vulnerabilities within user-specified SoC designs. Their innovative strategy links identified vulnerabilities to CWEs, makes corresponding security assertions, and enforces security measures to counteract potential threats. These cutting-edge advances demonstrate the importance of LLMs in improving hardware security by streamlining the process of detecting and averting vulnerabilities in sophisticated hardware infrastructures.

4.4.2. Hardware Vulnerability Repair

LLMs play an essential role in the security verification of SoC. They handle a variety of tasks, including the identification of vulnerabilities, their assessment, and verification, as well as the development of strategies for their mitigation [14]. By utilizing comprehensive data on hardware vulnerabilities, LLMs provide suggestions for repairs, which significantly enhance the effectiveness and precision of security assessments and mitigation efforts. Nair et al. [111] demonstrated that LLMs can identify hardware vulnerabilities while generating code and can produce hardware code that prioritizes security. In their study, they utilized LLMs to design hardware that effectively addresses ten identified CWEs. In a complementary study, Tan et al. [112] constructed an extensive corpus detailing hardware security vulnerabilities and evaluated the proficiency of LLMs in automating the repair of these vulnerabilities.
Challenges and Open Directions: LLMs enhance hardware vulnerability detection and repair, but face challenges in accurately interpreting complex SoC architectures, which require deep contextual understanding. The generalization of LLMs across diverse hardware designs is a limitation, as models trained on specific architectures may not effectively identify vulnerabilities in unfamiliar systems. Furthermore, linking detected vulnerabilities to CWEs and generating security assertions requires further refinement to ensure precision and reliability in automated mitigation efforts. Open research directions include developing specialized LLMs tailored for hardware security, improving adaptive security assertion generation, and integrating LLMs with formal verification methods to enhance the accuracy of security enforcement.

4.5. LLMs in Blockchain Security

Blockchain technology has revolutionized decentralized finance, digital identity, and secure transactions by providing a transparent ledger system. However, its growing adoption has also introduced critical security challenges, particularly in smart contract vulnerabilities and transaction anomalies [113]. As blockchain ecosystems become increasingly complex, the need for intelligent, scalable, and proactive security mechanisms has become essential. LLMs present a transformative solution. In this section, we explore how LLMs impact blockchain security, particularly in smart contract security and the identification of transaction anomalies. Examining their substantial capabilities highlights the potential of LLMs to transform vulnerability management, thus strengthening blockchain systems by mitigating risks and improving overall security.
Takeaway: LLMs enhance blockchain security by enabling smart contract vulnerability detection and real-time transaction anomaly identification. Frameworks like GPTLENS employ a dual-phase approach to generate and prioritize threat scenarios, reducing false-positive rates and improving verification accuracy. Unlike rule-based models, LLMs dynamically identify abnormal transactions without predefined constraints, allowing a wider spectrum of anomalies, improving intrusion detection efficiency, and minimizing the need for manual analysis.

4.5.1. Smart Contract Security

Blockchain applications are heavily based on smart contracts, but the construction of these contracts can result in vulnerabilities that pose significant risks, including the potential for substantial financial loss. LLMs offer the potential to automate the detection of these vulnerabilities, although their performance is frequently hampered by common errors and a limited understanding of context [43,114]. To address these challenges, frameworks such as GPTLENS [82] employ a dual-phase approach. This process begins with the creation of a wide range of potential vulnerability scenarios and then proceeds with the evaluation and prioritization of these scenarios to minimize false-positive detections. Sun et al. [25] contributed to the field of smart contract security by incorporating LLMs into the analysis of the program to detect logical vulnerabilities effectively. They structured their approach by categorizing vulnerabilities into specific scenarios and attributes, using LLMs for detection, and confirming findings with static analysis. This development shows the ability of LLMs to improve smart contract security, but there remains a need to reduce false positives and increase accuracy.

4.5.2. Transaction Anomaly Detection

Real-time detection of intrusions within blockchain transactions is challenging due to the extensive search space and the substantial amount of manual analysis required [115]. Conventional methods, such as reward-based and pattern-based models, rely on designated rules or predetermined patterns to pinpoint profitable or suspicious transactions. However, these methods often fail to detect a wide range of anomalies [116]. LLMs offer a versatile and generalizable solution for real-time detection of anomalies. Gai et al. [35] demonstrated that LLMs are capable of dynamically identifying anomalies in blockchain transactions as they occur. Unlike traditional methods, LLMs are not constrained by predefined rules or limited search spaces, which enables them to detect a broader array of abnormal transactions. This flexibility highlights the potential of LLMs to advance anomaly detection in blockchain, delivering more efficient and comprehensive solutions.
Challenges and Open Directions: Despite advancements in LLMs for blockchain security, they still face challenges such as contextual limitations, false positives, and limited understanding of contract logic. Scalability, computational efficiency, and generalization among blockchain protocols also remain significant hurdles. Future research should focus on improving the contextual reasoning of LLMs, integrating formal verification, and developing hybrid AI-driven security frameworks that combine symbolic execution, reinforcement learning, and deep learning-based anomaly detection to improve the accuracy and robustness of LLMs in blockchain security.

4.6. LLMs in Cloud Security

The dynamic nature of cloud environments requires real-time threat intelligence, automated security enforcement, and proactive anomaly detection to ensure system integrity and data protection. The integration of LLMs into cloud security has significantly improved threat detection, security monitoring, and data leakage prevention [117]. Using advanced NLP techniques, these models have improved automation and overall efficiency in addressing complex security challenges, such as misconfigurations, data leaks, compliance issues, and container security.
Takeaway: LLMs have transformed cloud security by enhancing threat detection, misconfiguration analysis, data leakage monitoring, container security, and compliance enforcement. In misconfiguration detection, tools like GenKubeSec provide automated reasoning and high-precision detection in Kubernetes environments. Data leakage monitoring benefits from AI-driven models such as Secure Cloud AI, which improves real-time detection. Frameworks like DevSec-GPT improve vulnerability tracking and compliance validation. Additionally, OllaBench and PRADA demonstrate the effectiveness of LLMs by automating regulatory adherence, managing data compliance, and integrating zero-trust security models across multi-cloud environments.

4.6.1. Misconfiguration Detection

Misconfiguration detection plays an essential role in ensuring the security and stability of systems in cloud-native settings. Recent innovations have incorporated machine learning and LLMs to effectively identify, pinpoint, and address these misconfigurations. A notable advancement was made by Mitchel et al. [118], who developed a tool designed to utilize the system call data obtained from Linux kernels operating within Kubernetes clusters. This tool applies anomaly detection techniques, including Principal Component Analysis (PCA), to detect hidden attacks that capitalize on misconfigurations. Pranata et al. [119] proposed a framework integrating metamorphic testing with PCA to identify misconfigurations in cloud-native applications, enhancing scalability and reducing developer effort. Malul et al. [30] developed GenKubeSec, an LLM-based system effective in detecting Kubernetes configuration file misconfigurations, offering automated reasoning and solutions. GenKubeSec achieved a precision of 0.990 and a recall of 0.999, surpassing traditional rule-based tools and using a UMI for standardized evaluations.

4.6.2. Data Leakage Monitoring

Data leakage poses a significant threat to security in cloud computing by compromising the confidentiality and integrity of sensitive information. Issues typically arise from misconfigured hypervisors, inadequate dashboard authentication, and insecure VM replication, especially during operations such as data migration. Ariffin et al. [120] recommended using Wireshark for packet analysis to monitor data flows during VM migration and dashboard authentication, revealing significant risks in the lack of TLS encryption, which exposed credentials. Vaidya et al. [121] suggested perturbation and fake object injection techniques to better detect unauthorized access by embedding decoy elements. Advanced AI-driven frameworks are revolutionizing data leakage monitoring. Kasula et al. [31] introduced “Secure Cloud AI,” a hybrid model that combines Random Forest and LSTM networks for real-time anomaly detection. Their method achieved 94.78% precision in identifying malware and efficiently classifying network traffic anomalies, demonstrating the necessary scalability and adaptability for dynamic cloud settings. However, challenges persist in scaling for large cloud installations, detecting complex multi-layered attacks, and achieving cross-platform compatibility. Future work should improve AI integration, real-time encryption, and anomaly detection to strengthen defenses against data leakage.

4.6.3. Container Security

Integrating LLMs into container security has shown significant progress in securing cloud-native environments by automating vulnerability detection, optimizing container management, and improving pipeline integrity. For example, the DevSec-GPT [32] framework uses Meta’s Llama2 LLM to create Pipeline Bills of Materials (PBOMs) for container security, analyzing vulnerabilities and development data to ensure comprehensive tracking and prevent supply chain attacks via blockchain traceability. In runtime and security management, LLMs improve real-time anomaly detection by analyzing logs and configurations. Lanka et al. [122] employed LLMs to analyze data from the decoy system, detecting malicious patterns and attacker tactics. The RAG model quickly identified threats in containers by matching commands with adversary data. Additionally, LLMs facilitated compliance checks through JSON schema generation for vulnerability scans on platforms such as GitHub Actions and Kubernetes. Automated documentation updates helped meet regulatory standards, while blockchain features, such as NFT tokenization, improved data provenance and auditability.

4.6.4. Compliance Enforcement

LLMs transform cloud security compliance by automating regulations and boosting efficiency. Nguyen et al. [33] introduced OllaBench, an evaluation tool with 24 cognitive behavioral theories to test the reasoning of LLMs in compliance. OllaBench revealed that GPT-4o and Claude are effective in regulatory automation, providing key information to compliance teams. Henze et al. [123] proposed PRADA, a cloud storage system using LLMs for transparent data management. It achieves compliance by tagging and routing data by attributes, addressing issues like localization and encryption in distributed systems. PRADA notably improved the management of Data Handling Requirements (DHRs), providing a scalable compliance solution for multi-cloud environments. Dye et al. [124] highlighted the critical role of LLMs in integrating Zero-Trust Architectures (ZTA) with Attribute-Based Access Control (ABAC) systems for compliance. Cloud providers such as AWS have utilized LLMs to automate policy generation, monitor privilege escalation, and adhere to Federal Risk and Authorization Management Program (FedRAMP) and National Institute of Standards and Technology (NIST) standards, enhancing least-privilege access control and increasing security in hybrid and public clouds.
Challenges and Open Directions: Despite progress in applying LLMs to cloud security, major challenges persist, such as scalability in large environments, detecting multi-layered attacks, and maintaining cross-platform compatibility. Data leakage prevention demands tighter AI integration, real-time encryption, and adaptive anomaly detection. Container security calls for efficient, real-time anomaly detection and compliance checks. Compliance enforcement remains hindered by real-time auditing, localization, and evolving regulations. Future research should target self-learning models, zero-trust architectures, and automated security frameworks to improve scalability, adaptability, and resilience.

4.7. LLMs in Incident Response and Threat Intelligence

As cyber threats become more sophisticated, traditional methods of incident response and threat intelligence often struggle with scalability, speed, and accuracy. LLMs are emerging as transformative tools in incident response and threat intelligence by automating cybersecurity data analysis, enhancing decision-making, and improving the speed and accuracy of threat detection [125]. Their advanced pattern recognition allows security teams to identify anomalies and threats within vast amounts of unstructured data, reducing manual effort and response time.
Takeaway: LLMs are reshaping incident response and threat intelligence through automation and improved precision. SEVENLLM cuts down false positives in SIEMs via multi-task learning for smarter alerting. HunGPT boosts threat analysis with interpretable anomaly detection and structured knowledge extraction. DISASLLM advances malicious code detection, while MALSIGHT generates readable summaries of malware behavior. Together, these tools streamline reverse engineering and speed up intelligence workflows, underscoring the growing role of LLMs in cybersecurity.

4.7.1. Alert Prioritization

LLMs are revolutionizing alert prioritization in incident response and threat intelligence, supporting efficient context-driven security operations. Molleti et al. [126] showcased the ability of LLM agents to manage large security datasets, alleviate alert fatigue, and highlight key threats through advanced NLP. These agents seamlessly integrate with SIEM systems, boosting threat detection and response. Ji et al. [34] introduced SEVENLLM, an optimized LLM framework using selected bilingual data. Specializing in analyzing and prioritizing security alerts, SEVENLLM employs multi-task learning and SEVENLLM Bench evaluations. This approach improves Indicator of Compromise (IoC) detection and refines alert prioritization by evaluating their severity and impact. LLMs significantly improve alert management by reducing false positives and enabling quick responses; however, challenges persist, such as model interpretability and adaptation to evolving threats, underscoring their impact on modern cybersecurity.

4.7.2. Automated Threat Intelligence Analysis

Incorporating LLMs into cybersecurity has greatly improved automated threat analysis by reducing the manual work involved with unstructured CTI reports. Tseng et al. [37] presented an AI agent using LLMs such as GPT-4 to automatically extract IoCs from CTI reports and create regex patterns for Security Information and Event Management (SIEM) systems. The agent also constructs relationship graphs to depict connections between IoCs, streamlining incident response and reducing dependency on human intervention. Using LLMs like Llama 2 and Mistral 7B, Fieblinger et al. [127] generated Knowledge Graphs (KGs) from CTI reports. Their approach involves fine-tuning and prompt engineering to derive triples, which are employed in link prediction. KGs offer better-structured data representation, enhancing decision-making and threat prediction. HuntGPT, presented by Ali and Kostakos [128], showcases the capabilities of LLMs in cybersecurity. This dashboard combines GPT-3.5 with XAI frameworks such as SHAP and LIME, delivering understandable threat intelligence. It emphasizes detected anomalies and clarifies their context, boosting trust and enabling dynamic cybersecurity processes. These innovations highlight the role of LLMs in automating threat tasks, speeding up response, and enhancing detection accuracy in security operations.

4.7.3. Threat Hunting

LLMs are revolutionizing threat hunting by automating the analysis of intricate cybersecurity data. Schwartz et al. [129] developed LLMCloudHunter, a framework using LLMs like GPT-4o to create OSCTI detection rules. It achieved 92% precision and 98% recall, improving rule generation for SIEM systems and boosting threat detection in cloud environments. Mitra et al. [130] presented LOCALINTEL, a system that merges global threat intelligence (e.g., CVE, CWE) with local knowledge using RAG, reducing the workload of Security Operations Center (SOC) analysts. LLMs achieved a RAGAS score of 0.9535. Their ability to automate threat detection and response promises scalable, accurate, proactive cybersecurity. Future work should improve real-time adaptability and integration with evolving threats.

4.7.4. Malware Reverse Engineering

The integration of LLMs into malware reverse engineering has transformed the analysis of complex binary malware. DISASLLM, introduced by Rong et al. [131], leverages an LLM-based classifier fine-tuned on assembly code to efficiently identify valid instruction boundaries within hidden executables. This approach significantly outperforms traditional disassembly tools by incorporating the semantic understanding of LLMs, thereby improving the detection of malicious code segments. Complementing this, MALSIGHT, proposed by Lu et al. [132], employs LLMs such as MalT5 to iteratively generate human-readable summaries of malware functionality from malicious source code and benign pseudocode. This method enhances the usability, accuracy, and completeness of malware behavior descriptions, effectively bridging the semantic gap introduced by obfuscation techniques. Furthermore, Patsakis et al. [133] demonstrated the capabilities of LLMs such as GPT-4 to clarify real-world malware campaigns, including Emotet. Their study shows that these models can extract actionable information, such as C2 server configurations, from heavily obfuscated scripts. Although local LLMs still face accuracy limitations, cloud-based tools like GPT-4 excel at interpreting malware payloads.
Challenges and Open Directions: Despite their influence in this domain, LLMs face key challenges. In alert prioritization, contextual accuracy is required to minimize false positives and misclassifications in SIEM systems. Threat intelligence automation struggles with processing hidden data and adapting to new patterns in real time. Effective threat hunting requires a deeper integration with security monitoring data and retrieval-augmented intelligence. Future research must focus on improving contextual LLM reasoning, integrating Explainable Artificial Intelligence (XAI) for transparent threat analysis, and improving real-time adaptability.

4.8. LLMs in IoT Security

The rapid expansion of IoT ecosystems has introduced significant security challenges due to resource constraints, diverse architectures, and evolving cyber threats [134]. Traditional security solutions struggle with scalability, real-time anomaly detection, and firmware vulnerability management, making IoT devices prime targets for cyberattacks. LLMs have emerged as transformative solutions that enable automated threat detection and efficient processing. The following subsections explore the role of LLMs in enhancing IoT security, focusing on firmware vulnerability detection, behavioral anomaly detection, and automated threat report summarization.
Takeaway: LLMs have significantly advanced IoT security by enhancing firmware vulnerability detection, behavioral anomaly detection, and automated threat report summarization. Frameworks such as LLM4Vuln improve firmware vulnerability analysis by integrating Retrieval-Augmented Generation (RAG) and prompt engineering, enhancing reasoning across various programming languages. The UVSCAN framework employs NLP-driven binary analysis, excelling in detecting API misuse. In behavioral anomaly detection, an Intrusion Detection System Agent (IDS-Agent) combines reasoning pipelines, memory retrieval, and external knowledge to detect zero-day attacks. These advancements integrate LLM reasoning, NLP frameworks, and efficient learning models, providing robust, scalable, and resource-efficient solutions for IoT firmware vulnerabilities.

4.8.1. Firmware Vulnerability Detection

LLMs have notably improved IoT firmware vulnerability detection by translating abstract security needs into actionable analyses. Sun et al. [25] introduced LLM4Vuln, which separates the reasoning abilities of LLMs for accurate vulnerability detection in languages like Solidity and Java. The framework improves performance using advanced prompt engineering and RAG to integrate current vulnerability knowledge. Zhao et al. [135] noted that the UVSCAN framework supplements this by translating high-level API specifications into binary-level analysis in IoT firmware using NLP-driven methods. It excels at identifying API misuse, causality errors, and return value issues, offering scalability across various architectures, such as RISC and MIPS. Furthermore, Li et al. [136] introduced Binary Neural Networks (BNNs) to enhance resource efficiency in IoT settings, allowing lightweight on-device learning for vulnerability detection with maintained accuracy.

4.8.2. Behavioral Anomaly Detection

With the rapid increase in IoT devices, security challenges have intensified, making the detection of behavioral anomalies vital. LLMs have transformed this field by leveraging reasoning and contextual understanding. Li et al. [137] introduced IDS-Agent, an intrusion detection system powered by LLMs, which combines reasoning pipelines, external knowledge, and memory retrieval to detect malicious traffic accurately. In benchmarks such as the Army Cyber Institute for Cybersecurity of Things 2023 (ACI-IoT’23) and the Canadian Institute for Cybersecurity IoT 2023 (CIC-IoT’23), IDS-Agent achieved a 61% recall in identifying zero-day attacks, outperforming traditional machine learning methods and providing better interpretability. Su et al. [138] applied GPT-4o and domain-adapted BERT to IoT time series data, detecting behavioral shifts for threat identification. These models scaled efficiently in resource-limited IoT networks, minimized false alarms, and provided actionable insights to address incidents.

4.8.3. Automated Threat Report Summarization

The rise of IoT devices has increased the volume and complexity of threat data, emphasizing the need for automated processing. LLMs effectively transform unstructured IoT threat reports into actionable insights. Feng et al. [139] developed IoTShield, an LLM-based framework for assessing IoT vulnerability reports, extracting critical details such as exploit parameters and severity ratings, and generating custom signatures for IDS to enhance defense accuracy. Building on this, Baral et al. [140] integrated XAI with LLMs to produce personalized threat reports tailored to the expertise of analysts. Their system balances technical complexity with user-friendliness, improving decision-making and response efficiency. These advancements underscore the role of LLMs in optimizing IoT threat intelligence processes and addressing challenges related to scale, complexity, and interpretability in modern IoT security environments.
Table 4 categorizes key cybersecurity domains along with their associated tasks and corresponding counts, highlighting the potential of LLMs to enhance security operations. We examine the applications of LLMs across 32 security tasks spanning eight distinct security domains. Furthermore, this classification serves as a foundational reference for the role of LLMs in cybersecurity research, facilitating more adaptive and intelligent security. In terms of quantitative analysis, Software and System Security accounts for the largest share with 47 studies, constituting 36.4% of the total. This is followed by Network Security with 20 studies (15.5%) and Information and Content Security with 17 studies (13.2%). The remaining areas: Cloud Security (12 studies; 9.3%), Incident Response and Threat Intelligence (11 studies; 8.5%), Blockchain Security (8 studies; 6.2%), IoT Security (8 studies; 6.2%), and Hardware Security (6 studies; 4.7%), collectively represent just under one-third of the research reviewed. This distribution underscores the research community’s strong emphasis on software, system, and network protection, while indicating comparatively limited exploration in hardware, blockchain, and IoT security.
Table 5 provides a quick comparison of the key techniques of LLMs using different metrics for comparison, such as the dataset used, performance, strength, weakness, and maturity. Most approaches remain at the research or early adoption stage, with only a few reaching mature deployment. While the results shown in Table 5 are promising, they are often tied to specific datasets, and common challenges include limited generalization and false positives.
Challenges and Open Directions: Firmware vulnerability detection still requires improved cross-architecture generalization to enhance accuracy in various IoT environments. Behavioral anomaly detection struggles with reducing false positives in complex IoT ecosystems while maintaining efficient resource utilization in constrained environments. Automated summarization of threats and reports demands better contextual understanding to generate actionable insights. Future research should focus on developing energy-efficient LLM models for IoT, creating lightweight architectures, and improving real-time intrusion detection through adaptive learning.

5. Vulnerabilities and Defense Techniques in LLMs

Existing research works have classified the vulnerabilities and challenges associated with LLMs into distinct domains. Security and privacy risks include misinformation [141], trustworthiness concerns [58], hallucinations [142], and significant resource consumption [143]. These risks focus on the need for robust measures to mitigate potential security attacks. Security is mainly intended to protect systems by preventing unauthorized access, modification, malfunction, or denial of service to legitimate users during regular operations [144]. On the other hand, privacy aims to protect personal information and ensure that individuals retain control over who can access their sensitive data [145]. In this work, our objective is to systematically examine LLM vulnerabilities by focusing on security attacks through a goal-oriented approach. The subcategories include backdoor attacks, data poisoning, prompt injection, and jailbreaking, each paired with the corresponding defense techniques. This structure highlights the challenges and innovative countermeasures in securing LLMs, providing a clear framework for understanding and addressing these risks. The following subsections first explore key primary defense techniques against security attacks on LLMs, followed by an analysis of major security attack types with their defense methods, focusing on a secure and reliable LLM deployment.

5.1. Defense Techniques Against Security Attacks on LLMs

This subsection outlines key defense techniques proposed to enhance the robustness and safety of LLMs, belonging to three main categories. The first focuses on preventing LLMs from generating harmful output using a set of rules and constraints at the input or output levels. Red teaming and content filtering play a critical role in intercepting and blocking potentially harmful interactions before they occur, ensuring that LLMs adhere to ethical and safety standards. The second category is concerned with modifying LLMs’ internal mechanisms or representations to improve LLM robustness and safety. Safety fine-tuning and model merging are key techniques within this category, working to make the model more resilient to adversarial attacks and misalignments, while strengthening its safety protocols through model optimization. The third category integrates the strengths of these defense strategies to offer a more comprehensive protection framework.
Figure 4 presents a quantitative analysis of the defense strategies examined in this review. Among the 39 defense-related studies analyzed, the largest portion, 43.6%, falls under a diverse “Others” category, which includes evaluation frameworks, hybrid detection pipelines, policy-enforcement mechanisms, and guardrail-based interventions. Safety fine-tuning and model merging each account for 15.4% of the studies, while red teaming and content filtering each contribute 12.8%. This distribution indicates that although well-established methods remain prevalent, there is substantial research devoted to innovative or hybrid defense methodologies, reflecting an evolving landscape aimed at enhancing the robustness and safety of LLMs beyond conventional approaches.
Red Team Defenses: This is an effective technique for simulating real-world attack scenarios to identify LLM vulnerabilities [146]. The process begins with an attack scenario simulation, where researchers test LLM responses to issues such as abusive language. This is followed by test case generation using classifiers to create scenarios that help eliminate harmful outputs. Finally, the attack detection process assesses the susceptibility of LLMs to adversarial attacks. Continuous updates to security policies, refinement of procedures, and strengthening of technical defenses ensure that LLMs remain robust and secure against evolving threats. Ganguli et al. [147] proposed an efficient AI-assisted interface to facilitate large-scale Red Team data collection for further analysis. Additionally, their research explored the scalability of different LLM types and sizes under Red Team attacks and their ability to reject various threats.
Challenges: This technique poses several challenges, such as its resource-intensive nature and the need for skilled experts to effectively simulate complex attack strategies [148]. Red Teaming is still in its early stages with limited statistical data. However, recent advancements, such as leveraging automated approaches for test case generation and classification, have improved scalability and diversity in Red Teaming efforts [147,149].
Content Filtering: This technique encompasses input and output filtering to protect the integrity and appropriateness of LLM interactions by identifying and intercepting harmful inputs and outputs in real time. Recent advancements have introduced two key approaches: rule-based and learning-based systems. Rule-based systems rely on predefined rules or patterns, such as detecting adversarial prompts with high perplexity values [150]. To neutralize semantically sensitive attacks, Jain et al. [151] utilize paraphrasing and re-tokenization techniques. On the other hand, learning-based systems employ innovative methods, such as an alignment-checking function designed by Cao et al. [152] to detect and block alignment-breaking attacks, enhancing security.
Challenges: Despite advancements, this technique still suffers from significant limitations that hinder its effectiveness and robustness [153]. One key issue is the evolving nature of adversarial prompts designed to bypass detection mechanisms. Rule-based filters, while simple, struggle to remain effective against increasingly sophisticated attacks. Learning-based systems, which rely on large and diverse datasets, may fail to fully capture harmful content variations and struggle with balancing sensitivity and specificity [154]. This imbalance can lead to false positives (flagging benign content) or false negatives (missing harmful content). Additionally, the lack of explainability in machine learning-based filtering decisions complicates transparency and acceptance, underscoring the need for more interpretable and trustworthy models.
Safety Fine-Tuning: This widely used technique customizes pre-trained LLMs for specific downstream tasks, offering flexibility and adaptability. Recent research, such as that presented by Xiangyu et al. [155], found that fine-tuning with a few adversarially designed training examples can compromise the safety alignment of LLMs. Additionally, even benign, commonly used datasets can inadvertently degrade this alignment, highlighting the risks of unregulated fine-tuning. Researchers have proposed data augmentation and constrained optimization objectives to address these challenges [156]. Data augmentation enriches the training dataset with a diverse range of samples, including adversarial and edge cases, helping the model generalize safety principles more effectively. Constrained optimization applies additional loss functions or restrictions during training to guide the model toward prioritizing safety without sacrificing task performance. Bianchi et al. [157] and Zhao et al. [158] validate the effectiveness of this technique, suggesting that incorporating a small number of safety-related examples during fine-tuning improves the safety of LLMs without reducing their practical effectiveness. Together, these defensive techniques enhance safety throughout the generative process, reducing the risk of adversarial attacks and unintentional misalignments while preserving the adaptability and effectiveness of fine-tuned LLMs for real-world tasks.
Challenges: While widely used, integrating safety-related examples into fine-tuning can limit the generalization of LLMs to benign, non-malicious inputs or degrade performance on specific tasks [159]. Additionally, the reliance on manually curated safety examples or prompt templates demands considerable human expertise and is subject to subjective biases, leading to inconsistencies and affecting model reproducibility across different use cases [160]. Moreover, fine-tuning itself can both enhance safety through the introduction of safety-focused examples and expose the model to new vulnerabilities when adversarial examples are incorporated.
Model Merging: The model merging technique combines multiple models to enhance robustness and improve performance [161]. This method complements other defense strategies by significantly strengthening the resilience of LLMs against adversarial manipulations. By leveraging the diversity of multiple models, it creates a more robust system capable of handling adversarial inputs while maintaining generalization across various tasks [162]. When merging models, different fine-tuned models that are initialized from the same pre-trained backbone share optimization trajectories while diverging in specific parameters tailored to different tasks. These diverging parameters can be merged through arithmetic averaging, allowing the model to generalize better across domain inputs and perform multi-task learning. This idea has been proven effective in fields like Federated Learning (FL) and Continual Learning (CL), where model parameters from different tasks are combined to mitigate conflicts. Zou et al. [163] and Kadhe et al. [164] applied model merging techniques to balance unlearning unsafe responses while minimizing over-defensiveness.
Challenges: Despite its potential, model merging faces significant challenges due to a lack of deep theoretical investigation in two key areas [165]. First, the relationship between adversarially updated model parameters derived from unlearning objectives and the embeddings associated with safe responses remains unclear. There is a risk that adversarial training with a limited set of harmful response texts could lead to overfitting, making the model more susceptible to new, unseen jailbreaking prompts. This limited approach may fail to generalize effectively to novel adversarial threats, undermining the robustness of LLMs. Second, controlling the over-defensiveness of merged model parameters presents a significant challenge. While merging aims to improve resilience, it does not provide a clear methodology for preventing over-defensive behavior, where the model may excessively restrict certain types of outputs, limiting its usability and flexibility. This lack of control could hinder the model’s ability to balance safety with task performance, as overly defensive behavior may result in missed or overly cautious responses, impacting user experience and task accuracy [166]. These challenges highlight the need for further research into the theoretical foundations of model merging to ensure its effectiveness and versatility as a defense mechanism.

5.2. Adversarial Attacks

Adversarial attacks are a key vulnerability in LLMs, involving input manipulation to trigger errors or unintended outputs [59]. These attacks exploit the sensitivity of a model to minor input changes in order to deceive it. In Deep Neural Networks (DNNs), such attacks disrupt operations by altering input data, and this also applies to LLMs, with potential effects such as spreading misinformation or creating biased content [167]. Across the 12 analyzed studies focused on attacks, the four main types (backdoor attacks, data poisoning, prompt injection, and jailbreaking) were equally represented, each accounting for 25% of the total. This even distribution suggests that research attention has been evenly allocated across these threat areas, reflecting a balanced recognition of the risks they pose to LLM security. In the following, we review those attacks and defenses. At a high level, Table 6 highlights various defense techniques proposed to protect LLMs against these four key security vulnerabilities. The following subsections explore these techniques, discussing their mechanisms, limitations, and practical challenges.
Figure 5 presents a structured overview of security attacks targeting LLMs alongside corresponding defense techniques. By integrating these advanced security frameworks, LLMs can achieve enhanced robustness and reliability for safe and trustworthy deployment in real-world applications.

5.2.1. Data Poisoning

Adversaries pose a serious threat to LLMs by deliberately altering training datasets. Through the insertion of misleading samples, they create subtle distortions that bias the model, leading to errors in prediction and decision-making [168]. Adversaries can manipulate DNNs to serve malicious purposes by corrupting training data. Research indicates that poisoned data can be discreetly added to datasets during training or fine-tuning of Language Models (LMs), exploiting vulnerabilities in data pipelines [169]. These risks are particularly pronounced when using external or unverified dataset sources, highlighting the need for strict data curation and validation to mitigate such threats.
Defense Techniques: Several defense techniques have been employed to protect LLMs from poisoning attacks, including data validation, filtering, cleaning, and anomaly detection [170]. Yan et al. [171] introduced ParaFuzz, a framework designed to detect poisoned samples in NLP models. This technique employs fuzzing, a software testing methodology, to identify poisoned samples as outliers by analyzing the interpretability of model predictions. ParaFuzz provides a robust method for filtering malicious data during training, noting that poisoned data often diverges from benign distributions and takes longer for a model to learn. Additionally, dataset curation techniques, as highlighted by Contiella et al. [172], emphasize the importance of removing near-duplicate poisoned samples, identifying known triggers, and isolating anomalies in training datasets. This approach has proven effective against attacks such as AutoPoison [173] and TrojanPuzzle [174]. Fine-pruning has also emerged as an effective defense against data poisoning attacks [175], while perplexity filtering and query rephrasing are specifically employed to mitigate white-box attacks such as AgentPoison [176]. Despite these advancements, continuous research is necessary to refine and optimize defenses against evolving threats in modern LLMs.

5.2.2. Backdoor Attacks

In a backdoor attack, poisoned samples are introduced into a model to embed hidden malicious functionality. These attacks allow the model to perform normally on benign inputs but behave maliciously on specific, poisoned inputs. Backdoor attacks in LLMs can be categorized as input-triggered, prompt-triggered, instruction-triggered, and demonstration-triggered [177]. Adversaries use techniques such as injecting triggers into training data, modifying prompts to elicit malicious output, exploiting fine-tuning processes with poisoned instructions, or subtly altering demonstrations to manipulate model behavior while embedding hidden vulnerabilities without detection [178].
Defense Techniques: To counter backdoor attacks in LMs, various advanced mitigation techniques have been proposed. One such approach is Fine Mixing, introduced by Zhang et al. [179], which employs a two-step fine-tuning process that merges backdoor-optimized weights with pre-trained weights, followed by refinement on a clean dataset. This method also integrates Embedding Purification (E-PUR) to neutralize backdoors in word embeddings, improving model robustness. Another technique, CUBE (Clustering-Based Unsupervised Backdoor Elimination) by Cui et al. [180], leverages the HDBSCAN density clustering algorithm to identify and separate poisoned samples from clean ones based on distinct clustering patterns. Additionally, Masking Differential Prompting (MDP) by Xi et al. [181] offers an efficient and adaptable defense for prompt-based LLMs by exploiting the increased sensitivity of poisoned samples to random masking, which causes significant variations in their probability distributions. While these techniques enhance model security, further research is needed to assess their effectiveness against advanced backdoor attacks in modern LLMs such as GPT-4 and Llama-3.

5.3. Prompt Hacking

Prompt hacking involves strategically manipulating input prompts to influence the output of LLMs. By crafting precise and intentional prompts, attackers aim to direct model responses toward specific objectives, which may include generating unintended or harmful outcomes. Since LLMs operate through interaction-based systems where user queries drive their outputs, carefully designed prompts can exploit the model’s underlying mechanisms, overriding safeguards and producing misleading, malicious, or unexpected results.

5.3.1. Jailbreaking Attacks

Jailbreaking attacks involve bypassing software restrictions imposed by manufacturers or service providers, granting users elevated access to system functionality. While commonly associated with Apple’s iOS [182], similar practices exist for Android and other systems. Jailbreaking grants users privileged access to core functions and the file system, allowing for unauthorized application installations, bypassing regional locks, and performing advanced system manipulations [183]. However, it introduces significant risks, such as compromised security, loss of functionality, and potentially irreversible damage to the device.
Defense Techniques: Several defense techniques have been developed to mitigate jailbreaking attacks on LLMs. Kumar et al. [184] introduced substring-safety filtering, which analyzes and filters input prompts to block harmful or unintended responses, providing robust defense despite its increased complexity for longer inputs. Wu et al. [185] developed a self-reminder system that directs LLMs toward safe behaviors, improving context-specific responses and reducing jailbreak success rates, particularly in role-playing scenarios. Jin et al. [186] proposed a goal prioritization method that prioritizes safety over utility in response generation, thereby reducing harmful content risks. Additionally, Robey et al. [187] introduced the Smooth LLM framework, which applies randomized smoothing by perturbing input prompts and aggregating outputs to lower the success rate of instruction-based attacks on models such as Llama-2 and Vicuna, enhancing model defenses against emerging threats.

5.3.2. Prompt Injection

Prompt injection manipulates LLMs to generate attacker-desired outputs by bypassing safety mechanisms [188]. Carefully crafted prompts enable adversaries to override original commands or execute malicious actions. This vulnerability facilitates harmful content generation, including data leakage, unauthorized access, hate speech, disinformation, and other security breaches [189]. In prompt injection attacks, adversaries may directly instruct the LLM to bypass filtering mechanisms or process compromised inputs. Additionally, attackers can pre-inject harmful prompts into web content, which the LLM may inadvertently process, making these attacks difficult to detect and mitigate.

5.3.3. Advanced Defense Tactics: Model Merging, Adversarial Training, and Unlearning

Prompt injection defenses are broadly categorized into prevention-based and detection-based methods, with ongoing research efforts enhancing their effectiveness. Prevention-based defenses, as described in [190], seek to block injected tasks through techniques such as paraphrasing, re-tokenization [151], and data prompt isolation [191]. Paraphrasing disrupts the sequence of injected data, while re-tokenization breaks down infrequent tokens in compromised prompts, mitigating malicious instructions. Detection-based defenses, such as those proposed by Wang et al. [104], assess prompt integrity through response-based or prompt-based evaluations. Perplexity-based detection [192], for instance, identifies compromised triggers by analyzing quality degradation and increased perplexity. Despite these advancements, Liu et al. [193] found that traditional prevention and detection techniques remain inadequate against sophisticated optimization-based attacks, such as Judge Deceiver [194], highlighting the need for continuous innovation in this field.
In addition to the above methods, advanced defense tactics such as model merging, adversarial training, and unlearning are gaining attention. Model merging combines the parameters of multiple fine-tuned models into one unified model, improving robustness without requiring full retraining [195]. Adversarial training, and more recently adversarial unlearning, enhances resistance by training models with crafted malicious inputs and explicitly countering attempts to reintroduce harmful knowledge [196]. Unlearning (machine unlearning) allows models to forget harmful, biased, or private training data without retraining from scratch, with recent frameworks such as those suggested in [164], which offer structured approaches to evaluation and control.
In [197], the authors introduce a certified unlearning framework that provides formal guarantees for forgetting specific data in neural networks. Their approach ensures that once data are removed, the behavior of the model is provably independent of those data, improving trust and safety in secure applications.
These defense mechanisms differ in complexity, effectiveness, and applicability. Prompt engineering is lightweight and easy to apply, but may not generalize across all threats. Model merging enhances robustness with minimal retraining costs, although it may introduce performance variance. Unlearning provides strong privacy guarantees but often incurs computational overhead and can affect model accuracy. Together, they form a layered defense strategy in which trade-offs between security, efficiency, and adaptability can be balanced according to deployment needs.
Table 7 presents an analysis of security techniques employed across various approaches to safeguard LLMs from security attacks. This analysis highlights the diverse methodologies adopted in existing research and provides insights into their effectiveness and robustness in protecting LLMs.

5.4. Real-Life Case Studies and Benchmarking

In addition to theoretical approaches, several practical applications of LLMs in cybersecurity have recently been demonstrated. PentestGPT has been applied to penetration testing tasks, assisting in identifying vulnerabilities and guiding exploitation processes more efficiently than traditional manual methods [50]. Further evaluations of real-world penetration testing platforms demonstrate how LLM modular agents can efficiently perform complex security tasks [50]. In parallel, ChatAFL, an LLM-guided protocol fuzzing system, has significantly outperformed state-of-the-art fuzzers such as AFLNet and NSFuzz, discovering nine previously unknown vulnerabilities and achieving approximately 50% greater state coverage in protocol implementations [12]. These case studies underscore both the promise of LLM-based tools for accelerating vulnerability detection and the need for continuous benchmarking to assess accuracy, scalability, and potential misuse.

6. Limitations of Existing Works and Future Research Directions

While LLMs have shown significant potential in addressing cybersecurity challenges, several inherent limitations hinder their broader adoption and effectiveness in security tasks. One major limitation is the lack of interpretability, as the black-box nature of LLMs prevents users from understanding how models make security-critical decisions. This undermines trust and transparency, which are essential for deployment in sensitive domains. The lack of transparency is particularly concerning due to the increased risks posed by AI-generated content (AIGC) [199], including privacy breaches, the spread of misinformation, and the production of vulnerable code [200]. Moreover, the use of LLMs in cybersecurity domains such as network, hardware, blockchain, and content security is constrained by the lack of high-quality, domain-specific datasets needed for effective fine-tuning [196]. Future research should focus on developing explainability tools to clarify model decisions, collaborating with domain experts to curate relevant datasets, and refining models to integrate cybersecurity-specific knowledge. Expanding LLM capabilities to handle multimodal inputs, such as voice, images, and videos, would enhance their contextual understanding in security settings [85]. Addressing these challenges will enable LLMs to better support cybersecurity efforts, offering deeper insights and facilitating the development of secure, automated solutions.
Despite substantial advancements in understanding security vulnerabilities in machine learning models, research on mitigating backdoor attacks in LLM-based tasks, such as text summarization and generation, remains limited [195]. Mitigating backdoor attacks is crucial for ensuring robust defenses and secure LLM deployment. While poisoning attacks on ML models have been well studied [201], advanced techniques such as ProAttack [178] and BadPrompt [202] are not yet fully addressed.
To enhance LLM security and build robust systems, further research is required across various tasks and architectures. Current defense measures, such as dataset cleansing [203] and anomaly detection, often disrupt the development process by inadvertently removing critical data. Similarly, methods like early stopping after specific training epochs offer moderate protection but frequently lead to reduced model performance [204]. Adaptive defense mechanisms are necessary to strike a balance between securing models and maintaining their utility.
Despite their potential, LLMs remain inherently vulnerable to security risks, posing critical challenges for their application in cybersecurity tasks. One key avenue for future research is enabling LLMs to autonomously detect and resolve vulnerabilities within their architectures. Such self-repair capabilities would enhance resilience while reducing dependence on external restrictions. Achieving this requires a dual focus: automating cybersecurity tasks using LLMs while concurrently implementing robust self-protection mechanisms to mitigate model-specific risks. Current research highlights significant shortcomings in LLM defenses. For example, while safety features in existing models, such as those in ChatGPT, can prevent simple attacks, multi-step exploits continue to compromise these systems [39]. Moreover, newer AI systems, such as Bing AI, have demonstrated even greater susceptibility to advanced attack strategies [205]. Techniques inspired by human reasoning, such as self-reminder mechanisms [206], have been proposed to mitigate these vulnerabilities; however, their effectiveness in handling complex queries remains underexplored. Addressing this gap requires deeper insights into how these mechanisms influence model reasoning and further refinements to optimize defense strategies without compromising performance.
Defending LLMs against vulnerabilities requires addressing both intrinsic weaknesses and external adversarial threats. Given the impracticality of manually auditing extensive training data sets, alternative strategies have been used, such as personal information filtering and restrictive terms of use, to mitigate the inclusion of sensitive content. Advanced techniques like MDP [181] have shown promise in countering earlier backdoor attacks. However, these methods struggle with complex NLP tasks, such as paraphrasing and semantic similarity, and do not fully address emerging threats like BadPrompt [202] and BToP [207]. The lack of comprehensive evaluation metrics, such as perplexity, further complicates the assessment of attack and defense strategies. Additionally, the high computational demands of LLM training in federated learning (FL) environments create additional obstacles to scalable and adaptive defenses. Organizations such as OpenAI implement defense measures, yet these do not fully eliminate risks like jailbreaking. Malicious datasets and prompts still allow attackers to bypass security mechanisms and generate harmful content [208]. These persistent vulnerabilities underscore the urgent need for flexible and scalable defense strategies that can adapt to evolving attack techniques. In summary, securing LLMs requires a holistic strategy to improve robustness, scalability, and effectiveness across various applications. Future studies should focus on developing interpretability frameworks, autonomous protection mechanisms, multimodal capabilities, and improved evaluation metrics to create more secure and reliable AI-driven cybersecurity solutions.
Emerging Multimodal and Autonomous Defenses: Another promising direction is the use of multimodal LLMs and autonomous self-protection mechanisms. Multimodal LLMs integrate information from different sources, such as text, images, code, and network traffic, enabling more accurate detection of complex cyber threats [209]. Autonomous self-protection mechanisms, such as genetic AI frameworks in which multiple LLM agents assign roles and monitor each other’s output, help detect manipulation and enforce safety automatically [97,210]. Although these approaches are still in the early stages, they offer promising new paths toward making LLM-based security systems more adaptive, resilient, and trustworthy; however, real-world deployment and evaluation remain limited.

Ethical and Privacy Considerations

While LLMs hold significant promise in cybersecurity, their deployment raises important ethical and privacy concerns. These include risks of misuse for malicious purposes, exposure of sensitive data during model training, and the amplification of bias or discriminatory outcomes. Ensuring privacy requires techniques such as differential privacy and secure data handling, while ethical deployment involves establishing clear guidelines for responsible use and transparency [207]. Recent surveys have also proposed practical mitigation strategies and frameworks to manage privacy risks and ensure accountability in LLM-based systems. For example, ref. [211] provides a comprehensive survey on the security and privacy challenges of AI-generated content in the Metaverse, highlighting issues that are also relevant to LLM-based cybersecurity systems. Addressing these concerns is essential to maintaining trust and ensuring that LLM-based security solutions enhance, rather than compromise, the safety of users and organizations.
In light of these considerations, fully harnessing the potential of LLMs in cybersecurity requires addressing complex socio-technical challenges, with a focus on algorithmic accountability, structural biases, and regulatory constraints. Ensuring algorithmic accountability means that the reasoning and decision-making processes within LLM-driven security systems must remain transparent, traceable, and open to independent scrutiny. The lack of clear explanations in critical scenarios, such as intrusion detection or vulnerability analysis, can undermine trust, delay response, and pose legal risks [212,213]. Equally important is the mitigation of structural biases, which can emerge due to imbalanced training datasets or inadequate fine-tuning. These biases may result in skewed detection patterns, inadequate representation of certain threat types, or irregular false alarms affecting particular user groups, issues that not only compromise fairness but also introduce vulnerabilities that adversaries could exploit [214]. Addressing these biases necessitates intentional dataset development, ongoing audits, and the application of algorithms that mitigate bias during both training and inference. Overarching both of these aspects is the growing impact of regulatory frameworks, such as the EU AI Act [215], the GDPR [216], and the NIST AI Risk Management Framework [217], which set clear mandates for data governance, transparency, human oversight, and risk management. When addressed with foresight, these regulations can serve not as restrictions but as guiding principles, complementing accountability and bias mitigation to ensure that LLM-based security systems are robust both technically and socially.

7. Conclusions

LLMs are at the forefront of transforming cybersecurity, offering innovative solutions to address increasingly complex challenges. This survey analyzed LLM applications on 32 security tasks covering eight domains, including blockchain, hardware, IoT, and cloud security, with a focus on task-specific applications such as vulnerability detection, malware analysis, and threat intelligence automation. Their versatility and significant impact were demonstrated in various cybersecurity applications.
This study initially explored the landscape of LLM applications, categorizing security tasks within each domain while highlighting their potential in modern cybersecurity. Furthermore, we examined LLM vulnerabilities, such as adversarial attacks and prompt injection, and identified mitigation strategies, including re-tokenization, perplexity-based detection, prompt isolation, and other defense mechanisms aimed at enhancing security. Another important challenge is explainability and interpretability. In high-security scenarios, decision-makers must understand how and why an LLM produces its outputs to build trust and ensure accountability. Without interpretability, even accurate predictions may not be adopted due to concerns about hidden biases or unpredictable behaviors. Recent methods, such as explainable AI (XAI) techniques and retrieval-augmented reasoning, have been proposed to make LLM decisions more transparent and understandable. Future work should integrate these methods into cybersecurity applications, enabling analysts to verify outputs, detect anomalies, and maintain confidence in AI-driven defenses. This survey lays the foundation for future research, providing key insights for integrating LLMs into secure cybersecurity frameworks. By addressing existing challenges, LLMs can evolve to develop robust solutions that counter emerging cyber threats and safeguard critical digital infrastructure.

Author Contributions

Conceptualization, M.A. and D.M.; Methodology, M.A. and D.M.; Investigation, N.O.J.; Visualization, N.O.J.; Writing original draft preparation, N.O.J.; Writing review and editing, M.A. and D.M.; Supervision, M.A. and D.M.; Project administration, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

ABACAttribute-Based Access Control
ACI-IoTArmy Cyber Institute Internet-of-Things dataset
AIArtificial Intelligence
APTAdvanced Persistent Threat
BMCBounded Model Checking
BNNBinary Neural Network
CUBEClustering-Based Unsupervised Backdoor Elimination
CTICyber Threat Intelligence
CVECommon Vulnerabilities and Exposures
CWECommon Weakness Enumeration
DHRData Handling Requirements
FedRAMPFederal Risk and Authorization Management Program
GANGenerative Adversarial Network
IDSIntrusion Detection System
IoTInternet of Things
KGKnowledge Graph
LLMLarge Language Model
MDPMasking Differential Prompting
MIPSMicroprocessor without Interlocked Pipeline Stages
NIDSNetwork Intrusion Detection System
NISTNational Institute of Standards and Technology
NLPNatural Language Processing
PBOMPipeline Bill of Materials
PCAPrincipal Component Analysis
PRISMAPreferred Reporting Items for Systematic Reviews and Meta-Analyses
RAGRetrieval-Augmented Generation
RCERemote Code Execution
RISC-VReduced Instruction Set Computer V (open ISA)
RTARepresent Them All
SIEMSecurity Information and Event Management
SOCSecurity Operations Center
SoCSystem-on-Chip
SQLiSQL Injection
SSLSecure Sockets Layer
XAIExplainable Artificial Intelligence
XSSCross-Site Scripting
ZTAZero Trust Architecture

References

  1. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2023. [Google Scholar] [CrossRef]
  2. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2019. [Google Scholar] [CrossRef]
  3. OpenAI. GPT-3.5. 2022. Available online: https://platform.openai.com/docs/models/gpt-3-5 (accessed on 11 January 2025).
  4. OpenAI. GPT-4. 2023. Available online: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo (accessed on 11 January 2025).
  5. Chen, C.; Su, J.; Chen, J.; Wang, Y.; Bi, T.; Yu, J.; Wang, Y.; Lin, X.; Chen, T.; Zheng, Z. When ChatGPT Meets Smart Contract Vulnerability Detection: How Far Are We? arXiv 2024. [Google Scholar] [CrossRef]
  6. Bai, Y.; Kadavath, S.; Kundu, S.; Askell, A.; Kernion, J.; Jones, A.; Chen, A.; Goldie, A.; Mirhoseini, A.; McKinnon, C.; et al. Constitutional AI: Harmlessness from AI Feedback. arXiv 2022. [Google Scholar] [CrossRef]
  7. Houssel, P.R.B.; Singh, P.; Layeghy, S.; Portmann, M. Towards Explainable Network Intrusion Detection using Large Language Models. arXiv 2024. [Google Scholar] [CrossRef]
  8. Abdallah, A.; Jatowt, A. Generator-Retriever-Generator Approach for Open-Domain Question Answering. arXiv 2024. [Google Scholar] [CrossRef]
  9. Ou, J.; Lu, J.; Liu, C.; Tang, Y.; Zhang, F.; Zhang, D.; Gai, K. DialogBench: Evaluating LLMs as Human-like Dialogue Systems. arXiv 2024. [Google Scholar] [CrossRef]
  10. Aguina-Kang, R.; Gumin, M.; Han, D.H.; Morris, S.; Yoo, S.J.; Ganeshan, A.; Jones, R.K.; Wei, Q.A.; Fu, K.; Ritchie, D. Open-Universe Indoor Scene Generation using LLM Program Synthesis and Uncurated Object Databases. arXiv 2024. [Google Scholar] [CrossRef]
  11. Shayegani, E.; Mamun, M.A.A.; Fu, Y.; Zaree, P.; Dong, Y.; Abu-Ghazaleh, N. Survey of vulnerabilities in large language models revealed by adversarial attacks. arXiv 2023, arXiv:2310.10844. [Google Scholar] [CrossRef]
  12. Xu, H.; Wang, S.; Li, N.; Wang, K.; Zhao, Y.; Chen, K.; Yu, T.; Liu, Y.; Wang, H. Large Language Models for Cyber Security: A Systematic Literature Review. arXiv 2024. [Google Scholar] [CrossRef]
  13. Yigit, Y.; Buchanan, W.J.; Tehrani, M.G.; Maglaras, L. Review of Generative AI Methods in Cybersecurity. arXiv 2024. [Google Scholar] [CrossRef]
  14. Saha, D.; Tarek, S.; Yahyaei, K.; Saha, S.K.; Zhou, J.; Tehranipoor, M.; Farahmandi, F. LLM for SoC Security: A Paradigm Shift. arXiv 2023. [Google Scholar] [CrossRef]
  15. He, Z.; Li, Z.; Yang, S.; Qiao, A.; Zhang, X.; Luo, X.; Chen, T. Large Language Models for Blockchain Security: A Systematic Literature Review. arXiv 2024. [Google Scholar] [CrossRef]
  16. Divakaran, D.M.; Peddinti, S.T. LLMs for Cyber Security: New Opportunities. arXiv 2024. [Google Scholar] [CrossRef]
  17. Ferrag, M.A.; Alwahedi, F.; Battah, A.; Cherif, B.; Mechri, A.; Tihanyi, N. Generative AI and Large Language Models for Cyber Security: All Insights You Need. arXiv 2024. [Google Scholar] [CrossRef]
  18. Liang, H.; Li, X.; Xiao, D.; Liu, J.; Zhou, Y.; Wang, A.; Li, J. Generative Pre-Trained Transformer-Based Reinforcement Learning for Testing Web Application Firewalls. IEEE Trans. Dependable Secur. Comput. 2024, 21, 309–324. [Google Scholar] [CrossRef]
  19. Liu, M.; Li, K.; Chen, T.A. DeepSQLi: Deep Semantic Learning for Testing SQL Injection. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA’20), Virtual Event, USA, 18–22 July 2020; Association for Computing Machinery: New York, NY, USA; pp. 286–297. [Google Scholar]
  20. Meng, R.; Mirchev, M.; Böhme, M.; Roychoudhury, A. Large Language Model–Guided Protocol Fuzzing. In Proceedings of the 31st Annual Network and Distributed System Security Symposium (NDSS 2024), San Diego, CA, USA, 26 February–1 March 2024; The Internet Society: Reston, VA, USA, 2024. [Google Scholar]
  21. Liu, R.; Wang, Y.; Xu, H.; Qin, Z.; Liu, Y.; Cao, Z. Malicious URL Detection via Pretrained Language Model Guided Multi-Level Feature Attention Network. arXiv 2023. [Google Scholar] [CrossRef]
  22. Moskal, S.; Laney, S.; Hemberg, E.; O’Reilly, U. LLMs Killed the Script Kiddie: How Agents Supported by Large Language Models Change the Landscape of Network Threat Testing. arXiv 2023. [Google Scholar] [CrossRef]
  23. Temara, S. Maximizing Penetration Testing Success with Effective Reconnaissance Techniques using ChatGPT. arXiv 2023. [Google Scholar] [CrossRef]
  24. Tihanyi, N.; Bisztray, T.; Jain, R.; Ferrag, M.A.; Cordeiro, L.C.; Mavroeidis, V. The FormAI Dataset: Generative AI in Software Security through the Lens of Formal Verification. In Proceedings of the 19th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2023, San Francisco, CA, USA, 8 December 2023; ACM: New York, NY, USA, 2023; pp. 33–43. [Google Scholar] [CrossRef]
  25. Sun, Y.; Wu, D.; Xue, Y.; Liu, H.; Wang, H.; Xu, Z.; Xie, X.; Liu, Y. GPTScan: Detecting Logic Vulnerabilities in Smart Contracts by Combining GPT with Program Analysis. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, ICSE 2024, Lisbon, Portugal, 14–20 April 2024; ACM: New York, NY, USA, 2024; pp. 166:1–166:13. [Google Scholar] [CrossRef]
  26. Meng, X.; Srivastava, A.; Arunachalam, A.; Ray, A.; Silva, P.H.; Psiakis, R.; Makris, Y.; Basu, K. Unlocking Hardware Security Assurance: The Potential of LLMs. arXiv 2023. [Google Scholar] [CrossRef]
  27. Du, Y.; Yu, Z. Pre-training Code Representation with Semantic Flow Graph for Effective Bug Localization. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2023, San Francisco, CA, USA, 3–9 December 2023; ACM: New York, NY, USA, 2023; pp. 579–591. [Google Scholar] [CrossRef]
  28. Joyce, R.J.; Patel, T.; Nicholas, C.; Raff, E. AVScan2Vec: Feature Learning on Antivirus Scan Data for Production-Scale Malware Corpora. In Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, AISec 2023, Copenhagen, Denmark, 30 November 2023; ACM: New York, NY, USA, 2023; pp. 185–196. [Google Scholar] [CrossRef]
  29. Labonne, M.; Moran, S. Spam-T5: Benchmarking Large Language Models for Few-Shot Email Spam Detection. arXiv 2023. [Google Scholar] [CrossRef]
  30. Malul, E.; Meidan, Y.; Mimran, D.; Elovici, Y.; Shabtai, A. GenKubeSec: LLM-Based Kubernetes Misconfiguration Detection, Localization, Reasoning, and Remediation. arXiv 2024. [Google Scholar] [CrossRef]
  31. Kasula, V.K.; Yadulla, A.R.; Konda, B.; Yenugula, M. Fortifying cloud environments against data breaches: A novel AI-driven security framework. World J. Adv. Res. Rev. 2024, 24, 1613–1626. [Google Scholar] [CrossRef]
  32. Bandara, E.; Shetty, S.; Mukkamala, R.; Rahman, A.; Foytik, P.B.; Liang, X.; De Zoysa, K.; Keong, N.W. DevSec-GPT—Generative-AI (with Custom-Trained Meta’s Llama2 LLM), Blockchain, NFT and PBOM Enabled Cloud Native Container Vulnerability Management and Pipeline Verification Platform. In Proceedings of the 2024 IEEE Cloud Summit, Washington, DC, USA, 27–28 June 2024; pp. 28–35. [Google Scholar] [CrossRef]
  33. Nguyen, T.N. OllaBench: Evaluating LLMs’ Reasoning for Human-Centric Interdependent Cybersecurity. arXiv 2024. [Google Scholar] [CrossRef]
  34. Ji, H.; Yang, J.; Chai, L.; Wei, C.; Yang, L.; Duan, Y.; Wang, Y.; Sun, T.; Guo, H.; Li, T.; et al. SEvenLLM: Benchmarking, Eliciting, and Enhancing Abilities of Large Language Models in Cyber Threat Intelligence. arXiv 2024, arXiv:2405.03446. [Google Scholar] [CrossRef]
  35. Luo, H.; Luo, J.; Vasilakos, A.V. BC4LLM: A perspective of trusted artificial intelligence when blockchain meets large language models. Neurocomputing 2024, 599, 128089. [Google Scholar] [CrossRef]
  36. Ahmad, B.; Thakur, S.; Tan, B.; Karri, R.; Pearce, H. Fixing Hardware Security Bugs with Large Language Models. arXiv 2023. [Google Scholar] [CrossRef]
  37. Tseng, P.; Yeh, Z.; Dai, X.; Liu, P. Using LLMs to Automate Threat Intelligence Analysis Workflows in Security Operation Centers. arXiv 2024, arXiv:2407.13093. [Google Scholar] [CrossRef]
  38. Scanlon, M.; Breitinger, F.; Hargreaves, C.; Hilgert, J.; Sheppard, J. ChatGPT for digital forensic investigation: The good, the bad, and the unknown. Forensic Sci. Int. Digit. Investig. 2023, 46, 301609. [Google Scholar] [CrossRef]
  39. Martin, J.L. The Ethico-Political Universe of ChatGPT. J. Soc. Comput. 2023, 4, 1–11. [Google Scholar] [CrossRef]
  40. López Delgado, J.L.; López Ramos, J.A. A Comprehensive Survey on Generative AI Solutions in IoT Security. Electronics 2024, 13, 4965. [Google Scholar] [CrossRef]
  41. Zhang, H.; Sediq, A.B.; Afana, A.; Erol-Kantarci, M. Large Language Models in Wireless Application Design: In-Context Learning-enhanced Automatic Network Intrusion Detection. arXiv 2024. [Google Scholar] [CrossRef]
  42. Pazho, A.D.; Noghre, G.A.; Purkayastha, A.A.; Vempati, J.; Martin, O.; Tabkhi, H. A survey of graph-based deep learning for anomaly detection in distributed systems. IEEE Trans. Knowl. Data Eng. 2023, 36, 1–20. [Google Scholar] [CrossRef]
  43. Chen, P.; Desmet, L.; Huygens, C. A Study on Advanced Persistent Threats. In Proceedings of the IFIP International Conference on Communications and Multimedia Security, Aveiro, Portugal, 25–26 September 2014; pp. 63–72. [Google Scholar] [CrossRef]
  44. Aghaei, E.; Al-Shaer, E.; Shadid, W.; Niu, X. Automated CVE Analysis for Threat Prioritization and Impact Prediction. arXiv 2023. [Google Scholar] [CrossRef]
  45. Cuong Nguyen, H.; Tariq, S.; Baruwal Chhetri, M.; Quoc Vo, B. Towards effective identification of attack techniques in cyber threat intelligence reports using large language models. In Proceedings of the Companion Proceedings of the ACM on Web Conference 2025, Sydney, Australia, 28 April–2 May 2025; pp. 942–946. [Google Scholar]
  46. Charan, P.V.S.; Chunduri, H.; Anand, P.M.; Shukla, S.K. From Text to MITRE Techniques: Exploring the Malicious Use of Large Language Models for Generating Cyber Attack Payloads. arXiv 2023. [Google Scholar] [CrossRef]
  47. Grinbaum, A.; Adomaitis, L. Dual use concerns of generative AI and large language models. J. Responsible Innov. 2024, 11, 2304381. [Google Scholar] [CrossRef]
  48. Ienca, M.; Vayena, E. Dual use in the 21st century: Emerging risks and global governance. Swiss Med. Wkly. 2018, 148, w14688. [Google Scholar] [CrossRef]
  49. Happe, A.; Kaplan, A.; Cito, J. LLMs as Hackers: Autonomous Linux Privilege Escalation Attacks. arXiv 2024. [Google Scholar] [CrossRef]
  50. Deng, G.; Liu, Y.; Mayoral-Vilches, V.; Liu, P.; Li, Y.; Xu, Y.; Zhang, T.; Liu, Y.; Pinzger, M.; Rass, S. PentestGPT: An LLM-empowered Automatic Penetration Testing Tool. arXiv 2024. [Google Scholar] [CrossRef]
  51. Xu, M.; Fan, J.; Huang, X.; Zhou, C.; Kang, J.; Niyato, D.; Mao, S.; Han, Z.; Shen, X.; Lam, K.Y. Forewarned is forearmed: A survey on large language model-based agents in autonomous cyberattacks. arXiv 2025, arXiv:2505.12786. [Google Scholar] [CrossRef]
  52. Quan, V.L.A.; Phat, C.T.; Nguyen, K.V.; Duy, P.T.; Pham, V.H. XGV-BERT: Leveraging Contextualized Language Model and Graph Neural Network for Efficient Software Vulnerability Detection. arXiv 2023. [Google Scholar] [CrossRef]
  53. Thapa, C.; Jang, S.I.; Ahmed, M.E.; Camtepe, S.; Pieprzyk, J.; Nepal, S. Transformer-Based Language Models for Software Vulnerability Detection. arXiv 2022. [Google Scholar] [CrossRef]
  54. Ullah, S.; Han, M.; Pujar, S.; Pearce, H.; Coskun, A.; Stringhini, G. LLMs Cannot Reliably Identify and Reason About Security Vulnerabilities (Yet?): A Comprehensive Evaluation, Framework, and Benchmarks. arXiv 2024. [Google Scholar] [CrossRef]
  55. Khare, A.; Dutta, S.; Li, Z.; Solko-Breslin, A.; Alur, R.; Naik, M. Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities. arXiv 2024. [Google Scholar] [CrossRef]
  56. Liu, X.; Tan, Y.; Xiao, Z.; Zhuge, J.; Zhou, R. Not the End of Story: An Evaluation of ChatGPT-Driven Vulnerability Description Mappings. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, 9–14 July 2023; Association for Computational Linguistics: Stroudsburg, PA, USA, 2023; pp. 3724–3731. Available online: https://aclanthology.org/2023.findings-acl.229/ (accessed on 11 January 2025). [CrossRef]
  57. Zhang, C.; Liu, H.; Zeng, J.; Yang, K.; Li, Y.; Li, H. Prompt-Enhanced Software Vulnerability Detection Using ChatGPT. arXiv 2024. [Google Scholar] [CrossRef]
  58. Liu, P.; Sun, C.; Zheng, Y.; Feng, X.; Qin, C.; Wang, Y.; Xu, Z.; Li, Z.; Di, P.; Jiang, Y.; et al. Harnessing the Power of LLM to Support Binary Taint Analysis. arXiv 2024. [Google Scholar] [CrossRef]
  59. Zhang, Z.; Yang, J.; Ke, P.; Mi, F.; Wang, H.; Huang, M. Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization. arXiv 2024. [Google Scholar] [CrossRef]
  60. Fu, M.; Tantithamthavorn, C.; Le, T.; Nguyen, V.; Phung, D. VulRepair: A T5-Based Automated Software Vulnerability Repair. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022), Singapore, 14–18 November 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 935–947. [Google Scholar] [CrossRef]
  61. Zhang, Q.; Fang, C.; Yu, B.; Sun, W.; Zhang, T.; Chen, Z. Pre-trained Model-based Automated Software Vulnerability Repair: How Far are We? arXiv 2023. [Google Scholar] [CrossRef]
  62. Pearce, H.; Tan, B.; Ahmad, B.; Karri, R.; Dolan-Gavitt, B. Examining Zero-Shot Vulnerability Repair with Large Language Models. arXiv 2022. [Google Scholar] [CrossRef]
  63. Wu, Y.; Jiang, N.; Pham, H.V.; Lutellier, T.; Davis, J.; Tan, L.; Babkin, P.; Shah, S. How Effective Are Neural Networks for Fixing Security Vulnerabilities. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA’23, Seattle, WA, USA, 17–21 July 2023; ACM: New York, NY, USA, 2023; pp. 1282–1294. [Google Scholar] [CrossRef]
  64. Alrashedy, K.; Aljasser, A.; Tambwekar, P.; Gombolay, M. Can LLMs Patch Security Issues? arXiv 2024. [Google Scholar] [CrossRef]
  65. Tol, M.C.; Sunar, B. ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel Patching. arXiv 2023. [Google Scholar] [CrossRef]
  66. Charalambous, Y.; Tihanyi, N.; Jain, R.; Sun, Y.; Ferrag, M.A.; Cordeiro, L.C. Dataset for: A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification; Version 1; Zenodo: Genève, Switzerland, 2023. [Google Scholar] [CrossRef]
  67. Qin, F.; Tucek, J.; Sundaresan, J.; Zhou, Y. Rx: Treating bugs as allergies—A safe method to survive software failures. In Proceedings of the Twentieth ACM Symposium on Operating Systems Principles, Brighton, UK, 23–26 October 2005; pp. 235–248. [Google Scholar]
  68. Jin, M.; Shahriar, S.; Tufano, M.; Shi, X.; Lu, S.; Sundaresan, N.; Svyatkovskiy, A. InferFix: End-to-End Program Repair with LLMs. arXiv 2023. [Google Scholar] [CrossRef]
  69. Li, H.; Hao, Y.; Zhai, Y.; Qian, Z. The Hitchhiker’s Guide to Program Analysis: A Journey with Large Language Models. arXiv 2023. [Google Scholar] [CrossRef]
  70. Lee, J.; Han, K.; Yu, H. A Light Bug Triage Framework for Applying Large Pre-trained Language Model. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, ASE’22, Rochester, MI, USA, 10–14 October 2022. [Google Scholar] [CrossRef]
  71. Yang, A.Z.H.; Martins, R.; Goues, C.L.; Hellendoorn, V.J. Large Language Models for Test-Free Fault Localization. arXiv 2023. [Google Scholar] [CrossRef]
  72. Li, T.O.; Zong, W.; Wang, Y.; Tian, H.; Wang, Y.; Cheung, S.C.; Kramer, J. Nuances are the Key: Unlocking ChatGPT to Find Failure-Inducing Tests with Differential Prompting. arXiv 2023. [Google Scholar] [CrossRef]
  73. Fang, S.; Zhang, T.; Tan, Y.; Jiang, H.; Xia, X.; Sun, X. RepresentThemAll: A Universal Learning Representation of Bug Reports. In Proceedings of the 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), Melbourne, Australia, 14–20 May 2023; pp. 602–614. [Google Scholar]
  74. Perry, N.; Srivastava, M.; Kumar, D.; Boneh, D. Do Users Write More Insecure Code with AI Assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, CCS’23, Copenhagen, Denmark, 26–30 November 2023; ACM: New York, NY, USA, 2023; pp. 2785–2799. [Google Scholar] [CrossRef]
  75. Wei, Y.; Xia, C.S.; Zhang, L. Copiloting the Copilots: Fusing Large Language Models with Completion Engines for Automated Program Repair. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE’23, San Francisco, CA, USA, 5–7 December 2023; ACM: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  76. Xia, C.S.; Zhang, L. Automated Program Repair via Conversation: Fixing 162 out of 337 Bugs for $0.42 Each using ChatGPT. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA’24, Vienna, Austria, 16–20 September 2024; ACM: New York, NY, USA, 2024; pp. 819–831. [Google Scholar] [CrossRef]
  77. Xia, C.S.; Zhang, L. Conversational Automated Program Repair. arXiv 2023. [Google Scholar] [CrossRef]
  78. Boehme, M.; Cadar, C.; Roychoudhury, A. Fuzzing: Challenges and Reflections. IEEE Softw. 2020, 38, 79–86. [Google Scholar] [CrossRef]
  79. Deng, Y.; Xia, C.S.; Peng, H.; Yang, C.; Zhang, L. Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models. arXiv 2023. [Google Scholar] [CrossRef]
  80. Zhang, C.; Zheng, Y.; Bai, M.; Li, Y.; Ma, W.; Xie, X.; Li, Y.; Sun, L.; Liu, Y. How Effective Are They? Exploring Large Language Model Based Fuzz Driver Generation. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA’24, Vienna, Austria, 16–20 September 2024; ACM: New York, NY, USA, 2024; pp. 1223–1235. [Google Scholar] [CrossRef]
  81. Deng, Y.; Xia, C.S.; Yang, C.; Zhang, S.D.; Yang, S.; Zhang, L. Large Language Models are Edge-Case Fuzzers: Testing Deep Learning Libraries via FuzzGPT. arXiv 2023. [Google Scholar] [CrossRef]
  82. Hu, S.; Huang, T.; Ilhan, F.; Tekin, S.F.; Liu, L. Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives. In Proceedings of the 5th IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2023, Atlanta, GA, USA, 1–4 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 297–306. [Google Scholar] [CrossRef]
  83. Yang, C.; Deng, Y.; Lu, R.; Yao, J.; Liu, J.; Jabbarvand, R.; Zhang, L. WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models. Proc. ACM Program. Lang. 2024, 8, 709–735. [Google Scholar] [CrossRef]
  84. Nurgaliyev, A. Analysis of Reverse Engineering and Cyber Assaults. Ph.D. Thesis, Victoria University, Melbourne, Australia, 2023. [Google Scholar]
  85. Xu, X.; Zhang, Z.; Su, Z.; Huang, Z.; Feng, S.; Ye, Y.; Jiang, N.; Xie, D.; Cheng, S.; Tan, L.; et al. Symbol Preference Aware Generative Models for Recovering Variable Names from Stripped Binary. arXiv 2024. [Google Scholar] [CrossRef]
  86. Armengol-Estapé, J.; Woodruff, J.; Cummins, C.; O’Boyle, M.F.P. SLaDe: A Portable Small Language Model Decompiler for Optimized Assembly. arXiv 2024. [Google Scholar] [CrossRef]
  87. Sun, T.; Allix, K.; Kim, K.; Zhou, X.; Kim, D.; Lo, D.; Bissyandé, T.F.; Klein, J. DexBERT: Effective, Task-Agnostic and Fine-grained Representation Learning of Android Bytecode. arXiv 2023. [Google Scholar] [CrossRef]
  88. Pei, K.; Li, W.; Jin, Q.; Liu, S.; Geng, S.; Cavallaro, L.; Yang, J.; Jana, S. Exploiting Code Symmetries for Learning Program Semantics. arXiv 2024. [Google Scholar] [CrossRef]
  89. Song, Q.; Zhang, Y.; Ouyang, L.; Chen, Y. BinMLM: Binary Authorship Verification with Flow-aware Mixture-of-Shared Language Model. arXiv 2022. [Google Scholar] [CrossRef]
  90. Manuel, D.; Islam, N.T.; Khoury, J.; Nunez, A.; Bou-Harb, E.; Najafirad, P. Enhancing Reverse Engineering: Investigating and Benchmarking Large Language Models for Vulnerability Analysis in Decompiled Binaries. arXiv 2024, arXiv:2411.04981. [Google Scholar] [CrossRef]
  91. Faruki, P.; Bhan, R.; Jain, V.; Bhatia, S.; El Madhoun, N.; Pamula, R. A survey and evaluation of android-based malware evasion techniques and detection frameworks. Information 2023, 14, 374. [Google Scholar] [CrossRef]
  92. Botacin, M. GPThreats-3: Is Automatic Malware Generation a Threat? In Proceedings of the 2023 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 25–25 May 2023; pp. 238–254. [Google Scholar] [CrossRef]
  93. Jakub, B.; Branišová, J. A Dynamic Rule Creation Based Anomaly Detection Method for Identifying Security Breaches in Log Records. Wirel. Pers. Commun. 2017, 94, 497–511. [Google Scholar] [CrossRef]
  94. Shan, S.; Huo, Y.; Su, Y.; Li, Y.; Li, D.; Zheng, Z. Face It Yourselves: An LLM-Based Two-Stage Strategy to Localize Configuration Errors via Logs. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA’24, Vienna, Austria, 16–20 September 2024; ACM: New York, NY, USA, 2024; pp. 13–25. [Google Scholar] [CrossRef]
  95. Karlsen, E.; Luo, X.; Zincir-Heywood, N.; Heywood, M. Benchmarking Large Language Models for Log Analysis, Security, and Interpretation. arXiv 2023. [Google Scholar] [CrossRef]
  96. Han, X.; Yuan, S.; Trabelsi, M. LogGPT: Log Anomaly Detection via GPT. arXiv 2023. [Google Scholar] [CrossRef]
  97. Kasri, W.; Himeur, Y.; Alkhazaleh, H.A.; Tarapiah, S.; Atalla, S.; Mansoor, W.; Al-Ahmad, H. From vulnerability to defense: The role of large language models in enhancing cybersecurity. Computation 2025, 13, 30. [Google Scholar] [CrossRef]
  98. Kamath, U.; Keenan, K.; Somers, G.; Sorenson, S. Prompt-based Learning. In Large Language Models: A Deep Dive: Bridging Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2024; pp. 83–133. [Google Scholar]
  99. Heiding, F.; Schneier, B.; Vishwanath, A.; Bernstein, J.; Park, P.S. Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models. arXiv 2023. [Google Scholar] [CrossRef]
  100. Cambiaso, E.; Caviglione, L. Scamming the Scammers: Using ChatGPT to Reply Mails for Wasting Time and Resources. arXiv 2023. [Google Scholar] [CrossRef]
  101. Hanley, H.W.A.; Durumeric, Z. Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter. arXiv 2024. [Google Scholar] [CrossRef]
  102. Cai, Z.; Tan, Z.; Lei, Z.; Zhu, Z.; Wang, H.; Zheng, Q.; Luo, M. LMBot: Distilling Graph Knowledge into Language Model for Graph-less Deployment in Twitter Bot Detection. arXiv 2024. [Google Scholar] [CrossRef]
  103. Anderson, R.; Petitcolas, F. On The Limits of Steganography. IEEE J. Sel. Areas Commun. 1998, 16, 474–481. [Google Scholar] [CrossRef]
  104. Wang, H.; Yang, Z.; Yang, J.; Chen, C.; Huang, Y. Linguistic Steganalysis in Few-Shot Scenario. IEEE Trans. Inf. Forensics Secur. 2023, 18, 4870–4882. [Google Scholar] [CrossRef]
  105. Bauer, L.A.; IV, J.K.H.; Markelon, S.A.; Bindschaedler, V.; Shrimpton, T. Covert Message Passing over Public Internet Platforms Using Model-Based Format-Transforming Encryption. arXiv 2021. [Google Scholar] [CrossRef]
  106. Paterson, K.; Stebila, D. One-Time-Password-Authenticated Key Exchange. In Proceedings of the Australasian Conference on Information Security and Privacy, Sydney, Australia, 5–7 July 2010. [Google Scholar] [CrossRef]
  107. Rando, J.; Perez-Cruz, F.; Hitaj, B. PassGPT: Password Modeling and (Guided) Generation with Large Language Models. arXiv 2023. [Google Scholar] [CrossRef]
  108. Selamat, S.R.; Robiah, Y.; Sahib, S. Mapping Process of Digital Forensic Investigation Framework. IJCSNS Int. J. Comput. Sci. Netw. Secur. 2008, 8, 163–169. [Google Scholar]
  109. Huang, Z.; Wang, Q. Enhancing Architecture-level Security of SoC Designs via the Distributed Security IPs Deployment Methodology. J. Inf. Sci. Eng. 2020, 36, 387–421. [Google Scholar]
  110. Paria, S.; Dasgupta, A.; Bhunia, S. DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection. arXiv 2023. [Google Scholar] [CrossRef]
  111. Nair, M.; Sadhukhan, R.; Mukhopadhyay, D. How Hardened is Your Hardware? Guiding ChatGPT to Generate Secure Hardware Resistant to CWEs. In Proceedings of the Cyber Security, Cryptology, and Machine Learning: 7th International Symposium, CSCML 2023, Be’er Sheva, Israel, 29–30 June 2023; pp. 320–336. [Google Scholar] [CrossRef]
  112. Ahmad, B.; Thakur, S.; Tan, B.; Karri, R.; Pearce, H. On Hardware Security Bug Code Fixes by Prompting Large Language Models. IEEE Trans. Inf. Forensics Secur. 2024, 19, 4043–4057. [Google Scholar] [CrossRef]
  113. Mohammed, A. Blockchain and Distributed Ledger Technology (DLT): Investigating the use of blockchain for secure transactions, smart contracts, and fraud prevention. Int. J. Adv. Eng. Manag. (IJAEM) 2025, 7, 1057–1068. Available online: https://ijaem.net/issue_dcp/Blockchain%20and%20Distributed%20Ledger%20Technology%20DLT%20Investigating%20the%20use%20of%20blockchain%20for%20secure%20transactions%2C%20smart%20contracts%2C%20and%20fraud%20prevention.pdf (accessed on 10 May 2025). [CrossRef]
  114. David, I.; Zhou, L.; Qin, K.; Song, D.; Cavallaro, L.; Gervais, A. Do you still need a manual smart contract audit? arXiv 2023. [Google Scholar] [CrossRef]
  115. Howard, J.D.; Kahnt, T. To be specific: The role of orbitofrontal cortex in signaling reward identity. Behav. Neurosci. 2021, 135, 210. [Google Scholar] [CrossRef]
  116. Rodler, M.; Li, W.; Karame, G.O.; Davi, L. Sereum: Protecting Existing Smart Contracts Against Re-Entrancy Attacks. In Proceedings of the 26th Annual Network and Distributed System Security Symposium, NDSS 2019, San Diego, CA, USA, 24–27 February 2019; The Internet Society: Reston, VA, USA, 2019. [Google Scholar]
  117. Rathod, V.; Nabavirazavi, S.; Zad, S.; Iyengar, S.S. Privacy and security challenges in large language models. In Proceedings of the 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 746–752. [Google Scholar]
  118. Mitchell, B.S.; Mancoridis, S.; Kashyap, J. On the Automatic Identification of Misconfiguration Errors in Cloud Native Systems. In Proceedings of the 2024 7th Artificial Intelligence and Cloud Computing Conference, Tokyo, Japan, 14–16 December 2024. [Google Scholar]
  119. Pranata, A.A.; Barais, O.; Bourcier, J.; Noirie, L. Misconfiguration discovery with principal component analysis for cloud-native services. In Proceedings of the 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC), Leicester, UK, 7–10 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 269–278. [Google Scholar]
  120. Ariffin, M.A.M.; Rahman, K.A.; Darus, M.Y.; Awang, N.; Kasiran, Z. Data leakage detection in cloud computing platform. Int. J. Adv. Trends Comput. Sci. Eng. 2019, 8, S1. [Google Scholar] [CrossRef]
  121. Vaidya, C.; Khobragade, P.K.; Golghate, A.A. Data Leakage Detection and Security Using Cloud Computing. Int. J. Eng. Res. Appl. 2016, 6, 1–4. [Google Scholar]
  122. Lanka, P.; Gupta, K.; Varol, C. Intelligent threat detection—AI-driven analysis of honeypot data to counter cyber threats. Electronics 2024, 13, 2465. [Google Scholar] [CrossRef]
  123. Henze, M.; Matzutt, R.; Hiller, J.; Mühmer, E.; Ziegeldorf, J.H.; van der Giet, J.; Wehrle, K. Complying with data handling requirements in cloud storage systems. IEEE Trans. Cloud Comput. 2020, 10, 1661–1674. [Google Scholar] [CrossRef]
  124. Dye, O.; Heo, J.; Cankaya, E.C. Reflection of Federal Data Protection Standards on Cloud Governance. arXiv 2024, arXiv:2403.07907. [Google Scholar] [CrossRef]
  125. Hassanin, M.; Moustafa, N. A comprehensive overview of large language models (llms) for cyber defences: Opportunities and directions. arXiv 2024, arXiv:2405.14487. [Google Scholar] [CrossRef]
  126. Molleti, R.; Goje, V.; Luthra, P.; Raghavan, P. Automated threat detection and response using LLM agents. World J. Adv. Res. Rev. 2024, 24, 79–90. [Google Scholar] [CrossRef]
  127. Fieblinger, R.; Alam, M.T.; Rastogi, N. Actionable cyber threat intelligence using knowledge graphs and large language models. In Proceedings of the 2024 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Vienna, Austria, 8–12 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 100–111. [Google Scholar]
  128. Ali, T.; Kostakos, P. HuntGPT: Integrating machine learning-based anomaly detection and explainable AI with large language models (LLMs). arXiv 2023, arXiv:2309.16021. [Google Scholar]
  129. Schwartz, Y.; Benshimol, L.; Mimran, D.; Elovici, Y.; Shabtai, A. LLMCloudHunter: Harnessing LLMs for Automated Extraction of Detection Rules from Cloud-Based CTI. arXiv 2024. [Google Scholar] [CrossRef]
  130. Mitra, S.; Neupane, S.; Chakraborty, T.; Mittal, S.; Piplai, A.; Gaur, M.; Rahimi, S. LOCALINTEL: Generating Organizational Threat Intelligence from Global and Local Cyber Knowledge. arXiv 2024. [Google Scholar] [CrossRef]
  131. Rong, H.; Duan, Y.; Zhang, H.; Wang, X.; Chen, H.; Duan, S.; Wang, S. Disassembling Obfuscated Executables with LLM. arXiv 2024. [Google Scholar] [CrossRef]
  132. Lu, H.; Peng, H.; Nan, G.; Cui, J.; Wang, C.; Jin, W.; Wang, S.; Pan, S.; Tao, X. MALSIGHT: Exploring Malicious Source Code and Benign Pseudocode for Iterative Binary Malware Summarization. arXiv 2024. [Google Scholar] [CrossRef]
  133. Patsakis, C.; Casino, F.; Lykousas, N. Assessing LLMs in Malicious Code Deobfuscation of Real-world Malware Campaigns. arXiv 2024. [Google Scholar] [CrossRef]
  134. Bouzidi, M.; Gupta, N.; Cheikh, F.A.; Shalaginov, A.; Derawi, M. A novel architectural framework on IoT ecosystem, security aspects and mechanisms: A comprehensive survey. IEEE Access 2022, 10, 101362–101384. [Google Scholar] [CrossRef]
  135. Zhao, B.; Ji, S.; Zhang, X.; Tian, Y.; Wang, Q.; Pu, Y.; Lyu, C.; Beyah, R. UVSCAN: Detecting Third-Party Component Usage Violations in IoT Firmware. In Proceedings of the 32nd USENIX Security Symposium (USENIX Security 23), Anaheim, CA, USA, 9–11 August 2023; pp. 3421–3438. [Google Scholar]
  136. Li, S.; Min, G. On-Device Learning Based Vulnerability Detection in Iot Environment. J. Ind. Inf. Integr. 2025, 47, 100900. [Google Scholar] [CrossRef]
  137. Li, Y.; Xiang, Z.; Bastian, N.D.; Song, D.; Li, B. IDS-Agent: An LLM Agent for Explainable Intrusion Detection in IoT Networks. In Proceedings of the NeurIPS 2024 Workshop on Open-World Agents, Vancouver, BC, Canada, 14 December 2024. [Google Scholar]
  138. Su, J.; Jiang, C.; Jin, X.; Qiao, Y.; Xiao, T.; Ma, H.; Wei, R.; Jing, Z.; Xu, J.; Lin, J. Large Language Models for Forecasting and Anomaly Detection: A Systematic Literature Review. arXiv 2024. [Google Scholar] [CrossRef]
  139. Feng, X.; Liao, X.; Wang, X.; Wang, H.; Li, Q.; Yang, K.; Zhu, H.; Sun, L. Understanding and securing device vulnerabilities through automated bug report analysis. In Proceedings of the SEC’19: Proceedings of the 28th USENIX Conference on Security Symposium, Santa Clara, CA, USA, 14–16 August 2019. [Google Scholar]
  140. Baral, S.; Saha, S.; Haque, A. An Adaptive End-to-End IoT Security Framework Using Explainable AI and LLMs. In Proceedings of the 2024 IEEE 10th World Forum on Internet of Things (WF-IoT), Ottawa, ON, Canada, 10–13 November 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 469–474. [Google Scholar]
  141. Pathak, P.N. How Do You Protect Machine Learning from Attacks? 2023. Available online: https://www.linkedin.com/advice/1/how-do-you-protect-machine-learning-from-attacks (accessed on 28 January 2024).
  142. Kaddour, J.; Harris, J.; Mozes, M.; Bradley, H.; Raileanu, R.; McHardy, R. Challenges and Applications of Large Language Models. arXiv 2023. [Google Scholar] [CrossRef]
  143. Tornede, A.; Deng, D.; Eimer, T.; Giovanelli, J.; Mohan, A.; Ruhkopf, T.; Segel, S.; Theodorakopoulos, D.; Tornede, T.; Wachsmuth, H.; et al. AutoML in the Age of Large Language Models: Current Challenges, Future Opportunities and Risks. arXiv 2024. [Google Scholar] [CrossRef]
  144. Computer Security Resource Center. Information Systems Security (INFOSEC). 2023. Available online: https://csrc.nist.gov/glossary/term/information_systems_security (accessed on 28 January 2024).
  145. CLOUDFLARE. What Is Data Privacy? 2023. Available online: https://www.cloudflare.com/learning/privacy/what-is-data-privacy/ (accessed on 28 January 2024).
  146. Phanireddy, S. LLM security and guardrail defense techniques in web applications. Int. J. Emerg. Trends Comput. Sci. Inf. Technol. 2025, 221–224. [Google Scholar] [CrossRef]
  147. Ganguli, D.; Lovitt, L.; Kernion, J.; Askell, A.; Bai, Y.; Kadavath, S.; Mann, B.; Perez, E.; Schiefer, N.; Ndousse, K.; et al. Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. arXiv 2022. [Google Scholar] [CrossRef]
  148. Röttger, P.; Vidgen, B.; Nguyen, D.; Waseem, Z.; Margetts, H.; Pierrehumbert, J. HateCheck: Functional Tests for Hate Speech Detection Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Bangkok, Thailand, 1–6 August 2021; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021. [Google Scholar] [CrossRef]
  149. Perez, F.; Ribeiro, I. Ignore Previous Prompt: Attack Techniques For Language Models. arXiv 2022. [Google Scholar] [CrossRef]
  150. Alon, G.; Kamfonas, M. Detecting Language Model Attacks with Perplexity. arXiv 2023. [Google Scholar] [CrossRef]
  151. Jain, N.; Schwarzschild, A.; Wen, Y.; Somepalli, G.; Kirchenbauer, J.; yeh Chiang, P.; Goldblum, M.; Saha, A.; Geiping, J.; Goldstein, T. Baseline Defenses for Adversarial Attacks Against Aligned Language Models. arXiv 2023. [Google Scholar] [CrossRef]
  152. Cao, Z.; Xu, Y.; Huang, Z.; Zhou, S. ML4CO-KIDA: Knowledge Inheritance in Dataset Aggregation. arXiv 2022. [Google Scholar] [CrossRef]
  153. Marsoof, A.; Luco, A.; Tan, H.; Joty, S. Content-filtering AI systems–limitations, challenges and regulatory approaches. Inf. Commun. Technol. Law 2023, 32, 64–101. [Google Scholar] [CrossRef]
  154. Manzoor, H.U.; Shabbir, A.; Chen, A.; Flynn, D.; Zoha, A. A survey of security strategies in federated learning: Defending models, data, and privacy. Future Internet 2024, 16, 374. [Google Scholar] [CrossRef]
  155. Qi, X.; Panda, A.; Lyu, K.; Ma, X.; Roy, S.; Beirami, A.; Mittal, P.; Henderson, P. Safety Alignment Should Be Made More Than Just a Few Tokens Deep. arXiv 2024. [Google Scholar] [CrossRef]
  156. Fayaz, S.; Ahmad Shah, S.Z.; ud din, N.M.; Gul, N.; Assad, A. Advancements in data augmentation and transfer learning: A comprehensive survey to address data scarcity challenges. Recent Adv. Comput. Sci. Commun. (Former. Recent Pat. Comput. Sci.) 2024, 17, 14–35. [Google Scholar] [CrossRef]
  157. Bianchi, F.; Suzgun, M.; Attanasio, G.; Röttger, P.; Jurafsky, D.; Hashimoto, T.; Zou, J. Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions. In Proceedings of the The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, 7–11 May 2024; OpenReview: Hanover, NH, USA, 2024. [Google Scholar]
  158. Zhao, J.; Deng, Z.; Madras, D.; Zou, J.; Ren, M. Learning and Forgetting Unsafe Examples in Large Language Models. In Proceedings of the Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, 21–27 July 2024; OpenReview: Hanover, NH, USA, 2024. [Google Scholar]
  159. Qi, X.; Zeng, Y.; Xie, T.; Chen, P.Y.; Jia, R.; Mittal, P.; Henderson, P. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv 2023, arXiv:2310.03693. [Google Scholar] [CrossRef]
  160. Papakyriakopoulos, O.; Choi, A.S.G.; Thong, W.; Zhao, D.; Andrews, J.; Bourke, R.; Xiang, A.; Koenecke, A. Augmented datasheets for speech datasets and ethical decision-making. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023; pp. 881–904. [Google Scholar]
  161. Wortsman, M.; Ilharco, G.; Gadre, S.Y.; Roelofs, R.; Gontijo-Lopes, R.; Morcos, A.S.; Namkoong, H.; Farhadi, A.; Carmon, Y.; Kornblith, S.; et al. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. arXiv 2022. [Google Scholar] [CrossRef]
  162. Lu, W.; Luu, R.K.; Buehler, M.J. Fine-tuning large language models for domain adaptation: Exploration of training strategies, scaling, model merging and synergistic capabilities. npj Comput. Mater. 2025, 11, 84. [Google Scholar] [CrossRef]
  163. Zou, A.; Wang, Z.; Carlini, N.; Nasr, M.; Kolter, J.Z.; Fredrikson, M. Universal and Transferable Adversarial Attacks on Aligned Language Models. arXiv 2023. [Google Scholar] [CrossRef]
  164. Kadhe, S.R.; Ahmed, F.; Wei, D.; Baracaldo, N.; Padhi, I. Split, Unlearn, Merge: Leveraging Data Attributes for More Effective Unlearning in LLMs. arXiv 2024. [Google Scholar] [CrossRef]
  165. Yadav, P.; Vu, T.; Lai, J.; Chronopoulou, A.; Faruqui, M.; Bansal, M.; Munkhdalai, T. What Matters for Model Merging at Scale? arXiv 2024, arXiv:2410.03617. [Google Scholar] [CrossRef]
  166. Xing, W.; Li, M.; Li, M.; Han, M. Towards robust and secure embodied ai: A survey on vulnerabilities and attacks. arXiv 2025, arXiv:2502.13175. [Google Scholar]
  167. Xu, D.; Fan, S.; Kankanhalli, M. Combating misinformation in the era of generative AI models. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 9291–9298. [Google Scholar]
  168. Schwarzschild, A.; Goldblum, M.; Gupta, A.; Dickerson, J.P.; Goldstein, T. Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks. arXiv 2021. [Google Scholar] [CrossRef]
  169. Kurita, K.; Michel, P.; Neubig, G. Weight Poisoning Attacks on Pre-trained Models. arXiv 2020. [Google Scholar] [CrossRef]
  170. Shah, V. Machine Learning Algorithms for Cybersecurity: Detecting and Preventing Threats. Rev. Esp. Doc. Cient. 2021, 15, 42–66. [Google Scholar]
  171. Yan, L.; Zhang, Z.; Tao, G.; Zhang, K.; Chen, X.; Shen, G.; Zhang, X. ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP. arXiv 2023. [Google Scholar] [CrossRef]
  172. Continella, A.; Fratantonio, Y.; Lindorfer, M.; Puccetti, A.; Zand, A.; Krügel, C.; Vigna, G. Obfuscation-Resilient Privacy Leak Detection for Mobile Apps Through Differential Analysis. In Proceedings of the Network and Distributed System Security Symposium, San Diego, CA, USA, 26 February–1 March 2017. [Google Scholar]
  173. Shu, M.; Wang, J.; Zhu, C.; Geiping, J.; Xiao, C.; Goldstein, T. On the Exploitability of Instruction Tuning. arXiv 2023. [Google Scholar] [CrossRef]
  174. Aghakhani, H.; Dai, W.; Manoel, A.; Fernandes, X.; Kharkar, A.; Kruegel, C.; Vigna, G.; Evans, D.; Zorn, B.; Sim, R. TrojanPuzzle: Covertly Poisoning Code-Suggestion Models. arXiv 2024. [Google Scholar] [CrossRef]
  175. Liu, K.; Dolan-Gavitt, B.; Garg, S. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. arXiv 2018. [Google Scholar] [CrossRef]
  176. Chen, Z.; Xiang, Z.; Xiao, C.; Song, D.; Li, B. AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases. arXiv 2024. [Google Scholar] [CrossRef]
  177. Chen, C.; Sun, Y.; Gong, X.; Gao, J.; Lam, K.Y. Neutralizing Backdoors through Information Conflicts for Large Language Models. arXiv 2024, arXiv:2411.18280. [Google Scholar] [CrossRef]
  178. Chowdhury, A.G.; Islam, M.M.; Kumar, V.; Shezan, F.H.; Jain, V.; Chadha, A. Breaking down the defenses: A comparative survey of attacks on large language models. arXiv 2024, arXiv:2403.04786. [Google Scholar] [CrossRef]
  179. Zhang, Z.; Lyu, L.; Ma, X.; Wang, C.; Sun, X. Fine-mixing: Mitigating Backdoors in Fine-tuned Language Models. arXiv 2022. [Google Scholar] [CrossRef]
  180. Cui, G.; Yuan, L.; He, B.; Chen, Y.; Liu, Z.; Sun, M. A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks. arXiv 2022. [Google Scholar] [CrossRef]
  181. Xi, Z.; Du, T.; Li, C.; Pang, R.; Ji, S.; Chen, J.; Ma, F.; Wang, T. Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. arXiv 2023. [Google Scholar] [CrossRef]
  182. Wolk, M.H. The iphone jailbreaking exemption and the issue of openness. Cornell J. Law Public Policy 2009, 19, 795. [Google Scholar]
  183. Mondillo, G.; Colosimo, S.; Perrotta, A.; Frattolillo, V.; Indolfi, C.; del Giudice, M.M.; Rossi, F. Jailbreaking large language models: Navigating the crossroads of innovation, ethics, and health risks. J. Med. Artif. Intell. 2025, 8, 6. [Google Scholar] [CrossRef]
  184. Kumar, A.; Agarwal, C.; Srinivas, S.; Li, A.J.; Feizi, S.; Lakkaraju, H. Certifying LLM Safety against Adversarial Prompting. arXiv 2024. [Google Scholar] [CrossRef]
  185. Wu, F.; Xie, Y.; Yi, J.; Shao, J.; Curl, J.; Lyu, L.; Chen, Q.; Xie, X. Defending chatgpt against jailbreak attack via self-reminder. Nat. Mach. Intell. 2023, 5, 1486–1496. [Google Scholar] [CrossRef]
  186. Jin, H.; Hu, L.; Li, X.; Zhang, P.; Chen, C.; Zhuang, J.; Wang, H. JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large Language and Vision-Language Models. arXiv 2024. [Google Scholar] [CrossRef]
  187. Robey, A.; Wong, E.; Hassani, H.; Pappas, G.J. SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks. arXiv 2024. [Google Scholar] [CrossRef]
  188. Crothers, E.; Japkowicz, N.; Viktor, H. Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods. arXiv 2023. [Google Scholar] [CrossRef]
  189. Rababah, B.; Wu, S.T.; Kwiatkowski, M.; Leung, C.K.; Akcora, C.G. SoK: Prompt Hacking of Large Language Models. arXiv 2024, arXiv:2410.13901. [Google Scholar] [CrossRef]
  190. Learn Prompting. Your Guide to Generative AI. 2023. Available online: https://learnprompting.org (accessed on 28 January 2024).
  191. Schulhoff, S. Instruction Defense. 2024. Available online: https://learnprompting.org/docs/prompt_hacking/defensive_measures/instruction (accessed on 11 October 2024).
  192. Gonen, H.; Iyer, S.; Blevins, T.; Smith, N.A.; Zettlemoyer, L. Demystifying Prompts in Language Models via Perplexity Estimation. arXiv 2024. [Google Scholar] [CrossRef]
  193. Liu, Y.; Jia, Y.; Geng, R.; Jia, J.; Gong, N.Z. Formalizing and Benchmarking Prompt Injection Attacks and Defenses. arXiv 2024. [Google Scholar] [CrossRef]
  194. Shi, J.; Yuan, Z.; Liu, Y.; Huang, Y.; Zhou, P.; Sun, L.; Gong, N.Z. Optimization-based Prompt Injection Attack to LLM-as-a-Judge. arXiv 2024. [Google Scholar] [CrossRef]
  195. Yang, H.; Xiang, K.; Ge, M.; Li, H.; Lu, R.; Yu, S. A comprehensive overview of backdoor attacks in large language models within communication networks. IEEE Netw. 2024, 38, 211–218. [Google Scholar] [CrossRef]
  196. Zhao, H.; Chen, H.; Yang, F.; Liu, N.; Deng, H.; Cai, H.; Wang, S.; Yin, D.; Du, M. Explainability for large language models: A survey. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–38. [Google Scholar] [CrossRef]
  197. Koloskova, A.; Allouah, Y.; Jha, A.; Guerraoui, R.; Koyejo, S. Certified Unlearning for Neural Networks. arXiv 2025. [Google Scholar] [CrossRef]
  198. Liu, Y.; Yao, Y.; Ton, J.F.; Zhang, X.; Guo, R.; Cheng, H.; Klochkov, Y.; Taufiq, M.F.; Li, H. Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment. arXiv 2024. [Google Scholar] [CrossRef]
  199. Raiaan, M.A.K.; Mukta, M.S.H.; Fatema, K.; Fahad, N.M.; Sakib, S.; Mim, M.M.J.; Ahmad, J.; Ali, M.E.; Azam, S. A review on large Language Models: Architectures, applications, taxonomies, open issues and challenges. IEEE Access 2024, 12, 26839–26874. [Google Scholar] [CrossRef]
  200. Mishra, M.; Stallone, M.; Zhang, G.; Shen, Y.; Prasad, A.; Soria, A.M.; Merler, M.; Selvam, P.; Surendran, S.; Singh, S.; et al. Granite code models: A family of open foundation models for code intelligence. arXiv 2024, arXiv:2405.04324. [Google Scholar] [CrossRef]
  201. Moore, E.; Imteaj, A.; Rezapour, S.; Amini, M.H. A survey on secure and private federated learning using blockchain: Theory and application in resource-constrained computing. IEEE Internet Things J. 2023, 10, 21942–21958. [Google Scholar] [CrossRef]
  202. Cai, X.; Xu, H.; Xu, S.; Zhang, Y.; Yuan, X. Badprompt: Backdoor attacks on continuous prompts. Adv. Neural Inf. Process. Syst. 2022, 35, 37068–37080. [Google Scholar]
  203. Huang, F. Data cleansing. In Encyclopedia of Big Data; Springer: Berlin/Heidelberg, Germany, 2022; pp. 275–279. [Google Scholar]
  204. Wan, A.; Wallace, E.; Shen, S.; Klein, D. Poisoning language models during instruction tuning. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; PMLR: New York, NY, USA, 2023; pp. 35413–35425. [Google Scholar]
  205. Wang, B.; Yao, Y.; Shan, S.; Li, H.; Viswanath, B.; Zheng, H.; Zhao, B.Y. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–22 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 707–723. [Google Scholar]
  206. Das, B.C.; Amini, M.H.; Wu, Y. Security and privacy challenges of large language models: A survey. ACM Comput. Surv. 2024, 57, 1–39. [Google Scholar] [CrossRef]
  207. Yao, H.; Lou, J.; Qin, Z. Poisonprompt: Backdoor attack on prompt-based large language models. In Proceedings of the ICASSP 2024—2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14–19 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 7745–7749. [Google Scholar]
  208. Charfeddine, M.; Kammoun, H.M.; Hamdaoui, B.; Guizani, M. Chatgpt’s security risks and benefits: Offensive and defensive use-cases, mitigation measures, and future implications. IEEE Access 2024, 12, 30263–30310. [Google Scholar] [CrossRef]
  209. Sun, R.; Chang, J.; Pearce, H.; Xiao, C.; Li, B.; Wu, Q.; Nepal, S.; Xue, M. SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach. arXiv 2024. [Google Scholar] [CrossRef]
  210. Kshetri, N. Transforming cybersecurity with agentic AI to combat emerging cyber threats. Telecommun. Policy 2025, 49, 102976. [Google Scholar] [CrossRef]
  211. Zhang, S.; Li, H.; Sun, K.; Chen, H.; Wang, Y.; Li, S. Security and Privacy Challenges of AIGC in Metaverse: A Comprehensive Survey. ACM Comput. Surv. 2025, 57, 1–37. [Google Scholar] [CrossRef]
  212. Brundage, M.; Avin, S.; Wang, J.; Belfield, H.; Krueger, G.; Hadfield, G.; Khlaaf, H.; Yang, J.; Toner, H.; Fong, R.; et al. Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv 2020, arXiv:2004.07213. [Google Scholar] [CrossRef]
  213. Gunning, D.; Stefik, M.; Choi, J.; Miller, T.; Stumpf, S.; Yang, G.Z. XAI—Explainable artificial intelligence. Sci. Robot. 2019, 4, eaay7120. [Google Scholar] [CrossRef] [PubMed]
  214. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 2021, 54, 1–35. [Google Scholar] [CrossRef]
  215. Caruana, M.M.; Borg, R.M. Regulating Artificial Intelligence in the European Union. In The EU Internal Market in the Next Decade–Quo Vadis? Brill: Singapore, 2025; p. 108. [Google Scholar]
  216. Protection, F.D. General data protection regulation (GDPR). In Intersoft Consulting; European Union: Brussels, Belgium, 2018; Volume 24. [Google Scholar]
  217. NIST AI. Artificial Intelligence Risk Management Framework (AI RMF 1.0); NIST AI 100-1. 2023. Available online: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf (accessed on 10 May 2025).
Figure 1. PRISMA flowchart of literature selection.
Figure 1. PRISMA flowchart of literature selection.
Ai 06 00216 g001
Figure 2. Distribution of the 223 studies included in this survey.
Figure 2. Distribution of the 223 studies included in this survey.
Ai 06 00216 g002
Figure 3. Cybersecurity domains and LLM-based applications.
Figure 3. Cybersecurity domains and LLM-based applications.
Ai 06 00216 g003
Figure 4. Distribution of defense techniques in the reviewed literature.
Figure 4. Distribution of defense techniques in the reviewed literature.
Ai 06 00216 g004
Figure 5. Security attacks and defense techniques.
Figure 5. Security attacks and defense techniques.
Ai 06 00216 g005
Table 4. Security domains and related tasks.
Table 4. Security domains and related tasks.
Security DomainsSecurity TasksTotal
Network SecurityWeb fuzzing (4)
Traffic and intrusion detection (5)
Cyber threat analysis (4)
Penetration test (7)
20
Software and
System Security
Vulnerability detection (7)
Vulnerability repair (7)
Bug detection (8)
Bug repair (4)
Program fuzzing (6)
Reverse engineering and binary analysis (7)
Malware detection (4)
System log analysis (5)
47
Information and Content SecurityPhishing and scam detection (4)
Harmful contents detection (4)
Steganography (3)
Access control (3)
Forensics (3)
17
Hardware SecurityHardware vulnerability detection (3)
Hardware vulnerability repair (3)
6
Blockchain SecuritySmart contract security (4)
Transaction anomaly detection (4)
8
Cloud SecurityMisconfiguration detection (3)
Data leakage monitoring (3)
Container security (3)
Compliance enforcement (3)
12
Incident Response and Threat Intel.Alert prioritization (2)
Automated threat intelligence analysis (3)
Threat hunting (3)
Malware reverse engineering (3)
11
IoT SecurityFirmware Vulnerability Detection (3)
Behavioral Anomaly Detection (2)
Automated Threat Report Summarization (3)
8
Table 5. Comparative summary of LLM techniques across cybersecurity applications.
Table 5. Comparative summary of LLM techniques across cybersecurity applications.
Application DomainRepresentative ModelDatasetAccuracy/Key MetricPros (Strengths)Cons (Limitations)
Network SecurityGPTFuzzer [18]PROFUZZHigh success in SQLi/XSS/RCE payload generationGenerates targeted payloads using RL, bypasses WAFsLimited to certain protocols; needs fine-tuning for closed systems
Software and System SecurityLATTE [58]Real firmware datasetDiscovered 37 zero-day vulnsCombines LLM with automated taint analysisRisk of false positives; high compute cost
Software and System SecurityZeroLeak [65]Side channel apps datasetEffective side channel mitigationSpecialized for side-channel vulnerabilitiesLimited cross language applicability
Blockchain SecurityGPTLENS [82]Ethereum transactionsReduced false positives in smart contract analysisDual phase vuln. scenario generationContextual understanding still limited
Cloud SecurityGenKubeSec [30]Kubernetes configsPrecision: 0.990, Recall: 0.999Automated reasoning; surpasses rule based toolsLimited cross platform generality
Cloud SecuritySecure Cloud AI [31]NSL-KDD, cloud logsAccuracy: 94.78%Hybrid RF + LSTM for real time detectionNeeds scaling for large environment
Incident Response and Threat IntelSEVENLLM [34]Incident reportsImproved IoC detection, reduced false positivesMultitask alert prioritizationLimited multilingual coverage
IOT SecurityLLM4Vuln [25]Firmware in Solidity and JavaEnhanced detection across languagesRAG and prompt engineering improves reasoningNeed more in architecture diversity
IOT SecurityIDS-Agent [137]ACI-IOT’23, CIC-IOT’23Recall: 61% for zero-day attacksCombines reasoning, memory retrieval, and external knowledgeModerate precision, room for higher automation
Table 6. Comparison of techniques for mitigating LLM vulnerabilities.
Table 6. Comparison of techniques for mitigating LLM vulnerabilities.
TechniqueBackdoorJailbreakingData
Poisoning
Prompt
Injection
ParaFuzz
CUBE
Masking Differential Prompting
Self Reminder System
Content Filtering
Red Team
Safety Fine-Tuning
A Goal Prioritization
Model Merge
Prompt Engineering
Smooth
Table 7. Comprehensive analysis of security defense techniques and related approaches. Abbreviations: RT (Red Team), CF (Content Filtering), SFT (Safety Fine-Tuning), MM (Model Merging), CE (CUBE), GP (Goal Prioritization), FM (Fine Mixing), P (Perplexity), PF (ParaFuzz), SF (Substring Filtering), S (Smooth), DPI (Data Prompt Isolation), PH (Paraphrasing), SR (Self-Reminder), C (Cleaning), CU (Curation), MDP (Masking Differential Prompting), R (Re-tokenization), PI (Prompt Injection).
Table 7. Comprehensive analysis of security defense techniques and related approaches. Abbreviations: RT (Red Team), CF (Content Filtering), SFT (Safety Fine-Tuning), MM (Model Merging), CE (CUBE), GP (Goal Prioritization), FM (Fine Mixing), P (Perplexity), PF (ParaFuzz), SF (Substring Filtering), S (Smooth), DPI (Data Prompt Isolation), PH (Paraphrasing), SR (Self-Reminder), C (Cleaning), CU (Curation), MDP (Masking Differential Prompting), R (Re-tokenization), PI (Prompt Injection).
AuthorRef.RTCFSFTMMCEGPFMPPFSFSDPIPHSRCCUMDPRPI
Alone et al.[150]
Bianchi et al.[157]
Cao et al.[152]
Contiella et al.[172]
Cui et al.[180]
Ribeiroet et al.[147]
Ganguali et al.[147]
Gonen et al.[192]
Jain et al.[151]
Jin et al.[186]
Kupmar et al.[184]
Liu et al.[198]
Perez et al.[149]
Xiangyu et al.[155]
Robey et al.[187]
Schulhoff et al.[191]
Shan et al.[170]
Wang et al.[104]
Wu et al.[185]
Xi et al.[181]
Yan et al.[171]
Zhang et al.[179]
Zhao et al.[158]
Zou et al.[163]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Jaffal, N.O.; Alkhanafseh, M.; Mohaisen, D. Large Language Models in Cybersecurity: A Survey of Applications, Vulnerabilities, and Defense Techniques. AI 2025, 6, 216. https://doi.org/10.3390/ai6090216

AMA Style

Jaffal NO, Alkhanafseh M, Mohaisen D. Large Language Models in Cybersecurity: A Survey of Applications, Vulnerabilities, and Defense Techniques. AI. 2025; 6(9):216. https://doi.org/10.3390/ai6090216

Chicago/Turabian Style

Jaffal, Niveen O., Mohammed Alkhanafseh, and David Mohaisen. 2025. "Large Language Models in Cybersecurity: A Survey of Applications, Vulnerabilities, and Defense Techniques" AI 6, no. 9: 216. https://doi.org/10.3390/ai6090216

APA Style

Jaffal, N. O., Alkhanafseh, M., & Mohaisen, D. (2025). Large Language Models in Cybersecurity: A Survey of Applications, Vulnerabilities, and Defense Techniques. AI, 6(9), 216. https://doi.org/10.3390/ai6090216

Article Metrics

Back to TopTop