Abstract
The limitations of conventional rule-based security systems have been exposed by the rapid evolution of cyber threats, necessitating more proactive, intelligent, and flexible solutions. In cybersecurity, Artificial Intelligence (AI) has emerged as a transformative factor, offering improved threat detection, prediction, and automated response capabilities. This paper explores the advantages of using AI to strengthen cybersecurity, focusing on its applications in Machine Learning, Deep Learning, Natural Language Processing, and Reinforcement Learning. We highlight the improvements brought by AI in terms of real-time incident response, detection accuracy, scalability, and false positive reduction while processing massive datasets. Furthermore, we examine the challenges that accompany the integration of AI into cybersecurity, including adversarial attacks, data quality constraints, interpretability, and ethical implications. The study concludes by identifying potential future directions, such as integration with blockchain and IoT, Explainable AI, and the implementation of autonomous security systems. By presenting a comprehensive analysis, this paper underscores the exceptional potential of AI to transform cybersecurity into a field that is more robust, adaptive, and predictive.
1. Introduction
Cybersecurity has become a critical concern due to the increased adoption of technology in many industries, resulting in a growing number of cyber threats and attacks. Modern systems are more prone to sophisticated cyber assaults that take advantage of vulnerabilities at scale, affecting everything from personal devices to critical infrastructure [1]. Traditional cybersecurity techniques, which are mostly rule-based and signature-driven, frequently fail to detect new threats or respond in real time to changing attack vectors. This shortcoming has created a need for novel strategies that can handle the dynamic nature and complexity of the threat landscape.
Artificial Intelligence (AI) is progressively being acknowledged as a cornerstone of next-generation cybersecurity. With its ability to identify patterns, learn from data, and adapt to new contexts, AI enables more efficient and proactive defense strategies than static systems. Machine learning (ML) algorithms have been effective in detecting anomalies, identifying phishing attempts, and classifying malware, while Deep Learning (DL) techniques improve a system's ability to recognize complex attack features [2,3,4,5]. In addition, Reinforcement Learning (RL) offers potential for creating automated, adaptive response systems, while Natural Language Processing (NLP) helps extract actionable insight from unstructured data sources [6,7,8].
The goal of this paper is to explore the gains of AI for cybersecurity, focusing on its potential to scale with big data, enable predictive defense mechanisms, improve accuracy, and enhance threat detection. Furthermore, future research and development potentials are outlined, in addition to critically analyzing the challenges and risks of adopting AI, including adversarial threats and ethical considerations. Although many studies have focused on individual AI applications in cybersecurity, to the best of our knowledge this paper is the first attempt to present the key AI applications concurrently, providing a more comprehensive view and an agile framework for responding to evolving cyber attacks.
The main contributions of this paper are summarized as follows:
- We present a unified and technically strong synthesis of the major AI paradigms, ML, DL, NLP, and RL, highlighting how each contributes to contemporary cybersecurity defense mechanisms.
- A structured mapping between AI model families and cybersecurity tasks is developed, clarifying how architectural characteristics, data modalities, and learning objectives align with functions such as intrusion detection, malware classification, phishing defense, anomaly detection, and autonomous cyber response.
- We offer a critical analysis of the practical strengths and limitations of AI-driven security systems, examining issues such as adversarial robustness, model interpretability, data scarcity, scalability, and computational overhead, thereby revealing the operational constraints that influence real-world adoption.
- Key research challenges and emerging directions are identified, including explainable and trustworthy AI, privacy-preserving learning models, multimodal threat intelligence fusion, and robust evaluation methodologies to guide future developments in AI-based cybersecurity.
- Strategic and governance considerations required for responsible deployment of AI in cybersecurity environments are articulated, emphasizing the importance of ethical design, regulatory alignment, and risk-aware system management.
The rest of this paper is structured as follows: Section 2 reviews related work and the current state of AI in cybersecurity. Section 3 outlines AI applications in different security domains. Section 4 discusses the key gains and advantages of AI for cybersecurity, while Section 5 highlights existing challenges and limitations. Section 6 identifies future directions, and Section 7 concludes the paper. Figure 1 illustrates the organization of the paper.
Figure 1.
Structure of the paper.
2. Literature Review
The development of AI grew out of attempts to design systems that could perform tasks without the assistance of the human brain. This breakthrough spurred further research on the subject, accompanied by an increase in the development of intelligent robots and systems [9]. These efforts sought to mimic human behavior without significantly impacting people. The history of AI demonstrates its potential in the development, management, and deployment of ML and DL to execute specific tasks [10]. AI technology is thus becoming increasingly affordable and accessible, as it simplifies software development functions such as data management and deployment. Consequently, AI is widely employed to monitor and prevent cybercrime as the hazards associated with it rise.
Cyber attacks have become a major concern in the digital age, endangering critical infrastructure and data security across several sectors, including industry, government, and the military. Artificial intelligence has evolved as an innovative tool for improved threat detection and response. Traditional cybersecurity methods, such as firewalls, intrusion prevention systems (IPSs), intrusion detection systems (IDSs), and antivirus software, have historically depended on rule-based and signature-driven methodologies [11]. These techniques function by comparing files or incoming traffic against a database of known harmful signatures or predefined rules. Access control restrictions are enforced by firewalls, while threats are detected by antivirus and IDS tools based on previously identified patterns. Previous studies have demonstrated that AI improves the capacity of security systems to identify attack patterns and perform predictive analysis, enabling more proactive defenses [12,13,14,15]. Furthermore, public key infrastructure (PKI) and other encryption and authentication schemes have long been used to protect the integrity and confidentiality of data.
Although these strategies work well against known attacks, they have serious drawbacks in the current threat landscape. Firstly, zero-day attacks and polymorphic malware are difficult for signature-based approaches to detect, as such threats do not match existing patterns. Secondly, a large number of false positives are frequently generated by rule-based systems, which delays responses and overwhelms analysts. Lastly, traditional methods have limited scalability and are unable to handle the enormous volume, velocity, and diversity of data generated on the modern internet [16]. In addition, traditional systems are less flexible since they require human intervention to update policies and signatures, which fails to keep up with evolving adversary tactics. These shortcomings have underscored the necessity for intelligent, adaptive, and proactive strategies, such as those offered by AI and ML, to complement and enhance conventional cybersecurity defenses.
Today, AI in digital security signifies a shift from static, rule-based protections to data-driven, adaptive defenses [17,18]. Unlike conventional systems, AI leverages ML, DL, and NLP to identify irregularities, predict attacks, and instantly automate responses. This adaptability enables proactive security against large-scale attacks, polymorphic malware and zero-day exploits that often overwhelm traditional techniques. As a result, AI is becoming a fundamental component of next-generation cybersecurity, enhancing robustness in increasingly challenging digital ecosystems.
Notable ML techniques, such as decision trees, support vector machines, and random forests, have been widely applied for intrusion detection, malware classification, and anomaly detection, improving detection accuracy and reducing false positives [19]. Similarly, DL models, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), further enhance pattern recognition in network traffic, phishing URLs, and malware behavior, enabling effective identification of complex and evolving threats [20]. Moreover, NLP contributes to cybersecurity by analyzing unstructured data sources such as threat reports, emails, and social media, facilitating phishing detection, threat intelligence extraction, and sentiment-based risk analysis.
Additionally, hybrid techniques that combine ML, DL, and NLP have emerged, delivering context-aware and multi-layered security solutions [21,22]. A significant paradigm shift in online security policies has been driven by the combined improvement of proactive threat prediction, situational awareness, and real-time incident response, brought about by these AI-driven methodologies.
Despite significant progress in applying AI to cybersecurity, numerous research gaps remain in the literature. Most existing studies focus on specific threat domains such as intrusion detection or malware classification, while offering limited integration across multiple layers of defense [23]. Furthermore, while DL and ML models have demonstrated excellent detection accuracy in controlled settings, their real-time implementation is constrained by concerns related to scalability, accessibility, adversarial attacks, and data imbalance. Moreover, the reproducibility and comparative analysis of AI-based security models are limited by the lack of standardized datasets and benchmarking frameworks. In addition, the ethical, privacy, and regulatory implications of autonomous AI decision making in cybersecurity contexts have received comparatively little attention in research. These shortcomings draw attention to the need for more resilient, interpretable, and robust AI systems that can adapt to evolving threats, while guaranteeing transparency and accountability.
Recent advances in AI applications for cybersecurity integrate state-of-the-art techniques that enhance threat detection and response mechanisms with unprecedented speed and accuracy. The most significant developments of AI have taken place in the following areas: behavioral analytics, identity verification, Explainable AI (XAI), privacy-preserving AI, and integration with blockchain, quantum computing, and the Internet of Things (IoT) [24,25,26,27]. These systems are effective in handling the scale and complexity of today’s digital landscape, ensuring more robust defenses against a wide range of threats.
3. Artificial Intelligence Applications in Cybersecurity
3.1. Machine Learning Applications in Cybersecurity
Modern cybersecurity now relies heavily on ML, which enables systems to automatically identify patterns, learn from data, and detect threats with minimal human involvement. The most prominent ML applications in cybersecurity include anomaly detection, malware classification, and phishing detection, each addressing specific threat vectors through adaptive data-driven intelligence.
3.1.1. Anomaly Detection
Anomaly detection is the process of identifying unusual behavior in data in order to detect fraud and other malicious activities. These suspicious behaviors constitute actions that deviate from expected patterns, signaling potential security incidents. As shown in Figure 2, supervised and unsupervised ML models such as k-Nearest Neighbors (kNNs), support vector machines (SVMs), Isolation Forests, and Autoencoders are commonly used [28]. ML techniques offer great benefits in the modeling of network behavior, reducing false positives, and improving accuracy compared to static rule-based systems. In addition, ML leverages the capability to process vast amounts of network traffic or system logs to detect zero-day attacks, data exfiltration, and insider threats.
Figure 2.
Machine learning process for anomaly detection in network security.
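As an illustration of the distance-based methods listed above, the following minimal sketch scores a point by its mean distance to its k nearest neighbors, a common kNN-style anomaly measure. The feature vectors (packets per second, mean packet size) and the comparison against the baseline are purely hypothetical and chosen for demonstration, not drawn from any surveyed system.

```python
import math

def knn_anomaly_score(point, data, k=3):
    """Score a point by its mean distance to its k nearest neighbors.

    Larger scores indicate points that lie far from the cloud of
    normal traffic, i.e., candidate anomalies."""
    distances = sorted(
        math.dist(point, other) for other in data if other != point
    )
    return sum(distances[:k]) / k

# Illustrative feature vectors: (packets/sec, mean packet size in bytes).
normal_traffic = [(10, 500), (12, 480), (11, 510), (9, 495), (13, 505)]
suspicious = (200, 40)  # e.g., a flood of tiny packets

baseline = max(knn_anomaly_score(p, normal_traffic) for p in normal_traffic)
score = knn_anomaly_score(suspicious, normal_traffic)
print(score > 10 * baseline)  # True: the outlier scores far above normal
```

In practice, production systems replace this toy distance computation with library implementations (e.g., Isolation Forests or Autoencoders) trained on high-dimensional flow features, but the underlying intuition of "distance from normal behavior" is the same.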
3.1.2. Malware Classification
Given the central adversarial role played by malware in cybercrime, ML models have demonstrated high accuracy in classifying malware based on API call sequences, code features, and behavioral characteristics. In this regard, frequently used techniques to differentiate between benign and malicious files include decision trees, random forests, and Deep Neural Networks (DNNs) [29]. Thus, accurate malware analysis and classification allows defenders to detect infections for data protection, achieved by extracting feature vectors from static analysis (file signatures) or dynamic analysis (runtime behavior). Ultimately, ML can automate the classification of new and polymorphic malware, thereby reducing reliance on traditional signature-based detection.
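The feature-extraction step mentioned above can be sketched for the dynamic-analysis case: one common approach turns an API call trace into n-gram frequency counts, which then serve as the input vector for a decision tree or random forest. The API traces below are hypothetical, shortened examples for illustration only.

```python
from collections import Counter

def api_ngrams(trace, n=2):
    """Build n-gram frequency features from an API call trace.

    These counts form a sparse feature vector suitable for tree-based
    classifiers such as decision trees or random forests."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# Hypothetical traces: ransomware-like behavior vs. a benign file copy.
malicious = ["FindFirstFile", "ReadFile", "CryptEncrypt", "WriteFile",
             "DeleteFile", "FindNextFile", "ReadFile", "CryptEncrypt"]
benign = ["CreateFile", "ReadFile", "WriteFile", "CloseHandle"]

mal_features = api_ngrams(malicious)
print(mal_features[("ReadFile", "CryptEncrypt")])   # 2
print(api_ngrams(benign)[("ReadFile", "CryptEncrypt")])  # 0
```

The discriminative bigram (read followed by encrypt) appears only in the malicious trace, which is exactly the kind of behavioral signal a trained classifier exploits.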
3.1.3. Phishing Detection
Today, one of the most persistent and rapidly evolving cybersecurity threats is phishing, which poses significant risks to individuals and organizations. Phishing detection leverages ML to analyze features in URLs, email content, and webpage structures to detect fraudulent communication. Traditional rule-based phishing detection methods, such as blacklists and heuristic-based approaches, often fail to identify sophisticated phishing attempts, prompting the need for more adaptive and intelligent solutions. Algorithms such as Logistic Regression, Naïve Bayes, and Gradient Boosting have been widely used to identify phishing attempts with high precision [30,31]. NLP techniques are also integrated to analyze email text and detect linguistic signs of deception, supporting ML-based models which continuously adapt to evolving phishing tactics and provide real-time protection across communication channels.
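The URL analysis described above typically begins with lexical feature extraction, which the following sketch illustrates. The feature set is a small, illustrative subset of what real phishing classifiers use, and the two example URLs are hypothetical.

```python
import re

def url_features(url):
    """Extract simple lexical features commonly fed into ML-based
    phishing classifiers (illustrative, not exhaustive)."""
    return {
        "length": len(url),
        "num_dots": url.count("."),
        "has_at_symbol": "@" in url,  # '@' often hides the real host
        "has_ip_host": bool(re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url)),
        "uses_https": url.startswith("https://"),
    }

legit = url_features("https://www.example.com/login")
phish = url_features("http://192.168.10.5/secure@example.com/verify.login")

print(phish["has_ip_host"], phish["has_at_symbol"])  # True True
print(legit["has_ip_host"], legit["has_at_symbol"])  # False False
```

A model such as Logistic Regression or Gradient Boosting is then trained on vectors of these features (plus many more, e.g., domain age and token entropy) to separate phishing from legitimate URLs.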
3.2. Deep Learning Applications in Cybersecurity
Deep Learning, regarded as a subset of AI and an advancement of ML, analyzes vast amounts of data using multi-layered neural network architectures [32,33]. Recently, the transformative role of DL in cybersecurity has made it possible to automatically learn high-level attributes and extract complex patterns from raw data. Unlike traditional ML models that require manual feature engineering, DL models, especially CNNs and RNNs, enable end-to-end learning that enhances detection accuracy and scalability in complex threat environments. Additionally, DL supports comprehensive security analysis since it can process text, images, network logs, and unstructured data. Two prominent DL applications in cybersecurity are image-based malware detection and intrusion detection using neural networks. Both malware classification and intrusion detection rely on a range of input signals that capture either network behavior or program execution characteristics.
3.2.1. Image-Based Malware Detection
One of the most important steps in detecting and removing image-based malware from an infected system is determining the signature, type, name, and behavior of the malware. In image-based malware detection, executable files are transformed into grayscale or RGB images representing byte-level patterns, as shown in Figure 3. This approach allows the classifier to quarantine malware by studying the texture of the converted malware images [34]. DL models, particularly CNNs, are trained to automatically identify malicious signatures from these visual representations.
Figure 3.
Image-based malware detection process using CNN.
The process involves converting malware binaries into images, normalizing them, and feeding them into a CNN for classification. Widely used inputs include opcode sequences, API/system call traces, byte-level representations, and control-flow graphs. Prior to model training, these inputs are generally subjected to preprocessing steps such as normalization, categorical encoding, embedding of sequential features, noise reduction, and conversion of raw traces into fixed-length or variable-length sequences. These processing steps help transform heterogeneous cybersecurity data into formats suitable for convolutional and recurrent neural architectures. CNNs capture spatial correlations within the image, effectively distinguishing between benign and malicious code, even for obfuscated or polymorphic malware that evades traditional signature-based techniques. The major benefit of this approach is the ability to recognize a new malware variant based on image texture similarity, without relying on manual feature extraction or signature updates.
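The binary-to-image conversion step can be sketched directly, assuming a fixed row width and zero-padding (both choices are illustrative; published approaches vary the width with file size). The byte string below stands in for the head of an executable.

```python
def binary_to_image(data: bytes, width: int = 8):
    """Convert a binary into a 2D grayscale 'image'.

    Each byte (0-255) becomes one pixel intensity; the byte stream is
    reshaped into rows of a fixed width, zero-padding the last row.
    After normalization to [0, 1], the matrix can be fed to a CNN."""
    padded = data + b"\x00" * (-len(data) % width)
    rows = [list(padded[i:i + width]) for i in range(0, len(padded), width)]
    # Normalize intensities for neural network input.
    return [[b / 255.0 for b in row] for row in rows]

# Hypothetical first bytes of an executable (starts with 'MZ' = 0x4D 0x5A).
img = binary_to_image(b"\x4d\x5a\x90\x00\x03\x00\x00\x00\x04\x00", width=8)
print(len(img), len(img[0]))        # 2 8  (two rows of eight pixels)
print(round(img[0][0], 4))          # 0.302  (0x4D / 255)
```

Once binaries are rendered this way, standard image classifiers apply unchanged: the CNN learns texture patterns that persist across obfuscated variants of the same family.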
3.2.2. Intrusion Detection Using Neural Networks
Within a network environment, the IDS functions to regularly collect and analyze information from the systems for malicious attack detection. The DL-based approach utilizes neural networks such as RNNs, Deep Belief Networks (DBNs), and Long Short-Term Memory (LSTM) networks to detect anomalies and cyber intrusions within large network datasets. RNNs and LSTMs are particularly effective in analyzing temporal dependencies in network traffic flows, capturing subtle sequential patterns that may indicate ongoing attacks. Deep architectures also allow for feature abstraction from raw traffic data, improving the detection of both known and novel attack types. Ioannou and Fahmy [35] demonstrated the benefit of neural-network-based IDSs in terms of faster network scaling, in addition to offering the high adaptability, flexibility, and accuracy needed to accept updated model parameters for emerging threats. Figure 4 depicts the application of neural networks in intrusion detection using the DL technique. The first stage involves collecting raw network traffic data. For IDS applications, studies typically use network flow features (e.g., packet size, duration, byte counts, protocol flags), full packet payloads, or sequence-based representations of traffic patterns. The collected data is often noisy and in a format unsuitable for direct input into neural networks, necessitating transformation via data processing. Thereafter, the DL neural network model undergoes training on a labeled dataset and is subsequently deployed to analyze new, unseen network traffic in the intrusion detection stage.
Figure 4.
Deep-Learning-based intrusion detection workflow using neural networks.
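The data-processing stage of this workflow commonly includes scaling flow features to a common range before they enter a neural network. The sketch below shows min-max scaling over a handful of hypothetical flows; the feature choices (duration, bytes sent, packet count) are illustrative.

```python
def min_max_scale(flows):
    """Min-max scale each column of numeric flow features to [0, 1].

    Neural networks such as LSTMs train more stably when all input
    dimensions share a common scale."""
    cols = list(zip(*flows))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1  # avoid division by zero on constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    return [list(row) for row in zip(*scaled_cols)]

# Illustrative flows: (duration_s, bytes_sent, packet_count)
flows = [(0.5, 1200, 10), (2.0, 50000, 400), (0.1, 300, 2)]
scaled = min_max_scale(flows)
print(scaled[2][0])  # 0.0: shortest flow maps to the bottom of the range
print(scaled[1][1])  # 1.0: the largest byte count maps to the top
```

Real pipelines fit the scaling parameters on the training set only and reuse them at inference time, so that live traffic is transformed consistently with what the model saw during training.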
3.3. Natural Language Processing Applications in Cybersecurity
Like DL, NLP is a subfield of AI that enables machines to understand, interpret, and generate human language. In cybersecurity, NLP plays a crucial role in extracting actionable intelligence from vast amounts of unstructured textual data such as social media posts, threat reports, security blogs, and dark web communications [36,37]. It combines computational linguistics with ML, DL, and statistical models. By leveraging techniques such as named entity recognition (NER), text mining, sentiment analysis, and topic modeling, NLP provides enhanced situational awareness, early threat detection, and automated information processing. Two key NLP applications in cybersecurity are threat intelligence extraction and analysis of unstructured security data.
3.3.1. NLP for Threat Intelligence Extraction
Cyber Threat Intelligence (CTI) involves gathering and analyzing information about the Tactics, Techniques, and Procedures (TTPs) employed by potential cyber threats, to enable proactive defense [38]. NLP techniques automatically extract entities (e.g., IP addresses, malware names, CVE identifiers, threat actors) and relations contained in unstructured text sources such as technical blogs, security advisories, and social media feeds (Figure 5). By applying methods such as Dependency Parsing, NER, and Relation Extraction, NLP models transform raw text into structured threat intelligence datasets usable by automated security systems and security information and event management (SIEM) tools. Specifically, NLP has been exploited as a powerful tool for analyzing large datasets of text-based information. Hence, this approach accelerates intelligence gathering and enhances real-time awareness of emerging cyber threats [39].
Figure 5.
NLP pipeline for threat intelligence extraction.
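The simplest form of this entity extraction is pattern-based recognition of indicators of compromise, which the sketch below illustrates for IP addresses, CVE identifiers, and MD5 hashes. The sample report text is fabricated for demonstration (the CVE shown is the well-known Log4Shell identifier; the IP is from a reserved documentation range).

```python
import re

IOC_PATTERNS = {
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "cve": r"\bCVE-\d{4}-\d{4,7}\b",
    "md5": r"\b[a-fA-F0-9]{32}\b",
}

def extract_iocs(text):
    """Pull basic indicators of compromise out of unstructured text."""
    return {name: re.findall(pat, text) for name, pat in IOC_PATTERNS.items()}

report = ("The actor exploited CVE-2021-44228 and beaconed to "
          "203.0.113.7; dropper hash d41d8cd98f00b204e9800998ecf8427e.")

iocs = extract_iocs(report)
print(iocs["cve"])   # ['CVE-2021-44228']
print(iocs["ipv4"])  # ['203.0.113.7']
```

Production CTI pipelines layer statistical NER models on top of such patterns to recognize entities (threat actor names, malware families) that no regular expression can capture, then link the extracted entities into structured relations.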
3.3.2. Analyzing Unstructured Security Data
The cybersecurity ecosystem generates massive volumes of unstructured textual data, including incident reports, firewall logs, and dark web postings, which contain latent indicators of compromise (IOCs). The increasing volume of data generated has made the demand to efficiently analyze data more crucial [40]. NLP models, particularly those based on transformer architectures (e.g., BERT, RoBERTa, and GPT models), are now employed to analyze, classify, and summarize these texts automatically [41].
For instance, NLP can perform automated log summarization [42], detect malicious intent in textual data, and correlate security events from multiple data streams. Topic modeling and clustering algorithms such as Latent Dirichlet Allocation (LDA) [43] also help categorize threat discussions and identify emerging attack trends. Employing NLP in this manner facilitates real-time analysis of unstructured data, improving the accuracy and speed of threat detection and incident response.
3.4. Reinforcement Learning Applications in Cybersecurity
The importance of RL is increasing due to its application in cyber defense and automated response systems. As a leading AI-based machine learning method, RL agents directly learn optimal actions through interaction with their environment [44]. Unlike supervised learning, RL does not require labeled data; instead, agents learn through trial-and-error feedback in the form of rewards and penalties, dynamically adapting to evolving threats. RL models enable self-learning defense mechanisms that can anticipate, mitigate, and respond to attacks autonomously. This represents a major advancement toward autonomous cybersecurity and Zero Trust architectures [45].
3.4.1. Adaptive Defense Mechanisms
Adaptive defense mechanisms rely on RL agents that continuously monitor the network environment and adjust defense policies in real time. As depicted in Figure 6a, the system observes network states (e.g., traffic patterns, intrusion attempts) and selects defense actions (e.g., reconfiguration, isolation, or blocking) based on learned policies [46]. Through techniques such as Deep Q-Networks (DQN) and Policy Gradient Methods, the RL agent learns to maximize long-term security rewards, effectively anticipating attack strategies and minimizing system compromise. As the realm of cybersecurity evolves, there is a growing need for dynamic and adaptive defense mechanisms, as demonstrated in [47]. Thus, RL enables dynamic and context-aware adaptation, reducing reliance on static rule-based security controls.
Figure 6.
Reinforcement learning applications in cybersecurity. (a) Reinforcement learning framework for adaptive defense; (b) automated response system using reinforcement learning.
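The observe-act-reward loop of Figure 6a can be made concrete with a deliberately tiny, single-step Q-learning sketch (effectively a contextual bandit, since each episode ends after one action). The two states, two actions, and the reward function are all hypothetical choices for illustration; real systems use deep networks over high-dimensional state observations.

```python
import random

# Toy environment: states are traffic conditions, actions are defenses.
STATES = ["normal", "under_attack"]
ACTIONS = ["allow", "block"]

def reward(state, action):
    """Blocking during an attack is rewarded; blocking benign traffic
    (a false positive) and allowing attacks are penalized."""
    if state == "under_attack":
        return 1.0 if action == "block" else -1.0
    return 1.0 if action == "allow" else -0.5

random.seed(0)
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha = 0.1  # learning rate

for _ in range(2000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)  # pure exploration, for simplicity
    # One-step Q-update (no discounted future term: episodes are length 1).
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])

best = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(best)  # {'normal': 'allow', 'under_attack': 'block'}
```

The learned policy blocks traffic only under attack, which is precisely the context-aware behavior that static rules struggle to express: the trade-off between mitigation and availability is encoded in the reward signal rather than hand-written conditions.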
3.4.2. Automated Response Systems
Automated response systems employ RL to decide and execute real-time countermeasures when an intrusion or anomaly is detected. Instead of predefined rule triggers, RL-based systems autonomously select the optimal sequence of defensive actions to contain or neutralize an attack with minimal disruption [48]. For instance, an RL agent embedded in a Security Operations Center (SOC) can decide whether to quarantine a host, terminate a connection, or escalate to human analysts, depending on the expected long-term impact (see Figure 6b). Multi-agent Reinforcement Learning (MARL) is also gaining traction, allowing multiple agents to cooperate (e.g., endpoint and network agents) for distributed cyber defense [49]. RL-based response systems provide proactive, intelligent, and minimally invasive defense actions, improving reaction speed and resilience.
Table 1 summarizes the applications of AI in cybersecurity, including the techniques and potential benefits.
Table 1.
Summary of AI applications in cybersecurity.
4. Gains of AI for Cybersecurity
4.1. Improved Threat Detection and Prediction
The integration of AI into cybersecurity has revolutionized the accuracy, speed, and adaptability of prediction mechanisms and threat detection. Most conventional security systems rely heavily on signature-based and rule-driven models, which are limited in identifying zero-day attacks, advanced persistent threats (APTs), and polymorphic malware [50]. In contrast, AI-powered approaches leverage ML, DL, and neural network architectures to identify complex and evolving attack patterns through data-driven learning and behavioral analysis.
4.1.1. Data-Driven Threat Detection
Data-driven threat detection describes the ability of AI systems to make informed decisions by training algorithms on large amounts of data. AI-based detection models are capable of analyzing massive and heterogeneous security data, including endpoint activity, logs, and network traffic, utilizing statistical methods to uncover anomalies and previously unseen intrusions. Supervised ML algorithms such as SVM, Random Forest, and Gradient Boosting have demonstrated superior performance in detecting network intrusions and malware variants [51]. DL models, such as CNNs and RNNs, have extended this capability by automatically learning hierarchical features, enabling detection of obfuscated malware signatures and nonlinear attack behaviors.
4.1.2. Predictive Threat Modeling
Despite recent advancements, contemporary real-time cyber attack detection systems continue to exhibit several critical shortcomings. AI addresses these by moving beyond detection to predictive analytics that forecast potential attacks before they occur. By leveraging historical threat data, attack graphs, and behavioral trends, predictive models can infer attacker intent and assess the likelihood of exploitation for specific vulnerabilities [52]. Techniques such as probabilistic graphical models, RL, and time series forecasting allow proactive decision making, enabling security teams to anticipate and mitigate risks rather than react after a breach. Table 2 highlights the comparison between traditional and cutting-edge AI-based detection systems.
Table 2.
Comparison of traditional vs. AI-based threat detection and prediction systems.
4.2. Real-Time Incident Response and Automation
The ever-growing complexity and velocity of cyber threats demand security mechanisms capable of instantaneous detection, decision making, and response. AI plays a pivotal role in enabling real-time, automated incident response systems that can analyze, prioritize, and remediate security incidents with minimal human intervention. Through advanced ML, DL, and RL algorithms, AI empowers security infrastructures to act autonomously, thereby reducing response latency, operational overhead, and human error.
4.2.1. Intelligent Automation of Incident Response
Advances in AI analytics have inspired a transition from reactive incident reporting to a proactive and predictive approach in incident management. AI-driven automation enables security systems to process and respond to alerts from multiple sources such as IDS, SIEM platforms, and Endpoint Detection and Response (EDR) tools in real time [53]. Using AI orchestration engines, the system can automatically classify the severity of an event, correlate it with threat intelligence feeds, and initiate mitigation actions such as isolating infected nodes or blocking malicious IPs.
Similarly, combining AI with Security Orchestration, Automation, and Response (SOAR) platforms represents a significant step for cyber threat detection, mitigation, and prevention efforts. It enhances coordination between different cybersecurity tools, leading to seamless and timely defense actions across an organization’s digital ecosystem.
4.2.2. Reinforcement Learning for Adaptive Responses
Utilizing RL enhances automation by enabling adaptive and context-aware response policies. RL agents learn through continuous interaction with the system environment, evaluating defense actions and optimizing strategies to minimize risk and system impact. Such an approach is particularly effective in dynamic attack scenarios, where traditional rule-based responses may fail to adapt. The comparison between manual and AI-driven incident response capability is presented in Table 3. By employing algorithms such as DQN and Policy Gradient Methods, AI-driven systems can autonomously balance between aggressive mitigation (e.g., isolating critical nodes) and operational continuity.
Table 3.
Comparing manual and AI-driven incident response systems.
4.3. Scalability in Handling Big Data for Security Analytics
The exponential growth of digital data in modern enterprises, driven by cloud computing, the Internet of Things (IoT), and mobile technologies, has created massive and complex datasets that challenge traditional cybersecurity analytics. In this scenario, AI provides the computational scalability and analytical intelligence required to manage and interpret such vast data volumes in real time. By integrating ML, DL, and distributed computing, AI enables the efficient processing, correlation, and visualization of multi-source security data, leading to faster detection, richer insights, and proactive risk management.
4.3.1. AI and Big Data Convergence in Cybersecurity
Today, the sophistication of attacks and the complex nature and large scale of IT infrastructures have made cybersecurity detection more challenging. Conventional security systems, such as signature-based IDS and SIEM tools, struggle to scale with the volume, velocity, and variety of modern cybersecurity data [54]. In contrast, AI models are designed to thrive in big data ecosystems by leveraging parallel computing, stream processing, and automated feature extraction.
Robust ML algorithms efficiently process high-dimensional datasets to uncover correlations across millions of events, while DNNs can learn complex, nonlinear representations of attacks without manual intervention. Similar to [55], Figure 7 shows the deployment of AI-driven analytics platforms on distributed frameworks such as Apache Spark 3.2.x or Hadoop 3.2.x to achieve scalable processing across large, heterogeneous data sources.
Figure 7.
Scalable AI-driven security analytics framework.
4.3.2. Real-Time and Distributed Security Analytics
Real-time and distributed data analytics involves the analysis and processing of data as it arrives, delivering immediate insights that are crucial for time-sensitive applications. Accordingly, AI enhances real-time analytics by leveraging streaming machine learning and event-driven architectures. As highlighted in Table 4, these systems can analyze terabytes of data per second, detecting anomalies or malicious activities as they emerge. Techniques such as online learning and incremental neural updates allow models to evolve continuously without retraining from scratch [56].
Table 4.
Comparative overview of traditional vs. AI-driven big data security analytics.
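The online-learning idea noted above, updating a model one streaming example at a time rather than retraining from scratch, can be shown with the classic perceptron update rule. The features and labels in the stream are fabricated for illustration (label +1 for malicious, -1 for benign).

```python
def perceptron_update(weights, bias, features, label, lr=0.1):
    """One online perceptron update: adjust the model on a single
    streaming example, with no access to past data.

    `label` is +1 (malicious) or -1 (benign)."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    if label * activation <= 0:  # misclassified: nudge the boundary
        weights = [w + lr * label * x for w, x in zip(weights, features)]
        bias += lr * label
    return weights, bias

# Streaming examples: (features, label). The second feature loosely
# plays the role of an attack-intensity signal in this toy setup.
stream = [([1.0, 0.2], -1), ([0.1, 3.0], 1), ([0.9, 0.1], -1),
          ([0.2, 2.5], 1)] * 25

w, b = [0.0, 0.0], 0.0
for x, y in stream:
    w, b = perceptron_update(w, b, x, y)

score = sum(wi * xi for wi, xi in zip(w, [0.15, 2.8])) + b
print(score > 0)  # True: the model now flags attack-like traffic
```

Each update touches only the current example, so memory and latency stay constant regardless of how much traffic has been seen, which is the property that makes streaming analytics at terabyte scale feasible.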
In distributed security environments such as Software-Defined Networks (SDNs) or multi-cloud architectures, AI enables federated learning, allowing multiple agents to learn from local datasets without transferring sensitive information [57,58]. This approach ensures data privacy, scalability, and efficiency across diverse organizational infrastructures.
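The aggregation step at the heart of federated learning can be sketched with federated averaging (FedAvg): each site trains locally and only model weights, never raw events, are shared. The three client weight vectors and dataset sizes below are hypothetical.

```python
def federated_average(client_weights, client_sizes):
    """Federated averaging (FedAvg): aggregate locally trained model
    weights, weighting each client by its dataset size, without any
    raw data leaving the client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical local models from three security domains (e.g., SDN sites).
weights = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.9]]
sizes = [100, 300, 600]  # number of security events observed per site

global_model = federated_average(weights, sizes)
print([round(v, 6) for v in global_model])  # [0.32, 0.8]
```

The server then redistributes the averaged model for the next round of local training; sensitive traffic logs stay on-premise throughout, which is what makes the scheme attractive for multi-cloud and SDN deployments.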
4.4. Enhanced Accuracy and Reduced False Positives
Among the major benefits of AI in cybersecurity is its ability to achieve higher detection accuracy while minimizing false positives, a long-standing challenge in traditional security systems. Conventional approaches such as signature-based IDS and static rule-based firewalls are effective against known threats but often generate numerous false alarms due to their inability to distinguish between actual attacks and legitimate anomalies [59]. AI, particularly through ML and DL, enhances detection precision by learning complex behavioral patterns and adaptive threat features, resulting in more reliable and context-aware security analytics.
4.4.1. AI-Powered Precision in Detection
Training AI models on vast datasets containing both malicious and benign behaviors allows them to learn subtle distinctions that static systems fail to capture. Techniques involving SVMs, Random Forests, and neural networks effectively reduce misclassification errors by identifying nonlinear relationships between input features and attack indicators [51].
Moreover, DL architectures like CNNs and RNNs extract hierarchical representations from raw network traffic, user activity, and log files. This ability to automatically learn discriminative features improves classification precision across diverse threat categories, ranging from phishing to insider attacks.
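The claim that ensemble learners capture nonlinear relationships between features and attack indicators can be illustrated with a small comparison; the synthetic data, interaction rule, and thresholds are assumptions for demonstration only. A Random Forest learns an attack indicator driven by a feature interaction that a linear baseline largely misses:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Attack indicator fires on a feature *interaction* (x0*x1) or a rare spike (x2)
X = rng.uniform(-1, 1, size=(4000, 6))
y = ((X[:, 0] * X[:, 1] > 0.1) | (X[:, 2] > 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)

# Linear baseline cannot represent the x0*x1 interaction
acc_lin = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Tree ensemble carves out the interaction region with axis-aligned splits
rf = RandomForestClassifier(n_estimators=200, random_state=3).fit(X_tr, y_tr)
acc_rf = rf.score(X_te, y_te)
```

The gap between `acc_rf` and `acc_lin` is the misclassification reduction the text attributes to nonlinear learners.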
4.4.2. Contextual Correlation and Adaptive Learning
AI improves accuracy not only through learning but also via contextual correlation. Linking alerts across multiple data dimensions, such as time, source, behavior, and network context, helps eliminate redundant or irrelevant signals [60]. For instance, an AI-enhanced SIEM system can correlate a login anomaly with user location, time of access, and historical activity to determine whether it truly represents a security breach [61].
Furthermore, adaptive learning mechanisms allow AI models to update their parameters in real time as new threats emerge, ensuring that detection accuracy remains high even in evolving attack landscapes. This dynamic adjustment contrasts sharply with static rule-based systems that require manual reconfiguration.
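The correlation idea can be sketched as a simple risk-scoring function; the signals, weights, and thresholds below are hypothetical, chosen only to show how independent context dimensions combine into one decision:

```python
from dataclasses import dataclass

# Hypothetical context signals and weights, for illustration only
@dataclass
class LoginEvent:
    hour: int             # hour of access (0-23)
    new_location: bool    # location unseen for this user
    failed_attempts: int  # failed attempts preceding the success

def correlate(ev, usual_hours=range(8, 19)):
    """Combine independent context signals into one risk score in [0, 1]."""
    score = 0.0
    if ev.hour not in usual_hours:
        score += 0.3                              # off-hours access
    if ev.new_location:
        score += 0.4                              # unfamiliar location
    score += min(ev.failed_attempts, 5) * 0.06    # brute-force hint
    return min(score, 1.0)

benign = correlate(LoginEvent(hour=10, new_location=False, failed_attempts=0))
suspicious = correlate(LoginEvent(hour=3, new_location=True, failed_attempts=4))
```

A production SIEM would learn such weights from data and update them adaptively rather than hard-code them, but the structure of the decision is the same.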
4.4.3. Quantitative Gains and Practical Impact
Empirical studies in [62] showed that AI-based systems can achieve a 50–70% reduction in false positives and detection accuracy of up to 95–99%, some 3.4–4% higher than the traditional techniques presented in [63,64]. These improvements translate directly into operational efficiency by reducing alert fatigue among cybersecurity analysts and enabling faster, more confident responses to genuine incidents. AI-based filters and anomaly detectors within platforms such as SOAR and EDR significantly enhance overall incident handling performance.
4.5. Proactive Security Through Predictive Analytics
One of the transformative contributions of AI to cybersecurity is the shift from reactive defense mechanisms to proactive security strategies enabled by predictive analytics. Traditional cybersecurity systems typically operate on a post-incident or signature-based paradigm, responding to threats only after they have been identified or executed. In contrast, AI-driven predictive models enable the anticipation and prevention of cyber attacks before they occur by analyzing patterns, trends, and anomalies across massive and heterogeneous datasets [52].
4.5.1. Predictive Threat Modeling and Risk Forecasting
AI techniques such as ML, DL, and Statistical Inference Models facilitate the identification of early IoCs and latent threat vectors through data-driven prediction. Using supervised and unsupervised learning methods, predictive analytics models can infer relationships between system behaviors and potential security breaches [65].
For example, [66] conducted a comprehensive review of time series forecasting models (e.g., ARIMA, LSTM networks), which are increasingly used to predict attack trends based on historical intrusion data and evolving adversarial tactics. Similarly, [67] proposed Bayesian Networks and Hidden Markov Models (HMMs) to quantify the likelihood of future attacks by computing probabilistic relationships between system vulnerabilities and exploit likelihoods.
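As a minimal illustration of trend forecasting, the sketch below fits a plain least-squares AR(1) model in NumPy, far simpler than the ARIMA and LSTM models surveyed in [66]; the synthetic weekly attack counts are an assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic weekly attack counts following an AR(1) process around a mean
phi_true, mu, n = 0.8, 100.0, 300
counts = np.empty(n)
counts[0] = mu
for t in range(1, n):
    counts[t] = mu + phi_true * (counts[t - 1] - mu) + rng.normal(0, 5)

# Estimate the AR(1) coefficient by least squares on lagged values
x = counts[:-1] - counts[:-1].mean()
y = counts[1:] - counts[1:].mean()
phi_hat = (x @ y) / (x @ x)

# One-step-ahead forecast of next period's attack volume
forecast = counts.mean() + phi_hat * (counts[-1] - counts.mean())
```

The recovered coefficient tracks the persistence of the attack trend; richer ARIMA or LSTM models extend this same lag-based structure to seasonality and nonlinear dynamics.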
4.5.2. Early Threat Detection and Incident Prevention
Predictive analytics enables organizations to forecast potential attack vectors and detect precursor events that precede full-scale breaches. AI systems analyze subtle deviations in user behavior, network flow, or endpoint performance to issue early alerts, often days or weeks before an actual compromise.
For instance, UEBA tools apply ML to establish behavioral baselines and detect anomalies that signal insider threats or credential misuse [68]. Similarly, threat intelligence platforms integrated with AI forecasting engines can identify zero-day vulnerabilities by monitoring exploit trends and dark web chatter, allowing defenders to act pre-emptively.
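A UEBA-style baseline check can be sketched as follows; the per-user metric (daily transfer volume) and the 3-sigma threshold are illustrative assumptions, not taken from [68]:

```python
import numpy as np

rng = np.random.default_rng(5)

# 90 days of one user's daily outbound transfer volume (MB): the baseline
baseline = rng.normal(200, 20, size=90)
mu, sigma = baseline.mean(), baseline.std()

def is_anomalous(volume_mb, z_threshold=3.0):
    """Flag activity deviating strongly from this user's own baseline."""
    return bool(abs(volume_mb - mu) / sigma > z_threshold)

typical = is_anomalous(215.0)   # within normal day-to-day variation
exfil = is_anomalous(900.0)     # possible data exfiltration
```

Because the baseline is per-entity, the same absolute volume can be normal for one user and a precursor alert for another, which is the core idea behind UEBA.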
4.5.3. Predictive Maintenance and Vulnerability Management
AI-enhanced predictive systems are also critical in vulnerability prioritization and patch management. Predictive algorithms assess the exploitability score and potential impact of unpatched vulnerabilities, guiding system administrators to allocate resources efficiently. This proactive approach minimizes downtime and prevents exploitation by adversaries, who frequently target known but unaddressed weaknesses. Table 5 compares predictive maintenance with reactive cybersecurity models.
Table 5.
Comparison between reactive and predictive cybersecurity models.
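A toy prioritization sketch makes the idea concrete; the fields and weights are hypothetical, loosely inspired by CVSS-style severity plus exploit and asset context, and do not reflect any specific product's formula:

```python
# Hypothetical vulnerability records; identifiers and scores are fabricated
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_public": True, "asset_critical": True},
    {"id": "CVE-B", "cvss": 7.5, "exploit_public": False, "asset_critical": True},
    {"id": "CVE-C", "cvss": 9.1, "exploit_public": False, "asset_critical": False},
]

def priority(v):
    """Blend base severity, exploit availability, and asset criticality."""
    score = v["cvss"] / 10.0
    if v["exploit_public"]:
        score += 0.5       # a public exploit raises urgency sharply
    if v["asset_critical"]:
        score += 0.3
    return score

patch_order = [v["id"] for v in sorted(vulns, key=priority, reverse=True)]
```

Note that the highest raw CVSS score does not automatically win: exploit availability and asset criticality can reorder the queue, which is exactly the resource-allocation benefit described above.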
4.5.4. Strategic and Operational Benefits
Predictive analytics in cybersecurity yields significant strategic and operational advantages. By identifying emerging risks and projecting their likely impact, AI supports risk-based decision making and enhances cyber resilience. Enterprises can deploy preventive countermeasures, such as dynamic access control, segmentation, and patch automation, well in advance of an attack. Moreover, integrating predictive models into SOCs through AI-driven dashboards allows analysts to visualize threat trajectories and simulate potential outcomes, thereby strengthening incident readiness and crisis management.
5. Challenges, Limitations, and Future Directions
Despite its potential, predictive cybersecurity analytics faces challenges such as data quality issues, concept drift, and adversarial manipulation of AI models. The accuracy of predictions heavily depends on the availability of clean, representative datasets and the ability of algorithms to adapt to evolving attack techniques. Future research aims to combine federated learning and Explainable AI (XAI) to build trustworthy and interpretable predictive defense systems that maintain accuracy without compromising transparency or data privacy [57,69].
5.1. Challenges and Limitations
5.1.1. Adversarial AI and Risks of AI-Powered Attacks
While AI has enhanced cybersecurity defenses, it also introduces new vulnerabilities through adversarial AI: the deliberate manipulation of AI systems to produce false or misleading outputs. Attackers exploit the ML models that underpin intrusion detection, malware classification, and behavioral analysis by feeding them crafted adversarial inputs designed to evade detection or cause malicious activity to be misclassified.
Adversarial attacks such as evasion, poisoning, and model inversion threaten the reliability and trustworthiness of AI-driven security systems. In evasion attacks, small perturbations to network traffic or executable files can deceive a trained model into labeling malicious actions as benign. Poisoning attacks occur when adversaries insert manipulated data into training sets, corrupting model behavior over time [70,71,72]. Meanwhile, model extraction and inversion techniques allow attackers to reconstruct AI models or infer sensitive training data, compromising both security and privacy [73].
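The evasion mechanism can be made concrete with a deterministic toy example: an FGSM-style perturbation against a linear "detector," where the weights and sample are fabricated for illustration. A small, bounded change to each feature flips the classification:

```python
import numpy as np

# Toy linear "detector": score = w @ x, flagged malicious when score > 0
w = np.array([0.9, -0.5, 0.7, 0.3, -0.8])
x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # malicious sample
score_clean = w @ x                          # 3.2: confidently flagged

# FGSM-style evasion: step each feature against the score's gradient,
# which for a linear model is simply w
eps = 1.2
x_adv = x - eps * np.sign(w)
score_adv = w @ x_adv                        # 3.2 - 1.2 * sum|w| = -0.64

evaded = bool(score_adv <= 0)                # now classified benign
```

Deep models are attacked the same way, except the gradient must be computed by backpropagation; the perturbation remains bounded by `eps` per feature, which is what makes such evasions hard to spot.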
Moreover, the dual-use nature of AI technologies means that attackers can leverage the same algorithms used for defense, such as RL and generative models, to automate phishing campaigns, password cracking, and vulnerability exploitation at unprecedented speed and scale [74]. This emerging arms race between defensive and offensive AI poses a serious threat to cyber resilience. To mitigate these risks, researchers advocate for robust AI defense strategies including adversarial training, model hardening, XAI, and continuous validation of AI systems. However, balancing model complexity, interpretability, and robustness remains a significant ongoing research concern.
5.1.2. Data Quality and Availability of Training Data
A significant limitation in applying AI to cybersecurity lies in the quality and availability of training data. AI models require large, diverse, and accurately labeled datasets to detect evolving cyber threats effectively. However, scarcity, class imbalance, and proprietary restrictions on cybersecurity data limit access for research and model development [75]. Many real-world datasets contain noise, redundant information, or hidden fields, which reduce model accuracy and generalization.
Furthermore, the dynamic nature of cyber threats means that historical data may quickly become outdated, leading to concept drift, where models trained on past attacks fail to detect novel threats. Ensuring data diversity, maintaining up-to-date threat intelligence, and enabling secure data sharing frameworks are essential to overcoming these challenges.
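Concept drift can be demonstrated in a few lines; the "old" and "new" attack patterns and the 0.2 accuracy-drop threshold below are illustrative assumptions. A model trained on the historical pattern collapses to chance once the pattern shifts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def make_batch(w, n=500):
    """Label events by the currently active attack pattern w."""
    X = rng.normal(size=(n, 4))
    return X, (X @ w > 0).astype(int)

w_old = np.array([1.0, 1.0, 0.0, 0.0])   # historical attack pattern
w_new = np.array([0.0, 0.0, 1.0, 1.0])   # novel pattern after drift

X_train, y_train = make_batch(w_old, n=2000)
model = LogisticRegression().fit(X_train, y_train)

acc_before = model.score(*make_batch(w_old))   # stable regime
acc_after = model.score(*make_batch(w_new))    # concept has drifted

# Simple windowed check: a sharp accuracy drop signals drift
drift_detected = (acc_before - acc_after) > 0.2
```

Monitoring windowed accuracy (or a proxy when labels lag) and triggering retraining on a detected drop is one common, if simple, mitigation.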
5.1.3. Interpretability and Transparency of AI Systems
A critical challenge in applying AI to cybersecurity is the lack of transparency and interpretability in complex models such as DNNs. These systems often operate as “black boxes,” making it difficult for security analysts to understand how and why a decision, such as classifying traffic as malicious, was made [76]. This opacity reduces accountability, trust, and regulatory compliance, especially in high-stakes environments like critical infrastructure.
Without explainable reasoning, false positives or overlooked threats may go unchallenged, undermining operational confidence. Emerging approaches such as XAI, model visualization, and rule-based hybrid systems aim to enhance interpretability while maintaining detection accuracy, though balancing transparency and performance remains an open research issue.
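One widely used model-agnostic interpretability technique, permutation importance, can be sketched as follows; the synthetic data and the single informative feature are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(8)
# Feature 0 drives the "malicious" label; features 1-3 are noise
X = rng.normal(size=(2000, 4))
y = (X[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=8).fit(X, y)

# Permutation importance: accuracy drop when each feature is shuffled
result = permutation_importance(clf, X, y, n_repeats=5, random_state=8)
top_feature = int(np.argmax(result.importances_mean))
```

Reporting which features most influenced a verdict gives analysts a concrete, auditable handle on an otherwise opaque classifier, without modifying the model itself.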
5.1.4. Computational and Resource Requirements
Implementing AI for cybersecurity often entails high computational and resource demands due to large-scale data processing, model training, and real-time inference. Techniques such as DL and RL require powerful GPUs, extensive memory, and distributed computing infrastructures. These requirements can limit deployment in resource-constrained environments and increase operational costs.
Moreover, real-time threat detection and predictive analytics necessitate low-latency processing, which can strain networks and storage systems. Optimizing AI models for efficiency, scalability, and energy consumption remains a critical challenge in practical cybersecurity applications.
5.1.5. Ethical and Privacy Concerns
The adoption of AI in cybersecurity raises significant ethical and privacy concerns. AI systems often require access to sensitive personal, organizational, or network data, which can lead to privacy violations if mishandled or misused [77]. Additionally, automated decision making may inadvertently introduce bias, unfairly flagging certain users or behaviors, and can be exploited for surveillance or unauthorized monitoring.
Typical areas of ethical AI implementation involve the development of fairness-aware algorithms in healthcare diagnostics, education and student evaluation, human resources and hiring, criminal justice reform, and social media moderation. In 2021, for example, Facebook suffered a significant data breach involving several popular applications that incorporated AI components. The incident exposed the personal information of over 500 million users, including locations, phone numbers, gender, usernames, and real names [78]. Such real-world cases bridge the theoretical implications of ethical AI and privacy with practical applications.
Ensuring data protection, compliance with regulations, and ethical AI practices is essential, but balancing effective threat detection with privacy preservation remains a major limitation in deploying AI-driven cybersecurity solutions.
6. Future Directions
6.1. Explainable AI (XAI) for Trustworthy Cybersecurity
A promising future direction in AI-driven cybersecurity is the development of XAI to enhance trust, transparency, and accountability. XAI techniques aim to make AI decisions interpretable, allowing analysts to understand why a threat was detected or a response suggested [69].
Incorporating XAI can improve incident investigation, regulatory compliance, and human–AI collaboration, enabling security teams to verify model outputs, reduce false positives, and make informed decisions. Future research should focus on scalable, interpretable models that maintain high accuracy while providing actionable explanations for complex cybersecurity scenarios.
6.2. Integration of AI with Blockchain and IoT Security
Future cybersecurity strategies will increasingly focus on the integration of AI with blockchain and IoT security to maximize trust, automation, and resilience. AI can intelligently detect anomalies and predict attacks within IoT ecosystems, while blockchain’s decentralized ledger ensures data integrity, transparency, and secure device authentication [79,80].
Combining these technologies enables real-time, tamper-resistant threat detection and supports autonomous security management across distributed networks. Ongoing research aims to optimize this synergy for lightweight, scalable, and energy-efficient cybersecurity frameworks in smart environments.
6.3. Autonomous Security Systems for Critical Infrastructures
Another envisioned direction in AI-driven cybersecurity is the development of autonomous security systems that are capable of self-monitoring, decision making, and response within critical infrastructures such as energy grids, healthcare networks, and transportation systems. These systems leverage RL and adaptive AI to detect, predict, and neutralize threats in real time without human intervention [81].
By enabling continuous situational awareness and automated threat mitigation, autonomous AI enhances resilience and minimizes downtime in mission-critical operations. Future research will emphasize safe autonomy, human oversight, and ethical governance to ensure reliable deployment in high-stakes environments.
6.4. Policy and Governance Frameworks for AI in Cybersecurity
As AI becomes integral to cybersecurity, there is a growing need for robust policy and governance frameworks to guide its ethical, secure, and accountable deployment. Effective frameworks should address issues of data privacy, algorithmic bias, model transparency, and cross-border cyber regulation [82].
Future governance models will likely emphasize AI auditing, certification standards, and regulatory compliance to ensure responsible innovation. Establishing international collaboration and policy harmonization will be essential to manage risks and promote the trustworthy adoption of AI in global cybersecurity ecosystems.
7. Conclusions
The incorporation of AI-enabled applications into cybersecurity marks an important advancement in the fight against sophisticated cyber attacks. This study has emphasized that AI significantly increases the efficacy of cybersecurity systems by enhancing automated incident response and threat detection. In addition, this paper highlighted the benefits of AI in improving adaptive learning mechanisms, expediting incident response, and strengthening overall situational awareness through intelligent data analysis. The paper discussed the full potential of ML, DL, NLP, and RL as key AI applications in cybersecurity for the real-time identification, prediction, and neutralization of attacks. Despite the enormous promise offered by AI, its implementation introduces challenges such as data breaches, algorithmic bias, adversarial manipulation, and operational and ethical concerns. These concerns highlight the urgent need for robust policy and legislative frameworks that can guarantee the responsible utilization of AI within the cybersecurity ecosystem. Building trustworthy and resilient AI-driven systems therefore requires proper ethical grounding in accountability, explainability, and compliance with global standards. Future directions in cybersecurity that rely on state-of-the-art technologies, such as federated learning, XAI, autonomous cyber defense, and cooperative mitigation and self-learning capabilities, were also suggested. In conclusion, the convergence of AI and cybersecurity not only strengthens disaster recovery, incident response, and proactive threat hunting, but also defines the next frontier of intelligent cybersecurity measures.
Author Contributions
Conceptualization, O.A.; methodology, O.A.; validation, O.A. and M.S.T.; writing—original draft preparation, O.A.; writing—review and editing, O.A.; supervision, M.S.T.; funding acquisition, M.S.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
No new data were created.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| AI | Artificial Intelligence |
| APTs | Advanced Persistent Threats |
| CNNs | Convolutional Neural Networks |
| CTI | Cyber Threat Intelligence |
| DBN | Deep Belief Network |
| DL | Deep Learning |
| DNN | Deep Neural Network |
| DQN | Deep Q-Network |
| EDR | Endpoint Detection and Response |
| IDS | Intrusion Detection System |
| IoT | Internet of Things |
| IoC | Indicators of Compromise |
| IPS | Intrusion Prevention System |
| kNNs | k-Nearest Neighbors |
| LDA | Latent Dirichlet Allocation |
| LSTM | Long Short-Term Memory |
| MARL | Multi-Agent Reinforcement Learning |
| ML | Machine Learning |
| NER | Named Entity Recognition |
| NLP | Natural Language Processing |
| PKI | Public Key Infrastructure |
| RNN | Recurrent Neural Network |
| RL | Reinforcement Learning |
| SDNs | Software-Defined Networks |
| SIEM | Security Information and Event Management |
| SOAR | Security Orchestration, Automation and Response |
| SOC | Security Operations Center |
| SVM | Support Vector Machine |
| TTP | Tactics, Techniques, and Procedures |
| UEBA | User and Entity Behavior Analytics |
| XAI | Explainable Artificial Intelligence |
References
- Mohamed, N. Current Trends in AI and ML for Cybersecurity: A State-of-the-Art Survey. Cogent Eng. 2023, 10, 2272358. [Google Scholar] [CrossRef]
- Apruzzese, G.; Laskov, P.; Montes de Oca, E.; Mallouli, W.; Brdalo Rapa, L.; Grammatopoulos, A.V.; Di Franco, F. The Role of Machine Learning in Cybersecurity. Digit. Threat. 2023, 4, 8. [Google Scholar] [CrossRef]
- Yu, J.; Shvetsov, A.V.; Hamood Alsamhi, S. Leveraging Machine Learning for Cybersecurity Resilience in Industry 4.0: Challenges and Future Directions. IEEE Access 2024, 12, 159579–159596. [Google Scholar] [CrossRef]
- Miranda-García, A.; Rego, A.Z.; Pastor-López, I.; Sanz, B.; Tellaeche, A.; Gaviria, J.; Bringas, P.G. Deep Learning Applications on Cybersecurity: A Practical Approach. Neurocomputing 2024, 563, 126904. [Google Scholar] [CrossRef]
- Macas, M.; Wu, C.; Fuertes, W. Adversarial Examples: A Survey of Attacks and Defenses in Deep Learning-Enabled Cybersecurity Systems. Expert Syst. Appl. 2024, 238, 122223. [Google Scholar] [CrossRef]
- Fard, N.E.; Selmic, R.R.; Khorasani, K. A Review of Techniques and Policies on Cybersecurity Using Artificial Intelligence and Reinforcement Learning Algorithms. IEEE Technol. Soc. Mag. 2023, 42, 57–68. [Google Scholar] [CrossRef]
- Oh, S.H.; Kim, J.; Nah, J.H.; Park, J. Employing Deep Reinforcement Learning to Cyber-Attack Simulation for Enhancing Cybersecurity. Electronics 2024, 13, 555. [Google Scholar] [CrossRef]
- Alawida, M.; Mejri, S.; Mehmood, A.; Chikhaoui, B.; Isaac Abiodun, O. A Comprehensive Study of ChatGPT: Advancements, Limitations, and Ethical Considerations in Natural Language Processing and Cybersecurity. Information 2023, 14, 462. [Google Scholar] [CrossRef]
- Soori, M.; Dastres, R.; Arezoo, B.; Jough, F.K.G. Intelligent Robotic Systems in Industry 4.0: A Review. J. Adv. Manuf. Sci. Technol. 2024, 4, 2024007. [Google Scholar] [CrossRef]
- Kolosnjaji, B.; Xiao, H.; Xu, P.; Zarras, A. Artificial Intelligence for Cybersecurity: Develop AI Approaches to Solve Cybersecurity Problems in Your Organization; Packt Publishing Ltd.: Birmingham, UK, 2024; p. 356. [Google Scholar]
- Jafarizadeh, S.; Bour, H.; Soldani, D. AI-Driven Authentication Anomaly Detection for Modern Telco and Enterprise Networks. In Proceedings of the 2025 IEEE 50th Conference on Local Computer Networks (LCN), Sydney, Australia, 14–16 October 2025; pp. 1–6. [Google Scholar]
- Putra, H.; Mulyono, B.E.; Winarna, A.; Prakoso, L.Y. The Role of Artificial Intelligence (AI) in Addressing Cyber Threats. Indones. J. Interdiscip. Res. Sci. Technol. 2025, 3, 193–200. [Google Scholar] [CrossRef]
- Yadav, I.; Shekhawat, V.; Gautam, K.; Soni, G.K.; Yadav, R. Artificial Intelligence for Cybersecurity: Emerging Techniques, Challenges, and Future Trends. In Proceedings of the 2025 3rd International Conference on Sustainable Computing and Data Communication Systems (ICSCDS), Erode, India, 6–8 August 2025; pp. 1176–1180. [Google Scholar]
- Chen, G.; Yuan, Q. Application and Existing Problems of Computer Network Technology in the Field of Artificial Intelligence. In Proceedings of the 2021 2nd International Conference on Artificial Intelligence and Computer Engineering (ICAICE), Hangzhou, China, 5–7 November 2021; pp. 139–142. [Google Scholar]
- Nikolskaia, K.Y.; Naumov, V.B. The Relationship between Cybersecurity and Artificial Intelligence. In Proceedings of the 2021 International Conference on Quality Management, Transport and Information Security, Information Technologies (IT&QM&IS), Yaroslavl, Russia, 6–10 September 2021; pp. 94–97. [Google Scholar]
- Admass, W.S.; Munaye, Y.Y.; Diro, A.A. Cyber Security: State of the Art, Challenges and Future Directions. Cyber Secur. Appl. 2024, 2, 100031. [Google Scholar] [CrossRef]
- Glerean, E. Fundamentals of Secure AI Systems with Personal Data. 2025. Available online: https://elearning-2024.it.auth.gr/pluginfile.php/540242/mod_label/intro/spe-training-on-ai-and-data-protection-technical_en.pdf (accessed on 23 October 2025).
- Karn, A.L.; Ghanimi, H.M.; Iyengar, V.; Siddiqui, M.S.; Alharbi, M.G.; Alroobaea, R.; Yousef, A.; Sengan, S. Applying the Defense Model to Strengthen Information Security with Artificial Intelligence in Computer Networks of the Financial Services Sector. Sci. Rep. 2025, 15, 30292. [Google Scholar] [CrossRef]
- Nabi, F.; Zhou, X. Enhancing Intrusion Detection Systems through Dimensionality Reduction: A Comparative Study of Machine Learning Techniques for Cyber Security. Cyber Secur. Appl. 2024, 2, 100033. [Google Scholar] [CrossRef]
- Shevchuk, R.; Martsenyuk, V. Neural Networks toward Cybersecurity: Domain Map Analysis of State-of-the-Art Challenges. IEEE Access 2024, 12, 81265–81280. [Google Scholar] [CrossRef]
- Mohamed, N. Artificial Intelligence and Machine Learning in Cybersecurity: A Deep Dive into State-of-the-Art Techniques and Future Paradigms. Knowl. Inf. Syst. 2025, 67, 6969–7055. [Google Scholar] [CrossRef]
- Stamp, M.; Jureček, M. Machine Learning, Deep Learning and AI for Cybersecurity; Springer Nature: Cham, Switzerland, 2025. [Google Scholar] [CrossRef]
- Ali, R.; Ali, A.; Iqbal, F.; Hussain, M.; Ullah, F. Deep Learning Methods for Malware and Intrusion Detection: A Systematic Literature Review. Secur. Commun. Netw. 2022, 2959222. [Google Scholar] [CrossRef]
- Bennetot, A.; Donadello, I.; El Qadi El Haouari, A.; Dragoni, M.; Frossard, T.; Wagner, B.; Sarranti, A.; Tulli, S.; Trocan, M.; Chatila, R.; et al. A practical tutorial on explainable AI techniques. ACM Comput. Surv. 2024, 57, 1–44. [Google Scholar] [CrossRef]
- Dhinakaran, D.; Sankar, S.M.; Selvaraj, D.; Raja, S.E. Privacy-preserving data in IoT-based cloud systems: A comprehensive survey with AI integration. arXiv 2024, arXiv:2401.00794. [Google Scholar] [CrossRef]
- Pineda, V.G.; Valencia-Arias, A.; Giraldo, F.E.L.; Zapata-Ochoa, E.A. Integrating artificial intelligence and quantum computing: A systematic literature review of features and applications. Int. J. Cogn. Comput. Eng. 2025, 7, 26–39. [Google Scholar] [CrossRef]
- Menon, U.V.; Kumaravelu, V.B.; Kumar, C.V.; Rammohan, A.; Chinnadurai, S.; Venkatesan, R.; Hai, H.; Selvaprabhu, P. AI-powered IoT: A survey on integrating artificial intelligence with IoT for enhanced security, efficiency, and smart applications. IEEE Access 2025, 13, 50296–50339. [Google Scholar] [CrossRef]
- Protic, D.; Gaur, L.; Stankovic, M.; Rahman, M.A. Cybersecurity in Smart Cities: Detection of Opposing Decisions on Anomalies in the Computer Network Behaviour. Electronics 2022, 11, 3718. [Google Scholar] [CrossRef]
- Nelson, T.; O’Brien, A.; Noteboom, C. Machine Learning Applications in Malware Classification: A Meta-Analysis Literature Review. Int. J. Cybern. Inform. 2023, 12, 113–124. [Google Scholar] [CrossRef]
- Alhasan, A.E. Enhancing Real-Time Phishing Detection with AI: A Comparative Study of Transformer Models and Convolutional Neural Networks. 2025. Available online: https://www.diva-portal.org/smash/record.jsf?pid=diva2:1983139 (accessed on 23 October 2025).
- Catal, C.; Giray, G.; Tekinerdogan, B.; Kumar, S.; Shukla, S. Applications of Deep Learning for Phishing Detection: A Systematic Literature Review. Knowl. Inf. Syst. 2022, 64, 1457–1500. [Google Scholar] [CrossRef] [PubMed]
- Chaudhari, P.D.; Gudadhe, A. Deep Learning Approach and Its Application in the Cybersecurity Domain. In Proceedings of the 2025 International Conference on Machine Learning and Autonomous Systems (ICMLAS), Prawet, Thailand, 10–12 March 2025; pp. 520–524. [Google Scholar] [CrossRef]
- Nguyen, T.T.; Reddi, V.J. Deep Reinforcement Learning for Cyber Security. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 3779–3795. [Google Scholar] [CrossRef] [PubMed]
- Son, T.T.; Lee, C.; Le-Minh, H.; Aslam, N.; Raza, M.; Long, N.Q. An Evaluation of Image-Based Malware Classification Using Machine Learning. In Advances in Computational Collective Intelligence; Communications in Computer and Information Science; Hernes, M., Wojtkiewicz, K., Szczerbicki, E., Eds.; Springer International Publishing: Cham, Switzerland, 2020; Volume 1287, pp. 125–138. [Google Scholar] [CrossRef]
- Ioannou, L.; Fahmy, S.A. Network Intrusion Detection Using Neural Networks on FPGA SoCs. In Proceedings of the 2019 29th International Conference on Field Programmable Logic and Applications (FPL), Barcelona, Spain, 9–13 September 2019; pp. 232–238. [Google Scholar] [CrossRef]
- Vasques, X. Machine Learning Theory and Applications: Hands-on Use Cases with Python on Classical and Quantum Machines; John Wiley & Sons: Hoboken, NJ, USA, 2024. [Google Scholar]
- Hasanov, I.; Virtanen, S.; Hakkala, A.; Isoaho, J. Application of Large Language Models in Cybersecurity: A Systematic Literature Review. IEEE Access 2024, 12, 176751–176778. [Google Scholar] [CrossRef]
- Arazzi, M.; Arikkat, D.R.; Nicolazzo, S.; Nocera, A.; KA, R.R.; Conti, M. NLP-Based Techniques for Cyber Threat Intelligence. Comput. Sci. Rev. 2025, 58, 100765. [Google Scholar] [CrossRef]
- Hanks, C.; Maiden, M.; Ranade, P.; Finin, T.; Joshi, A. Recognizing and Extracting Cybersecurity Entities from Text. In Proceedings of the Workshop on Machine Learning for Cybersecurity, International Conference on Machine Learning, Guangzhou, China, 2–4 December; pp. 1–7.
- Koschmider, A.; Aleknonytė-Resch, M.; Fonger, F.; Imenkamp, C.; Lepsien, A.; Apaydin, K.; Harms, M.; Janssen, D.; Langhammer, D.; Ziolkowski, T.; et al. Process Mining for Unstructured Data: Challenges and Research Directions. In Modellierung 2024; Gesellschaft für Informatik e.V.: Bonn, Germany, 2024; pp. 1–18. [Google Scholar] [CrossRef]
- Chhetri, B.; Namin, A.S. The Application of Transformer-Based Models for Predicting Consequences of Cyber Attacks. In Proceedings of the 2025 IEEE 49th Annual Computers, Software, and Applications Conference (COMPSAC), Toronto, ON, Canada, 8–11 July 2025; pp. 523–532. [Google Scholar] [CrossRef]
- Zade, N.; Mate, G.; Kishor, K.; Rane, N.; Jete, M. NLP Based Automated Text Summarization and Translation: A Comprehensive Analysis. In Proceedings of the 2024 2nd International Conference on Sustainable Computing and Smart Systems (ICSCSS), Coimbatore, India, 10–12 July 2024; pp. 528–531. [Google Scholar] [CrossRef]
- Scheponik, T.; Sherman, A.T. LDA Topic Analysis of a Cybersecurity Textbook. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Online, 16–20 November 2020; Available online: https://par.nsf.gov/biblio/10160115 (accessed on 26 March 2025).
- Cengiz, E.; Gök, M. Reinforcement Learning Applications in Cyber Security: A Review. Sak. Univ. J. Sci. 2023, 27, 481–503. [Google Scholar] [CrossRef]
- Afolalu, O.; Tsoeu, M.S. Enterprise Networking Optimization: A Review of Challenges, Solutions, and Technological Interventions. Future Internet 2025, 17, 133. [Google Scholar] [CrossRef]
- Hammad, A.A.; Jasim, F.T. Adaptive Cyber Defense Using Advanced Deep Reinforcement Learning Algorithms: A Real-Time Comparative Analysis. J. Comput. Theor. Appl. 2025, 2, 523–535.
- Jaber, A. Transforming Cybersecurity Dynamics: Enhanced Self-Play Reinforcement Learning in Intrusion Detection and Prevention System. In Proceedings of the 2024 IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 15–18 April 2024; pp. 1–8.
- Afshar, R.R.; Zhang, Y.; Vanschoren, J.; Kaymak, U. Automated Reinforcement Learning: An Overview. arXiv 2022, arXiv:2201.05000.
- Zhou, Z.; Liu, G.; Tang, Y. Multi-Agent Reinforcement Learning: Methods, Applications, Visionary Prospects, and Challenges. IEEE Trans. Intell. Veh. 2024, 9, 8190–8211.
- Diana, L.; Dini, P.; Paolini, D. Overview on Intrusion Detection Systems for Computers Networking Security. Computers 2025, 14, 87.
- Ajeesh, A.; Mathew, T. Enhancing Network Security: A Comparative Analysis of Deep Learning and Machine Learning Models for Intrusion Detection. In Proceedings of the 2024 International Conference on E-mobility, Power Control and Smart Systems (ICEMPS), Thiruvananthapuram, India, 18–20 April 2024; pp. 1–6.
- Danish, M. Enhancing Cyber Security Through Predictive Analytics: Real-Time Threat Detection and Response. Int. J. Adv. Comput. Sci. Appl. 2025, 16, 38–49.
- Kinyua, J.; Awuah, L. AI/ML in Security Orchestration, Automation and Response: Future Research Directions. Intell. Autom. Soft Comput. 2021, 28, 527–545.
- Maleh, Y.; Shojafar, M.; Alazab, M.; Baddi, Y. Machine Intelligence and Big Data Analytics for Cybersecurity Applications; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2021; Volume 919.
- Seabra, A.; Lifschitz, S. Advancing Polyglot Big Data Processing Using the Hadoop Ecosystem. arXiv 2025, arXiv:2504.14322.
- Wang, J.; Hu, M.; Li, N.; Al-Ali, A.; Suganthan, P.N. Incremental Online Learning of Randomized Neural Network with Forward Regularization. arXiv 2024, arXiv:2412.13096.
- Yurdem, B.; Kuzlu, M.; Gullu, M.K.; Catak, F.O.; Tabassum, M. Federated Learning: Overview, Strategies, Applications, Tools and Future Directions. Heliyon 2024, 10, e38137.
- Berkani, M.R.A.; Chouchane, A.; Himeur, Y.; Ouamane, A.; Miniaoui, S.; Atalla, S.; Mansoor, W.; Al-Ahmad, H. Advances in Federated Learning: Applications and Challenges in Smart Building Environments and Beyond. Computers 2025, 14, 124.
- Mehmood, M.K.; Arshad, H.; Alawida, M.; Mehmood, A. Enhancing Smishing Detection: A Deep Learning Approach for Improved Accuracy and Reduced False Positives. IEEE Access 2024, 12, 137176–137193.
- Gonsalves, C. Contextual Assessment Design in the Age of Generative AI. J. Learn. Dev. High. Educ. 2025, 34, 1–14.
- Tendikov, N.; Rzayeva, L.; Saoud, B.; Shayea, I.; Azmi, M.H.; Myrzatay, A.; Alnakhli, M. Security Information Event Management Data Acquisition and Analysis Methods with Machine Learning Principles. Results Eng. 2024, 22, 102254.
- Osman, M.; He, J.; Zhu, N.; Mokbal, F.M.M. An Ensemble Learning Framework for the Detection of RPL Attacks in IoT Networks Based on the Genetic Feature Selection Approach. Ad Hoc Netw. 2024, 152, 103331.
- Verma, A.; Ranga, V. Evaluation of Network Intrusion Detection Systems for RPL Based 6LoWPAN Networks in IoT. Wirel. Pers. Commun. 2019, 108, 1571–1594.
- Sharma, M.; Elmiligi, H.; Gebali, F.; Verma, A. Simulating Attacks for RPL and Generating Multiclass Dataset for Supervised Machine Learning. In Proceedings of the 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 17–19 October 2019; pp. 20–26.
- Johora, F.T.; Khan, M.S.I.; Kanon, E.; Rony, M.A.T.; Zubair, M.; Sarker, I.H. A Data-Driven Predictive Analysis on Cyber Security Threats with Key Risk Factors. arXiv 2024, arXiv:2404.00068.
- Landauer, M.; Skopik, F.; Stojanović, B.; Flatscher, A.; Ullrich, T. A Review of Time-Series Analysis for Cyber Security Analytics: From Intrusion Detection to Attack Prediction. Int. J. Inf. Secur. 2025, 24, 3.
- Dass, S.; Datta, P.; Namin, A.S. Attack Prediction Using Hidden Markov Model. In Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 12–16 July 2021; pp. 1695–1702.
- Khan, M.Z.A.; Khan, M.M.; Arshad, J. Anomaly Detection and Enterprise Security Using User and Entity Behavior Analytics (UEBA). In Proceedings of the 2022 3rd International Conference on Innovations in Computer Science & Software Engineering (ICONICS), Karachi, Pakistan, 14–15 December 2022; pp. 1–9.
- Dwivedi, R.; Dave, D.; Naik, H.; Singhal, S.; Omer, R.; Patel, P.; Qian, B.; Wen, Z.; Shah, T.; Morgan, G.; et al. Explainable AI (XAI): Core Ideas, Techniques, and Solutions. ACM Comput. Surv. 2023, 55, 1–33.
- Tian, Z.; Cui, L.; Liang, J.; Yu, S. A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning. ACM Comput. Surv. 2023, 55, 1–35.
- Oprea, A.; Singhal, A.; Vassilev, A. Poisoning Attacks against Machine Learning: Can Machine Learning Be Trustworthy? Computer 2022, 55, 94–99.
- Aljanabi, M.; Omran, A.H.; Mijwil, M.M.; Abotaleb, M.; El-kenawy, E.-S.M.; Mohammed, S.Y.; Ibrahim, A. Data Poisoning: Issues, Challenges, and Needs. In Proceedings of the 7th IET Smart Cities Symposium (SCS 2023), Online, 3–5 December 2023; pp. 359–363.
- Zhou, Z.; Zhu, J.; Yu, F.; Li, X.; Peng, X.; Liu, T.; Han, B. Model Inversion Attacks: A Survey of Approaches and Countermeasures. arXiv 2024, arXiv:2411.10023.
- Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B.; et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv 2018, arXiv:1802.07228.
- Kouper, I.; Stone, S. Data Sharing and Use in Cybersecurity Research. CODATA Sci. J. 2024, 23, 1–19.
- Mittelstadt, B. Interpretability and Transparency in Artificial Intelligence. In The Oxford Handbook of Digital Ethics; Oxford University Press: Oxford, UK, 2021; pp. 378–410.
- Lasmar Almada, M.A. Law & Compliance in AI Security & Data Protection; AI and Data Protection Training Module; European Data Protection Supervisor: Brussels, Belgium, 2024; pp. 1–217.
- Huang, Y.; Arora, C.; Houng, W.C.; Kanij, T.; Madulgalla, A.; Grundy, J. Ethical Concerns of Generative AI and Mitigation Strategies: A Systematic Mapping Study. arXiv 2025, arXiv:2502.00015.
- Kuznetsov, O.; Sernani, P.; Romeo, L.; Frontoni, E.; Mancini, A. On the Integration of Artificial Intelligence and Blockchain Technology: A Perspective about Security. IEEE Access 2024, 12, 3881–3897.
- Ruzbahani, A.M. AI-Protected Blockchain-Based IoT Environments: Harnessing the Future of Network Security and Privacy. arXiv 2024, arXiv:2405.13847.
- Paulraj, J.; Raghuraman, B.; Gopalakrishnan, N.; Otoum, Y. Autonomous AI-Based Cybersecurity Framework for Critical Infrastructure: Real-Time Threat Mitigation. arXiv 2025, arXiv:2507.07416.
- Kumar, S.; Upadhyay, P.; Ramsamy, G. Strengthening AI Governance: International Policy Frameworks, Security Challenges, and Ethical AI Deployment. ITU J. Future Evol. Technol. 2025, 6, 275–285.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).