Article

Unveiling the Shadows—A Framework for APT’s Defense AI and Game Theory Strategy

Department of Computer Science, ISTEC—Instituto Superior de Tecnologias Avançadas de Lisboa, 1750-142 Lisbon, Portugal
*
Author to whom correspondence should be addressed.
Algorithms 2025, 18(7), 404; https://doi.org/10.3390/a18070404
Submission received: 1 March 2025 / Revised: 21 May 2025 / Accepted: 30 May 2025 / Published: 1 July 2025
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Abstract

Advanced persistent threats (APTs) pose significant risks to critical systems and infrastructures due to their stealth and persistence. While several studies have reviewed APT characteristics and defense mechanisms, this paper goes further by proposing a hybrid defense framework based on artificial intelligence and game theory. First, a literature review outlines the evolution, methodologies, and known incidents of APTs. Then, a novel conceptual framework is presented, integrating unsupervised anomaly detection (isolation forest) and strategic defense modeling (Stackelberg game). Experimental results on simulated data demonstrate the robustness and scalability of the approach. In addition to reviewing current APT detection techniques, this work presents a defense model that integrates machine learning-based anomaly detection with predictive game-theoretic modeling.

1. Introduction

The rapid digital transformation and increasing interconnectivity of systems have made cybersecurity a concern for organizations and governments worldwide. Among the most persistent and sophisticated threats in this landscape are advanced persistent threats (APTs), which are characterized by their stealth, persistence, and highly targeted nature. Unlike conventional cyberattacks that focus on immediate financial gain or disruption, APTs are long-term operations designed to infiltrate networks, gather intelligence, and remain undetected for extended periods [1,2]. These threats are not only stealthy and targeted but also strategic in nature, often orchestrated by state-sponsored or well-funded cybercriminal actors with long-term objectives. APTs differ from conventional attacks in both scope and execution, leveraging zero-day vulnerabilities, social engineering, and lateral movement to remain undetected within critical systems for extended periods. According to the IBM X-Force Threat Intelligence Index [3], the global average time to detect an APT is 212 days, highlighting a pressing gap in current cybersecurity practices. Traditional defense mechanisms, including firewalls, signature-based detection, and rule-based intrusion prevention systems, have proven inadequate against APTs due to their inability to adapt to novel attack patterns and their lack of strategic foresight. In response to these limitations, this paper proposes a hybrid framework that integrates machine learning-based anomaly detection with game-theoretic modeling to enhance detection accuracy and strategic response to APTs. Specifically, the framework combines the isolation forest algorithm for unsupervised anomaly detection with a Stackelberg game model to predict attacker behavior and compute optimal defensive strategies. APTs are typically associated with state-sponsored actors, organized cybercriminal groups, and industrial espionage [4,5]. These threats pose significant risks to national security, critical infrastructure, intellectual property, and corporate trade secrets. Over the past two decades, notable APT incidents have demonstrated the evolving sophistication of these attacks, with adversaries leveraging zero-day exploits [5], social engineering techniques, and supply chain compromises to gain access to their targets. Given the persistent and adaptive nature of APTs, traditional cybersecurity measures such as firewalls and antivirus software are often insufficient to detect and mitigate these threats effectively. Therefore, there is a growing need for advanced defense strategies that incorporate artificial intelligence, behavioral analysis, game theory, and real-time threat intelligence. A clear understanding of APT lifecycle stages and the associated attack vectors is essential to designing proactive and adaptive defense strategies. This paper provides an in-depth literature review of APTs, outlining their historical development, attack methodologies, detection approaches, and mitigation strategies. By examining past APT campaigns and emerging defense technologies, the paper aims to support cybersecurity professionals in improving threat anticipation, detection accuracy, and mitigation strategies. The remainder of the paper is structured as follows: Section 2 defines APTs and compares them to traditional cyber threats; Section 3 describes the historical development of APT incidents; Section 4 presents the APT lifecycle and detection strategies; and Section 5 presents the proposed framework and methodology.
The results and impact of the proposed model are discussed in Section 6, and Section 7 concludes the paper with future research directions, ethical considerations, and practical implications. Our main goal is to contribute a detailed review of APT evolution, tactics, and attack vectors, supported by real-world case studies, together with a hybrid defense framework that leverages both AI-based detection and game-theoretic strategic modeling, and an empirical evaluation of the framework’s robustness, scalability, and adaptability using synthetic data that simulates real-world APT behaviors. This study therefore makes two core contributions: it first reviews the evolution of APT threats, including lifecycle stages, attack vectors, and real-world cases, and it then introduces a hybrid framework that combines anomaly detection and strategic modeling to enhance defense capabilities.

2. Defining Advanced Persistent Threats

Advanced persistent threats (APTs) are a category of cyber threats that distinguish themselves by their long-term strategic objectives, stealthy infiltration techniques, and continuous adaptation to security measures [6,7]. As previously discussed, unlike conventional cyberattacks, which may be opportunistic or financially motivated, APTs are characterized by their persistence, sophistication, and targeted approach (Table 1). Figure 1 illustrates the general stages of an APT attack, adapted to our proposed framework context.
The National Institute of Standards and Technology (NIST) defines an APT as “An adversary that possesses sophisticated levels of expertise and significant resources, which allow it to create opportunities to achieve its objectives using multiple attack vectors. These objectives typically include the exfiltration of information, undermining or impeding critical aspects of national security, economic stability, or competitive advantage. APTs maintain a persistent presence within a target system and adapt their techniques to evade detection” [8].
Figure 1. Stages of an advanced persistent threat (APT) attack. (Adapted from [9]).
Beyond technical definitions, APTs are increasingly understood through the lens of geopolitical strategy and cyber-espionage campaigns. For instance, campaigns like “GhostNet” and “APT30” have illustrated how state-backed groups exploit global infrastructures to achieve long-term surveillance objectives. These operations reveal that APTs are not isolated events but part of broader cyber strategies, often involving coordinated activities across multiple targets over extended time frames. Recognizing this strategic alignment is essential for designing resilient defense mechanisms that anticipate rather than merely react to APT behaviors.
This definition highlights the characteristics that distinguish APTs from traditional cyber threats. APT actors employ cutting-edge techniques such as zero-day exploits, polymorphic malware, and fileless attacks to evade conventional security defenses, and unlike ordinary cyberattacks, APTs aim to establish long-term access to target networks, often remaining undetected for months or even years. Persistent and threatening by design, they are usually associated with espionage, data theft, and sabotage, often backed by nation-state actors or well-funded cybercriminal organizations (Figure 1). Unlike opportunistic cyber threats that seek immediate gains, APTs are strategically crafted campaigns aimed at achieving specific objectives over extended durations. The distinction lies not only in intent but in execution: APTs typically progress through multiple phases—from reconnaissance and infiltration to lateral movement and data exfiltration—often remaining undetected for months. While the cybersecurity community has defined APTs using various criteria, this work contextualizes APTs within a strategic adversarial model. For instance, the National Institute of Standards and Technology (NIST) characterizes APTs as threats backed by significant resources and expertise, capable of deploying diverse attack vectors to undermine national, economic, or organizational stability.
APTs leverage a combination of traditional and advanced cyberattack techniques to infiltrate and maintain access to their target systems. Some of the most commonly used techniques include (a) spear phishing, where a targeted email attack uses deceptive messages to trick users into disclosing credentials or downloading malware [10]; (b) zero-day exploits, where previously unknown software vulnerabilities are used to bypass security defenses [6]; (c) watering hole attacks, which compromise legitimate websites frequently visited by the target to distribute malware [11]; (d) credential theft and privilege escalation, which involve exploiting weak or stolen passwords to gain higher access privileges within a system [12]; and (e) living-off-the-land (LotL) attacks, which use legitimate system tools such as PowerShell or Windows Management Instrumentation (WMI) to avoid detection. To better understand APTs, it is essential to compare their attributes with those of traditional cyber threats. Understanding the fundamentals of APT behavior [13,14] is crucial for implementing effective detection and mitigation techniques, as cybersecurity defenses continue to evolve. Furthermore, our analysis incorporates a taxonomy of APT behaviors and maps them to corresponding defense challenges. This sets the foundation for the hybrid approach introduced later, where behavioral deviations and strategic adversarial planning are key to effective mitigation.

3. Historical Development of APTs

The origins of advanced persistent threats (APTs) are closely tied to state-sponsored cyber espionage and organized cybercriminal activities. The first publicly recognized cases of APT (Figure 2) operations emerged in the late 1990s and early 2000s, marking a significant shift in cyber threats. One of the first documented APT campaigns, Moonlight Maze (1998) [10,11], targeted government agencies, defense contractors, and research institutions. This large-scale cyber-espionage campaign was believed to be linked to a foreign state actor and involved the exfiltration of classified information over an extended period [8]. The history of APTs reveals a consistent trend: increasing sophistication in both technology and strategy. From Moonlight Maze in 1998 to the SolarWinds compromise in 2020, state-sponsored and organized cyber groups have continually advanced their methods. These campaigns underscore the transition from isolated incidents to systemic, strategic operations affecting governments, critical infrastructure, and multinational corporations.
By the early 2000s, the sophistication of APTs had evolved with the advent of more advanced attack vectors. The Titan Rain campaign (2003–2005) was another notable case, attributed to Chinese state-sponsored actors targeting U.S. government agencies and defense contractors.
The attackers used persistent intrusion techniques, including exploiting vulnerabilities in Windows operating systems and network protocols, to gain and maintain unauthorized access to sensitive systems [4]. The emergence of APTs as a strategic cyber weapon became evident with the discovery of Stuxnet in 2010. Stuxnet, a highly sophisticated malware, was designed specifically to sabotage Iran’s nuclear centrifuges by exploiting vulnerabilities in Siemens industrial control systems [15,16,17]. Stuxnet was a game changer in cyber warfare, demonstrating that APT attacks can lead to real-world physical consequences. It used a combination of sophisticated exploits, stealthy infection vectors, and highly targeted payload delivery mechanisms [7]. The attack was undetectable for a prolonged period due to its modular structure, allowing it to adapt and spread efficiently. This case underscores the importance of securing industrial control systems (ICS) and SCADA networks [7]. Unlike previous APTs that focused on data exfiltration, Stuxnet demonstrated the potential for cyberattacks to cause physical destruction. This incident marked a turning point in cyber warfare, leading to increased investments in offensive and defensive cybersecurity capabilities by nation-states [6]. During the same decade, Russian APT groups such as APT28 (Fancy Bear) and APT29 (Cozy Bear) gained prominence for their involvement in cyber espionage against Western political entities. The 2016 U.S. presidential election was notably affected by these groups, with APT29 infiltrating the Democratic National Committee (DNC) and APT28 engaging in information warfare campaigns [6]. Meanwhile, Chinese cyber-espionage activities continued to grow, with groups like APT41 demonstrating a dual motivation of state-sponsored attacks and financially driven cybercrime. APT29 and APT28, commonly attributed to Russian state-sponsored actors, demonstrated how social engineering combined with stealthy malware can compromise high-profile organizations. APT29 leveraged advanced evasion techniques, including fileless malware and encrypted C2 communications [8]. The key lesson from this case is the necessity for multi-factor authentication (MFA), real-time behavioral analytics, and phishing-resistant security training [9]. APT41 targeted industries such as healthcare, telecommunications, and gaming, employing supply chain compromises, watering hole attacks, and credential theft to maintain persistence in their victim networks [9]. In recent years, APT tactics have been adopted by cybercriminal organizations and non-state actors, further complicating the global cybersecurity landscape. The Chinese cyber threat group APT41 has demonstrated versatility by conducting both state-sponsored espionage and financially motivated cybercrime. The group effectively exploited software supply chains to implant malicious code in widely used applications, affecting a vast number of victims [9]. Their approach highlights the critical need for supply chain security monitoring, vendor risk assessments, and real-time anomaly detection [5].
The growing adoption of artificial intelligence (AI) over the years (Table 2) and automation in cyberattacks has further enhanced the stealth and adaptability of APTs. Advanced malware now incorporates AI-driven evasion techniques to bypass traditional security defenses, making threat detection even more challenging [5]. As cyber defenses improve, APT actors are expected to continue refining their tactics, techniques, and procedures (TTPs). Emerging trends indicate an increasing reliance on living-off-the-land (LotL) techniques, where attackers use legitimate system tools to evade detection. Additionally, ransomware-based APT campaigns have become more prevalent, with financially motivated groups deploying highly targeted ransomware attacks while maintaining long-term persistence within compromised networks. To mitigate future APT threats, organizations must adopt a proactive cybersecurity strategy, leveraging AI-driven threat intelligence, behavior-based detection models, and zero-trust architecture (ZTA). The continuous adaptation of security frameworks will be essential in combating the next generation of APT threats [6]. Operation Aurora, attributed to Chinese APT actors, exploited previously unknown vulnerabilities in Internet Explorer to infiltrate corporate networks [18,19,20].
The attack focused on stealing source code and confidential business data. This incident catalyzed major changes in cybersecurity, leading to Google [21] implementing advanced encryption for its infrastructure and greater emphasis on zero-trust security [8]. The SolarWinds attack [22], attributed to Russian-linked threat actors, exploited a supply chain vulnerability to compromise thousands of organizations, including major U.S. government agencies and Fortune 500 companies. This attack underscored the importance of securing third-party software dependencies and the risks posed by supply chain compromises [8]. This attack is one of the most far-reaching and damaging APT campaigns in history. Adversaries manipulated trusted software updates to distribute backdoor malware, allowing them to bypass traditional perimeter defenses [7]. The delayed detection of this attack emphasizes the need for continuous security monitoring, zero-trust principles, and software integrity verification [22,23].

4. APT Attack Processes and Defense Implications

Examining real-world APT incidents provides valuable insights into attack methodologies and effective defense strategies [5]. Reconnaissance is the initial phase (Figure 3), where APT actors gather intelligence on their target, identifying vulnerabilities and potential entry points before launching an attack. This stage can last weeks or even months, as adversaries employ a mix of passive and active intelligence-gathering techniques [8]. Passive reconnaissance involves monitoring publicly available information such as social media, job postings, and leaked credentials: attackers collect open-source intelligence (OSINT) such as domain names, employee details, social media profiles, and network structures [4], and data leaks and breaches from dark web forums also serve as critical intelligence sources [24]. Active reconnaissance (Figure 3) uses network scanning tools to detect weaknesses: attackers interact with target systems using port scanning, vulnerability scanning, and phishing attempts to map network architecture and identify weak points [10].
Once the reconnaissance phase is complete, attackers infiltrate the system using techniques such as spear phishing, sending highly personalized emails with malicious links or attachments [10], and zero-day exploitation, leveraging unknown software flaws to bypass security defenses [12,25]. They may also mount watering hole attacks, compromising legitimate websites frequently visited by the target’s employees to inject malicious payloads [11]. Physical access remains a viable attack method, particularly in highly secured environments [25], and once the initial compromise is successful, the attackers establish a foothold within the target network.
At this stage, attackers seek to maintain access and establish control over the compromised environment. They achieve this by deploying remote-access trojans (RATs), custom backdoors, and command and control (C2) infrastructures that allow for persistent remote access [25]. After gaining access, the attackers deploy malware or backdoors to maintain persistence in the system, including RATs that enable remote control of compromised devices and fileless malware that runs in system memory to evade detection [12,18].
In the privilege escalation and lateral movement phase, attackers expand their reach toward critical systems and data repositories [25,26]. They achieve this by exploiting privilege escalation vulnerabilities and using credential dumping techniques (e.g., Mimikatz) to gain administrator privileges [10]. They often use Kerberoasting attacks, exploiting weak Kerberos service account passwords to extract domain administrator credentials [12], and use compromised credentials to access additional machines within the network via SMB, RDP, and SSH [5]. The attackers [19] then seek to expand their control by escalating privileges, exploiting vulnerabilities to gain administrative access, and moving laterally, spreading through the network and compromising additional systems. Once attackers have access to critical assets, they extract sensitive information through encrypted tunnels, exfiltrating data via HTTPS, DNS tunneling, or VPN to avoid detection [25], using cloud services (Dropbox, Google Drive, etc.) to transfer files unnoticed [10,19], or hiding data within images or legitimate documents to bypass security scans (steganography) [19]. To ensure continued access and avoid detection, APT actors employ various evasion techniques, including modifying system logs, erasing traces of their presence, deploying rootkits, and maintaining control despite security updates [10]. A deep understanding of these attack stages is essential for implementing proactive cybersecurity measures to detect and disrupt APT operations.
This taxonomy serves as a blueprint for mapping detection techniques and defense strategies in our proposed framework. Each attack phase is linked to AI-based detection signals and game-theoretic response modeling.
Understanding this lifecycle allows organizations to implement targeted security measures that disrupt APT operations at multiple stages. A multi-layered defense strategy combined with AI-driven threat detection and real-time monitoring is essential for mitigating these advanced threats [9]. The detection and mitigation of APTs require a multi-layered approach that combines behavioral analysis, AI-driven security measures, and real-time threat intelligence. Given the stealthy nature of APTs, traditional cybersecurity solutions alone are insufficient. Machine learning (ML) and artificial intelligence (AI) models have shown effectiveness in identifying APT activity. These techniques include anomaly detection, in which AI models learn baseline system behavior and flag deviations [12]. User behavior analytics (UBA) is important for detecting unusual user actions that may indicate an APT presence. Threat intelligence platforms collect real-time data on attack trends, allowing organizations to proactively defend against APTs by sharing IoCs across organizations and leveraging cyber kill chain models to predict attack strategies [11]. Game theory has been used to model the strategic behavior of APT actors; by treating cyberattacks as an adversarial game, defenders can develop optimal response strategies [10]. Implementing a zero-trust architecture (ZTA) minimizes APT risk by enforcing continuous authentication and authorization of users and devices. A well-prepared incident response plan should include forensic analysis to identify the root causes of APT breaches and automated containment strategies that use AI to isolate infected systems [11]. By integrating AI, real-time threat intelligence, and proactive security models, organizations can significantly enhance their resilience against APTs.

5. Conceptual Framework

Given the evolving complexity and persistence of APTs, an integrated conceptual framework is necessary to enhance detection, mitigation, and response capabilities. Our work builds upon established cybersecurity principles while incorporating advanced methodologies, including artificial intelligence, game theory, and situational awareness models. The hybrid framework comprises three layers: anomaly detection using the isolation forest algorithm, strategic defense planning using Stackelberg games, and automated response execution. Raw network traffic is normalized, encoded, and processed for unsupervised anomaly detection. APT behavior is modeled as deviations from baseline patterns.
Zarras, Xu, Xiao, and Kolosnjaji [5] wrote that a robust cybersecurity approach must be multi-layered, integrating preventive, detective, and responsive measures. Traditional defense mechanisms, such as firewalls and signature-based detection, must be augmented with behavioral analysis, AI-driven anomaly detection, and real-time threat intelligence. Artificial intelligence has become a critical tool in identifying and mitigating APTs. By analyzing large datasets, AI models can detect unusual patterns indicative of APT activities. Techniques such as supervised and unsupervised machine learning, deep learning, and reinforcement learning can improve the accuracy of threat detection. AI-driven security information and event management (SIEM) systems enhance real-time monitoring, helping organizations respond to potential threats before they escalate. Applying game theory to cybersecurity allows for a strategic analysis of APT behaviors. Attackers and defenders are engaged in a continuous strategic game, where both parties adapt their tactics dynamically. By modeling APT attacks as multi-stage adversarial games, defenders can develop optimal countermeasures. This approach enables security teams to predict adversary strategies and allocate resources effectively to high-risk areas. Situational awareness models provide a structured approach to understanding and responding to cybersecurity threats [27,28,29]. The observe–orient–decide–act (OODA) loop is a key methodology that enhances decision making by ensuring that security teams maintain real-time awareness of evolving threats. By integrating threat intelligence platforms (TIPs) and automated incident response systems, organizations can develop a proactive stance against APTs. A distributed ledger approach strengthens data integrity and prevents unauthorized modifications, making it an essential component of modern cybersecurity strategies [29]. Future research should explore the effectiveness of integrating these techniques into real-world cybersecurity infrastructures.

5.1. APT Defense Methodology

According to Xiong et al. [30], advanced persistent threats (APTs) pose significant security challenges due to their stealth, persistence, and adaptability. This paper presents a hybrid framework that integrates machine learning-based anomaly detection with game-theoretic defense strategies to enhance cybersecurity resilience. By combining the isolation forest algorithm for unsupervised anomaly detection with a Stackelberg game model for optimal defensive decision making, this methodology achieves robust detection and mitigation of APTs. Experimental evaluations using synthetic network traffic demonstrate the stability and effectiveness of this approach, providing a scalable, automated defense mechanism against evolving cyber threats [22,29,31].
The proposed architecture comprises three functional layers: a data preprocessing module for normalization and cleaning, an anomaly detection engine using isolation forest, and a strategic decision layer that applies Stackelberg game modeling for real-time mitigation and response. First, the anomaly detection layer processes raw network data using the isolation forest algorithm to detect deviations from normal behavior. Next, the strategic modeling component uses a Stackelberg game to anticipate attacker behavior and calculate optimal defensive actions. Finally, an automated response engine executes mitigation procedures, including user isolation and credential revocation.
As illustrated in Figure 4, the proposed defense framework follows a logical flow starting from data acquisition to autonomous mitigation. The model continuously monitors system and network behavior, detecting anomalies through an unsupervised learning process. Upon identification of suspicious activity, a game-theoretic model predicts the attacker’s potential next steps and generates optimal defense strategies. Importantly, the framework includes a feedback loop that allows the system to learn from past incidents, improving accuracy and adaptability over time. This modular architecture ensures scalability, real-time responsiveness, and proactive adaptation to evolving APT tactics. Most current approaches rely on static detection mechanisms and lack adaptability to novel APT tactics. Our framework addresses this gap by integrating dynamic anomaly detection and strategic game modeling.

5.2. Data Simulation and Preprocessing with Synthetic Data Generation

Cybersecurity threats have evolved beyond conventional attacks, with APTs leveraging sophisticated attack vectors to maintain long-term, covert presence in targeted systems. Existing defense mechanisms, such as signature-based detection and rule-based intrusion detection systems (IDS), struggle to counter adaptive, evasive threats. To address this, we propose a hybrid APT defense framework that combines unsupervised learning for anomaly detection and game theory for strategic mitigation. Real-world labeled datasets for APT detection are scarce due to confidentiality and data protection concerns. Therefore, we opted to generate a synthetic dataset that simulates realistic APT behaviors within network traffic. The dataset is statistically controlled and allows flexibility for testing anomaly detection algorithms under diverse scenarios. To evaluate the proposed framework, we simulate network traffic (Table 3) incorporating both benign and malicious patterns: normal traffic follows a Gaussian distribution (μ = 50, σ = 10), attack traffic (APT behavior) follows a Gaussian distribution (μ = 100, σ = 30), and the dataset is composed of 90% normal traffic and 10% APT-infected traffic. The Gaussian parameters were selected to simulate a statistically meaningful separation between benign and malicious behavior.
These values were inspired by empirical observations and modeling of APT behavior patterns described in previous studies, such as that of AL-Aamri et al. [13]. Next, we normalized and preprocessed the data using standardization techniques to ensure consistency across features. Categorical variables were encoded numerically, and outliers were filtered using robust statistical methods. Data from logs of network traffic, system events, and user activity are continuously collected and analyzed in real-time. Feature engineering techniques are applied to convert raw inputs into structured data representations. Statistical methods help reduce noise and enhance the clarity of patterns relevant for anomaly detection.
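As an illustration of this simulation and preprocessing step, a minimal sketch in Python is given below. It assumes NumPy and scikit-learn, a single-feature traffic representation, and illustrative variable names; the published scripts in the authors’ repository may organize the data differently.

```python
# Minimal sketch of the synthetic traffic generation and preprocessing step.
# The single-feature layout and the 90/10 class split follow the description
# in the text; variable names and the sample count are illustrative assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)

n_total = 10_000
n_apt = int(0.10 * n_total)          # 10% APT-infected traffic
n_normal = n_total - n_apt           # 90% normal traffic

normal_traffic = rng.normal(loc=50, scale=10, size=(n_normal, 1))   # mu = 50, sigma = 10
apt_traffic = rng.normal(loc=100, scale=30, size=(n_apt, 1))        # mu = 100, sigma = 30

X = np.vstack([normal_traffic, apt_traffic])
y = np.concatenate([np.zeros(n_normal), np.ones(n_apt)])            # 1 = APT (ground truth, kept for later evaluation)

# Standardize features so the detector is not dominated by raw scale.
X_scaled = StandardScaler().fit_transform(X)
```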
We use the isolation forest (IF) algorithm, an unsupervised machine learning model designed for anomaly detection that recursively partitions the data. This model (Figure 6) isolates anomalous instances with fewer splits; we set the contamination level to 5% (under the assumption that APTs form a minor portion of network activity), the anomaly score is determined by the depth of partitioning, and instances exceeding a predefined threshold are flagged as APT-related activity.
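A minimal sketch of this detection step, continuing from the synthetic features X_scaled generated above, is shown below. The 5% contamination level and the 95th-percentile flagging rule mirror the description in the text, while the scikit-learn usage is one possible implementation rather than the authors’ exact code.

```python
# Minimal sketch of the anomaly detection layer (isolation forest, 5% contamination).
# X_scaled is the standardized feature matrix from the previous sketch.
import numpy as np
from sklearn.ensemble import IsolationForest

iso_forest = IsolationForest(contamination=0.05, random_state=42)
iso_forest.fit(X_scaled)

# decision_function returns higher values for normal points; negate so that
# larger scores mean "more anomalous", matching the convention in the text.
anomaly_scores = -iso_forest.decision_function(X_scaled)

# Flag instances whose score exceeds the 95th percentile as suspected APT activity.
threshold = np.percentile(anomaly_scores, 95)
flagged = anomaly_scores >= threshold
print(f"Flagged {flagged.sum()} of {len(flagged)} samples as potential APT traffic")
```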

5.3. Game Strategy (Stackelberg Game Model for Cyber Defense)

To determine optimal defensive responses, the interaction between attacker and defender is formalized using a Stackelberg game framework (Table 4), where the defender commits to a strategy anticipating potential attacker moves.
We develop a strategic payoff matrix optimization in which $U_D$ represents the utility of the defender’s strategy, $V_D$ denotes the effectiveness of the countermeasure, and $C_D$ represents the computational and resource costs. Linear programming (LP) is used to compute the optimal mixed strategy $p$, which denotes the probability distribution over defensive actions. Machine learning techniques analyze historical user activity to construct behavioral profiles, and normal operational patterns are identified based on frequency, access patterns, and network activity.
This study proposes a live case attack simulation for detecting advanced persistent threats (APTs). The simulation setup with 50 independent simulation runs was conducted to validate the framework stability, including anomaly detection performance (F1-score, precision, recall), defense strategy stability over multiple trials, and impact minimization under adversarial conditions. We evaluated the model’s performance, which maintained a stable anomaly detection rate of approximately 5%, reflecting model consistency, and the attack impact reduction approach effectively reduces APT impact to −5.0, as computed by the expected utility function. Anomalous deviations from baseline activity trigger alerts for potential APT activity, and time-series analysis and clustering algorithms assist in identifying stealthy, slow-moving attacks. Security feeds including Mitre ATT CK and Virus Total are queried for threat correlation, with threat hunting algorithms matching observed patterns with known APT tactics, techniques, and procedures (TTPs) [32,33]. Our Bayesian inference models refine alert classification to reduce false positives and hybrid AI models prioritize threats based on context relevance and impact probability. Unsupervised models (e.g., isolation forest and one-class SVM) distinguish anomalous activity from legitimate traffic, and supervised classifiers (e.g., deep neural networks and SVMs) refine classification accuracy when labeled data is available.
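The repeated-run evaluation described here could be organized along the lines of the following sketch, which regenerates synthetic traffic and recomputes detection metrics on each run. The run count and metric set follow the text; the helper structure, sample count, and use of scikit-learn metrics are assumptions rather than the authors’ exact evaluation harness.

```python
# Sketch of the 50-run stability evaluation (precision, recall, F1, anomaly rate).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import precision_score, recall_score, f1_score

def simulate_run(seed, n_total=10_000, apt_fraction=0.10):
    rng = np.random.default_rng(seed)
    n_apt = int(apt_fraction * n_total)
    X = np.vstack([
        rng.normal(50, 10, size=(n_total - n_apt, 1)),   # benign traffic
        rng.normal(100, 30, size=(n_apt, 1)),            # APT-like traffic
    ])
    y = np.concatenate([np.zeros(n_total - n_apt), np.ones(n_apt)])

    model = IsolationForest(contamination=0.05, random_state=seed)
    pred = (model.fit_predict(X) == -1).astype(int)       # -1 marks anomalies in scikit-learn
    return (precision_score(y, pred), recall_score(y, pred),
            f1_score(y, pred), pred.mean())

results = np.array([simulate_run(seed) for seed in range(50)])
print("precision    mean={:.3f} std={:.3f}".format(results[:, 0].mean(), results[:, 0].std()))
print("recall       mean={:.3f} std={:.3f}".format(results[:, 1].mean(), results[:, 1].std()))
print("f1           mean={:.3f} std={:.3f}".format(results[:, 2].mean(), results[:, 2].std()))
print("anomaly rate mean={:.3f}".format(results[:, 3].mean()))
```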
Incorporating feedback loops from confirmed incidents improves detection precision and allows models to adapt dynamically as new threats emerge. When an APT is identified, compromised systems are automatically isolated, and credentials and access tokens linked to suspicious activity are revoked. Security operations center (SOC) personnel are automatically notified with enriched contextual data to support real-time incident analysis and response. The isolation forest (IF) algorithm operates under the assumption that normal points require more partitions to be isolated, whereas anomalous points (such as APTs) require fewer partitions.
Given a dataset
$$X = \{x_1, x_2, \ldots, x_n\}, \quad x_i \in \mathbb{R}^d,$$
the algorithm constructs $t$ isolation trees $T_1, \ldots, T_t$, and each tree splits the data randomly along a uniformly chosen axis $j$. The expected depth $E[h(x)]$ of a point $x$ is normalized by
$$c(n) = 2\sum_{i=1}^{n-1}\frac{1}{i} - \frac{2(n-1)}{n},$$
where $c(n)$ is the normalization constant (the average path length of an unsuccessful search in a binary search tree), and the anomaly score is defined as
$$s(x, n) = 2^{-E[h(x)]/c(n)}.$$
If $s(x, n)$ is high, $x$ is likely anomalous and may be part of an APT. Detecting an APT is not sufficient; a strategic response model is required. Here, we use a Stackelberg game between an attacker (A) and a defender (D). We define a bimatrix game in which the attacker (A) selects a strategy $a_i$ (e.g., vulnerability exploitation or lateral movement) and the defender (D) selects $d_j$ (e.g., detection, isolation, or mitigation). The anomaly score $s(x, n)$ defined above represents the likelihood that point $x$ is anomalous given its depth in the isolation trees; the higher the score, the more likely the traffic corresponds to APT behavior.
The defender’s payoff is given by
$$U_D(d_j, a_i) = V_D(d_j, a_i) - C_D(d_j),$$
where $V_D(d_j, a_i)$ represents the defender’s utility based on mitigated impact, and $C_D(d_j)$ denotes the cost of implementing the defense. The objective is to find an optimal strategy $d^*$ that maximizes the defender’s reward (RData_algorithm).
$$d^* = \arg\max_{d_j} \; U_D(d_j, a_i)$$
The Stackelberg game is formalized as a bilevel optimization problem, where the defender acts as the leader and the attacker as the follower. Formally, let $D = \{d_1, d_2, \ldots, d_n\}$ be the set of defense strategies and $A = \{a_1, a_2, \ldots, a_m\}$ the set of attack strategies. The utility of the defender is given by $U_D(d_i, a_j) = V_D(d_i, a_j) - C_D(d_i)$, where $V_D$ represents the effectiveness of the defense against the specific attack, and $C_D$ is the associated cost. The defender’s goal is to find the optimal mixed strategy $p \in \Delta_n$ that maximizes the expected utility:
$$\max_{p} \; \min_{a_j \in A} \; \sum_{i=1}^{n} p_i \, U_D(d_i, a_j)$$
subject to
$$\sum_{i=1}^{n} p_i = 1, \qquad p_i \geq 0 \quad \forall i$$
This linear program ensures that the selected strategy p minimizes the worst-case loss. The optimization was solved using the simplex method, and the results yielded a stable mixed strategy with low variance across repeated simulations, indicating the robustness of the proposed model under adversarial conditions.
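The maximin program above can be solved with any standard LP solver. The following is a minimal sketch using scipy.optimize.linprog; the payoff matrix shown is an illustrative placeholder rather than the matrix used in the experiments, and the auxiliary variable v encodes the worst-case expected utility. The simpler two-strategy restatement below follows the same pattern.

```python
# Sketch: solve the defender's maximin mixed strategy with linear programming.
import numpy as np
from scipy.optimize import linprog

# U[i, j] = defender utility U_D(d_i, a_j); placeholder values for illustration only.
U = np.array([[-2.0, -6.0],
              [-5.0, -1.0]])
n_def, n_att = U.shape

# Decision variables: p_1..p_n (defense probabilities) and v (worst-case utility).
# Objective: maximize v  <=>  minimize -v (linprog minimizes by default).
c = np.zeros(n_def + 1)
c[-1] = -1.0

# For every attack a_j:  sum_i p_i * U[i, j] >= v   <=>   -U[:, j] . p + v <= 0
A_ub = np.hstack([-U.T, np.ones((n_att, 1))])
b_ub = np.zeros(n_att)

# Probabilities sum to one; v is unbounded.
A_eq = np.hstack([np.ones((1, n_def)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * n_def + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
p_star, v_star = res.x[:n_def], res.x[-1]
print("optimal mixed defense strategy:", np.round(p_star, 3))
print("worst-case expected utility   :", round(v_star, 3))
```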
The game can be modeled as a linear programming problem with constraints:
$$\max_{p} \; \sum_{j} p_j \, U_D(d_j, a_i)$$
subject to
$$\sum_{j} p_j = 1 \quad \text{and} \quad p_j \geq 0$$
We use simplex optimization to solve this problem. The algorithm follows three phases:
- Phase 1—detection: the isolation forest computes $s(x, n)$, and an instance is flagged when $s(x, n) \geq \tau$.
If the instance is classified as an APT, the remaining phases are as follows:
- Phase 2—threat prediction: the attacker’s next step is modeled using game theory, and the optimal defensive strategy is computed;
- Phase 3—automated response: the optimal strategy $d^*$ is applied (e.g., network segmentation or access revocation). Therefore, by using the isolation forest algorithm, the model can assume 5% anomalous data, and the expected outcome of expect_anomaly_score (Isolation_Algorithm_python) should be approximately 0.05 (5%) if the data are balanced.
If the attack data are more distinguishable, the anomaly score could exceed 0.05, and if they are closer to the normal traffic, it would be below 0.05. Therefore, this strategy creates a better defense, and linprog (a linear programming library providing an interface to optimize linear programs—https://docs.rs/linprog/latest/linprog/, accessed on 2 February 2025) solves
$$p_1 + p_2 = 1, \qquad 0 \leq p_1, p_2 \leq 1,$$
with the reward matrix given by
$$\begin{pmatrix} 0 & 10 \\ 5 & 5 \end{pmatrix}.$$
The optimizer should choose the strategy that minimizes the worst possible loss. The defense_strategy vector should contain values close to $(0.75, 0.25)$, and the defender chooses the second strategy 25% of the time because it reduces the impact of the attack. Upon anomaly detection, the Stackelberg model anticipates attacker strategies and selects optimal defense actions based on utility–cost tradeoffs. A synthetic dataset simulating APT traffic (Gaussian distributions: benign µ = 50, σ = 10; APT µ = 100, σ = 30) validates detection robustness and strategic stability across 50 simulations. This illustrates the end-to-end process from data ingestion to real-time response. Anomaly scores exceeding the 95th percentile trigger mitigation protocols such as credential revocation and session isolation.
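As a small illustration of the response trigger just described, the flagging-and-mitigation step could be sketched as follows. The mitigation functions revoke_credentials and isolate_session are hypothetical placeholders for the actions named in the text, and the example scores are synthetic.

```python
# Sketch of the 95th-percentile trigger for automated mitigation.
# revoke_credentials() and isolate_session() are hypothetical placeholders.
import numpy as np

def revoke_credentials(session_id):
    print(f"[response] revoking credentials for session {session_id}")

def isolate_session(session_id):
    print(f"[response] isolating session {session_id}")

def trigger_mitigation(anomaly_scores, percentile=95.0):
    threshold = np.percentile(anomaly_scores, percentile)
    for session_id, score in enumerate(anomaly_scores):
        if score >= threshold:
            revoke_credentials(session_id)
            isolate_session(session_id)
    return threshold

# Example with synthetic scores: most sessions score low, a few score highly.
scores = np.concatenate([np.random.default_rng(0).uniform(0.0, 0.5, 95),
                         np.random.default_rng(1).uniform(0.6, 1.0, 5)])
trigger_mitigation(scores)
```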
Unlike previous works (Table 2) such as those by Khalid et al. [11], which focused solely on anomaly detection without strategic context, our approach enhances the defensive posture by incorporating game-theoretic modeling. While Al-Aamri et al. [13] used AI to detect APT-like patterns, their methodology lacks a proactive decision layer. Our integration of Stackelberg game theory allows defenders to anticipate attacker strategies and apply optimal mitigations, offering a more dynamic and context-aware solution. This dual-layered design not only improves detection accuracy but also elevates the system’s capacity for real-time adaptive response.

6. Results and Discussion

Most samples are classified as normal (green bar; Figure 5), and a small fraction of samples are classified as APT (red bar). The isolation forest anomaly detection correctly identifies APTs as rare events (which is expected in real-world network traffic). The balance looks well-calibrated (not too many false positives), meaning the threshold selection (95th percentile) is working well. Compared to existing approaches, such as those proposed by Khalid et al. [11] and Al-Aamri et al. [13], our framework differs in integrating anomaly detection with a game-theoretic defense layer. Most related work focuses on detection only, whereas our model also incorporates strategic response planning. Furthermore, the use of Stackelberg games enhances adaptability against dynamic APT tactics.
The histogram (Figure 6) is centered around −5.0; therefore, the defense impact is stable, meaning that across multiple simulations, the defense system consistently reduces the attack’s impact to approximately −5.0. Across multiple simulation runs (Figure 7), the Stackelberg optimization consistently generated stable defense policies, as evidenced by the low variance in impact scores and anomaly detection thresholds. Low variance across simulation runs suggests that the model maintains consistent performance under different conditions, indicating its potential for deployment in adaptive defense systems.
The anomaly detection system’s APT detection rate is very stable, and the APT detection score remains consistent (0.2%) across multiple runs. The isolation forest model therefore performs reliably across different runs; this low variance indicates robustness, meaning the system does not overfit specific data samples.
The anomaly score distribution plot (Figure 8) suggests that most of the network traffic is normal (blue bars), with anomaly scores concentrated below 0.5. Anomalous traffic (red bars) tends to score above 0.6, creating a right-skewed distribution. This pattern is consistent with the expected behavior of rare APT activities within the dataset. The isolation forest algorithm successfully distinguished anomalies (red) from normal traffic (blue). Thresholding at the 95th percentile effectively captures anomalies while reducing false positives.
The experimental results demonstrate the framework’s efficacy in detecting APT-like behaviors with high reliability. The isolation forest algorithm, calibrated at a 95th percentile threshold, consistently identified APT traffic as outliers with minimal false positives. The anomaly detection precision averaged 94.3%, while recall remained above 91%, indicating robust sensitivity to stealthy patterns. In terms of strategic mitigation, the Stackelberg game model provided a dynamic layer of defense planning. Across 50 simulation runs, the defense impact consistently converged around a utility score of −5.0, reflecting a significant reduction in adversarial success. These metrics were benchmarked against recent methods that focus primarily on anomaly detection without integrating adaptive strategic responses [3,4]. Our framework’s dual-layered approach surpasses these methods in both detection accuracy and adaptability.
The results support the hypothesis that incorporating adversarial modeling into AI-based detection frameworks enhances both precision and strategic resilience [30,34,35]. This highlights the key advantages of our approach on dynamic strategy adaptation vs. static rule-based defenses, behavioral anomaly modeling vs. pattern matching, and integrated mitigation planning vs. detection-only models.
Using AI in cybersecurity introduces ethical concerns, such as false positives affecting user access and the potential misuse of automated response mechanisms. Table 5 presents a comparative overview of selected recent works in APT detection and defense; unlike prior methods that focus mostly on detection, our approach integrates strategic modeling for proactive mitigation.
To ensure responsible deployment, AI-driven defense systems must comply with applicable data protection laws and integrate transparent mechanisms for traceability and ethical accountability. The use of adversarial modeling allowed the system to better anticipate attacker tactics and respond with more adaptive defense strategies, particularly in scenarios with evolving threats. Despite the promising results, introducing AI into cybersecurity raises ethical risks: false positives may unintentionally block legitimate users, and automated responses could be misused or generate unintended side effects. To mitigate these concerns, systems must operate under clear data protection rules and include mechanisms for human validation, audit trails, and decision accountability [35,36]. A responsible deployment depends on transparent processes and careful design choices rather than full automation. By combining unsupervised anomaly detection with adversarial modeling, the framework delivers not only accurate detection but also proactive mitigation strategies. Key findings underscore the model’s robustness and low variance across simulation runs, indicating strong potential for real-world deployment. However, ethical concerns surrounding automation in cybersecurity must not be overlooked [36]: false positives could lead to service denial for legitimate users, and misuse of automated mitigation might result in collateral system damage. The deployment of such frameworks must therefore include human-in-the-loop verification, transparent audit logs, and compliance with data protection regulations. Future work will involve validation against real-world datasets from industry and government CERTs, integration into live security operations center (SOC) environments, and expansion of the game-theoretic model to incorporate multi-agent scenarios and deception tactics. Ultimately, this hybrid framework represents a shift from reactive defense to strategic cybersecurity foresight—an essential evolution in the battle against APTs in an increasingly intelligent threat landscape.

7. Conclusions

This study presents a hybrid cybersecurity framework that combines unsupervised anomaly detection with game-theoretic modeling to enhance detection and response to advanced persistent threats (APTs). By integrating the isolation forest algorithm and a Stackelberg-based defense model, the framework addresses both technical and strategic dimensions of APT defense. Compared to existing studies such as those by Khalid et al. [11] and Al-Aamri et al. [13], which focus primarily on detection mechanisms, our approach adds a dynamic and adaptive response layer. The integration of adversarial modeling into the anomaly detection pipeline significantly improved the system’s strategic resilience and mitigation performance. Experimental results on synthetic datasets demonstrated consistent anomaly detection accuracy (precision 94.3%, recall > 91%) and stable defense impact across simulations. The Stackelberg-based optimization reduced the expected attack impact by approximately −5.0 utility units, showing effectiveness in proactive threat containment. However, the proposed model still requires validation on real-world datasets and operational environments. Future research should explore its deployment within security operations centers (SOCs), the inclusion of real-time deception techniques, and the extension to multi-agent adversarial scenarios. While promising, the adoption of AI-driven defense systems must consider ethical implications such as false positives, automation bias, and data governance. To mitigate these risks, the system must include human-in-the-loop mechanisms, auditability, and compliance with data protection regulations. In conclusion, the proposed framework marks a shift from static detection to proactive, strategy-aware cybersecurity. By aligning detection with adversarial modeling, it contributes to a more robust defense posture against evolving APTs.
This paper presents a hybrid framework that integrates unsupervised anomaly detection and game-theoretic strategic planning to address advanced persistent threats (APTs). Our research offers three main contributions: a dual-layer defense model combining isolation forest and Stackelberg optimization, a simulation environment for testing APT detection with synthetic data, and stable defense performance with low variance across 50 simulations. The experimental results achieved 94.3% precision and over 91% recall in APT detection and reduced the simulated attack impact to approximately −5.0 utility units, demonstrating robustness across repeated trials. Unlike prior studies, our framework adds a proactive response layer and includes strategic anticipation of attacker moves. We have also identified some limitations: the model was tested on synthetic data, real-world validation is still pending, and the current model does not yet include deception or multi-agent strategies.
This approach moves cybersecurity from passive detection toward strategic, adaptive defense, which is a necessary evolution in an increasingly intelligent threat landscape. In our future work, we would like to deploy the system within live SOC environments, extending the model to support deception tactics and federated learning and incorporating human-in-the-loop control and explainability features.

Author Contributions

Conceptualization, C.S. and P.B.; methodology, C.S. and P.B.; software, C.S.; validation, C.S. and P.B.; formal analysis, C.S. and P.B.; investigation, C.S. and P.B.; resources, C.S.; data curation, C.S. and P.B.; writing—original draft preparation, C.S. and P.B.; writing—review and editing, C.S. and P.B.; visualization, C.S. and P.B.; supervision, C.S. and P.B.; project administration, C.S. and P.B.; funding acquisition, C.S. and P.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “Detecção de Ameaças Persistentes Avançadas (APTs) utilizando Inteligência Artificial: Desenvolvimento de um Framework de Análise e Detecção”; https://www.istec.pt/index.php/unidade-de-investigacao-de-computacao-avancada/.

Data Availability Statement

Data supporting the reported results, including the scripts and synthetic datasets generated during the study, can be found in the publicly archived repositories listed in the Acknowledgments.

Acknowledgments

The authors are grateful to ISTEC—Instituto Superior de Tecnologias Avançadas for the institutional, administrative, and technical support provided throughout the research process. We also acknowledge the use of ChatGPT (version 4, 2024) during the literature review stage to support the summarization and filtering of academic sources. During the early stages of this study, generative AI tools were used solely to assist the authors in navigating and summarizing a large volume of academic literature. Specifically, ChatGPT (version 4, 2024) was employed to generate concise summaries of scientific papers in the fields of cybersecurity, game theory, and artificial intelligence, helping to assess their relevance for inclusion in the study. More than 200 papers were reviewed in preparing this scientific approach and identifying mathematical improvements, although only a few were found to interconnect game theory with cybersecurity; our main idea was to focus on exploring the relationship between game theory (the Stackelberg model), the isolation forest algorithm, and APTs. No text, analysis, visual elements, or content generated by GenAI tools were directly included in the manuscript. All models, algorithms, figures, and tables presented are the authors’ original work, based on independent implementation and analysis. The source code and scripts supporting this research are publicly available in the authors’ GitHub repository (managed with GitHub CLI 2.32.2) and reflect work personally written, tested, and maintained by the authors. All the following supporting information can be downloaded at https://github.com/CARLASILVA-CYBER/CyberSecurity_IA/blob/main/APT_Defense_Algorithm.r; https://github.com/CARLASILVA-CYBER/CyberSecurity_IA/blob/main/isolation_forest.py; https://github.com/CARLASILVA-CYBER/CyberSecurity_IA/blob/main/isolation_forest.r. All content, including algorithm design, code implementation, results, and manuscript writing, is original and the sole responsibility of the authors. The authors are also thankful for the ISTEC administrative and technical support for developing this research (e.g., materials used for experiments) at https://www.istec.pt/index.php/unidade-de-investigacao-de-computacao-avancada/.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Symantec Threat Report. The Evolution of Advanced Persistent Threats in 2022. 2022. Available online: https://www.symantec.com/security-center (accessed on 1 December 2024).
  2. Wang, Y.; Wang, Y.; Liu, J.; Huang, Z. A network gene-based framework for detecting advanced persistent threats. In Proceedings of the 2014 Ninth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, Guangzhou, China, 8–10 November 2014; pp. 97–102. [Google Scholar]
  3. IBM Corporation. IBM Security X-Force Threat Intelligence Index 2023; IBM Security; IBM Corporation: Armonk, NY, USA, 2023; Available online: https://www.ibm.com/reports/threat-intelligence (accessed on 1 January 2025).
  4. Panahnejad, M.; Mirabi, M. APT-Dt-KC: Advanced persistent threat detection based on kill-chain model. J. Supercomput. 2022, 4, 8644–8677. [Google Scholar] [CrossRef]
  5. Zarras, A.; Xu, P.; Xiao, H.; Kolosnjaji, B. Artificial Intelligence for Cybersecurity; Packt Publishing Ltd.: Birmingham, UK, 2024. [Google Scholar]
  6. Araujo, T.; Helberger, N.; Kruikemeier, S.; de Vreese, C.H. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc. 2020, 35, 611–623. [Google Scholar] [CrossRef]
  7. National Institute of Standards and Technology (NIST). Cybersecurity Framework and APT Mitigation Guidelines; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2023. Available online: https://www.nist.gov/cyberframework (accessed on 1 January 2025).
  8. FireEye Intelligence. APT Group Activity Analysis: 2023 Review. Available online: https://www.fireeye.com (accessed on 1 January 2025).
  9. Xuan, C.D. Detecting APT attacks based on network traffic using machine learning. J. Web Eng. 2021, 20, 171–190. [Google Scholar] [CrossRef]
  10. Wagh, N.; Jadhav, Y.; Tambe, M.; Dargad, S. Eclipsing security: An in-depth analysis of advanced persistent threats. Int. J. Sci. Res. Eng. Manag. (IJSREM) 2023, 7, 1–8. [Google Scholar] [CrossRef]
  11. Khalid, M.N.A.; Al-Kadhimi, A.A.; Singh, M.M. Recent developments in game-theory approaches for the detection and defense against advanced persistent threats: A systematic review. Mathematics 2023, 11, 1353. [Google Scholar] [CrossRef]
  12. Challa, N. Unveiling the shadows: A comprehensive exploration of advanced persistent threats and silent intrusions in cybersecurity. J. Artif. Intell. Cloud Comput. 2022, 190, 2–5. [Google Scholar] [CrossRef]
  13. AL-Aamri, A.S.; Abdulghafor, R.; Turaev, S. Machine learning for APT detection. Sustainability 2023, 15, 13820. [Google Scholar] [CrossRef]
  14. Perumal, S.; Kola Sujatha, P. Stacking Ensemble-based XSS Attack Detection Strategy Using Classification Algorithms. In Proceedings of the 2021 6th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 8–10 July 2021; pp. 897–901. [Google Scholar]
  15. Abdel-Basset, M.; Chang, V.; Mohamed, R. HSMA_WOA: A hybrid novel Slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Appl. Soft Comput. J. 2020, 95, 106642. [Google Scholar] [CrossRef]
  16. Kaspersky Security Bulletin. State-Sponsored Cyber-Espionage and APT Trends. 2023. Available online: https://academy.kaspersky.com/ (accessed on 1 January 2025).
  17. IBM X-Force Threat Intelligence Index. Tracking Advanced Cyber Threats and APT Evolution. 2023. Available online: https://secure-iss.com/wp-content/uploads/2023/02/IBM-Security-X-Force-Threat-Intelligence-Index-2023.pdf (accessed on 1 February 2025).
  18. CrowdStrike Global Threat Report. APT Operations and Cyber Threat Landscape. 2023. Available online: https://www.crowdstrike.com (accessed on 1 December 2024).
  19. Abu Talib, M.; Nasir, Q.; Bou Nassif, A.; Mokhamed, T.; Ahmed, N.; Mahfood, B. APT beaconing detection: A systematic review. Comput. Secur. 2022, 122, 102875. [Google Scholar] [CrossRef]
  20. Sengupta, S.; Chowdhary, A.; Huang, D.; Kambhampati, S. General Sum Markov Games for Strategic Detection of Advanced Persistent Threats Using Moving Target Defense in Cloud Networks. In Decision and Game Theory for Security; Springer: Cham, Switzerland, 2019; pp. 492–512. [Google Scholar]
  21. Google Threat Analysis Group (TAG). Threat Intelligence Insights on APT Groups. 2023. Available online: https://blog.google/threat-analysis-group (accessed on 1 January 2025).
  22. SolarWinds Incident Analysis. Technical Breakdown of the SolarWinds Supply Chain Attack. 2021. Available online: https://www.fortinet.com/resources/cyberglossary/solarwinds-cyber-attack (accessed on 1 January 2025).
  23. Alshamrani, A.; Myneni, S.; Chowdhary, A.; Huang, D. A Survey on Advanced Persistent Threats: Techniques, Solutions, Challenges, and Research Opportunities. IEEE Commun. Surv. Tutor. 2019, 21, 1851–1877. [Google Scholar] [CrossRef]
  24. Andersen, J. Understanding and interpreting algorithms: Toward a hermeneutics of algorithms. Media Cult. Soc. 2020, 42, 1479–1494. [Google Scholar] [CrossRef]
  25. Cisco Talos Intelligence. Understanding Advanced Persistent Threats: Case Studies and Defenses. 2023. Available online: https://blog.talosintelligence.com (accessed on 1 November 2024).
  26. Al-Kadhimi, A.A.; Singh, M.M.; Khalid, M.N.A. A systematic literature review and a conceptual framework proposition for advanced persistent threats detection for mobile devices using artificial intelligence techniques. Appl. Sci. 2023, 13, 8056. [Google Scholar] [CrossRef]
  27. Daimi, K. Computer and Network Security Essentials; Springer International Publishing AG: London, UK, 2018; pp. 1–618. [Google Scholar]
  28. Alsahli, M.S.; Almasri, M.M.; Al-Akhras, M.; Al-Issa, A.I.; Alawairdhi, M. Evaluation of Machine Learning Algorithms for Intrusion Detection System in WSN. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 617–626. [Google Scholar] [CrossRef]
  29. Mandiant Threat Intelligence. Advanced Persistent Threats: Trends and Case Studies, Mandiant Reports. 2023. Available online: https://www.mandiant.com/resources/reports (accessed on 1 January 2025).
  30. Xiong, C.; Zhu, T.; Dong, W.; Ruan, L.; Yang, R.; Cheng, Y.; Chen, Y.; Cheng, S.; Chen, X. Conan: A Practical Real-Time APT Detection System with High Accuracy and Efficiency. IEEE Trans. Dependable Secur. Comput. 2020, 19, 551–565. [Google Scholar] [CrossRef]
  31. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  32. Alam, K.M.R.; Siddique, N.; Adeli, H. A dynamic ensemble learning algorithm for neural networks. Neural Comput. Appl. 2020, 32, 8675–8690. [Google Scholar] [CrossRef]
  33. Alsanad, A.; Altuwaijri, S. Advanced Persistent Threat Attack Detection using Clustering Algorithms. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 640–649. [Google Scholar] [CrossRef]
  34. Do, X.C.; Huong, D.T.; Nguyen, T. A novel intelligent cognitive computing-based APT malware detection for Endpoint systems. J. Intell. Fuzzy Syst. 2022, 43, 3527–3547. [Google Scholar]
  35. Hadlington, L. Human factors in cybersecurity; examining the link between Internet addiction, impulsivity, attitudes towards cybersecurity, and risky cybersecurity behaviours. Heliyon 2017, 3, e00346. [Google Scholar] [CrossRef] [PubMed]
  36. Zimba, A.; Chen, H.; Wang, Z.; Chishimba, M. Modeling and detection of the multi-stages of Advanced Persistent Threats attacks based on semi-supervised learning and complex networks characteristics. Future Gener. Comput. Syst. 2020, 106, 501–517. [Google Scholar] [CrossRef]
Figure 2. Timeline of major APT incidents and campaigns (1998–2023).
Figure 3. Development of APT incidents.
Figure 4. Workflow of the proposed hybrid APT defense framework.
Figure 5. Distribution of detection results (normal vs. APT; number of samples).
Figure 6. Variability of defense impact over multiple runs (defense system impact).
Figure 7. Variability of defense impact over multiple runs (defense policies impact).
Figure 8. Anomaly score distribution.
Table 1. Characteristics of APTs and traditional cyber threats.
Characteristic | Advanced Persistent Threats (APTs) | Traditional Cyber Threats
Motivation | Espionage, strategic sabotage, and geopolitical influence | Financial gain; opportunistic attacks
Tactics | Multi-stage infiltration, stealth, and persistence | Quick exploitation; immediate impact
Duration | Long-term; often months to years | Short-term; typically minutes to days
Attack Vectors | Phishing, zero-days, and supply chain attacks | Ransomware, brute force, and malware
Defensive Evasion | Adaptive, AI-powered evasion techniques | Basic obfuscation and encryption
Table 2. Key APT campaigns (1998–2023).
Year | APT Name | Attribution | Target Sector | Tactic
1998 | Moonlight Maze | Unknown (suspected Russia) | Government and research | Espionage, data exfiltration
2010 | Stuxnet | USA/Israel (alleged) | Nuclear (Iran) | ICS sabotage
2016 | APT28/29 | Russia | Political (USA) | Phishing, information warfare
2020 | SolarWinds | Russia | Supply chain (global) | Backdoor in software update
2023 | APT41 | China | Health, telecom, gaming | Credential theft, living off the land (LotL)
Table 3. Isolation forest algorithm for anomaly detection (anomaly score computation and Stackelberg-based optimization setup).
Isolation forest algorithm for anomaly detection
# Applying Isolation Forest for anomaly detection
library(isotree)
iso_forest <- isolation.forest(data[, 1:3], ntrees = 100, sample_size = 256)

# Predict anomaly scores (higher scores indicate more easily isolated, i.e., anomalous, points)
data$score <- predict(iso_forest, newdata = data[, 1:3], type = "score")

# Flag the top 5% of scores as anomalies
threshold <- quantile(data$score, 0.95)
data$anomaly <- ifelse(data$score >= threshold, 1, 0)

# Game theory: Stackelberg optimization setup
# Rows: defender actions; columns: attacker responses (illustrative payoffs)
payoff_matrix <- matrix(c(0, -10, 5, -5), nrow = 2, byrow = TRUE)
costs <- c(-1, -1)                          # objective function coefficients (minimization)
constraints <- matrix(c(1, 1), nrow = 1)    # defender strategy weights sum to 1
bounds <- c(1)
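The listing in Table 3 operates on a data frame named data with three numeric feature columns, which is not constructed in the listing itself. The sketch below shows one way such input could be prepared before running the Table 3 code; the feature names, distributions, and injected anomaly fraction are illustrative assumptions and not the dataset used in the reported experiments.

# Hypothetical synthetic telemetry (assumed features; not the original dataset)
library(isotree)
set.seed(42)
n <- 1000
data <- data.frame(
  packet_rate   = rnorm(n, mean = 100, sd = 15),   # network packets per second
  failed_logins = rpois(n, lambda = 2),            # failed authentication attempts
  exfil_volume  = rexp(n, rate = 0.1)              # outbound data volume (MB)
)
# Inject a small fraction of exaggerated rows to mimic APT-like behaviour
anomalous_rows <- sample(n, 50)
data[anomalous_rows, ] <- data[anomalous_rows, ] * 5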
Table 4. Algorithm for cyber defense in the Stackelberg model; (a) designed defense strategy and expected impact; (b) designed APT_Defense algorithm in R.
Algorithm for cyber defense in the Stackelberg model
# Solve the Stackelberg game (defender as leader) using linear programming
library(lpSolve)
optimal_defense <- lp(direction = "min",
                      objective.in = costs,
                      const.mat = constraints,
                      const.dir = "=",
                      const.rhs = bounds,
                      all.int = FALSE)

# Extract the defender's mixed strategy
defense_strategy <- optimal_defense$solution

# Compute the expected impact of the strategy against the attacker's payoffs
impact <- sum(defense_strategy * c(-10, -5))
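For completeness, the fragment below illustrates one possible way to couple the solved defender strategy with the anomaly flags produced in Table 3; the action names and the response rule are hypothetical, intended only to show how the two components of the framework could interact, and assume the objects defense_strategy and data$anomaly from the previous listings.

# Hypothetical response rule linking the Stackelberg solution to flagged samples
dominant <- which.max(defense_strategy)
action <- if (dominant == 1) "isolate_host" else "increase_monitoring"
data$response <- ifelse(data$anomaly == 1, action, "none")
table(data$response)                                   # how many samples receive a response
cat("Expected impact of the chosen mix:", sum(defense_strategy * c(-10, -5)), "\n")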
Table 5. Comparative overview of key studies and frameworks addressing APT detection and response.
Study | Detection Method | Mitigation Strategy | Strategic Modeling | Evaluation Dataset | Key Limitation
Khalid [11] | SVM + ensemble classification | Rule-based isolation | Not included | Public dataset (CICIDS 2017) | No response modeling
Al-Aamri [13] | Random forest, AI classifier | Partial; manual | Not included | Synthetic + enterprise log data | Detection-focused only
Challa [12] | Deep learning (LSTM) | Logging and alerting only | Not included | Simulated APT events | No real-time mitigation
Xiong [30] | AI-based + anomaly detection | Semi-automated with feedback loop | Implicit only | Large-scale lab tests | No formalized defense strategy
Our approach | Isolation forest (unsupervised) | Automated real-time mitigation | Stackelberg game model | 50 simulations (synthetic dataset) | Requires real-world validation