Evaluating Synthetic Cyber Deception Strategies Under Uncertainty via a Game-Theoretic Approach: Linking Information Leakage and Game Outcomes in Cyber Deception
Abstract
1. Introduction
- * Standardized, reusable baseline for evaluating cyber deception: The study frames cyber deception evaluation around a fixed, reproducible comparison between an otherwise matched no-deception baseline and a deception-enabled setting. The contribution is not the general idea of using a baseline, but the formalization of this comparison as a repeatable evaluation protocol that makes results comparable across heterogeneous deception mechanisms, attacker models, and cost regimes; the protocol is an explicit response to the well-documented fragmentation of deception evaluation in the literature [1].
- * Baseline-referenced reporting metrics for comparable deception claims (VoD and PoT): The study contributes two explicitly defined, equilibrium-grounded reporting measures, value of deception (VoD) and price of transparency (PoT), that are constructed to be interpreted relative to a matched no-deception baseline, rather than as standalone payoff numbers. This enables cross-setting comparison of deception benefit and transparency cost under differing attacker mixtures, decoy costs, and observability conditions, thereby moving beyond the common practice of reporting isolated “defender utility improved” results that are difficult to compare across models and scenarios [5,6,7,8].
- * Formal bounds and break-even conditions that delimit when deception cannot pay off: The study derives explicit, assumption-scoped theoretical results stated as theorems and corollaries that bound the achievable benefit of deception and identify “break-even” regimes in which deception becomes ineffective. These results provide checkable analytical statements showing how deception value must diminish or vanish as key factors worsen (for example, rising decoy costs, increasing attacker discernment, or increasing transparency), thereby clarifying where deception is defensible as a strategy and where it is not within the model class [9].
- * Algorithmic structure for heterogeneous decoy allocation with defensible performance claims: The study formulates a heterogeneous decoy-allocation design problem that goes beyond uniform “place decoys everywhere” settings by allowing decoys to differ in cost and effectiveness. Within this formulation, the study identifies structural properties that can be exploited algorithmically, develops scalable allocation rules (including greedy-style selection), and supports them with analytically stated performance properties (and benchmarking against optimal solutions on tractable instances) [10,11,12].
- * An uncertainty leakage interpretation layer that explains why deception value changes: The study adds an information-theoretic lens based on attacker uncertainty (conditional entropy) and information leakage to interpret the game-theoretic results, so that shifts in equilibrium value under different transparency and detectability regimes are explained mechanistically through “how much the attacker can infer” rather than reported only as changes in utility [13].
- * Robustness analysis under bounded rationality, tied directly to the evaluation metrics: The study strengthens the credibility of its conclusions by moving beyond the assumption of perfectly optimizing attackers and incorporating a bounded-rationality response model (e.g., a quantal-response formulation). The analysis treats attacker rationality as a sensitivity parameter and shows how VoD/PoT and the recommended decoy level change systematically as rationality and attacker-type composition vary. This positions bounded rationality as a structured robustness test of the proposed evaluation framework, rather than as an extension [14,15].
- * Reproducible, data-independent benchmarking protocol as a supporting contribution: The study provides a controlled simulation and benchmarking workflow that enables systematic sensitivity analysis across attacker mixtures, decoy costs, and observability regimes when real-world deception datasets are unavailable, incomplete, or difficult to share. The reproducible evaluation artifact is aligned with the paper's theoretical quantities (equilibrium utilities and the proposed metrics) and is positioned as a complement to empirical validation rather than a substitute for it, thereby addressing a widely recognized obstacle in cyber-deception research: the difficulty of obtaining standardized datasets and comparable evaluation evidence [16].
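The two reporting metrics above can be read as simple baseline-referenced differences of equilibrium utilities. A minimal sketch follows; the function names, and the specific reading of PoT as covert-minus-transparent defender utility, are illustrative assumptions rather than the paper's notation:

```python
def value_of_deception(u_def_deception: float, u_def_baseline: float) -> float:
    """VoD: defender equilibrium utility gain over the matched no-deception baseline."""
    return u_def_deception - u_def_baseline

def price_of_transparency(u_def_covert: float, u_def_transparent: float) -> float:
    """PoT (one plausible reading): defender utility forgone when the deception
    becomes observable to the attacker, relative to fully covert deception."""
    return u_def_covert - u_def_transparent
```

Because both quantities are differences against a matched reference point, they remain comparable across attacker mixtures and cost regimes in a way that raw payoff numbers are not.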
Synopsis
2. Background
2.1. Cyber Deception Techniques and Taxonomies
- Honeypots: These are decoy systems intended to be probed and attacked. They vary from low-interaction honeypots that simulate basic services to high-interaction honeypots that offer a comprehensive, monitored environment for attackers. Game-theoretic models have been devised to enhance honeypot deployment, taking into account variables such as attacker probing and the utilization of attack graphs [49,50,51,52].
- Moving target defense (MTD): MTD is a proactive defense strategy that dynamically shifts the attack surface (e.g., by changing IP addresses or randomizing memory layouts) to increase uncertainty for attackers. Game theory has been instrumental in analyzing MTD, with models exploring the trade-offs between the security benefits and the operational costs of reconfiguration [55].
2.2. Game-Theoretic Models for Cybersecurity
- Stackelberg security games: These leader–follower models, where the defender commits to a defensive strategy first, are highly applicable to security domains where defensive postures are observable. They have been effectively implemented in practical solutions for infrastructure security. Nonetheless, determining the optimal strategy in these games is frequently NP-hard [57], which has spurred the development of efficient algorithms like the decomposed optimal Bayesian Stackelberg solver (DOBSS) [58].
- Signaling games: These games are well suited to modeling deception under information asymmetry. The defender (sender) can transmit a signal to the attacker (receiver) to influence the attacker's beliefs and actions. Pawlick and Zhu's key work on signaling games with evidence establishes a formal framework for analyzing leaky deception, in which the attacker may detect the deception with some probability [59].
- Dynamic and repeated games: Cyber conflicts are seldom isolated incidents. Dynamic and repeated games capture the long-term, evolving interactions between attackers and defenders [60,61,62]. These models incorporate learning and adaptation, with players revising their strategies based on past play [63]. The FlipIt game exemplifies the continuous contest for control of a resource and has been used to study defenses against advanced persistent threats (APTs) [64].
- Information design and Bayesian persuasion: This line of research investigates how a defender can strategically design an information-disclosure mechanism to induce an attacker to take actions that benefit the defender [65]. The approach offers a powerful tool for analyzing deception as strategic information disclosure.
2.3. Prior Work on Value of Deception
2.4. Attacker Modeling and Bounded Rationality
- Quantal response equilibrium (QRE): QRE is a solution concept that relaxes the assumption of perfect rationality by allowing players to make mistakes with a certain probability [14]. The probability of choosing a suboptimal action is inversely related to the expected utility loss. QRE has been shown to provide a better fit for real-world security data than traditional equilibrium concepts [67].
- Nested quantal response (NQR): This is an extension of the QRE model that captures correlations in attacker choices, providing a more scalable and accurate model of adversary behavior [68].
- Learning-based models: Researchers are increasingly using machine learning techniques, such as reinforcement learning, to model adaptive adversaries who learn their strategies over time [69,70]. These models can be trained on data from real or simulated interactions to capture the complex decision-making processes of human attackers.
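The logit quantal response underlying QRE can be sketched in a few lines: each action is chosen with probability proportional to exp(lam * utility), so the rationality parameter `lam` interpolates between uniform random play (lam = 0) and exact best response (lam large). This is a generic illustration of the logit rule, not the paper's calibration:

```python
import math

def logit_quantal_response(utilities, lam):
    """Logit quantal response: P(a) is proportional to exp(lam * u(a)).

    lam = 0 gives uniform play; large lam concentrates on the best action.
    """
    m = max(utilities)  # subtract the max before exponentiating for stability
    weights = [math.exp(lam * (u - m)) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]
```

Treating `lam` as a sensitivity parameter, as Section 4.3 does, amounts to sweeping this value and re-solving the game at each setting.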
3. Formal Game-Theoretic Framework
3.1. Formal Problem Statement
3.2. The Bayesian Stackelberg Game Model
- Players (P): The game consists of two players, a defender (D) and an attacker (A).
- Targets (T): There is a set of targets T, partitioned into a set of real assets and a set of synthetic decoys, so that every target is either a real asset or a decoy, never both.
- Attacker types (Θ): The attacker has a private type θ ∈ Θ, where Θ is a finite set of possible attacker types. An attacker's type encapsulates private information, such as skill, resources, and motivations.
- Prior beliefs (p): The defender holds a prior belief p over the attacker's type, assigning a probability p(θ) to each type θ ∈ Θ, with the probabilities summing to one.
- Action spaces (A, S): The defender's strategy space S is the set of all possible decoy deployment strategies; a pure strategy is a choice of how many decoys to deploy, up to the maximum number of possible targets. The defender commits to a strategy s ∈ S. The attacker's action space A is the set of all possible targets to attack; the attacker chooses an action a ∈ A after observing the defender's strategy.
- Utility functions (U): The utility functions U_D and U_A define the payoffs for the defender and the attacker, respectively.
3.2.1. Defender’s Utility
- The cost of deploying a strategy is assumed to be linear in the number of decoys, with a fixed cost per decoy.
- The defender's reward (or loss) depends on the attacker's action: if the attacker attacks a decoy, the defender gains the benefit of detecting an attack (e.g., intelligence gain); if the attacker attacks a real asset, the defender incurs the loss from a compromised real asset.
3.2.2. Attacker’s Utility
- If the attacker attacks a decoy, an attacker of type θ incurs the cost of being deceived (e.g., wasted resources, exposure).
- If the attacker attacks a real asset, an attacker of type θ receives the reward for a successful attack.
- The study adopts the following cost-accounting convention. The attacker's success reward is treated as the net payoff from compromising a real target, with any target-independent execution cost already absorbed into that term. The deception-cost parameter is reserved exclusively for the incremental loss attributable to deception outcomes, namely the additional operational penalty incurred when a decoy is engaged (e.g., wasted effort, increased exposure, tool attrition, or mission setback). Under this convention, the attacker's utility subtracts the deception cost only in the decoy outcome, thereby preventing double counting of a universal per-attack cost. If an alternative convention is preferred, in which the success reward is gross and a universal execution cost is modeled explicitly, the equilibrium statements and proofs remain unchanged after a notational reparameterization that introduces a per-attack cost term and redefines the reward so as to preserve identical net payoffs.
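A minimal sketch of the payoff convention in Sections 3.2.1 and 3.2.2; all numeric parameter values (cost per decoy, detection benefit, real-asset loss, and the type-dependent reward and deception cost) are illustrative placeholders, not the paper's calibration:

```python
def defender_utility(n_decoys, target_is_decoy,
                     cost_per_decoy=1.0, benefit_detect=5.0, loss_real=10.0):
    """Defender payoff: outcome term minus a linear decoy deployment cost."""
    deploy_cost = cost_per_decoy * n_decoys
    outcome = benefit_detect if target_is_decoy else -loss_real
    return outcome - deploy_cost

def attacker_utility(target_is_decoy, reward_theta=8.0, deception_cost_theta=4.0):
    """Attacker of a given type: net success reward on a real asset; the
    deception cost is charged only in the decoy outcome (the convention above)."""
    return -deception_cost_theta if target_is_decoy else reward_theta
```

Note that the deception cost appears in exactly one branch, which is what prevents double counting of a universal per-attack cost.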
3.3. Equilibrium Analysis
3.3.1. Attacker’s Best Response
3.3.2. Defender’s Optimal Strategy
3.4. Theorem 1: Existence of Optimal Strategy
Validation of Theorem 1
4. Extensions to Sophisticated Game Models
4.1. Signaling Games for Leaky Deception
4.1.1. Theorem 2: Budgeted Quality–Quantity Tradeoff Under Leaky Deception
Validation of Theorem 2
4.2. Repeated and Dynamic Games for Advanced Persistent Threats
4.2.1. Theorem 3: Closed-Form Optimal Rotation Period Under APT Learning
Validation of Theorem 3
4.3. Bounded Rationality and Quantal Response
4.3.1. Theorem 4: Finite-λ Rationality Bound for Logit QRE in SDG
Validation of Theorem 4
5. Optimal Allocation in Heterogeneous Deception Games (HDGs)
- A deployment cost.
- An effectiveness, which represents the probability that an attacker who interacts with a decoy of that type is detected.
- A quality, which affects how easily the decoy can be distinguished from a real asset.
5.1. The Defender’s Optimization Problem
5.2. Theorem 5: Greedy Allocation Property
5.2.1. Formal Proof
- By Lemma 1, the defender's optimization problem is a linear program (LP) over the per-type allocation amounts.
- Define the "bang-for-the-buck" of each decoy type as its expected benefit per unit of deployment cost.
- For any feasible allocation, the objective value is bounded above by the budget multiplied by the highest bang-for-the-buck ratio.
- Consider the allocation that spends the entire budget on a type attaining the highest ratio and nothing on every other type. This allocation is feasible and achieves exactly that upper bound.
- As this allocation achieves the upper bound in Step 3, it is optimal. Therefore, there exists an optimal allocation that invests only in a decoy type with the highest bang-for-the-buck ratio. □
- Simplified decision-making: Defenders do not need to solve a complex multi-dimensional optimization problem. They can rank decoy types by their bang-for-the-buck ratio and invest in a highest-ranked type.
- Focus on quality: The theorem suggests that it is often better to invest in a smaller number of high-quality, highly effective decoys than to spread resources across many low-quality ones.
- Sensitivity to attacker behavior: The optimal allocation depends on the attacker's interaction probabilities. If these probabilities are not fixed but depend on the defender's allocation, the problem becomes more complex.
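The single-type property above reduces the allocation to a ranking rule. A minimal sketch, where the per-type costs, per-unit expected benefits, and the budget are illustrative inputs, and the fractional unit count reflects the LP relaxation rather than an integer deployment:

```python
def greedy_single_type_allocation(costs, benefits, budget):
    """Theorem 5 style rule: rank decoy types by benefit/cost ("bang-for-the-buck")
    and put the entire budget on a highest-ratio type."""
    ratios = [b / c for b, c in zip(benefits, costs)]
    best = max(range(len(costs)), key=lambda i: ratios[i])
    n_units = budget / costs[best]          # fractional units under the LP relaxation
    return best, n_units, n_units * benefits[best]
```

As the implications note, this rule is only valid while the attacker's interaction probabilities (folded into the benefit terms here) are fixed; allocation-dependent probabilities break the linear structure.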
Validation of Theorem 5
6. Computational Complexity Analysis
6.1. Complexity of the Basic SDG
- Enumerate defender strategies: iterate over each possible decoy count.
- Compute best responses: for each attacker type, compute that type's best response to the enumerated strategy.
- Aggregate expected utility: weight each type's best-response utility by the prior and sum.
- The number of defender strategies enumerated (each possible decoy count).
- The number of attacker types evaluated per defender strategy.
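The enumeration above can be sketched directly; its running time is the product of the two quantities just listed. Here `best_response_utility(n, theta)` stands in, as an assumption, for an oracle returning the defender's utility when a type-theta attacker best-responds to n deployed decoys:

```python
def solve_basic_sdg(max_decoys, prior, best_response_utility):
    """Enumerate each decoy count, average the type-wise best-response utilities
    under the prior, and keep the best count: O(max_decoys * |types|) oracle calls."""
    best_n, best_u = 0, float("-inf")
    for n in range(max_decoys + 1):
        expected_u = sum(p * best_response_utility(n, theta)
                         for theta, p in prior.items())
        if expected_u > best_u:
            best_n, best_u = n, expected_u
    return best_n, best_u
```

A toy run with a concave utility (gains saturate at two decoys while costs keep accruing) recovers an interior optimum, matching the intuition that decoys pay off only up to a point.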
6.1.1. Theorem 6: Parameterized Polynomial-Time Solvability of the Basic SDG
Validation of Theorem 6
6.2. Complexity of Extended Models
6.3. Scalable Solution Approaches
6.4. NP-Hardness of Heterogeneous Decoy Allocation Problem (HDAP)
Validation of Theorem 7
7. Information-Theoretic Analysis of Deception
7.1. Information-Theoretic Foundation
7.2. Defining Deception Capacity
7.3. Implications for Deception Design
- Benchmarking: Deception capacity provides a theoretical upper bound on the effectiveness of any deception strategy. It can be used to benchmark the performance of practical deception systems.
- Resource allocation: By understanding the factors that influence deception capacity (e.g., the number of decoys, their quality, the attacker’s observational capabilities), more informed decisions about resource allocation can be made.
7.4. Future Research Directions
- Calculating deception capacity: Developing algorithms to calculate or approximate the deception capacity for different types of deception systems.
- Achieving deception capacity: Designing practical deception strategies that can achieve the theoretical deception capacity.
- Dynamic deception capacity: Extending the concept to dynamic and adaptive deception scenarios.
7.4.1. Calculating Deception Capacity
7.4.2. Achieving Deception Capacity
7.4.3. Dynamic Deception Capacity
- At each time step, compute the defender's belief state over the relevant hidden variables.
- Choose an action that minimizes predicted information leakage over a finite planning horizon.
- Execute the chosen action, observe feedback, update the belief, and repeat.
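The loop above can be sketched with a one-step horizon, using the attacker's expected posterior entropy as the proxy: minimizing leakage about the hidden state is equivalent to maximizing the uncertainty the attacker retains after observing. The observation channels below are illustrative assumptions:

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def expected_posterior_entropy(belief, likelihoods):
    """H(state | observation): for each observation y, weight the entropy of the
    Bayesian posterior by the marginal probability of y."""
    n_obs = len(next(iter(likelihoods.values())))
    h = 0.0
    for y in range(n_obs):
        p_y = sum(belief[s] * likelihoods[s][y] for s in belief)
        if p_y == 0:
            continue
        posterior = [belief[s] * likelihoods[s][y] / p_y for s in belief]
        h += p_y * entropy(posterior)
    return h

def choose_action(belief, action_likelihoods):
    """One-step leakage-minimizing choice: pick the action whose induced
    observation channel leaves the attacker maximally uncertain."""
    return max(action_likelihoods,
               key=lambda a: expected_posterior_entropy(belief, action_likelihoods[a]))
```

With a uniform belief over {real, decoy}, a non-informative channel preserves one full bit of attacker uncertainty, while a perfectly revealing channel drives it to zero; the rule accordingly prefers the former.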
8. The VoD Framework
8.1. The Baseline: TSG
- Players: A defender (leader) and an attacker (follower).
- Targets: A set of real targets.
- Defender’s strategy: The defender has security resources to allocate. A pure strategy is an allocation of these resources to a subset of targets. The defender commits to a mixed strategy, a probability distribution over all possible pure strategies.
- Attacker’s strategy: The attacker observes the defender’s mixed strategy and chooses a single target to attack.
- Payoffs:
- If the attacker attacks a target that is covered by a resource, the defender receives a reward and the attacker incurs a penalty.
- If the attacker attacks a target that is not covered, the defender incurs a penalty and the attacker receives a reward.
- Equilibrium: The solution concept is the strong Stackelberg equilibrium (SSE), in which the defender chooses the mixed strategy that maximizes the defender’s expected utility, assuming the attacker breaks ties in the defender’s favor.
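The SSE tie-breaking rule above can be sketched directly: among the attacker's utility-maximizing targets, the one most favorable to the defender is selected. The utility vectors are illustrative:

```python
def sse_attacker_response(att_utils, def_utils):
    """Attacker best response under SSE: maximize attacker utility over targets,
    breaking ties in the defender's favor (the strong Stackelberg tie rule)."""
    best = max(att_utils)
    tied = [t for t, u in enumerate(att_utils) if u == best]
    return max(tied, key=lambda t: def_utils[t])
```

This tie rule is what makes the defender's commitment problem well defined: the induced attacker response is a single target even when several are payoff-equivalent for the attacker.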
8.2. DSG: TSG Extension
- Players and targets: Same as the TSG, but the defender can also deploy decoys. The attacker sees a combined set of potential targets consisting of the real targets and the decoys.
- Defender’s strategy: The defender’s strategy involves both allocating resources to the real targets and deploying decoys. The decoys have a deployment cost each.
- Attacker’s strategy: The attacker observes the mixed strategy over the real targets and the presence of the decoys, but cannot distinguish real targets from decoys with certainty. The attacker chooses one of the potential targets to attack.
- Payoffs:
- Payoffs for attacking real targets are the same as in the TSG.
- If the attacker attacks a decoy, the defender receives a high reward (for detecting the attacker) and the attacker incurs a high penalty.
- Equilibrium: The solution concept is again the SSE.
8.3. Formulating the VoD
8.3.1. VoC Curve and Marginal VoD
8.3.2. Budgeted Deception and ROI-Comparable Deployment Interface
8.4. Positioning of This Framework
- Quantify the benefit of deception across different game settings using a standardized metric.
- Identify the conditions under which deception is most and least effective.
- Derive tight theoretical bounds on the maximum possible value of deception.
8.5. Theorems on VoD
8.5.1. Theorem 8: The High-Cost-of-Deception Theorem
- Let V(n) denote the defender’s optimal expected utility in a DSG with n decoys, not including the cost of the decoys.
- The total utility for the defender with n decoys is U(n) = V(n) − n·c, where c is the cost per decoy.
- The defender will only choose to deploy the first decoy if the utility of doing so exceeds the utility of deploying zero decoys, that is, U(1) > U(0).
- Substituting the definitions, V(1) − c > V(0).
- This simplifies to c < V(1) − V(0). Let Δ = V(1) − V(0) be the marginal utility gain from the first decoy.
- If c > Δ, then, by non-increasing marginal gains, U(n) < U(0) for all n ≥ 1, so the optimal number of decoys is zero.
- With zero decoys, the DSG is equivalent to the TSG, so V(0) equals the baseline TSG utility, and the VoD is zero. □
Validation of Theorem 8
8.5.2. Theorem 9: Budgeted Optimality and Diminishing Returns Condition
- The defender’s total utility with n decoys is U(n) = V(n) − n·c, subject to the feasibility constraint n·c ≤ B, equivalently n ≤ B/c, where B is the deception budget.
- The increment from n − 1 to n decoys is U(n) − U(n − 1) = Δ(n) − c, where Δ(n) = V(n) − V(n − 1) is the marginal utility gain of the n-th decoy.
- If Δ(n) < c, then U(n) < U(n − 1), so deploying the n-th decoy decreases total utility.
- If Δ(n) is non-increasing in n, then for any m > n, Δ(m) ≤ Δ(n) < c, so all subsequent increments also decrease total utility.
- Therefore, among feasible decoy counts, the optimum is attained at the largest feasible n satisfying Δ(n) ≥ c. □
Validation of Theorem 9
8.6. Tight Bounds and Characterization Results
8.6.1. Theorem 10: Upper Bound on the Value of Deception (VoD) Curve
- In the DSG with decoys deployed, the attacker chooses one of the potential targets to attack, so exactly one outcome is realized: either a real target is attacked or a decoy is attacked.
- If a real target is attacked, the defender’s payoff is governed by the same real-target payoffs as in the TSG. Under the SSE, the defender’s expected utility from this outcome is bounded above by the best achievable real-target equilibrium utility.
- If a decoy is attacked, the defender receives the decoy detection reward. Under linear decoy deployment cost, at least one decoy cost is incurred whenever a decoy is deployed, so the defender’s payoff from a decoy-attack outcome is at most the detection reward minus one decoy cost. If that quantity yields no positive contribution relative to the baseline, the baseline value is used instead.
- Therefore, the defender’s optimal total utility in the DSG is bounded above by the maximum of the best achievable real-target equilibrium utility and the best achievable decoy-attack payoff.
- Dividing both sides by the number of deployed decoys yields the stated bound on the VoD curve. □
Validation of Theorem 10
8.6.2. Theorem 11: Characterization of When Deception Is Ineffective
- Case (a): Decoy-immune attacker. If the attacker is decoy-immune, the attacker’s strategy set is restricted to the set of real targets. The presence of decoys has no effect on the attacker’s decision-making. As the decoys provide no benefit, the optimal strategy for the defender is to deploy zero decoys. Thus, the DSG equilibrium coincides with the TSG equilibrium, and the VoD is zero.
- Case (b): Prohibitively high cost. This follows directly from Theorem 8. If the cost of a decoy exceeds the marginal utility gain from deploying it, the optimal number of decoys is zero. The DSG reduces to the TSG, so the equilibrium utilities coincide, and therefore the VoD is zero under the optimal strategy.
Validation of Theorem 11
9. Discussion, Future Directions, and Final Thoughts
9.1. Discussion
9.1.1. Sensitivity and Scenario Diversity Protocol
9.1.2. Operational Interpretation
9.2. Future Directions
9.3. Final Thoughts and Conclusion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Pawlick, J.; Colbert, E.; Zhu, Q. A Game-theoretic Taxonomy and Survey of Defensive Deception for Cybersecurity and Privacy. ACM Comput. Surv. 2019, 52, 82. [Google Scholar] [CrossRef]
- Zhang, L.; Thing, V.L.L. Three decades of deception techniques in active cyber defense—Retrospect and outlook. Comput. Secur. 2021, 106, 102288. [Google Scholar] [CrossRef]
- Prabhaker, N.; Bopche, G.S.; Arock, M. Generation and deployment of honeytokens in relational databases for cyber deception. Comput. Secur. 2024, 146, 104032. [Google Scholar] [CrossRef]
- Zarreh, A.; Lee, Y.; Janahi, R.A.; Wan, H.; Saygin, C. Cyber-Physical Security Evaluation in Manufacturing Systems with a Bayesian Game Model. Procedia Manuf. 2020, 51, 1158–1165. [Google Scholar] [CrossRef]
- Schlenker, A.; Thakoor, O.; Xu, H.; Fang, F.; Tambe, M.; Tran-Thanh, L.; Vayanos, P.; Vorobeychik, Y. Deceiving Cyber Adversaries: A Game Theoretic Approach. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden, 10–15 July 2018; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2018; pp. 892–900. [Google Scholar] [CrossRef]
- Zhang, Y.; Malacaria, P. Dealing with uncertainty in cybersecurity decision support. Comput. Secur. 2025, 148, 104153. [Google Scholar] [CrossRef]
- Horák, K.; Zhu, Q.; Bošanský, B. Manipulating Adversary’s Belief: A Dynamic Game Approach to Deception by Design for Proactive Network Security. In Proceedings of the Decision and Game Theory for Security, Vienna, Austria, 23–25 October 2017; Rass, S., An, B., Kiekintveld, C., Fang, F., Schauer, S., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 273–294. [Google Scholar]
- Wang, R.; Yang, C.; Deng, X.; Zhou, Y.; Liu, Y.; Tian, Z. Turn the tables: Proactive deception defense decision-making based on Bayesian attack graphs and Stackelberg games. Neurocomputing 2025, 638, 130139. [Google Scholar] [CrossRef]
- Guo, Q.; Gan, J.; Fang, F.; Tran-Thanh, L.; Tambe, M.; An, B. On the inducibility of stackelberg equilibrium for security games. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; AAAI Press: Palo Alto, CA, USA, 2019; Volume 33, pp. 2020–2028. [Google Scholar] [CrossRef]
- Jajodia, S.; Park, N.; Serra, E.; Subrahmanian, V.S. SHARE: A Stackelberg Honey-Based Adversarial Reasoning Engine. ACM Trans. Internet Technol. 2018, 18, 30. [Google Scholar] [CrossRef]
- Kiekintveld, C.; Jain, M.; Tsai, J.; Pita, J.; Ordóñez, F.; Tambe, M. Computing optimal randomized resource allocations for massive security games. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems—Volume 1, Budapest, Hungary, 10–15 May 2009; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2009; Volume 1, pp. 689–696. [Google Scholar]
- Bustamante-Faúndez, P.; Bucarey, L.V.; Labbé, M.; Marianov, V.; Ordoñez, F. Playing Stackelberg Security Games in perfect formulations. Omega 2024, 126, 103068. [Google Scholar] [CrossRef]
- Kopp, C.; Korb, K.B.; Mills, B.I. Information-theoretic models of deception: Modelling cooperation and diffusion in populations exposed to “fake news”. PLoS ONE 2018, 13, e0207383. [Google Scholar] [CrossRef]
- McKelvey, R.D.; Palfrey, T.R. Quantal Response Equilibria for Normal Form Games. Games Econ. Behav. 1995, 10, 6–38. [Google Scholar] [CrossRef]
- Zhu, Q. Game theory for cyber deception: A tutorial. In Proceedings of the 6th Annual Symposium on Hot Topics in the Science of Security, Nashville, TN, USA, 1–3 April 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–3. [Google Scholar]
- Javadpour, A.; Ja’fari, F.; Taleb, T.; Shojafar, M.; Benzaïd, C. A comprehensive survey on cyber deception techniques to improve honeypot performance. Comput. Secur. 2024, 140, 103792. [Google Scholar] [CrossRef]
- Korzhyk, D.; Yin, Z.; Kiekintveld, C.; Conitzer, V.; Tambe, M. Stackelberg vs. Nash in Security Games: An Extended Investigation of Interchangeability, Equivalence, and Uniqueness. J. Artif. Intell. Res. 2011, 41, 297–327. [Google Scholar] [CrossRef]
- Janssen, S.; Matias, D.; Sharpanskykh, A. An Agent-Based Empirical Game Theory Approach for Airport Security Patrols. Aerospace 2020, 7, 8. [Google Scholar] [CrossRef]
- Maghanaki, M.; Keramati, S.; Chen, F.F.; Shahin, M. Investigating Artificial Intelligence Approaches to Cybersecurity in Internet of Things Manufacturing Systems and a Deep Hybrid Learning Framework for Malware Detection. J. Manuf. Sci. Eng 2026, 1–32. [Google Scholar] [CrossRef]
- Zhu, M.; Anwar, A.H.; Wan, Z.; Cho, J.-H.; Kamhoua, C.A.; Singh, M.P. A Survey of Defensive Deception: Approaches Using Game Theory and Machine Learning. IEEE Commun. Surv. Tutor. 2021, 23, 2460–2493. [Google Scholar] [CrossRef]
- Lu, Z.; Wang, C.; Zhao, S. Cyber Deception for Computer and Network Security: Survey and Challenges. arXiv 2020, arXiv:2007.14497. [Google Scholar] [CrossRef]
- Kar, D.; Nguyen, T.; Fang, F.; Brown, M.; Sinha, A.; Tambe, M.; Jiang, A. Trends and Applications in Stackelberg Security Games. In Handbook Dynamic Game Theory; Springer: Cham, Switzerland, 2018; pp. 1223–1269. [Google Scholar] [CrossRef]
- Beltrán-López, P.; Gil Pérez, M.; Nespoli, P. Cyber Deception: Taxonomy, State of the Art, Frameworks, Trends, and Open Challenges. IEEE Commun. Surv. Tutor. 2026, 28, 1520–1556. [Google Scholar] [CrossRef]
- Sinha, A.; Nguyen, T.H.; Kar, D.; Brown, M.; Tambe, M.; Jiang, A.X. From physical security to cybersecurity. J. Cyber Secur. 2015, 1, 19–35. [Google Scholar] [CrossRef]
- Clots Figueras, I.; Hernán-González, R.; Kujal, P. Information asymmetry and deception. Front. Behav. Neurosci. 2015, 9, 109. [Google Scholar] [CrossRef]
- Gajarský, J.; Hliněný, P.; Obdržálek, J.; Ordyniak, S.; Reidl, F.; Rossmanith, P.; Sánchez Villaamil, F.; Sikdar, S. Kernelization using structural parameters on sparse graph classes. J. Comput. Syst. Sci. 2017, 84, 219–242. [Google Scholar] [CrossRef]
- Malacaria, P.; Heusser, J. Information Theory and Security: Quantitative Information Flow. In Formal Methods for Quantitative Aspects of Programming Languages, 10th International School on Formal Methods for the Design of Computer, Communication and Software Systems, SFM 2010, Bertinoro, Italy, 21–26 June 2010, Advanced Lectures; Aldini, A., Bernardo, M., Di Pierro, A., Wiklicky, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 87–134. ISBN 978-3-642-13678-8. [Google Scholar]
- Alcantara-Jiménez, G.; Clempner, J.B. Repeated Stackelberg security games: Learning with incomplete state information. Reliab. Eng. Syst. Saf. 2020, 195, 106695. [Google Scholar] [CrossRef]
- Shahin, M.; Maghanaki, M.; Chen, F.F. Integration of Lean Analytics and Industry 6.0: A Novel Meta-Theoretical Framework for Antifragile, Generative AI-Orchestrated, Circular–Regenerative, and Hyper-Connected Manufacturing Ecosystems. Big Data Cogn. Comput. 2026, 10, 65. [Google Scholar] [CrossRef]
- Chaudhuri, A.; Behera, R.K.; Bala, P.K. Factors impacting cybersecurity transformation: An Industry 5.0 perspective. Comput. Secur. 2025, 150, 104267. [Google Scholar] [CrossRef]
- Collins, B.; Xu, S.; Brown, P.N. Game-Theoretic Cybersecurity: The Good, the Bad and the Ugly. arXiv 2025, arXiv:2401.13815. [Google Scholar] [CrossRef]
- Hosseinzadeh, A.; Shahin, M.; Chen, F.F.; Maghanaki, M.; Tseng, T.-L.; Rashidifar, R. Using Applied Machine Learning to Detect Cyber-Security Threats in Industrial IoT Devices. In Flexible Automation and Intelligent Manufacturing: Manufacturing Innovation and Preparedness for the Changing World Order, Proceedings of FAIM 2024, Taichung, Taiwan, 23–26 June 2024; Wang, Y.-C., Chan, S.H., Wang, Z.-H., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 22–30. [Google Scholar]
- Admass, W.S.; Munaye, Y.Y.; Diro, A.A. Cyber security: State of the art, challenges and future directions. Cyber Secur. Appl. 2024, 2, 100031. [Google Scholar] [CrossRef]
- Shahin, M.; Maghanaki, M.; Hosseinzadeh, A.; Chen, F.F. Advancing Network Security in Industrial IoT: A Deep Dive into AI-Enabled Intrusion Detection Systems. Adv. Eng. Inform. 2024, 62, 102685. [Google Scholar] [CrossRef]
- Maghanaki, M.; Keramati, S.; Chen, F.F.; Shahin, M. Generation of a Multi-Class IoT Malware Dataset for Cybersecurity. Electronics 2025, 14, 4196. [Google Scholar] [CrossRef]
- Kour, R.; Karim, R.; Dersin, P.; Venkatesh, N. Cybersecurity for Industry 5.0: Trends and gaps. Front. Comput. Sci. 2024, 6, 1434436. [Google Scholar] [CrossRef]
- Abdullah, M.; Nawaz, M.M.; Saleem, B.; Zahra, M.; Ashfaq, E.b.; Muhammad, Z. Evolution Cybercrime—Key Trends, Cybersecurity Threats, and Mitigation Strategies from Historical Data. Analytics 2025, 4, 25. [Google Scholar] [CrossRef]
- Shahin, M.; Maghanaki, M.; Chen, F.F.; Hosseinzadeh, A. Enhancing Cybersecurity in Industrial IoT with Deep Hybrid Learning Models: A Comparative Study of Machine Learning and Deep Learning Approaches. In Flexible Automation and Intelligent Manufacturing: The Future of Automation and Manufacturing: Intelligence, Agility, and Sustainability, Proceedings of FAIM 2025, New York City, NY, USA, 21–24 June 2025; Srihari, K., Khasawneh, M.T., Yoon, S., Won, D., Eds.; Lecture Notes in Mechanical Engineering; Springer Nature: Cham, Switzerland, 2026; pp. 320–327. ISBN 978-3-032-07674-8. [Google Scholar]
- Santos, B.; Costa, R.L.C.; Santos, L. Cybersecurity in Industry 5.0: Open Challenges and Future Directions. In Proceedings of the 2024 21st Annual International Conference on Privacy, Security and Trust (PST), Sydney, Australia, 28–30 August 2024; IEEE: New York, NY, USA, 2024; pp. 1–6. [Google Scholar]
- Shahin, M.; Maghanaki, M.; Chen, F.F. The symbiotic factory: A comprehensive framework for extending lean manufacturing to human-AI collaboration. Expert Syst. Appl. 2026, 314, 131606. [Google Scholar] [CrossRef]
- Joshi, C.; Slapničar, S.; Yang, J.; Ko, R.K.L. Contrasting the optimal resource allocation to cybersecurity controls and cyber insurance using prospect theory versus expected utility theory. Comput. Secur. 2025, 154, 104450. [Google Scholar] [CrossRef]
- Shahin, M.; Hosseinzadeh, A.; Chen, F.F. A Two-Stage Hybrid Federated Learning Framework for Privacy-Preserving IoT Anomaly Detection and Classification. IoT 2025, 6, 48. [Google Scholar] [CrossRef]
- Chen, Y.-F.; Lin, F.Y.-S.; Tai, K.-Y.; Hsiao, C.-H.; Wang, W.-H.; Tsai, M.-C.; Sun, T.-L. A near-optimal resource allocation strategy for minimizing the worse-case impact of malicious attacks on cloud networks. J. Cloud Comp. 2025, 14, 41. [Google Scholar] [CrossRef]
- Njilla, L.L.; Kamhoua, C.A.; Kwiat, K.A.; Hurley, P.; Pissinou, N. Cyber Security Resource Allocation: A Markov Decision Process Approach. In Proceedings of the 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE), Singapore, 12–14 January 2017; IEEE: New York, NY, USA, 2017; pp. 49–52. [Google Scholar]
- Srinidhi, B.; Yan, J.; Tayi, G.K. Allocation of resources to cyber-security: The effect of misalignment of interest between managers and investors. Decis. Support Syst. 2015, 75, 49–62. [Google Scholar] [CrossRef]
- Dowell, J.A.; Wright, L.J.; Armstrong, E.A.; Denu, J.M. Benchmarking quantitative performance in label-free proteomics. ACS Omega 2021, 6, 2494–2504. [Google Scholar] [CrossRef]
- Gatto, L.; Aebersold, R.; Cox, J.; Demichev, V.; Derks, J.; Emmott, E.; Franks, A.M.; Ivanov, A.R.; Kelly, R.T.; Khoury, L.; et al. Initial recommendations for performing, benchmarking and reporting single-cell proteomics experiments. Nat. Methods 2023, 20, 375–386. [Google Scholar] [CrossRef] [PubMed]
- Almeshekah, M.H.; Spafford, E.H. Planning and Integrating Deception into Computer Security Defenses. In Proceedings of the 2014 New Security Paradigms Workshop, Victoria, BC, Canada, 15–18 September 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 127–138. [Google Scholar] [CrossRef]
- Mohan, P.V.; Dixit, S.; Gyaneshwar, A.; Chadha, U.; Srinivasan, K.; Seo, J.T. Leveraging Computational Intelligence Techniques for Defensive Deception: A Review, Recent Advances, Open Problems and Future Directions. Sensors 2022, 22, 2194. [Google Scholar] [CrossRef]
- Kiekintveld, C.; Lisý, V.; Píbil, R. Game-Theoretic Foundations for the Strategic Use of Honeypots in Network Security. In Cyber Warfare: Building the Scientific Foundation; Jajodia, S., Shakarian, P., Subrahmanian, V.S., Swarup, V., Wang, C., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 81–101. ISBN 978-3-319-14039-1. [Google Scholar]
- Sayed, M.A.; Anwar, A.H.; Kiekintveld, C.; Kamhoua, C. Honeypot Allocation for Cyber Deception in Dynamic Tactical Networks: A Game Theoretic Approach. In Decision and Game Theory for Security, Proceedings of the 14th International Conference, GameSec 2023, Avignon, France, 18–20 October 2023; Fu, J., Kroupa, T., Hayel, Y., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 195–214. [Google Scholar] [CrossRef]
- Kocaogullar, Y.; Cetin, O.; Arief, B.; Brierley, C.; Pont, J.; Hernandez-Castro, J. Hunting High or Low: Evaluating the Effectiveness of High-Interaction and Low-Interaction Honeypots. In Socio-Technical Aspects in Security, Proceedings of the 12th International Workshop, STAST 2022, Copenhagen, Denmark, 29 September 2022; Mehrnezhad, M., Parkin, S., Eds.; Springer Nature: Cham, Switzerland, 2025; pp. 14–30. [Google Scholar] [CrossRef]
- Bowen, B.M.; Hershkop, S.; Keromytis, A.D.; Stolfo, S.J. Baiting Inside Attackers Using Decoy Documents. In Security and Privacy in Communication Networks, Proceedings of the 5th International ICST Conference, SecureComm 2009, Athens, Greece, 14–18 September 2009; Chen, Y., Dimitriou, T.D., Zhou, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 51–70. [Google Scholar] [CrossRef]
- Papaspirou, V.; Papathanasaki, M.; Maglaras, L.; Kantzavelou, I.; Douligeris, C.; Ferrag, M.A.; Janicke, H. A Novel Authentication Method That Combines Honeytokens and Google Authenticator. Information 2023, 14, 386. [Google Scholar] [CrossRef]
- Clark, A.; Sun, K.; Bushnell, L.; Poovendran, R. A Game-Theoretic Approach to IP Address Randomization in Decoy-Based Cyber Defense. In Decision and Game Theory for Security, Proceedings of the 6th International Conference, GameSec 2015, London, UK, 4–5 November 2015; Khouzani, M., Panaousis, E., Theodorakopoulos, G., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 3–21. [Google Scholar]
- Shahin, M.; Chen, F.F.; Bouzary, H.; Zarreh, A. Frameworks Proposed to Address the Threat of Cyber-Physical Attacks to Lean 4.0 Systems. Procedia Manuf. 2020, 51, 1184–1191. [Google Scholar] [CrossRef]
- Conitzer, V.; Sandholm, T. Computing the optimal strategy to commit to. In Proceedings of the 7th ACM Conference on Electronic Commerce, Ann Arbor, MI, USA, 11–15 June 2006; Association for Computing Machinery: New York, NY, USA, 2006; pp. 82–90. [Google Scholar] [CrossRef]
- Zhang, Y.; Malacaria, P. Bayesian Stackelberg games for cyber-security decision support. Decis. Support Syst. 2021, 148, 113599. [Google Scholar] [CrossRef]
- Pawlick, J.; Colbert, E.; Zhu, Q. Modeling and Analysis of Leaky Deception Using Signaling Games with Evidence. IEEE Trans. Inf. Forensics Secur. 2019, 14, 1871–1886. [Google Scholar] [CrossRef]
- Aoyagi, M. Reputation and Dynamic Stackelberg Leadership in Infinitely Repeated Games. J. Econ. Theory 1996, 71, 378–393. [Google Scholar] [CrossRef]
- Bergin, J.; MacLeod, W.B. Continuous Time Repeated Games. Int. Econ. Rev. 1993, 34, 21. [Google Scholar] [CrossRef]
- Douglas Bernheim, B.; Ray, D. Collective dynamic consistency in repeated games. Games Econ. Behav. 1989, 1, 295–326. [Google Scholar] [CrossRef]
- Etesami, S.R.; Başar, T. Dynamic Games in Cyber-Physical Security: An Overview. Dyn. Games Appl. 2019, 9, 884–913. [Google Scholar] [CrossRef]
- van Dijk, M.; Juels, A.; Oprea, A.; Rivest, R.L. FlipIt: The Game of “Stealthy Takeover”. J. Cryptol. 2013, 26, 655–713. [Google Scholar] [CrossRef]
- Zhou, C.; Spivey, A.; Xu, H.; Nguyen, T.H. Information Design for Multiple Interdependent Defenders: Work Less, Pay Off More. Games 2023, 14, 12. [Google Scholar] [CrossRef]
- Zhu, Q.; Clark, A.; Poovendran, R.; Başar, T. Deceptive routing games. In Proceedings of the 2012 IEEE 51st Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; IEEE: New York, NY, USA, 2012; pp. 2704–2711. [Google Scholar]
- Yang, R.; Kiekintveld, C.; Ordóñez, F.; Tambe, M.; John, R. Improving resource allocation strategies against human adversaries in security games: An extended study. Artif. Intell. 2013, 195, 440–469. [Google Scholar] [CrossRef]
- Mai, T.; Sinha, A. Choices Are Not Independent: Stackelberg Security Games with Nested Quantal Response Models. Proc. AAAI Conf. Artif. Intell. 2022, 36, 5141–5149. [Google Scholar] [CrossRef]
- Trejo, K.K.; Clempner, J.B.; Poznyak, A.S. Adapting attackers and defenders patrolling strategies: A reinforcement learning approach for Stackelberg security games. J. Comput. Syst. Sci. 2018, 95, 35–54. [Google Scholar] [CrossRef]
- Perrault, A.; Wilder, B.; Ewing, E.; Mate, A.; Dilkina, B.; Tambe, M. End-to-End Game-Focused Learning of Adversary Behavior in Security Games. Proc. AAAI Conf. Artif. Intell. 2020, 34, 1378–1386. [Google Scholar] [CrossRef]
- Diamantoulakis, P.; Dalamagkas, C.; Radoglou-Grammatikis, P.; Sarigiannidis, P.; Karagiannidis, G. Game Theoretic Honeypot Deployment in Smart Grid. Sensors 2020, 20, 4199. [Google Scholar] [CrossRef]
- von Stengel, B.; Zamir, S. Leadership games with convex strategy sets. Games Econ. Behav. 2010, 69, 446–457. [Google Scholar] [CrossRef]
- Pita, J.; Jain, M.; Tambe, M.; Ordóñez, F.; Kraus, S. Robust solutions to Stackelberg games: Addressing bounded rationality and limited observations in human cognition. Artif. Intell. 2010, 174, 1142–1171. [Google Scholar] [CrossRef]
- An, B.; Tambe, M.; Ordonez, F.; Shieh, E.; Kiekintveld, C. Refinement of Strong Stackelberg Equilibria in Security Games. Proc. AAAI Conf. Artif. Intell. 2011, 25, 587–593. [Google Scholar] [CrossRef]
- Kiekintveld, C.; Marecki, J.; Tambe, M. Approximation methods for infinite Bayesian Stackelberg games: Modeling distributional payoff uncertainty. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems—Volume 3, Taipei, Taiwan, 2–6 May 2011; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2011; pp. 1005–1012. [Google Scholar] [CrossRef]
- Wushishi, U.; Hussain, A.; Khalid, M.I.; Hussain, N.; Jamjoom, M.; Ullah, Z. D3O-IIoT: Deep reinforcement learning-driven dynamic deception orchestration for industrial IoT security. Sci. Rep. 2025, 16, 2389. [Google Scholar] [CrossRef]
- Nong, P.; Williamson, A.; Anthony, D.; Platt, J.; Kardia, S. Discrimination, trust, and withholding information from providers: Implications for missing data and inequity. SSM Popul. Health 2022, 18, 101092. [Google Scholar] [CrossRef]
- Min, M.; Xiao, L.; Xie, C.; Hajimirsadeghi, M.; Mandayam, N.B. Defense Against Advanced Persistent Threats in Dynamic Cloud Storage: A Colonel Blotto Game Approach. IEEE Internet Things J. 2018, 5, 4250–4261. [Google Scholar] [CrossRef]
- Kumar, R.; Singh, S.; Kela, R. Analyzing Advanced Persistent Threats Using Game Theory: A Critical Literature Review. In Critical Infrastructure Protection XV, Proceedings of the 15th IFIP WG 11.10 International Conference, ICCIP 2021, Virtual Event, 15–16 March 2021; Staggs, J., Shenoi, S., Eds.; IFIP Advances in Information and Communication Technology; Springer International Publishing: Cham, Switzerland, 2022; Volume 636, pp. 45–69. ISBN 978-3-030-93510-8. [Google Scholar]
- Khalid, M.N.A.; Al-Kadhimi, A.A.; Singh, M.M. Recent Developments in Game-Theory Approaches for the Detection and Defense against Advanced Persistent Threats (APTs): A Systematic Review. Mathematics 2023, 11, 1353. [Google Scholar] [CrossRef]
- Huang, L.; Zhu, Q. A dynamic games approach to proactive defense strategies against Advanced Persistent Threats in cyber-physical systems. Comput. Secur. 2020, 89, 101660. [Google Scholar] [CrossRef]
- Jafar, M.T.; Yang, L.-X.; Li, G.; Yang, X. The evolution of the flip-it game in cybersecurity: Insights from the past to the future. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 102195. [Google Scholar] [CrossRef]
- Zhou, Y.; Cheng, G.; Jiang, S.; Zhao, Y.; Chen, Z. Cost-effective moving target defense against DDoS attacks using trilateral game and multi-objective Markov decision processes. Comput. Secur. 2020, 97, 101976. [Google Scholar] [CrossRef]
- Evans, B.P.; Prokopenko, M. Bounded rationality for relaxing best response and mutual consistency: The quantal hierarchy model of decision making. Theory Decis. 2024, 96, 71–111. [Google Scholar] [CrossRef]
- Friedman, E.; Gonçalves, D. Quantal response equilibrium with a continuum of types: Characterization and nonparametric identification. Games Econ. Behav. 2025, in press. [Google Scholar] [CrossRef]
- Li, C.; Zhao, N.; Wu, H. Multiple deception resources deployment strategy based on reinforcement learning for network threat mitigation. Sci. Rep. 2025, 15, 16830. [Google Scholar] [CrossRef] [PubMed]
- Coniglio, S.; Gatti, N.; Marchesi, A. Computing a Pessimistic Stackelberg Equilibrium with Multiple Followers: The Mixed-Pure Case. Algorithmica 2020, 82, 1189–1238. [Google Scholar] [CrossRef]
- Hoefer, M.; Manurangsi, P.; Psomas, A. Algorithmic Persuasion with Evidence. ACM Trans. Econ. Comput. 2024, 12, 12. [Google Scholar] [CrossRef]
- Bhaskar, U.; Cheng, Y.; Ko, Y.K.; Swamy, C. Hardness Results for Signaling in Bayesian Zero-Sum and Network Routing Games. In Proceedings of the 2016 ACM Conference on Economics and Computation, Maastricht, The Netherlands, 24–28 July 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 479–496. [Google Scholar] [CrossRef]
- Bernstein, D.S.; Givan, R.; Immerman, N.; Zilberstein, S. The Complexity of Decentralized Control of Markov Decision Processes. Math. Oper. Res. 2002, 27, 819–840. [Google Scholar] [CrossRef]
- McKelvey, R.D.; Palfrey, T.R. Quantal Response Equilibria for Extensive Form Games. Exp. Econ. 1998, 1, 9–41. [Google Scholar] [CrossRef]
- Chen, Y.; Dang, C. An extension of quantal response equilibrium and determination of perfect equilibrium. Games Econ. Behav. 2020, 124, 659–670. [Google Scholar] [CrossRef]
- Böckenhauer, H.-J.; Gehnen, M.; Hromkovič, J.; Klasing, R.; Komm, D.; Lotze, H.; Mock, D.; Rossmanith, P.; Stocker, M. Online Unbounded Knapsack. Theory Comput. Syst. 2025, 69, 14. [Google Scholar] [CrossRef]
- Verdú, S. Error Exponents and α-Mutual Information. Entropy 2021, 23, 199. [Google Scholar] [CrossRef] [PubMed]
- M, A.; Magbool Jan, N. Convex optimization approach to design sensor networks using information theoretic measures. AIChE J. 2024, 70, e18267. [Google Scholar] [CrossRef]
- Lauri, M.; Ritala, R. Planning for robotic exploration based on forward simulation. Robot. Auton. Syst. 2016, 83, 15–31. [Google Scholar] [CrossRef]
- Olivos-Castillo, I.; Schrater, P.; Pitkow, X. Frugal inference for control. arXiv 2025, arXiv:2406.14427v3. [Google Scholar]
- Seo, S.; Kim, D. SOD2G: A Study on a Social-Engineering Organizational Defensive Deception Game Framework through Optimization of Spatiotemporal MTD and Decoy Conflict. Electronics 2021, 10, 3012. [Google Scholar] [CrossRef]
- Sinha, A.; Fang, F.; An, B.; Kiekintveld, C.; Tambe, M. Stackelberg security games: Looking beyond a decade of success. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; AAAI Press: Palo Alto, CA, USA, 2018; pp. 5494–5501. [Google Scholar] [CrossRef]
- Thanh Nguyen, H.X. When Can the Defender Effectively Deceive Attackers in Security Games? Proc. AAAI Conf. Artif. Intell. 2022, 36, 9405–9412. [Google Scholar] [CrossRef]
- Nguyen, T.; Xu, H. Imitative Attacker Deception in Stackelberg Security Games. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; AAAI Press: Palo Alto, CA, USA, 2019; pp. 528–534. [Google Scholar] [CrossRef]
- Maghanaki, M.; Chen, F.F.; Shahin, M.; Hosseinzadeh, A.; Bouzary, H. A Novel Transformer-Based Model for Comprehensive Text-Aware Service Composition in Cloud-Based Manufacturing. In Intelligent Production and Industry 5.0 with Human Touch, Resilience, and Circular Economy, Proceedings of the Transactions of the 12th International Conference on Production Research—ICPR Americas 2024; Šormaz, D.N., Bidanda, B., Alhawari, O., Geng, Z., Eds.; Springer Nature: Cham, Switzerland, 2025; pp. 313–321. [Google Scholar] [CrossRef]
- Shahin, M.; Chen, F.F.; Maghanaki, M.; Mehrzadi, H.; Hosseinzadeh, A. Advanced Forecasting Techniques for Strategic Decision-Making in Manufacturing: Analyzing Financial Market Predictive Models. In Flexible Automation and Intelligent Manufacturing: The Future of Automation and Manufacturing: Intelligence, Agility, and Sustainability, Proceedings of the FAIM 2025, New York City, NY, USA, 21–24 June 2025; Srihari, K., Khasawneh, M.T., Yoon, S., Won, D., Eds.; Springer Nature: Cham, Switzerland, 2026; pp. 59–66. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Shahin, M.; Maghanaki, M.; Chen, F.F. Evaluating Synthetic Cyber Deception Strategies Under Uncertainty via Game Theory Approach: Linking Information Leakage and Game Outcomes in Cyber Deception. Sensors 2026, 26, 1748. https://doi.org/10.3390/s26061748