Novel Actionable Counterfactual Explanations for Intrusion Detection Using Diffusion Models
Abstract
1. Introduction
- We propose a novel tabular diffusion-based algorithm for generating CF explanations in network intrusion detection applications. The method adapts a technique from the diffusion-based tabular data synthesis literature. We provide empirical evidence of its utility compared with several existing publicly available counterfactual algorithms.
- To the best of our knowledge, the quantitative comparison of the existing counterfactual methods presented in this work provides the first analysis of its kind in the NIDS domain.
- Finally, we propose global counterfactual rules to summarize the CF explanations generated by the proposed method. These rules are derived using a simple yet effective technique based on decision trees. They summarize the important features for a cohort of attack data points and the associated value bounds that separate them from benign data. Hence, these rules can be used in actionable defense measures against network intrusion attacks, which is a novel utility of CF explanations in the NIDS domain.
2. Related Work
3. Counterfactual Explanations for Network Intrusion Detection
- Efficiency—Modern cybersecurity operation centers are fast-paced and handle dynamic threat landscapes. Such an environment requires efficient explanation generation in order to quickly identify threats and design countermeasures.
- Validity—The generated explanations are required to be valid (reside in the intended target class), which would otherwise reduce the utility of generated explanations for defense measures.
- Diversity—Intrusion attacks generally comprise multiple attack queries (e.g., malformed HTTP requests in a Denial of Service (DoS) attack). Generating multiple diverse explanations per query offers more perspectives on an attack than a single explanation. Later, we exploit this quality to aggregate diverse explanations into global rules.
- Sparsity and plausibility—Sparsity ensures minimal changes to the original input, and plausibility ensures that the generated CF explanations are realistic. These two qualities provide human analysts with explanations that are easier to analyze and to utilize for countermeasures against threats.
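The validity and sparsity criteria above lend themselves to simple quantitative checks. The following is a minimal sketch, not the paper's implementation: the function names, the `np.isclose` feature-change test, and the stand-in classifier are our own assumptions. Sparsity counts how many features a counterfactual changes; validity measures the fraction of counterfactuals that actually land in the intended target class.

```python
import numpy as np

def sparsity(x, x_cf):
    """Average number of features changed between originals and their counterfactuals."""
    return float(np.mean(np.sum(~np.isclose(x, x_cf), axis=1)))

def validity(predict_fn, x_cf, target_class=0):
    """Fraction of counterfactuals the classifier places in the intended target class."""
    return float(np.mean(predict_fn(x_cf) == target_class))

# Toy data: each counterfactual changes exactly one feature of its original.
x = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
x_cf = np.array([[1.0, 2.0, 9.0], [4.0, 0.0, 6.0]])
always_benign = lambda a: np.zeros(len(a), dtype=int)  # stand-in black-box classifier

print(sparsity(x, x_cf))              # 1.0: one feature changed per counterfactual
print(validity(always_benign, x_cf))  # 1.0: all counterfactuals land in class 0
```

Lower sparsity and higher validity are better, matching the ↓/↑ arrows in the result tables below.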
3.1. Counterfactual Algorithms for Intrusion Detection
3.2. Guided Diffusion-Based Counterfactual Explanations for Network Intrusion Detection
3.2.1. VCNet Model
3.2.2. Guided Diffusion Models
4. Methodology
4.1. Datasets
4.2. Baseline Counterfactual Methods
4.3. Evaluation Metrics
5. Results
Global Counterfactual Rules
- state < 2, proto ≤ 46, djit ≤ 91,402.99, proto > 0
- state ≥ 1, trans_depth ≤ 1.0, dload ≤ 2,579,956.5, state < 7, proto > 25, proto < 27
- dur ≤ 15.34, sload ≤ 72,363,628.0, response_body_len ≤ 141.5, ct_flw_http_mthd ≤ 2.5, dtcpb ≤ 4,201,986,176.0, ct_dst_sport_ltm ≤ 3.5, is_ftp_login ≤ 2.5
Algorithm 1 Simple Rule Extraction
function RuleExtraction(attack instances X, counterfactuals X′, features F)
  fit a shallow decision tree on X ∪ X′ with labels attack/counterfactual
  for each leaf whose majority class is counterfactual do
    collect the split conditions on the root-to-leaf path as a rule over F
  end for
  return the collected rules
end function
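The rule extraction step can be sketched concretely. This is our own minimal reconstruction based on the decision-tree technique described above, not the authors' code; the function name, toy data, and tree depth are assumptions. A shallow tree is fit to separate attack points from their counterfactuals, and each path ending in a counterfactual-majority leaf is read off as a global rule of feature bounds.

```python
# Sketch: derive global counterfactual rules from a shallow decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_rules(X_attack, X_cf, feature_names, max_depth=3):
    X = np.vstack([X_attack, X_cf])
    y = np.concatenate([np.zeros(len(X_attack)), np.ones(len(X_cf))])
    t = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, y).tree_
    rules = []

    def walk(node, conds):
        if t.children_left[node] == -1:           # leaf node
            if np.argmax(t.value[node][0]) == 1:  # counterfactual-majority leaf
                rules.append(conds)
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [f"{name} <= {thr:.2f}"])
        walk(t.children_right[node], conds + [f"{name} > {thr:.2f}"])

    walk(0, [])
    return rules

# Toy demo: one feature cleanly separates attacks (~0) from counterfactuals (~1),
# so a single rule of the form "f0 > t" is recovered.
rules = extract_rules(np.array([[0.0], [0.1]]), np.array([[1.0], [1.1]]), ["f0"])
```

Each returned rule is a conjunction of value bounds, matching the comma-separated condition lists shown for the datasets.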
- ‘Fwd Header Length’ ≤ 205.00, ‘ACK Flag Count’ > 0, ‘Protocol’ ≤ 11, ‘CWE Flag Count’ < 1, ‘Protocol’ > 3, ‘Idle Std’ ≤ 31,137,002.0
- ‘Fwd Header Length’ ≤ 205.00, ‘ACK Flag Count’ > 0.5, ‘Protocol’ ≤ 11, ‘CWE Flag Count’ > 0, ‘RST Flag Count’ < 1, ‘Fwd PSH Flags’ > 0, ‘SYN Flag Count’ > 0
- ‘Flow IAT Mean’ > 0.625, ‘Fwd Packet Length Min’ ≤ 1148.498
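Rules like these can act directly as flow filters in defense measures. Below is an illustrative sketch under our own assumptions (the helper function, the synthetic flow records, and the rule encoding are hypothetical; the column names merely follow the CIC-style features quoted above): flows satisfying every condition of a rule are treated as benign-like, and the remainder are flagged for inspection.

```python
import pandas as pd

def matches_rule(df, conditions):
    """Boolean mask of rows satisfying all (column, op, threshold) conditions."""
    mask = pd.Series(True, index=df.index)
    for col, op, thr in conditions:
        mask &= (df[col] > thr) if op == ">" else (df[col] <= thr)
    return mask

# Synthetic flows; only the last two satisfy both conditions of the rule.
flows = pd.DataFrame({
    "Flow IAT Mean": [0.1, 2.0, 5.0],
    "Fwd Packet Length Min": [2000.0, 100.0, 50.0],
})
rule = [("Flow IAT Mean", ">", 0.625), ("Fwd Packet Length Min", "<=", 1148.498)]

benign_like = matches_rule(flows, rule)
suspicious = flows[~benign_like]  # flows violating the rule get flagged
```

This mirrors the filtering experiment reported later, where applying the mined rules removes nearly all attack data while retaining benign traffic.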
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
NIDS | Network Intrusion Detection System |
XAI | Explainable Artificial Intelligence |
CF | Counterfactual |
CVAE | Conditional Variational Auto-Encoder |
DDPM | Denoising Diffusion Probabilistic Model |
Appendix A. Theoretical Details
Progressive Distillation of DDPM
Appendix B. Details of the Classifier Models and Other Hyperparameters
Black-Box Classifier
Parameter | Value |
---|---|
No. of Hidden Units | 128, 64, 32 |
Optimizer and lr. | Adam, |
Epochs | 600 (300 for CIC-IDS-2017) |
References
- Gamage, S.; Samarabandu, J. Deep learning methods in network intrusion detection: A survey and an objective comparison. J. Netw. Comput. Appl. 2020, 169, 102767. [Google Scholar] [CrossRef]
- Chinnasamy, R.; Subramanian, M.; Easwaramoorthy, S.V.; Cho, J. Deep learning-driven methods for network-based intrusion detection systems: A systematic review. ICT Express 2025, 11, 181–215. [Google Scholar] [CrossRef]
- Houda, Z.A.E.; Brik, B.; Khoukhi, L. “Why Should I Trust Your IDS?”: An Explainable Deep Learning Framework for Intrusion Detection Systems in Internet of Things Networks. IEEE Open J. Commun. Soc. 2022, 3, 1164–1176. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2017; Volume 30. [Google Scholar]
- Benzaid, C.; Taleb, T. AI-Driven Zero Touch Network and Service Management in 5G and Beyond: Challenges and Research Directions. IEEE Netw. 2020, 34, 186–194. [Google Scholar] [CrossRef]
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. arXiv 2018, arXiv:1711.00399. [Google Scholar] [CrossRef]
- Guidotti, R. Counterfactual explanations and how to find them: Literature review and benchmarking. Data Min. Knowl. Discov. 2022, 38, 2770–2824. [Google Scholar] [CrossRef]
- Verma, S.; Dickerson, J.P.; Hines, K. Counterfactual Explanations for Machine Learning: A Review. arXiv 2020, arXiv:2010.10596. [Google Scholar]
- Stepin, I.; Alonso, J.M.; Catala, A.; Pereira-Farina, M. A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence. IEEE Access 2021, 9, 11974–12001. [Google Scholar] [CrossRef]
- Pawelczyk, M.; Broelemann, K.; Kasneci, G. On Counterfactual Explanations under Predictive Multiplicity. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR, Virtual, 3–6 August 2020; pp. 809–818. [Google Scholar]
- Gupta, V.; Nokhiz, P.; Roy, C.D.; Venkatasubramanian, S. Equalizing Recourse across Groups. arXiv 2019, arXiv:1909.03166. [Google Scholar] [CrossRef]
- Kotelnikov, A.; Baranchuk, D.; Rubachev, I.; Babenko, A. TabDDPM: Modelling Tabular Data with Diffusion Models. In Proceedings of the 40th International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2023; Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., Scarlett, J., Eds.; Volume 202, pp. 17564–17579. [Google Scholar]
- Chou, Y.L.; Moreira, C.; Bruza, P.; Ouyang, C.; Jorge, J. Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications. Inf. Fusion 2022, 81, 59–83. [Google Scholar] [CrossRef]
- Leemann, T.; Pawelczyk, M.; Prenkaj, B.; Kasneci, G. Towards Non-Adversarial Algorithmic Recourse. arXiv 2024, arXiv:2403.10330. [Google Scholar] [CrossRef]
- Ferry, J.; Aïvodji, U.; Gambs, S.; Huguet, M.J.; Siala, M. Taming the Triangle: On the Interplays Between Fairness, Interpretability, and Privacy in Machine Learning. Comput. Intell. 2025, 41, e70113. [Google Scholar] [CrossRef]
- General Data Protection Regulation (GDPR)–Legal Text—gdpr-info.eu. Available online: https://gdpr-info.eu/ (accessed on 16 August 2025).
- Zhang, Z.; Hamadi, H.A.; Damiani, E.; Yeun, C.Y.; Taher, F. Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research. IEEE Access 2022, 10, 93104–93139. [Google Scholar] [CrossRef]
- Krishnan, D.; Singh, S.; Sugumaran, V. Explainable AI for Zero-Day Attack Detection in IoT Networks Using Attention Fusion Model. Discov. Internet Things 2025, 5, 83. [Google Scholar] [CrossRef]
- Keshk, M.; Koroniotis, N.; Pham, N.; Moustafa, N.; Turnbull, B.; Zomaya, A.Y. An explainable deep learning-enabled intrusion detection framework in IoT networks. Inf. Sci. 2023, 639, 119000. [Google Scholar] [CrossRef]
- Moustafa, N.; Koroniotis, N.; Keshk, M.; Zomaya, A.Y.; Tari, Z. Explainable Intrusion Detection for Cyber Defences in the Internet of Things: Opportunities and Solutions. IEEE Commun. Surv. Tutor. 2023, 25, 1775–1807. [Google Scholar] [CrossRef]
- Barnard, P.; Marchetti, N.; DaSilva, L.A. Robust Network Intrusion Detection Through Explainable Artificial Intelligence (XAI). IEEE Netw. Lett. 2022, 4, 167–171. [Google Scholar] [CrossRef]
- Kalakoti, R.; Vaarandi, R.; Bahsi, H.; Nõmm, S. Evaluating Explainable AI for Deep Learning-Based Network Intrusion Detection System Alert Classification. arXiv 2025, arXiv:2506.07882. [Google Scholar] [CrossRef]
- Naif Alatawi, M. Enhancing Intrusion Detection Systems with Advanced Machine Learning Techniques: An Ensemble and Explainable Artificial Intelligence (AI) Approach. Secur. Priv. 2025, 8, e496. [Google Scholar] [CrossRef]
- Arreche, O.; Guntur, T.R.; Roberts, J.W.; Abdallah, M. E-XAI: Evaluating Black-Box Explainable AI Frameworks for Network Intrusion Detection. IEEE Access 2024, 12, 23954–23988. [Google Scholar] [CrossRef]
- Dietz, K.; Hajizadeh, M.; Schleicher, J.; Wehner, N.; Geißler, S.; Casas, P.; Seufert, M.; Hoßfeld, T. Agree to Disagree: Exploring Consensus of XAI Methods for ML-based NIDS. In Proceedings of the 2024 20th International Conference on Network and Service Management (CNSM), Prague, Czech Republic, 28–31 October 2024; pp. 1–7. [Google Scholar] [CrossRef]
- Marino, D.L.; Wickramasinghe, C.S.; Manic, M. An Adversarial Approach for Explainable AI in Intrusion Detection Systems. In Proceedings of the IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 3237–3243. [Google Scholar] [CrossRef]
- Suryotrisongko, H.; Musashi, Y.; Tsuneda, A.; Sugitani, K. Robust Botnet DGA Detection: Blending XAI and OSINT for Cyber Threat Intelligence Sharing. IEEE Access 2022, 10, 34613–34624. [Google Scholar] [CrossRef]
- Zeng, Z.; Peng, W.; Zeng, D.; Zeng, C.; Chen, Y. Intrusion detection framework based on causal reasoning for DDoS. J. Inf. Secur. Appl. 2022, 65, 103124. [Google Scholar] [CrossRef]
- Gyawali, S.; Huang, J.; Jiang, Y. Leveraging Explainable AI for Actionable Insights in IoT Intrusion Detection. In Proceedings of the 2024 19th Annual System of Systems Engineering Conference (SoSE), Tacoma, WA, USA, 23–26 June 2024; pp. 92–97. [Google Scholar] [CrossRef]
- Evangelatos, S.; Veroni, E.; Efthymiou, V.; Nikolopoulos, C.; Papadopoulos, G.T.; Sarigiannidis, P. Exploring Energy Landscapes for Minimal Counterfactual Explanations: Applications in Cybersecurity and Beyond. arXiv 2025, arXiv:2503.18185. [Google Scholar] [CrossRef]
- Guyomard, V.; Fessant, F.; Guyet, T.; Bouadi, T.; Termier, A. VCNet: A Self-explaining Model for Realistic Counterfactual Generation. In Proceedings of the Machine Learning and Knowledge Discovery in Databases, Grenoble, France, 19–23 September 2022; Amini, M.R., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G., Eds.; Springer: Cham, Switzerland, 2023; pp. 437–453. [Google Scholar]
- Guo, H.; Nguyen, T.H.; Yadav, A. CounterNet: End-to-End Training of Prediction Aware Counterfactual Explanations. In Proceedings of the KDD ’23: 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 577–589. [Google Scholar] [CrossRef]
- Dhariwal, P.; Nichol, A. Diffusion Models Beat GANs on Image Synthesis. arXiv 2021, arXiv:2105.05233. [Google Scholar] [CrossRef]
- Madaan, N.; Bedathur, S. Navigating the Structured What-If Spaces: Counterfactual Generation via Structured Diffusion. arXiv 2023, arXiv:2312.13616. [Google Scholar] [CrossRef]
- Salimans, T.; Ho, J. Progressive Distillation for Fast Sampling of Diffusion Models. In Proceedings of the The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, 25–29 April 2022. [Google Scholar]
- Ho, J.; Jain, A.; Abbeel, P. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H.T., Eds.; Curran Associates, Inc.: New York, NY, USA, 2020; Volume 33, pp. 6840–6851. [Google Scholar]
- Hoogeboom, E.; Nielsen, D.; Jaini, P.; Forré, P.; Welling, M. Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions. In Advances in Neural Information Processing Systems; Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W., Eds.; Curran Associates, Inc.: New York, NY, USA, 2021; Volume 34, pp. 12454–12465. [Google Scholar]
- Moustafa, N.; Slay, J. UNSW-NB15: A comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set). In Proceedings of the 2015 Military Communications and Information Systems Conference (MilCIS), Canberra, Australia, 10–12 November 2015; pp. 1–6. [Google Scholar] [CrossRef]
- Sharafaldin, I.; Habibi Lashkari, A.; Ghorbani, A.A. Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization. In Proceedings of the 4th International Conference on Information Systems Security and Privacy, Funchal, Madeira, Portugal, 22–24 January 2018; pp. 108–116. [Google Scholar] [CrossRef]
- Sharafaldin, I.; Lashkari, A.H.; Hakak, S.; Ghorbani, A.A. Developing Realistic Distributed Denial of Service (DDoS) Attack Dataset and Taxonomy. In Proceedings of the 2019 International Carnahan Conference on Security Technology (ICCST), Chennai, India, 1–3 October 2019; pp. 1–8. [Google Scholar] [CrossRef]
- Mothilal, R.K.; Sharma, A.; Tan, C. Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 607–617. [Google Scholar] [CrossRef]
- Poyiadzi, R.; Sokol, K.; Santos-Rodriguez, R.; De Bie, T.; Flach, P. FACE: Feasible and Actionable Counterfactual Explanations. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Barcelona, Spain, 27–30 January 2020; pp. 344–350. [Google Scholar] [CrossRef]
- Pawelczyk, M.; Broelemann, K.; Kasneci, G. Learning Model-Agnostic Counterfactual Explanations for Tabular Data. In Proceedings of the WWW ’20: The Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 3126–3132. [Google Scholar] [CrossRef]
- Pawelczyk, M.; Bielawski, S.; den Heuvel, J.V.; Richter, T.; Kasneci, G. CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms. In Proceedings of the Thirty-Fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), Virtual, 6–14 December 2021. [Google Scholar]
- Moreira, C.; Chou, Y.L.; Hsieh, C.; Ouyang, C.; Pereira, J.; Jorge, J. Benchmarking Instance-Centric Counterfactual Algorithms for XAI: From White Box to Black Box. Acm Comput. Surv. 2024, 57, 145. [Google Scholar] [CrossRef]
- Bayrak, B.; Bach, K. Evaluation of Instance-Based Explanations: An In-Depth Analysis of Counterfactual Evaluation Metrics, Challenges, and the CEval Toolkit. IEEE Access 2024, 12, 137683–137695. [Google Scholar] [CrossRef]
Dataset | Features | No. of Instances |
---|---|---|
UNSW-NB15 [38] | 34 (31 num., 3 cat.) | 257,499 |
CICDDoS-2019 [40] | 32 (26 num., 6 cat.) | 190,383 |
CICIDS-2017 [39] | 49 (49 num.) | 2,756,124 |
Method | Sparsity ↓ | k-Validity ↑ | Validity ↑ | log-LOF ↓ | Time(s) ↓ | Acc/F1 |
---|---|---|---|---|---|---|
Wachter | 6.51 | - | 0 | 1.25 | 0.04 | 85.94/87.79 |
DiCE | 33.2 | 8.62 | 0.99 | 6.82 | 2.14 | 89.10/90.44 |
FACE | 1 | - | 0 | 1.21 | 395.75 | 87.65/89.02 |
VCNet | 33.96 | - | 0.97 | 1.59 | <1 ms | 85.83/85.30 |
CCHVAE | 20.11 | - | 0 | 1.34 | 24.44 | 88.32/90.55 |
SCD | 22.63 | 2.13 | 0.8 | 0.26 | 3.01 | 87.65/89.02 |
TabDiff | 19.5 | 7.44 | 1 | 0.12 | 6.22 | 87.65/89.02 |
TabDiff-distill. | 17.92 | 5.03 | 1 | 0.68 | 0.92 | 87.65/89.02 |
Method | Sparsity ↓ | k-Validity ↑ | Validity ↑ | log-LOF ↓ | Time(s) ↓ | Acc/F1 |
---|---|---|---|---|---|---|
Wachter | 6.25 | - | 0 | 0.72 | 8 ms | 96.43/92.92 |
DiCE | 3.8 | 0.04 | 0.04 | 0.74 | 2.8 | 99.15/96.67 |
FACE | - | - | - | - | - | - |
VCNet | 20.24 | - | 0.97 | 1.05 ± 0.27 | <1 ms | 99.13/98.24 |
CCHVAE | - | - | - | - | - | - |
SCD | 20.95 | 8.78 | 1 | 0.38 | 8.06 | 99.06/98.11 |
TabDiff | 22.59 | 8.92 | 1 | 0.12 | 15.55 | 99.14/98.25 |
TabDiff-distill. | 16.28 | 3.87 | 0.99 | 0.15 | 1.65 | 99.1/96.54 |
Method | Sparsity ↓ | k-Validity ↑ | Validity ↑ | log-LOF ↓ | Time(s) ↓ | Acc/F1 |
---|---|---|---|---|---|---|
Wachter | 32.94 | - | 0 | 0.57 | 0.01 | 99.46/98.56 |
DiCE | 3.57 | 6.84 | 1 | 0.15 | 35.93 | 99.35/98.29 |
FACE | - | - | - | - | - | - |
VCNet | 30.03 | - | 0.93 | 0.4 | <1 ms | 98.44/95.8 |
CCHVAE | - | - | - | - | - | - |
SCD | 38.43 | 8.82 | 1 | 0.39 | 3.69 | 99.24/97.17 |
TabDiff | 34.07 | 7.53 | 1 | 0.19 | 6.83 | 99.33/97.21 |
TabDiff-distill. | 30.69 | 7.71 | 1 | 0.53 | 0.54 | 99.28/98.09 |
Feature | Original(x) | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|
Init Bwd Win Bytes | −1.000000 | −1.000000 | −1.000000 | −1.000000 | −1.000000 | −1.000000 |
Bwd Packet Len Max | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Fwd Act Data Packets | 1.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Bwd Header Len | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Bwd IAT Min | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Flow IAT Mean | 3.000004 | 0.500000 | 0.500000 | 0.500000 | 0.500000 | 0.500000 |
Flow IAT Min | 3.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Total Fwd Packets | 2.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 |
Bwd IAT Total | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Total Bwd Packets | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Fwd Packet Len Min | 580.999972 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Fwd Header Len | 64.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Down/Up Ratio | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Idle Std | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Flow Packets/s | 666,667.560736 | 0.042459 | 0.042459 | 0.042459 | 0.042459 | 0.042459 |
Fwd Packet Length Max | 581.000056 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Flow Duration | 3.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 |
Init Fwd Win Bytes | −1.000000 | −1.000000 | −1.000000 | 65,535.000000 | −1.000000 | −1.000000 |
Active Std | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Fwd Packet Length Total | 1162.000245 | 0.000000 | 0.000000 | 77,926.000000 | 77,926.000000 | 0.000000 |
Active Mean | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Bwd Packets/s | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Flow Bytes/s | 387,333,669.424055 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Fwd Packet Length Std | 0.000000 | 0.000000 | 0.000000 | 1150.217529 | 1150.217529 | 0.000000 |
Bwd IAT Mean | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 |
Protocol | 17.000000 | 0.000000 | 0.000000 | 17.000000 | 0.000000 | 0.000000 |
Fwd PSH Flags | 0.000000 | 0.000000 | 0.000000 | 1.000000 | 1.000000 | 6.000000 |
SYN Flag Count | 0.000000 | 0.000000 | 1.000000 | 0.000000 | 1.000000 | 1.000000 |
RST Flag Count | 0.000000 | 0.000000 | 0.000000 | 1.000000 | 0.000000 | 1.000000 |
ACK Flag Count | 0.000000 | 1.000000 | 0.000000 | 1.000000 | 0.000000 | 0.000000 |
CWE Flag Count | 0.000000 | 1.000000 | 0.000000 | 1.000000 | 1.000000 | 0.000000 |
Prediction | 1 | 1 | 0 | 1 | 0 | 0 |
Filtered Dataset | Rule | Benign Data Left | Attack Data Left |
---|---|---|---|
Test set | Rule 1 | 99.52% | 0.47% |
| Rule 2 | 100% | 0% |
| Rule 3 | 100% | 0% |
LDAP DDoS | Rule 1 | - | 0% |
| Rule 2 | - | 0% |
| Rule 3 | - | 4.9% |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Galwaduge, V.; Samarabandu, J. Novel Actionable Counterfactual Explanations for Intrusion Detection Using Diffusion Models. J. Cybersecur. Priv. 2025, 5, 68. https://doi.org/10.3390/jcp5030068