A Comprehensive Review: The Evolving Cat-and-Mouse Game in Network Intrusion Detection Systems Leveraging Machine Learning
Abstract
1. Introduction
1.1. Motivation
1.2. Contributions of This Paper
- We give an in-depth review of existing ML-based NIDS attack and defense techniques, including a detailed overview of NIDS, its types, ML techniques, commonly used datasets in NIDS, attacker models, and possible detection methods.
- We carefully classify the most powerful attack and defense techniques and the most realistic datasets used in NIDS.
- Existing attack and defensive approaches on adversarial learning in the NIDS are analyzed and evaluated.
- We identify the current challenges in detection and defensive techniques that must be further investigated and remedied to produce more robust NIDS models and realistic datasets. These challenges inform the future research directions we outline for detecting AAs in ML-based NIDS; the discussion section explores them in detail, highlighting our findings and the limitations of existing approaches.
- A comprehensive analysis has been conducted to show the impact of adversarial breaches on ML-based NIDS, including black-, gray-, and white-box attacks.
2. Background
2.1. ML Techniques in NIDS
2.2. Adversarial Machine Learning (AML)
2.2.1. Knowledge
- Black-box attack: The malicious agent requires no information about the classifier’s parameters or structure; only the model’s inputs and outputs are accessible, and everything else is treated as a black box.
- Gray-box attack: The attacker has limited information about, or access to, the system, such as access to the training dataset (its preparation and predicted labels) or the ability to issue a limited number of queries to the model.
- White-box attack: The attacker is assumed to have complete knowledge of the classifier and the hyperparameters used.
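To make the black-box setting concrete, the following minimal sketch (our own illustration, not drawn from the surveyed works) shows how an attacker who observes only input/output labels might search for an evasive sample; the `classify` oracle and its decision rule are invented stand-ins for an unknown NIDS model:

```python
import random

random.seed(0)

# Hypothetical black-box oracle: the attacker only sees the predicted label
# (1 = "attack", 0 = "benign"); the rule inside is unknown to the attacker.
def classify(features):
    return 1 if features["bytes"] > 5000 and features["rate"] > 10.0 else 0

def black_box_probe(sample, budget=200, step=0.9):
    """Query-only search: shrink numeric features until the label flips."""
    x = dict(sample)
    for _ in range(budget):
        if classify(x) == 0:          # evasion found using labels alone
            return x
        key = random.choice(["bytes", "rate"])
        x[key] *= step                # small perturbation, no gradients needed
    return None

evasive = black_box_probe({"bytes": 9000.0, "rate": 25.0})
print(evasive is not None, classify(evasive))
```

The attacker never inspects parameters or gradients; it simply perturbs a flow record and re-queries the model until the decision flips, which is precisely why black-box attacks remain feasible against deployed NIDS.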
2.2.2. Timing
- Evasion Attack: This attack is mounted during the testing phase, where an attacker attempts to force the ML model to misclassify observations. In the context of network-based IDS, the attacker tries to prevent the detection system from flagging malicious or unusual events, so the NIDS wrongly classifies the behavior as benign. More specifically, four scenarios may occur, as follows: (1) Confidence reduction, which lowers the certainty score to cause wrong classification; (2) Misclassification, in which the attacker alters the result to produce a class different from the original one; (3) Intended wrong prediction, in which the malicious agent generates an instance that fools the model into classifying the behavior as a chosen incorrect class; and (4) Source or target misclassification, in which the adversary changes the predicted class of a particular attack example to a specific target class [34].
- Poisoning attack: This attack occurs during the training phase, in which an assaulter tampers with the model or the training data to yield inaccurate predictions. Poisoning attacks mainly involve data insertion (injection), data poisoning or manipulation, and logic corruption. Data insertion occurs when an assailant inserts hostile or harmful data inputs into the original data without changing its characteristics or labels. The phrase “data manipulation” refers to an adversary adjusting the original training data (features or labels) to compromise the ML model. Logic corruption refers to an adversary’s attempt to modify the internal model structure, decision-making logic, or hyperparameters so that the ML model malfunctions [27].
2.2.3. Goals
2.2.4. Capability
2.3. Adversarial Attacks Based on Machine Learning
2.3.1. Generative Adversarial Networks (GANs)
2.3.2. Zero-Order Optimization (ZOO)
2.3.3. Kernel Density Estimation (KDE)
2.3.4. DeepFool
2.3.5. Fast Gradient Sign Method (FGSM)
2.3.6. The Carlini and Wagner (C&W) Attack
2.3.7. The Jacobian-Based Saliency Map Attack (JSMA)
2.3.8. Projected Gradient Descent (PGD)
2.3.9. Basic Iteration Method (BIM)
3. Datasets Used in NIDS
3.1. Types of Datasets Used
- NSL-KDD dataset [65]: It is an updated version of the KDDCup99 dataset. It has 41 features: 13 content connection features, 9 temporal features (computed over a two-second time window), 9 individual TCP connection features, and 10 other general features.
- UNSW-NB15 dataset [66]: This dataset comprises about 100 GB of data containing both malicious and normal traffic records, where each sample has 49 features created using several feature extraction techniques. It is partitioned into training and test sets of 175,341 and 82,332 instances, respectively.
- BoT-IoT dataset [67]: It was generated from a physical network incorporating both botnet and normal traffic samples. It covers serious attacks, including DoS, DDoS, information theft, fingerprinting, and service scanning.
- Kyoto 2006+ dataset [68]: It was established and released by Song et al. It is a collection of real traffic from 32 honeypots, spanning almost three years (November 2006 to August 2009) and totaling over 93 million samples [69]. Each record has 24 features captured from network traffic flows; 14 of them mirror the KDD/DARPA features, while the rest are newly added.
- CSE-CIC-IDS2018 dataset [70]: The Canadian Institute for Cybersecurity (CIC), based in Fredericton, and the Communications Security Establishment (CSE) implemented this dataset on Amazon Web Services (AWS). It extends CIC-IDS2017 and was collected to reflect real cyber threats, comprising eighty features extracted from network traffic and system logs [71,72]. The network infrastructure contains 50 attacking computers, 420 victim machines, and 30 servers.
- The ADFA-LD dataset [75]: It was collected for host-based IDSs and includes samples recorded from two operating systems. Many important attacks in this dataset were derived from zero-day and malware-based attacks.
- KDD CUP 99 dataset [76]: This dataset contains a total of 23 attack types with 41 features, plus an additional 42nd feature used to label each connection as either normal or an attack.
- CTU-13 dataset [77]: It concentrates mainly on botnet network traffic and supports a wide range of anomaly-detection analyses for critical infrastructures, covering attack types such as port scanning and DDoS across thirteen botnet capture scenarios. The dataset was originally presented in two formats: full Pcap traces of normal and malicious packets and labeled bidirectional NetFlow records.

Table 3 illustrates the advantages and disadvantages of commonly used NIDS datasets, highlighting their strengths, limitations, and supporting references for evaluating adversarial contexts. It is worth mentioning that older datasets such as KDDCup99 remain justified for baseline comparisons; however, they can skew results and do not reflect modern network behavior. To ensure sound experimental validation, recent taxonomies of adversarial attacks against NIDS [14] have been considered, since they better capture current traffic patterns and threat actions.
3.2. Data Preparation
- Data Preprocessing: Handle missing values, e.g., by removing them or imputing with the median/mean/mode, and remove irrelevant or duplicated records.
- Data Transformation: Apply a log transform to skewed numerical features and binarize indicator fields, e.g., flags and attack types.
- Handling Anomalies: Filter noise and outliers, e.g., using z-scores, the IQR, or isolation forests.
- Aggregation: Create summary statistics for network flows, e.g., total packets and average duration, and group traffic data by sessions or flows.
- Standardize Data Format: Ensure all datasets share consistent naming, labels, and formats.
- Encoding: Encode labels for binary and multi-class classification.
- Splitting and Balancing: Split into train/test/validation sets and use SMOTE or undersampling to address class imbalance.
- Data Types: Convert columns to appropriate data types, e.g., float, int, or category.
- Dimensionality Reduction: Apply t-SNE or PCA to decrease dimensionality.
- Feature Engineering: Encode categorical variables (one-hot or label encoding), normalize numeric features (min–max or standardization), extract temporal features, and use correlation analysis or feature-importance metrics, e.g., mutual information or tree-based models, to retain only the most relevant features.
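Several of the steps above (outlier filtering, scaling, encoding, and splitting) can be sketched with the Python standard library alone; the toy flow records and the z-score threshold below are assumptions made purely for illustration:

```python
import math
import random

random.seed(2)

# Toy flow records mimicking NIDS data: (duration, protocol, label).
rows = [(random.expovariate(1.0), random.choice(["tcp", "udp", "icmp"]),
         random.choice([0, 1])) for _ in range(200)]
rows.append((50.0, "tcp", 1))  # an extreme-duration outlier

durations = [r[0] for r in rows]
mu = sum(durations) / len(durations)
sigma = math.sqrt(sum((d - mu) ** 2 for d in durations) / len(durations))

# Handling anomalies: drop rows whose duration z-score exceeds 3.
clean = [r for r in rows if abs((r[0] - mu) / sigma) <= 3]

# Feature engineering: min-max scaling plus one-hot protocol encoding.
lo, hi = min(r[0] for r in clean), max(r[0] for r in clean)
protos = sorted({r[1] for r in clean})
X = [[(r[0] - lo) / (hi - lo)] + [1 if r[1] == p else 0 for p in protos]
     for r in clean]
y = [r[2] for r in clean]

# Splitting: a simple shuffled 80/20 train/test partition.
idx = list(range(len(X)))
random.shuffle(idx)
cut = int(0.8 * len(idx))
train_idx, test_idx = idx[:cut], idx[cut:]
print(len(train_idx), len(test_idx), len(X[0]))
```

In practice, libraries such as scikit-learn and imbalanced-learn provide these transformations (plus SMOTE) out of the box; the sketch only makes the order of operations explicit.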
3.3. Feature Reduction
- Principal component analysis (PCA): PCA reduces the dimensionality of the feature space while retaining the most informative components, which lowers both the model’s computational time and complexity. In cybersecurity, PCA is used to obtain valuable features from network traffic [38].
- Recursive feature elimination (RFE): RFE selects preferable features from the entire input feature set: high-ranked features are kept, while low-ranked ones are removed. This approach eliminates redundant features while retaining the most useful ones [11].
- The stacked sparse autoencoder (SSAE): A deep learning model used to learn high-level representations from behavior and activity data. SSAE was first applied to automatically extract deep sparse features for classification; the resulting low-dimensional sparse features have then been used to train various fundamental classifiers [78].
- Autoencoder (AE): AE is an unsupervised feature learning method used for both network anomaly detection and malware classification. Using a deep AE, the original input is transformed into an improved representation through hierarchical feature learning, in which each level captures a different degree of complexity. With linear activation functions in all units, an AE captures a subspace of the principal components of the input; incorporating non-linear activation functions is expected to help detect more valuable features [54,78,79].
- Stacked Non-Symmetric Deep Autoencoder (SNDAE): It stacks the novel non-symmetric deep autoencoder (NDAE), an unsupervised approach for obtaining valuable features. A classification model built from stacked NDAEs combined with the RF algorithm achieved excellent feature reduction on two datasets: NSL-KDD and KDDCup99 [54].
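The PCA step can be sketched directly via the singular value decomposition; the synthetic 6-feature traffic matrix below is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic traffic features: 200 flows, 6 correlated columns standing in
# for packet counts, byte counts, durations, etc. (illustrative only).
base = rng.normal(size=(200, 2))
X = np.hstack([base,
               base @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(200, 4))])

# PCA via SVD: center the data, decompose, keep the top-k components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
X_reduced = Xc @ Vt[:k].T          # 6-D flows compressed to 2-D
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(X_reduced.shape, round(float(explained), 3))
```

Because the synthetic flows are intrinsically two-dimensional, two components retain almost all of the variance, which is the property that makes PCA attractive for cutting NIDS training cost.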
3.4. Feature Extraction
1. Feature Extraction based on Flow: This is the most commonly used technique, where details are extracted from network packets belonging to the same connection or flow. Each extracted feature refers to the connection between two given devices. The features are mainly obtained from the network packet header and fall into three categories, as follows:
- Aggregate information can be used to compute the number of specific packet features in the connection, like how many bytes have been sent, the number of flags used, and packets transferred.
- Summary statistics give the entire information in detail regarding the whole connection, including the used protocol/service and the duration of the connection.
- Statistical information is used to evaluate statistics of the packet features in the network connection, e.g., the mean and the standard deviation of interarrival time and packet size of the network [81].
2. Feature Extraction based on Packet: This technique obtains features at the packet level rather than at the connection or flow level. Packet-based features are usually statistically aggregated, similar to flow features, but with additional correlation analysis between inter-packet time and packet size across two devices. Features can be extracted over different time windows to smooth traffic variations and capture the sequential nature of the data packets. Packet-based feature extraction can be employed in different detection systems [82,83].
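The three flow-feature categories above (aggregate, summary, and statistical) can be sketched in a few lines; the packet tuples and field names are assumptions invented for this example, not a real capture format:

```python
import statistics
from collections import defaultdict

# Toy packet records: (flow_id, timestamp_s, size_bytes). In practice these
# would come from a packet capture tool.
packets = [
    ("flowA", 0.00, 500), ("flowA", 0.02, 1500), ("flowA", 0.05, 400),
    ("flowA", 0.09, 1500), ("flowB", 0.00, 60), ("flowB", 1.00, 60),
]

flows = defaultdict(list)
for fid, ts, size in packets:
    flows[fid].append((ts, size))

features = {}
for fid, pkts in flows.items():
    pkts.sort()
    times = [t for t, _ in pkts]
    sizes = [s for _, s in pkts]
    gaps = [b - a for a, b in zip(times, times[1:])]
    features[fid] = {
        # Aggregate information: per-connection totals.
        "n_packets": len(pkts),
        "total_bytes": sum(sizes),
        # Summary statistics: whole-connection details.
        "duration": times[-1] - times[0],
        # Statistical information: mean/std of size and inter-arrival time.
        "mean_size": statistics.mean(sizes),
        "std_size": statistics.pstdev(sizes),
        "mean_iat": statistics.mean(gaps) if gaps else 0.0,
    }

print(features["flowA"]["n_packets"], features["flowB"]["total_bytes"])
```

Packet-based extraction would instead emit one feature vector per packet (optionally windowed), rather than one per flow as above.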
4. Adversarial Attack Against ML-Based NIDS Models
4.1. Black-Box Attack
4.1.1. Poisoning Attack
4.1.2. Evasion Attack
4.2. White-Box Attack
4.2.1. Poisoning Attack
4.2.2. Evasion Attack
4.3. Gray-Box Attack
4.3.1. Poisoning Attack
4.3.2. Evasion Attack
4.4. Combination of Poisoning and Evasion Attacks
5. Defending ML-Based NIDS Models
5.1. Mitigation of Black-Box Attack
5.1.1. Mitigating Poisoning Attack
5.1.2. Mitigating Evasion Attack
5.2. Mitigation of White-Box Attack
5.2.1. Mitigating Poisoning Attack
5.2.2. Mitigating Evasion Attack
5.3. Mitigation of Gray-Box Attack
5.3.1. Mitigating Poisoning Attack
5.3.2. Mitigating Evasion Attack
5.4. Mitigating the Combination of Poisoning and Evasion Attacks
6. Discussion
- No defensive technique presented to date, whether a standalone NIDS or a hybrid ML–NIDS, provides a very high detection rate against strong attacks such as GAN, DeepFool, and KDE. Extensive experiments in [44] showed that no defensive approach could prevent sophisticated attacks generated by combining different datasets at different attack percentage rates.
- Although GANs have been shown to be among the most promising and powerful attacks, the recent study in [44] confirmed that other attack models, e.g., DeepFool, can be stronger than GANs if trained carefully.
- Adversarial attacks have been extensively explored in the image and speech processing domains; however, their impact on NIDS remains underexplored. Creating adversarial examples in real time from network traffic at the physical layer has not yet been proposed, which opens the door to interesting future research.
- Many ML-based NIDS detect attacks very accurately, but this usually makes them slower and more resource-intensive to run. Conversely, simpler detection systems can work faster and more cheaply by removing less-critical features; while this makes the system more efficient, it causes a drop in accuracy [158].
- Current defensive techniques typically address evasion or poisoning attacks separately, leaving models vulnerable to combined threats. Our analysis shows that very few techniques can effectively mitigate both attacks at once. To close this gap, researchers should develop strong unified defenses against both attack types, which, in turn, would make detection systems safer against real-world adversarial approaches.
- Unified defenses against both evasion and poisoning attacks, consistent with the 2025 NIST taxonomy [10], can be implemented using emerging ensemble-based frameworks [131]. For instance, hybrid adversarial-training ensembles have been shown to achieve 20–30% improvements in model robustness. These defenses can be further strengthened through hardware acceleration, e.g., FPGA and GPU pipelines for real-time adversarial example generation, which have been shown to reduce latency by approximately 50% in SDN testbeds [60,104].
- Academic and real-world gaps should be bridged with physical-layer attacks, informed by 2025 reviews of NIDS-specific adversarial impacts [14]. For example, physical-layer simulations, such as packet-level perturbation models deployed in SDN testbeds [103], enable realistic evaluation of adversarial traffic behavior under operational network conditions.
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ahmad, S.; Arif, F.; Zabeehullah, Z.; Iltaf, N. Novel approach using deep learning for intrusion detection and classification of the network traffic. In Proceedings of the 2020 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), Tunis, Tunisia, 22–24 June 2020. [Google Scholar]
- Alasad, Q.; Lin, J.; Yuan, J.-S.; Fan, D.; Awad, A. Resilient and secure hardware devices using ASL. ACM J. Emerg. Technol. Comput. Syst. 2021, 17, 1–26. [Google Scholar] [CrossRef]
- Alasad, Q.; Yuan, J.-S.; Subramanyan, P. Strong logic obfuscation with low overhead against IC reverse engineering attacks. ACM Trans. Des. Autom. Electron. Syst. 2020, 25, 1–34. [Google Scholar] [CrossRef]
- Hashemi, M.J.; Keller, E. Enhancing robustness against adversarial examples in network intrusion detection systems. In Proceedings of the 2020 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Leganes, Spain, 10–12 November 2020. [Google Scholar]
- Alahmed, S.; Alasad, Q.; Hammood, M.M.; Yuan, J.-S.; Alawad, M. Mitigation of black-box attacks on intrusion detection systems-based ML. Computers 2022, 11, 115. [Google Scholar] [CrossRef]
- Aboueata, N.; Alrasbi, S.; Erbad, A.; Kassler, A.; Bhamare, D. Supervised machine learning techniques for efficient network intrusion detection. In Proceedings of the 2019 28th International Conference on Computer Communication and Networks (ICCCN), Valencia, Spain, 29 July–1 August 2019. [Google Scholar]
- Kumari, A.; Mehta, A.K. A hybrid intrusion detection system based on decision tree and support vector machine. In Proceedings of the 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 30–31 October 2020. [Google Scholar]
- Sah, G.; Banerjee, S. Feature reduction and classification techniques for intrusion detection system. In Proceedings of the 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 28–30 July 2020. [Google Scholar]
- Serinelli, B.M.; Collen, A.; Nijdam, N.A. Training guidance with kdd cup 1999 and nsl-kdd data sets of anidinr: Anomaly-based network intrusion detection system. Procedia Comput. Sci. 2020, 175, 560–565. [Google Scholar] [CrossRef]
- Vassilev, A.; Oprea, A.; Fordyce, A.; Anderson, H. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations; NIST Trustworthy and Responsible AI Report; NIST AI 100-2e2025; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2025.
- Usama, M.; Asim, M.; Latif, S.; Qadir, J.; Al-Fuqaha, A. Generative adversarial networks for launching and thwarting adversarial attacks on network intrusion detection systems. In Proceedings of the 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), Tangier, Morocco, 24–28 June 2019. [Google Scholar]
- Chen, J.; Wu, D.; Zhao, Y.; Sharma, N.; Blumenstein, M.; Yu, S. Fooling intrusion detection systems using adversarially autoencoder. Digit. Commun. Netw. 2021, 7, 453–460. [Google Scholar] [CrossRef]
- Alatwi, H.A.; Aldweesh, A. Adversarial black-box attacks against network intrusion detection systems: A survey. In Proceedings of the 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 10–13 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 34–40. [Google Scholar]
- Hasan, M.M.; Islam, R.; Mamun, Q.; Islam, M.Z.; Gao, J. Adversarial attacks on deep learning-based network intrusion detection systems: A taxonomy and review. SSRN 5096420. 2025. Available online: https://ssrn.com/abstract=5096420 (accessed on 7 October 2025).
- Alqahtani, A.; AlShaher, H. Anomaly-Based Intrusion Detection Systems Using Machine Learning. J. Cybersecur. Inf. Manag. 2024, 14, 20. [Google Scholar]
- Alatwi, H.A.; Morisset, C. Adversarial machine learning in network intrusion detection domain: A systematic review. arXiv 2021, arXiv:2112.03315. [Google Scholar] [CrossRef]
- Rawal, A.; Rawat, D.; Sadler, B.M. Recent advances in adversarial machine learning: Status, challenges and perspectives. In Proceedings of the Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, Online, 12–16 April 2021; Volume 11746. [Google Scholar]
- Ahmad, Z.; Khan, A.S.; Shiang, C.W.; Abdullah, J.; Ahmad, F. Network intrusion detection system: A systematic study of machine learning and deep learning approaches. Trans. Emerg. Telecommun. Techn. 2021, 32, e4150. [Google Scholar] [CrossRef]
- Ozkan-Okay, M.; Samet, R.; Aslan, Ö.; Gupta, D. A comprehensive systematic literature review on intrusion detection systems. IEEE Access 2021, 9, 157727–157760. [Google Scholar] [CrossRef]
- Rosenberg, I.; Shabtai, A.; Elovici, Y.; Rokach, L. Adversarial machine learning attacks and defense methods in the cyber security domain. ACM Comput. Surv. 2021, 54, 1–36. [Google Scholar] [CrossRef]
- Jmila, H.; Khedher, M.I. Adversarial machine learning for network intrusion detection: A comparative study. Comput. Netw. 2022, 214, 109073. [Google Scholar] [CrossRef]
- Khazane, H.; Ridouani, M.; Salahdine, F.; Kaabouch, N. A holistic review of machine learning adversarial attacks in IoT networks. Future Internet 2024, 16, 32. [Google Scholar] [CrossRef]
- Alotaibi, A.; Rassam, M.A. Adversarial machine learning attacks against intrusion detection systems: A survey on strategies and defense. Future Internet 2023, 15, 62. [Google Scholar] [CrossRef]
- Lim, W.; Yong, K.S.C.; Lau, B.T.; Tan, C.C.L. Future of generative adversarial networks (GAN) for anomaly detection in network security: A review. Comput. Secur. 2024, 139, 103733. [Google Scholar] [CrossRef]
- Pacheco, Y.; Sun, W. Adversarial Machine Learning: A Comparative Study on Contemporary Intrusion Detection Datasets. In Proceedings of the International Conference on Information Systems Security and Privacy, Virtual, 11–13 February 2021; pp. 160–171. [Google Scholar]
- He, K.; Kim, D.D.; Asghar, M.R. Adversarial machine learning for network intrusion detection systems: A comprehensive survey. IEEE Commun. Surv. Tutor. 2023, 25, 538–566. [Google Scholar] [CrossRef]
- Ibitoye, O.; Abou-Khamis, R.; Shehaby, M.E.; Matrawy, A.; Shafiq, M.O. The Threat of Adversarial Attacks on Machine Learning in Network Security—A Survey. arXiv 2019, arXiv:1911.02621. [Google Scholar] [CrossRef]
- Liu, Q.; Li, P.; Zhao, W.; Cai, W.; Yu, S.; Leung, V.C.M. A survey on security threats and defensive techniques of machine learning: A data driven view. IEEE Access 2018, 6, 12103–12117. [Google Scholar] [CrossRef]
- Peng, Y.; Fu, G.; Luo, Y.; Hu, J.; Li, B.; Yan, Q. Detecting adversarial examples for network intrusion detection system with GAN. In Proceedings of the 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 16–18 October 2020. [Google Scholar]
- Apruzzese, G.; Colajanni, M.; Ferretti, L.; Marchetti, M. Addressing adversarial attacks against security systems based on machine learning. In Proceedings of the 2019 11th International Conference On Cyber Conflict (CyCon), Tallinn, Estonia, 28–31 May 2019. [Google Scholar]
- Liu, H. Automated Network Defense: A Systematic Survey and Analysis of AutoML Paradigms for Network Intrusion Detection. Appl. Sci. 2025, 15, 10389. [Google Scholar] [CrossRef]
- Apruzzese, G.; Andreolini, M.; Ferretti, L.; Marchetti, M.; Colajanni, M. Modeling realistic adversarial attacks against network intrusion detection systems. Digit. Threat. Res. Pract. 2022, 3, 1–19. [Google Scholar] [CrossRef]
- Alhajjar, E.; Maxwell, P.; Bastian, N. Adversarial machine learning in network intrusion detection systems. Expert Syst. Appl. 2021, 186, 115782. [Google Scholar] [CrossRef]
- Ayub, M.A.; Johnson, W.A.; Talbert, D.A.; Siraj, A. Model evasion attack on intrusion detection systems using adversarial machine learning. In Proceedings of the 2020 54th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 18–20 March 2020. [Google Scholar]
- Silva, S.H.; Najafirad, P. Opportunities and challenges in deep learning adversarial robustness: A survey. arXiv 2020, arXiv:2007.00753. [Google Scholar] [CrossRef]
- Marano, G.C.; Rosso, M.M.; Aloisio, A.; Cirrincione, G. Generative adversarial networks review in earthquake-related engineering fields. Bull. Earthq. Eng. 2024, 22, 3511–3562. [Google Scholar] [CrossRef]
- Bourou, S.; El Saer, A.; Velivassaki, T.-H.; Voulkidis, A.; Zahariadis, T. A review of tabular data synthesis using GANs on an IDS dataset. Information 2021, 12, 375. [Google Scholar] [CrossRef]
- Soleymanzadeh, R.; Kashef, R. Efficient intrusion detection using multi-player generative adversarial networks (GANs): An ensemble-based deep learning architecture. Neural Comput. Appl. 2023, 35, 12545–12563. [Google Scholar] [CrossRef]
- Dutta, I.K.; Ghosh, B.; Carlson, A.; Totaro, M.; Bayoumi, M. Generative adversarial networks in security: A survey. In Proceedings of the 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 28–31 October 2020. [Google Scholar]
- Chauhan, R.; Heydari, S.S. Polymorphic adversarial DDoS attack on IDS using GAN. In Proceedings of the 2020 International Symposium on Networks, Computers and Communications (ISNCC), Montreal, QC, Canada, 20–22 October 2020. [Google Scholar]
- Chen, P.-Y.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.-J. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Salt Lake City, UT, USA, 14–18 October 2017. [Google Scholar]
- Golovin, D.; Karro, J.; Kochanski, G.; Lee, C.; Song, X.; Zhang, Q. Gradientless descent: High-dimensional zeroth-order optimization. arXiv 2019, arXiv:1911.06317. [Google Scholar]
- Kumar, S.; Gupta, S.; Buduru, A.B. BB-Patch: BlackBox Adversarial Patch-Attack using Zeroth-Order Optimization. arXiv 2024, arXiv:2405.06049. [Google Scholar]
- Ye, H.; Huang, Z.; Fang, C.; Li, C.J.; Zhang, T. Hessian-Aware Zeroth-Order Optimization. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 4869–4877. [Google Scholar] [CrossRef] [PubMed]
- Tian, X.; Kong, Y.; Gong, Y.; Huang, Y.; Wang, S.; Du, G. Dynamic geothermal resource assessment: Integrating reservoir simulation and Gaussian Kernel Density Estimation under geological uncertainties. Geothermics 2024, 120, 103017. [Google Scholar] [CrossRef]
- Aghaei, E.; Serpen, G. Host-based anomaly detection using Eigentraces feature extraction and one-class classification on system call trace data. arXiv 2019, arXiv:1911.11284. [Google Scholar]
- Pillonetto, G.; Aravkin, A.; Gedon, D.; Ljung, L.; Ribeiro, A.H.; Schön, T.B. Deep networks for system identification: A survey. Automatica 2025, 171, 111907. [Google Scholar] [CrossRef]
- Ahsan, M.; Khusna, H.; Wibawati; Lee, M.H. Support vector data description with kernel density estimation (SVDD-KDE) control chart for network intrusion monitoring. Sci. Rep. 2023, 13, 19149. [Google Scholar] [CrossRef]
- Chen, Y.-C. A tutorial on kernel density estimation and recent advances. Biostat. Epidemiol. 2017, 1, 161–187. [Google Scholar] [CrossRef]
- Węglarczyk, S. Kernel density estimation and its application. In ITM Web of Conferences; EDP Sciences: Les Ulis, France, 2018; Volume 23, p. 37. [Google Scholar]
- Petrovsky, D.V.; Rudnev, V.R.; Nikolsky, K.S.; Kulikova, L.I.; Malsagova, K.M.; Kopylov, A.T.; Kaysheva, A.L. PSSNet—An accurate super-secondary structure for protein segmentation. Int. J. Mol. Sci. 2022, 23, 14813. [Google Scholar] [CrossRef]
- Moosavi-Dezfooli, S.-M.; Fawzi, A.; Frossard, P. Deepfool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2574–2582. [Google Scholar]
- Fatehi, N.; Alasad, Q.; Alawad, M. Towards adversarial attacks for clinical document classification. Electronics 2022, 12, 129. [Google Scholar] [CrossRef]
- Shone, N.; Ngoc, T.N.; Phai, V.D.; Shi, Q. A deep learning approach to network intrusion detection. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 41–50. [Google Scholar] [CrossRef]
- Alahmed, S.; Alasad, Q.; Yuan, J.-S.; Alawad, M. Impacting robustness in deep learning-based NIDS through poisoning attacks. Algorithms 2024, 17, 155. [Google Scholar] [CrossRef]
- Jakubovitz, D.; Giryes, R. Improving DNN robustness to adversarial attacks using Jacobian regularization. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
- Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on security and privacy (sp), San Jose, CA, USA, 22–26 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 39–57. [Google Scholar]
- Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European symposium on security and privacy (EuroS&P), Saarbruecken, Germany, 21–24 March 2016; IEEE: Piscataway, NJ, USA, 2016. [Google Scholar]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar]
- Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018; pp. 99–112. [Google Scholar]
- Ahmed, M.; Alasad, Q.; Yuan, J.-S.; Alawad, M. Re-Evaluating Deep Learning Attacks and Defenses in Cybersecurity Systems. Big Data Cogn. Comput. 2024, 8, 191. [Google Scholar] [CrossRef]
- Benzaïd, C.; Boukhalfa, M.; Taleb, T. Robust self-protection against application-layer (D) DoS attacks in SDN environment. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Republic of Korea, 25–28 May 2020. [Google Scholar]
- Roshan, M.K.; Zafar, A. Boosting robustness of network intrusion detection systems: A novel two phase defense strategy against untargeted white-box optimization adversarial attack. Expert Syst. Appl. 2024, 249, 123567. [Google Scholar] [CrossRef]
- Aljawarneh, S.; Aldwairi, M.; Yassein, M.B. Anomaly-based intrusion detection system through feature selection analysis and building hybrid efficient model. J. Comput. Sci. 2018, 25, 152–160. [Google Scholar] [CrossRef]
- Moustafa, N.; Slay, J. UNSW-NB15: A comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set). In Proceedings of the 2015 Military Communications and Information Systems Conference (MilCIS), Canberra, ACT, Australia, 10–12 November 2015. [Google Scholar]
- Alosaimi, S.; Almutairi, S.M. An intrusion detection system using BoT-IoT. Appl. Sci. 2023, 13, 5427. [Google Scholar] [CrossRef]
- Song, J.; Takakura, H.; Okabe, Y.; Eto, M.; Inoue, D.; Nakao, K. Statistical analysis of honeypot data and building of Kyoto 2006+ dataset for NIDS evaluation. In Proceedings of the First Workshop on Building Analysis Datasets and Gathering Experience Returns for Security, Salzburg, Austria, 10 April 2011. [Google Scholar]
- Manisha, P.; Gujar, S. Generative Adversarial Networks (GANs): What it can generate and What it cannot? arXiv 2018, arXiv:1804.00140. [Google Scholar]
- Sharafaldin, I.; Lashkari, A.H.; Ghorbani, A.A. Toward generating a new intrusion detection dataset and intrusion traffic characterization. ICISSp 2018, 1, 108–116. [Google Scholar]
- Liu, L.; Wang, P.; Lin, J.; Liu, L. Intrusion detection of imbalanced network traffic based on machine learning and deep learning. IEEE Access 2020, 9, 7550–7563. [Google Scholar] [CrossRef]
- Kilincer, I.F.; Ertam, F.; Sengur, A. Machine learning methods for cyber security intrusion detection: Datasets and comparative study. Comput. Netw. 2021, 188, 107840. [Google Scholar] [CrossRef]
- Sharafaldin, I.; Lashkari, A.H.; Hakak, S.; Ghorbani, A.A. Developing realistic distributed denial of service (DDoS) attack dataset and taxonomy. In Proceedings of the 2019 International Carnahan Conference on Security Technology, Chennai, India, 1–3 October 2019. [Google Scholar]
- Rizvi, S.; Scanlon, M.; Mcgibney, J.; Sheppard, J. Application of artificial intelligence to network forensics: Survey, challenges and future directions. IEEE Access 2022, 10, 110362–110384. [Google Scholar] [CrossRef]
- Singh, G.; Khare, N. A survey of intrusion detection from the perspective of intrusion datasets and machine learning techniques. Int. J. Comput. Appl. 2022, 44, 659–669. [Google Scholar] [CrossRef]
- Rampure, V.; Tiwari, A. A rough set based feature selection on KDD CUP 99 data set. Int. J. Database Theory Appl. 2015, 8, 149–156. [Google Scholar] [CrossRef]
- Sharma, A.; Babbar, H. Detecting cyber threats in real-time: A supervised learning perspective on the CTU-13 dataset. In Proceedings of the 2024 5th International Conference for Emerging Technology (INCET), Belgaum, India, 24–26 May 2024. [Google Scholar]
- Yan, B.; Han, G. Effective feature extraction via stacked sparse autoencoder to improve intrusion detection system. IEEE Access 2018, 6, 41238–41248. [Google Scholar] [CrossRef]
- Yousefi-Azar, M.; Varadharajan, V.; Hamey, L.; Tupakula, U. Autoencoder-based feature learning for cyber security applications. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017. [Google Scholar]
- Han, D.; Wang, Z.; Zhong, Y.; Chen, W.; Yang, J.; Lu, S.; Shi, X.; Yin, X. Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors. IEEE J. Sel. Areas Commun. 2021, 39, 2632–2647. [Google Scholar] [CrossRef]
- Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A detailed analysis of the KDD CUP 99 data set. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009. [Google Scholar]
- Thing, V.L. IEEE 802.11 network anomaly detection and attack classification: A deep learning approach. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017. [Google Scholar]
- Mirsky, Y.; Doitshman, T.; Elovici, Y.; Shabtai, A. Kitsune: An ensemble of autoencoders for online network intrusion detection. arXiv 2018, arXiv:1802.09089. [Google Scholar] [CrossRef]
- Narodytska, N.; Kasiviswanathan, S.P. Simple Black-Box Adversarial Attacks on Deep Neural Networks. In CVPR Workshops; Elsevier: Amsterdam, The Netherlands, 2017; Volume 2. [Google Scholar]
- Ghadimi, S.; Lan, G. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM J. Optim. 2013, 23, 2341–2368. [Google Scholar] [CrossRef]
- Ilyas, A.; Engstrom, L.; Athalye, A.; Lin, J. Black-box adversarial attacks with limited queries and information. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 261–273. [Google Scholar]
- Wierstra, D.; Schaul, T.; Glasmachers, T.; Sun, Y.; Schmidhuber, J. Natural evolution strategies. arXiv 2011, arXiv:1106.4487. [Google Scholar]
- Li, P.; Zhao, W.; Liu, Q.; Liu, X.; Yu, L. Poisoning machine learning based wireless IDSs via stealing learning model. In Proceedings of the International Conference on Wireless Algorithms, Systems, and Applications, Tianjin, China, 20–22 June 2018. [Google Scholar]
- Zhou, X.; Liang, W.; Li, W.; Yan, K.; Shimizu, S.; Wang, K.I.-K. Hierarchical adversarial attacks against graph-neural-network-based IoT network intrusion detection system. IEEE Internet Things J. 2021, 9, 9310–9319. [Google Scholar] [CrossRef]
- Hamza, A.; Gharakheili, H.H.; Benson, T.A.; Sivaraman, V. Detecting volumetric attacks on IoT devices via SDN-based monitoring of MUD activity. In Proceedings of the 2019 ACM Symposium on SDN Research, San Jose, CA, USA, 3–4 April 2019; pp. 36–48. [Google Scholar]
- Xu, K.; Li, C.; Tian, Y.; Sonobe, T.; Kawarabayashi, K.-I.; Jegelka, S. Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning; PMLR: New York, NY, USA, 2018; pp. 5453–5462. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Zhou, X.; Liang, W.; Wang, K.I.-K.; Huang, R.; Jin, Q. Academic influence aware and multidimensional network analysis for research collaboration navigation based on scholarly big data. IEEE Trans. Emerg. Top. Comput. 2018, 9, 246–257. [Google Scholar] [CrossRef]
- Ma, J.; Ding, S.; Mei, Q. Towards more practical adversarial attacks on graph neural networks. Adv. Neural Inf. Process. Syst. 2020, 33, 4756–4766. [Google Scholar]
- Sun, Z.; Ambrosi, E.; Bricalli, A.; Ielmini, D. In-memory PageRank accelerator with a cross-point array of resistive memories. IEEE Trans. Electron Devices 2020, 67, 1466–1470. [Google Scholar] [CrossRef]
- Kotak, J.; Elovici, Y. Adversarial attacks against IoT identification systems. IEEE Internet Things J. 2022, 10, 7868–7883. [Google Scholar] [CrossRef]
- Sivanathan, A.; Gharakheili, H.H.; Loi, F.; Radford, A.; Wijenayake, C.; Vishwanath, A. Classifying IoT devices in smart environments using network traffic characteristics. IEEE Trans. Mob. Comput. 2018, 18, 1745–1759. [Google Scholar] [CrossRef]
- Tian, J. Adversarial vulnerability of deep neural network-based gait event detection: A comparative study using accelerometer-based data. Biomed. Signal Process. Control. 2022, 73, 103429. [Google Scholar] [CrossRef]
- Kuppa, A.; Grzonkowski, S.; Asghar, M.R.; Le-Khac, N.-A. Black box attacks on deep anomaly detectors. In Proceedings of the 14th International Conference on Availability, Reliability and Security, Canterbury, UK, 26–29 August 2019; pp. 1–10. [Google Scholar]
- Liu, F.T.; Ting, K.M.; Zhou, Z.-H. Isolation forest. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining, Pisa, Italy, 15–19 December 2008. [Google Scholar]
- Zenati, H.; Romain, M.; Foo, C.-S.; Lecouat, B.; Chandrasekhar, V. Adversarially learned anomaly detection. In Proceedings of the 2018 IEEE International Conference on Data Mining (ICDM), Singapore, 17–20 November 2018. [Google Scholar]
- Zong, B.; Song, Q.; Min, M.R.; Cheng, W.; Lumezanu, C.; Cho, D.; Chen, H. Deep autoencoding gaussian mixture model for unsupervised anomaly detection. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Langs, G.; Schmidt-Erfurth, U. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Med. Image Anal. 2019, 54, 30–44. [Google Scholar] [CrossRef]
- Aiken, J.; Scott-Hayward, S. Investigating adversarial attacks against network intrusion detection systems in SDNs. In Proceedings of the 2019 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Dallas, TX, USA, 12–14 November 2019. [Google Scholar]
- Yan, Q.; Wang, M.; Huang, W.; Luo, X.; Yu, F.R. Automatically synthesizing DoS attack traces using generative adversarial networks. Int. J. Mach. Learn. Cybern. 2019, 10, 3387–3396. [Google Scholar] [CrossRef]
- Shu, D.; Leslie, N.O.; Kamhoua, C.A.; Tucker, C.S. Generative adversarial attacks against intrusion detection systems using active learning. In Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning, Linz, Austria, 13 July 2020. [Google Scholar]
- Guo, S.; Zhao, J.; Li, X.; Duan, J.; Mu, D.; Jing, X. A Black-Box Attack Method against Machine-Learning-Based Anomaly Network Flow Detection Models. Secur. Commun. Netw. 2021, 2021, 5578335. [Google Scholar] [CrossRef]
- Sharon, Y.; Berend, D.; Liu, Y.; Shabtai, A.; Elovici, Y. Tantra: Timing-based adversarial network traffic reshaping attack. IEEE Trans. Inf. Forensics Secur. 2022, 17, 3225–3237. [Google Scholar] [CrossRef]
- Zolbayar, B.-E.; Sheatsley, R.; McDaniel, P.; Weisman, M.J.; Zhu, S.; Zhu, S.; Krishnamurthy, S. Generating practical adversarial network traffic flows using NIDSGAN. arXiv 2022, arXiv:2203.06694. [Google Scholar] [CrossRef]
- Hou, T. IoTGAN: GAN powered camouflage against machine learning based IoT device identification. In Proceedings of the 2021 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), Los Angeles, CA, USA, 13–15 December 2021. [Google Scholar]
- Bao, J.; Hamdaoui, B.; Wong, W.-K. IoT device type identification using hybrid deep learning approach for increased IoT security. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Limassol, Cyprus, 15–19 June 2020. [Google Scholar]
- Aldhaheri, S.; Alhuzali, A. SGAN-IDS: Self-attention-based generative adversarial network against intrusion detection systems. Sensors 2023, 23, 7796. [Google Scholar] [CrossRef]
- Fan, M.; Liu, Y.; Chen, C.; Yu, S.; Guo, W.; Wang, L. Toward Evaluating the Reliability of Deep-Neural-Network-Based IoT Devices. IEEE Internet Things J. 2021, 9, 17002–17013. [Google Scholar] [CrossRef]
- Wong, E.; Rice, L.; Kolter, J.Z. Fast is better than free: Revisiting adversarial training. arXiv 2020, arXiv:2001.03994. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Nair, V.; Hinton, G. The CIFAR-10 and CIFAR-100 Datasets. 2014. Available online: https://www.cs.toronto.edu/~kriz/cifar.html (accessed on 3 February 2025).
- Usama, M.; Qadir, J.; Al-Fuqaha, A.; Hamdi, M. The adversarial machine learning conundrum: Can the insecurity of ML become the achilles’ heel of cognitive networks? IEEE Netw. 2019, 34, 196–203. [Google Scholar] [CrossRef]
- Abusnaina, A.; Khormali, A.; Nyang, D.; Yuksel, M.; Mohaisen, A. Examining the robustness of learning-based DDoS detection in software defined networks. In Proceedings of the 2019 IEEE Conference on Dependable and Secure Computing (DSC), Hangzhou, China, 23–25 June 2019. [Google Scholar]
- Hashemi, M.J.; Cusack, G.; Keller, E. Towards evaluation of NIDSs in adversarial setting. In Proceedings of the 3rd ACM CoNEXT Workshop on Big Data, Machine Learning and Artificial Intelligence for Data Communication Networks, Orlando, FL, USA, 9 December 2019; pp. 14–21. [Google Scholar]
- Zenati, H.; Foo, C.S.; Lecouat, B.; Manek, G.; Chandrasekhar, V.R. Efficient GAN-based anomaly detection. arXiv 2018, arXiv:1802.06222. [Google Scholar]
- Homoliak, I.; Teknos, M.; Ochoa, M.; Breitenbacher, D.; Hosseini, S.; Hanacek, P. Improving network intrusion detection classifiers by non-payload-based exploit-independent obfuscations: An adversarial approach. arXiv 2018, arXiv:1805.02684. [Google Scholar] [CrossRef]
- Teuffenbach, M.; Piatkowska, E.; Smith, P. Subverting network intrusion detection: Crafting adversarial examples accounting for domain-specific constraints. In Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Dublin, Ireland, 25–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 301–320. [Google Scholar]
- Wang, Y.; Wang, Y.; Tong, E.; Niu, W.; Liu, J. A C-IFGSM based adversarial approach for deep learning based intrusion detection. In Proceedings of the International Conference on Verification and Evaluation of Computer and Communication Systems, Xi’an, China, 26–27 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 207–221. [Google Scholar]
- Anthi, E. Hardening machine learning denial of service (DoS) defences against adversarial attacks in IoT smart home networks. Comput. Secur. 2021, 108, 102352. [Google Scholar] [CrossRef]
- Sánchez, P.M.S.; Celdrán, A.H.; Bovet, G.; Pérez, G.M. Adversarial attacks and defenses on ML-and hardware-based IoT device fingerprinting and identification. Future Gener. Comput. Syst. 2024, 152, 30–42. [Google Scholar] [CrossRef]
- Sánchez, P.M.S.; Valero, J.M.J.; Celdrán, A.H.; Bovet, G.; Pérez, M.G.; Pérez, G.M. LwHBench: A low-level hardware component benchmark and dataset for Single Board Computers. arXiv 2022, arXiv:2204.08516. [Google Scholar] [CrossRef]
- Roshan, K.; Zafar, A.; Haque, S.B.U. Untargeted white-box adversarial attack with heuristic defence methods in real-time deep learning based network intrusion detection system. Comput. Commun. 2024, 218, 97–113. [Google Scholar] [CrossRef]
- Xiao, H.; Biggio, B.; Brown, G.; Fumera, G.; Eckert, C.; Roli, F. Is feature selection secure against training data poisoning? In International Conference on Machine Learning; PMLR: New York, NY, USA, 2015; pp. 1689–1698. [Google Scholar]
- Biggio, B.; Nelson, B.; Laskov, P. Poisoning attacks against support vector machines. arXiv 2012, arXiv:1206.6389. [Google Scholar]
- Yang, K.; Liu, J.; Zhang, C.; Fang, Y. Adversarial examples against the deep learning based network intrusion detection systems. In Proceedings of the MILCOM 2018-2018 IEEE Military Communications Conference (MILCOM), Los Angeles, CA, USA, 29–31 October 2018. [Google Scholar]
- Peng, X.; Huang, W.; Shi, Z. Adversarial attack against DoS intrusion detection: An improved boundary-based method. In Proceedings of the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), Portland, OR, USA, 4–6 November 2019. [Google Scholar]
- Awad, Z.; Zakaria, M.; Hassan, R. An enhanced ensemble defense framework for boosting adversarial robustness of intrusion detection systems. Sci. Rep. 2025, 15, 14177. [Google Scholar] [CrossRef] [PubMed]
- Lu, J.; Issaranon, T.; Forsyth, D. Safetynet: Detecting and rejecting adversarial examples robustly. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
- Metzen, J.H.; Genewein, T.; Fischer, V.; Bischoff, B. On detecting adversarial perturbations. arXiv 2017, arXiv:1702.04267. [Google Scholar] [CrossRef]
- Barreno, M.; Nelson, B.; Joseph, A.D.; Tygar, J.D. The security of machine learning. Mach. Learn. 2010, 81, 121–148. [Google Scholar] [CrossRef]
- Chen, X.; Liu, C.; Li, B.; Lu, K.; Song, D. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv 2017, arXiv:1712.05526. [Google Scholar] [CrossRef]
- Wang, J.; Dong, G.; Sun, J.; Wang, X.; Zhang, P. Adversarial sample detection for deep neural network through model mutation testing. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), Montreal, QC, Canada, 25–31 May 2019. [Google Scholar]
- Raghuvanshi, A.; Singh, U.K.; Sajja, G.S.; Pallathadka, H.; Asenso, E.; Kamal, M.; Singh, A.; Phasinam, K. Intrusion detection using machine learning for risk mitigation in IoT-enabled smart irrigation in smart farming. J. Food Qual. 2022, 2022, 3955514. [Google Scholar] [CrossRef]
- Benaddi, H.; Jouhari, M.; Ibrahimi, K.; Ben Othman, J.; Amhoud, E.M. Anomaly detection in industrial IoT using distributional reinforcement learning and generative adversarial networks. Sensors 2022, 22, 8085. [Google Scholar] [CrossRef]
- Li, G.; Ota, K.; Dong, M.; Wu, J.; Li, J. DeSVig: Decentralized swift vigilance against adversarial attacks in industrial artificial intelligence systems. IEEE Trans. Ind. Inform. 2019, 16, 3267–3277. [Google Scholar] [CrossRef]
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar] [CrossRef]
- Benaddi, H.; Jouhari, M.; Ibrahimi, K.; Benslimane, A.; Amhoud, E.M. Adversarial attacks against IoT networks using conditional GAN based learning. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 2788–2793. [Google Scholar]
- Odena, A.; Olah, C.; Shlens, J. Conditional image synthesis with auxiliary classifier gans. In International Conference on Machine Learning; PMLR: New York, NY, USA, 2017; pp. 2642–2651. [Google Scholar]
- Dhillon, G.S.; Azizzadenesheli, K.; Lipton, Z.C.; Bernstein, J.; Kossaifi, J.; Khanna, A.; Anandkumar, A. Stochastic activation pruning for robust adversarial defense. arXiv 2018, arXiv:1803.01442. [Google Scholar] [CrossRef]
- Khamis, R.A.; Shafiq, M.O.; Matrawy, A. Investigating resistance of deep learning-based ids against adversaries using min-max optimization. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020. [Google Scholar]
- Pawlicki, M.; Choraś, M.; Kozik, R. Defending network intrusion detection systems against adversarial evasion attacks. Future Gener. Comput. Syst. 2020, 110, 148–154. [Google Scholar] [CrossRef]
- Ganesan, A.; Sarac, K. Mitigating evasion attacks on machine learning based NIDS systems in SDN. In Proceedings of the 2021 IEEE 7th International Conference on Network Softwarization (NetSoft), Tokyo, Japan, 28 June–2 July 2021. [Google Scholar]
- Debicha, I.; Debatty, T.; Dricot, J.-M.; Mees, W.; Kenaza, T. Detect & reject for transferability of black-box adversarial attacks against network intrusion detection systems. In Proceedings of the International Conference on Advances in Cyber Security, Penang, Malaysia, 24–25 August 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 329–339. [Google Scholar]
- Novaes, M.P.; Carvalho, L.F.; Lloret, J.; Proença, M.L. Adversarial Deep Learning approach detection and defense against DDoS attacks in SDN environments. Future Gener. Comput. Syst. 2021, 125, 156–167. [Google Scholar] [CrossRef]
- Yumlembam, R.; Issac, B.; Jacob, S.M.; Yang, L. IoT-based Android malware detection using graph neural network with adversarial defense. IEEE Internet Things J. 2022, 10, 8432–8444. [Google Scholar] [CrossRef]
- Nelson, B.; Barreno, M.; Chi, F.J.; Joseph, A.D.; Rubinstein, B.I.P.; Saini, U.; Sutton, C.; Tygar, J.D.; Xia, K. Misleading learners: Co-opting your spam filter. In Machine Learning in Cyber Trust: Security, Privacy, and Reliability; Springer: Berlin/Heidelberg, Germany, 2009; pp. 17–51. [Google Scholar]
- Apruzzese, G.; Andreolini, M.; Marchetti, M.; Colacino, V.G.; Russo, G. AppCon: Mitigating evasion attacks to ML cyber detectors. Symmetry 2020, 12, 653. [Google Scholar] [CrossRef]
- Apruzzese, G.; Colajanni, M. Evading botnet detectors based on flows and random forest with adversarial samples. In Proceedings of the 2018 IEEE 17th International Symposium on Network Computing and Applications (NCA), Cambridge, MA, USA, 1–3 November 2018. [Google Scholar]
- Siganos, M.; Radoglou-Grammatikis, P.; Kotsiuba, I.; Markakis, E.; Moscholios, I.; Goudos, S.; Sarigiannidis, P. Explainable AI-based intrusion detection in the Internet of Things. In Proceedings of the 18th International Conference on Availability, Reliability and Security, Benevento, Italy, 29 August–1 September 2023; pp. 1–10. [Google Scholar]
- Park, C.; Lee, J.; Kim, Y.; Park, J.-G.; Kim, H.; Hong, D. An enhanced AI-based network intrusion detection system using generative adversarial networks. IEEE Internet Things J. 2022, 10, 2330–2345. [Google Scholar] [CrossRef]
- Sen, M.A. Attention-GAN for anomaly detection: A cutting-edge approach to cybersecurity threat management. arXiv 2024, arXiv:2402.15945. [Google Scholar]
- Qureshi, A.-U.-H.; Larijani, H.; Mtetwa, N.; Yousefi, M.; Javed, A. An adversarial attack detection paradigm with swarm optimization. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020. [Google Scholar]
- Punitha, A.; Vinodha, S.; Karthika, R.; Deepika, R. A feature reduction intrusion detection system using genetic algorithm. In Proceedings of the 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, 29–30 March 2019. [Google Scholar]
- Alasad, Q.; Hammood, M.M.; Alahmed, S. Performance and Complexity Tradeoffs of Feature Selection on Intrusion Detection System-Based Neural Network Classification with High-Dimensional Dataset. In Proceedings of the International Conference on Emerging Technologies and Intelligent Systems, Riyadh, Saudi Arabia, 9–11 May 2022; Springer: Berlin/Heidelberg, Germany; pp. 533–542. [Google Scholar]
- Usama, M.; Qayyum, A.; Qadir, J.; Al-Fuqaha, A. Black-box adversarial machine learning attack on network traffic classification. In Proceedings of the 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), Tangier, Morocco, 24–28 June 2019. [Google Scholar]
- Warzyński, A.; Kołaczek, G. Intrusion detection systems vulnerability on adversarial examples. In Proceedings of the 2018 Innovations in Intelligent Systems and Applications (INISTA), Thessaloniki, Greece, 3–5 July 2018. [Google Scholar]
- Zhao, S.; Li, J.; Wang, J.; Zhang, Z.; Zhu, L.; Zhang, Y. AttackGAN: Adversarial attack against black-box IDS using generative adversarial networks. Procedia Comput. Sci. 2021, 187, 128–133. [Google Scholar] [CrossRef]
- Lin, Z.; Shi, Y.; Xue, Z. IDSGAN: Generative adversarial networks for attack generation against intrusion detection. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Chengdu, China, 16–19 May 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 79–91. [Google Scholar]
- Waskle, S.; Parashar, L.; Singh, U. Intrusion detection system using PCA with random forest approach. In Proceedings of the 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 2–4 July 2020. [Google Scholar]
- Mirza, A.H. Computer network intrusion detection using various classifiers and ensemble learning. In Proceedings of the 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018. [Google Scholar]
- Fitni, Q.R.S.; Ramli, K. Implementation of ensemble learning and feature selection for performance improvements in anomaly-based intrusion detection systems. In Proceedings of the 2020 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), Bali, Indonesia, 7–8 July 2020. [Google Scholar]
- Li, P.; Liu, Q.; Zhao, W.; Wang, D.; Wang, S. Chronic poisoning against machine learning based IDSs using edge pattern detection. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018. [Google Scholar]



| Factor | Description | Examples in NIDS Context | Refs. |
|---|---|---|---|
| Knowledge | Degree of knowledge the attacker has about the target model. | Black-box: Only input–output access. Gray-box: Limited dataset access. White-box: Full access to model parameters. | [13] |
| Timing | When the attack occurs in the ML lifecycle. | Evasion: During the testing, to misclassify (real time traffic). Poisoning: During the training, to corrupt data (data tampering). | [14,27,34] |
| Goals | Attacker’s objective. | Targeted: Specific misclassification. Non-targeted: General error induction. | [35] |
| Capabilities | Attacker’s access and actions on the system. | Full access: Modify model internals. Partial: Limited access to the model. Limited: Query outputs only. | [10,15,16,17,18,19,20,21,22,23,24,30] |
| Attack Tech. | Type (White/Black/Gray) | Strengths | Weaknesses | Success Rate in NIDS (Examples) | Datasets Tested | Refs. |
|---|---|---|---|---|---|---|
| GANs | Black/White/Gray | High evasion; realistic samples | Computationally intensive | 98% evasion [40] | CICIDS2017 | [14,24,36,37,38,39,40] |
| ZOO | Black | Gradient-free; black-box effective | High query count; slow | 97% on DNNs [41] | NSL-KDD | [1,41,42,44] |
| KDE | White/Black | Non-parametric density estimation | Bandwidth sensitivity | 95% outlier detection [48] | NSL-KDD | [45,46,47,48,49,50,51] |
| DeepFool | Black/White | Minimal perturbations | Compute-heavy; white-box only | 90% misclassification [52] | KDDCup99 | [14,24,52,56] |
| FGSM | White | Fast generation | Less transferable | 97% on CNN [57] | KDDCup99 | [23,57] |
| C&W | White | Optimized for distances (L0/L2/L∞) | Resource-intensive | 95% bypass [58] | Various | [22,58] |
| JSMA | White | Targets key features | Slow; feature-specific | 92% targeted [59] | NSL-KDD | [22,59] |
| PGD | White | Constrained optimization | Iterative; compute-costly | 96% robust test [60] | CICIDS2017 | [22,60] |
| BIM | White | Multi-step improvements | Similar to PGD, but without its refinements | 94% evasion [61] | CICIDS2017 | [22,61] |
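To make the gradient-based entries in the table above concrete, the following is a minimal sketch of FGSM against a toy linear "NIDS" classifier. It is purely illustrative: the logistic-regression model, weights, and feature vector are synthetic stand-ins, not any system or dataset from the table, and real NIDS features would additionally need domain constraints (e.g., non-negative packet counts).

```python
import numpy as np

# Toy stand-in for a trained NIDS classifier: logistic regression over
# 8 flow features. For a linear model, the cross-entropy loss gradient
# w.r.t. the input x is analytic: (p - y) * w.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # "trained" weights (hypothetical)
b = 0.1
x = rng.normal(size=8)   # one malicious flow's feature vector (synthetic)
y = 1.0                  # true label: attack

def predict(x):
    """Probability that the flow is malicious."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# FGSM: a single step of size eps in the direction that increases the loss,
# pushing the attack flow's score toward the "benign" side.
eps = 0.3
grad = (predict(x) - y) * w           # dLoss/dx for the logistic model
x_adv = x + eps * np.sign(grad)       # signed-gradient perturbation

print(predict(x), predict(x_adv))     # adversarial score is strictly lower
```

For a linear model this one step is provably optimal under an L∞ budget, which is why FGSM is listed as "fast" but "less transferable": deeper models have curved decision boundaries where a single signed step is only a rough approximation.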
| Dataset | Pros | Cons | Impact on Adversarial Robustness |
|---|---|---|---|
| KDDCup99 | Widely used benchmark; large scale | Outdated (1999); redundant records; lacks modern attacks | Skews results due to obsolete patterns; underestimates modern evasion rates by 10–20% in 2025 studies [14]; overoptimistic validity in adversarial experiments [10]. |
| NSL-KDD | Reduced redundancy from KDD; balanced classes | Still based on 1999 data; limited realism | Inflates robustness claims by ignoring contemporary traffic; evasion success underestimated by 15% [14]. |
| UNSW-NB15 | Realistic modern traffic; includes 9 attack families | Imbalanced; some synthetic elements | Better for robustness testing but skews if not balanced; underestimates poisoning by 5–10% in hybrid attacks [10]. |
| CIC-IDS2017-2019 | Comprehensive real traffic; multi-class attacks | High dimensionality; processing-intensive | Minimizes skew in robustness claims; accurate for 2025 evasion rates (up to 90% in IoT) [14]; recommended for valid experiments. |
| BoT-IoT | IoT-specific; recent botnet simulations | Focused on IoT; limited generalizability | Reduces skew for IoT NIDS; but overestimates general robustness; evasion rates accurate at 80–90% [14]. |
| Kyoto 2006+ | Honeypot-based; long-term data | Older (2006+); lacks latest zero-day attacks | Skews claims in long-term studies; underestimates current adversarial impacts by 20% [10]. |
| CTU-13 | Real botnet captures; 13 scenarios | Malware-focused; dated (2011) | Moderate skew; useful for poisoning tests but underestimates evasion in modern networks [14]. |
| Attack Tech. | Description | NIDS-Specific Examples | Refs. | Computational Cost | Detectability |
|---|---|---|---|---|---|
| GANs | Generator–discriminator model for crafting evasive samples; maximizes deception. | Bypassing NIDS with synthetic traffic [11,38]. | [14,36,37,38,39,40] | Medium (training-intensive but efficient in black-box). | Low (evasion rates up to 90% in network traffic; hard to detect [14]). |
| ZOO | Gradient-free optimization for black-box attacks; approximates gradients via queries. | Attacking DNN-based NIDS without model access [41,42,43,44]. | [41,42,43,44] | High (thousands of queries per sample; reduced 20–50% with Hessian-based refinements [41,42]). | Medium (query intensity may expose attack; slower than FGSM). |
| KDE | Non-parametric density estimation for anomaly crafting; smooths perturbations. | Spotting adversaries in NSL-KDD traffic [45,46,47,48,49,50,51]. | [45,46,47,48,49,50,51] | Low (statistical, no heavy training). | High (reveals multi-peak patterns; easier to detect in complex data [50]). |
| DeepFool | Iterative minimal perturbations to cross decision boundaries; white-box focus. | Poisoning DL-NIDS with small changes [14,52,56]. | [14,52,56] | Medium (iterative but efficient for DNNs). | Low (small perturbations; adapted for 2025 DL-NIDS evasion success [14]). |
| FGSM | Single-step gradient-based; adds noise via sign of gradients. | Fast evasion in testing phase [57]. | [57] | Low (one-step computation). | Medium (larger perturbations; more detectable than DeepFool). |
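The "high query count" noted for ZOO in the table above comes from its core mechanic: estimating gradients coordinate-by-coordinate using only model queries. The sketch below illustrates this with a symmetric finite-difference estimator against a toy black-box scorer; the model `f` and its weights are hypothetical, and a real attack would combine this estimator with an optimizer rather than stop at the gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)

def f(x):
    """Black-box score: the attacker can query it but not inspect w."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def zoo_grad(f, x, h=1e-4):
    """ZOO-style zeroth-order gradient estimate via symmetric differences.
    Costs 2 queries per input coordinate, which is why query counts explode
    for high-dimensional traffic features."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = rng.normal(size=5)
g_est = zoo_grad(f, x)

# Analytic gradient of the sigmoid score, for comparison only
# (unavailable to a real black-box attacker).
p = f(x)
g_true = p * (1 - p) * w
print(np.max(np.abs(g_est - g_true)))  # approximation error is tiny
```

The 2-queries-per-coordinate cost is exactly the "detectability" trade-off in the table: a flood of near-identical probe flows is itself an observable signal for a defender.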
| Defense Tech. | Description | NIDS-Specific Examples | Refs. | Success Rate vs. Combined Attacks |
|---|---|---|---|---|
| Adversarial Training | Train models on adversarial examples to build robustness. | Enhancing NIDS against evasion [4,63]. | [63,131,137,138,139,140] | 15–25% improvement; vulnerable to strong combined attacks [63,131]. |
| Feature Selection | Reduce dimensions to eliminate vulnerable features. | Genetic algorithm for IDS efficiency [157,158]. | [157,158] | 10–20% robustness gain; limited against poisoning + evasion (drops to 5% in hybrids [14]). |
| Ensemble Methods | Combine multiple models for detection, e.g., random forests. | Boosting against white-box attacks [159,160,161,162]. | [131,159,160,161,162] | 20–30% improvement in ensembles; effective for combined (evasion + poisoning) [31,131]. |
| Hybrid Defenses | Integrate training + detection (e.g., GAN-based anomaly). | Unified against both attack types [10,131]. | [10,131] | 25–35% in 2025 frameworks; closes the gap for combined threats [14,131]. |
| Detection-Based | Detect perturbations via isolation forests or autoencoders. | Kitsune for online anomalies [100,101,102,103]. | [100,101,102,103] | 10–15% for evasion; low (5–10%) for combined without unification [10]. |
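Of the defenses tabulated above, adversarial training is the most mechanical to sketch: perturb each training batch with the current model's own worst-case (here FGSM) perturbations and fit on those. The example below is a hedged toy version on synthetic linearly separable data with a logistic-regression model; it is not the pipeline of any cited work, and the 200-step full-batch loop is chosen only for illustration.

```python
import numpy as np

# Synthetic "traffic" dataset: 400 samples, 6 features, labels from a
# hidden linear rule (stand-in for benign/malicious flows).
rng = np.random.default_rng(2)
n, d = 400, 6
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, eps):
    """FGSM perturbation of a batch under the current model w."""
    p = sigmoid(X @ w)
    return X + eps * np.sign((p - y)[:, None] * w)

# Adversarial training: at every step, train on FGSM-perturbed inputs
# so the learned boundary keeps an eps-wide margin.
w = np.zeros(d)
eps, lr = 0.1, 0.5
for _ in range(200):
    X_adv = fgsm(X, y, w, eps)
    p_adv = sigmoid(X_adv @ w)
    w -= lr * (X_adv.T @ (p_adv - y)) / n

acc_clean = np.mean((sigmoid(X @ w) > 0.5) == y)
acc_robust = np.mean((sigmoid(fgsm(X, y, w, eps) @ w) > 0.5) == y)
print(acc_clean, acc_robust)
```

Even in this toy setting, robust accuracy trails clean accuracy, mirroring the table's point that adversarial training yields only partial (15–25%) improvements and can still be outflanked by stronger combined attacks.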
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Alasad, Q.; Ahmed, M.; Alahmed, S.; Khattab, O.T.; Abdulwahhab, S.A.; Yuan, J.-S. A Comprehensive Review: The Evolving Cat-and-Mouse Game in Network Intrusion Detection Systems Leveraging Machine Learning. J. Cybersecur. Priv. 2026, 6, 13. https://doi.org/10.3390/jcp6010013

