Search Results (93)

Search Parameters:
Keywords = Network Intrusion Detection Systems (NIDS)

23 pages, 2510 KB  
Article
MCH-Ensemble: Minority Class Highlighting Ensemble Method for Class Imbalance in Network Intrusion Detection
by Sumin Oh, Seoyoung Sohn, Chaewon Kim and Minseo Park
Appl. Sci. 2025, 15(23), 12647; https://doi.org/10.3390/app152312647 - 28 Nov 2025
Viewed by 268
Abstract
As cyber threats such as denial-of-service (DoS) attacks continue to rise, network intrusion detection systems (NIDS) have become essential components of cybersecurity defense. Although machine learning is widely applied to network intrusion detection, its performance often deteriorates due to the extreme class imbalance present in real-world data. This imbalance causes models to become biased and unable to detect critical attack instances. To address this issue, we propose MCH-Ensemble (Minority Class Highlighting Ensemble), an ensemble framework designed to improve the detection of minority attack classes. The method constructs multiple balanced subsets through random under-sampling and trains base learners, including decision tree, XGBoost, and LightGBM models. Features of correctly predicted attack samples are then amplified by adding a constant value, producing a boosting-like effect that enhances minority class representation. The highlighted subsets are subsequently combined to train a random forest meta-model, which leverages bagging to capture diverse and fine-grained decision boundaries. Experimental evaluations on the UNSW-NB15, CIC-IDS2017, and WSN-DS datasets demonstrate that MCH-Ensemble effectively mitigates class imbalance and achieves superior recognition of DoS attacks. The proposed method achieves enhanced performance compared with those reported previously. On the UNSW-NB15 and CIC-IDS2017 datasets, it achieves improvements in accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC) by ~1.2% and ~0.61%, ~9.8% and 0.77%, ~0.7% and ~0.56%, ~5.3% and 0.66%, and ~0.1% and ~0.06%, respectively. In addition, it achieves these improvements by ~0.17%, ~1.66%, ~0.11%, ~0.88%, and ~0.06%, respectively, on the WSN-DS dataset. These findings indicate that the proposed framework offers a robust and accurate approach to intrusion detection, contributing to the development of reliable cybersecurity systems in highly imbalanced network environments. Full article
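A minimal sketch of the minority-class highlighting step described above, assuming binary labels with 1 marking the attack (minority) class and float-valued features; the base learner, number of subsets, and constant offset are illustrative placeholders, not the authors' settings (the paper also uses XGBoost and LightGBM as base learners):

```python
# Rough sketch of the MCH-Ensemble highlighting idea (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

def balanced_subset(X, y, rng):
    """Randomly under-sample the majority class to match the minority class size."""
    minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
    keep = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]

def mch_ensemble(X, y, n_subsets=3, offset=0.5, seed=0):
    rng = np.random.default_rng(seed)
    parts = []
    for _ in range(n_subsets):
        Xs, ys = balanced_subset(X, y, rng)
        base = DecisionTreeClassifier(random_state=0).fit(Xs, ys)   # XGBoost/LightGBM also used in the paper
        hit = (base.predict(Xs) == ys) & (ys == 1)                  # correctly predicted attack samples
        Xh = Xs.astype(float, copy=True)
        Xh[hit] += offset                                           # amplify their features by a constant
        parts.append((Xh, ys))
    X_all = np.vstack([p[0] for p in parts])
    y_all = np.concatenate([p[1] for p in parts])
    # random forest meta-model trained on the combined highlighted subsets
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X_all, y_all)
```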

22 pages, 376 KB  
Article
CSCVAE-NID: A Conditionally Symmetric Two-Stage CVAE Framework with Cost-Sensitive Learning for Imbalanced Network Intrusion Detection
by Zhenyu Wang and Xuejun Yu
Entropy 2025, 27(11), 1086; https://doi.org/10.3390/e27111086 - 22 Oct 2025
Viewed by 624
Abstract
With the increasing complexity and diversity of network threats, developing high-performance Network Intrusion Detection Systems (NIDSs) has become a critical challenge. A primary obstacle in this domain is the pervasive issue of class imbalance, where the scarcity of minority attack samples and the varying costs of misclassification severely limit the effectiveness of traditional models, often leading to a difficult trade-off between high False Positive Rates (FPRs) and low Recall. To address this challenge, this paper proposes a novel, conditionally symmetric two-stage framework, termed CSCVAE-NID (Conditionally Symmetric Two-Stage CVAE for Network Intrusion Detection). The framework operates in two synergistic stages: Firstly, a Data Augmentation Conditional Variational Autoencoder (DA-CVAE) is introduced to tackle the data imbalance problem at the data level. By conditioning on attack categories, the DA-CVAE generates high-quality and diverse synthetic samples for underrepresented classes, providing a more balanced training dataset. Secondly, the core of our framework, a Cost-Sensitive Multi-Class Classification CVAE (CSMC-CVAE), is proposed. This model innovatively reframes the classification task as a probabilistic distribution matching problem and integrates a cost-sensitive learning strategy at the algorithm level. By incorporating a predefined cost matrix into its loss function, the CSMC-CVAE is compelled to prioritize the correct classification of high-cost, minority attack classes. Comprehensive experiments conducted on the public CICIDS-2017 and UNSW-NB15 datasets demonstrate the superiority of the proposed CSCVAE-NID framework. Compared to several state-of-the-art methods, our approach achieves exceptional performance in both binary and multi-class classification tasks. Notably, the DA-CVAE module is designed to be independent and extensible, allowing the effective data that it generates to support any advanced intrusion detection methodology. Full article
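The cost-sensitive element can be illustrated with a small expected-cost loss that weights predicted class probabilities by a predefined cost matrix; this is a generic sketch of cost-sensitive learning at the algorithm level, not the CSMC-CVAE objective itself, and the cost values are placeholders:

```python
# Sketch of an expected-misclassification-cost loss driven by a cost matrix.
import torch

def expected_cost_loss(logits, targets, cost_matrix):
    """logits: (N, C); targets: (N,); cost_matrix: (C, C) with cost[i, j] =
    cost of predicting class j when the true class is i (zeros on the diagonal)."""
    probs = torch.softmax(logits, dim=1)        # (N, C) predicted class probabilities
    costs = cost_matrix[targets]                # (N, C) cost row selected by the true label
    return (probs * costs).sum(dim=1).mean()    # mean expected cost per sample

# Example: make errors on the rare attack class (index 1) five times more costly.
cost = torch.tensor([[0.0, 1.0],
                     [5.0, 0.0]])
logits = torch.randn(8, 2, requires_grad=True)
targets = torch.randint(0, 2, (8,))
expected_cost_loss(logits, targets, cost).backward()
```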

23 pages, 1735 KB  
Article
FortiNIDS: Defending Smart City IoT Infrastructures Against Transferable Adversarial Poisoning in Machine Learning-Based Intrusion Detection Systems
by Abdulaziz Alajaji
Sensors 2025, 25(19), 6056; https://doi.org/10.3390/s25196056 - 2 Oct 2025
Viewed by 712
Abstract
In today’s digital era, cyberattacks are rapidly evolving, rendering traditional security mechanisms increasingly inadequate. The adoption of AI-based Network Intrusion Detection Systems (NIDS) has emerged as a promising solution, due to their ability to detect and respond to malicious activity using machine learning techniques. However, these systems remain vulnerable to adversarial threats, particularly data poisoning attacks, in which attackers manipulate training data to degrade model performance. In this work, we examine tree classifiers, Random Forest and Gradient Boosting, to model black box poisoning attacks. We introduce FortiNIDS, a robust framework that employs a surrogate neural network to generate adversarial perturbations that can transfer between models, leveraging the transferability of adversarial examples. In addition, we investigate defense strategies designed to improve the resilience of NIDS in smart city Internet of Things (IoT) settings. Specifically, we evaluate adversarial training and the Reject on Negative Impact (RONI) technique using the widely adopted CICDDoS2019 dataset. Our findings highlight the effectiveness of targeted defenses in improving detection accuracy and maintaining system reliability under adversarial conditions, thereby contributing to the security and privacy of smart city networks. Full article
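A rough sketch of the Reject on Negative Impact (RONI) defense evaluated in the paper, assuming a small trusted seed set and a clean validation split; the base model and tolerance are illustrative choices:

```python
# Illustrative RONI filter: a candidate training point is admitted only if adding
# it does not reduce validation accuracy (poisoned points tend to fail this test).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def roni_filter(X_trusted, y_trusted, X_cand, y_cand, X_val, y_val, tol=0.0):
    base = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_trusted, y_trusted)
    base_acc = accuracy_score(y_val, base.predict(X_val))
    keep = []
    for i in range(len(X_cand)):
        X_aug = np.vstack([X_trusted, X_cand[i:i + 1]])
        y_aug = np.concatenate([y_trusted, y_cand[i:i + 1]])
        m = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_aug, y_aug)
        if accuracy_score(y_val, m.predict(X_val)) >= base_acc - tol:
            keep.append(i)          # no negative impact on validation accuracy: accept
    return keep                      # indices of candidate points admitted into training
```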

14 pages, 308 KB  
Review
Automated Network Defense: A Systematic Survey and Analysis of AutoML Paradigms for Network Intrusion Detection
by Haowen Liu, Xuren Wang, Famei He and Zhiqiang Zheng
Appl. Sci. 2025, 15(19), 10389; https://doi.org/10.3390/app151910389 - 24 Sep 2025
Cited by 1 | Viewed by 757
Abstract
As cyberattacks grow increasingly sophisticated, advanced Network Intrusion Detection Systems (NIDS) have become essential for securing cyberspace. While Machine Learning (ML) is foundational to modern NIDS, its effectiveness is often hampered by a resource-intensive development pipeline involving feature engineering, model selection, and hyperparameter tuning. Automated Machine Learning (AutoML) promises a solution, but its application to the massive, high-speed data streams in NIDS is fundamentally a parallel and distributed computing challenge. This paper argues that the scalability and performance of AutoML in NIDS are governed by the underlying computational paradigm. We introduce a novel taxonomy of AutoML frameworks, uniquely classifying them by their parallel and distributed architectures. Through a comprehensive meta-analysis of over 15 NID methods on benchmark datasets, we demonstrate how the performance of leading systems is a direct consequence of their chosen computational paradigm. Finally, we identify frontier challenges and future research directions at the intersection of AutoML, NIDS, and high-performance distributed systems, focusing on computational scalability, security, and end-to-end automation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

24 pages, 3374 KB  
Article
Enhancing Adversarial Robustness in Network Intrusion Detection: A Novel Adversarially Trained Neural Network Approach
by Vahid Heydari and Kofi Nyarko
Electronics 2025, 14(16), 3249; https://doi.org/10.3390/electronics14163249 - 15 Aug 2025
Viewed by 2469
Abstract
Machine learning (ML) has greatly improved intrusion detection in enterprise networks. However, ML models remain vulnerable to adversarial attacks, where small input changes cause misclassification. This study evaluates the robustness of a Random Forest (RF), a standard neural network (NN), and a Transformer-based Network Intrusion Detection System (NIDS). It also introduces ADV_NN, an adversarially trained neural network designed to improve resilience. Model performance is tested using the UNSW-NB15 dataset under both clean and adversarial conditions. The attack types include Projected Gradient Descent (PGD), Fast Gradient Sign Method (FGSM), and Black-Box transfer attacks. The proposed ADV_NN achieves 86.04% accuracy on clean data. It maintains over 80% accuracy under PGD and FGSM attacks, and exceeds 85% under Black-Box attacks at ϵ=0.15. In contrast, the RF, NN, and Transformer-based models suffer significant degradation under adversarial perturbations. These results highlight the need to incorporate adversarial defenses into ML-based NIDS for secure deployment in real-world environments. Full article
(This article belongs to the Special Issue Recent Advances in Information Security and Data Privacy)
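The general mechanism behind an adversarially trained classifier can be sketched with a compact FGSM training loop; the architecture, optimizer, and data loader are placeholders rather than the paper's ADV_NN configuration (the epsilon of 0.15 is borrowed from the abstract):

```python
# Sketch of FGSM-based adversarial training for a tabular NIDS classifier.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps, loss_fn):
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()       # one-step perturbation toward higher loss

def adversarial_train(model, loader, epochs=5, eps=0.15, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps, loss_fn)
            opt.zero_grad()
            # train on a mix of clean and adversarial examples
            loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
            loss.backward()
            opt.step()
    return model
```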

36 pages, 3039 KB  
Article
Decision Tree Pruning with Privacy-Preserving Strategies
by Yee Jian Chew, Shih Yin Ooi, Ying Han Pang and Zheng You Lim
Electronics 2025, 14(15), 3139; https://doi.org/10.3390/electronics14153139 - 6 Aug 2025
Viewed by 1692
Abstract
Machine learning techniques, particularly decision trees, have been extensively utilized in Network-based Intrusion Detection Systems (NIDSs) due to their transparent, rule-based structures that enable straightforward interpretation. However, this transparency presents privacy risks, as decision trees may inadvertently expose sensitive information such as network configurations or IP addresses. In our previous work, we introduced a sensitive pruning-based decision tree to mitigate these risks within a limited dataset and basic pruning framework. In this extended study, three privacy-preserving pruning strategies are proposed: standard sensitive pruning, which conceals specific sensitive attribute values; optimistic sensitive pruning, which further simplifies the decision tree when the sensitive splits are minimal; and pessimistic sensitive pruning, which aggressively removes entire subtrees to maximize privacy protection. The methods are implemented using the J48 (Weka C4.5 package) decision tree algorithm and are rigorously validated across three full-scale NIDS datasets: GureKDDCup, UNSW-NB15, and CIDDS-001. To ensure a realistic evaluation of time-dependent intrusion patterns, a rolling-origin resampling scheme is employed in place of conventional cross-validation. Additionally, IP address truncation and port bilateral classification are incorporated to further enhance privacy preservation. Experimental results demonstrate that the proposed pruning strategies effectively reduce the exposure of sensitive information, significantly simplify decision tree structures, and incur only minimal reductions in classification accuracy. These findings reaffirm that privacy protection can be successfully integrated into decision tree models without severely compromising detection performance. To further support the proposed pruning strategies, this study also includes a comprehensive review of decision tree post-pruning techniques. Full article
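The two auxiliary privacy transforms mentioned above can be illustrated under assumed definitions (IPv4 addresses truncated to their first two octets, ports split into a two-way well-known versus ephemeral label); the paper's exact cut-offs may differ:

```python
# Sketch of IP address truncation and bilateral port classification as
# privacy-preserving preprocessing; thresholds here are illustrative assumptions.
def truncate_ip(ip: str, keep_octets: int = 2) -> str:
    """Keep only the leading octets of an IPv4 address, masking the rest."""
    octets = ip.split(".")
    return ".".join(octets[:keep_octets] + ["x"] * (4 - keep_octets))

def port_class(port: int) -> str:
    """Two-way port classification: well-known/registered service vs. ephemeral."""
    return "service" if port < 1024 else "ephemeral"

print(truncate_ip("192.168.10.57"))        # -> "192.168.x.x"
print(port_class(443), port_class(49152))  # -> "service ephemeral"
```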

22 pages, 580 KB  
Article
The Choice of Training Data and the Generalizability of Machine Learning Models for Network Intrusion Detection Systems
by Marcin Iwanowski, Dominik Olszewski, Waldemar Graniszewski, Jacek Krupski and Franciszek Pelc
Appl. Sci. 2025, 15(15), 8466; https://doi.org/10.3390/app15158466 - 30 Jul 2025
Cited by 1 | Viewed by 1851
Abstract
Network Intrusion Detection Systems (NIDS) driven by Machine Learning (ML) algorithms are usually trained using publicly available datasets consisting of labeled traffic samples, where labels refer to traffic classes, usually one benign and multiple harmful. This paper studies the generalizability of models trained on such datasets. This issue is crucial given the application of such a model to actual internet traffic because high-performance measures obtained on datasets do not necessarily imply similar efficiency on the real traffic. We propose a procedure consisting of cross-validation using various sets sharing some standard traffic classes combined with the t-SNE visualization. We apply it to investigate four well-known and widely used datasets: UNSW-NB15, CIC-CSE-IDS2018, BoT-IoT, and ToN-IoT. Our investigation reveals that the high accuracy of a model obtained on one set used for training is reproducible on others only to a limited extent. Moreover, benign traffic classes’ generalizability differs from harmful traffic. Given its application in the actual network environment, it implies that one needs to select the data used to train the ML model carefully to determine to what extent the classes present in the dataset used for training are similar to those in the real target traffic environment. On the other hand, merging datasets may result in more exhaustive data collection, consisting of a more diverse spectrum of training samples. Full article
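The cross-dataset part of the procedure can be sketched as a train-on-one, test-on-the-others loop; harmonizing features and labels across UNSW-NB15, CIC-CSE-IDS2018, BoT-IoT, and ToN-IoT is assumed to have been done already, and the classifier is a placeholder:

```python
# Sketch of a cross-dataset generalizability matrix for NIDS models.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def cross_dataset_matrix(datasets):
    """datasets: dict name -> (X, y) with a harmonized feature space and label set."""
    scores = {}
    for train_name, (X_tr, y_tr) in datasets.items():
        model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        for test_name, (X_te, y_te) in datasets.items():
            # off-diagonal entries show how well a model trained on one set transfers to another
            scores[(train_name, test_name)] = f1_score(y_te, model.predict(X_te), average="macro")
    return scores
```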

16 pages, 1550 KB  
Article
Understanding and Detecting Adversarial Examples in IoT Networks: A White-Box Analysis with Autoencoders
by Wafi Danesh, Srinivas Rahul Sapireddy and Mostafizur Rahman
Electronics 2025, 14(15), 3015; https://doi.org/10.3390/electronics14153015 - 29 Jul 2025
Cited by 1 | Viewed by 1085
Abstract
Novel networking paradigms such as the Internet of Things (IoT) have expanded their usage and deployment to various application domains. Consequently, unseen critical security vulnerabilities such as zero-day attacks have emerged in such deployments. The design of intrusion detection systems for IoT networks is often challenged by a lack of labeled data, which complicates the development of robust defenses against adversarial attacks. Deep learning-based network intrusion detection systems (NIDS) have been used to counteract such emerging security vulnerabilities. However, the deep learning models used in such NIDS are vulnerable to adversarial examples. Adversarial examples are specifically engineered samples tailored to a specific deep learning model; they are developed by minimal perturbation of network packet features and are intended to cause misclassification. Such examples can bypass NIDS or enable the rejection of regular network traffic. Research in the adversarial example detection domain has yielded several prominent methods; however, most of those methods involve computationally expensive retraining steps and require access to labeled data, which are often lacking in IoT network deployments. In this paper, we propose an unsupervised method for detecting adversarial examples that performs early detection based on the intrinsic characteristics of the deep learning model. Our proposed method requires neither computationally expensive retraining nor extra hardware overhead for implementation. For the work in this paper, we first perform adversarial example generation on a deep learning model using autoencoders. After successful adversarial example generation, we perform adversarial example detection using the intrinsic characteristics of the layers in the deep learning model. A robustness analysis of our approach reveals that an attacker can easily bypass the detection mechanism by using low-magnitude log-normal Gaussian noise. Furthermore, we test the robustness of our detection method against further compromise by the attacker. We tested our approach on the Kitsune datasets, which are state-of-the-art datasets obtained from deployed IoT network scenarios. Our experimental results show an average adversarial example generation time of 0.337 s and an average detection rate of almost 100%. The robustness analysis of our detection method reveals a reduction of almost 100% in adversarial example detection after compromise by the attacker. Full article
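One common way to turn hidden-layer activations into an unsupervised detector is to fit per-unit statistics on clean traffic and flag large deviations; this is a hedged illustration of that general idea, not necessarily the authors' exact layer-based criterion, and the threshold is a placeholder:

```python
# Sketch of an activation-statistics detector for adversarial inputs.
import torch

class ActivationDetector:
    def __init__(self, feature_extractor, threshold=3.0):
        self.f = feature_extractor            # the network truncated at the layer of interest
        self.threshold = threshold

    @torch.no_grad()
    def fit(self, clean_x):
        acts = self.f(clean_x)                # activations on presumed-clean traffic
        self.mu, self.sigma = acts.mean(0), acts.std(0) + 1e-8

    @torch.no_grad()
    def is_adversarial(self, x):
        z = (self.f(x) - self.mu) / self.sigma       # per-unit z-scores
        return z.abs().mean(dim=1) > self.threshold  # large mean deviation -> flag as adversarial
```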

22 pages, 2046 KB  
Article
Optimizing IoT Intrusion Detection—A Graph Neural Network Approach with Attribute-Based Graph Construction
by Tien Ngo, Jiao Yin, Yong-Feng Ge and Hua Wang
Information 2025, 16(6), 499; https://doi.org/10.3390/info16060499 - 16 Jun 2025
Cited by 8 | Viewed by 3427
Abstract
The inherent complexity and heterogeneity of the Internet of Things (IoT) ecosystem present significant challenges for developing effective intrusion detection systems. While graph deep-learning-based methods have shown promise in cybersecurity applications, existing approaches primarily construct graphs based on physical network connections, which may not effectively capture node representations. This paper proposes a Top-K Similarity Graph Framework (TKSGF) for IoT network intrusion detection. Instead of relying on physical links, the TKSGF constructs graphs based on Top-K attribute similarity, ensuring a more meaningful representation of node relationships. We employ GraphSAGE as the Graph Neural Network (GNN) model to effectively capture node representations while maintaining scalability. Furthermore, we conducted extensive experiments to analyze the impact of graph directionality (directed vs. undirected), different K values, and various GNN architectures and configurations on detection performance. Evaluations on binary and multi-class classification tasks using the NF-ToN IoT and NF-BoT IoT datasets from the Machine-Learning-Based Network Intrusion Detection System (NIDS) benchmark demonstrated that our proposed framework consistently outperformed traditional machine learning methods and existing graph-based approaches, achieving superior classification accuracy and robustness. Full article
(This article belongs to the Special Issue Data Privacy Protection in the Internet of Things)
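The Top-K attribute-similarity graph construction can be approximated with an off-the-shelf k-nearest-neighbour graph; the value of K and the cosine metric below are illustrative assumptions, not necessarily the TKSGF defaults:

```python
# Sketch of building a Top-K attribute-similarity graph over network flows.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def topk_similarity_edges(X, k=5):
    """X: (num_flows, num_features). Each flow is connected to its K most similar
    flows by attribute distance; returns a (2, num_edges) edge index."""
    adj = kneighbors_graph(X, n_neighbors=k, metric="cosine", include_self=False)
    src, dst = adj.nonzero()
    return np.stack([src, dst])   # usable as edge_index for a GNN such as GraphSAGE

edges = topk_similarity_edges(np.random.rand(100, 16), k=5)
```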

20 pages, 1198 KB  
Article
Mitigating Class Imbalance in Network Intrusion Detection with Feature-Regularized GANs
by Jing Li, Wei Zong, Yang-Wai Chow and Willy Susilo
Future Internet 2025, 17(5), 216; https://doi.org/10.3390/fi17050216 - 13 May 2025
Cited by 2 | Viewed by 2010
Abstract
Network Intrusion Detection Systems (NIDS) often suffer from severe class imbalance, where minority attack types are underrepresented, leading to degraded detection performance. To address this challenge, we propose a novel augmentation framework that integrates Soft Nearest Neighbor Loss (SNNL) into Generative Adversarial Networks (GANs), including WGAN, CWGAN, and WGAN-GP. Unlike traditional oversampling methods (e.g., SMOTE, ADASYN), our approach improves feature-space alignment between real and synthetic samples, enhancing classifier generalization on rare classes. Experiments on NSL-KDD, CSE-CIC-IDS2017, and CSE-CIC-IDS2018 show that SNNL-augmented GANs consistently improve minority-class F1-scores without degrading overall accuracy or majority-class performance. UMAP visualizations confirm that SNNL produces more compact and class-consistent sample distributions. We also evaluate the computational overhead, finding the added cost moderate. These results demonstrate the effectiveness and practicality of SNNL as a general enhancement for GAN-based data augmentation in imbalanced NIDS tasks. Full article
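A minimal PyTorch version of the Soft Nearest Neighbor Loss in its common formulation is shown below; the temperature and the use of squared Euclidean distance may differ in detail from the paper's GAN integration:

```python
# Sketch of the Soft Nearest Neighbor Loss (SNNL) over a batch of embeddings.
import torch

def soft_nearest_neighbor_loss(features, labels, temperature=1.0, eps=1e-8):
    """features: (N, D) embeddings; labels: (N,). Lower values mean samples lie
    closer to same-class neighbors than to other classes in feature space."""
    dist = torch.cdist(features, features).pow(2)        # (N, N) squared distances
    sim = torch.exp(-dist / temperature)
    sim = sim - torch.diag(torch.diag(sim))              # exclude self-similarity
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    num = (sim * same).sum(dim=1)                        # similarity mass on same-class neighbors
    den = sim.sum(dim=1)                                 # similarity mass on all neighbors
    return -torch.log((num + eps) / (den + eps)).mean()
```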

22 pages, 931 KB  
Article
Design of a Heterogeneous-Based Network Intrusion Detection System and Compiler
by Zhigui Lin, Xiaofeng Zhang, Qi Liu and Jun Cui
Appl. Sci. 2025, 15(9), 5012; https://doi.org/10.3390/app15095012 - 30 Apr 2025
Cited by 2 | Viewed by 1589
Abstract
With the continuous growth of network traffic scale, traditional software-based intrusion detection systems (IDS) constrained by CPU-processing capabilities struggle to meet the requirements of 100 Gbps high-speed network environments. While existing heterogeneous acceleration solutions enhance detection efficiency through hardware acceleration, they still exhibit technical limitations, including insufficient throughput, simplistic task offloading mechanisms, and poor compatibility in rule compilation. This paper is based on the collaborative design concept of “hardware-accelerated preprocessing + software-based precise detection”, fully leveraging the FPGA’s parallel processing capabilities and the CPU’s flexible computation advantages. We construct an FPGA + CPU heterogeneous detection system featuring a five-tuple segmented matching architecture, which integrates hash bitmap and shift-or algorithms to achieve fast-pattern matching. A hardware compiler supporting 10,000+ detection rules is developed, enhancing hardware adaptability through packet optimization and mask compilation. Experimental results demonstrate that the system maintains 100 Gbps throughput with 2000–10,000 rule sets, achieves over 97% detection accuracy, and consumes only 33% of the hardware logic resources. Compared with a Snort software implementation on equivalent configurations, it delivers a 10.5–27.1 times throughput improvement, providing an efficient and reliable solution for real-time intrusion detection in high-speed networks. Full article
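The shift-or algorithm named above can be illustrated with a reference-style software version; this is only a sketch of the bit-parallel matching idea, not the hardware design or its hash-bitmap front end:

```python
# Sketch of shift-or (bitap) exact pattern matching, the bit-parallel technique
# the FPGA pipeline implements in hardware.
def shift_or_search(text: bytes, pattern: bytes):
    m = len(pattern)
    masks = {b: ~0 for b in set(text) | set(pattern)}   # all bits set by default
    for i, b in enumerate(pattern):
        masks[b] &= ~(1 << i)                            # clear bit i where pattern[i] == b
    state, hits = ~0, []
    for pos, b in enumerate(text):
        state = (state << 1) | masks.get(b, ~0)
        if (state & (1 << (m - 1))) == 0:                # bit m-1 clear: full pattern matched
            hits.append(pos - m + 1)                     # record match start position
    return hits

print(shift_or_search(b"alert tcp any any -> any 80", b"any"))   # -> [10, 14, 21]
```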

15 pages, 7945 KB  
Article
Self-Organizing Maps-Assisted Variational Autoencoder for Unsupervised Network Anomaly Detection
by Hailong Huang, Jiahong Yang, Hang Zeng, Yaqin Wang and Liuming Xiao
Symmetry 2025, 17(4), 520; https://doi.org/10.3390/sym17040520 - 30 Mar 2025
Cited by 1 | Viewed by 1836
Abstract
In network intrusion detection systems (NIDS), conventional supervised learning approaches remain constrained by their reliance on labor-intensive labeled datasets, especially in evolving network ecosystems. Although unsupervised learning offers a viable alternative, current methodologies frequently face challenges in managing high-dimensional feature spaces and achieving optimal detection performance. To overcome these limitations, this study proposes a self-organizing maps-assisted variational autoencoder (SOVAE) framework. The SOVAE architecture employs feature correlation graphs combined with the Louvain community detection algorithm to conduct feature selection. The processed data—characterized by reduced dimensionality and clustered structure—is subsequently projected through self-organizing maps to generate cluster-based labels. These labels are further incorporated into the symmetric encoding-decoding reconstruction process of the VAE to enhance data reconstruction quality. Anomaly detection is implemented through quantitative assessment of reconstruction discrepancies and SOM deviations. Experimental results show that SOVAE achieves F1 scores of 0.983 (±0.005) on UNSW-NB15 and 0.875 (±0.008) on CICIDS2017, outperforming mainstream unsupervised baselines. Full article
(This article belongs to the Section Computer)
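The final decision step can be sketched as thresholding per-sample reconstruction error from a trained VAE; the SOM-deviation term is omitted here for brevity, and the percentile cut-off is a placeholder rather than the paper's calibration:

```python
# Sketch of reconstruction-error anomaly scoring with a trained (variational) autoencoder.
import torch

@torch.no_grad()
def reconstruction_scores(vae, x):
    """vae(x) is assumed to return the reconstruction x_hat; score = per-sample MSE."""
    x_hat = vae(x)
    return ((x - x_hat) ** 2).mean(dim=1)

def fit_threshold(vae, benign_x, q=0.99):
    # set the cut-off at a high quantile of scores on presumed-benign traffic
    return torch.quantile(reconstruction_scores(vae, benign_x), q)

def is_anomalous(vae, x, threshold):
    return reconstruction_scores(vae, x) > threshold
```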

63 pages, 4416 KB  
Review
A Review of Machine Learning and Transfer Learning Strategies for Intrusion Detection Systems in 5G and Beyond
by Kinzah Noor, Agbotiname Lucky Imoize, Chun-Ta Li and Chi-Yao Weng
Mathematics 2025, 13(7), 1088; https://doi.org/10.3390/math13071088 - 26 Mar 2025
Cited by 10 | Viewed by 8802
Abstract
This review systematically explores the application of machine learning (ML) models in the context of Intrusion Detection Systems (IDSs) for modern network security, particularly within 5G environments. The evaluation is based on the 5G-NIDD dataset, a richly labeled resource encompassing a broad range of network behaviors, from benign user traffic to various attack scenarios. This review examines multiple machine learning (ML) models, assessing their performance across critical metrics, including accuracy, precision, recall, F1-score, Receiver Operating Characteristic (ROC), Area Under the Curve (AUC), and execution time. Key findings indicate that the K-Nearest Neighbors (KNN) model excels in accuracy and ROC AUC, while the Voting Classifier achieves superior precision and F1-score. Other models, including decision tree (DT), Bagging, and Extra Trees, demonstrate strong recall, while AdaBoost shows underperformance across all metrics. Naive Bayes (NB) stands out for its computational efficiency despite moderate performance in other areas. As 5G technologies evolve, introducing more complex architectures, such as network slicing, increases the vulnerability to cyber threats, particularly Distributed Denial-of-Service (DDoS) attacks. This review also investigates the potential of deep learning (DL) and Deep Transfer Learning (DTL) models in enhancing the detection of such attacks. Advanced DL architectures, such as Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Networks (CNNs), Residual Networks (ResNet), and Inception, are evaluated, with a focus on the ability of DTL to leverage knowledge transfer from source datasets to improve detection accuracy on sparse 5G-NIDD data. The findings underscore the importance of large-scale labeled datasets and adaptive security mechanisms in addressing evolving threats. This review concludes by highlighting the significant role of ML and DTL approaches in strengthening network defense and fostering proactive, robust security solutions for future networks. Full article
(This article belongs to the Special Issue Network Security in Artificial Intelligence Systems)

20 pages, 914 KB  
Article
Cost-Efficient Hybrid Filter-Based Parameter Selection Scheme for Intrusion Detection System in IoT
by Gabriel Chukwunonso Amaizu, Akshita Maradapu Vera Venkata Sai, Madhuri Siddula and Dong-Seong Kim
Electronics 2025, 14(4), 726; https://doi.org/10.3390/electronics14040726 - 13 Feb 2025
Viewed by 987
Abstract
The rapid growth of Internet of Things (IoT) devices has brought about significant advancements in automation, data collection, and connectivity across various domains. However, this increased interconnectedness also poses substantial security challenges, making IoT networks attractive targets for malicious actors. Intrusion detection systems (IDSs) play a vital role in protecting IoT environments from cyber threats, necessitating the development of sophisticated and effective NIDS solutions. This paper proposes an IDS that addresses the curse of dimensionality by eliminating redundant and highly correlated features, followed by a wrapper-based feature ranking to determine their importance. Additionally, the IDS incorporates cutting-edge image processing techniques to reconstruct data into images, which are further enhanced through a filtering process. Finally, a meta classifier, consisting of three base models, is employed for efficient and accurate intrusion detection. Simulation results using industry-standard datasets demonstrate that the hybrid parameter selection approach significantly reduces computational costs while maintaining reliability. Furthermore, the combination of image transformation and ensemble learning techniques achieves higher detection accuracy, further enhancing the effectiveness of the proposed IDS. Full article
(This article belongs to the Special Issue New Challenges in Cyber Security)
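The first, filter stage of the parameter selection can be sketched with a standard correlation cut-off that drops one feature out of every highly correlated pair before the wrapper-based ranking; the 0.9 threshold and absolute Pearson correlation are assumptions, not the paper's exact criterion:

```python
# Sketch of filter-based removal of redundant, highly correlated features.
import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    corr = df.corr().abs()
    # keep only the upper triangle so each feature pair is inspected once
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)
```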

21 pages, 806 KB  
Article
Labeling Network Intrusion Detection System (NIDS) Rules with MITRE ATT&CK Techniques: Machine Learning vs. Large Language Models
by Nir Daniel, Florian Klaus Kaiser, Shay Giladi, Sapir Sharabi, Raz Moyal, Shalev Shpolyansky, Andres Murillo, Aviad Elyashar and Rami Puzis
Big Data Cogn. Comput. 2025, 9(2), 23; https://doi.org/10.3390/bdcc9020023 - 26 Jan 2025
Cited by 7 | Viewed by 4936
Abstract
Analysts in Security Operations Centers (SOCs) are often occupied with time-consuming investigations of alerts from Network Intrusion Detection Systems (NIDSs). Many NIDS rules lack clear explanations and associations with attack techniques, complicating the alert triage and the generation of attack hypotheses. Large Language Models (LLMs) may be a promising technology to reduce the alert explainability gap by associating rules with attack techniques. In this paper, we investigate the ability of three prominent LLMs (ChatGPT, Claude, and Gemini) to reason about NIDS rules while labeling them with MITRE ATT&CK tactics and techniques. We discuss prompt design and present experiments performed with 973 Snort rules. Our results indicate that while LLMs provide explainable, scalable, and efficient initial mappings, traditional machine learning (ML) models consistently outperform them in accuracy, achieving higher precision, recall, and F1-scores. These results highlight the potential for hybrid LLM-ML approaches to enhance SOC operations and better address the evolving threat landscape. By utilizing automation, the presented methods will enhance the analysis efficiency of SOC alerts, and decrease workloads for analysts. Full article
(This article belongs to the Special Issue Generative AI and Large Language Models)
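The kind of traditional ML baseline referred to in the comparison can be sketched as a bag-of-words classifier over rule text; the two toy rules and technique labels below are made-up placeholders, not the study's 973-rule Snort corpus or its labeling:

```python
# Sketch of a simple ML pipeline mapping NIDS rule text to ATT&CK technique labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rules = [
    'alert tcp any any -> any 22 (msg:"Possible SSH brute force"; threshold ...)',
    'alert udp any any -> any 53 (msg:"DNS tunneling suspected"; content ...)',
]
techniques = ["T1110", "T1071.004"]   # placeholder ATT&CK technique labels

clf = make_pipeline(TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(rules, techniques)
print(clf.predict(['alert tcp any any -> any 22 (msg:"SSH login attempts exceeded")']))
```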
