Search Results (222)

Search Parameters:
Keywords = CICIDS2017 dataset

31 pages, 7259 KB  
Article
Enhancing IoT Network Security: A BPSO-Optimized Attention-GRU Deep Learning Framework for Intrusion Detection
by Abdallah Elayan and Michel Kadoch
Computers 2026, 15(5), 266; https://doi.org/10.3390/computers15050266 - 23 Apr 2026
Viewed by 72
Abstract
The exponential expansion of computer networks, alongside the rapid development of the Internet of Things (IoT), has significantly increased the volume and complexity of transmitted data, emphasizing the need for robust network security measures to secure sensitive data and prevent unauthorized access or breaches. Intrusion Detection Systems (IDSs) have emerged as a vital tool for protecting networks and IoT environments from threats. Various IDSs have been proposed in the literature; however, the lack of optimal feature learning and computational efficiency, together with reliance on obsolete datasets, poses significant challenges, limiting their effectiveness against evolving cyber threats. Moreover, traditional IDSs struggle to efficiently manage the high-dimensional and imbalanced nature of IoT network traffic data. To address these challenges, this research proposes a hybrid deep learning (DL)-based IDS integrating Binary Particle Swarm Optimization (BPSO), MultiHead Attention mechanisms (MHA), and a deep Gated Recurrent Unit (GRU) architecture, improving detection effectiveness while reducing computational overhead. Our proposed approach also utilizes a Target Sampling strategy to balance class distributions, enhancing the model’s ability to accurately identify minority attacks. The BPSO algorithm is employed to identify the most influential features from the high-dimensional network traffic datasets, enhancing model interpretability and supporting more efficient learning. This optimized feature subset is then fed into a GRU-based DL architecture augmented with MHA, which performs sequence processing and attention-based learning for intrusion detection. The performance of the proposed model is evaluated utilizing the BoT-IoT and the CIC-IDS2017 benchmark datasets, ensuring a comprehensive assessment of anomaly detection capabilities. Extensive experimental results demonstrate the superior performance of the proposed model, achieving a recall of 98.42% and 99.76%, with F1-scores of 98.94% and 99.76%, for binary classification, and a recall of 99.79% and 98.69%, with F1-scores of 99.89% and 98.04%, for multiclass classification on the BoT-IoT and CIC-IDS2017 datasets, respectively, highlighting the effectiveness of our model in enhancing threat detection for computer networks and IoT environments in comparison to recent state-of-the-art IDSs.
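To make the BPSO feature-selection step concrete, here is a minimal Python sketch with a toy fitness function standing in for the paper's classifier-based objective; the swarm size, iteration count, coefficients, and 78-feature width are illustrative assumptions, not the authors' settings:

```python
# Minimal BPSO feature-selection sketch (assumptions: toy fitness, generic
# hyperparameters). fitness() takes a binary mask and returns a score to maximize.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def bpso_select(fitness, n_features, n_particles=20, n_iter=30,
                w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_particles, n_features))   # binary positions: 1 = keep feature
    V = rng.uniform(-1, 1, size=(n_particles, n_features))   # real-valued velocities
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # standard PSO velocity update
        X = (rng.random(X.shape) < sigmoid(V)).astype(int)         # sigmoid transfer -> binary position
        fit = np.array([fitness(x) for x in X])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest  # best binary feature mask found

# Toy fitness: reward keeping the (hypothetically informative) first 10 features
# while penalizing subset size, mimicking an accuracy-vs-sparsity objective.
mask = bpso_select(lambda m: m[:10].sum() - 0.1 * m.sum(), n_features=78)
```

In the paper's setting, the fitness would instead train and score the GRU/MHA detector (or a cheap proxy model) on the masked features.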
24 pages, 1278 KB  
Article
A Study on a Network Intrusion Detection System Based on the Fusion of SAGEConv-GNN and a Transformer Encoder
by Hoang Duc Binh, Yong-ha Choi, Jaeyeong Jeong, Yong-Joon Lee and Dongkyoo Shin
Electronics 2026, 15(8), 1737; https://doi.org/10.3390/electronics15081737 - 20 Apr 2026
Viewed by 264
Abstract
A network intrusion detection system (NIDS) plays a critical role in protecting modern networked environments, but conventional approaches often struggle to balance the detection of previously unseen attacks with a low false alarm rate. This study proposes a hybrid intrusion detection model, HybridSAGETransformerGlobal, which integrates a SAGEConv-based graph neural network (GNN) and a Transformer encoder to jointly learn local structural information and global contextual dependencies from network traffic. In the proposed framework, network flows are represented as graph nodes, and edges are constructed using IP-group-aware k-nearest neighbors (KNNs) together with a temporal chain. The model further incorporates a gated fusion mechanism, multiple positional encodings, class weighting, label smoothing, and early stopping to improve training stability and detection performance. The proposed method was evaluated under a unified preprocessing and training pipeline on two benchmark datasets, UNSW-NB15 and CIC-IDS2017, using up to approximately 100,000 flow samples per dataset, and was compared with GCN, GAT, GraphSAGE, and a Transformer-only baseline. On UNSW-NB15, repeated-run evaluation over five random seeds showed that the proposed model achieved an accuracy of 0.9841 ± 0.0006, a macro-precision of 0.9684 ± 0.0010, a macro-recall of 0.9818 ± 0.0026, and a macro-F1-score of 0.9749 ± 0.0011, with statistically significant improvements over the strongest baseline in the macro-F1-score. On CIC-IDS2017, the proposed hybrid model also showed consistently strong performance, achieving an accuracy of 0.9749, a macro-precision of 0.9513, a macro-recall of 0.9722, a macro-F1-score of 0.9613, and an ROC-AUC of 0.9957. Additional ablation, sensitivity, and baseline re-optimization analyses further supported the robustness of the proposed design. These results suggest that a coordinated hybrid architecture combining structural graph learning and long-range contextual modeling can provide an effective framework for robust flow-based network intrusion detection under the evaluated settings.
(This article belongs to the Special Issue Advances in Web Data Management)
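The gated fusion mechanism lends itself to a short illustration; the sketch below assumes precomputed SAGEConv and Transformer node embeddings of equal width and is a generic gate, not the authors' exact layer:

```python
# Minimal gated-fusion sketch (assumption: h_gnn and h_tr are precomputed
# 64-d embeddings per flow node from the two branches).
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)  # produces an element-wise mixing gate

    def forward(self, h_gnn, h_tr):
        g = torch.sigmoid(self.gate(torch.cat([h_gnn, h_tr], dim=-1)))
        return g * h_gnn + (1 - g) * h_tr    # convex blend of local and global views

fuse = GatedFusion(dim=64)
h = fuse(torch.randn(100, 64), torch.randn(100, 64))  # 100 flow nodes, 64-d embeddings
```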
21 pages, 2238 KB  
Article
Game-Theoretic Cost-Sensitive Adversarial Training for Robust Cloud Intrusion Detection Against GAN-Based Evasion Attacks
by Jianbo Ding, Zijian Shen and Wenhe Liu
Appl. Sci. 2026, 16(8), 3944; https://doi.org/10.3390/app16083944 - 18 Apr 2026
Viewed by 142
Abstract
Cloud-based intrusion detection systems (IDSs) increasingly rely on deep learning classifiers to identify malicious traffic; however, this reliance exposes them to adversarial evasion attacks in which adversaries craft near-imperceptible perturbations to bypass detection. Existing defenses based on conventional adversarial training often recover robustness against known perturbation patterns at the cost of degraded detection accuracy on canonical attack categories—a robustness–accuracy trade-off that remains an open challenge in the field. In this paper, we propose GT-CSAT (Game-Theoretic Cost-Sensitive Adversarial Training), a novel defense framework tailored for cloud security environments. GT-CSAT couples an improved Wasserstein GAN with Gradient Penalty (WGAN-GP) threat generator—conditioned on attack semantics to simulate functionally consistent and highly covert traffic variants—with a minimax adversarial training loop governed by a game-theoretic cost-sensitive loss function. The proposed loss function assigns asymmetric misclassification penalties derived from a two-player zero-sum payoff matrix, enabling the detector to maintain vigilance over both novel adversarial variants and well-characterized conventional threats simultaneously. Specifically, misclassifying an adversarially perturbed attack as benign incurs a strictly higher penalty than the symmetric cross-entropy baseline, while the cost weights are dynamically adapted via a Nash equilibrium-inspired update rule during training. We conduct comprehensive experiments on the Cloud Vulnerabilities Dataset (CVD), CICIDS-2017, and UNSW-NB15, which encompass diverse cloud-specific attack scenarios including denial-of-service, port scanning, brute-force, and SQL injection traffic. Under six representative evasion strategies—FGSM, PGD, C&W, BIM, DeepFool, and IDSGAN-style black-box perturbations—GT-CSAT achieves an average robust accuracy of 94.3%, surpassing standard adversarial training by 6.8 percentage points and the undefended baseline by 21.4 percentage points, while preserving clean-traffic detection at 97.1%. These results confirm that the game-theoretic cost structure effectively decouples robustness from accuracy, yielding a Pareto-superior detection profile relative to competing baselines across all evaluated threat models. The source code and experimental configurations have been publicly released to facilitate reproducibility.
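A minimal sketch of an asymmetric-cost loss in the spirit of GT-CSAT's payoff matrix follows; the cost value, class encoding, and static weighting are illustrative assumptions (the paper adapts the weights dynamically via a Nash equilibrium-inspired rule):

```python
# Minimal cost-sensitive cross-entropy sketch (assumptions: class 1 = attack,
# fn_cost = 3.0 as a placeholder for the payoff-matrix-derived penalty).
import torch
import torch.nn.functional as F

def cost_sensitive_ce(logits, targets, is_adversarial, fn_cost=3.0):
    # Per-sample CE, then up-weight adversarial attack samples whose
    # misclassification as benign is the costliest cell of the payoff matrix.
    ce = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(is_adversarial & (targets == 1),
                          torch.full_like(ce, fn_cost),
                          torch.ones_like(ce))
    return (weights * ce).mean()

logits = torch.randn(8, 2)
targets = torch.ones(8, dtype=torch.long)                       # all attack samples
adv = torch.tensor([1, 0, 1, 0, 1, 0, 1, 0], dtype=torch.bool)  # which are perturbed
loss = cost_sensitive_ce(logits, targets, adv)
```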
14 pages, 730 KB  
Proceeding Paper
Lightweight and Transparent Intrusion Detection in the Internet of Medical Things: The Role of Explainable AI
by Rawan Abdulaziz AlRumaih, Tarek Moulahi and Dina M. Ibrahim
Comput. Sci. Math. Forum 2026, 13(1), 5; https://doi.org/10.3390/cmsf2026013005 (registering DOI) - 16 Apr 2026
Viewed by 9
Abstract
The rise of the Internet of Medical Things (IoMT) has transformed healthcare through real-time monitoring and improved outcomes but also introduced critical security and privacy challenges. This paper presents a focused survey of Explainable AI (XAI) approaches for intrusion detection in IoMT, emphasizing methods that are lightweight, transparent, and deployable under resource constraints. We first clarify XAI terminology and taxonomy (global vs. local scope; ante hoc vs. post hoc; model-agnostic vs. model-specific) and then systematize recent works from the past five years across cybersecurity sub-domains relevant to eHealth. Representative pipelines span classical ML (e.g., LR, RF, SVM, and XGBoost) and deep models (e.g., DNNs and SRU/LSTM), with post hoc explainers, especially SHAP and LIME, dominating practice on benchmark datasets such as CICIDS2017, NSL-KDD, ToN-IoT, WUSTL-EHMS, and CICIoMT2024. Our comparative analysis highlights consistent gains from model ensembling and interpretable feature selection while uncovering key gaps: limited real-world validation, inconsistent explainability metrics, adversarial brittleness, and the computing cost of explanations at the edge.
(This article belongs to the Proceedings of The 1st International Conference on Emerging Tech & Innovation (ICETI))
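For context, the dominant post hoc pattern the survey describes looks roughly like the sketch below (SHAP's TreeExplainer over a tree ensemble); the synthetic features and labels are placeholders for IoMT traffic data:

```python
# Minimal post hoc explanation sketch (assumptions: synthetic stand-in data,
# random forest as the detector being explained).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # 10 stand-in flow features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy "intrusion" label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])     # local per-feature attributions
```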
24 pages, 806 KB  
Article
EGGA: An Error-Guided Generative Augmentation and Optimized ML-Based IDS for EV Charging Network Security
by Li Yang and G. Kirubavathi
Future Internet 2026, 18(4), 202; https://doi.org/10.3390/fi18040202 - 13 Apr 2026
Viewed by 278
Abstract
Electric Vehicle Charging Systems (EVCSs) are increasingly connected with the Internet of Things (IoT) and smart grid infrastructure, yet they face growing cyber risks due to expanded attack interfaces. These systems are vulnerable to various attacks that potentially impact both charging operations and user privacy. Intrusion Detection Systems (IDSs) are essential for identifying suspicious activities and mitigating risks to protect EVCS networks, but conventional ML-based IDSs are often unable to achieve optimal performance due to imbalanced datasets, complex traffic distributions, and human design limitations. In practice, EVCS traffic is typically multi-class, imbalanced, and safety-critical, where both missed attacks and false alarms can lead to denial of charging, service interruption, unnecessary incident escalation, financial loss, and reduced user trust. Automated ML (AutoML) and Generative Artificial Intelligence (GAI) have emerged as promising solutions in cybersecurity. Existing GAI and augmentation methods are mostly class-frequency-driven, but this does not necessarily improve the error-prone regions where IDSs actually fail. In this paper, we propose a GAI- and AutoML-based IDS that combines a Conditional Generative Adversarial Network (cGAN) with an optimized XGBoost model to improve the effectiveness of intrusion detection in EVCS networks and IoT systems. The proposed framework involves two techniques: (1) a novel cGAN-based error-guided generative augmentation (EGGA) method that extracts misclassified samples and generates a more robust training set for IDS development, and (2) an optimized IDS model that automatically constructs an optimized XGBoost model based on Bayesian Optimization with Tree-structured Parzen Estimator (BO-TPE). The main algorithmic novelty lies in EGGA, which uses model errors to guide generative augmentation toward difficult decision regions, while the overall pipeline represents a practical system-level integration of EGGA, XGBoost, and BO-TPE. To the best of our knowledge, this is the first work that combines GAI and AutoML to specifically improve detection on hard samples, enabling more autonomous and reliable identification of diverse cyber attacks in EV charging networks and IoT systems. Experiments are conducted on two benchmark EVCS and cybersecurity datasets, CICEVSE2024 and CICIDS2017, demonstrating consistent and statistically meaningful improvements over state-of-the-art IDS models. This research highlights the importance of combining automation, generative balancing, and optimized learning to strengthen cybersecurity solutions for EV charging networks and IoT systems.
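The error-guided idea can be sketched in a few lines: harvest the training samples the current model gets wrong and oversample around them before retraining. The Gaussian jitter below is a deliberately simple stand-in for the paper's cGAN generator, and the toy data and XGBoost settings are assumptions:

```python
# Minimal error-guided augmentation sketch (assumption: Gaussian jitter
# replaces the cGAN; one augmentation round shown).
import numpy as np
from xgboost import XGBClassifier

def egga_round(X, y, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    model = XGBClassifier(n_estimators=100).fit(X, y)
    wrong = model.predict(X) != y                            # error-prone region
    X_hard, y_hard = X[wrong], y[wrong]
    X_synth = X_hard + rng.normal(0, noise, X_hard.shape)    # stand-in for cGAN samples
    X_aug = np.vstack([X, X_synth])                          # augmented training set
    y_aug = np.concatenate([y, y_hard])
    return XGBClassifier(n_estimators=100).fit(X_aug, y_aug)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] > 0.8).astype(int)   # imbalanced toy labels
model = egga_round(X, y)
```

In the full pipeline, BO-TPE would additionally tune the XGBoost hyperparameters rather than using the fixed values above.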
21 pages, 1059 KB  
Article
Lightweight MLP-Based Feature Extraction with Linear Classifier for Intrusion Detection System in Internet of Things
by Jisi Chandroth and Jehad Ali
Electronics 2026, 15(8), 1604; https://doi.org/10.3390/electronics15081604 - 12 Apr 2026
Viewed by 326
Abstract
The Internet of Things (IoT) comprises diverse devices connected through heterogeneous communication protocols to deliver a wide range of services. However, the complexity and scale of IoT networks make them difficult to secure. Network intrusion detection systems (NIDSs) have therefore become essential for identifying malicious activities and protecting IoT environments across many applications. Although recent deep learning (DL)-based IDS approaches achieve strong detection performance, they often require substantial computation and storage, which limits their practicality on resource-constrained IoT devices. To balance detection accuracy with computational efficiency, we propose a lightweight deep learning model for IoT intrusion detection. Specifically, our method learns compact, intrusion-relevant representations from traffic features using a two-layer multi-layer perceptron (MLP) embedding backbone, followed by a linear SoftMax classification head for multi-class attack detection. We evaluate the proposed approach on three benchmark datasets, CICIDS2017, NSL-KDD, and CICIoT2023, and the results show strong performance, achieving 99.85%, 99.21%, and 98.45% accuracy, respectively, while significantly reducing model size and computational overhead. The experimental results demonstrate that the proposed method achieves excellent classification performance while maintaining a lightweight design, with fewer parameters and lower FLOPs than existing approaches.
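A minimal sketch of a two-layer MLP embedding backbone with a linear softmax head; the 78-feature input, layer widths, and class count are illustrative, not the paper's configuration:

```python
# Minimal lightweight-IDS sketch (assumptions: generic widths; softmax is
# applied inside the cross-entropy loss at training time).
import torch
import torch.nn as nn

class LightweightIDS(nn.Module):
    def __init__(self, n_features=78, emb=64, n_classes=15):
        super().__init__()
        self.backbone = nn.Sequential(        # two-layer MLP embedding
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, emb), nn.ReLU(),
        )
        self.head = nn.Linear(emb, n_classes) # linear classification head

    def forward(self, x):
        return self.head(self.backbone(x))

model = LightweightIDS()
print(sum(p.numel() for p in model.parameters()))  # parameter count stays small
```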
13 pages, 1775 KB  
Article
Cost-Sensitive Threshold Optimization for Network Intrusion Detection: A Per-Class Approach with XGBoost
by Jaehyeok Cha, Jisoo Jang, Dongil Shin and Dongkyoo Shin
Electronics 2026, 15(7), 1542; https://doi.org/10.3390/electronics15071542 - 7 Apr 2026
Viewed by 321
Abstract
Machine learning-based Network Intrusion Detection Systems (NIDSs) typically optimize uniform metrics such as accuracy and F1-score, overlooking the asymmetric cost structure of real-world security operations, where a missed attack (False Negative (FN)) far outweighs a false alarm (False Positive (FP)). We propose a cost-sensitive threshold optimization framework based on XGBoost, using a 10:1 FN-to-FP cost ratio derived from established cost models. We first demonstrate that the default threshold of 0.5 is suboptimal and that a globally optimized threshold of 0.08 substantially reduces total cost. However, a single global threshold cannot accommodate the heterogeneous detection characteristics of diverse attack types. We therefore introduce Per-Class Thresholding, which assigns independently optimized thresholds to each attack class. Evaluated on CIC-IDS2018 and UNSW-NB15 across five independent random seeds, our method achieves a 28.19% cost reduction over the Random Forest baseline on CIC-IDS2018, demonstrating that attack classes undetectable under the global threshold—including DDoS attack-LOIC-UDP (100%), DoS attacks-SlowHTTPTest (99.79%), and FTP-BruteForce (98.16%)—can achieve near-complete cost elimination through individual per-class threshold search. Cross-dataset validation on UNSW-NB15 further confirms that per-class thresholding consistently improves class-level detection, with cost reductions of 74.10% for Reconnaissance, 69.06% for Backdoor, and 54.42% for Analysis attacks. These results confirm that class-specific threshold calibration is essential for cost-effective intrusion detection.
(This article belongs to the Special Issue IoT Security in the Age of AI: Innovative Approaches and Technologies)
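A minimal sketch of a one-vs-rest threshold search under the stated 10:1 FN-to-FP cost ratio; the grid and synthetic scores are assumptions, not the authors' code:

```python
# Minimal per-class threshold search sketch (assumptions: one-vs-rest scores
# per attack class, uniform 99-point threshold grid).
import numpy as np

def best_threshold(scores, y_true, fn_cost=10.0, fp_cost=1.0):
    grid = np.linspace(0.01, 0.99, 99)
    costs = []
    for t in grid:
        pred = scores >= t
        fn = np.sum(~pred & (y_true == 1))   # missed attacks of this class
        fp = np.sum(pred & (y_true == 0))    # false alarms
        costs.append(fn_cost * fn + fp_cost * fp)
    return grid[int(np.argmin(costs))]

# Per-class calibration: run this once per attack class on its OvR scores.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
scores = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 1000), 0, 1)
print(best_threshold(scores, y))   # typically well below the default 0.5
```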
28 pages, 1021 KB  
Article
Cost-Aware Network Traffic Anomaly Detection with Histogram-Based Gradient Boosting
by Dariusz Żelasko
Appl. Sci. 2026, 16(7), 3496; https://doi.org/10.3390/app16073496 - 3 Apr 2026
Viewed by 265
Abstract
Intrusion Detection Systems (IDSs) operate under asymmetric misclassification costs: false alarms (FP) consume analysts’ time and erode trust, whereas missed attacks (FN) carry business risks. This paper presents a complete pipeline for network anomaly detection on the CIC-IDS2017 dataset using Histogram-Based Gradient Boosting (HGB), with a particular focus on cost-aware threshold selection on a validation split for representative operating regimes w_FP:w_FN ∈ {1:1, 1:2, 1:3, 1:4, 1:5, 1:10}—treated as scenario-based proxies for varying risk posture, attack severity, and analyst workload rather than as universally fixed costs—and on the role of isotonic calibration. The results indicate that (i) under 1:1, the cost-optimal operating point aligns with the F1/MCC optimum; (ii) for 1:k cost regimes, the optimum shifts to lower thresholds, reducing FN at the expense of FP and increasing the alert rate; and (iii) isotonic calibration improves PR/ROC (ranking separation), but in the reported 1:5 experiment it did not reduce the final TEST-set operational cost relative to the uncalibrated run, despite using a separately selected post-calibration threshold. The evaluation includes PR/ROC curves, Cost–Threshold and Alert–Threshold sweeps, per-class recall, and permutation importance. In addition, the proposed approach is compared with unsupervised baselines (Isolation Forest, LOF). The results provide practical guidance for SOC decisions on how to choose thresholds consistent with alert budgets and risk profiles. In deployment, these operating points can be indexed to context (e.g., user type, service class, or time of day), yielding a small library of adaptive thresholds rather than one immutable global threshold.
(This article belongs to the Section Computing and Artificial Intelligence)
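A minimal sketch of the pipeline's two ingredients, HGB plus isotonic calibration followed by a cost–threshold sweep in one of the stated regimes (1:5); the synthetic imbalanced data stands in for the CIC-IDS2017 splits:

```python
# Minimal HGB + isotonic calibration + cost-threshold sweep sketch
# (assumptions: synthetic data, 1:5 regime, uniform threshold grid).
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Isotonic calibration of HGB scores via cross-validation on the training split.
clf = CalibratedClassifierCV(HistGradientBoostingClassifier(),
                             method="isotonic", cv=3)
clf.fit(X_tr, y_tr)
p = clf.predict_proba(X_val)[:, 1]

w_fp, w_fn = 1, 5                    # one representative operating regime
grid = np.linspace(0.01, 0.99, 99)
cost = [w_fp * np.sum((p >= t) & (y_val == 0)) +
        w_fn * np.sum((p < t) & (y_val == 1)) for t in grid]
print("cost-optimal threshold:", grid[int(np.argmin(cost))])
```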
21 pages, 3346 KB  
Article
Hybrid-Pipeline-Based Detection and Classification of HTTP Slow Denial-of-Service Attacks Using Radial Basis Function Neural Networks
by Bashaer H. Alrashid, Mazen Alwadi and Qasem Abu Al-Haija
J. Cybersecur. Priv. 2026, 6(2), 64; https://doi.org/10.3390/jcp6020064 - 2 Apr 2026
Viewed by 339
Abstract
Detecting denial of service traffic remains challenging when malicious sessions exhibit flow characteristics that closely resemble benign network behavior, particularly in low-rate attack settings. This study examines whether autoencoder-based feature compression can improve flow-based intrusion detection while maintaining a deployment-oriented design. We develop a lightweight pipeline that learns a low-dimensional latent representation of tabular flow features using an autoencoder and performs classification using Random Forest, LightGBM, and a radial basis function neural network. Using the CICIDS 2017 dataset, the best performing configurations achieve 99.43 percent accuracy with autoencoder plus Random Forest and 99.39 percent with autoencoder plus LightGBM, while autoencoder plus radial basis function neural network achieves 98.27 percent, with consistently strong precision, recall, and F1-score. The findings support practice by showing that high detection performance can be achieved using compact learned features that reduce input complexity for downstream models, which is beneficial for operational monitoring environments. The study advances knowledge by providing a reproducible evaluation of representation learning as a feature compression step for tabular intrusion detection, and by linking model performance to measurable computational considerations relevant to real-world deployment.
(This article belongs to the Special Issue Cyber Security and Digital Forensics—3rd Edition)
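A minimal sketch of the compress-then-classify pattern (autoencoder latent codes feeding a Random Forest); the dimensions, epoch count, and random data are illustrative stand-ins for scaled flow features:

```python
# Minimal autoencoder-compression + Random Forest sketch (assumptions:
# 78 input features, 16-d latent, full-batch training, toy labels).
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class AE(nn.Module):
    def __init__(self, n_in=78, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_in))

    def forward(self, x):
        return self.dec(self.enc(x))

X = torch.randn(2000, 78)                 # stand-in for scaled flow features
y = (X[:, 0] > 1.0).long().numpy()        # toy attack label

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
mse = nn.MSELoss()
for _ in range(50):                       # reconstruction training
    opt.zero_grad()
    loss = mse(ae(X), X)
    loss.backward()
    opt.step()

Z = ae.enc(X).detach().numpy()            # compact latent features
clf = RandomForestClassifier(n_estimators=100).fit(Z, y)
```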
24 pages, 11701 KB  
Article
MRLA: A Multi-Scale Time-Frequency Representation Learning Model with Lightweight Attention for Network Traffic Anomaly Detection
by Haoran Liu, Ke Guo, Yan Li, Shaohua Wang, Jun Yao and Zi Wang
Appl. Sci. 2026, 16(6), 3008; https://doi.org/10.3390/app16063008 - 20 Mar 2026
Viewed by 307
Abstract
As cyberattacks grow increasingly diverse and sophisticated, achieving accurate yet efficient network traffic anomaly detection has become a fundamental challenge in modern cybersecurity. While existing machine learning methods enable effective feature extraction, they remain limited in jointly modeling multi-scale temporal dynamics and frequency-domain characteristics of anomalous network behaviors, and typically incur substantial computational overhead when processing long traffic sequences. These limitations hinder their effectiveness in real large-scale deployments. To overcome these challenges, this paper proposes a Multi-scale time-frequency Representation learning and Lightweight Attention (MRLA)-based model, which unifies hierarchical time and frequency feature learning with efficient long-range dependency modeling. Extensive experiments on the CIC-IDS2018, CIC-DDoS2019, and UNSW-NB15 datasets with session-aware data splits demonstrate that MRLA achieves F1-scores of 99.94%, 99.78%, and 93.74%, respectively. These results indicate that MRLA consistently delivers high detection accuracy with improved computational efficiency, offering a robust and scalable solution for network traffic anomaly detection across diverse attacks.
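The joint time-frequency representation idea can be sketched simply; the pooling scales and FFT summary below illustrate the two views only and are not the MRLA architecture:

```python
# Minimal multi-scale time + frequency feature sketch (assumptions: raw
# per-flow sequence of length 128, three pooling scales, mean spectrum).
import torch
import torch.nn.functional as F

def time_freq_features(x):   # x: (batch, length) traffic sequence
    # Multi-scale temporal view: average pooling at several resolutions.
    scales = [F.avg_pool1d(x.unsqueeze(1), k).squeeze(1).mean(-1) for k in (2, 4, 8)]
    # Frequency view: mean magnitude of the real FFT spectrum.
    freq = torch.fft.rfft(x, dim=-1).abs().mean(-1)
    return torch.stack(scales + [freq], dim=-1)   # (batch, 4) summary features

feats = time_freq_features(torch.randn(32, 128))
```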
41 pages, 1130 KB  
Article
A Weighted Average-Based Heterogeneous Datasets Integration Framework for Intrusion Detection Using a Hybrid Transformer–MLP Model
by Hesham Kamal and Maggie Mashaly
Technologies 2026, 14(3), 180; https://doi.org/10.3390/technologies14030180 - 16 Mar 2026
Viewed by 627
Abstract
In today’s digital era, cyberattacks pose a critical threat to networks of all scales, from local systems to global infrastructures. Intrusion detection systems (IDSs) are essential for identifying and mitigating such threats. However, existing machine learning-based IDSs often suffer from low detection accuracy, heavy reliance on manual feature extraction, and limited coverage of attack categories. To address these limitations, we propose a modular, deployment-ready intrusion detection framework that integrates multiple heterogeneous datasets through a hybrid transformer–multilayer perceptron (Transformer–MLP) architecture. The system employs three parallel Transformer–MLP models, each specialized for a distinct dataset, whose probabilistic outputs are fused using a weighted decision-level strategy. Unlike traditional feature-level fusion, this strategy ensures module independence, eliminates the need for global retraining when adding new components, and provides seamless modular scalability. The framework accurately identifies twenty-one traffic categories, including one benign and twenty attack classes, derived from a unified mapping across multiple heterogeneous sources to ensure a consistent cross-dataset taxonomy. By combining advanced contextual representation learning with ensemble-based probabilistic fusion, the framework demonstrates high detection accuracy and practical applicability in real-world network environments. The Transformer module captures complex contextual dependencies, while the MLP performs final classification. Class imbalance is mitigated via adaptive synthetic sampling (ADASYN), synthetic minority over-sampling technique (SMOTE), edited nearest neighbor (ENN), and class weight adjustments. Empirical evaluation demonstrates the framework’s high effectiveness: for binary classification, it achieves 99.98% on CICIDS2017, 99.19% on NSL-KDD, and 99.98% on NF-BoT-IoT-v2; for two-stage multi-class classification, 99.56%, 99.55%, and 97.75%; and for one-phase multi-class classification, 99.73%, 99.07%, and 98.23%, respectively. Moreover, the framework enables real-time deployment with 4.8–6.9 ms latency, 9800–14,200 fps throughput, and 412–458 MB memory. These results outperform existing multi-dataset IDS approaches, highlighting the architectural effectiveness, robustness, and practical applicability of the proposed framework.
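A minimal sketch of weighted decision-level fusion over three dataset-specialized experts: each emits class probabilities over the shared 21-class taxonomy and a fixed weight vector blends them. The weights here are placeholders, not the paper's tuned values:

```python
# Minimal decision-level fusion sketch (assumptions: three experts, shared
# 21-class taxonomy, placeholder weights).
import numpy as np

def fuse(probas, weights):
    # probas: list of (n_samples, 21) probability arrays from the experts
    stacked = np.stack(probas)                        # (n_experts, n, 21)
    blended = np.tensordot(weights, stacked, axes=1)  # weighted average over experts
    return blended.argmax(axis=-1)                    # fused class decision

rng = np.random.default_rng(0)
experts = [rng.dirichlet(np.ones(21), size=4) for _ in range(3)]
labels = fuse(experts, weights=np.array([0.4, 0.35, 0.25]))
```

Because fusion happens at the probability level, a fourth expert can be added by extending the list and weight vector, with no retraining of the existing modules, which is the modularity property the abstract emphasizes.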
40 pages, 3992 KB  
Article
Toward Energy-Efficient and Low-Carbon Intrusion Detection in Edge and Cloud Computing Based on GreenShield Cybersecurity Framework
by Abdullah Alshammari
Sensors 2026, 26(6), 1780; https://doi.org/10.3390/s26061780 - 11 Mar 2026
Viewed by 613
Abstract
The rapid growth of edge–cloud computing infrastructures has increased the cybersecurity burden even as it has substantially amplified the energy use and carbon footprint of intrusion detection systems (IDSs). To overcome this challenge, this paper proposes GreenShield, a low-carbon cybersecurity framework combining lightweight cryptography, energy-efficient deep learning, and carbon-conscious system optimization across distributed edge and cloud setups. GreenShield employs a hierarchical federated learning architecture with integrated knowledge distillation and a carbon-aware scheduling controller that dynamically adjusts security response execution based on threat intensity and renewable energy availability. As extensive experiments on the UNSW-NB15 and CIC-IDS2017 datasets show, GreenShield attains 98.73% detection accuracy and is 67.4% more energy efficient than traditional deep-learning-based IDSs. Furthermore, the proposed system reduces operational carbon emissions by up to 97.6%, equivalent to a reduction of around 2.8 kg CO2-equivalent per hour in a typical edge-deployment scenario, without undermining detection performance. These findings suggest that GreenShield can be a meaningful alternative for providing viable, scalable, and sustainable cybersecurity that supports carbon-conscious security workflows in future edge–cloud computing architectures.
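A minimal sketch of the knowledge distillation ingredient (a small edge "student" matching the softened outputs of a larger "teacher"); the temperature, blending weight, and linear stand-in models are assumptions, and the federated and carbon-aware scheduling layers are omitted:

```python
# Minimal knowledge distillation sketch (assumptions: T=4, alpha=0.7,
# linear models standing in for the full IDS networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T   # soft-label matching term
    hard = F.cross_entropy(student_logits, targets)  # ordinary supervision
    return alpha * soft + (1 - alpha) * hard

teacher = nn.Linear(78, 2)   # stand-in for the large cloud-side detector
student = nn.Linear(78, 2)   # stand-in for the lightweight edge detector
x, y = torch.randn(32, 78), torch.randint(0, 2, (32,))
loss = distill_loss(student(x), teacher(x).detach(), y)
```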
27 pages, 2849 KB  
Systematic Review
Intrusion Detection in Fog Computing: A Systematic Review of Security Advances and Challenges
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Computers 2026, 15(3), 169; https://doi.org/10.3390/computers15030169 - 5 Mar 2026
Viewed by 737
Abstract
Fog computing extends cloud services to the network edge to support low-latency IoT applications. However, since fog environments are distributed and resource-constrained, intrusion detection systems must be adapted to defend against cyberattacks while keeping computation and communication overhead minimal. This systematic review presents research on intrusion detection systems (IDSs) for fog computing and synthesizes advances and research gaps. The study was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. Scopus and Web of Science were searched in the title field using TITLE/TI = (“intrusion detection” AND “fog computing”) for 2021–2025. The inclusion criteria were (i) 2021–2025 publications, (ii) journal or conference papers, (iii) English language, and (iv) open access availability; duplicates were removed programmatically using a DOI-first key with a title, year, and author alternative. The search identified 8560 records, of which 4905 were unique and included for qualitative grouping and bibliometric synthesis. Metadata (year, venue, authors, affiliations, keywords, and citations) were extracted and analyzed in Python to compute trends and collaboration. Intrusion detection systems in fog networks were categorized into traditional/signature-based, machine learning, deep learning, and hybrid/ensemble. Hybrid and DL approaches reported accuracy ranging from 95 to 99% on benchmark datasets (such as NSL-KDD, UNSW-NB15, CIC-IDS2017, KDD99, and BoT-IoT). Notable bottlenecks included computational load relative to real-time latency on resource-constrained nodes, elevated false-positive rates for anomaly detection under concept drift, limited generalization to unseen attacks, privacy risks from centralizing data, and limited real-world validation. Bibliometric analyses highlighted the field’s concentration in fast-turnaround, open-access journals such as IEEE Access and Sensors, as well as a small number of highly collaborative author clusters, alongside dominant terms such as “learning,” “federated,” “ensemble,” “lightweight,” and “explainability.” Emerging directions include federated and distributed training to preserve privacy, as well as online/continual learning adaptation. Future work should include real-world evaluation of fog networks, ultra-lightweight yet adaptive hybrid IDSs, self-learning, and secure cooperative frameworks. These insights help researchers select appropriate IDS models for fog networks.
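The DOI-first deduplication rule described above lends itself to a short sketch; the field names and normalization choices below are illustrative assumptions:

```python
# Minimal DOI-first dedup sketch (assumptions: record dicts with doi/title/
# year/authors fields; titles normalized to alphanumeric lowercase).
def dedup_key(rec):
    if rec.get("doi"):
        return ("doi", rec["doi"].strip().lower())
    title = "".join(ch for ch in rec["title"].lower() if ch.isalnum())
    return ("tya", title, rec["year"], rec["authors"][0].lower())

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        k = dedup_key(rec)
        if k not in seen:       # keep the first occurrence of each key
            seen.add(k)
            unique.append(rec)
    return unique

recs = [{"doi": "10.1/x", "title": "A", "year": 2023, "authors": ["Lee"]},
        {"doi": "10.1/X ", "title": "A", "year": 2023, "authors": ["Lee"]}]
print(len(deduplicate(recs)))   # 1: whitespace/case variants collapse
```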
23 pages, 919 KB  
Article
A Hybrid Deep Learning Architecture for Intrusion Detection Deploying Multi-Scale Feature Interaction and Temporal Modeling
by Eva Jakubcova, Maros Jakubec and Peter Pocta
AI 2026, 7(3), 87; https://doi.org/10.3390/ai7030087 - 2 Mar 2026
Viewed by 846
Abstract
Network intrusion detection is a core component of modern cybersecurity, but it remains challenging due to highly imbalanced traffic, heterogeneous feature types, and the presence of short-term temporal dependencies in network flows. Traditional machine learning models often rely on handcrafted features and struggle with complex attack patterns, while deep learning approaches may become overly complex or difficult to interpret. In this paper, we propose a neural intrusion detection method that combines structured feature preprocessing with a compact hybrid architecture. Numerical and categorical traffic features are processed separately using robust normalisation and trainable embeddings, and then merged into a unified representation. The proposed model builds on a multi-scale feature interaction block, followed by channel-wise attention and a single bidirectional gated recurrent unit layer with attention pooling to capture short-term temporal behavior. The method is evaluated on two widely used benchmark datasets, i.e., the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. Experimental results show that the proposed approach consistently outperforms the classical machine learning baselines and achieves competitive or superior performance compared to the recent deep learning methods proposed in the literature. The results confirm that the proposed architectural choices effectively capture both feature interactions and temporal patterns in network traffic.
(This article belongs to the Section AI Systems: Theory and Applications)
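A minimal sketch of the temporal tail described here (one bidirectional GRU layer with attention pooling over time steps); widths are illustrative and the multi-scale interaction block is omitted:

```python
# Minimal BiGRU + attention-pooling sketch (assumptions: 64-d inputs,
# 64-d hidden state per direction).
import torch
import torch.nn as nn

class BiGRUAttnPool(nn.Module):
    def __init__(self, n_in=64, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_in, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # scores each time step

    def forward(self, x):                           # x: (batch, time, features)
        h, _ = self.gru(x)                          # (batch, time, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)      # attention weights over time
        return (a * h).sum(dim=1)                   # attention-pooled summary

pooled = BiGRUAttnPool()(torch.randn(16, 10, 64))   # -> (16, 128)
```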
24 pages, 4005 KB  
Article
Explainable Firewall Penetration Testing Method Employing Machine Learning
by Algimantas Venčkauskas, Jevgenijus Toldinas and Nerijus Morkevičius
Electronics 2026, 15(5), 1030; https://doi.org/10.3390/electronics15051030 - 1 Mar 2026
Viewed by 567
Abstract
Cyber adversaries are becoming more sophisticated, creating complex security challenges as digital services expand. The reliability of the firewall is of the utmost importance in the context of network security since it serves as the first line of protection. Penetration testing is an approach used to evaluate the reliability of a firewall and improve security by uncovering exploitable flaws. Penetration testing solutions are frequently developed using machine learning, and it is important to explain the results obtained during testing. The emergence of explainable AI (XAI) addresses transparency in ML models, which is essential for informed cybersecurity decisions. Additionally, effective penetration testing reports are crucial for organizations, helping them comprehend and address vulnerabilities with tailored mitigation strategies. This study contributes to firewall security by developing an explainable penetration testing method, which includes two machine learning classification models: a binary model for detecting attacks and a multiclass model for identifying attack types with an explainability feature. This research introduces a novel explainability method that emphasizes significant features related to attack types based on multiclass predictions and proposes an approach using the extended System Security Assurance Ontology (SSAO) to clarify vulnerabilities and suggest alternative mitigation strategies. After evaluating numerous ML algorithms on the CIC-IDS2017 dataset, the Fine Tree model was found to offer the best performance. For the binary model, it achieved a validation accuracy of 99.7%, while for the multiclass model, it achieved a validation accuracy of 99.6%. Both models were used to test the firewall for vulnerabilities. Firewall penetration testing using the binary model achieves an accuracy of 82.1%, while the multiclass model achieves an accuracy of 78.7%.
(This article belongs to the Special Issue Recent Advances in Information Security and Data Privacy, 2nd Edition)