Search Results (162)

Search Parameters:
Keywords = stealthiness

23 pages, 6260 KB  
Article
Ditto: An Adaptable and Highly Robust Invisible Backdoor Attack Towards Deep Neural Networks
by Wenhao Zhang, Lianheng Zou, Yingying Xiong, Peng Shi and Xiao He
Electronics 2026, 15(8), 1551; https://doi.org/10.3390/electronics15081551 - 8 Apr 2026
Viewed by 279
Abstract
With the widespread application of deep neural networks across various fields, model security issues have become increasingly prevalent. Backdoor attacks, a covert class of attack, implant malicious behavior during model training, causing the model to perform predetermined tasks under specific trigger conditions. However, current backdoor attacks struggle to strike a good balance between stealthiness and attack success rate, and certain data transformation operations can degrade attack performance. To address these issues, this paper proposes a specialized backdoor attack method called Ditto. It first uses a boundary detection algorithm and a padding algorithm to determine the trigger’s insertion position. The trigger is then dynamically generated by a generative adversarial network that takes the texture features of the images into account. Finally, the trigger is applied to the images and its level of stealthiness is adjusted. Experimental results show that, compared with existing popular backdoor attack methods, Ditto maintains a high level of stealthiness alongside a high attack success rate and high accuracy on clean data. Furthermore, the attack exhibits considerable robustness and adaptability, effectively resisting baseline backdoor defense techniques.
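The final step the abstract describes, applying a trigger with adjustable stealthiness, can be illustrated with a simple alpha-blending sketch. This is a generic illustration with made-up names and values, not the paper's GAN-generated trigger:

```python
import numpy as np

def apply_trigger(image, trigger, alpha=0.1):
    """Blend a trigger pattern into an image; lower alpha = stealthier."""
    poisoned = (1 - alpha) * image + alpha * trigger
    return np.clip(poisoned, 0.0, 1.0)

# Toy example: 4x4 grayscale image with a uniform trigger patch
image = np.zeros((4, 4))
trigger = np.ones((4, 4))
poisoned = apply_trigger(image, trigger, alpha=0.1)
```

In practice the blend ratio is what trades off trigger visibility against attack success rate.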
(This article belongs to the Section Computer Science & Engineering)

20 pages, 1454 KB  
Article
Momentum-Based Adversarial Attacks and Multi-Level Denoising Defenses in Deep Learning-Based Wind Power Forecasting
by Yangming Min, Congmei Jiang, Kang Yang, Xiankui Wen and Kexin Chen
Sensors 2026, 26(7), 2073; https://doi.org/10.3390/s26072073 - 26 Mar 2026
Viewed by 469
Abstract
Deep learning (DL) techniques have significantly advanced wind power forecasting by enhancing accuracy. However, these DL models are vulnerable to adversarial attacks, which can lead to severely inaccurate forecasts. Existing studies in wind power forecasting have rarely addressed the stealthiness and effectiveness of adversarial attacks simultaneously, nor have they investigated defense strategies against multiple perturbation strengths or in black-box scenarios. To this end, we propose an attack algorithm for wind power forecasting, i.e., the momentum iterative fast gradient sign method (MI-FGSM). This algorithm generates adversarial samples by incorporating momentum into the iterative process and adding perturbations to the input samples along the gradient direction. To defend against such attacks under varying perturbation strengths, a defense model called multi-level iterative denoising autoencoder (MLI-DAE) is proposed. MLI-DAE is trained using adversarial samples with multiple perturbation levels to effectively restore attacked inputs to their clean forms. Experimental results under both white-box and black-box scenarios demonstrate that MI-FGSM induces significantly larger forecast errors with smaller perturbation magnitudes compared to FGSM. Furthermore, our proposed MLI-DAE effectively defends against multi-level perturbations without compromising the original forecast accuracy.
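The momentum update at the core of MI-FGSM is standard and can be sketched in a few lines; the toy gradient function below is an assumption standing in for real model gradients:

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.1, steps=10, mu=1.0):
    """Momentum Iterative FGSM: accumulate L1-normalized gradients with
    momentum, step along the sign, and stay inside the eps-ball."""
    x_adv = x.astype(float).copy()
    g = np.zeros_like(x_adv)
    alpha = eps / steps
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)  # momentum accumulation
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)            # project into eps-ball
    return x_adv

# Toy loss L(x) = w.x, whose gradient is the constant w (a stand-in
# for gradients backpropagated through a forecasting model)
w = np.array([1.0, -2.0, 0.5])
x = np.zeros(3)
x_adv = mi_fgsm(x, lambda x_: w, eps=0.1)
```

The momentum term stabilizes the update direction across iterations, which is what makes MI-FGSM transfer better than plain FGSM in black-box settings.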
(This article belongs to the Section Internet of Things)

26 pages, 572 KB  
Article
Physics-Constrained Optimization Framework for Detecting Stealthy Drift Perturbations
by Mordecai Opoku Ohemeng and Frederick T. Sheldon
Mathematics 2026, 14(7), 1113; https://doi.org/10.3390/math14071113 - 26 Mar 2026
Viewed by 414
Abstract
This work develops a zero-trust, physics-constrained mathematical framework for detecting stealthy drift perturbations in power system dynamical models. Such perturbations constitute adversarial, statistical deviations that preserve first-order operating trends, making them difficult to identify using classical residual-based estimators or unconstrained data-driven models. We introduce ZETWIN, a spatio-temporal learning architecture formulated as a constrained optimization problem in which the nodal admittance matrix Ybus acts as a graph-structured linear operator embedded directly into the loss functional. This construction enforces Kirchhoff-consistent latent representations and yields a mathematically grounded zero-trust decision rule that flags any trajectory violating physical feasibility, independent of prior attack signatures. The proposed framework is evaluated using a PyPSA-based AC–DC meshed network, demonstrating an AUROC of 0.994 and an F1 score of 0.969. The formulation highlights how physics-informed constraints, graph operators, and spatio-temporal approximation theory can be combined to construct mathematically interpretable zero-trust detectors for complex dynamical systems.
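The zero-trust decision rule described here, flagging any trajectory that violates Kirchhoff consistency under the Ybus operator, can be sketched as follows. The 2-bus system, threshold `tau`, and injected drift are illustrative assumptions, not values from the paper:

```python
import numpy as np

def physics_residual(Ybus, V, I):
    """Kirchhoff-consistency residual ||I - Ybus @ V|| per time snapshot
    (columns of V and I are snapshots)."""
    return np.linalg.norm(I - Ybus @ V, axis=0)

def flag_infeasible(Ybus, V, I, tau=1e-3):
    """Zero-trust rule: flag any snapshot whose residual exceeds tau,
    with no reference to prior attack signatures."""
    return physics_residual(Ybus, V, I) > tau

# Toy 2-bus admittance matrix and two voltage snapshots
Ybus = np.array([[ 2.0, -1.0],
                 [-1.0,  2.0]])
V = np.array([[1.0, 1.0],
              [0.9, 0.9]])
I = Ybus @ V          # physically consistent currents
I[:, 1] += 0.05       # small drift perturbation injected at t=1
flags = flag_infeasible(Ybus, V, I)
```

Because the rule tests physical feasibility rather than learned attack patterns, even a drift that preserves first-order trends is caught once it breaks the Ybus relation.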

22 pages, 31045 KB  
Article
Robust and Stealthy White-Box Watermarking for Intellectual Property Protection of Remote Sensing Object Detection Models
by Lingjun Zou, Xin Xu, Weitong Chen, Qingqing Hong and Di Wu
Remote Sens. 2026, 18(7), 985; https://doi.org/10.3390/rs18070985 - 25 Mar 2026
Viewed by 326
Abstract
Remote sensing object detection (RSOD) models play an increasingly important role in modern remote sensing systems. However, during model delivery, sharing, and deployment, RSOD models face increasing risks of unauthorized redistribution, illegal replication, and intellectual property infringement. To mitigate these threats, this paper proposes a white-box watermarking framework for RSOD models that enables reliable copyright verification while preserving the performance of the primary detection task. Specifically, a gradient-based sensitivity analysis of the detection loss is first performed to adaptively identify model parameters that minimally affect detection performance, which are then selected as watermark carriers. Subsequently, a parameter-ranking-based watermark encoding scheme is developed, where watermark bits are embedded by enforcing relative ordering constraints between parameter pairs. To further improve robustness under practical deployment conditions, an attack-simulation-driven training strategy is introduced, in which common perturbations and watermark removal attacks are simulated during the embedding process. In addition, a stealthiness enhancement strategy based on statistical distribution constraints is designed to maintain consistency between the distribution of watermarked parameters and that of the original model, thereby reducing the risk of watermark exposure and localization. Extensive experiments across multiple RSOD datasets and detection architectures demonstrate that the proposed method achieves a high copyright verification success rate with negligible impact on detection accuracy and exhibits strong robustness and stealthiness against a variety of watermark removal attacks.
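The parameter-ranking encoding idea, embedding bits as relative-order constraints between parameter pairs, can be sketched like this. It is a minimal illustration, not the paper's implementation; enforcing order by swapping the pair is one possible choice, and it has the side effect of leaving the parameter distribution unchanged, echoing the stealthiness constraint:

```python
import numpy as np

def embed_bits(params, pairs, bits):
    """Embed each bit as a relative-order constraint on a parameter pair:
    bit 1 -> params[i] > params[j], bit 0 -> params[i] < params[j].
    Swapping (rather than overwriting) preserves the set of values."""
    w = params.copy()
    for (i, j), b in zip(pairs, bits):
        if (w[i] > w[j]) != bool(b):
            w[i], w[j] = w[j], w[i]
    return w

def extract_bits(params, pairs):
    """Recover the watermark by comparing each pair's order."""
    return [int(params[i] > params[j]) for i, j in pairs]

rng = np.random.default_rng(0)
params = rng.normal(size=8)           # stand-in for selected carrier weights
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
bits = [1, 0, 1, 1]
wm = embed_bits(params, pairs, bits)
```

Verification then only needs white-box access to the carrier parameters and the secret pair list.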

28 pages, 1099 KB  
Article
DELP-Net: A Differentiable Entropy Layer Pyramid Network for End-to-End Low-Rate DoS Detection
by Jinyi Wang, Congyuan Xu and Jun Yang
Entropy 2026, 28(3), 328; https://doi.org/10.3390/e28030328 - 15 Mar 2026
Viewed by 234
Abstract
Low-rate Denial-of-Service (LDoS) attacks exploit periodic traffic pulses to trigger congestion while maintaining a low average rate, making them highly stealthy and difficult to distinguish from legitimate bursty traffic using threshold-based or simple statistical detectors. To address this challenge, this paper proposes DELP-Net, an end-to-end Differentiable Entropy Layer Pyramid Network for window-level online LDoS detection directly from raw traffic. DELP-Net combines a multi-scale one-dimensional convolutional pyramid with a differentiable Rényi-entropy-driven attention mechanism to capture distributional regularity and weak repetitive patterns characteristic of LDoS traffic. In addition, an entropy-conditioned temporal convolutional network is employed to model cross-window periodic dependencies in a lightweight manner, together with an entropy-regularized hybrid loss to enhance robustness under complex background traffic. Experiments on the low-rate DoS dataset show that DELP-Net achieves an average F1 score of 0.9877 across six LDoS attack types, with a detection rate of 98.69% and a false-positive rate of 1.15%, demonstrating its effectiveness and suitability for practical online intrusion detection deployments.
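A Rényi entropy becomes differentiable when it is computed from softmax-normalized activations, since every step is smooth; a minimal sketch (function name and inputs are illustrative, not the paper's layer):

```python
import numpy as np

def renyi_entropy(x, alpha=2.0, eps=1e-12):
    """Renyi entropy of order alpha over a softmax-normalized window:
    H_alpha(p) = log(sum(p_i^alpha)) / (1 - alpha). Softmax, power, sum,
    and log are all smooth, so gradients can flow through this score."""
    e = np.exp(x - np.max(x))        # numerically stable softmax
    p = e / np.sum(e)
    return np.log(np.sum(p ** alpha) + eps) / (1.0 - alpha)

uniform = renyi_entropy(np.zeros(8))                  # maximal: log(8)
peaked = renyi_entropy(np.array([10.0] + [0.0] * 7))  # near zero
```

High entropy indicates evenly spread traffic within a window, while the low-entropy, repetitive pulse structure of LDoS traffic yields a distinctly smaller score, which is what the attention mechanism can weight on.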
(This article belongs to the Section Multidisciplinary Applications)

16 pages, 396 KB  
Review
Security Threats and AI-Based Detection Techniques in IoT Chips
by Hiba El Balbali and Anas Abou El Kalam
Chips 2026, 5(1), 9; https://doi.org/10.3390/chips5010009 - 4 Mar 2026
Viewed by 667
Abstract
The rapid expansion of the Internet of Things (IoT) has exposed resource-limited devices to novel physical threats, such as Side-Channel Attacks (SCAs) and Hardware Trojans (HTs). Traditional security mechanisms are often incapable of withstanding such hardware-based attacks, particularly on low-power System-on-Chip (SoC) designs, where static defenses can incur 2× to 3× overhead in silicon area and power. Herein, we examine the gap between hardware security and embedded AI. We present a comprehensive survey of the current hardware threat landscape and analyze the emergence of “Secure-by-Design” paradigms, specifically focusing on the integration of Edge AI and TinyML as active, on-chip intrusion detection mechanisms. This review presents a critical analysis of the trade-offs of running lightweight ML models on hardware by comparing state-of-the-art approaches. Our analysis highlights that optimized architectures, such as Mamba-Enhanced Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs), can achieve detection accuracies exceeding 99% against SCAs and above 92% against stealthy Hardware Trojans, while offering up to 75% lower power consumption compared to standard deep learning baselines. Finally, open challenges such as adversarial attacks on defense models are briefly discussed, with a focus on future directions toward constructing secure chips based on robust, AI-driven technology.
(This article belongs to the Special Issue Emerging Issues in Hardware and IC System Security)

21 pages, 358 KB  
Article
SecureFedGuard: Authenticated and Backdoor-Resilient Federated Learning with Dual-View Gradient Forensics
by Tuli Chen, Yantao Li and Shu Gong
Electronics 2026, 15(5), 1010; https://doi.org/10.3390/electronics15051010 - 28 Feb 2026
Viewed by 346
Abstract
Federated learning (FL) enables collaborative model training without centralizing raw data, yet practical deployments remain vulnerable to security threats such as Byzantine model poisoning, stealthy backdoor implantation, and integrity attacks that exploit the opacity of client updates. This paper presents SecureFedGuard, a security-centric FL framework that introduces a novel combination of (i) dual-view update authentication that binds each client update to a lightweight stochastic gradient fingerprint, enabling server-side integrity screening without accessing client data, and (ii) backdoor-resilient aggregation driven by cross-round spectral forensics and adaptive coordinate-wise trimming guided by an estimated benign subspace. SecureFedGuard is designed to be compatible with secure aggregation and does not require trusted hardware, public datasets for pretraining, or expensive per-client verification. We provide a simple robustness analysis that clarifies when benign updates dominate the estimated subspace under mixed benign/malicious participation. Experiments on real FL benchmarks (vision and language) under diverse threat models show that SecureFedGuard substantially improves clean accuracy and reduces the backdoor attack success rate compared with strong baselines, while adding modest communication and computation overhead. These results suggest a practical path toward integrity-preserving and backdoor-resistant FL without weakening the privacy boundary between clients and the server.
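Coordinate-wise trimming, one ingredient of the aggregation described above, can be sketched with a plain trimmed mean. This is a simplified stand-in for the paper's adaptive, subspace-guided variant:

```python
import numpy as np

def trimmed_mean(updates, k):
    """Coordinate-wise trimmed mean: per coordinate, drop the k largest
    and k smallest client values before averaging, bounding the influence
    of up to k malicious clients."""
    s = np.sort(np.asarray(updates), axis=0)   # sort each coordinate across clients
    return s[k:len(updates) - k].mean(axis=0)

benign = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
malicious = [np.array([100.0, -100.0])]        # one poisoned update
agg = trimmed_mean(benign + malicious, k=1)
```

The extreme update is trimmed away in every coordinate, so the aggregate stays close to the benign mean.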
(This article belongs to the Special Issue Security and Privacy in Distributed Machine Learning)

26 pages, 10348 KB  
Article
A Resilient Ensemble Deep Learning Architecture for Load Forecasting Against FDI Attack
by Zhenya Chen, Yameng Zhang, Bin Liu, Ming Yang and Xuguo Jiao
Electronics 2026, 15(5), 991; https://doi.org/10.3390/electronics15050991 - 27 Feb 2026
Viewed by 265
Abstract
Short-term load forecasting (STLF) is crucial for ensuring power grid stability and economic dispatch. Its accuracy heavily depends on the quality of the input data. However, collecting operational data via the power system’s communication network poses a significant vulnerability to cyberattacks, particularly stealthy False Data Injection (FDI) attacks. By closely mimicking normal load fluctuations, these attacks evade conventional detection, thus compromising forecasting reliability. To address this challenge, this paper proposes a novel resilient load forecasting framework that integrates two-stage attack detection with robust ensemble learning. In the detection stage, attack identification is performed through seasonal decomposition and AE-BiLSTM reconstruction, followed by restoration using periodic-consistent historical means and secondary screening via second-order differencing (SOD). In the forecasting stage, an improved Multi-Objective Whale Migration Algorithm (MO-WMA) is employed to adaptively optimize ensemble weights for intelligent fusion, significantly enhancing prediction accuracy and robustness and providing a generalizable solution for intelligent grid load forecasting. Experiments were conducted on the Independent System Operator of New England (ISO New England, 2012–2014) load dataset under four typical FDI attack scenarios, with test sets including diverse attack intensities and temporal patterns. Results show that the framework achieves 98.98% attack detection accuracy and improves the R2 forecasting metric from 0.9053 to 0.9851, approaching attack-free performance and demonstrating effective recovery of forecasting accuracy and generalization capability.
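Second-order differencing (SOD) as a screening step can be illustrated directly: smooth load curves have small second differences, so an injected spike stands out. The data and threshold below are synthetic illustrations:

```python
import numpy as np

def sod_screen(series, tau):
    """Second-order differencing screen: flag windows whose second
    difference |x[k+2] - 2*x[k+1] + x[k]| exceeds tau. Each flagged
    index k marks the window series[k:k+3]."""
    return np.flatnonzero(np.abs(np.diff(series, n=2)) > tau)

t = np.arange(50, dtype=float)
load = np.sin(2 * np.pi * t / 24)   # smooth daily load pattern
load[30] += 0.5                     # stealthy injected deviation
idx = sod_screen(load, tau=0.3)
```

For the smooth sinusoid the second difference stays below about 0.07, so only the windows touching the injected point at index 30 are flagged.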

42 pages, 1277 KB  
Article
A Hybrid Time Series Forecasting Model Combining ARIMA and Decision Trees to Detect Attacks in MITRE ATT&CK Labeled Zeek Log Data
by Raymond Freeman, Sikha S. Bagui, Subhash C. Bagui, Dustin Mink, Sarah Cameron and Germano Correa Silva De Carvalho
Electronics 2026, 15(4), 871; https://doi.org/10.3390/electronics15040871 - 19 Feb 2026
Viewed by 445
Abstract
Intrusion detection systems face challenges in processing high-volume network traffic while maintaining accuracy across diverse low-volume attack types. This study presents a hybrid approach combining ARIMA time series forecasting with Decision Tree classification to detect attacks in Zeek network flow data labeled with MITRE ATT&CK tactics, leveraging PySpark for scalability. ARIMA identifies temporal anomalies, which Decision Trees then classify by attack type. The ARIMA model was evaluated across 13 MITRE ATT&CK tactics, though only 7 maintained sufficient class balance for valid assessment. Results are reported at three evaluation levels: Baseline (Decision Tree only), ARIMA-DT (Decision Tree tested on ARIMA-filtered anomalies), and End-to-End (pipeline performance measured against the original test population). The hybrid model demonstrated two distinct benefits: performance improvement for detectable attacks and detection enablement for previously undetectable attacks. For high-volume attacks with existing baseline detection, ARIMA preprocessing substantially improved performance; for example, Reconnaissance achieved an ARIMA-DT F1 score of 99.71% (from a baseline of 80.88%), with End-to-End metrics confirming this improvement at a 97.59% F1-score. Credential Access reached a perfect 100% precision and recall on the ARIMA-filtered subset (from a baseline recall of 7.48%); however, End-to-End evaluation revealed that ARIMA filtering removed the vast majority of Credential Access attacks, resulting in a 1.28% End-to-End F1-score, worse than the baseline F1-score of 7.41%, demonstrating that the hybrid pipeline is counterproductive for attack types whose flow characteristics closely resemble legitimate traffic.
More significantly, ARIMA preprocessing enabled detection where traditional Decision Trees completely failed (0% recall) for four stealthy attack types: Defense Evasion (ARIMA-DT recall of 93.22%, End-to-End 67.83%), Discovery (ARIMA-DT recall of 100%, End-to-End 63.43%), Persistence (ARIMA-DT recall of 86.92%, End-to-End 73.38%), and Privilege Escalation (ARIMA-DT recall of 89.93%, End-to-End 64.68%). These results demonstrate that ARIMA-based statistical anomaly detection is particularly effective for attacks involving subtle, low-volume activities that blend with legitimate operations, while also improving classification accuracy for high-volume reconnaissance activities.
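The first-stage idea, a statistical forecaster flagging temporal anomalies that a classifier then inspects, can be sketched with a simple AR(1) residual filter. This is a stand-in for the paper's ARIMA models; the data and threshold are illustrative:

```python
import numpy as np

def ar1_anomaly_filter(series, k=3.0):
    """Fit an AR(1) model by least squares, then flag samples whose
    one-step prediction residual exceeds k standard deviations. In the
    hybrid pipeline, only flagged windows would reach the Decision Tree."""
    x, y = series[:-1], series[1:]
    phi = np.dot(x, y) / np.dot(x, x)      # AR(1) coefficient
    resid = y - phi * x                    # one-step prediction residuals
    thresh = k * resid.std()
    return np.flatnonzero(np.abs(resid) > thresh) + 1  # shift to series index

rng = np.random.default_rng(1)
traffic = rng.normal(100.0, 1.0, size=200)   # flow counts per window
traffic[120] += 25.0                         # burst from a low-volume attack
anomalies = ar1_anomaly_filter(traffic)
```

The filter reduces the classifier's workload to the small set of temporally anomalous windows, which is also why it can discard attacks whose traffic looks statistically normal.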
(This article belongs to the Special Issue Recent Advances in Intrusion Detection Systems Using Machine Learning)

37 pages, 20040 KB  
Article
Towards LLM-Driven Cybersecurity in Autonomous Vehicles: A Big Data-Empowered Framework with Emerging Technologies
by Aristeidis Karras, Leonidas Theodorakopoulos, Christos Karras and Alexandra Theodoropoulou
Mach. Learn. Knowl. Extr. 2026, 8(2), 43; https://doi.org/10.3390/make8020043 - 11 Feb 2026
Viewed by 980
Abstract
Modern Autonomous Vehicles generate large volumes of heterogeneous in-vehicle data, making cybersecurity a critical challenge as adversarial attacks become increasingly adaptive, stealthy, and multi-protocol. Traditional intrusion detection systems often fail under these conditions because of their limited contextual understanding, poor robustness to distribution shifts, and insufficient regulatory transparency. This study introduces LLM-Guardian, a hierarchical intrusion detection framework with decision-making mechanisms that integrates Large Language Models (LLMs) with classical statistical detection theory, optimal transport drift analysis, graph neural networks, and formal uncertainty quantification. LLM-Guardian uses semantic anomaly scoring, conformal prediction for distribution-free confidence calibration, adaptive cumulative sum (CUSUM) sequential testing for low-latency detection, and topology-aware GNN reasoning designed to identify coordinated attacks across CAN, Ethernet, and V2X interfaces. In this work, the framework is empirically evaluated on four heterogeneous CAN-bus datasets, while the Ethernet and V2X components are instantiated at the architectural level and left as directions for future multi-protocol experimentation.
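The CUSUM sequential test used for low-latency detection follows a classic recursion; below is a minimal one-sided sketch over hypothetical anomaly scores (the drift and threshold constants are illustrative, not the framework's adaptive values):

```python
def cusum(scores, drift=0.5, threshold=5.0):
    """One-sided CUSUM sequential test: accumulate evidence that the
    anomaly score has shifted upward and alarm as soon as the running
    statistic crosses the threshold, giving low detection latency."""
    s = 0.0
    for t, x in enumerate(scores):
        s = max(0.0, s + x - drift)   # drift discounts in-control noise
        if s > threshold:
            return t                   # alarm time
    return -1                          # no alarm raised

normal = [0.1, 0.3, 0.2, 0.1, 0.2, 0.3]
attack = normal + [2.0] * 5           # scores jump when an attack starts
t_alarm = cusum(attack)
```

Because evidence accumulates across messages instead of thresholding each one, small but persistent shifts are caught within a few samples of onset.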

33 pages, 745 KB  
Article
XAI-Driven Malware Detection from Memory Artifacts: An Alert-Driven AI Framework with TabNet and Ensemble Classification
by Aristeidis Mystakidis, Grigorios Kalogiannnis, Nikolaos Vakakis, Nikolaos Altanis, Konstantina Milousi, Iason Somarakis, Gabriela Mihalachi, Mariana S. Mazi, Dimitris Sotos, Antonis Voulgaridis, Christos Tjortjis, Konstantinos Votis and Dimitrios Tzovaras
AI 2026, 7(2), 66; https://doi.org/10.3390/ai7020066 - 10 Feb 2026
Viewed by 1353
Abstract
Modern malware presents significant challenges to traditional detection methods, often leveraging fileless techniques, in-memory execution, and process injection to evade antivirus and signature-based systems. To address these challenges, alert-driven memory forensics has emerged as a critical capability for uncovering stealthy, persistent, and zero-day threats. This study presents a two-stage host-based malware detection framework that integrates memory forensics, explainable machine learning, and ensemble classification, designed as a post-alert asynchronous SOC workflow balancing forensic depth and operational efficiency. Utilizing the MemMal-D2024 dataset—comprising rich memory forensic artifacts from Windows systems infected with malware samples whose creation metadata spans 2006–2021—the system performs malware detection using features extracted from volatile memory. In the first stage, a TabNet (Attentive Interpretable Tabular Learning) model is used for binary classification (benign vs. malware), leveraging its sequential attention mechanism and built-in explainability. In the second stage, a Voting Classifier ensemble, composed of Light Gradient Boosting Machine (LGBM), eXtreme Gradient Boosting (XGB), and Histogram Gradient Boosting (HGB) models, is used to identify the specific malware family (Trojan, Ransomware, Spyware). To reduce memory dump extraction and analysis time without compromising detection performance, only a curated subset of 24 memory features—operationally selected to reduce acquisition/extraction time and validated via redundancy inspection, model explainability (SHAP/TabNet), and training data correlation analysis—was used during training and runtime, identifying the best trade-off between memory analysis and detection accuracy.
The pipeline, which is triggered by host-based Wazuh Security Information and Event Management (SIEM) alerts, achieved 99.97% accuracy in binary detection and 70.17% multiclass accuracy, resulting in an overall performance of 87.02%, while providing both global and local explainability to ensure operational transparency and forensic interpretability. This approach provides an efficient and interpretable detection solution used in combination with conventional security tools as an extra layer of defense suitable for modern threat landscapes.

22 pages, 861 KB  
Article
STD: Sensor-Oriented Temporal Detector Against Multi-Type Load Redistribution Attacks in Smart Grid
by Yunhao Yu, Boda Zhang, Mengxiang Liu and Xuguo Jiao
Electronics 2026, 15(4), 746; https://doi.org/10.3390/electronics15040746 - 10 Feb 2026
Viewed by 293
Abstract
The modern smart grid integrates information and communication technology (ICT) with electronic devices, but this integration introduces cybersecurity risks. Load measurements, crucial for grid operation, are vulnerable to attacks, particularly Load Redistribution Attacks (LRAs). LRAs maliciously alter load readings to mislead control systems without being detected by conventional methods. This paper first introduces two advanced LRA variants: a stealthy-enhanced LRA designed to bypass sophisticated data-driven detectors, and an impact-enhanced LRA engineered to cause significant operational disruptions, such as increased generation costs. To address these evolving threats, we propose a novel Sensor-oriented Temporal Detector (STD). Unlike existing methods that often rely on aggregate data or labeled attack examples, our STD focuses on the unique temporal patterns of individual sensor measurements. It achieves this by combining principal subspace projection to identify normal data subspaces with sequential change extraction to detect subtle deviations over time. This approach allows the STD to identify various LRA types effectively, even without prior knowledge of attack signatures. Extensive simulations validate the destructive impact of our proposed LRA variants and demonstrate the superior detection performance of the STD against these sophisticated attacks.
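The principal subspace projection step can be sketched with plain PCA: learn the subspace spanned by normal sensor readings, then score a new reading by its residual outside that subspace. The data below are synthetic, and the paper's sequential change extraction is not reproduced here:

```python
import numpy as np

def subspace_detector(train, r):
    """Learn the r-dimensional principal subspace of normal readings;
    score a new reading by its projection residual onto the orthogonal
    complement (large residual = off-subspace, i.e., suspicious)."""
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]                    # projector onto the subspace
    def score(x):
        d = x - mean
        return np.linalg.norm(d - P @ d)     # residual outside the subspace
    return score

rng = np.random.default_rng(2)
latent = rng.normal(size=(500, 1))
# Normal loads vary along one correlated direction plus small noise
normal = latent @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(500, 3))
score = subspace_detector(normal, r=1)
clean = score(np.array([1.0, 2.0, -1.0]))    # lies along the normal direction
attacked = score(np.array([1.0, -2.0, 1.0])) # redistributed load pattern
```

A redistribution that keeps totals plausible still rotates the reading out of the learned subspace, which is what the residual catches.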

14 pages, 971 KB  
Proceeding Paper
Deep Learning for Cybersecurity Threat Detection in Industrial Process Control and Monitoring Systems
by Godfrey Perfectson Oise, Joy Akpowehbve Odimayomi, Belinda Nkem Unuigbokhai, Babalola Eyitemi Akilo and Samuel Abiodun Oyedotun
Eng. Proc. 2025, 117(1), 43; https://doi.org/10.3390/engproc2025117043 - 9 Feb 2026
Viewed by 556
Abstract
The increasing digital integration of Industrial Control Systems (ICS), including Supervisory Control and Data Acquisition (SCADA) and Distributed Control Systems (DCSs), has improved operational efficiency while simultaneously increasing exposure to cyber threats. Traditional signature-based intrusion detection systems are limited in detecting novel and stealthy attacks in dynamic industrial environments. This study presents a deep learning–based anomaly detection framework for ICS cybersecurity using multivariate time-series data from sensors, actuators, and network traffic. Three architectures, Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformer models, are evaluated using the HAI Security Dataset. Experimental results show that the Transformer model achieves the highest accuracy (92%), followed by CNN (91%) and LSTM (90%), with all models attaining an F1-score of 91%. The Transformer demonstrates superior generalization by effectively modelling complex temporal dependencies. Key challenges, including data imbalance, overfitting, and limited interpretability, are discussed alongside mitigation strategies such as regularization, early stopping, cost-sensitive learning, hybrid modelling, federated learning, and digital twin integration. The findings demonstrate the effectiveness of deep learning for scalable, real-time cybersecurity threat detection in industrial control environments.
(This article belongs to the Proceedings of The 4th International Electronic Conference on Processes)

18 pages, 947 KB  
Article
A Classifier with Unknown Pattern Recognition for Domain Name System Tunneling Detection in Dynamic Networks
by Huijuan Dong, Zengwei Zheng and Shenfei Pei
Electronics 2026, 15(3), 709; https://doi.org/10.3390/electronics15030709 - 6 Feb 2026
Viewed by 427
Abstract
Domain Name System (DNS) tunneling, a stealthy attack that exploits DNS infrastructure, poses critical threats to dynamic networks and is evolving with emerging attack patterns. This study aims to accurately classify multi-pattern legitimate and malicious traffic and to identify previously unseen attack patterns. We focus on two core research questions: how to accurately classify known-pattern DNS queries, and how to reliably identify unknown-pattern samples. The objective is to develop an unsupervised classification approach that integrates multi-pattern adaptation with the recognition of unknown patterns. We formalize the task as Emerging Pattern Classification and propose the Medium Neighbors Forest, a forest-based model that uses the “medium neighbor” mechanism and clustering to identify unknown patterns. Experiments verify that the proposed model effectively identifies unseen patterns, offering a new perspective for DNS tunneling detection.
(This article belongs to the Special Issue AI for Cybersecurity and Emerging Technologies for Secure Systems)

27 pages, 496 KB  
Article
An Intelligent Sensing Framework for Early Ransomware Detection Using MHSA-LSTM Machine Learning
by Abdullah Alqahtani, Mordecai Opoku Ohemeng and Frederick T. Sheldon
Sensors 2026, 26(3), 952; https://doi.org/10.3390/s26030952 - 2 Feb 2026
Cited by 2 | Viewed by 575
Abstract
Ransomware represents a critical and evolving cybersecurity threat that often evades traditional defenses during its early stages. We present a novel intelligent sensing framework (ISF) designed for proactive, early-stage ransomware detection, centered on a Multi-Head Self-Attention Long Short-Term Memory (MHSA-LSTM) sensor model. The core innovation of this sensor is its self-attention mechanism, which is augmented to autonomously prioritize the most discriminative behavioral features by incorporating a relevance coefficient derived from information gain (μ), thereby filtering out noise and overcoming data scarcity inherent in initial attack phases. The framework was validated using a comprehensive dataset derived from the dynamic analysis of 39,378 ransomware samples and 9732 benign applications. The MHSA-LSTM sensor achieved superior performance, recording a peak accuracy of 98.4%, a low False Positive Rate (FPR) of 0.089, and an F1 score of 0.972 using an optimized 25-feature set. This performance consistently surpassed established sequence models, including CNN-LSTM and Stacked LSTM, confirming the significant potential of the ISF as a robust and scalable solution for enhancing defenses against modern, stealthy threats. Most significantly, integration of μ as a statistical anchor resulted in a 49% reduction in False Positive Rates (FPRs) compared to standard attention-based models. This addresses the main operational barrier to deploying deep learning sensors in live environments.
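The relevance coefficient μ is derived from information gain; for a binary behavioral feature the underlying computation looks like this (a generic sketch of information gain, not the paper's exact formulation of μ):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """How much knowing a binary feature reduces label entropy; scores
    like this can reweight attention toward discriminative features."""
    h = entropy(labels)
    for v in (0, 1):
        mask = feature == v
        if mask.any():
            h -= mask.mean() * entropy(labels[mask])
    return h

labels  = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # ransomware vs. benign
perfect = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # perfectly predictive feature
useless = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # uninformative feature
```

A perfectly predictive feature scores the full label entropy (here 1 bit), while an uninformative one scores zero, so attention weighted by such a coefficient naturally suppresses noisy behavioral features.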
(This article belongs to the Special Issue Intelligent Sensors for Security and Attack Detection)