Search Results (463)

Search Parameters:
Keywords = traffic anomaly detection

23 pages, 1700 KB  
Article
Graph-Attentive Cyber–Physical Attack Detection and Forensic Attribution in Smart Grids: A Two-Stage Pipeline Combining Physical Anomaly Detection with Network Traffic Analysis
by Danilo Greco and Giovanni Battista Gaggero
Energies 2026, 19(10), 2394; https://doi.org/10.3390/en19102394 - 16 May 2026
Abstract
Smart grids increasingly rely on digital communication, expanding the attack surface beyond the reach of conventional network intrusion-detection systems. Physics-based monitoring can detect anomalies that bypass traffic inspection, but most prior methods only provide binary detection and do not identify attackers or describe associated network behaviour. This paper presents a two-stage cyber–physical detection and attribution pipeline for the IEEE 14-bus smart grid. In Stage 1, a four-layer GATv2 model analyses sliding windows of PLC sensor data and operates as a binary anomaly detector (Benign vs. Attack), achieving 96.39±1.26% accuracy, macro-F1 0.949±0.019, recall 0.992±0.007, and ROC-AUC 0.994±0.005 (mean ± std, 5 seeds, tuned configuration). GATv2 achieves the highest recall among all tested binary classifiers (Random Forest: 0.970; SVM: 0.860; KNN: 0.988 at low AUC 0.759), the primary metric in safety-critical intrusion detection where a missed attack is more dangerous than a false alarm. A Welch t-test across five independent seeds confirms that GATv2 and RF are statistically equivalent in accuracy (t=2.030, p=0.096). A six-class ablation study reveals that Backdoor is physically near-invisible (F1 =0.238, lowest among all classes), motivating the network attribution stage. In Stage 2, triggered only after anomaly detection, a LightGBM model trained on 27 network-traffic features attributes the attack campaign, reaching 83.05±0.00% accuracy and macro-F1 0.819±0.002 across all six cyber classes. A final enrichment stage correlates anomaly windows with network events to extract attacker IP and MAC information, suspicious ports, Modbus manipulation signals, and connection-rate anomalies, producing a structured forensic report. Ablations and visual analyses show that graph-based physical sensing and statistical network attribution are complementary. 
To the best of our knowledge, this is the first work to combine topology-aware GNN physical detection, multi-class cyber attribution, and automated forensic enrichment in a single pipeline evaluated on this dataset. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
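The two-stage gate described in this abstract can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the paper uses a GATv2 GNN for Stage 1 and LightGBM for Stage 2, whereas RandomForest models on synthetic data are used here purely to show the control flow (Stage 2 runs only after Stage 1 flags an anomaly).

```python
# Hypothetical sketch of the two-stage pipeline: a binary physical-anomaly
# detector runs on every sensor window, and the cyber-attribution classifier
# is invoked only for windows flagged as attacks. All data and labels below
# are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_phys = rng.normal(size=(200, 16))            # PLC sensor windows (synthetic)
y_attack = (X_phys[:, 0] > 0.5).astype(int)    # 1 = attack window (synthetic)
X_net = rng.normal(size=(200, 27))             # 27 network-traffic features
y_campaign = rng.integers(0, 6, size=200)      # six cyber classes (synthetic)

stage1 = RandomForestClassifier(random_state=0).fit(X_phys, y_attack)
stage2 = RandomForestClassifier(random_state=0).fit(X_net, y_campaign)

def detect_and_attribute(x_phys, x_net):
    """Return None for benign windows, else the attributed campaign class."""
    if stage1.predict(x_phys.reshape(1, -1))[0] == 0:
        return None                            # benign: stage 2 never runs
    return int(stage2.predict(x_net.reshape(1, -1))[0])

verdict = detect_and_attribute(X_phys[0], X_net[0])
```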
19 pages, 7841 KB  
Article
A Network Intrusion Detection System Based on VAE-CWGAN and Feature Selection
by Shiwen Li and Ruifeng Shi
Information 2026, 17(5), 486; https://doi.org/10.3390/info17050486 - 15 May 2026
Abstract
In network intrusion detection, class imbalance, the scarcity of minority-class attack samples, high feature dimensionality, and substantial feature redundancy are prevalent issues that limit the detection capability of intrusion detection models. To address these issues, this paper proposes a network traffic anomaly detection method based on a Variational Autoencoder and a Conditional Wasserstein Generative Adversarial Network (VAE-CWGAN). First, a feature selection strategy that combines ANOVA and mutual information is employed to select informative network traffic features, thereby improving the discriminative capability of the input features. Second, a minority-class sample generation model that integrates VAE and CWGAN is constructed. The VAE is used to learn the latent distribution characteristics of minority-class attack samples, while class-conditional constraints and the Wasserstein distance are introduced to generate high-quality synthetic minority-class samples, thereby alleviating class imbalance in the training dataset. Finally, Random Forest (RF), a representative machine learning classifier, is adopted for the classification experiments. Experimental results on the NSL-KDD dataset demonstrate that the proposed method performs well in minority-class attack detection, achieving Precision, Recall, and F1-score values of 95.89%, 75.18%, and 84.28% for the R2L class and 77.08%, 55.22%, and 64.35% for the U2R class, respectively. Full article
(This article belongs to the Section Information Security and Privacy)
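The feature-selection step described above (ANOVA combined with mutual information) can be sketched with scikit-learn. The intersection rule used here, keeping features ranked in the top-k by both criteria, is an illustrative assumption; the paper's exact combination scheme may differ, and the data are synthetic.

```python
# Minimal sketch: ANOVA F-scores and mutual information each rank the
# features; features ranked top-k by both criteria are kept (assumed rule).
from functools import partial

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

# Synthetic stand-in for labelled network traffic (20 features, 5 informative).
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

k = 10
anova_idx = set(SelectKBest(f_classif, k=k).fit(X, y)
                .get_support(indices=True))
mi_score = partial(mutual_info_classif, random_state=0)   # deterministic MI
mi_idx = set(SelectKBest(mi_score, k=k).fit(X, y)
             .get_support(indices=True))

selected = sorted(anova_idx & mi_idx)   # features both criteria rank highly
X_sel = X[:, selected]
```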

26 pages, 3343 KB  
Article
Graph Sampling Contrastive Self-Supervised Graph Neural Network for Network Traffic Anomaly Detection
by Min Yang and Caiming Liu
Electronics 2026, 15(10), 2119; https://doi.org/10.3390/electronics15102119 - 15 May 2026
Abstract
With the increasing scale and complexity of network traffic, anomaly detection faces significant challenges, particularly under the scarcity of labeled data in real-world environments. Although graph neural networks (GNNs) effectively model relational structures, most existing approaches rely on supervised learning, limiting their applicability in weakly labeled or unlabeled scenarios. To address these limitations, this paper proposes a self-supervised graph neural network framework, termed EGSCA, for network traffic anomaly detection. The framework employs a GNN to jointly model node and edge information, enabling the learning of discriminative representations. On this basis, a graph contrastive learning strategy is designed, where diverse subgraphs are generated via breadth-first search (BFS) to effectively capture local structural patterns. Meanwhile, a hybrid contrastive loss based on Wasserstein distance and Gromov–Wasserstein distance is introduced to achieve collaborative optimization between feature-space alignment and structural consistency under unlabeled conditions. Experimental results on multiple benchmark datasets demonstrate that the proposed method achieves competitive performance. Notably, it achieves the best results on datasets NF-BoT-IoT and NF-BoT-IoT-v2, with average improvements of approximately 3.2% in F1-score and 1.7% in DR over the strongest baseline. Further analysis indicates that the model yields more pronounced performance gains in scenarios with high class separability. Full article
(This article belongs to the Special Issue AI in Cybersecurity, 3rd Edition)
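The BFS subgraph generation used for contrastive views can be sketched in plain Python. The adjacency-list toy graph and the node budget are illustrative assumptions, not the paper's exact sampling procedure.

```python
# Sketch of BFS-based subgraph sampling: collect up to `budget` nodes around
# a seed node, then take the induced edge set among the sampled nodes.
from collections import deque

def bfs_subgraph(adj, seed, budget):
    """Breadth-first expansion from `seed`, capped at `budget` nodes."""
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) < budget:
        node = queue.popleft()
        for nbr in adj.get(node, []):
            if nbr not in seen and len(seen) < budget:
                seen.add(nbr)
                queue.append(nbr)
    # Induced edges restricted to sampled nodes (undirected, u < v once).
    edges = [(u, v) for u in seen for v in adj.get(u, []) if v in seen and u < v]
    return seen, edges

# Toy flow graph: hosts as nodes, observed flows as undirected edges.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
nodes, edges = bfs_subgraph(adj, seed=0, budget=4)
```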

20 pages, 1141 KB  
Article
1D Convolution-Enhanced Mamba: A Method for Accurate Capture of Long-Sequence Stealthy DDoS Attacks
by Yi Li, Xingzhou Deng, Ang Yang and Jing Gao
Electronics 2026, 15(10), 2096; https://doi.org/10.3390/electronics15102096 - 14 May 2026
Abstract
Network technology has advanced rapidly in recent years, and distributed denial-of-service (DDoS) attacks have grown more diverse, stealthy, and large-scale. Traditional detection approaches struggle to process long network traffic sequences and locate sparse attack signals hidden in massive normal traffic, which makes accurate and efficient DDoS detection an urgent requirement. This paper presents an end-to-end DDoS detection model built on the Mamba architecture. We use one-dimensional convolutions to extract local features and smooth noise, which strengthens the model’s ability to capture bursty attack behaviors. Then, taking advantage of Mamba’s linear complexity and selective scanning mechanism, the model models long traffic sequences, filters out redundant information, and concentrates on potential attack patterns. With global feature aggregation and a classification layer, the model realizes accurate attack recognition. Experiments conducted on the CIC-DDoS2019 dataset show that our model obtains better performance in weighted F1 score, precision, and recall, while also improving inference efficiency. The model is suitable for high-precision, low-latency DDoS detection in real network environments. Full article
(This article belongs to the Special Issue New Technologies for Cybersecurity)
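The 1-D convolution front end described above, smoothing noise while preserving local bursts before the sequence model runs, can be sketched in numpy. The kernel and the packet-rate series are illustrative assumptions.

```python
# Minimal numpy sketch of a "same"-length 1-D convolution used as a local
# feature extractor / smoother over a traffic-rate series.
import numpy as np

def conv1d_smooth(x, kernel):
    """Slide `kernel` over `x` with edge padding; output length equals input."""
    pad = len(kernel) // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

# Packet-rate series with a short burst (a crude stand-in for attack traffic).
rate = np.zeros(50)
rate[20:25] = 10.0
smoothed = conv1d_smooth(rate, kernel=np.ones(5) / 5.0)   # 5-tap moving average
```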

32 pages, 25709 KB  
Article
Landmark-Based Features for Vehicle Trajectory Anomaly Detection from Traffic Video in Urban Intersections—A Case Study
by Nicolae Cleju and Constantin Catargiu
Sensors 2026, 26(10), 3027; https://doi.org/10.3390/s26103027 - 11 May 2026
Abstract
We study trajectory feature representations in the context of detecting spatially anomalous vehicle trajectories in urban intersections, using trajectory data from video streams captured by camera monitoring systems. These trajectories are extracted using an object detection pipeline and have particular characteristics like short lengths, variable endpoints, and other viewpoint-dependent detection artifacts, which make existing spatial feature approaches less effective. We introduce two feature representations adapted for intersection-level trajectories, based on distances to a fixed set of landmark points, which provide fixed-length vectors compatible with common tabular anomaly detector algorithms. We evaluate using a dataset of 5378 labeled trajectories collected from camera recordings in one deployment site, as well as on other existing city-wide benchmark datasets, showing that, in the evaluated setting, the proposed feature representations improve upon several existing spatial features and enable better detection of both shape and placement anomalies. Full article
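The landmark-based representation described above can be sketched in numpy: each variable-length trajectory is mapped to a fixed-length vector of distances to a fixed set of landmark points, which makes it usable by tabular anomaly detectors. The landmark layout and the min-distance summary are illustrative assumptions, not the paper's exact features.

```python
# Sketch: minimum distance from the trajectory to each landmark gives a
# fixed-length feature vector regardless of trajectory length.
import numpy as np

def landmark_features(trajectory, landmarks):
    """Return one feature per landmark: min distance from the trajectory."""
    traj = np.asarray(trajectory, dtype=float)      # (T, 2) trajectory points
    lm = np.asarray(landmarks, dtype=float)         # (K, 2) landmark points
    d = np.linalg.norm(traj[:, None, :] - lm[None, :, :], axis=-1)  # (T, K)
    return d.min(axis=0)                            # (K,) feature vector

landmarks = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
feats = landmark_features([(1.0, 0.0), (2.0, 0.0)], landmarks)
```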

32 pages, 4538 KB  
Article
Handling Imbalanced IoMT Network Data for Intrusion Detection via PCA and One-Class SVM
by Eren Gencturk, Beste Ustubioglu and Guzin Ulutas
Appl. Sci. 2026, 16(10), 4701; https://doi.org/10.3390/app16104701 - 9 May 2026
Abstract
The Internet of Medical Things (IoMT) has become integral to modern healthcare, yet its always-connected and resource-constrained nature enlarges the attack surface and complicates timely intrusion detection. This study presents a deployment-oriented, two-stage anomaly-detection pipeline. First, Principal Component Analysis (PCA) is employed to reduce the dimensionality of network traffic data, capturing the most significant variance. Subsequently, a One-Class Support Vector Machine (OC-SVM) is trained exclusively on these principal components of normal traffic. This approach prioritizes computational efficiency for resource-constrained IoMT devices while maintaining high model robustness. By modeling the principal components of normal behavior, our method achieves state-of-the-art performance across diverse attack families. We adopt a uniform protocol across four public IoMT corpora—BoT-IoT, CICIoMT2024, ECU-IoHT, and IoMT-TrafficData. The model’s hyperparameters, including the optimal number of principal components determined by explained variance, are tuned via randomized search. Despite using no attack labels during training, the proposed PCA-enhanced detector achieves state-of-the-art performance across diverse attack families: on BoT-IoT we obtain 99.92% F1-score (99.84% accuracy), on CICIoMT2024 we obtain 99.88% F1-score (99.77% accuracy), on ECU-IoHT 99.25% F1-score (98.58% accuracy), and on IoMT-TrafficData 99.19% F1-score (98.66% accuracy). The compact model size, enabled by PCA, makes the approach highly amenable to edge or gateway deployment in clinical networks, while the normal-only training paradigm improves robustness to zero-day threats. The results demonstrate that modeling the principal components of routine network behavior is a highly effective and efficient strategy for reliable, low-latency threat detection in realistic IoMT settings. Full article
(This article belongs to the Special Issue Advances in Cyber Security)
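The PCA + One-Class SVM pipeline described above can be sketched in a few lines of scikit-learn: PCA compresses the traffic features, and the OC-SVM is fitted on normal traffic only. The component count, nu, gamma, and the synthetic data are illustrative assumptions, not the paper's tuned values.

```python
# Sketch: dimensionality reduction via PCA, then one-class training on
# normal traffic only; attacks are flagged as out-of-distribution (-1).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 20))   # benign training traffic
attack = rng.normal(6.0, 1.0, size=(50, 20))    # far-off-distribution traffic

model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      OneClassSVM(nu=0.05, gamma="scale"))
model.fit(normal)                               # no attack labels at training

pred_attack = model.predict(attack)             # -1 = anomaly, +1 = normal
attack_detection_rate = float((pred_attack == -1).mean())
```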

27 pages, 555 KB  
Article
Few-Shot Network Intrusion Detection Using Online Triplet Mining
by Jack Wilkie, Hanan Hindy, Christos Tachtatzis, Miroslav Bures and Robert Atkinson
Appl. Sci. 2026, 16(10), 4589; https://doi.org/10.3390/app16104589 - 7 May 2026
Abstract
Network intrusion detection systems play a vital role in protecting networks by detecting malicious network traffic which can then be investigated by a cybersecurity operations centre. State-of-the-art approaches utilise supervised machine learning methods to train a classification model to recognise known cyberattacks; however, these models require a large labelled dataset to train and show poor performance when trained on smaller datasets. In an attempt to address this shortcoming, anomaly detection models learn the distribution of benign traffic and flag non-conforming traffic as malicious. While these methods do not require malicious examples to train, they suffer from high false-positive rates rendering them impractical. As a result, networks may be particularly vulnerable when there are insufficient labelled instances of a specific attack class to train an effective classifier. This often occurs in newly established networks or when previously unseen types of attacks emerge. To address this challenge, this work proposes the use of a triplet network, utilising online triplet mining and a KNN classifier, which is able to perform few-shot classification, enabling effective intrusion detection after being trained on a limited number of malicious examples. Various online triplet mining algorithms were explored and model design choices, such as the inference algorithm and optimised distance metrics, were compared and evaluated through a series of ablation studies. The final model was compared against other state-of-the-art approaches in few-shot binary and multiclass classification, where the proposed approach was found to be competitive with existing methods when trained on as little as 10 malicious samples of each class. Full article
(This article belongs to the Special Issue New Advances in Cybersecurity Technology and Cybersecurity Management)
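The online "batch-hard" style of triplet mining can be sketched in numpy: within one batch of embeddings, each anchor is paired with its hardest positive (farthest same-class sample) and hardest negative (closest other-class sample). The margin values and toy embeddings are illustrative assumptions; the paper compares several mining variants.

```python
# Sketch of batch-hard triplet mining and the resulting triplet loss.
import numpy as np

def batch_hard_triplet_loss(emb, labels, margin=0.2):
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)                     # exclude anchor itself
    hardest_pos = np.where(same, d, -np.inf).max(axis=1)
    diff = labels[:, None] != labels[None, :]
    hardest_neg = np.where(diff, d, np.inf).min(axis=1)
    return float(np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean())

emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 0.0], [1.1, 0.0]])
labels = np.array([0, 0, 1, 1])
loss_easy = batch_hard_triplet_loss(emb, labels, margin=0.2)  # separable batch
loss_hard = batch_hard_triplet_loss(emb, labels, margin=2.0)  # margin violated
```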

18 pages, 533 KB  
Article
A Rigorous Comparative Study of Supervised Machine Learning Techniques for Network Anomaly Detection: Empirical Insights from the UNSW-NB15 Dataset
by Nouf Alkhater
Computers 2026, 15(5), 285; https://doi.org/10.3390/computers15050285 - 1 May 2026
Abstract
The increasing complexity of modern network infrastructures has intensified the need for reliable and efficient intrusion detection systems. While advanced deep learning approaches have demonstrated strong performance, their high computational cost and limited interpretability restrict their practical deployment in real-time environments. This study presents a systematic empirical evaluation of four supervised machine learning models—Decision Tree, Random Forest, Support Vector Machine (SVM), and XGBoost—for network anomaly detection using the UNSW-NB15 dataset. To ensure methodological rigor, a structured preprocessing pipeline and a five-fold stratified cross-validation framework were employed. Model performance was assessed using multiple evaluation metrics, including accuracy, precision, recall, F1-score, and area under the ROC curve (AUC). In addition, a feature importance analysis was conducted to identify the most influential network traffic attributes contributing to anomaly detection. The results show that ensemble-based methods outperform individual classifiers, with XGBoost achieving the best overall performance (accuracy = 0.97, AUC = 0.98) along with high stability across validation folds. The analysis further reveals that a subset of flow-based and temporal features—such as sttl, sload, and dload—plays a critical role in distinguishing between normal and malicious traffic. This study provides a rigorous, interpretable, and reproducible benchmarking framework for supervised machine learning in network anomaly detection. The findings provide practical insights for developing efficient and scalable intrusion detection systems suitable for real-world deployment. Full article
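The evaluation protocol described above, five-fold stratified cross-validation scored with several metrics, can be sketched with scikit-learn. The dataset here is synthetic; the study itself uses UNSW-NB15, and the classifier shown is one of the four compared.

```python
# Sketch: stratified 5-fold CV of a Random Forest with multiple metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Imbalanced synthetic stand-in for benign-vs-attack traffic.
X, y = make_classification(n_samples=400, n_features=15, weights=[0.8, 0.2],
                           random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(RandomForestClassifier(random_state=0), X, y, cv=cv,
                        scoring=["accuracy", "f1", "roc_auc"])
mean_acc = float(np.mean(scores["test_accuracy"]))
```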

23 pages, 7928 KB  
Article
Hardware-Assisted Security Enhancements for an FPGA-ARM Embedded Vision System in IoT Applications
by Tomyslav Sledevič and Darius Andriukaitis
Electronics 2026, 15(9), 1887; https://doi.org/10.3390/electronics15091887 - 29 Apr 2026
Abstract
Embedded Field-Programmable Gate Array (FPGA)-Advanced RISC Machine (ARM) systems used in industrial and Internet of Things (IoT) environments increasingly operate as network-connected edge devices. While such connectivity enables distributed processing and remote monitoring, it also exposes embedded vision nodes to security threats, including command injection, frame replay, data tampering, and abnormal communication traffic. This paper presents a hardware-assisted security architecture for an FPGA-ARM embedded vision system designed for high-speed image acquisition and network streaming. The proposed solution integrates several lightweight protection mechanisms directly into the FPGA processing pipeline, including frame replay detection, cyclic redundancy check (CRC)-based frame integrity verification, frame sequence monitoring, authenticated command execution, communication anomaly monitoring, and hardware-rooted trust primitives, such as a ring-oscillator physical unclonable function (PUF) and a pseudo-random generator. Optional secure communication is provided via a lightweight ASCON-authenticated encryption core. The architecture was implemented on a Cyclone V System-on-Chip (SoC) platform using an industrial Camera Link camera and evaluated in a low-latency image-acquisition setup operating at 100 fps, with data throughput exceeding 1 Gbps. Experimental results demonstrate that the proposed security architecture introduces only about 1.6% additional FPGA logic utilization while maintaining full real-time acquisition performance. The presented approach demonstrates that practical hardware-level security mechanisms can be integrated into FPGA-based embedded vision nodes with minimal architectural modifications and negligible performance overhead. Full article
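The CRC-based frame-integrity idea above can be illustrated in software: the sender appends a CRC over the frame payload and the receiver recomputes and compares it. The paper implements this in FPGA logic; `zlib.crc32` stands in here, and the frame contents are illustrative.

```python
# Software sketch of CRC-32 frame sealing and verification.
import zlib

def seal(frame: bytes) -> bytes:
    """Append a 4-byte CRC-32 trailer over the frame payload."""
    return frame + zlib.crc32(frame).to_bytes(4, "big")

def verify(sealed: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = sealed[:-4], sealed[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

frame = bytes(range(64))                        # stand-in for one image line
ok = verify(seal(frame))                        # intact frame passes
tampered = bytearray(seal(frame))
tampered[10] ^= 0xFF                            # single-byte flip in payload
bad = verify(bytes(tampered))                   # tampering is caught
```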

20 pages, 3466 KB  
Review
AI-Driven Hybrid Detection and Classification Framework for Secure Sleep Health IoT Networks
by Prajoona Valsalan and Mohammad Maroof Siddiqui
Clocks & Sleep 2026, 8(2), 23; https://doi.org/10.3390/clockssleep8020023 - 28 Apr 2026
Abstract
Sleep disorders, such as insomnia, obstructive sleep apnea (OSA), narcolepsy, REM sleep behavior disorder, and circadian rhythm disturbances, represent a rapidly expanding global health burden that is strongly associated with cardiovascular, metabolic, neurological, and psychiatric diseases. Advancements in wearable sensing technologies and Internet of Medical Things (IoMT) infrastructures have expanded the possibilities for continuous, home-based sleep assessment beyond conventional polysomnography laboratories. These Sleep Health Internet of Things (S-HIoT) systems combine multimodal physiological sensing (EEG, ECG, SpO2, respiratory effort, and actigraphy) with wireless communication and cloud-based analytics for automated sleep-stage classification and disorder detection. Nonetheless, the digitization of sleep medicine raises significant cybersecurity concerns. The constant transmission of sensitive biomedical information leaves S-HIoT networks exposed to anomalous traffic flows, signal manipulation, replay attacks, spoofing, and data-integrity violations. Existing studies mostly analyze physiological signals and network intrusion detection independently, leaving cyber–physical sleep-monitoring ecosystems systemically vulnerable. To address this gap, this review synthesizes recent advances (2022–2026) in AI-assisted sleep-stage classification and IoMT anomaly detection, covering CNN, LSTM/BiLSTM, and Transformer-based systems as well as federated learning schemes and lightweight, edge-deployable intrusion detection models. The review identifies a gap in the literature: integrated architectures that balance faithful physiological modeling with communication-layer security. 
To address it, we present a unified framework combining CNN-based spatial feature extraction, Bidirectional Long Short-Term Memory (BiLSTM)-based temporal modeling, and Random Forest-based ensemble classification in a dual-task learning approach. We propose a multi-objective optimization framework that jointly optimizes sleep-stage prediction and network anomaly detection. Evaluation on publicly available datasets (Sleep-EDF and CICIoMT2024) confirms that the hybrid integration achieves high accuracy (99.8% for sleep staging; 98.6% for anomaly detection) with low inference latency (<45 ms), which is promising for real-time deployment on edge devices. This work presents a comprehensive framework for developing secure, intelligent, and clinically robust digital sleep health ecosystems by bridging chronobiological signal modeling with cybersecurity mechanisms. Furthermore, it highlights future research directions, including explainable AI, federated secure learning, adversarial robustness, and energy-aware edge optimization. Full article
(This article belongs to the Section Computational Models)
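The dual-task idea above, one shared feature representation feeding a sleep-staging head and an anomaly-detection head, can be sketched crudely. The paper uses CNN/BiLSTM features with a Random Forest ensemble; here a shared PCA transform with two Random Forest heads on synthetic data stands in, to show the structure only.

```python
# Sketch: shared feature transform, two task-specific classifier heads.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                 # stand-in multimodal features
y_stage = rng.integers(0, 5, size=300)         # 5 sleep stages (synthetic)
y_anom = (X[:, 0] > 1.0).astype(int)           # synthetic anomaly label

shared = PCA(n_components=10).fit(X)           # shared representation
Z = shared.transform(X)
stage_head = RandomForestClassifier(random_state=0).fit(Z, y_stage)
anom_head = RandomForestClassifier(random_state=0).fit(Z, y_anom)

stage_pred = stage_head.predict(Z[:5])         # head 1: sleep staging
anom_pred = anom_head.predict(Z[:5])           # head 2: anomaly detection
```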

35 pages, 6142 KB  
Article
An LSTM Autoencoder-Based Approach for Monitoring Railway Bridges
by Viviana Giorgi, Ciro Tordela, Lorenzo Bernardini, Pablo Alex Ramírez Balbiano, Claudio Somaschini, Salvatore Strano and Mario Terzo
Appl. Sci. 2026, 16(9), 4272; https://doi.org/10.3390/app16094272 - 27 Apr 2026
Abstract
Continuous monitoring of railway bridges is essential for ensuring safety and operational reliability, considering aging mechanisms, rising traffic, and elevated speeds of railway vehicles. Frequently, traditional vibration-based approaches, including modal identification and data-driven diagnostic strategies, are strongly influenced by environmental and operational variability, requiring labeled damaged datasets or numerical simulations to provide reliable outcomes. However, the acquisition of complete and representative datasets for training neural networks in structural health monitoring remains a challenging task, particularly for large-scale civil structures such as bridges. In these cases, unsupervised learning approaches represent promising solutions. An unsupervised anomaly detection methodology for railway bridge monitoring based on a long short-term memory (LSTM) autoencoder (AE) trained exclusively on bridge accelerations under healthy structural conditions is proposed in the present work. Specifically, the acceleration responses are obtained from simulations made on a calibrated finite element model of the bridge, reproducing realistic train–bridge interaction scenarios. The multi-channel acceleration signals are reconstructed by the proposed LSTM AE to produce the Root Mean Square Error (RMSE) between measured and reconstructed acceleration responses as indicators of potential structural anomalies. A dual-threshold strategy is adopted for damage detection purposes, including a global threshold for identifying anomalies in the overall dynamic response and per-sensor thresholds derived from the healthy-condition RMSE distribution for detecting localized damages. Only healthy-condition data are required for employing the proposed technique, avoiding labeled damaged data for training purposes. 
The LSTM AE constitutes an effective and computationally efficient tool for anomaly detection and continuous structural health monitoring of railway bridges, as demonstrated by the obtained results, representing a promising alternative to classical modal-based approaches and existing deep learning-based methods. Full article
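The dual-threshold rule described above can be sketched in numpy: a global RMSE threshold over all channels plus per-sensor thresholds derived from the healthy-condition RMSE distribution. The mean + 3*std rule and the synthetic RMSE values are illustrative assumptions, not the paper's calibrated thresholds.

```python
# Sketch: per-sensor and global thresholds from healthy reconstruction RMSE.
import numpy as np

rng = np.random.default_rng(0)
healthy_rmse = rng.normal(0.10, 0.01, size=(200, 8))   # 200 runs, 8 sensors

per_sensor_thr = healthy_rmse.mean(axis=0) + 3 * healthy_rmse.std(axis=0)
global_thr = healthy_rmse.mean() + 3 * healthy_rmse.std()

def flag(rmse_per_sensor):
    """Return (global anomaly flag, per-sensor exceedance mask)."""
    local = rmse_per_sensor > per_sensor_thr
    return rmse_per_sensor.mean() > global_thr, local

new_run = healthy_rmse[0].copy()
new_run[3] = 0.5                            # sensor 3 reconstructs badly
global_hit, local_hits = flag(new_run)      # localizes damage to sensor 3
```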

21 pages, 1930 KB  
Article
Road Traffic Anomaly Detection by Human-Attention-Assisted Text–Vision Learning
by Yachuang Chai and Wushouer Silamu
Sensors 2026, 26(9), 2638; https://doi.org/10.3390/s26092638 - 24 Apr 2026
Abstract
With the rapid development of society, the number of road vehicles has increased significantly, leading to a growing severity of traffic accident issues. Timely and accurate detection of road traffic anomalies or accidents is crucial for reducing fatalities and alleviating traffic congestion. Consequently, the detection of road traffic anomalies has become a focal point of research in recent years. With the assistance of computer technologies such as deep learning, researchers have developed more accurate and effective methods for detecting road traffic anomalies. However, the small proportion of anomaly-prone areas in surveillance video frames, combined with the complex and difficult-to-capture patterns of accidents, presents new challenges for the application of deep models to traffic anomaly detection from a surveillance perspective. In light of this, this paper adds annotations to the TADS dataset we previously proposed and adopts a popular text-assisted video representation learning method to develop a more efficient detection approach. Utilizing the well-known video-text model CLIP, we have constructed a detection model that leverages unique text and eye-gaze annotation data from the TADS dataset to learn anomaly representations more effectively, thereby improving the detection of road traffic anomalies from a surveillance perspective. Experimental results demonstrate the superiority of our model for detecting traffic anomalies from a surveillance perspective, as well as the utility of the text and eye-gaze data included in the dataset. Full article
(This article belongs to the Section Sensing and Imaging)
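The CLIP-style text-vision scoring idea above can be sketched with cosine similarity: a frame embedding is compared against text-prompt embeddings, and the frame is flagged when it lies closer to the anomaly prompt than to the normal-traffic prompt. The embeddings here are random stand-ins, not real CLIP outputs, and the decision rule is an illustrative assumption.

```python
# Sketch: cosine-similarity scoring of a frame against two text prompts.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
text_normal = rng.normal(size=64)        # stand-in for "normal traffic" prompt
text_anomaly = rng.normal(size=64)       # stand-in for "traffic accident" prompt

# A frame embedding constructed to lie near the anomaly prompt.
frame = text_anomaly + 0.1 * rng.normal(size=64)

is_anomalous = cosine(frame, text_anomaly) > cosine(frame, text_normal)
```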

31 pages, 7250 KB  
Article
Enhancing IoT Network Security: A BPSO-Optimized Attention-GRU Deep Learning Framework for Intrusion Detection
by Abdallah Elayan and Michel Kadoch
Computers 2026, 15(5), 266; https://doi.org/10.3390/computers15050266 - 23 Apr 2026
Abstract
The exponential expansion of computer networks, alongside the rapid development of the Internet of Things (IoT), has significantly increased the volume and complexity of transmitted data, emphasizing the need for robust network security measures to secure sensitive data and prevent unauthorized access or breaches. Intrusion Detection Systems (IDSs) have emerged as a vital tool for protecting networks and IoT environments from threats. Various IDSs have been proposed in the literature; however, the lack of optimal feature learning, computational efficiency, and reliance on obsolete datasets poses significant challenges, limiting their effectiveness against evolving cyber threats. Moreover, traditional IDSs struggle to efficiently manage the high-dimensional and imbalanced nature of IoT network traffic data. To address these challenges, this research proposes a hybrid deep learning (DL)-based IDS integrating Binary Particle Swarm Optimization (BPSO), MultiHead Attention mechanisms (MHA), and a deep Gated Recurrent Unit (GRU) architecture, improving detection effectiveness while reducing computational overhead. Our proposed approach also utilizes a Target Sampling strategy to balance class distributions, enhancing the model’s ability to accurately identify minority attacks. The BPSO algorithm is employed to identify the most influential features from the high-dimensional network traffic datasets, enhancing model interpretability and supporting more efficient learning. This optimized feature subset is then fed into a GRU-based DL architecture augmented with MHA, which performs sequence processing and attention-based learning for intrusion detection. The performance of the proposed model is evaluated utilizing the BoT-IoT and the CIC-IDS2017 benchmark datasets, ensuring a comprehensive assessment of anomaly detection capabilities. 
Extensive experimental results demonstrate the superior performance of the proposed model, achieving a recall of 98.42% and 99.76%, with F1-score of 98.94% and 99.76% for binary classification and a recall of 99.79% and 98.69%, with F1-score of 99.89% and 98.04% for multiclass classification on the BoT-IoT and CIC-IDS2017 datasets, respectively, highlighting the effectiveness of our model in enhancing threat detection for computer networks and IoT environments in comparison to recent state-of-the-art IDSs.
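The BPSO feature-selection step described above can be illustrated with a minimal sketch. This is a toy, self-contained implementation (the `bpso_select` and `toy_fitness` names and the objective are hypothetical, not the paper's code): each particle is a 0/1 mask over features, velocities are real-valued, and a sigmoid maps each velocity to a bit-flip probability.

```python
import math
import random

def bpso_select(num_features, fitness, n_particles=10, iters=30, seed=0):
    """Minimal Binary PSO sketch: minimizes `fitness` over 0/1 feature masks."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.randint(0, 1) for _ in range(num_features)] for _ in range(n_particles)]
    vel = [[0.0] * num_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(num_features):
                # velocity update pulls toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # sigmoid transfer: velocity -> probability the bit is 1
                pos[i][d] = 1 if rng.random() < sig(vel[i][d]) else 0
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy objective (hypothetical): features 0-2 carry signal; extra features cost 1 each.
def toy_fitness(mask):
    missed = sum(1 for i in (0, 1, 2) if mask[i] == 0)
    return missed * 10 + sum(mask)
```

In the paper's pipeline the fitness function would instead be a classifier's validation error on the masked feature subset; the swarm dynamics stay the same.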

28 pages, 1805 KB  
Article
Intelligent Threat Defense Mechanisms for 5G APIs
by Asif Yasin, Seyed Ebrahim Hosseini, Muhammad Nadeem and Shahbaz Pervez
Future Internet 2026, 18(5), 223; https://doi.org/10.3390/fi18050223 - 22 Apr 2026
Abstract
As 5G Standalone Core networks grow, Application Programming Interfaces (APIs) have become a key part of how network systems talk to each other. They allow different functions to share data and complete tasks quickly. However, this also makes them targets for attacks. 5G Standalone Core networks rely on Service-Based Architecture (SBA), where network functions communicate through exposed APIs. These APIs are attractive targets for cyberattacks because they are externally accessible, handle sensitive control-plane operations, and exchange structured data using Hypertext Transfer Protocol version 2 (HTTP/2) and JavaScript Object Notation (JSON) protocols. Most older security tools work using fixed rules, which cannot always detect new or changing threats. This study aimed to fix that gap by using Artificial Intelligence to make API security smarter. Two AI models were tested: Long Short-Term Memory (LSTM), which learns from past traffic, and Reinforcement Learning (RL), which learns by adapting to network behavior. Both were used to assess API traffic and assign a real-time risk score. Synthetic traffic was created using Python, including both normal API calls and different types of attacks like Distributed Denial-of-Service (DDoS), brute force, and Structured Query Language (SQL) injection. The results show that both LSTM and RL models were better than traditional rule-based systems. They found more threats, gave fewer false alerts, and responded faster. RL was especially strong at handling unknown or changing attacks. Experimental results show that the proposed LSTM and RL models achieved approximately 95% detection accuracy, significantly outperforming the static rule-based baseline model, which achieved 58% accuracy. The results demonstrate the effectiveness of adaptive AI-based security mechanisms for detecting evolving API threats. This research shows that AI can help protect 5G APIs in a smarter and more flexible way. 
It can support telecom networks by making threat detection faster, more accurate, and ready for future challenges.
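The RL idea in this abstract — an agent that learns a block/allow policy from observed API-traffic behavior rather than from fixed rules — can be sketched with a toy tabular Q-learning loop. Everything here (the state buckets, reward scheme, and `train_risk_agent` name) is a hypothetical illustration, not the authors' setup.

```python
import random
from collections import defaultdict

def train_risk_agent(episodes=2000, seed=1):
    """Toy Q-learning sketch: the agent observes a discretized traffic state
    (request-rate bucket, error-rate bucket) and chooses ALLOW (0) or BLOCK (1).
    Reward: +1 for blocking attack traffic or allowing benign traffic, -1 otherwise."""
    rng = random.Random(seed)
    q = defaultdict(float)  # (state, action) -> learned value
    alpha, eps = 0.3, 0.1   # learning rate, exploration rate
    for _ in range(episodes):
        attack = rng.random() < 0.5
        # attacks tend to show high request rates and elevated error rates
        rate = 2 if attack and rng.random() < 0.9 else rng.randint(0, 1)
        errs = 1 if attack and rng.random() < 0.8 else 0
        state = (rate, errs)
        # epsilon-greedy action selection
        action = rng.randint(0, 1) if rng.random() < eps else \
                 max((0, 1), key=lambda a: q[(state, a)])
        reward = 1 if action == int(attack) else -1
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

def decide(q, state):
    """Greedy policy after training: 0 = allow, 1 = block."""
    return max((0, 1), key=lambda a: q[(state, a)])
```

A real 5G SBA deployment would replace the synthetic episode generator with features extracted from HTTP/2 API traffic, but the adapt-from-reward loop is the same mechanism the abstract credits for handling unknown attacks.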
(This article belongs to the Section Cybersecurity)

31 pages, 2201 KB  
Article
Anomaly Detection for Substations Based on IEC 61850-NFA Model
by Deniz Berfin Tastan and Musa Balta
Appl. Sci. 2026, 16(8), 4000; https://doi.org/10.3390/app16084000 - 20 Apr 2026
Abstract
The increasing digitalization of energy transmission and distribution infrastructures has made industrial control systems (ICS), and especially IEC 61850-based communication structures, critical. IEC 61850 performs protection and control functions in substations in real time via GOOSE and MMS protocols. The fast and low-latency operation of these protocols is essential; however, their open structure leaves systems vulnerable to cyberattacks. Traditional signature-based solutions are insufficient for detecting such anomalies, and models capable of learning both time and state relationships are needed. This study develops a time-aware probabilistic NFA model to detect anomalous behavior in IEC 61850 traffic. The model analyzes GOOSE and MMS message sequences with both state transitions and time differences (Δt). Thus, not only the message sequence but also the timing variations between events are learned. The probability of each transition is dynamically updated, and deviations from normal behavior are marked as “anomalies”. The dataset used in this study was created based on normal and attack scenarios conducted in the Sakarya University Critical Infrastructure National Testbed Center Energy Laboratory (Center Energy). The experimental results obtained in the study show that the model detects time-based, structural, and behavioral anomalies with high accuracy. With a dual-model configuration, results of 91.7% accuracy, 88.9% precision, 100% recall, and a 94.1% F1-score were achieved; particularly in time-based attack scenarios, the model performance reached an accuracy level of up to 93%.
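The core mechanism described here — learning which state transitions occur in normal GOOSE/MMS traffic and what Δt each transition takes, then flagging unseen transitions or out-of-range timings — can be sketched as follows. This is an illustrative simplification (the `TimedTransitionModel` class, the Δt-bounds rule, and the `slack` tolerance are assumptions), not the paper's probabilistic NFA.

```python
from collections import defaultdict

class TimedTransitionModel:
    """Toy time-aware transition model: learns per-transition counts and
    Δt bounds from normal (event, timestamp) sequences, then flags a
    transition as 'unknown' if never seen, or 'timing' if its Δt falls
    outside the learned range (widened by a slack factor)."""

    def __init__(self, slack=0.5):
        self.counts = defaultdict(int)   # (state_a, state_b) -> occurrences
        self.dt_bounds = {}              # (state_a, state_b) -> (min Δt, max Δt)
        self.slack = slack               # tolerance on learned Δt bounds

    def fit(self, sequences):
        for seq in sequences:
            for (a, ta), (b, tb) in zip(seq, seq[1:]):
                dt = tb - ta
                self.counts[(a, b)] += 1
                lo, hi = self.dt_bounds.get((a, b), (dt, dt))
                self.dt_bounds[(a, b)] = (min(lo, dt), max(hi, dt))

    def score(self, seq):
        """Return [(transition, flag)] with flag in {'ok', 'unknown', 'timing'}."""
        out = []
        for (a, ta), (b, tb) in zip(seq, seq[1:]):
            dt = tb - ta
            if (a, b) not in self.counts:
                out.append(((a, b), "unknown"))   # structural anomaly
                continue
            lo, hi = self.dt_bounds[(a, b)]
            if dt < lo * (1 - self.slack) or dt > hi * (1 + self.slack):
                out.append(((a, b), "timing"))    # time-based anomaly
            else:
                out.append(((a, b), "ok"))
        return out
```

The paper's model additionally maintains transition probabilities updated dynamically; this sketch keeps only the two checks needed to separate structural anomalies (unknown transitions) from timing anomalies (known transitions with abnormal Δt).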
(This article belongs to the Section Computing and Artificial Intelligence)
