Search Results (1,258)

Search Parameters:
Keywords = adversarial attacks

20 pages, 3629 KB  
Article
HS-FP and SS-FP: Fine-Pruning-Based Backdoor Elimination for Spiking Neural Networks on Neuromorphic Event Data
by Ki-Ho Kim and Eun-Kyu Lee
Electronics 2026, 15(5), 937; https://doi.org/10.3390/electronics15050937 - 25 Feb 2026
Abstract
Spiking Neural Networks (SNNs) have attracted increasing attention due to their energy efficiency and suitability for neuromorphic data processing. Despite these advantages, the security of SNNs—particularly their robustness against backdoor attacks—remains underexplored. This study revisits fine-pruning, a widely adopted backdoor defense technique in deep neural networks, and adapts it to the unique spatio-temporal characteristics of SNNs. We propose two SNN-specific fine-pruning methods: Hook–Surrogate Gradient-based fine-pruning (HS-FP) and Spike–STDP-based fine-pruning (SS-FP). HS-FP leverages hook-based activation analysis with surrogate gradient learning, while SS-FP integrates total spike activity with hybrid STDP and surrogate gradient fine-tuning. We evaluate both methods against static, moving, and smart backdoor attacks on two neuromorphic benchmarks, N-MNIST and DVS128-Gesture. Experimental results show that both approaches reduce the attack success rate down to approximately 10% while preserving model accuracy above 99% on N-MNIST and achieving substantial recovery on DVS128-Gesture. Moreover, our analysis reveals that several phenomena observed in fine-pruning-based defenses for deep neural networks—such as mixed-function neurons and backdoor reactivation during fine-tuning—also manifest in SNNs. These findings highlight both the effectiveness and limitations of fine-pruning in the SNN domain and suggest promising directions for extending existing DNN security methodologies to neuromorphic systems. Full article
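As a loose illustration of the fine-pruning idea this abstract builds on (prune units that stay quiet on clean inputs, then fine-tune), here is a minimal sketch; the unit ranking, activation source, and pruning fraction are assumptions for illustration, not the paper's HS-FP or SS-FP procedures:

```python
# Minimal fine-pruning sketch: rank units by their mean activation on
# clean data and mask out the most dormant ones (backdoor neurons tend
# to stay quiet on clean inputs). The fine-tuning that follows is
# model-specific and not shown here.

def prune_dormant_units(clean_activations, prune_fraction):
    """clean_activations: per-unit mean activations on clean data.
    Returns a 0/1 keep-mask with the least-active units zeroed."""
    n = len(clean_activations)
    k = int(n * prune_fraction)  # number of units to prune
    # indices of the k least-active units
    order = sorted(range(n), key=lambda i: clean_activations[i])
    pruned = set(order[:k])
    return [0 if i in pruned else 1 for i in range(n)]

# Toy example: unit 1 is almost silent on clean data (a backdoor suspect).
acts = [0.9, 0.01, 0.7, 0.5]
print(prune_dormant_units(acts, prune_fraction=0.25))  # -> [1, 0, 1, 1]
```

After masking, the pruned model is fine-tuned on clean data; as the abstract notes, that step can also reactivate backdoors, which is why the ranking criterion matters.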

27 pages, 8186 KB  
Article
Deceptive Waypoint Sequencing Based UAV–UAV Interception Control Using DBSCAN Learning Strategy
by Abdulrazaq Nafiu Abubakar, Ali Nasir and Abdul-Wahid A. Saif
Mach. Learn. Knowl. Extr. 2026, 8(3), 54; https://doi.org/10.3390/make8030054 - 25 Feb 2026
Abstract
Modern multi-Unmanned Aerial Vehicle (UAV) attacks pose significant challenges to existing counter-UAV frameworks due to their agility, irregular spatial formations, and increasing reliance on intelligent evasive behaviors. This paper proposes a unified interception architecture that integrates Density-Based Spatial Clustering of Applications with Noise (DBSCAN) for multi-target grouping, a deceptive waypoint sequencing (DWS) mechanism for adversarial evasion, and a robust sliding-mode backstepping controller augmented with extended state observers (ESOs) for precise tracking under disturbances. DBSCAN enables real-time clustering of attacking UAVs without prior knowledge of the number of formations, producing dynamic centroids that serve as tactical interception references. To counter risky attackers capable of predicting defender trajectories, a novel DWS strategy introduces centroid-relative waypoints that preserve mission objectives while reducing trajectory predictability. Lyapunov-based analysis is developed for stability, guaranteeing uniform ultimate boundedness of the tracking errors. The proposed approach achieves successful interception in both scenarios, with an interception time of 7 s and final interception error of 0.023 m in the single-UAV case, and an interception time of 8 s with final interception error of 0.050 m in the multiple-UAV case, whereas the PID baseline fails to achieve interception under the same conditions. Extensive simulations involving single and multi-cluster engagements demonstrate that the proposed strategy achieves fast, accurate, and deception-resilient interception, outperforming the conventional PID approach in the presence of disturbances, nonlinearities, and dynamic swarm configurations. The obtained results show the effectiveness of integrating adaptive clustering, deceptive planning, and robust nonlinear control for modern UAV–UAV defensive operations. Full article
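The DBSCAN grouping step described above can be sketched in a few lines; this toy implementation and the UAV coordinates are illustrative stand-ins, and a real system would use an optimized library implementation:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns clusters as lists of point indices.
    Points reachable from no core point are treated as noise."""
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    labels, clusters = {}, []
    for i in range(len(points)):
        if i in labels:
            continue
        if len(neighbors(i)) < min_pts:
            continue                      # not a core point
        cid = len(clusters)
        clusters.append([])
        queue = [i]
        while queue:
            j = queue.pop()
            if j in labels:
                continue
            labels[j] = cid
            clusters[cid].append(j)
            jn = neighbors(j)
            if len(jn) >= min_pts:        # expand only from core points
                queue.extend(jn)
    return clusters

def centroid(points, idxs):
    """Cluster centroid, used as the tactical interception reference."""
    k, dims = len(idxs), len(points[0])
    return tuple(sum(points[i][d] for i in idxs) / k for d in range(dims))

# Two tight attacker groups plus one stray (noise); positions invented.
pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5), (50, 50)]
cl = dbscan(pts, eps=1.0, min_pts=2)
print([centroid(pts, c) for c in cl])
```

Note that the number of clusters is never given in advance, which is the property the paper relies on for irregular swarm formations.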

30 pages, 490 KB  
Article
Adaptive Threat Mitigation in PoW Blockchains (Part II): A Deep Reinforcement Learning Approach to Countering Evasive Adversaries
by Rafał Skowroński
Sensors 2026, 26(4), 1368; https://doi.org/10.3390/s26041368 - 21 Feb 2026
Abstract
Static defense mechanisms in blockchain security, while effective against known threats, are inherently vulnerable to intelligent adversaries who can adapt their strategies to evade detection. This paper addresses this critical limitation by proposing a next-generation adaptive security framework powered by deep reinforcement learning (DRL). Building upon the state-of-the-art statistical detection system presented in Part I of this series, we introduce a DRL agent that learns to dynamically adjust security parameters in response to evolving network conditions and adversarial behavior. The agent is trained using a realistic, proxy-based reward function that optimizes for network stability without requiring ground-truth attack labels. We conduct comprehensive evaluation across multiple scenarios, demonstrating that our DRL-enhanced framework consistently renders attacks unprofitable where static models eventually fail. Against adaptive adversaries, the DRL agent drives adversary profit to −42±13% (deeply unprofitable) compared to +65±22% (profitable) under the static framework and +145±18% under baseline detectors. Furthermore, we demonstrate resilience in zero-day scenarios where novel attack variants are suppressed within 24 h, and compare performance against alternative AI methodologies (supervised learning, GANs), achieving a superior F1-score of 0.95±0.02. This work provides a robust blueprint for creating intelligent, adaptive, and resilient security systems for future decentralized networks. Full article
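The proxy-reward idea above (optimizing for stability without attack labels) can be caricatured with a simple epsilon-greedy bandit; the reward shape, candidate thresholds, and hyperparameters below are invented for illustration and are far simpler than the paper's DRL agent:

```python
import random

# Proxy reward: no ground-truth attack labels; a detection threshold is
# scored by a stability proxy = -(false-alarm cost + missed-attack cost).
# Both cost shapes are made up for illustration.
def proxy_reward(threshold):
    false_alarm_cost = max(0.0, 0.8 - threshold)   # lax threshold -> alarms
    missed_attack_cost = threshold ** 2            # strict -> misses
    return -(false_alarm_cost + missed_attack_cost)

def train_bandit(arms, episodes=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: each arm is a candidate security setting."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in arms}
    n = {a: 0 for a in arms}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(arms)                   # explore
        else:
            a = max(arms, key=q.get)               # exploit
        n[a] += 1
        q[a] += (proxy_reward(a) - q[a]) / n[a]    # incremental mean
    return max(arms, key=q.get)

arms = [0.1, 0.3, 0.5, 0.7, 0.9]
print(train_bandit(arms))  # settles on the threshold balancing both costs
```

A full DRL agent would additionally condition on network state and adapt online; the bandit only conveys the label-free reward signal.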

21 pages, 6717 KB  
Article
Unraveling Patch Size Effects in Vision Transformers: Adversarial Robustness in Hyperspectral Image Classification
by Shashi Kiran Chandrappa, Sidike Paheding and Abel A. Reyes-Angulo
Remote Sens. 2026, 18(4), 656; https://doi.org/10.3390/rs18040656 - 21 Feb 2026
Abstract
Vision Transformers (ViTs) have demonstrated strong performance in hyperspectral image (HSI) classification; however, their robustness is highly sensitive to patch size. This study investigates the impact of spatial patch size on clean accuracy and adversarial robustness using a standard ViT and a Channel Attention Fusion variant (ViT-CAF). Patch sizes from 1 × 1 to 19 × 19 are evaluated across four benchmark datasets under FGSM, BIM, CW, PGD, and RFGSM attacks. Descriptive results show that smaller patches, particularly 1 × 1 and 3 × 3, generally yield higher adversarial accuracy, while larger patches amplify localized perturbations and degrade robustness. Parameter analysis indicates that patch-size-dependent variations arise mainly from the embedding layer, with the Transformer backbone remaining fixed, confirming that robustness differences are driven primarily by spatial context rather than model capacity. These findings reveal a trade-off between spatial granularity and adversarial resilience and provide guidance for patch size selection in ViT-based HSI applications. Full article
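The parameter analysis above (only the embedding layer changes with patch size while the Transformer backbone stays fixed) reduces to simple arithmetic; the band count and embedding dimension below are illustrative, not the paper's configuration:

```python
def patch_embedding_params(patch, channels, dim):
    """ViT linear patch projection: (patch*patch*channels) weights per
    embedding dimension, plus one bias per dimension."""
    return patch * patch * channels * dim + dim

# Hypothetical HSI cube with 200 spectral bands, embedding dim 64.
for p in (1, 3, 19):
    print(p, patch_embedding_params(p, channels=200, dim=64))
```

Going from 1 × 1 to 19 × 19 multiplies the projection weights by 361, so capacity differences across patch sizes are confined to this one layer, supporting the abstract's attribution of robustness differences to spatial context.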

49 pages, 943 KB  
Review
A Review of Resilient IoT Systems: Trends, Challenges, and Future Directions
by Bandar Alotaibi
Appl. Sci. 2026, 16(4), 2079; https://doi.org/10.3390/app16042079 - 20 Feb 2026
Abstract
The Internet of Things (IoT) is increasingly embedded in critical infrastructures across healthcare, energy, transportation, and industrial automation, yet its pervasiveness introduces substantial security and resilience challenges. This paper presents a comprehensive review of recent advances in IoT resilience, focusing on developments reported between 2022 and 2025. A layered taxonomy is proposed to organize resilience strategies across hardware, network, learning, application, and governance layers, addressing adversarial, environmental, and hybrid stressors. The survey systematically classifies and compares more than forty representative studies encompassing deep learning under adversarial attack, generative and ensemble intrusion detection, hardware and protocol-level defenses, federated and distributed learning, and trust and governance-based approaches. A comparative analysis shows that while adversarial training, GAN-based augmentation, and decentralized learning improve robustness, their evidence is often confined to specific datasets or attack scenarios, with limited validation in large-scale deployments. The study highlights challenges in benchmarking adaptivity, cross-layer integration, and explainable resilience, concluding with future directions for creating antifragile IoT systems that can self-heal and adapt to evolving cyber–physical threats. Full article

21 pages, 2598 KB  
Article
AG2: Attention-Guided Dynamic Adaptation for Adversarial Attacks in Computer Vision
by Jie Tian and Vladimir Y. Mariano
Algorithms 2026, 19(2), 159; https://doi.org/10.3390/a19020159 - 18 Feb 2026
Abstract
Deep neural networks (DNNs) have achieved remarkable success in computer vision yet remain vulnerable to adversarial examples. Existing attacks typically distribute perturbations uniformly across the input, without leveraging the model’s internal attention mechanism, and fail to adapt to model responses. To tackle these limitations, we propose AG2 (Attention-Guided Adversarial Sample Generation), an adversarial attack algorithm that uses dynamically updated attention maps to guide perturbation placement and a dynamic feedback mechanism for adaptive optimization. AG2 comprises three steps: feature extraction and attention-weight computation, iterative optimization of perturbations guided by attention maps, and adjustment of optimization parameters based on attention shifts. By concentrating perturbations in regions receiving high attention from the victim model, AG2 improves attack effectiveness while preserving visual imperceptibility. The dynamic feedback mechanism further maintains robustness against defended models such as those trained with defensive distillation. Experiments on MNIST, CIFAR-10, and ImageNet show that AG2 achieves attack success rates of 93.7%, 93.5%, and 85.0%, respectively, outperforming prior methods. Moreover, AG2 exhibits strong cross-architecture transferability, achieving a 69.5% success rate on Vision Transformers, 14.2 percentage points higher than the previous method’s 55.3%. Theoretical analysis provides convergence guarantees and stability bounds for the proposed attention-guided optimization. Full article
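The core of attention-guided perturbation placement can be sketched as a sign-gradient step scaled by a normalized attention map; the toy inputs below are invented, and AG2's dynamic feedback loop and attention-map updates are not shown:

```python
def attention_guided_perturb(x, grad, attention, eps):
    """Concentrate an FGSM-style step where attention is high: each
    input element moves by eps scaled by its (max-normalized) attention
    weight, in the sign direction of the loss gradient."""
    peak = max(attention)
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * (a / peak) * sign(g)
            for xi, g, a in zip(x, grad, attention)]

x = [0.2, 0.5, 0.9, 0.1]
grad = [0.3, -0.7, 0.1, -0.2]     # loss gradient w.r.t. the input
attn = [0.05, 1.0, 0.6, 0.1]      # victim-model attention map
print(attention_guided_perturb(x, grad, attn, eps=0.1))
```

Elements with near-zero attention barely move, which is how the perturbation budget is concentrated on the regions the victim model actually uses.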

53 pages, 3178 KB  
Review
Federated Learning in Edge Computing: Vulnerabilities, Attacks, and Defenses—A Survey
by Sahar Alhawas and Murad A. Rassam
Sensors 2026, 26(4), 1275; https://doi.org/10.3390/s26041275 - 15 Feb 2026
Abstract
Federated Learning (FL), a distributed machine learning framework, enables collaborative model training across multiple devices without sharing raw data, thereby preserving privacy and reducing communication costs. When combined with Edge Computing (EC), FL brings computations closer to data sources, enabling low-latency, real-time decision-making in resource-constrained environments. However, this decentralization introduces several vulnerabilities, including data poisoning, backdoor attacks, inference leaks, and Byzantine behaviors, which are worsened by the heterogeneity of edge devices and their intermittent connectivity. This survey presents a comprehensive review of the intersection of FL and EC, focusing on vulnerabilities, attack vectors, and defense mechanisms. We analyze existing methods for robust aggregation, anomaly detection, differential privacy, and secure aggregation, with a focus on their feasibility within edge environments. Additionally, we identify open research challenges, such as scalability, resilience to heterogeneity, and energy-efficient defenses, and provide insights into the evolving landscape of FL in edge computing. This review aims to inform future research on enhancing the security, privacy, and efficiency of FL systems deployed in real-world edge environments. Full article
(This article belongs to the Section Internet of Things)
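One of the robust-aggregation families surveyed here can be illustrated with coordinate-wise median aggregation, a common Byzantine-robust alternative to the FedAvg mean; the client updates below are toy values, not drawn from any surveyed system:

```python
from statistics import median

def coordwise_median(updates):
    """Byzantine-robust aggregation sketch: take the per-coordinate
    median of client model updates instead of the FedAvg mean, so a
    minority of extreme (poisoned) updates cannot drag the result."""
    return [median(coords) for coords in zip(*updates)]

honest = [[0.9, -0.1], [1.1, 0.1], [1.0, 0.0]]
byzantine = [[100.0, -100.0]]                  # poisoned update
print(coordwise_median(honest + byzantine))    # stays near the honest updates
```

The mean of the same four updates would be pulled to roughly [25.75, −25.0], which is why median-style rules are a staple defense in edge FL despite their higher per-round cost.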

20 pages, 1282 KB  
Article
Graph Neural Network-Guided TrapManager for Critical Path Identification and Decoy Deployment
by Rui Liu, Guangxia Xu and Zhenwei Hu
Mathematics 2026, 14(4), 683; https://doi.org/10.3390/math14040683 - 14 Feb 2026
Abstract
Static honeypot deployment and one-shot attack-path analysis often become ineffective against adaptive adversaries because fixed decoy layouts are easy to fingerprint and risk estimates quickly go stale. This paper presents a unified, mathematically grounded TrapManager framework that couples graph representation learning with budget-constrained combinatorial optimization for dynamic cyber deception. We model attacker progression on vulnerability-based attack graphs and learn context-aware node embeddings using a Graph Attention Network (GAT) that fuses vulnerability-driven risk signals (e.g., CVSS-derived node scores) with structural features. The learned representations are used to estimate edge plausibility and rank candidate source–target routes at the path level. Given limited resources, we formulate pointTrap placement as a Mixed-Integer Programming (MIP) problem that maximizes the expected interception of high-risk paths while penalizing deployment cost under explicit budget constraints, including mandatory coverage of the top-ranked critical paths. To enable online adaptiveness, a pointTrap-triggered, event-driven feedback mechanism locally amplifies risk around alerted regions, updates path weights without retraining the GAT, and re-solves the MIP for rapid redeployment. Experiments on MulVAL-generated benchmark attack graphs and cross-domain transfer settings demonstrate fast convergence, strong discrimination between attack and non-attack edges, and early interception within a small number of hops even with minimal decoy budgets. Overall, the proposed framework provides a scalable and resource-efficient approach to closed-loop attack-path defense by integrating attention-based learning and integer optimization. Full article
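A greedy heuristic conveys the budget-constrained placement problem the paper solves exactly with MIP; the attack graph, risk weights, and node costs below are invented for illustration:

```python
def greedy_trap_placement(paths, weights, costs, budget):
    """Greedy stand-in for the MIP: repeatedly trap the node that
    intercepts the most remaining path weight per unit cost, until the
    budget is exhausted or no node helps."""
    traps, remaining = set(), list(range(len(paths)))
    while True:
        best, best_ratio = None, 0.0
        for node in set().union(*paths):
            if node in traps or costs[node] > budget:
                continue
            gain = sum(weights[i] for i in remaining if node in paths[i])
            if gain / costs[node] > best_ratio:
                best, best_ratio = node, gain / costs[node]
        if best is None:
            return traps
        traps.add(best)
        budget -= costs[best]
        remaining = [i for i in remaining if best not in paths[i]]

paths = [{"a", "b", "c"}, {"b", "d"}, {"e", "d"}]   # attack paths as node sets
weights = [0.9, 0.6, 0.3]                           # learned risk scores
costs = {"a": 2, "b": 1, "c": 2, "d": 1, "e": 3}
print(greedy_trap_placement(paths, weights, costs, budget=2))
```

The MIP formulation additionally enforces mandatory coverage of the top-ranked critical paths and gives optimality guarantees; the greedy version only shows the objective's shape.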

29 pages, 5664 KB  
Article
Adversarially Robust and Explainable Insulator Defect Detection for Smart Grid Infrastructure
by Mubarak Alanazi
Energies 2026, 19(4), 1013; https://doi.org/10.3390/en19041013 - 14 Feb 2026
Abstract
Automated insulator inspection systems face critical challenges from small object sizes, complex backgrounds, and vulnerability to adversarial attacks, a security concern largely unaddressed in safety-critical power infrastructure. We introduce Faster-YOLOv12n, integrating a FasterNet backbone with SGC2f attention modules and Wise-ShapeIoU loss for enhanced small defect localization. Our architecture achieves 98.9% mAP@0.5 on the CPLID, improving baseline YOLOv12n by 1.3% in precision (97.8% vs. 96.5%), 4.7% in recall (95.1% vs. 90.4%), and 1.8% in mAP@0.5. Through differential data augmentation, we expand training samples from 678 to 3900 images, achieving balanced class distribution and robust generalization across fog, adverse weather, and complex transmission line backgrounds. Comparative evaluation demonstrates superior performance over RT-DETR, Faster R-CNN, YOLOv7, YOLOv8, and YOLOv9, with per-class analysis revealing 99.8% AP@0.5 for defect detection. We provide the first comprehensive adversarial robustness evaluation for insulator defect detection, systematically assessing FGSM, PGD, and C&W attacks across perturbation budgets. Through adversarial training with mixed-batch strategies, our robust model maintains 93.2% mAP@0.5 under the strongest FGSM attacks (ϵ = 48/255), 94.5% under PGD attacks, and 95.1% under C&W attacks (τ = 3.0) while preserving 98.9% clean accuracy, demonstrating no trade-off between accuracy and robustness. Grad-CAM visualizations demonstrate that attacks disrupt confidence calibration while preserving spatial attention on defect regions, providing interpretable insights into model decision-making under adversarial conditions and validating learned feature representations for safety-critical smart grid monitoring applications. Full article

27 pages, 1059 KB  
Systematic Review
Data Security and Privacy in GPT Models: Techniques and Challenges
by David Ghiurău and Daniela Elena Popescu
Appl. Sci. 2026, 16(4), 1900; https://doi.org/10.3390/app16041900 - 13 Feb 2026
Abstract
The rapid advancement of Generative Pre-trained Transformer (GPT) models has led to their widespread adoption across applied domains such as healthcare, finance, education, and enterprise software engineering. However, the large-scale data requirements and generative capabilities of these models introduce significant challenges related to data security, privacy preservation, and regulatory compliance. This paper presents a systematic literature review conducted in accordance with the PRISMA 2020 guidelines, analyzing 60 peer-reviewed empirical studies published between 2020 and 2025 in Q1 and Q2 journals indexed in the Web of Science Core Collection. The review examines the evolution of GPT architectures and evaluates state-of-the-art security and privacy techniques, including encryption, differential privacy, federated learning, data anonymization, model distillation, and secure deployment mechanisms. Key challenges identified include unintended memorization of sensitive data, adversarial prompt-based attacks, and performance degradation resulting from privacy-preserving constraints, with reported accuracy reductions ranging from 5% to 20% depending on the applied technique. Additionally, the analysis highlights increased computational overhead, in some cases exceeding 30–40% training or inference cost when advanced cryptographic methods are employed. Regulatory and ethical implications are assessed in relation to frameworks such as GDPR, CCPA, HIPAA, and the proposed EU Artificial Intelligence Act. The findings emphasize the need for privacy-by-design approaches and scalable governance strategies to support secure and trustworthy deployment of GPT models in applied real-world environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

49 pages, 5086 KB  
Article
Class-Specific GAN-Based Minority Data Augmentation for Cyberattack Detection Using the UWF-ZeekData22 Dataset
by Asfaw Debelie, Sikha S. Bagui, Dustin Mink and Subhash C. Bagui
Technologies 2026, 14(2), 117; https://doi.org/10.3390/technologies14020117 - 12 Feb 2026
Abstract
Intrusion detection systems (IDS) often struggle to detect rare but high-impact attack behaviors due to severe class imbalance in real-world network traffic. This work proposes a class-specific GAN-based augmentation framework that explicitly targets sparsity in the minority-class in structured cybersecurity datasets. Unlike prior GAN-based approaches that employ global augmentation or anomaly-driven synthesis, separate Generative Adversarial Networks (GANs) are trained independently for each MITRE ATT&CK tactic using only real minority-class samples, enabling focused distribution learning without contamination from benign traffic. Using a relatively new network traffic dataset, UWF-ZeekData22, the proposed framework augments minority classes under conditions of extreme sample sparsity, where traditional classifiers and interpolation-based oversampling methods are ineffective or statistically unreliable. Five traditional classifiers—Logistic Regression, Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree, and Random Forest—are evaluated before and after augmentation using stratified 5-fold cross-validation. Experimental results show that class-specific GAN augmentation consistently improves recall and F1-score for rare attack tactics, with the largest gains observed under extreme sparsity where pre-augmentation evaluation was infeasible. Notably, false-negative rates are substantially reduced without degrading majority-class performance, demonstrating that the proposed approach enhances minority-class separability rather than inflating evaluation metrics. These findings demonstrate that class-specific GAN-based augmentation is a practical and robust data-level strategy for improving the detection of rare MITRE ATT&CK-aligned attack behaviors in machine-learning-based IDSs. Full article
(This article belongs to the Section Information and Communication Technologies)
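The class-specific training loop can be sketched with a per-class Gaussian sampler standing in for each tactic's GAN (so the example stays runnable and dependency-free); the tactic names and feature values are invented:

```python
import random
from statistics import mean, stdev

class GaussianGenerator:
    """Stand-in for a per-tactic GAN: fits an independent Gaussian per
    feature on that class's real samples only, then samples from it."""
    def fit(self, samples):
        cols = list(zip(*samples))
        self.params = [(mean(c), stdev(c)) for c in cols]
        return self
    def sample(self, n, rng):
        return [[rng.gauss(m, s) for m, s in self.params] for _ in range(n)]

def augment_minority(data_by_tactic, target, seed=0):
    """Train one generator per tactic on its real samples only and top
    minority classes up to `target`; majority classes are untouched."""
    rng = random.Random(seed)
    out = {}
    for tactic, samples in data_by_tactic.items():
        out[tactic] = list(samples)
        if len(samples) < target:
            gen = GaussianGenerator().fit(samples)
            out[tactic] += gen.sample(target - len(samples), rng)
    return out

data = {"recon": [[0.1, 1.0], [0.2, 1.2], [0.3, 0.9]],   # minority tactic
        "benign": [[5.0, 5.0]] * 50}                     # majority class
aug = augment_minority(data, target=10)
print(len(aug["recon"]), len(aug["benign"]))
```

The key structural point from the abstract is preserved: each minority class gets its own generator fitted without contamination from benign traffic, and only minority classes are augmented.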

28 pages, 2899 KB  
Article
Design of Secure Communication Networks for UAV Platform Empowered by Lightweight Authentication Protocols
by Muhammet A. Sen, Saba Al-Rubaye and Antonios Tsourdos
Electronics 2026, 15(4), 785; https://doi.org/10.3390/electronics15040785 - 12 Feb 2026
Abstract
Flying Ad Hoc Networks (FANETs) formed by cooperative Unmanned Aerial Vehicles (UAVs) require formally proven secure and resource-efficient authentication because open wireless channels allow active adversaries to inject commands, replay traffic, and impersonate nodes. Conventional certificate-based mechanisms impose key management overhead and remain vulnerable under device capture, while existing lightweight and Physical Unclonable Function (PUF)-assisted proposals commonly assume stable connectivity, lack formal adversarial verification, or are evaluated only through simulation. This paper presents a lightweight PUF-assisted authentication protocol designed for dynamic multi-hop FANET operation. The scheme provides mutual UAV–Ground Station (GS) authentication and session key establishment and further enables secure UAV–UAV communication using an off-path ticket mechanism that eliminates continuous infrastructure dependence. The protocol is constructed through verification-driven refinement and formally analysed under the Dolev–Yao model, establishing authentication and session key secrecy and resistance to replay and impersonation attacks. Implementation-oriented latency measurements on Raspberry-Pi-class embedded platforms demonstrate that cryptographic processing time can be further reduced with hardware improvements, while the overall end-to-end delay is still largely determined by channel conditions and connection behaviour. Comparative evaluation shows reduced communication cost and broader security coverage relative to existing UAV authentication schemes, indicating practical deployability in large-scale FANET environments. Full article
(This article belongs to the Special Issue Wireless Sensor Network: Latest Advances and Prospects)

29 pages, 766 KB  
Article
Enhancing the MITRE ATT&CK® Framework for Cyber-Physical Systems Using Insights from Advanced Persistent Threats
by Michael Mc Cabe and Siv Hilde Houmb
Appl. Sci. 2026, 16(4), 1815; https://doi.org/10.3390/app16041815 - 12 Feb 2026
Abstract
In recent years, numerous Advanced Persistent Threats (APTs) have carried out cyber-physical attacks on critical infrastructures. Ukraine has been the victim of several advanced campaigns against its power grids, exemplifying a growing trend of disruptive and potentially destructive attacks. Although frameworks like the MITRE ATT&CK® (ATT&CK) document adversaries’ behaviour across various domains, they show limitations in representing the unique characteristics of cyber-physical attacks. Existing models often fail to capture the integration of physical processes, system states, and domain-specific impacts that are essential to understand threats in cyber-physical environments. This gap hinders the ability to fully model how APTs exploit physical components alongside cyber. This research investigates the limitations of the ATT&CK Industrial Control System (ICS) framework in the context of Cyber-Physical System (CPS). A capability analysis of selected Russian APTs known to target CPS was conducted, resulting in conceptual enhancements to better represent their relevant tactics and techniques. These enhancements were evaluated through semi-structured interviews with cybersecurity professionals. The findings indicate the need for improved representation of interactions in the physical domain, along with greater contextual detail on tactics and techniques. Although the study is exploratory, the enhancements provide a foundation for future research to strengthen CPS threat analysis. Full article
(This article belongs to the Special Issue Infrastructure Resilience Analysis)

21 pages, 2513 KB  
Article
Towards Information-Theoretic Security and Privacy in IoT: A Three-Factor AKA Protocol Supporting Forgotten Password Reset
by Yicheng Yu, Kai Wei, Hongtu Li and Kai Zhang
Entropy 2026, 28(2), 205; https://doi.org/10.3390/e28020205 - 11 Feb 2026
Abstract
The growth of the Internet of Things (IoT) has created many security challenges, a prime example being the design of secure, efficient authentication and key agreement (AKA) protocols. This paper presents a novel three-factor AKA protocol for the IoT. The scheme integrates password, biometric, and device-based factors to achieve strong security: it preserves user anonymity, provides forward secrecy, and resists attacks such as replay, impersonation, and de-synchronization. It also adds a secure forgotten-password-reset function, improving the protocol's usability. Security analysis demonstrates its strength against a typical adversary, while performance evaluation shows that it outperforms existing solutions in computational and communication efficiency. The work offers a practical and scalable security solution for IoT systems that meets high security standards within IoT resource constraints. Full article
(This article belongs to the Special Issue Information-Theoretic Security and Privacy)
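The three-factor idea can be sketched as deriving a session key from all three factors plus fresh nonces; this is a generic illustration with placeholder values, not the paper's protocol, and the biometric is assumed to be pre-processed into a stable key (e.g. by a fuzzy extractor):

```python
import hashlib, hmac, os

def kdf(*parts):
    """Illustrative derivation: hash the length-prefixed factors so that
    no two different factor splits collide."""
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(2, "big") + p)
    return h.digest()

# Three factors (all values are placeholders):
password = b"correct horse"
biometric_key = b"fuzzy-extractor-output"   # stable key from a biometric
device_secret = b"smartcard-secret"

# Fresh nonces from each side give the session key freshness (replay resistance).
nonce_user, nonce_server = os.urandom(16), os.urandom(16)
master = kdf(password, biometric_key, device_secret)
session_key = hmac.new(master, nonce_user + nonce_server,
                       hashlib.sha256).digest()
print(len(session_key))   # 32-byte session key
```

A real AKA protocol additionally authenticates both parties before the key is used and, as in this paper, supports resetting the password factor without re-enrolling the others.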

34 pages, 3862 KB  
Article
Securing UAV Swarms with Vision Transformers: A Byzantine-Robust Federated Learning Framework for Cross-Modal Intrusion Detection
by Canan Batur Şahin
Drones 2026, 10(2), 125; https://doi.org/10.3390/drones10020125 - 11 Feb 2026
Abstract
The increasing deployment of uncrewed aerial vehicles (UAVs) in cyber-physical and safety-critical missions has amplified the need for intrusion detection systems that are accurate, privacy-preserving, and resilient to adversarial manipulation. In this paper, we propose CM-BRF-ViT, a Cross-Modal Byzantine-Robust Federated Vision Transformer framework for UAV intrusion detection that jointly addresses heterogeneous attack modeling, distributed learning security, and adaptive decision fusion. The proposed framework integrates Gramian Angular Field (GAF) transformations with Vision Transformer (ViT) architectures to effectively convert tabular network and cyber-physical features into discriminative visual representations suitable for attention-based learning. To enable privacy-preserving collaboration across distributed UAV nodes, CM-BRF-ViT operates within a federated learning paradigm and introduces Reference-GAF Consistency Aggregation (ReGCA). This novel Byzantine-robust aggregation mechanism jointly measures prediction consistency and feature-level semantic consistency using a trusted reference set and MAD-based robust weighting. Unlike conventional defenses that rely solely on parameter-space filtering, ReGCA supervises model updates at both behavioral and representation levels, significantly enhancing robustness against malicious clients. In addition, a learnable cross-modal fusion head is developed to adaptively combine attack probabilities derived from cyber and cyber-physical modalities, allowing the framework to exploit complementary threat signatures across layers. Extensive experiments conducted on the UAVIDS-2025 and Cyber-Physical datasets demonstrate that the proposed method achieves 97.1% detection accuracy for UAV network traffic and 78.5% for cyber-physical data, with a fused detection AUC of 0.993. Under adversarial settings, CM-BRF-ViT preserves 89.6% accuracy with up to 40% Byzantine clients, outperforming FedAvg by more than 44 percentage points. Ablation studies further confirm that ReGCA, cross-modal fusion, and ViT-based representation learning contribute complementary performance gains over baseline federated and centralized approaches. These results demonstrate that CM-BRF-ViT provides a robust, adaptive, and privacy-aware intrusion detection solution for UAV systems, making it well-suited for deployment in adversarial and resource-constrained aerial networks. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
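The MAD-based robust weighting mentioned for ReGCA can be illustrated generically: score each client for consistency, then zero out clients whose scores are extreme relative to the median absolute deviation (the scores and cutoff below are invented, not the paper's exact consistency measures):

```python
from statistics import median

def mad_weights(scores, cutoff=3.0):
    """Downweight clients whose consistency score deviates from the
    median by more than `cutoff` robust standard deviations, then
    renormalize the surviving weights to sum to 1."""
    med = median(scores)
    mad = median(abs(s - med) for s in scores) or 1e-12
    weights = []
    for s in scores:
        z = abs(s - med) / (1.4826 * mad)   # MAD -> sigma under Gaussianity
        weights.append(1.0 if z <= cutoff else 0.0)
    total = sum(weights)
    return [w / total for w in weights]

# Consistency scores for 5 honest clients and 1 Byzantine outlier.
scores = [0.91, 0.93, 0.92, 0.90, 0.94, 0.15]
print(mad_weights(scores))   # the outlier client gets zero weight
```

Because the median and MAD ignore a minority of extreme values, the outlier cannot shift its own rejection threshold, which is the property that makes this weighting tolerate a sizeable Byzantine fraction.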
