Search Results (261)

Search Parameters:
Keywords = malicious behavior detection

20 pages, 3228 KB  
Article
Symmetry-Aware Byzantine Resilience in Federated Learning via Dual-Channel Attention-Driven Anomaly Detection
by Yuliang Zhang, Jian Hou, Xianke Zhou, Linjie Ruan, Xianyu Luo and Lili Wang
Symmetry 2026, 18(3), 478; https://doi.org/10.3390/sym18030478 - 11 Mar 2026
Abstract
Byzantine failures remain a critical threat to Federated Learning (FL), where malicious clients inject adversarial updates to disrupt global model convergence. From the perspective of symmetry, benign client updates typically exhibit statistical symmetry around the global consensus, whereas Byzantine attacks function as “symmetry-breaking” events that introduce skewness and distributional anomalies. Existing defenses often rely on unrealistic assumptions or fail to capture these asymmetric deviations under high-dimensional non-IID settings. In this paper, we propose a symmetry-aware Byzantine-resilient FL framework driven by a Dual-Channel Attention-Driven Anomaly Detector (DAAD). Specifically, DAAD transforms inter-client behaviors into geometrically symmetric interaction matrices—encoding Gradient Cosine Similarities and Loss Euclidean Distances—to construct dual-channel spatial representations. These representations are processed via a Convolutional Neural Network (CNN) enhanced with Squeeze-and-Excitation (SE) attention blocks, which leverage the inherent symmetry of benign consensus to extract robust adversarial signatures. The detector is pre-trained offline on a synthetic dataset incorporating a diverse portfolio of simulated attacks (e.g., Gaussian noise and label flipping). Crucially, this pre-trained model is seamlessly embedded into the online FL loop to filter updates without requiring ground-truth labels. By jointly encoding client behaviors and learning cross-modal attack signatures, our framework enables reliable detection even when over half of the clients are Byzantine. Extensive experiments on MNIST, CIFAR-10, and FEMNIST datasets demonstrate that DAAD consistently outperforms existing robust aggregation baselines in both anomaly detection accuracy and global model performance, especially under high Byzantine ratios and non-IID conditions. Full article
(This article belongs to the Section Computer)
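The gradient-cosine-similarity channel described in the abstract above can be sketched as a pairwise similarity matrix over flattened client updates. This is a generic illustration under assumed shapes, not the authors' DAAD implementation; the function name and toy data are hypothetical.

```python
import numpy as np

def cosine_similarity_matrix(updates):
    """Pairwise cosine similarities between flattened client updates.

    updates: (n_clients, n_params) array of gradient/update vectors.
    Returns a symmetric (n_clients, n_clients) matrix with entries in [-1, 1].
    """
    U = np.asarray(updates, dtype=float)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # guard against all-zero updates
    Un = U / norms
    return Un @ Un.T

# Benign clients push in a shared direction; a sign-flipped update stands out
# as a strongly negative row/column in the interaction matrix.
benign = np.array([[1.0, 0.9, 1.1], [1.1, 1.0, 0.9]])
byzantine = -benign[0:1]  # symmetry-breaking, label-flipping-like update
M = cosine_similarity_matrix(np.vstack([benign, byzantine]))
```

A detector such as the one described would consume matrices like `M` (stacked with a loss-distance channel) as its spatial input.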

24 pages, 5580 KB  
Article
DF-TransVAE: A Deep Fusion Network for Binary Classification-Based Anomaly Detection in Internet User Behavior
by Huihui Fan, Yuan Jia, Wu Le, Zhenhong Jia, Hui Zhao, Congbing He, Hedong Jiang, Zeyu Hu, Xiaoyi Lv, Jianting Yuan and Xiaohui Huang
Appl. Sci. 2026, 16(5), 2243; https://doi.org/10.3390/app16052243 - 26 Feb 2026
Abstract
User behavior anomaly detection plays a vital role in network security for identifying malicious access and abnormal activities in high-dimensional internet user behavior data. Although Transformer architectures have been widely adopted in anomaly detection tasks, and their integration with Variational Autoencoders (VAEs) has often been used to further improve detection accuracy, existing integration methods have failed to effectively balance global feature dependency modeling and generative data distribution learning. This results in limited capability in identifying complex anomalous patterns. To address this issue, this paper proposes DF-TransVAE, a novel deeply integrated framework that advances the integration of a Transformer and a VAE for supervised anomaly detection. The framework first fuses global contextual representations from the Transformer encoder with original input features, then maps the fused representation into the latent space via the VAE encoder. A cross-attention mechanism is introduced as the core of deep integration, enabling dynamic, bidirectional interaction between the fused features and latent variables to enhance information fusion. Lastly, a fully connected classifier equipped with residual connections outputs anomaly probabilities for supervised binary classification. Experimental results on two public datasets demonstrate that the proposed framework achieves better performance than existing deep learning methods in terms of accuracy, precision, recall, and F1-score, particularly in detecting complex anomalous patterns. Our results indicate that the deep integration mechanism we propose effectively addresses the limitations of conventional Transformer–VAE combinations. Full article
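The core fusion step described above (fused Transformer features attending to VAE latent variables) can be sketched as single-head scaled dot-product cross-attention. This is a generic NumPy illustration, not the paper's DF-TransVAE implementation; all shapes and names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention.

    queries: (n_q, d) fused Transformer features.
    keys/values: (n_kv, d) latent variables from the VAE encoder.
    Each output row is a convex combination of the value rows.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
fused = rng.standard_normal((4, 8))   # 4 fused feature tokens
latent = rng.standard_normal((2, 8))  # 2 latent tokens
out = cross_attention(fused, latent, latent)
```

Bidirectional interaction, as the abstract describes, would also run the attention in the opposite direction (latents as queries, fused features as keys/values).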

21 pages, 3512 KB  
Article
Real-Time Ransomware Detection Using Reinforcement Learning Agents
by Kutub Thakur, Md Liakat Ali, Suzanna Schmeelk, Joan Debello and Md Mustafizur Rahman
Information 2026, 17(2), 194; https://doi.org/10.3390/info17020194 - 13 Feb 2026
Abstract
Traditional signature-based anti-malware tools often fail to detect zero-day ransomware attacks due to their reliance on known patterns. This paper presents a real-time ransomware detection framework that models system behavior as a Reinforcement Learning (RL) environment. Behavioral features—including file entropy, CPU usage, and registry changes—are extracted from dynamic analysis logs generated by Cuckoo Sandbox. A Deep Q-Network (DQN) agent is trained to proactively block malicious actions by maximizing long-term rewards based on observed behavior. Experimental evaluation across multiple ransomware families such as WannaCry, Locky, Cerber, and Ryuk demonstrates that the proposed RL agent achieves superior detection accuracy, precision, and F1-score compared to existing static and supervised learning methods. Furthermore, ablation tests and latency analysis confirm the model’s robustness and suitability for real-time deployment. This work introduces a behavior-driven, generalizable approach to ransomware defense that adapts to unseen threats through continual learning. Full article
(This article belongs to the Special Issue Extended Reality and Cybersecurity)
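One of the behavioral features listed in the abstract, file entropy, is straightforward to compute: ransomware's encrypted output pushes byte entropy toward the 8-bit maximum. A minimal sketch (stdlib only; the function name is illustrative, not from the paper):

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0).

    Encrypted or compressed content approaches 8.0, while plain text
    and most uncompressed files score noticeably lower.
    """
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

low = byte_entropy(b"AAAAAAAAAAAAAAAA")  # single repeated symbol
high = byte_entropy(bytes(range(256)))   # uniform byte distribution
```

A monitoring agent would track entropy of files touched by a process and flag sudden jumps toward the maximum as a candidate encryption event.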

34 pages, 3862 KB  
Article
Securing UAV Swarms with Vision Transformers: A Byzantine-Robust Federated Learning Framework for Cross-Modal Intrusion Detection
by Canan Batur Şahin
Drones 2026, 10(2), 125; https://doi.org/10.3390/drones10020125 - 11 Feb 2026
Abstract
The increasing deployment of uncrewed aerial vehicles (UAVs) in cyber-physical and safety-critical missions has amplified the need for intrusion detection systems that are accurate, privacy-preserving, and resilient to adversarial manipulation. In this paper, we propose CM-BRF-ViT, a Cross-Modal Byzantine-Robust Federated Vision Transformer framework for UAV intrusion detection that jointly addresses heterogeneous attack modeling, distributed learning security, and adaptive decision fusion. The proposed framework integrates Gramian Angular Field (GAF) transformations with Vision Transformer (ViT) architectures to effectively convert tabular network and cyber-physical features into discriminative visual representations suitable for attention-based learning. To enable privacy-preserving collaboration across distributed UAV nodes, CM-BRF-ViT operates within a federated learning paradigm and introduces Reference-GAF Consistency Aggregation (ReGCA). This novel Byzantine-robust aggregation mechanism jointly measures prediction consistency and feature-level semantic consistency using a trusted reference set and MAD-based robust weighting. Unlike conventional defenses that rely solely on parameter-space filtering, ReGCA supervises model updates at both behavioral and representation levels, significantly enhancing robustness against malicious clients. In addition, a learnable cross-modal fusion head is developed to adaptively combine attack probabilities derived from cyber and cyber-physical modalities, allowing the framework to exploit complementary threat signatures across layers. Extensive experiments conducted on the UAVIDS-2025 and Cyber-Physical datasets demonstrate that the proposed method achieves 97.1% detection accuracy for UAV network traffic and 78.5% for cyber-physical data, with a fused detection AUC of 0.993. Under adversarial settings, CM-BRF-ViT preserves 89.6% accuracy with up to 40% Byzantine clients, outperforming FedAvg by more than 44 percentage points. Ablation studies further confirm that ReGCA, cross-modal fusion, and ViT-based representation learning contribute complementary performance gains over baseline federated and centralized approaches. These results demonstrate that CM-BRF-ViT provides a robust, adaptive, and privacy-aware intrusion detection solution for UAV systems, making it well-suited for deployment in adversarial and resource-constrained aerial networks. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
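The GAF transformation mentioned above has a standard closed form: rescale the series to [-1, 1], map each value to an angle via arccos, and build the matrix of summed-angle cosines. A minimal sketch of the summation variant (GASF); the function name and toy series are illustrative, not from the paper:

```python
import numpy as np

def gramian_angular_summation_field(series):
    """Gramian Angular (summation) Field of a 1-D series.

    Rescales the series to [-1, 1], maps each value to an angle
    phi = arccos(x), and returns G[i, j] = cos(phi_i + phi_j),
    yielding an image-like matrix a ViT can consume.
    """
    x = np.asarray(series, dtype=float)
    lo, hi = x.min(), x.max()
    x = np.zeros_like(x) if hi == lo else 2 * (x - lo) / (hi - lo) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

G = gramian_angular_summation_field([0.0, 1.0, 2.0, 3.0])
```

For tabular features, each row of features (or a per-feature time window) would be converted this way before being fed to the Vision Transformer.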

24 pages, 1642 KB  
Article
ProbeSpec: Robust Model Fingerprinting via Dynamic Perturbation Response Spectrum
by Shanshan Lou, Hanzhe Yu and Qi Xuan
Electronics 2026, 15(4), 729; https://doi.org/10.3390/electronics15040729 - 9 Feb 2026
Abstract
Deep neural networks (DNNs) represent critical intellectual property that model owners urgently need to protect. With the increasing value of models, malicious attackers increasingly attempt to extract model functionality through techniques such as fine-tuning, distillation, and pruning. Model fingerprinting has emerged as a mainstream protection strategy. However, existing fingerprinting methods either exhibit vulnerability to model modifications due to reliance on decision boundary features or require prohibitively large query budgets for accurate verification. This paper proposes ProbeSpec, which captures model fingerprints through dynamic behavioral analysis rather than static output matching. We discover that a model’s response patterns under multi-level perturbations form a unique “behavioral spectrum”, originating from implicit decision mechanisms learned during training and preserved even after various attacks. ProbeSpec employs three complementary probe types to elicit this characteristic and leverages DCT frequency-domain transformation for efficient fingerprint extraction. Extensive experiments show that ProbeSpec achieves 100% detection rate in the majority of attack scenarios, with an overall accuracy exceeding 95% across all tested architectures. Meanwhile, it effectively distinguishes independently trained models and requires only 80 probe samples for fingerprint extraction. Full article
(This article belongs to the Section Artificial Intelligence)
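The DCT frequency-domain step mentioned above can be illustrated with an explicit orthonormal DCT-II over a perturbation-response vector. This is a generic sketch (the O(n^2) matrix form, not an optimized FFT-based transform, and not the authors' ProbeSpec pipeline); names are illustrative.

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D response vector (explicit O(n^2) form).

    Compacts a model's perturbation-response curve into a few
    low-frequency coefficients usable as a compact fingerprint.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    X = 2.0 * (basis @ x)
    X[0] *= np.sqrt(1.0 / (4 * n))   # orthonormal scaling, DC term
    X[1:] *= np.sqrt(1.0 / (2 * n))  # orthonormal scaling, AC terms
    return X

# A flat response curve concentrates all energy in the DC coefficient.
spec = dct2(np.ones(8))
```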

34 pages, 9182 KB  
Article
A Reputation-Aware Adaptive Incentive Mechanism for Federated Learning-Based Smart Transportation
by Abir Raza, Elarbi Badidi and Omar El Harrouss
Smart Cities 2026, 9(2), 27; https://doi.org/10.3390/smartcities9020027 - 4 Feb 2026
Abstract
Federated learning (FL) has emerged as a promising paradigm for privacy-preserving distributed intelligence in modern urban transportation systems, where vehicles collaboratively train global models without sharing raw data. However, the dynamic nature of vehicular environments introduces critical challenges, including unstable participation, data heterogeneity, and the potential for malicious behavior. Conventional FL frameworks lack effective trust management and adaptive incentive mechanisms capable of maintaining fairness and reliability under these fluctuating conditions. This paper presents a reputation-aware federated learning framework that integrates multi-dimensional reputation evaluation, dynamic incentive control, and malicious client detection through an adaptive feedback mechanism. Each vehicular client is assessed based on data quality, stability, and behavioral consistency, producing a reputation score that directly influences client selection and reward allocation. The proposed feedback controller self-tunes the incentive weights in real time, ensuring equitable participation and sustained convergence performance. In parallel, a penalty module leverages statistical anomaly detection to identify, isolate, and penalize untrustworthy clients without compromising benign contributors. Extensive simulations conducted on real-world datasets demonstrate that the proposed framework achieves higher model accuracy and greater robustness against poisoning and gradient manipulation attacks compared to existing baseline methods. The results confirm the potential of our trust-regulated incentive mechanism to enable reliable federated learning in smart-city transportation systems. Full article
(This article belongs to the Topic Data-Driven Optimization for Smart Urban Mobility)
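The mechanism described above (multi-dimensional reputation scores gating client selection and aggregation) can be sketched as reputation-weighted FedAvg with a trust threshold. This is a minimal illustration under assumed signal names and weights, not the authors' framework:

```python
import numpy as np

def reputation_score(quality, stability, consistency, weights=(0.4, 0.3, 0.3)):
    """Combine per-client signals (each in [0, 1]) into one reputation score.

    The signal names and mixing weights here are illustrative assumptions.
    """
    w = np.asarray(weights, dtype=float)
    return float(w @ np.array([quality, stability, consistency]))

def aggregate(updates, reputations, threshold=0.3):
    """Reputation-weighted FedAvg that drops clients below a trust threshold."""
    U = np.asarray(updates, dtype=float)
    r = np.asarray(reputations, dtype=float)
    w = r * (r >= threshold)  # isolate low-reputation (penalized) clients
    return (w[:, None] * U).sum(axis=0) / w.sum()

updates = [[1.0, 1.0], [1.2, 0.8], [10.0, -10.0]]  # last update is poisoned
reps = [0.9, 0.8, 0.1]                             # penalized by the detector
g = aggregate(updates, reps)
```

Because the poisoned client falls below the threshold, it contributes zero weight and the global update stays close to the benign consensus.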

28 pages, 922 KB  
Article
MAESTRO: A Multi-Scale Ensemble Framework with GAN-Based Data Refinement for Robust Malicious Tor Traffic Detection
by Jinbu Geng, Yu Xie, Jun Li, Xuewen Yu and Lei He
Mathematics 2026, 14(3), 551; https://doi.org/10.3390/math14030551 - 3 Feb 2026
Abstract
Malicious Tor traffic data contains deep domain-specific knowledge, which makes labeling challenging, and the lack of labeled data degrades the accuracy of learning-based detectors. Real-world deployments also exhibit severe class imbalance, where malicious traffic constitutes a small minority of network flows, which further reduces detection performance. In addition, Tor’s fixed 512-byte cell architecture removes packet-size diversity that many encrypted-traffic methods rely on, making feature extraction difficult. This paper proposes an efficient three-stage framework, MAESTRO v1.0, for malicious Tor traffic detection. In Stage 1, MAESTRO extracts multi-scale behavioral signatures by fusing temporal, positional, and directional embeddings at cell, direction, and flow granularities to mitigate feature homogeneity; it then compresses these representations with an autoencoder into compact latent features. In Stage 2, MAESTRO introduces an ensemble-based quality quantification method that combines five complementary anomaly detection models to produce robust discriminability scores for adaptive sample weighting, helping the classifier to emphasize high-quality samples. MAESTRO also trains three specialized GANs per minority class and applies strict five-model ensemble validation to synthesize diverse high-fidelity samples, addressing extreme class imbalance. We evaluate MAESTRO under systematic imbalance settings, ranging from the natural distribution to an extreme 1% malicious ratio. On the CCS’22 Tor malware dataset, MAESTRO achieves 92.38% accuracy, 64.79% recall, and 73.70% F1-score under the natural distribution, improving F1-score by up to 15.53% compared with state-of-the-art baselines. Under the 1% malicious setting, MAESTRO maintains 21.1% recall, which is 14.1 percentage points higher than the best baseline, while conventional methods drop below 10%. Full article
(This article belongs to the Special Issue New Advances in Network Security and Data Privacy)

31 pages, 4489 KB  
Article
A Hybrid Intrusion Detection Framework Using Deep Autoencoder and Machine Learning Models
by Salam Allawi Hussein and Sándor R. Répás
AI 2026, 7(2), 39; https://doi.org/10.3390/ai7020039 - 25 Jan 2026
Abstract
This study provides a detailed comparative analysis of three hybrid intrusion detection methods aimed at strengthening network security through precise and adaptive threat identification. The proposed framework integrates an Autoencoder-Gaussian Mixture Model (AE-GMM) with two supervised learning techniques, XGBoost and Logistic Regression, combining deep feature extraction with interpretability and stable generalization. Although the downstream classifiers are trained in a supervised manner, the hybrid intrusion detection nature of the framework is preserved through unsupervised representation learning and probabilistic modeling in the AE-GMM stage. Two benchmark datasets were used for evaluation: NSL-KDD, representing traditional network behavior, and UNSW-NB15, reflecting modern and diverse traffic patterns. A consistent preprocessing pipeline was applied, including normalization, feature selection, and dimensionality reduction, to ensure fair comparison and efficient training. The experimental findings show that hybridizing deep learning with gradient-boosted and linear classifiers markedly enhances detection performance and resilience. The AE-GMM-XGBoost model achieved superior outcomes, reaching an F1-score above 0.94 ± 0.0021 and an AUC greater than 0.97 on both datasets, demonstrating high accuracy in distinguishing legitimate and malicious traffic. AE-GMM-Logistic Regression also achieved strong and balanced performance, recording an F1-score exceeding 0.91 ± 0.0020 with stable generalization across test conditions. Conversely, the standalone AE-GMM effectively captured deep latent patterns but exhibited lower recall, indicating limited sensitivity to subtle or emerging attacks. These results collectively confirm that integrating autoencoder-based representation learning with advanced supervised models significantly improves intrusion detection in complex network settings. The proposed framework therefore provides a solid and extensible basis for future research in explainable and federated intrusion detection, supporting the development of adaptive and proactive cybersecurity defenses. Full article
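The probabilistic scoring stage described above can be illustrated with a stripped-down version: fit a Gaussian to benign latent codes and score samples by negative log-likelihood. For brevity a single diagonal Gaussian stands in for the full mixture, and the random latent codes stand in for autoencoder outputs; none of this is the authors' code.

```python
import numpy as np

def fit_diag_gaussian(Z):
    """Fit a diagonal Gaussian to benign latent codes Z of shape (n, d)."""
    return Z.mean(axis=0), Z.var(axis=0) + 1e-6  # variance floor for stability

def anomaly_score(z, mu, var):
    """Negative log-likelihood under the fitted Gaussian; higher = more anomalous."""
    return float(0.5 * np.sum(np.log(2 * np.pi * var) + (z - mu) ** 2 / var))

rng = np.random.default_rng(1)
benign_latent = rng.normal(0.0, 1.0, size=(500, 4))  # stand-in for AE codes
mu, var = fit_diag_gaussian(benign_latent)
normal_s = anomaly_score(np.zeros(4), mu, var)
attack_s = anomaly_score(np.full(4, 6.0), mu, var)   # far outside benign mass
```

In the hybrid setup described, such scores (or the latent codes themselves) would then feed the downstream XGBoost or Logistic Regression classifier.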

32 pages, 4159 KB  
Article
APT Malware Detection Model Based on Heterogeneous Multimodal Semantic Fusion
by Chaosen Pu and Liang Wan
Appl. Sci. 2026, 16(2), 1083; https://doi.org/10.3390/app16021083 - 21 Jan 2026
Abstract
In recent years, Advanced Persistent Threat (APT) malware, with its high stealth, has made it difficult for unimodal detection methods to accurately identify its disguised malicious behaviors. To address this challenge, this paper proposes an APT Malware Detection Model based on Heterogeneous Multimodal Semantic Fusion (HMSF-ADM). By integrating the API call sequence features of APT malware in the operating system and the RGB image features of PE files, the model constructs multimodal representations with stronger discriminability, thus achieving efficient and accurate identification of APT malicious behaviors. First, the model employs two encoders, namely a Transformer encoder equipped with the DPCFTE module and a CAS-ViT encoder, to encode sequence features and image features, respectively, completing local–global collaborative context modeling. Then, the sequence encoding results and image encoding results are interactively fused via two cross-attention mechanisms to generate fused representations. Finally, a TextCNN-based classifier is utilized to perform classification prediction on the fused representations. Experimental results on two APT malware datasets demonstrate that the proposed HMSF-ADM model outperforms various mainstream multimodal comparison models in core metrics such as accuracy, precision, and F1-score. Notably, the F1-score of the model exceeds 0.95 for the vast majority of APT malware families, and its accuracy and F1-score both remain above 0.986 in the task of distinguishing between ordinary malware and APT malware. Full article

25 pages, 3597 KB  
Article
Social Engineering Attacks Using Technical Job Interviews: Real-Life Case Analysis and AI-Assisted Mitigation Proposals
by Tomás de J. Mateo Sanguino
Information 2026, 17(1), 98; https://doi.org/10.3390/info17010098 - 18 Jan 2026
Abstract
Technical job interviews have become a vulnerable environment for social engineering attacks, particularly when they involve direct interaction with malicious code. In this context, the present manuscript investigates an exploratory case study, aiming to provide an in-depth analysis of a single incident rather than seeking to generalize statistical evidence. The study examines a real-world covert attack conducted through a simulated interview, identifying the technical and psychological elements that contribute to its effectiveness, assessing the performance of artificial intelligence (AI) assistants in early detection and proposing mitigation strategies. To this end, a methodology was implemented that combines discursive reconstruction of the attack, code exploitation and forensic analysis. The experimental phase, primarily focused on evaluating 10 large language models (LLMs) against a fragment of obfuscated code, reveals that the malware initially evaded detection by 62 antivirus engines, while assistants such as GPT 5.1, Grok 4.1 and Claude Sonnet 4.5 successfully identified malicious patterns and suggested operational countermeasures. The discussion highlights how the apparent legitimacy of platforms like LinkedIn, Calendly and Bitbucket, along with time pressure and technical familiarity, act as catalysts for deception. Based on these findings, the study suggests that LLMs may play a role in the early detection of threats, offering a potentially valuable avenue to enhance security in technical recruitment processes by enabling the timely identification of malicious behavior. To the best of available knowledge, this represents the first academically documented case of its kind analyzed from an interdisciplinary perspective. Full article

28 pages, 22992 KB  
Article
Domain Knowledge-Infused Synthetic Data Generation for LLM-Based ICS Intrusion Detection: Mitigating Data Scarcity and Imbalance
by Seokhyun Ann, Hongeun Kim, Suhyeon Park, Seong-je Cho, Joonmo Kim and Harksu Cho
Electronics 2026, 15(2), 371; https://doi.org/10.3390/electronics15020371 - 14 Jan 2026
Abstract
Industrial control systems (ICSs) are increasingly interconnected with enterprise IT networks and remote services, which expands the attack surface of operational technology (OT) environments. However, collecting sufficient attack traffic from real OT/ICS networks is difficult, and the resulting scarcity and class imbalance of malicious data hinder the development of intrusion detection systems (IDSs). At the same time, large language models (LLMs) have shown promise for security analytics when system events are expressed in natural language. This study investigates an LLM-based network IDS for a smart-factory OT/ICS environment and proposes a synthetic data generation method that injects domain knowledge into attack samples. Using the ICSSIM simulator, we construct a bottle-filling smart factory, implement six MITRE ATT&CK for ICS-based attack scenarios, capture Modbus/TCP traffic, and convert each request–response pair into a natural-language description of network behavior. We then generate synthetic attack descriptions with GPT by combining (1) statistical properties of normal traffic, (2) MITRE ATT&CK for ICS tactics and techniques, and (3) expert knowledge obtained from executing the attacks in ICSSIM. The Llama 3.1 8B Instruct model is fine-tuned with QLoRA on a seven-class classification task (Benign vs. six attack types) and evaluated on a test set composed exclusively of real ICSSIM traffic. Experimental results show that synthetic data generated only from statistical information, or from statistics plus MITRE descriptions, yield limited performance, whereas incorporating environment-specific expert knowledge is associated with substantially higher performance on our ICSSIM-based expanded test set (100% accuracy in binary detection and 96.49% accuracy with a macro F1-score of 0.958 in attack-type classification). Overall, these findings suggest that domain-knowledge-infused synthetic data and natural-language traffic representations can support LLM-based IDSs in OT/ICS smart-factory settings; however, further validation on larger and more diverse datasets is needed to confirm generality. Full article
(This article belongs to the Special Issue AI-Enhanced Security: Advancing Threat Detection and Defense)
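Converting a Modbus/TCP request-response pair into a natural-language description, as the abstract describes, is essentially structured templating. A hypothetical sketch: the field names, template wording, and sample values below are assumptions, not the paper's actual representation, and real field extraction would come from a pcap parser.

```python
def describe_modbus(pair: dict) -> str:
    """Render one Modbus/TCP request-response pair as an English sentence.

    `pair` uses hypothetical keys (src, function, name, unit, register,
    value); a real pipeline would populate them from parsed traffic.
    """
    return (
        f"Device {pair['src']} sent function code {pair['function']} "
        f"({pair['name']}) to unit {pair['unit']} for register "
        f"{pair['register']}; the response returned value {pair['value']}."
    )

desc = describe_modbus({
    "src": "192.168.0.10", "function": 3, "name": "Read Holding Registers",
    "unit": 1, "register": 40001, "value": 742,
})
```

Descriptions like this, rather than raw packet bytes, are what the fine-tuned LLM classifies.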

25 pages, 1862 KB  
Article
A Novel Architecture for Mitigating Botnet Threats in AI-Powered IoT Environments
by Vasileios A. Memos, Christos L. Stergiou, Alexandros I. Bermperis, Andreas P. Plageras and Konstantinos E. Psannis
Sensors 2026, 26(2), 572; https://doi.org/10.3390/s26020572 - 14 Jan 2026
Abstract
The rapid growth of Artificial Intelligence of Things (AIoT) environments in various sectors has introduced major security challenges, as these smart devices can be exploited by malicious users to form Botnets of Things (BoT). Limited computational resources and weak encryption mechanisms in such devices make them attractive targets for attacks like Distributed Denial of Service (DDoS), Man-in-the-Middle (MitM), and malware distribution. In this paper, we propose a novel multi-layered architecture to mitigate BoT threats in AIoT environments. The system leverages edge traffic inspection, sandboxing, and machine learning techniques to analyze, detect, and prevent suspicious behavior, while using centralized monitoring and response automation to ensure rapid mitigation. Experimental results demonstrate the necessity of the proposed architecture and its superiority over existing models, providing early detection of botnet activity, reduced false positives, improved forensic capabilities, and scalable protection for large-scale AIoT environments. Overall, this solution delivers a comprehensive, resilient, and proactive framework to protect AIoT assets from evolving cyber threats. Full article
(This article belongs to the Special Issue Internet of Things Cybersecurity)

30 pages, 4344 KB  
Article
HAGEN: Unveiling Obfuscated Memory Threats via Hierarchical Attention-Gated Explainable Networks
by Mahmoud E. Farfoura, Mohammad Alia and Tee Connie
Electronics 2026, 15(2), 352; https://doi.org/10.3390/electronics15020352 - 13 Jan 2026
Abstract
Memory resident malware, particularly fileless and heavily obfuscated types, continues to pose a major problem for endpoint defense tools, as these threats often slip past traditional signature-based detection techniques. Deep learning has shown promise in identifying such malicious activity, but its use in real Security Operations Centers (SOCs) is still limited because the internal reasoning of these neural network models is difficult to interpret or verify. In response to this challenge, we present HAGEN, a hierarchical attention architecture designed to combine strong classification performance with explanations that security analysts can understand and trust. HAGEN processes memory artifacts through a series of attention layers that highlight important behavioral cues at different scales, while a gated mechanism controls how information flows through the network. This structure enables the system to expose the basis of its decisions rather than simply output a label. To further support transparency, the final classification step is guided by representative prototypes, allowing predictions to be related back to concrete examples learned during training. When evaluated on the CIC-MalMem-2022 dataset, HAGEN achieved 99.99% accuracy in distinguishing benign programs from major malware classes such as spyware, ransomware, and trojans, all with modest computational requirements suitable for live environments. Beyond accuracy, HAGEN produces clear visual and numeric explanations—such as attention maps and prototype distances—that help investigators understand which memory patterns contributed to each decision, making it a practical tool for both detection and forensic analysis. Full article
(This article belongs to the Section Artificial Intelligence)

25 pages, 692 KB  
Article
Decentralized Dynamic Heterogeneous Redundancy Architecture Based on Raft Consensus Algorithm
by Ke Chen and Leyi Shi
Future Internet 2026, 18(1), 20; https://doi.org/10.3390/fi18010020 - 1 Jan 2026
Viewed by 545
Abstract
Dynamic heterogeneous redundancy (DHR) architectures combine heterogeneity, redundancy, and dynamism to create security-centric frameworks that can be used to mitigate network attacks that exploit unknown vulnerabilities. However, conventional DHR architectures rely on centralized control modules for scheduling and adjudication, leading to significant single-point failure risks and trust bottlenecks that severely limit their deployment in security-critical scenarios. To address these challenges, this paper proposes a decentralized DHR architecture based on the Raft consensus algorithm. It deeply integrates the Raft consensus mechanism with the DHR execution layer to build a consensus-centric control plane and designs a dual-log pipeline to ensure all security-critical decisions are executed only after global consistency via Raft. Furthermore, we define a multi-dimensional attacker model—covering external, internal executor, internal node, and collaborative Byzantine adversaries—to analyze the security properties and explicit defense boundaries of the architecture under Raft’s crash-fault-tolerant assumptions. To assess the effectiveness of the proposed architecture, a prototype consisting of five heterogeneous nodes was developed for thorough evaluation. The experimental results show that, for non-Byzantine external and internal attacks, the architecture achieves high detection and isolation rates, maintains high availability, and ensures state consistency among non-malicious nodes. For stress tests in which a minority of nodes exhibit Byzantine-like behavior, our prototype preserves log consistency and prevents incorrect state commitments; however, we explicitly treat these as empirical observations under a restricted adversary rather than a general Byzantine fault tolerance guarantee. Performance testing revealed that the system exhibits strong security resilience in attack scenarios, with manageable performance overhead. 
Instead of turning Raft into a Byzantine-fault-tolerant consensus protocol, the proposed architecture preserves Raft’s crash-fault-tolerant guarantees at the consensus layer and achieves Byzantine-resilient behavior at the execution layer through heterogeneous redundant executors and majority-hash validation. To support evaluation during peer review, we provide a runnable prototype package containing Docker-based deployment scripts, pre-built heterogeneous executors, and Raft control-plane images, enabling reviewers to observe and assess the representative architectural behaviors of the system under controlled configurations without exposing the internal source code. The complete implementation will be made available after acceptance in accordance with institutional IP requirements, without affecting the scope or validity of the current evaluation. Full article
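The majority-hash validation mentioned above can be illustrated with a minimal sketch, under assumed simplifications (string responses, SHA-256 digests, a strict-majority rule); the `adjudicate` function and its escalation behavior are hypothetical, not the paper's implementation.

```python
import hashlib
from collections import Counter

def adjudicate(responses):
    """Hash each heterogeneous executor's response and commit a result only
    if a strict majority of executors agree on the same digest; otherwise
    return None so the caller can escalate or reschedule executors."""
    digests = [hashlib.sha256(r.encode()).hexdigest() for r in responses]
    digest, votes = Counter(digests).most_common(1)[0]
    if votes * 2 > len(responses):
        return responses[digests.index(digest)]
    return None  # no strict majority: do not commit the state

# Five executors, one compromised: the tampered output is outvoted,
# so the divergent executor is masked at the execution layer.
result = adjudicate(["OK:v1", "OK:v1", "OK:v1", "OK:v1", "PWNED"])
```

This mirrors the division of labor described in the abstract: Raft only guarantees crash-fault-tolerant agreement on the log, while heterogeneous redundancy plus majority voting masks a minority of misbehaving executors.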
(This article belongs to the Section Cybersecurity)

36 pages, 630 KB  
Article
Semantic Communication Unlearning: A Variational Information Bottleneck Approach for Backdoor Defense in Wireless Systems
by Sümeye Nur Karahan, Merve Güllü, Mustafa Serdar Osmanca and Necaattin Barışçı
Future Internet 2026, 18(1), 17; https://doi.org/10.3390/fi18010017 - 28 Dec 2025
Viewed by 547
Abstract
Semantic communication systems leverage deep neural networks to extract and transmit essential information, achieving superior performance in bandwidth-constrained wireless environments. However, their vulnerability to backdoor attacks poses critical security threats, where adversaries can inject malicious triggers during training to manipulate system behavior. This paper introduces Selective Communication Unlearning (SCU), a novel defense mechanism based on Variational Information Bottleneck (VIB) principles. SCU employs a two-stage approach: (1) joint unlearning to remove backdoor knowledge from both encoder and decoder while preserving legitimate data representations, and (2) contrastive compensation to maximize feature separation between poisoned and clean samples. Extensive experiments on the RML2016.10a wireless signal dataset demonstrate that SCU achieves 629.5 ± 191.2% backdoor mitigation (5-seed average; 95% CI: [364.1%, 895.0%]), with peak performance of 1486% under optimal conditions, while maintaining only 11.5% clean performance degradation. This represents an order-of-magnitude improvement over detection-based defenses and fundamentally outperforms existing unlearning approaches that achieve near-zero or negative mitigation. We validate SCU across seven signal processing domains, four adaptive backdoor types, and varying SNR conditions, demonstrating unprecedented robustness and generalizability. The framework achieves a 243 s unlearning time, making it practical for resource-constrained edge deployments in 6G networks. Full article
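The Variational Information Bottleneck principle underlying SCU can be shown as a worked sketch. This is a generic VIB objective, not SCU's code: the encoder is assumed to output a diagonal Gaussian posterior q(z|x) = N(mu, sigma²), penalized by its KL divergence from a standard normal prior; the function names and the `beta` weight are illustrative.

```python
import math

def vib_kl(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian:
    0.5 * sum(mu_i^2 + sigma_i^2 - log(sigma_i^2) - 1)."""
    return 0.5 * sum(m * m + s * s - math.log(s * s) - 1.0
                     for m, s in zip(mu, sigma))

def vib_objective(task_loss, mu, sigma, beta=1e-3):
    """Total loss = task loss + beta * compression penalty; larger beta
    squeezes more nuisance (e.g. trigger) information out of z."""
    return task_loss + beta * vib_kl(mu, sigma)

# A posterior identical to the prior incurs zero penalty...
zero_penalty = vib_kl([0.0, 0.0], [1.0, 1.0])
# ...while any deviation adds a positive bottleneck term to the loss.
loss = vib_objective(0.42, [0.5, -0.3], [0.8, 1.2])
```

Intuitively, the bottleneck term is what lets an unlearning defense discard trigger-specific features: information not needed for the legitimate task is exactly what the KL penalty compresses away.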
(This article belongs to the Special Issue Future Industrial Networks: Technologies, Algorithms, and Protocols)
