Search Results (482)

Search Parameters:
Keywords = adversarial attack generation

30 pages, 2687 KiB  
Article
A Multimodal Framework for Advanced Cybersecurity Threat Detection Using GAN-Driven Data Synthesis
by Nikolaos Peppes, Emmanouil Daskalakis, Theodoros Alexakis and Evgenia Adamopoulou
Appl. Sci. 2025, 15(15), 8730; https://doi.org/10.3390/app15158730 - 7 Aug 2025
Abstract
Cybersecurity threats are becoming increasingly sophisticated, frequent, and diverse, posing a major risk to critical infrastructure, public trust, and digital economies. Traditional intrusion detection systems often struggle to detect novel or rare attack types, particularly when data availability is limited or heterogeneous. This study addresses these challenges by proposing a unified, multimodal threat detection framework that combines synthetic data generation through Generative Adversarial Networks (GANs), advanced ensemble learning, and transfer learning techniques. The research objective is to enhance detection accuracy and resilience against zero-day, botnet, and image-based malware attacks by integrating multiple data modalities, including structured network logs and malware binaries, within a scalable and flexible pipeline. The proposed system features a dual-branch architecture: one branch uses a CNN with transfer learning for image-based malware classification, and the other employs a soft-voting ensemble classifier for tabular intrusion detection, both trained on GAN-augmented datasets. Experimental results demonstrate significant improvements in detection performance and false positive reduction, especially when multimodal outputs are fused using the proposed confidence-weighted strategy. The findings highlight the framework's adaptability and practical applicability in real-world intrusion detection and response systems.
(This article belongs to the Special Issue Data Mining and Machine Learning in Cybersecurity)
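
The confidence-weighted fusion strategy is easy to illustrate. Below is a minimal sketch, assuming each branch emits a per-class probability vector and taking a branch's confidence as its maximum class probability; the names p_img and p_tab and the exact weighting rule are illustrative assumptions, not the paper's published formula.

```python
# Hedged sketch: confidence-weighted late fusion of two detector branches.
# p_img / p_tab are hypothetical per-class probability vectors from the CNN
# (image) branch and the ensemble (tabular) branch.
import numpy as np

def fuse(p_img: np.ndarray, p_tab: np.ndarray) -> np.ndarray:
    """Weight each branch by its confidence (max class probability)."""
    w_img, w_tab = p_img.max(), p_tab.max()
    fused = (w_img * p_img + w_tab * p_tab) / (w_img + w_tab)
    return fused / fused.sum()   # renormalize to a distribution

p_img = np.array([0.10, 0.85, 0.05])   # image branch is confident in class 1
p_tab = np.array([0.40, 0.35, 0.25])   # tabular branch is less certain
print(fuse(p_img, p_tab))              # fused output favors the confident branch
```
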
24 pages, 1993 KiB  
Article
Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models
by Sharaf Rashid, Edson Bollis, Lucas Pellicer, Darian Rabbani, Rafael Palacios, Aneesh Gupta and Amar Gupta
Mach. Learn. Knowl. Extr. 2025, 7(3), 77; https://doi.org/10.3390/make7030077 - 6 Aug 2025
Abstract
Generative Adversarial Networks (GANs) using Long Short-Term Memory (LSTM) provide a computationally cheaper approach to text generation than large language models (LLMs). The low hardware barrier to training GANs poses a threat because it means more bad actors may use them to mass-produce prompt attack messages against LLM systems. To better understand this threat, we train two well-known GAN architectures, SeqGAN and RelGAN, on prompt attack messages. For each architecture, we evaluate the generated prompt attack messages, comparing results with each other, with attacks generated by another computationally cheap approach, a 1-billion-parameter Llama 3.2 small language model (SLM), and with messages from the original dataset. This evaluation suggests that GAN architectures like SeqGAN and RelGAN could be used in conjunction with SLMs to readily generate malicious prompts that pose new threats to LLM-based systems such as chatbots. Analyzing the effectiveness of state-of-the-art defenses against prompt attacks, we also find that GAN-generated attacks can deceive most of these defenses with varying degrees of success, with the exception of Meta's PromptGuard. Further, we suggest an improvement to prompt attack defenses based on analysis of the language quality of the prompts, which we found to be the weakest point of GAN-generated messages.
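
To make the low-hardware-barrier point concrete, here is a minimal sketch of the kind of LSTM generator that SeqGAN and RelGAN build on, sampling tokens autoregressively. The vocabulary size, layer dimensions, and the omission of the discriminator and adversarial training step are simplifying assumptions.

```python
# Hedged sketch: a SeqGAN-style LSTM generator sampling tokens one at a time.
# Untrained here; the point is that the whole loop runs on modest hardware.
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def sample(self, max_len=20, start_token=0):
        tok, state, out = torch.tensor([[start_token]]), None, []
        for _ in range(max_len):
            h, state = self.lstm(self.embed(tok), state)
            probs = torch.softmax(self.head(h[:, -1]), dim=-1)
            tok = torch.multinomial(probs, 1)   # stochastic next token
            out.append(tok.item())
        return out

print(LSTMGenerator().sample())   # random token ids until trained
```
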
28 pages, 1874 KiB  
Article
Lexicon-Based Random Substitute and Word-Variant Voting Models for Detecting Textual Adversarial Attacks
by Tarik El Lel, Mominul Ahsan and Majid Latifi
Computers 2025, 14(8), 315; https://doi.org/10.3390/computers14080315 - 2 Aug 2025
Abstract
Adversarial attacks in Natural Language Processing (NLP) present a critical challenge, particularly in sentiment analysis, where subtle input modifications can significantly alter model predictions. In search of more robust defenses against adversarial attacks on sentiment analysis, this work introduces two novel defense mechanisms: the Lexicon-Based Random Substitute Model (LRSM) and the Word-Variant Voting Model (WVVM). LRSM employs randomized substitutions from a dataset-specific lexicon to generate diverse input variations, disrupting adversarial strategies by introducing unpredictability. Unlike traditional defenses requiring synonym dictionaries or precomputed semantic relationships, LRSM directly substitutes words with random lexicon alternatives, reducing overhead while maintaining robustness. Notably, LRSM not only neutralizes adversarial perturbations but occasionally surpasses the original accuracy by correcting inherent model misclassifications. Building on LRSM, WVVM integrates LRSM, Frequency-Guided Word Substitution (FGWS), and Synonym Random Substitution and Voting (RS&V) in an ensemble framework that adaptively combines their outputs. Logistic Regression (LR) emerged as the optimal ensemble configuration, leveraging its regularization parameters to balance the contributions of the individual defenses. WVVM consistently outperformed the standalone defenses, demonstrating superior restored accuracy and F1 scores across adversarial scenarios. The proposed defenses were evaluated on two well-known sentiment analysis benchmarks: the IMDB Sentiment Dataset and the Yelp Polarity Dataset. The IMDB dataset, comprising 50,000 labeled movie reviews, and the Yelp Polarity dataset, containing labeled business reviews, provided diverse linguistic challenges for assessing adversarial robustness. Both datasets were tested using 4000 adversarial examples generated by established attacks, including Probability Weighted Word Saliency, TextFooler, and BERT-based Adversarial Examples. WVVM and LRSM demonstrated superior performance in restoring accuracy and F1 scores across both datasets, with WVVM excelling through its ensemble learning framework. LRSM improved restored accuracy from 75.66% to 83.7% compared to the second-best individual model, RS&V, while the Support Vector Classifier WVVM variant further improved restored accuracy to 93.17%. Logistic Regression WVVM achieved an F1 score of 86.26%, compared to 76.80% for RS&V. These findings establish LRSM and WVVM as robust frameworks for defending against adversarial text attacks in sentiment analysis.
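
The LRSM idea can be sketched in a few lines: randomly substitute words with lexicon entries, classify each variant, and majority-vote. The tiny lexicon, substitution rate, and the classify stand-in below are hypothetical; the paper uses a dataset-specific lexicon and a trained sentiment model.

```python
# Hedged sketch of LRSM-style randomized substitution plus voting.
import random
from collections import Counter

LEXICON = ["good", "bad", "plot", "acting", "film", "boring", "great"]

def perturb(tokens, rate=0.2):
    # replace each token with a random lexicon word with probability `rate`
    return [random.choice(LEXICON) if random.random() < rate else t for t in tokens]

def vote_predict(text, classify, n_variants=11):
    tokens = text.split()
    votes = Counter(classify(" ".join(perturb(tokens))) for _ in range(n_variants))
    return votes.most_common(1)[0][0]   # label that survives the randomization

# toy classifier keyed on a single "trigger" word an attacker might exploit
classify = lambda s: "neg" if "boring" in s else "pos"
print(vote_predict("the film was boring but the acting was great", classify))
```

The randomness means an adversarial perturbation tuned to one exact input rarely survives all voted variants.
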
17 pages, 1027 KiB  
Article
AI-Driven Security for Blockchain-Based Smart Contracts: A GAN-Assisted Deep Learning Approach to Malware Detection
by Imad Bourian, Lahcen Hassine and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 53; https://doi.org/10.3390/jcp5030053 - 1 Aug 2025
Abstract
Blockchain technology has been growing rapidly in recent years, with Ethereum smart contracts playing an important role in securing decentralized application systems. However, these smart contracts are also susceptible to a large number of vulnerabilities, which pose significant threats to intelligent systems and IoT applications, leading to data breaches and financial losses. Traditional detection techniques, such as manual analysis and static automated tools, suffer from high false positive rates and undetected security vulnerabilities. To address these problems, this paper proposes an Artificial Intelligence (AI)-based security framework that integrates Generative Adversarial Network (GAN)-based feature selection and deep learning techniques to classify and detect malware attacks on smart contract execution in decentralized blockchain networks. After an exhaustive pre-processing phase yielding a dataset of 40,000 malware and benign samples, the proposed model is evaluated and compared with related studies on a number of performance metrics, including training accuracy, training loss, and classification metrics (accuracy, precision, recall, and F1-score). The combined approach achieved a remarkable accuracy of 97.6%, demonstrating its effectiveness in detecting malware and protecting blockchain systems.
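
The listing does not spell out how the GAN performs feature selection, so the sketch below shows one plausible reading: score each tabular feature by how much permuting it shifts a trained discriminator's real/fake output (permutation importance). The toy discriminator and the top-k rule are assumptions.

```python
# Hedged sketch: permutation importance against a GAN discriminator's score,
# one possible interpretation of "GAN-based feature selection".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # toy "real" samples
# toy discriminator: sensitive only to features 0 and 3
disc = lambda X: 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 3])))

base = disc(X).mean()
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])         # break feature j's signal
    importance.append(abs(disc(Xp).mean() - base))

keep = np.argsort(importance)[::-1][:4]          # keep the 4 most influential
print("selected features:", sorted(keep.tolist()))
```
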
16 pages, 2174 KiB  
Article
TwinFedPot: Honeypot Intelligence Distillation into Digital Twin for Persistent Smart Traffic Security
by Yesin Sahraoui, Abdessalam Mohammed Hadjkouider, Chaker Abdelaziz Kerrache and Carlos T. Calafate
Sensors 2025, 25(15), 4725; https://doi.org/10.3390/s25154725 - 31 Jul 2025
Abstract
The integration of digital twins (DTs) with intelligent traffic systems (ITSs) holds strong potential for improving real-time management in smart cities. However, securing digital twins remains a significant challenge due to the dynamic and adversarial nature of cyber–physical environments. In this work, we propose TwinFedPot, an innovative digital twin-based security architecture that combines honeypot-driven data collection with Zero-Shot Learning (ZSL) for robust and adaptive cyber threat detection without requiring prior attack samples. The framework leverages Inverse Federated Distillation (IFD) to train the DT server: edge-deployed honeypots generate semantic predictions of anomalous behavior and upload soft logits instead of raw data. Unlike conventional federated approaches, TwinFedPot reverses the typical knowledge flow by distilling the honeypots' collective intelligence into a central teacher model hosted on the DT. This inversion allows the system to learn generalized attack patterns from only limited data, while preserving privacy and enhancing robustness. Experimental results demonstrate significant improvements in accuracy and F1-score, establishing TwinFedPot as a scalable and effective defense solution for smart traffic infrastructures.
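
The soft-logit flow of IFD can be sketched compactly: honeypots upload per-sample logits on a shared probe set, and the DT server distills their average into a teacher model with a KL loss. The model shape, the shared unlabeled probe batch, and single-round training are assumptions, not TwinFedPot's exact protocol.

```python
# Hedged sketch: distilling averaged honeypot logits into a server-side teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)

probe = torch.randn(64, 16)                                # shared probe inputs
honeypot_logits = [torch.randn(64, 4) for _ in range(3)]   # uploaded soft logits
target = torch.stack(honeypot_logits).mean(0)              # aggregated edge knowledge

for _ in range(100):                                       # distill into the DT teacher
    opt.zero_grad()
    loss = F.kl_div(F.log_softmax(teacher(probe), dim=1),
                    F.softmax(target, dim=1), reduction="batchmean")
    loss.backward()
    opt.step()
print(f"distillation loss: {loss.item():.4f}")
```

Only logits cross the network, which is the privacy-preserving property the abstract emphasizes.
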
26 pages, 2653 KiB  
Article
Attacker Attribution in Multi-Step and Multi-Adversarial Network Attacks Using Transformer-Based Approach
by Romina Torres and Ana García
Appl. Sci. 2025, 15(15), 8476; https://doi.org/10.3390/app15158476 - 30 Jul 2025
Abstract
Recent studies on network intrusion detection using deep learning primarily focus on detecting attacks or classifying attack types, but they often overlook the challenge of attributing each attack to its specific source among many potential adversaries (multi-adversary attribution). This is a critical and underexplored issue in cybersecurity. In this study, we address the problem of attacker attribution in complex, multi-step network attack (MSNA) environments, aiming to identify the attacker responsible (e.g., by IP address) for each sequence of security alerts, rather than merely detecting the presence or type of attack. We propose a deep learning approach based on Transformer encoders to classify sequences of network alerts and attribute them to specific attackers among many candidates. Our pipeline includes data preprocessing, exploratory analysis, and robust training/validation using stratified splits and 5-fold cross-validation, all applied to real-world multi-step attack datasets from capture-the-flag (CTF) competitions. We compare the Transformer-based approach with a multilayer perceptron (MLP) baseline to quantify the benefits of the more advanced architecture. Experiments on this challenging dataset demonstrate that our Transformer model achieves near-perfect accuracy (99.98%) and F1-scores (macro and weighted ≈ 99%) in attack attribution, significantly outperforming the MLP baseline (accuracy 80.62%, macro F1 65.05%, and weighted F1 80.48%). The Transformer generalizes robustly across all attacker classes, including those with few samples, as evidenced by per-class metrics and confusion matrices. Our results show that Transformer-based models are highly effective for multi-adversary attack attribution in MSNA, a scenario largely unaddressed in the previous intrusion detection systems (IDS) literature. The adoption of advanced architectures and rigorous validation strategies is essential for reliable attribution in complex and imbalanced environments.
(This article belongs to the Special Issue Application of Deep Learning for Cybersecurity)
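
A minimal sketch of the attribution model follows: a Transformer encoder maps a sequence of alert feature vectors to a logit per candidate attacker. Feature dimension, pooling choice, and the number of attacker classes are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch: Transformer encoder over alert sequences for attacker attribution.
import torch
import torch.nn as nn

class AlertAttributor(nn.Module):
    def __init__(self, feat_dim=32, n_attackers=20, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_attackers)

    def forward(self, x):                 # x: (batch, seq_len, feat_dim)
        h = self.encoder(x).mean(dim=1)   # mean-pool over the alert sequence
        return self.head(h)               # logits over candidate attackers

logits = AlertAttributor()(torch.randn(8, 50, 32))
print(logits.argmax(dim=1))               # predicted attacker id per sequence
```
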
16 pages, 1550 KiB  
Article
Understanding and Detecting Adversarial Examples in IoT Networks: A White-Box Analysis with Autoencoders
by Wafi Danesh, Srinivas Rahul Sapireddy and Mostafizur Rahman
Electronics 2025, 14(15), 3015; https://doi.org/10.3390/electronics14153015 - 29 Jul 2025
Abstract
Novel networking paradigms such as the Internet of Things (IoT) have expanded in usage and deployment across various application domains. Consequently, previously unseen critical security vulnerabilities, such as zero-day attacks, have emerged in these deployments. The design of intrusion detection systems for IoT networks is often challenged by a lack of labeled data, which complicates the development of robust defenses against adversarial attacks. Deep learning-based network intrusion detection systems (NIDS) have been used to counteract such emerging security vulnerabilities. However, the deep learning models used in these NIDS are vulnerable to adversarial examples: samples specifically engineered for a particular deep learning model, developed through minimal perturbation of network packet features and intended to cause misclassification. Such examples can bypass NIDS or cause the rejection of regular network traffic. Research on adversarial example detection has yielded several prominent methods; however, most of them involve computationally expensive retraining steps and require access to labeled data, which is often lacking in IoT network deployments. In this paper, we propose an unsupervised method for detecting adversarial examples that performs early detection based on the intrinsic characteristics of the deep learning model. Our proposed method requires neither computationally expensive retraining nor extra hardware overhead. We first perform adversarial example generation on a deep learning model using autoencoders, then detect the adversarial examples using the intrinsic characteristics of the model's layers. We also analyze the robustness of our detection method against further compromise by an attacker, who can attempt to bypass the mechanism using low-magnitude log-normal Gaussian noise. We tested our approach on the Kitsune datasets, which are state-of-the-art datasets obtained from deployed IoT network scenarios. Our experimental results show an average adversarial example generation time of 0.337 s and an average detection rate of almost 100%. The robustness analysis reveals a reduction of almost 100% in adversarial example detection after such a compromise.
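
The paper's detector relies on intrinsic layer characteristics; one simple stand-in for that idea, sketched below under stated assumptions, flags inputs whose hidden-activation norm is a z-score outlier relative to clean-traffic statistics. The single-layer choice and the threshold are assumptions, not the authors' exact criterion.

```python
# Hedged sketch: unsupervised adversarial-example flagging from hidden-layer
# activation statistics gathered on clean traffic.
import numpy as np

rng = np.random.default_rng(1)
clean_acts = rng.normal(0, 1, size=(1000, 64))   # hidden activations, clean traffic
norms = np.linalg.norm(clean_acts, axis=1)
mu, sd = norms.mean(), norms.std()

def is_adversarial(act: np.ndarray, z_thresh: float = 3.0) -> bool:
    z = abs(np.linalg.norm(act) - mu) / sd
    return z > z_thresh                          # far from clean statistics

print(is_adversarial(rng.normal(0, 1, 64)))      # typical input -> False
print(is_adversarial(rng.normal(0, 3, 64)))      # perturbed input -> likely True
```
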
24 pages, 1530 KiB  
Article
A Lightweight Robust Training Method for Defending Model Poisoning Attacks in Federated Learning Assisted UAV Networks
by Lucheng Chen, Weiwei Zhai, Xiangfeng Bu, Ming Sun and Chenglin Zhu
Drones 2025, 9(8), 528; https://doi.org/10.3390/drones9080528 - 28 Jul 2025
Abstract
The integration of unmanned aerial vehicles (UAVs) into next-generation wireless networks greatly enhances the flexibility and efficiency of communication and distributed computation for ground mobile devices. Federated learning (FL) provides a privacy-preserving paradigm for device collaboration but remains highly vulnerable to poisoning attacks and is further challenged by the resource constraints and heterogeneous data common to UAV-assisted systems. Existing robust aggregation and anomaly detection methods often lose efficiency and reliability under these realistic adversarial and non-IID settings. To bridge these gaps, we propose FedULite, a lightweight and robust federated learning framework specifically designed for UAV-assisted environments. FedULite features unsupervised local representation learning optimized for unlabeled, non-IID data. Moreover, it uses a robust, adaptive server-side aggregation strategy with cosine similarity-based update filtering and dimension-wise adaptive learning rates to neutralize sophisticated data and model poisoning attacks. Extensive experiments across diverse datasets and adversarial scenarios demonstrate that FedULite reduces the attack success rate (ASR) from over 90% in undefended scenarios to below 5%, while keeping the main-task accuracy loss within 2%. Moreover, it introduces negligible computational overhead compared to standard FedAvg, with approximately 7% additional training time.
(This article belongs to the Special Issue IoT-Enabled UAV Networks for Secure Communication)
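
The cosine-similarity filter at the server is straightforward to sketch: drop client updates that point away from the consensus direction, then average the rest. The mean-update reference and the zero threshold are assumptions about FedULite's exact rule, and the dimension-wise adaptive learning rates are omitted.

```python
# Hedged sketch: cosine-similarity-based filtering of client updates.
import numpy as np

def robust_aggregate(updates: list, thresh: float = 0.0) -> np.ndarray:
    ref = np.mean(updates, axis=0)                    # reference direction
    cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    kept = [u for u in updates if cos(u, ref) > thresh]
    return np.mean(kept, axis=0)                      # average the trusted updates

rng = np.random.default_rng(2)
honest = [rng.normal(0.5, 0.1, 10) for _ in range(8)]
poisoned = [-5.0 * honest[0]]                         # sign-flipped malicious update
print(np.round(robust_aggregate(honest + poisoned), 2))  # close to the honest mean
```
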
21 pages, 2789 KiB  
Article
BIM-Based Adversarial Attacks Against Speech Deepfake Detectors
by Wendy Edda Wang, Davide Salvi, Viola Negroni, Daniele Ugo Leonzio, Paolo Bestagini and Stefano Tubaro
Electronics 2025, 14(15), 2967; https://doi.org/10.3390/electronics14152967 - 24 Jul 2025
Abstract
Automatic Speaker Verification (ASV) systems are increasingly employed to secure access to services and facilities. However, recent advances in speech deepfake generation pose serious threats to their reliability. Modern speech synthesis models can convincingly imitate a target speaker's voice and generate realistic synthetic audio, potentially enabling unauthorized access through ASV systems. To counter these threats, forensic detectors have been developed to distinguish between real and fake speech. Although these models achieve strong performance, their deep learning nature makes them susceptible to adversarial attacks, i.e., carefully crafted, imperceptible perturbations of the audio signal that cause the model to misclassify. In this paper, we explore adversarial attacks targeting speech deepfake detectors. Specifically, we analyze the effectiveness of Basic Iterative Method (BIM) attacks applied in both the time and frequency domains, under white- and black-box conditions. Additionally, we propose an ensemble-based attack strategy designed to target multiple detection models simultaneously. This approach generates adversarial examples with balanced effectiveness across the ensemble, enhancing transferability to unseen models. Our experimental results show that, although crafting universally transferable attacks remains challenging, it is possible to fool state-of-the-art detectors using minimal, imperceptible perturbations, highlighting the need for more robust defenses in speech deepfake detection.
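
BIM itself is well defined: repeat small signed-gradient steps and clip back into an epsilon-ball around the original signal. The sketch below applies it in the time domain to a toy stand-in for a speech deepfake detector; epsilon, step size, and iteration count are illustrative.

```python
# Hedged sketch of the Basic Iterative Method (BIM) on a toy audio classifier.
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, eps=0.01, alpha=0.002, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # FGSM step
        x_adv = torch.clamp(x_adv, x - eps, x + eps)   # stay in the eps-ball
    return x_adv.detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 2))
x = torch.randn(1, 16000)                              # 1 s of audio @ 16 kHz
y = torch.tensor([1])                                  # true label: "fake"
x_adv = bim_attack(model, x, y)
print((x_adv - x).abs().max())                         # perturbation bounded by eps
```
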
38 pages, 6851 KiB  
Article
FGFNet: Fourier Gated Feature-Fusion Network with Fractal Dimension Estimation for Robust Palm-Vein Spoof Detection
by Seung Gu Kim, Jung Soo Kim and Kang Ryoung Park
Fractal Fract. 2025, 9(8), 478; https://doi.org/10.3390/fractalfract9080478 - 22 Jul 2025
Abstract
The palm-vein recognition system has garnered attention as a biometric technology due to its resilience to external environmental factors, protection of personal privacy, and low risk of external exposure. However, with recent advances in deep learning-based generative models for image synthesis, the quality and sophistication of fake images have improved, increasing the security threat posed by counterfeit images. In particular, palm-vein images acquired under near-infrared illumination exhibit low resolution and blurred characteristics, making fake images even harder to detect. Furthermore, spoof detection targeting palm-vein images specifically has not been studied in detail. To address these challenges, this study proposes the Fourier-gated feature-fusion network (FGFNet) as a novel spoof detector for palm-vein recognition systems. The proposed network integrates a masked fast Fourier transform, a map-based gated feature fusion block, and a fast Fourier convolution (FFC) attention block with global contrastive loss to effectively detect the distortion patterns produced by generative models. These components enable efficient extraction of the information needed to determine the authenticity of palm-vein images. In addition, fractal dimension estimation (FDE) was employed for two purposes. In the spoof attack procedure, FDE was used to evaluate how closely the generated fake images approximate the structural complexity of real palm-vein images, confirming that the generative model produced highly realistic spoof samples. In the spoof detection procedure, the FDE results further demonstrated that FGFNet effectively distinguishes real from fake images, validating its ability to capture the subtle structural differences induced by generative manipulation. To evaluate the spoof detection performance of FGFNet, experiments were conducted using real palm-vein images from two publicly available palm-vein datasets, VERA Spoofing PalmVein (VERA) and PLUSVein-contactless (PLUS), as well as fake palm-vein images generated from these datasets using a cycle-consistent generative adversarial network. The results showed that FGFNet achieved an average classification error rate of 0.3% on both the VERA and PLUS datasets, outperforming existing state-of-the-art spoof detection methods.
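
Fractal dimension estimation lends itself to a compact illustration. Below is a standard box-counting estimator on a binarized image: count occupied boxes at several scales and fit the log-log slope. The listing does not specify the paper's exact FDE procedure, so the binarization rule and scales are assumptions.

```python
# Hedged sketch: box-counting fractal dimension of a binarized vein image.
import numpy as np

def box_count_dimension(img: np.ndarray, scales=(2, 4, 8, 16)) -> float:
    binary = img > img.mean()                        # crude vein/background split
    counts = []
    for s in scales:
        h, w = binary.shape[0] // s * s, binary.shape[1] // s * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum()) # boxes containing structure
    # N(s) ~ s^(-D), so D is the slope of log N against log(1/s)
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
print(round(box_count_dimension(rng.random((128, 128))), 2))
```
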
43 pages, 2108 KiB  
Article
FIGS: A Realistic Intrusion-Detection Framework for Highly Imbalanced IoT Environments
by Zeynab Anbiaee, Sajjad Dadkhah and Ali A. Ghorbani
Electronics 2025, 14(14), 2917; https://doi.org/10.3390/electronics14142917 - 21 Jul 2025
Abstract
The rapid growth of Internet of Things (IoT) environments has increased security challenges due to heightened exposure to cyber threats and attacks. A key problem is class imbalance in attack traffic, where critical yet underrepresented attacks are often overlooked by intrusion-detection systems (IDS), compromising reliability. We propose Feature-Importance GAN SMOTE (FIGS), an innovative, realistic intrusion-detection framework designed for IoT environments to address this challenge. Unlike other works that rely only on traditional oversampling methods, FIGS integrates sensitivity-based feature-importance analysis, Generative Adversarial Network (GAN)-based augmentation, a novel imbalance ratio (GIR), and the Synthetic Minority Oversampling Technique (SMOTE) to generate high-quality synthetic data for minority classes. FIGS enhances minority-class detection by focusing on the most important features identified by the sensitivity analysis, while minimizing computational overhead and reducing noise during data generation. Evaluations on the CICIoMT2024 and CICIDS2017 datasets demonstrate that FIGS improves detection accuracy and significantly lowers the false negative rate. FIGS achieved a 17% improvement over the baseline model on the CICIoMT2024 dataset while maintaining performance on the majority classes. The results show that FIGS is a highly effective solution for real-world IoT networks, achieving high detection accuracy across all classes without introducing unnecessary computational overhead.
(This article belongs to the Special Issue Network Security and Cryptography Applications)
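
The SMOTE step at FIGS's core interpolates between a minority sample and one of its k nearest minority neighbors. The sketch below shows only that step, omitting the GAN branch, the GIR ratio, and the sensitivity-based feature weighting the paper layers on top.

```python
# Hedged sketch: minimal SMOTE interpolation for a minority attack class.
import numpy as np

def smote(X_min: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]                # k nearest minority neighbors
        j = rng.choice(nbrs)
        lam = rng.random()                           # interpolation coefficient
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

X_min = np.random.default_rng(4).normal(size=(20, 6))   # rare-attack samples
print(smote(X_min, n_new=10).shape)                      # (10, 6) synthetic rows
```
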
55 pages, 6352 KiB  
Review
A Deep Learning Framework for Enhanced Detection of Polymorphic Ransomware
by Mazen Gazzan, Bader Alobaywi, Mohammed Almutairi and Frederick T. Sheldon
Future Internet 2025, 17(7), 311; https://doi.org/10.3390/fi17070311 - 18 Jul 2025
Abstract
Ransomware, a significant cybersecurity threat, encrypts files and causes substantial damage, making early detection crucial yet challenging. This paper introduces a novel multi-phase framework for early ransomware detection, designed to enhance accuracy and minimize false positives. The framework addresses the limitations of existing methods by integrating operational data with situational and threat intelligence, enabling it to adapt dynamically to the evolving ransomware landscape. Key innovations include (1) data augmentation using a Bi-Gradual Minimax Generative Adversarial Network (BGM-GAN) to generate synthetic ransomware attack patterns, addressing data insufficiency; (2) Incremental Mutual Information Selection (IMIS) for dynamically selecting relevant features, adapting to evolving ransomware behaviors and reducing computational overhead; and (3) a Deep Belief Network (DBN) detection architecture, trained on the augmented data and optimized with Uncertainty-Aware Dynamic Early Stopping (UA-DES) to prevent overfitting. The model demonstrates a 4% improvement in detection accuracy (from 90% to 94%) through synthetic data generation and reduces false positives from 15.4% to 14%. The IMIS technique further increases accuracy to 96% while reducing false positives. The UA-DES optimization boosts accuracy to 98.6% and lowers false positives to 10%. Overall, the framework effectively addresses the challenges posed by evolving ransomware, significantly enhancing detection accuracy and reliability.
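
Mutual-information-based feature selection can be illustrated briefly. The sketch below ranks features by their MI with the label and keeps the top k; a genuine incremental scheme like IMIS would update the selected set over time and penalize redundancy, which this simplification omits.

```python
# Hedged sketch: top-k feature selection by mutual information with the label.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 10))
y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(int)    # label driven by features 2 and 7

mi = mutual_info_classif(X, y, random_state=0)   # MI of each feature with y
selected = list(np.argsort(mi)[::-1][:3])        # keep the top-3 features
print("selected features:", selected)            # expect 2 and 7 near the top
```
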
40 pages, 2206 KiB  
Review
Toward Generative AI-Based Intrusion Detection Systems for the Internet of Vehicles (IoV)
by Isra Mahmoudi, Djallel Eddine Boubiche, Samir Athmani, Homero Toral-Cruz and Freddy I. Chan-Puc
Future Internet 2025, 17(7), 310; https://doi.org/10.3390/fi17070310 - 17 Jul 2025
Cited by 1
Abstract
The increasing complexity and scale of Internet of Vehicles (IoV) networks pose significant security challenges, necessitating the development of advanced intrusion detection systems (IDS). Traditional IDS approaches, such as rule-based and signature-based methods, are often inadequate for detecting novel and sophisticated attacks due to their limited adaptability and dependence on predefined patterns. To overcome these limitations, machine learning (ML)- and deep learning (DL)-based IDS have been introduced, offering better generalization and the ability to learn from data. However, these models can still struggle with zero-day attacks, require large volumes of labeled data, and may be vulnerable to adversarial examples. In response to these challenges, Generative AI-based IDS, leveraging models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformers, have emerged as promising solutions offering enhanced adaptability, synthetic data generation for training, and improved detection of evolving threats. This survey provides an overview of IoV architecture, vulnerabilities, and classical IDS techniques, focusing on the growing role of Generative AI in strengthening IoV security. It discusses the current landscape, highlights key challenges, and outlines future research directions aimed at building more resilient and intelligent IDS for the IoV ecosystem.

25 pages, 4668 KiB  
Article
An Asynchronous Federated Learning Aggregation Method Based on Adaptive Differential Privacy
by Jiawen Wu, Geming Xia, Hongwei Huang, Chaodong Yu, Yuze Zhang and Hongfeng Li
Electronics 2025, 14(14), 2847; https://doi.org/10.3390/electronics14142847 - 16 Jul 2025
Abstract
Federated learning is a distributed machine learning technique that allows multiple devices to collaboratively learn a shared model without exchanging data, improving model accuracy while protecting user privacy. However, traditional federated learning is vulnerable to attacks from generative adversarial networks (GANs). Differential privacy, a newer privacy protection method, enhances privacy at the cost of some data accuracy. To optimize the privacy budget allocation of traditional differential privacy, we propose ADP-FL, a differential privacy method that dynamically adjusts the privacy budget based on Newton's Law of Cooling. While maintaining the overall privacy budget, it dynamically tunes adaptive parameters to improve training accuracy. Additionally, we propose an asynchronous federated learning aggregation scheme that combines the privacy budget with data freshness, reducing the impact of differential privacy on accuracy. We conducted extensive experiments on differential privacy algorithms based on the Gaussian and Laplace mechanisms. The results show that, under the same privacy budget, our algorithm achieves higher accuracy and lower communication overhead than the baseline algorithm.
(This article belongs to the Special Issue Emerging Trends in Federated Learning and Network Security)
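
A budget schedule shaped like Newton's Law of Cooling is easy to sketch: per-round epsilon decays exponentially toward a floor, normalized so the rounds sum to the total budget. The constants, the normalization, and the Gaussian-mechanism noise scale below are assumptions, not ADP-FL's published schedule.

```python
# Hedged sketch: a cooling-curve privacy-budget schedule plus Gaussian noise scale.
import numpy as np

def cooling_budgets(total_eps=8.0, rounds=20, k=0.15, floor=0.1):
    t = np.arange(rounds)
    raw = floor + (1.0 - floor) * np.exp(-k * t)     # cooling-curve shape
    return total_eps * raw / raw.sum()               # normalize to the total budget

eps = cooling_budgets()
print(np.round(eps[:5], 3), "sum:", round(eps.sum(), 3))
# Gaussian mechanism (sensitivity 1, delta = 1e-5): larger eps early means
# less noise on earlier, fresher updates.
sigma = np.sqrt(2 * np.log(1.25 / 1e-5)) / eps
print(np.round(sigma[:5], 2))
```
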
21 pages, 1632 KiB  
Article
Adversarial Hierarchical-Aware Edge Attention Learning Method for Network Intrusion Detection
by Hao Yan, Jianming Li, Lei Du, Binxing Fang, Yan Jia and Zhaoquan Gu
Appl. Sci. 2025, 15(14), 7915; https://doi.org/10.3390/app15147915 - 16 Jul 2025
Abstract
The rapid development of information technology has made cyberspace security an increasingly critical issue. Network intrusion detection methods are practical approaches for protecting network systems from cyber attacks. However, cyberspace security threats have topological dependencies and fine-grained attack semantics, and existing graph-based approaches either underestimate edge-level features or fail to balance detection accuracy with adversarial robustness. To handle these problems, we propose a novel graph neural network-based method for network intrusion detection called the adversarial hierarchical-aware edge attention learning method (AH-EAT). It leverages the natural graph structure of computer networks to achieve robust, multi-grained intrusion detection. Specifically, AH-EAT includes three main modules: an edge-based graph attention embedding module, a hierarchical multi-grained detection module, and an adversarial training module. In the first module, we apply graph attention networks to aggregate node and edge features according to their importance, effectively capturing the network's key topological information. In the second module, we first perform coarse-grained detection to distinguish malicious flows from benign ones, and then fine-grained classification to identify specific attack types. In the third module, we use projected gradient descent to generate adversarial perturbations on network flow features during training, enhancing the model's robustness to evasion attacks. Experimental results on four benchmark intrusion detection datasets show that AH-EAT achieves 90.73% average coarse-grained accuracy and a 1.45% attack success rate (ASR) on CIC-IDS2018 under adversarial attacks, outperforming state-of-the-art methods in both detection accuracy and robustness.
(This article belongs to the Special Issue Cyberspace Security Technology in Computer Science)
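
The adversarial training module's PGD step is standard and can be sketched directly: iteratively perturb flow features along the loss gradient's sign, projecting back into an epsilon-ball. The flow-feature classifier below is a stand-in for AH-EAT's edge-attention model; epsilon, alpha, and the step count are illustrative.

```python
# Hedged sketch: PGD perturbations on flow features for adversarial training.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=0.1, alpha=0.02, steps=7):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)  # project to eps-ball
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

model = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 5))
x, y = torch.randn(32, 20), torch.randint(0, 5, (32,))
delta = pgd_perturb(model, x, y)
loss_adv = F.cross_entropy(model(x + delta), y)      # train on perturbed flows
loss_adv.backward()                                  # ...then an optimizer step
print(delta.abs().max())                             # bounded by eps
```
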