Search Results (1,041)

Search Parameters:
Keywords = adversary attack

28 pages, 1874 KiB  
Article
Lexicon-Based Random Substitute and Word-Variant Voting Models for Detecting Textual Adversarial Attacks
by Tarik El Lel, Mominul Ahsan and Majid Latifi
Computers 2025, 14(8), 315; https://doi.org/10.3390/computers14080315 - 2 Aug 2025
Abstract
Adversarial attacks in Natural Language Processing (NLP) present a critical challenge, particularly in sentiment analysis, where subtle input modifications can significantly alter model predictions. In search of more robust defenses against adversarial attacks on sentiment analysis, this work introduces two novel defense mechanisms: the Lexicon-Based Random Substitute Model (LRSM) and the Word-Variant Voting Model (WVVM). LRSM employs randomized substitutions from a dataset-specific lexicon to generate diverse input variations, disrupting adversarial strategies by introducing unpredictability. Unlike traditional defenses requiring synonym dictionaries or precomputed semantic relationships, LRSM directly substitutes words with random lexicon alternatives, reducing overhead while maintaining robustness. Notably, LRSM not only neutralizes adversarial perturbations but occasionally surpasses the original accuracy by correcting inherent model misclassifications. Building on LRSM, WVVM integrates LRSM, Frequency-Guided Word Substitution (FGWS), and Synonym Random Substitution and Voting (RS&V) in an ensemble framework that adaptively combines their outputs. Logistic Regression (LR) emerged as the optimal ensemble configuration, leveraging its regularization parameters to balance the contributions of individual defenses. WVVM consistently outperformed standalone defenses, demonstrating superior restored accuracy and F1 scores across adversarial scenarios. The proposed defenses were evaluated on two well-known sentiment analysis benchmarks: the IMDB Sentiment Dataset and the Yelp Polarity Dataset. The IMDB dataset, comprising 50,000 labeled movie reviews, and the Yelp Polarity dataset, containing labeled business reviews, provided diverse linguistic challenges for assessing adversarial robustness. Both datasets were tested using 4000 adversarial examples generated by established attacks, including Probability Weighted Word Saliency, TextFooler, and BERT-based Adversarial Examples. WVVM and LRSM demonstrated superior performance in restoring accuracy and F1 scores across both datasets, with WVVM excelling through its ensemble learning framework. LRSM improved restored accuracy to 83.7%, compared with 75.66% for the second-best individual model, RS&V, while the Support Vector Classifier WVVM variant further improved restored accuracy to 93.17%. Logistic Regression WVVM achieved an F1 score of 86.26%, compared with 76.80% for RS&V. These findings establish LRSM and WVVM as robust frameworks for defending against adversarial text attacks in sentiment analysis.
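
As a rough illustration of the LRSM idea, the sketch below classifies several randomly substituted variants of an input and majority-votes the predictions; the classifier, lexicon, and parameters are stand-ins, not the paper's implementation.

```python
import random

def lrsm_predict(text, classify, lexicon, n_variants=8, sub_rate=0.2, seed=0):
    """Hypothetical sketch of a lexicon-based random substitute defense:
    classify several randomized variants of the input and majority-vote.
    `classify` maps a string to a label; `lexicon` is a list of words."""
    rng = random.Random(seed)
    words = text.split()
    votes = {}
    for _ in range(n_variants):
        variant = [
            rng.choice(lexicon) if rng.random() < sub_rate else w
            for w in words
        ]
        label = classify(" ".join(variant))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy usage with a stand-in classifier
toy = lambda s: "pos" if s.count("good") >= s.count("bad") else "neg"
print(lrsm_predict("the movie was good not bad", toy, ["fine", "okay", "meh"]))
```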

17 pages, 1027 KiB  
Article
AI-Driven Security for Blockchain-Based Smart Contracts: A GAN-Assisted Deep Learning Approach to Malware Detection
by Imad Bourian, Lahcen Hassine and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 53; https://doi.org/10.3390/jcp5030053 - 1 Aug 2025
Abstract
In the modern era, the use of blockchain technology has been growing rapidly, where Ethereum smart contracts play an important role in securing decentralized application systems. However, these smart contracts are also susceptible to a large number of vulnerabilities, which pose significant threats to intelligent systems and IoT applications, leading to data breaches and financial losses. Traditional detection techniques, such as manual analysis and static automated tools, suffer from high false-positive rates and undetected security vulnerabilities. To address these problems, this paper proposes an Artificial Intelligence (AI)-based security framework that integrates Generative Adversarial Network (GAN)-based feature selection and deep learning techniques to classify and detect malware attacks on smart contract execution in the decentralized blockchain network. After an exhaustive pre-processing phase yielding a dataset of 40,000 malware and benign samples, the proposed model is evaluated and compared with related studies on the basis of a number of performance metrics, including training accuracy, training loss, and classification metrics (accuracy, precision, recall, and F1-score). Our combined approach achieved a remarkable accuracy of 97.6%, demonstrating its effectiveness in detecting malware and protecting blockchain systems.

16 pages, 2174 KiB  
Article
TwinFedPot: Honeypot Intelligence Distillation into Digital Twin for Persistent Smart Traffic Security
by Yesin Sahraoui, Abdessalam Mohammed Hadjkouider, Chaker Abdelaziz Kerrache and Carlos T. Calafate
Sensors 2025, 25(15), 4725; https://doi.org/10.3390/s25154725 - 31 Jul 2025
Abstract
The integration of digital twins (DTs) with intelligent traffic systems (ITSs) holds strong potential for improving real-time management in smart cities. However, securing digital twins remains a significant challenge due to the dynamic and adversarial nature of cyber–physical environments. In this work, we propose TwinFedPot, an innovative digital twin-based security architecture that combines honeypot-driven data collection with Zero-Shot Learning (ZSL) for robust and adaptive cyber threat detection without requiring prior attack samples. The framework leverages Inverse Federated Distillation (IFD) to train the DT server, where edge-deployed honeypots generate semantic predictions of anomalous behavior and upload soft logits instead of raw data. Unlike conventional federated approaches, TwinFedPot reverses the typical knowledge flow by distilling collective intelligence from the honeypots into a central teacher model hosted on the DT. This inversion allows the system to learn generalized attack patterns using only limited data, while preserving privacy and enhancing robustness. Experimental results demonstrate significant improvements in accuracy and F1-score, establishing TwinFedPot as a scalable and effective defense solution for smart traffic infrastructures.
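
The inverse-distillation step can be pictured as follows: honeypot clients upload soft logits for a shared probe set, and the digital-twin server distills their average into a teacher model. This PyTorch sketch uses random stand-in data and an assumed temperature; it is not the paper's training pipeline.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: honeypot clients upload soft logits for a shared probe
# set; the DT server distills their averaged "knowledge" into a teacher model.
torch.manual_seed(0)
probe_x = torch.randn(64, 10)                           # shared probe inputs
client_logits = [torch.randn(64, 2) for _ in range(5)]  # uploaded soft logits
avg_logits = torch.stack(client_logits).mean(dim=0)

teacher = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 2))
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
T = 2.0  # assumed distillation temperature

for step in range(200):
    opt.zero_grad()
    # KL divergence between teacher and aggregated honeypot distributions
    loss = F.kl_div(F.log_softmax(teacher(probe_x) / T, dim=1),
                    F.softmax(avg_logits / T, dim=1),
                    reduction="batchmean") * T * T
    loss.backward()
    opt.step()
```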

22 pages, 5254 KiB  
Article
Exploring Simulation Methods to Counter Cyber-Attacks on the Steering Systems of the Maritime Autonomous Surface Ship (MASS)
by Igor Astrov, Sanja Bauk and Pentti Kujala
J. Mar. Sci. Eng. 2025, 13(8), 1470; https://doi.org/10.3390/jmse13081470 - 31 Jul 2025
Abstract
This paper presents a simulation-based investigation into control strategies for mitigating the consequences of cyber-attacks on the steering systems of Maritime Autonomous Surface Ships (MASS). The study focuses on two simulation experiments conducted within the Simulink/MATLAB environment, utilizing the mathematical model of the catamaran MASS “Nymo” to represent vessel dynamics. Cyber-attacks are modeled as external disturbances affecting the rudder control signal, emulating realistic interference scenarios. To assess control resilience, two configurations are compared during a representative turning maneuver to a specified heading: (1) a Proportional–Integral–Derivative (PID) regulator augmented with a Least Mean Squares (LMS) adaptive filter, and (2) a Nonlinear Autoregressive Moving Average with Exogenous Input (NARMA-L2) neural network regulator. The PID–LMS configuration aims to enhance the disturbance rejection capabilities of the classical controller through adaptive filtering, while the NARMA-L2 approach represents a data-driven, nonlinear control alternative. Simulation results indicate that although the PID–LMS setup demonstrates improved performance over a standalone PID controller in the presence of cyber-induced disturbances, the NARMA-L2 controller exhibits superior adaptability, accuracy, and robustness under adversarial conditions. These findings suggest that neural network-based control offers a promising pathway for developing cyber-resilient steering systems in autonomous maritime vessels.
(This article belongs to the Special Issue Advanced Control Strategies for Autonomous Maritime Systems)
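
A toy version of the PID-plus-LMS configuration can be simulated in a few lines: a PID heading controller's rudder command is corrupted by a sinusoidal "attack", and an LMS filter adaptively cancels the component correlated with a reference signal. Gains, dynamics, and the disturbance model are illustrative assumptions, not the "Nymo" model.

```python
import numpy as np

# Toy sketch: PID heading control with an injected rudder disturbance and a
# standard LMS canceller driven by a reference correlated with the attack.
dt, T = 0.1, 60.0
Kp, Ki, Kd = 1.2, 0.05, 0.8        # assumed PID gains
mu, taps = 0.01, 8                 # LMS step size and filter length
w = np.zeros(taps)
ref_hist = np.zeros(taps)

psi, integ, prev_err = 0.0, 0.0, 0.0
target = np.deg2rad(30)

for tk in np.arange(0.0, T, dt):
    err = target - psi
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    u = Kp * err + Ki * integ + Kd * deriv         # PID rudder command

    attack = 0.3 * np.sin(2 * np.pi * 0.5 * tk)    # injected disturbance
    ref = np.sin(2 * np.pi * 0.5 * tk)             # correlated reference
    ref_hist = np.roll(ref_hist, 1)
    ref_hist[0] = ref

    d = u + attack                 # corrupted command seen at the actuator
    y = w @ ref_hist               # LMS estimate of the disturbance
    e = d - y                      # cleaned command (residual)
    w += mu * e * ref_hist         # standard LMS weight update

    psi += dt * (-0.5 * psi + 0.5 * e)   # toy first-order yaw dynamics

print(f"final heading error: {np.rad2deg(target - psi):.2f} deg")
```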

26 pages, 2653 KiB  
Article
Attacker Attribution in Multi-Step and Multi-Adversarial Network Attacks Using Transformer-Based Approach
by Romina Torres and Ana García
Appl. Sci. 2025, 15(15), 8476; https://doi.org/10.3390/app15158476 - 30 Jul 2025
Abstract
Recent studies on network intrusion detection using deep learning primarily focus on detecting attacks or classifying attack types, but they often overlook the challenge of attributing each attack to its specific source among many potential adversaries (multi-adversary attribution). This is a critical and underexplored issue in cybersecurity. In this study, we address the problem of attacker attribution in complex, multi-step network attack (MSNA) environments, aiming to identify the responsible attacker (e.g., IP address) for each sequence of security alerts, rather than merely detecting the presence or type of attack. We propose a deep learning approach based on Transformer encoders to classify sequences of network alerts and attribute them to specific attackers among many candidates. Our pipeline includes data preprocessing, exploratory analysis, and robust training/validation using stratified splits and 5-fold cross-validation, all applied to real-world multi-step attack datasets from capture-the-flag (CTF) competitions. We compare the Transformer-based approach with a multilayer perceptron (MLP) baseline to quantify the benefits of advanced architectures. Experiments on this challenging dataset demonstrate that our Transformer model achieves near-perfect accuracy (99.98%) and F1-scores (macro and weighted ≈ 99%) in attack attribution, significantly outperforming the MLP baseline (accuracy 80.62%, macro F1 65.05%, and weighted F1 80.48%). The Transformer generalizes robustly across all attacker classes, including those with few samples, as evidenced by per-class metrics and confusion matrices. Our results show that Transformer-based models are highly effective for multi-adversary attack attribution in MSNA, a scenario largely under-addressed in the prior intrusion detection systems (IDS) literature. The adoption of advanced architectures and rigorous validation strategies is essential for reliable attribution in complex and imbalanced environments.
(This article belongs to the Special Issue Application of Deep Learning for Cybersecurity)
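
A minimal sketch of the attribution architecture, assuming alerts are tokenized into integer IDs: a Transformer encoder pools the alert sequence and a linear head scores attacker classes. Dimensions and vocabulary sizes are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a Transformer encoder that classifies a sequence of
# security alerts (each embedded as a token) into one of N attacker classes.
class AlertAttributor(nn.Module):
    def __init__(self, n_alert_types=100, n_attackers=20, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(n_alert_types, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_attackers)

    def forward(self, alert_ids):           # (batch, seq_len) of alert IDs
        h = self.encoder(self.embed(alert_ids))
        return self.head(h.mean(dim=1))     # pool over the sequence

model = AlertAttributor()
logits = model(torch.randint(0, 100, (8, 32)))   # 8 sequences of 32 alerts
print(logits.shape)                              # torch.Size([8, 20])
```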

24 pages, 6025 KiB  
Article
Uniform Manifold Approximation and Projection Filtering and Explainable Artificial Intelligence to Detect Adversarial Machine Learning
by Achmed Samuel Koroma, Sara Narteni, Enrico Cambiaso and Maurizio Mongelli
Information 2025, 16(8), 647; https://doi.org/10.3390/info16080647 - 29 Jul 2025
Abstract
Adversarial machine learning exploits the vulnerabilities of artificial intelligence (AI) models by inducing malicious distortion in input data. Starting from the effect of adversarial methods on the well-known MNIST and CIFAR-10 open datasets, this paper investigates the ability of Uniform Manifold Approximation and Projection (UMAP) to provide useful representations of both legitimate and malicious images, and analyzes the attacks' behavior under various conditions. By enabling the extraction of decision rules and the ranking of important features from classifiers such as decision trees, eXplainable AI (XAI) achieves zero false positives and negatives in detection through very simple if-then rules over UMAP variables. Several examples are reported to highlight attack behavior. All code and data are publicly available, as detailed in the data availability statement, to support reproducibility.
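
The pipeline can be approximated with the umap-learn and scikit-learn APIs: embed the images with UMAP, fit a shallow decision tree on the 2-D coordinates, and print its if-then rules. The data here are random stand-ins, not MNIST or CIFAR-10.

```python
import numpy as np
import umap                                   # umap-learn package
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sketch: embed legitimate and adversarial images with UMAP,
# then fit a small decision tree and read off its if-then rules.
rng = np.random.default_rng(0)
X_legit = rng.normal(0.0, 1.0, (200, 784))    # stand-ins for clean images
X_adv = rng.normal(0.8, 1.2, (200, 784))      # stand-ins for attacked images
X = np.vstack([X_legit, X_adv])
y = np.array([0] * 200 + [1] * 200)

emb = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

tree = DecisionTreeClassifier(max_depth=3).fit(emb, y)
print(export_text(tree, feature_names=["umap_1", "umap_2"]))
```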

16 pages, 1550 KiB  
Article
Understanding and Detecting Adversarial Examples in IoT Networks: A White-Box Analysis with Autoencoders
by Wafi Danesh, Srinivas Rahul Sapireddy and Mostafizur Rahman
Electronics 2025, 14(15), 3015; https://doi.org/10.3390/electronics14153015 - 29 Jul 2025
Abstract
Novel networking paradigms such as the Internet of Things (IoT) have expanded their usage and deployment to various application domains. Consequently, unseen critical security vulnerabilities such as zero-day attacks have emerged in such deployments. The design of intrusion detection systems for IoT networks is often challenged by a lack of labeled data, which complicates the development of robust defenses against adversarial attacks. Deep learning-based network intrusion detection systems (NIDS) have been used to counteract emerging security vulnerabilities. However, the deep learning models used in such NIDS are vulnerable to adversarial examples: samples specifically engineered for a particular deep learning model, developed by minimal perturbation of network packet features, and intended to cause misclassification. Such examples can bypass NIDS or enable the rejection of regular network traffic. Research in the adversarial example detection domain has yielded several prominent methods; however, most of those methods involve computationally expensive retraining steps and require access to labeled data, which are often lacking in IoT network deployments. In this paper, we propose an unsupervised method for detecting adversarial examples that performs early detection based on the intrinsic characteristics of the deep learning model. Our proposed method requires neither computationally expensive retraining nor extra hardware overhead for implementation. We first perform adversarial example generation on a deep learning model using autoencoders. After successful adversarial example generation, we perform adversarial example detection using the intrinsic characteristics of the layers in the deep learning model. We also test the robustness of our detection method against further compromise by the attacker. We tested our approach on the Kitsune datasets, which are state-of-the-art datasets obtained from deployed IoT network scenarios. Our experimental results show an average adversarial example generation time of 0.337 s and an average detection rate of almost 100%. However, the robustness analysis reveals that an attacker can easily bypass the detection mechanism using low-magnitude log-normal Gaussian noise, reducing adversarial example detection by almost 100% after compromise.
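
One plausible reading of detection via "intrinsic characteristics of the layers" is to compare a hidden layer's activations against statistics collected on clean traffic, as sketched below; the model, features, and threshold are assumptions rather than the paper's exact mechanism.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: flag adversarial inputs whose hidden-layer activations
# deviate from statistics collected on clean traffic (no retraining needed).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
hidden = model[:2]                       # the intrinsic layer we monitor

clean = torch.randn(500, 20)             # stand-in for benign packet features
with torch.no_grad():
    acts = hidden(clean)
mu, sigma = acts.mean(0), acts.std(0) + 1e-8

def is_adversarial(x, z_thresh=4.0):
    """Mean |z-score| of hidden activations against clean statistics."""
    with torch.no_grad():
        z = ((hidden(x) - mu) / sigma).abs().mean(dim=1)
    return z > z_thresh

print(is_adversarial(clean[:3]))          # expected: all False
print(is_adversarial(clean[:3] + 10.0))   # grossly perturbed inputs: True
```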

24 pages, 1530 KiB  
Article
A Lightweight Robust Training Method for Defending Model Poisoning Attacks in Federated Learning Assisted UAV Networks
by Lucheng Chen, Weiwei Zhai, Xiangfeng Bu, Ming Sun and Chenglin Zhu
Drones 2025, 9(8), 528; https://doi.org/10.3390/drones9080528 - 28 Jul 2025
Abstract
The integration of unmanned aerial vehicles (UAVs) into next-generation wireless networks greatly enhances the flexibility and efficiency of communication and distributed computation for ground mobile devices. Federated learning (FL) provides a privacy-preserving paradigm for device collaboration but remains highly vulnerable to poisoning attacks and is further challenged by the resource constraints and heterogeneous data common to UAV-assisted systems. Existing robust aggregation and anomaly detection methods often degrade in efficiency and reliability under these realistic adversarial and non-IID settings. To bridge these gaps, we propose FedULite, a lightweight and robust federated learning framework specifically designed for UAV-assisted environments. FedULite features unsupervised local representation learning optimized for unlabeled, non-IID data. Moreover, FedULite leverages a robust, adaptive server-side aggregation strategy that uses cosine similarity-based update filtering and dimension-wise adaptive learning rates to neutralize sophisticated data and model poisoning attacks. Extensive experiments across diverse datasets and adversarial scenarios demonstrate that FedULite reduces the attack success rate (ASR) from over 90% in undefended scenarios to below 5%, while maintaining the main task accuracy loss within 2%. Moreover, it introduces negligible computational overhead compared to standard FedAvg, with approximately 7% additional training time.
(This article belongs to the Special Issue IoT-Enabled UAV Networks for Secure Communication)
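
The server-side filtering step might look like the following sketch: updates whose cosine similarity to a robust reference direction falls below a threshold are dropped, and the remainder are averaged with a dimension-wise shrinkage. Threshold and scaling choices are assumptions.

```python
import numpy as np

# Hypothetical sketch of FedULite-style server aggregation: drop client
# updates whose direction disagrees with the median update, then average
# with a per-dimension adaptive rate. Thresholds are assumptions.
def robust_aggregate(updates, cos_thresh=0.0, lr_scale=1.0):
    U = np.stack(updates)                       # (n_clients, n_params)
    ref = np.median(U, axis=0)                  # robust reference direction
    norms = np.linalg.norm(U, axis=1) * np.linalg.norm(ref) + 1e-12
    cos = U @ ref / norms
    kept = U[cos > cos_thresh]                  # filter suspected poisoners
    if len(kept) == 0:
        return np.zeros_like(ref)
    agg = kept.mean(axis=0)
    # dimension-wise adaptive rate: shrink dimensions where clients disagree
    disagreement = kept.std(axis=0) + 1e-12
    return lr_scale * agg / (1.0 + disagreement)

honest = [np.ones(5) + 0.1 * np.random.default_rng(i).normal(size=5)
          for i in range(4)]
poisoned = [-10.0 * np.ones(5)]                 # sign-flipped, scaled attack
print(robust_aggregate(honest + poisoned))
```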

21 pages, 2789 KiB  
Article
BIM-Based Adversarial Attacks Against Speech Deepfake Detectors
by Wendy Edda Wang, Davide Salvi, Viola Negroni, Daniele Ugo Leonzio, Paolo Bestagini and Stefano Tubaro
Electronics 2025, 14(15), 2967; https://doi.org/10.3390/electronics14152967 - 24 Jul 2025
Abstract
Automatic Speaker Verification (ASV) systems are increasingly employed to secure access to services and facilities. However, recent advances in speech deepfake generation pose serious threats to their reliability. Modern speech synthesis models can convincingly imitate a target speaker’s voice and generate realistic synthetic audio, potentially enabling unauthorized access through ASV systems. To counter these threats, forensic detectors have been developed to distinguish between real and fake speech. Although these models achieve strong performance, their deep learning nature makes them susceptible to adversarial attacks, i.e., carefully crafted, imperceptible perturbations of the audio signal that cause the model to misclassify. In this paper, we explore adversarial attacks targeting speech deepfake detectors. Specifically, we analyze the effectiveness of Basic Iterative Method (BIM) attacks applied in both the time and frequency domains under white- and black-box conditions. Additionally, we propose an ensemble-based attack strategy designed to simultaneously target multiple detection models. This approach generates adversarial examples with balanced effectiveness across the ensemble, enhancing transferability to unseen models. Our experimental results show that, although crafting universally transferable attacks remains challenging, it is possible to fool state-of-the-art detectors using minimal, imperceptible perturbations, highlighting the need for more robust defenses in speech deepfake detection.
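
BIM itself is standard and easy to reproduce; the sketch below implements the time-domain variant against a stand-in detector. The paper's detector models, audio features, and ensemble strategy are not reproduced.

```python
import torch
import torch.nn.functional as F

# A minimal sketch of the Basic Iterative Method (BIM/I-FGSM) in the time
# domain: iteratively add sign-of-gradient perturbations, projected onto an
# L-infinity ball of radius eps around the original waveform.
def bim_attack(model, x, y, eps=0.01, alpha=0.002, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project to ball
    return x_adv.detach()

# Toy usage with a stand-in "detector" over 1-second, 16 kHz waveforms
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 2))
x = torch.randn(4, 1, 16000)
y = torch.zeros(4, dtype=torch.long)           # true class: "real"
x_adv = bim_attack(model, x, y)
print((x_adv - x).abs().max())                 # bounded by eps
```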

38 pages, 6851 KiB  
Article
FGFNet: Fourier Gated Feature-Fusion Network with Fractal Dimension Estimation for Robust Palm-Vein Spoof Detection
by Seung Gu Kim, Jung Soo Kim and Kang Ryoung Park
Fractal Fract. 2025, 9(8), 478; https://doi.org/10.3390/fractalfract9080478 - 22 Jul 2025
Abstract
The palm-vein recognition system has garnered attention as a biometric technology due to its resilience to external environmental factors, protection of personal privacy, and low risk of external exposure. However, with recent advancements in deep learning-based generative models for image synthesis, the quality and sophistication of fake images have improved, leading to an increased security threat from counterfeit images. In particular, palm-vein images acquired through near-infrared illumination exhibit low resolution and blurred characteristics, making it even more challenging to detect fake images. Furthermore, spoof detection specifically targeting palm-vein images has not been studied in detail. To address these challenges, this study proposes the Fourier-gated feature-fusion network (FGFNet) as a novel spoof detector for palm-vein recognition systems. The proposed network integrates a masked fast Fourier transform, a map-based gated feature fusion block, and a fast Fourier convolution (FFC) attention block with global contrastive loss to effectively detect distortion patterns caused by generative models. These components enable the efficient extraction of critical information required to determine the authenticity of palm-vein images. In addition, fractal dimension estimation (FDE) was employed for two purposes in this study. In the spoof attack procedure, FDE was used to evaluate how closely the generated fake images approximate the structural complexity of real palm-vein images, confirming that the generative model produced highly realistic spoof samples. In the spoof detection procedure, the FDE results further demonstrated that the proposed FGFNet effectively distinguishes between real and fake images, validating its capability to capture subtle structural differences induced by generative manipulation. To evaluate the spoof detection performance of FGFNet, experiments were conducted using real palm-vein images from two publicly available palm-vein datasets—VERA Spoofing PalmVein (VERA dataset) and PLUSVein-contactless (PLUS dataset)—as well as fake palm-vein images generated from these datasets using a cycle-consistent generative adversarial network. The results showed that FGFNet achieved an average classification error rate of 0.3% on both the VERA and PLUS datasets, demonstrating superior performance compared to existing state-of-the-art spoof detection methods.
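
Fractal dimension estimation is commonly done by box counting; the sketch below shows that textbook estimator on a binarized image, with the caveat that the paper's exact FDE procedure may differ.

```python
import numpy as np

# Hypothetical sketch: box-counting fractal dimension of a binarized image.
def box_counting_dimension(img, threshold=0.5):
    Z = img > threshold                         # binarize
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        # count boxes of side s containing at least one foreground pixel
        h, w = Z.shape[0] // s * s, Z.shape[1] // s * s
        blocks = Z[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # slope of log N(s) versus log(1/s) estimates the fractal dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

rng = np.random.default_rng(0)
print(box_counting_dimension(rng.random((128, 128))))   # dense noise -> ~2
```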

43 pages, 2108 KiB  
Article
FIGS: A Realistic Intrusion-Detection Framework for Highly Imbalanced IoT Environments
by Zeynab Anbiaee, Sajjad Dadkhah and Ali A. Ghorbani
Electronics 2025, 14(14), 2917; https://doi.org/10.3390/electronics14142917 - 21 Jul 2025
Abstract
The rapid growth of Internet of Things (IoT) environments has increased security challenges due to heightened exposure to cyber threats and attacks. A key problem is the class imbalance in attack traffic, where critical yet underrepresented attacks are often overlooked by intrusion-detection systems (IDS), thereby compromising reliability. We propose Feature-Importance GAN SMOTE (FIGS), an innovative, realistic intrusion-detection framework designed for IoT environments to address this challenge. Unlike other works that rely only on traditional oversampling methods, FIGS integrates sensitivity-based feature-importance analysis, Generative Adversarial Network (GAN)-based augmentation, a novel imbalance ratio (GIR), and the Synthetic Minority Oversampling Technique (SMOTE) to generate high-quality synthetic data for minority classes. FIGS enhances minority-class detection by focusing on the most important features identified by the sensitivity analysis, while minimizing computational overhead and reducing noise during data generation. Evaluations on the CICIoMT2024 and CICIDS2017 datasets demonstrate that FIGS improves detection accuracy and significantly lowers the false-negative rate. FIGS achieved a 17% improvement over the baseline model on the CICIoMT2024 dataset while maintaining performance for the majority classes. The results show that FIGS represents a highly effective solution for real-world IoT networks, with high detection accuracy across all classes and without introducing unnecessary computational overhead.
(This article belongs to the Special Issue Network Security and Cryptography Applications)
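
A stripped-down version of the feature-importance-plus-SMOTE stage, using scikit-learn and imbalanced-learn; the GAN augmentation and the GIR ratio are omitted, and the importance ranking here uses a random forest as a stand-in for the paper's sensitivity analysis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE

# Hypothetical sketch of the FIGS idea: rank features by importance, keep the
# top-k to cut noise, then oversample the minority class with SMOTE.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.95],
                           random_state=0)     # ~5% minority "attack" class
rank = RandomForestClassifier(random_state=0).fit(X, y)
top_k = np.argsort(rank.feature_importances_)[::-1][:10]

X_res, y_res = SMOTE(random_state=0).fit_resample(X[:, top_k], y)
print(np.bincount(y), "->", np.bincount(y_res))
```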

36 pages, 8047 KiB  
Article
Fed-DTB: A Dynamic Trust-Based Framework for Secure and Efficient Federated Learning in IoV Networks: Securing V2V/V2I Communication
by Ahmed Alruwaili, Sardar Islam and Iqbal Gondal
J. Cybersecur. Priv. 2025, 5(3), 48; https://doi.org/10.3390/jcp5030048 - 19 Jul 2025
Abstract
The Internet of Vehicles (IoV) presents a vast opportunity for optimised traffic flow, road safety, and enhanced user experience, supported by Federated Learning (FL). However, the distributed nature of IoV networks creates inherent problems regarding data privacy, security against adversarial attacks, and the management of available resources. This paper introduces Fed-DTB, a new dynamic trust-based framework for FL that aims to overcome these challenges in the context of IoV. Fed-DTB integrates an adaptive trust evaluation mechanism capable of quickly identifying and excluding malicious clients to maintain the authenticity of the learning process. A performance comparison with previous approaches shows that Fed-DTB improves accuracy in the first two training rounds and decreases the per-round training time. Fed-DTB is robust to non-IID data distributions and outperforms all other state-of-the-art approaches in final accuracy (87–88%), convergence rate, and adversary detection (99.86% accuracy). The key contributions include (1) a multi-factor trust evaluation mechanism with seven contextual factors, (2) correlation-based adaptive weighting that dynamically prioritises trust factors based on vehicular conditions, and (3) an optimisation-based client selection strategy that maximises collaborative reliability. This work opens up opportunities for more accurate, secure, and private collaborative learning in future intelligent transportation systems, overcoming the conventional trade-off between security and efficiency.
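
The correlation-based adaptive weighting can be illustrated as follows: per-client trust factors are weighted by how strongly each factor tracks observed contribution quality, and the top-scoring clients are selected. Factor names and the weighting rule are assumptions, not the paper's seven factors.

```python
import numpy as np

# Hypothetical sketch: combine per-client trust factors with
# correlation-based adaptive weights, then select the most trusted clients.
rng = np.random.default_rng(0)
n_clients = 10
# rows: clients; columns: e.g. update similarity, historical accuracy,
# latency stability, data freshness (all normalized to [0, 1])
factors = rng.random((n_clients, 4))
round_accuracy = rng.random(n_clients)   # observed contribution quality

# weight each factor by how strongly it tracks observed quality
corr = np.array([abs(np.corrcoef(factors[:, j], round_accuracy)[0, 1])
                 for j in range(factors.shape[1])])
weights = corr / corr.sum()

trust = factors @ weights
selected = np.argsort(trust)[::-1][:5]   # top-5 trusted clients
print("trust scores:", np.round(trust, 2))
print("selected clients:", selected)
```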

20 pages, 437 KiB  
Article
Post-Quantum Key Exchange and Subscriber Identity Encryption in 5G Using ML-KEM (Kyber)
by Qaiser Khan, Sourav Purification and Sang-Yoon Chang
Information 2025, 16(7), 617; https://doi.org/10.3390/info16070617 - 19 Jul 2025
Abstract
5G addresses user privacy concerns in cellular networking by encrypting a subscriber identifier with elliptic-curve-based encryption and transmitting it as ciphertext known as a Subscriber Concealed Identifier (SUCI). However, an adversary equipped with a quantum computer can break a discrete-logarithm-based elliptic curve algorithm. Consequently, user privacy in 5G is at stake against quantum attacks. In this paper, we study the incorporation of post-quantum ciphers into the SUCI calculation at both the user equipment and the core network, which involves a shared-key exchange followed by use of the resulting key for identifier encryption. We experiment on different hardware platforms to analyze the PQC key exchange and encryption using the NIST-standardized CRYSTALS-Kyber (now called ML-KEM following its selection for standardization by NIST). Our analyses focus on performance and compare the Kyber-based key exchange and encryption with the current (pre-quantum) elliptic curve Diffie–Hellman (ECDH). The performance analyses are critical because mobile networking involves resource-limited, battery-operated mobile devices. We measure and analyze not only the time and CPU-processing performance but also the energy and power performance. Our analyses show that Kyber-512 is the most efficient and even outperforms ECDH (i.e., faster computations and lower energy consumption).
(This article belongs to the Special Issue Public Key Cryptography and Privacy Protection)
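
A rough timing harness for the Kyber-512 flow is sketched below. It assumes the third-party pqcrypto package (PQClean bindings) and its generate_keypair/encrypt/decrypt interface, which may differ by version; the SUCI construction itself is not reproduced.

```python
import time
# Assumption: the `pqcrypto` package exposes Kyber-512 under this module
# path with these function names; verify against your installed version.
from pqcrypto.kem.kyber512 import generate_keypair, encrypt, decrypt

# Rough sketch of the key-exchange step, timed end to end.
t0 = time.perf_counter()
public_key, secret_key = generate_keypair()            # UE-side key pair
ciphertext, shared_secret_enc = encrypt(public_key)    # encapsulate
shared_secret_dec = decrypt(secret_key, ciphertext)    # core-network side
t1 = time.perf_counter()

assert shared_secret_enc == shared_secret_dec
print(f"Kyber-512 keygen+encap+decap: {(t1 - t0) * 1e3:.2f} ms")
# The shared secret would then key a symmetric cipher to conceal the
# subscriber identifier, analogous to the SUCI flow described above.
```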

21 pages, 423 KiB  
Article
Multi-Line Prefetch Covert Channel with Huge Pages
by Xinyao Li and Akhilesh Tyagi
Cryptography 2025, 9(3), 51; https://doi.org/10.3390/cryptography9030051 - 18 Jul 2025
Abstract
Modern x86 processors incorporate performance-enhancing features such as prefetching mechanisms, cache coherence protocols, and support for large memory pages (e.g., 2 MB huge pages). While these architectural innovations aim to reduce memory access latency, boost throughput, and maintain cache consistency across cores, they can also expose subtle microarchitectural side channels that adversaries may exploit. This study investigates how the combination of prefetching techniques and huge pages can significantly enhance the throughput and accuracy of covert channels in controlled computing environments. Building on prior work that examined the impact of the MESI cache coherence protocol using single-cache-line access without huge pages, our approach expands the attack surface by simultaneously accessing multiple cache lines across all 512 L1 lines under a 2 MB huge-page configuration. As a result, our 9-bit covert channel achieves a peak throughput of 4940 KB/s, substantially exceeding previously reported benchmarks. We further validate our channel on AMD SEV-SNP virtual machines, achieving up to 88% decoding accuracy using write-access encoding with 2 MB huge pages, demonstrating feasibility even in TEE-enforced virtualization environments. These findings highlight the need to carefully evaluate the side-channel implications of common performance optimizations.

55 pages, 6352 KiB  
Review
A Deep Learning Framework for Enhanced Detection of Polymorphic Ransomware
by Mazen Gazzan, Bader Alobaywi, Mohammed Almutairi and Frederick T. Sheldon
Future Internet 2025, 17(7), 311; https://doi.org/10.3390/fi17070311 - 18 Jul 2025
Abstract
Ransomware, a significant cybersecurity threat, encrypts files and causes substantial damage, making early detection crucial yet challenging. This paper introduces a novel multi-phase framework for early ransomware detection, designed to enhance accuracy and minimize false positives. The framework addresses the limitations of existing methods by integrating operational data with situational and threat intelligence, enabling it to dynamically adapt to the evolving ransomware landscape. Key innovations include (1) data augmentation using a Bi-Gradual Minimax Generative Adversarial Network (BGM-GAN) to generate synthetic ransomware attack patterns, addressing data insufficiency; (2) Incremental Mutual Information Selection (IMIS) for dynamically selecting relevant features, adapting to evolving ransomware behaviors and reducing computational overhead; and (3) a Deep Belief Network (DBN) detection architecture, trained on the augmented data and optimized with Uncertainty-Aware Dynamic Early Stopping (UA-DES) to prevent overfitting. The model demonstrates a 4% improvement in detection accuracy (from 90% to 94%) through synthetic data generation and reduces false positives from 15.4% to 14%. The IMIS technique further increases accuracy to 96% while reducing false positives. The UA-DES optimization boosts accuracy to 98.6% and lowers false positives to 10%. Overall, this framework effectively addresses the challenges posed by evolving ransomware, significantly enhancing detection accuracy and reliability.
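
The IMIS stage can be approximated with a greedy mutual-information selector, as sketched below; the relevance-minus-redundancy rule is an mRMR-style assumption, not necessarily the paper's exact criterion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Hypothetical sketch of incremental mutual-information selection: greedily
# add the feature with the highest MI with the label, penalized by
# redundancy with features already chosen.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=6,
                           random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)

selected = []
for _ in range(6):
    best, best_score = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        # redundancy: mean absolute correlation with already-selected features
        red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                       for s in selected]) if selected else 0.0
        score = relevance[j] - red
        if score > best_score:
            best, best_score = j, score
    selected.append(best)

print("selected features:", selected)
```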
