J. Cybersecur. Priv., Volume 5, Issue 3 (September 2025) – 25 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
54 pages, 1637 KiB  
Article
MICRA: A Modular Intelligent Cybersecurity Response Architecture with Machine Learning Integration
by Alessandro Carvalho Coutinho and Luciano Vieira de Araújo
J. Cybersecur. Priv. 2025, 5(3), 60; https://doi.org/10.3390/jcp5030060 - 16 Aug 2025
Viewed by 421
Abstract
The growing sophistication of cyber threats has posed significant challenges for organizations in terms of accurately detecting and responding to incidents in a coordinated manner. Despite advances in the application of machine learning and automation, many solutions still face limitations such as high false positive rates, low scalability, and difficulties in interorganizational cooperation. This study presents MICRA (Modular Intelligent Cybersecurity Response Architecture), a modular conceptual proposal that integrates dynamic data acquisition, cognitive threat analysis, multi-layer validation, adaptive response orchestration, and collaborative intelligence sharing. The architecture consists of six interoperable modules and incorporates techniques such as supervised learning, heuristic analysis, and behavioral modeling. The modules are designed for operation in diverse environments, including corporate networks, educational networks, and critical infrastructures. MICRA seeks to establish a flexible and scalable foundation for proactive cyber defense, reconciling automation, collaborative intelligence, and adaptability. This proposal aims to support future implementations and research on incident response and cyber resilience in complex operational contexts. Full article
(This article belongs to the Collection Machine Learning and Data Analytics for Cyber Security)

22 pages, 1908 KiB  
Article
AI-Blockchain Integration for Real-Time Cybersecurity: System Design and Evaluation
by Sam Goundar and Iqbal Gondal
J. Cybersecur. Priv. 2025, 5(3), 59; https://doi.org/10.3390/jcp5030059 - 14 Aug 2025
Viewed by 456
Abstract
This paper proposes and evaluates a novel real-time cybersecurity framework integrating artificial intelligence (AI) and blockchain technology to enhance the detection and auditability of cyber threats. Traditional cybersecurity approaches often lack transparency and robustness in logging and verifying AI-generated decisions, hindering forensic investigations and regulatory compliance. To address these challenges, we developed an integrated solution combining a convolutional neural network (CNN)-based anomaly detection module with a permissioned Ethereum blockchain to securely log and immutably store AI-generated alerts and relevant metadata. The proposed system employs smart contracts to automatically validate AI alerts and ensure data integrity and transparency, significantly enhancing auditability and forensic analysis capabilities. To rigorously test and validate our solution, we conducted comprehensive experiments using the CICIDS2017 dataset and evaluated the system’s detection accuracy, precision, recall, and real-time responsiveness. Additionally, we performed penetration testing and security assessments to verify system resilience against common cybersecurity threats. Results demonstrate that our AI-blockchain integrated solution achieves superior detection performance while ensuring real-time logging, transparency, and auditability. The integration significantly strengthens system robustness, reduces false positives, and provides clear benefits for cybersecurity management, especially in regulated environments. This paper concludes by outlining potential avenues for future research, particularly extending blockchain scalability, privacy enhancements, and optimizing performance for high-throughput cybersecurity applications. Full article
(This article belongs to the Section Security Engineering & Applications)
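The immutable, hash-linked logging of AI alerts that this abstract describes can be sketched in miniature. The class below is an illustrative stand-in only: it chains SHA-256 digests in plain Python rather than using the paper's permissioned Ethereum ledger and smart contracts, and all names are hypothetical.

```python
import hashlib
import json

def _digest(entry: dict) -> str:
    # Deterministic hash over the entry's canonical JSON form
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AlertLog:
    """Append-only, hash-chained log of detector alerts (simplified stand-in
    for on-chain logging; any tampering breaks the chain of digests)."""

    def __init__(self):
        self.chain = []

    def append(self, alert: dict) -> dict:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"alert": alert, "prev": prev}
        entry["hash"] = _digest({"alert": alert, "prev": prev})
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every digest and check each link points at its predecessor
        prev = "0" * 64
        for e in self.chain:
            if e["prev"] != prev or e["hash"] != _digest({"alert": e["alert"], "prev": e["prev"]}):
                return False
            prev = e["hash"]
        return True
```

Appending alerts and then editing any earlier entry makes `verify()` fail, which is the auditability property the blockchain layer provides at scale.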

30 pages, 1486 KiB  
Article
A Comprehensive Analysis of Evolving Permission Usage in Android Apps: Trends, Threats, and Ecosystem Insights
by Ali Alkinoon, Trung Cuong Dang, Ahod Alghuried, Abdulaziz Alghamdi, Soohyeon Choi, Manar Mohaisen, An Wang, Saeed Salem and David Mohaisen
J. Cybersecur. Priv. 2025, 5(3), 58; https://doi.org/10.3390/jcp5030058 - 14 Aug 2025
Viewed by 417
Abstract
The proper use of Android app permissions is crucial to the success and security of these apps. Users must agree to permission requests when installing or running their apps. Despite official Android platform documentation on proper permission usage, there are still many cases of permission abuse. This study provides a comprehensive analysis of the Android permission landscape, highlighting trends and patterns in permission requests across various applications from the Google Play Store. By distinguishing between benign and malicious applications, we uncover developers’ evolving strategies, with malicious apps increasingly requesting fewer permissions to evade detection, while benign apps request more to enhance functionality. In addition to examining permission trends across years and app features such as advertisements, in-app purchases, content ratings, and app sizes, we leverage association rule mining using the FP-Growth algorithm. This allows us to uncover frequent permission combinations across the entire dataset, specific years, and 16 app genres. The analysis reveals significant differences in permission usage patterns, providing a deeper understanding of co-occurring permissions and their implications for user privacy and app functionality. By categorizing permissions into high-level semantic groups and examining their application across distinct app categories, this study offers a structured approach to analyzing the dynamics within the Android ecosystem. The findings emphasize the importance of continuous monitoring, user education, and regulatory oversight to address permission misuse effectively. Full article
(This article belongs to the Section Security Engineering & Applications)
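The frequent permission combinations the abstract mentions can be illustrated with a toy miner. Note the paper uses the FP-Growth algorithm; the brute-force enumeration below produces the same frequent itemsets on a tiny, made-up app sample but does not scale the way FP-Growth's tree structure does.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Return every permission combination whose support (fraction of apps
    requesting all of its permissions) meets min_support. Brute-force
    enumeration for illustration; FP-Growth yields the same result faster."""
    n = len(transactions)
    counts = Counter()
    for perms in transactions:
        items = sorted(set(perms))
        for k in range(1, len(items) + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {combo: c / n for combo, c in counts.items() if c / n >= min_support}

# Hypothetical four-app sample using real Android permission names
apps = [
    {"INTERNET", "CAMERA", "ACCESS_FINE_LOCATION"},
    {"INTERNET", "ACCESS_FINE_LOCATION"},
    {"INTERNET", "READ_SMS"},
    {"INTERNET", "CAMERA", "ACCESS_FINE_LOCATION"},
]
freq = frequent_itemsets(apps, min_support=0.75)
```

Here `INTERNET` appears in every app (support 1.0) and co-occurs with `ACCESS_FINE_LOCATION` in three of four (support 0.75), the kind of co-occurrence pattern the study mines per year and per genre.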

21 pages, 2863 KiB  
Article
Metric Differential Privacy on the Special Orthogonal Group SO(3)
by Anna Katharina Hildebrandt, Elmar Schömer and Andreas Hildebrandt
J. Cybersecur. Priv. 2025, 5(3), 57; https://doi.org/10.3390/jcp5030057 - 12 Aug 2025
Viewed by 228
Abstract
Differential privacy (DP) is an important framework to provide strong theoretical guarantees on the privacy and utility of released data. Since its introduction in 2006, DP has been applied to various data types and domains. More recently, the introduction of metric differential privacy has improved the applicability and interpretability of DP in cases where the data resides in more general metric spaces. In metric DP, indistinguishability of data points is modulated by their distance. In this work, we demonstrate how to extend metric differential privacy to datasets representing three-dimensional rotations in SO(3) through two mechanisms: a Laplace mechanism on SO(3), and a novel privacy mechanism based on the Bingham distribution. In contrast to other applications of metric DP to directional data, we demonstrate how to handle the antipodal symmetry inherent in SO(3) while transferring privacy from S³ to SO(3). We show that the Laplace mechanism fulfills ϵϕ-privacy, where ϕ is the geodesic metric on SO(3), and that the Bingham mechanism fulfills ϵ̃ϕ-privacy with ϵ̃ = (π/4)ϵ. Through a simulation study, we compare the distribution of samples from both mechanisms and argue about their respective privacy–utility tradeoffs. Full article
(This article belongs to the Special Issue Data Protection and Privacy)
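For reference, the metric-DP guarantee the abstract invokes (standard in the metric-DP literature, not restated in the abstract itself) says a mechanism M satisfies ϵd-privacy when, for all inputs x, x′ and all measurable sets S of outputs:

```latex
\Pr[M(x) \in S] \;\le\; e^{\epsilon\, d(x, x')}\, \Pr[M(x') \in S]
```

Taking d to be the geodesic metric ϕ on SO(3) gives the ϵϕ-privacy shown for the Laplace mechanism: nearby rotations are nearly indistinguishable, while distant ones may be distinguished more easily.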

21 pages, 1477 KiB  
Article
When Things Heat Up: Detecting Malicious Activity Using CPU Thermal Sensors
by Teodora Vasilas and Remus Brad
J. Cybersecur. Priv. 2025, 5(3), 56; https://doi.org/10.3390/jcp5030056 - 11 Aug 2025
Viewed by 337
Abstract
In today’s era of technology, where information is readily available anytime and from anywhere, safeguarding our privacy and sensitive data is more important than ever. The thermal sensors embedded within a CPU are primarily utilized for monitoring and regulating the processor’s temperature during operation. However, they can serve as valuable components in increasing the security of a system as well, by enabling the detection of anomalies through temperature monitoring. This study presents three distinct methods demonstrating that anomalies in CPU heat dissipation can be effectively detected using the thermal sensors of a CPU, even under conditions of significant environmental noise. First, it evaluates the Hot-n-Cold anomaly detection technique across various noisy environments, demonstrating that the presence of additional lines of code inserted into a Linux command can be identified through thermal analysis. Second, it detects the CryptoTrooper ransomware attack by fingerprinting the associated cryptographic processes in terms of temperature. Finally, it detects unauthorized system login attempts by capturing and analyzing their distinctive thermal signatures. This study demonstrates that various detection mechanisms can be implemented using thermal sensors to enhance system security. It also motivates the need for further research in this relatively underexplored area with the goal of developing more effective methods of protecting data. Full article
(This article belongs to the Section Security Engineering & Applications)
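The core idea of flagging abnormal heat dissipation can be reduced to a deviation test against a baseline thermal profile. The sketch below is a deliberately simplified stand-in for the paper's three methods (a z-score threshold on made-up readings; a real monitor would sample live sensors, e.g. via `/sys/class/thermal` or `psutil.sensors_temperatures()`).

```python
from statistics import mean, stdev

def thermal_anomalies(samples, baseline, z_thresh=3.0):
    """Return the indices of samples whose temperature deviates from the
    baseline profile by more than z_thresh standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, t in enumerate(samples) if abs(t - mu) / sigma > z_thresh]

# Hypothetical idle-load baseline and live readings, in degrees Celsius
baseline = [48.0, 49.5, 48.7, 49.1, 48.4, 49.0, 48.8, 49.3]
readings = [48.9, 49.2, 57.5, 49.0]  # third value: a sudden burst of work
```

On this toy data only the 57.5 °C spike is flagged; the paper's contribution is showing that bursts like injected code, ransomware encryption, and login attempts leave such detectable signatures even in noisy environments.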

60 pages, 4240 KiB  
Article
Leveraging Large Language Models for Scalable and Explainable Cybersecurity Log Analysis
by Giulia Palma, Gaia Cecchi, Mario Caronna and Antonio Rizzo
J. Cybersecur. Priv. 2025, 5(3), 55; https://doi.org/10.3390/jcp5030055 - 10 Aug 2025
Viewed by 755
Abstract
The increasing complexity and volume of cybersecurity logs demand advanced analytical techniques capable of accurate threat detection and explainability. This paper investigates the application of Large Language Models (LLMs), specifically qwen2.5:7b, gemma3:4b, llama3.2:3b, qwen3:8b and qwen2.5:32b to cybersecurity log classification, demonstrating their superior performance compared to traditional machine learning models such as XGBoost, Random Forest, and LightGBM. We present a comprehensive evaluation pipeline that integrates domain-specific prompt engineering, robust parsing of free-text LLM outputs, and uncertainty quantification to enable scalable, automated benchmarking. Our experiments on a vulnerability detection task show that the LLM achieves an F1-score of 0.928 ([0.913, 0.942] 95% CI), significantly outperforming XGBoost (0.555 [0.520, 0.590]) and LightGBM (0.432 [0.380, 0.484]). In addition to superior predictive performance, the LLM generates structured, domain-relevant explanations aligned with classical interpretability methods. These findings highlight the potential of LLMs as interpretable, adaptive tools for operational cybersecurity, making advanced threat detection feasible for SMEs and paving the way for their deployment in dynamic threat environments. Full article
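The "robust parsing of free-text LLM outputs" step the abstract mentions can be sketched as a small normalizer that maps an arbitrary model reply onto a fixed label set. The label names, field format, and fallback behaviour below are all hypothetical illustrations, not the paper's actual pipeline.

```python
import re

LABELS = ["vulnerable", "benign"]  # hypothetical label set for illustration

def parse_llm_verdict(text: str, default: str = "unparsed") -> str:
    """Extract a classification label from a free-text model reply:
    first look for an explicit 'label: X' field, then fall back to the
    first known label mentioned anywhere, else return a sentinel."""
    m = re.search(r"label\s*[:=]\s*(\w+)", text, re.IGNORECASE)
    if m and m.group(1).lower() in LABELS:
        return m.group(1).lower()
    for lab in LABELS:
        if re.search(rf"\b{lab}\b", text, re.IGNORECASE):
            return lab
    return default
```

Counting how often the sentinel is returned gives a simple uncertainty signal: replies that cannot be parsed at all are exactly the ones a benchmark harness should surface rather than silently score.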

19 pages, 443 KiB  
Article
Frame-Wise Steganalysis Based on Mask-Gating Attention and Deep Residual Bilinear Interaction Mechanisms for Low-Bit-Rate Speech Streams
by Congcong Sun, Azizol Abdullah, Normalia Samian and Nuur Alifah Roslan
J. Cybersecur. Priv. 2025, 5(3), 54; https://doi.org/10.3390/jcp5030054 - 4 Aug 2025
Viewed by 274
Abstract
Frame-wise steganalysis is a crucial task in low-bit-rate speech streams that can achieve active defense. However, there is no common theory on how to extract steganalysis features for frame-wise steganalysis. Moreover, existing frame-wise steganalysis methods cannot extract fine-grained steganalysis features. Therefore, in this paper, we propose a frame-wise steganalysis method based on mask-gating attention and bilinear codeword feature interaction mechanisms. First, this paper utilizes the mask-gating attention mechanism to dynamically learn the importance of the codewords. Second, the bilinear codeword feature interaction mechanism is used to capture an informative second-order codeword feature interaction pattern in a fine-grained way. Finally, multiple fully connected layers with a residual structure are utilized to capture higher-order codeword interaction features while preserving lower-order interaction features. The experimental results show that the performance of our method is better than that of the state-of-the-art frame-wise steganalysis method on large steganography datasets. The detection accuracy of our method is 74.46% on 1000K testing samples, whereas the detection accuracy of the state-of-the-art method is 72.32%. Full article
(This article belongs to the Special Issue Multimedia Security and Privacy)
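The "second-order codeword feature interaction" the abstract describes is, at its simplest, a pairwise (bilinear) product of two embedding vectors. The toy function below shows only that outer-product structure, with tiny made-up vectors; the paper's mechanism operates on learned codeword embeddings inside a network.

```python
def bilinear_interaction(u, v):
    """Outer product of two codeword embedding vectors: entry (i, j) is
    u[i] * v[j], capturing every pairwise interaction between the two."""
    return [[a * b for b in v] for a in u]
```

Flattening this matrix gives a fine-grained feature vector whose entries encode how each component of one codeword co-varies with each component of another, which higher layers can then combine with lower-order features.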

17 pages, 1027 KiB  
Article
AI-Driven Security for Blockchain-Based Smart Contracts: A GAN-Assisted Deep Learning Approach to Malware Detection
by Imad Bourian, Lahcen Hassine and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 53; https://doi.org/10.3390/jcp5030053 - 1 Aug 2025
Viewed by 578
Abstract
In the modern era, the use of blockchain technology has been growing rapidly, where Ethereum smart contracts play an important role in securing decentralized application systems. However, these smart contracts are also susceptible to a large number of vulnerabilities, which pose significant threats to intelligent systems and IoT applications, leading to data breaches and financial losses. Traditional detection techniques, such as manual analysis and static automated tools, suffer from high false positives and undetected security vulnerabilities. To address these problems, this paper proposes an Artificial Intelligence (AI)-based security framework that integrates Generative Adversarial Network (GAN)-based feature selection and deep learning techniques to classify and detect malware attacks on smart contract execution in the blockchain decentralized network. After an exhaustive pre-processing phase yielding a dataset of 40,000 malware and benign samples, the proposed model is evaluated and compared with related studies on the basis of a number of performance metrics including training accuracy, training loss, and classification metrics (accuracy, precision, recall, and F1-score). Our combined approach achieved a remarkable accuracy of 97.6%, demonstrating its effectiveness in detecting malware and protecting blockchain systems. Full article

28 pages, 6624 KiB  
Article
YoloMal-XAI: Interpretable Android Malware Classification Using RGB Images and YOLO11
by Chaymae El Youssofi and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 52; https://doi.org/10.3390/jcp5030052 - 1 Aug 2025
Viewed by 516
Abstract
As Android malware grows increasingly sophisticated, traditional detection methods struggle to keep pace, creating an urgent need for robust, interpretable, and real-time solutions to safeguard mobile ecosystems. This study introduces YoloMal-XAI, a novel deep learning framework that transforms Android application files into RGB image representations by mapping DEX (Dalvik Executable), Manifest.xml, and Resources.arsc files to distinct color channels. Evaluated on the CICMalDroid2020 dataset using YOLO11 pretrained classification models, YoloMal-XAI achieves 99.87% accuracy in binary classification and 99.56% in multi-class classification (Adware, Banking, Riskware, SMS, and Benign). Compared to ResNet-50, GoogLeNet, and MobileNetV2, YOLO11 offers competitive accuracy with at least 7× faster training over 100 epochs. Against YOLOv8, YOLO11 achieves comparable or superior accuracy while reducing training time by up to 3.5×. Cross-corpus validation using Drebin and CICAndMal2017 further confirms the model’s generalization capability on previously unseen malware. An ablation study highlights the value of integrating DEX, Manifest, and Resources components, with the full RGB configuration consistently delivering the best performance. Explainable AI (XAI) techniques—Grad-CAM, Grad-CAM++, Eigen-CAM, and HiRes-CAM—are employed to interpret model decisions, revealing the DEX segment as the most influential component. These results establish YoloMal-XAI as a scalable, efficient, and interpretable framework for Android malware detection, with strong potential for future deployment on resource-constrained mobile devices. Full article
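The DEX/Manifest/Resources-to-RGB mapping described in the abstract can be sketched as packing three byte streams into the three channels of a pixel grid. This is a schematic reduction with toy sizes and a naive pad-or-truncate policy, not the paper's exact encoding.

```python
def apk_to_rgb(dex: bytes, manifest: bytes, resources: bytes, side: int = 4):
    """Map the DEX, Manifest, and Resources byte streams of an app onto the
    R, G, and B channels of a side x side image, zero-padding short streams
    and truncating long ones (illustrative; real images are much larger)."""
    n = side * side
    def channel(data: bytes):
        return list(data[:n]) + [0] * max(0, n - len(data))
    r, g, b = channel(dex), channel(manifest), channel(resources)
    return [[(r[i * side + j], g[i * side + j], b[i * side + j])
             for j in range(side)] for i in range(side)]
```

Because each file type owns one channel, a classifier (or a Grad-CAM heatmap over it) can attribute a decision to a specific component, which is how the study finds the DEX segment most influential.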

20 pages, 1059 KiB  
Article
The Knowledge Sovereignty Paradigm: Mapping Employee-Driven Information Governance Following Organisational Data Breaches
by Jeferson Martínez Lozano, Kevin Restrepo Bedoya and Juan Velez-Ocampo
J. Cybersecur. Priv. 2025, 5(3), 51; https://doi.org/10.3390/jcp5030051 - 31 Jul 2025
Viewed by 316
Abstract
This study explores the emergent dynamics of knowledge sovereignty within organisations following data breach incidents. Using qualitative analysis based on Benoit’s image restoration theory, this study shows that employees do more than relay official messages—they actively shape information governance after a cyberattack. Employees adapt Benoit’s response strategies (denial, evasion of responsibility, reducing offensiveness, corrective action, and mortification) based on how authentic they perceive the organisation’s response, their identification with the company, and their sense of fairness in crisis management. This investigation substantively extends extant crisis communication theory by showing how knowledge sovereignty is shaped through negotiation, as employees manage their dual role as breach victims and organisational representatives. The findings suggest that employees are key actors in post-breach information governance, and that their authentic engagement is critical to organisational recovery after cybersecurity incidents. Full article
36 pages, 856 KiB  
Systematic Review
Is Blockchain the Future of AI Alignment? Developing a Framework and a Research Agenda Based on a Systematic Literature Review
by Alexander Neulinger, Lukas Sparer, Maryam Roshanaei, Dragutin Ostojić, Jainil Kakka and Dušan Ramljak
J. Cybersecur. Priv. 2025, 5(3), 50; https://doi.org/10.3390/jcp5030050 - 29 Jul 2025
Viewed by 938
Abstract
Artificial intelligence (AI) agents are increasingly shaping vital sectors of society, including healthcare, education, supply chains, and finance. As their influence grows, AI alignment research plays a pivotal role in ensuring these systems are trustworthy, transparent, and aligned with human values. Leveraging blockchain technology, proven over the past decade in enabling transparent, tamper-resistant distributed systems, offers significant potential to strengthen AI alignment. However, despite its potential, the current AI alignment literature has yet to systematically explore the effectiveness of blockchain in facilitating secure and ethical behavior in AI agents. While existing systematic literature reviews (SLRs) in AI alignment address various aspects of AI safety and AI alignment, this SLR specifically examines the gap at the intersection of AI alignment, blockchain, and ethics. To address this gap, this SLR explores how blockchain technology can overcome the limitations of existing AI alignment approaches. We searched for studies containing keywords from AI, blockchain, and ethics domains in the Scopus database, identifying 7110 initial records on 28 May 2024. We excluded studies which did not answer our research questions and did not discuss the thematic intersection between AI, blockchain, and ethics to a sufficient extent. The quality of the selected studies was assessed on the basis of their methodology, clarity, completeness, and transparency, resulting in a final number of 46 included studies, the majority of which were journal articles. Results were synthesized through quantitative topic analysis and qualitative analysis to identify key themes and patterns. 
The contributions of this paper include the following: (i) presentation of the results of an SLR conducted to identify, extract, evaluate, and synthesize studies on the symbiosis of AI alignment, blockchain, and ethics; (ii) summary and categorization of the existing benefits and challenges in incorporating blockchain for AI alignment within the context of ethics; (iii) development of a framework that will facilitate new research activities; and (iv) establishment of the state of evidence with in-depth assessment. The proposed blockchain-based AI alignment framework in this study demonstrates that integrating blockchain with AI alignment can substantially enhance robustness, promote public trust, and facilitate ethical compliance in AI systems. Full article

18 pages, 2539 KiB  
Article
Empowering End-Users with Cybersecurity Situational Awareness: Findings from IoT-Health Table-Top Exercises
by Fariha Tasmin Jaigirdar, Carsten Rudolph, Misita Anwar and Boyu Tan
J. Cybersecur. Priv. 2025, 5(3), 49; https://doi.org/10.3390/jcp5030049 - 25 Jul 2025
Viewed by 403
Abstract
End-users in a decision-oriented Internet of Things (IoT) healthcare system are often left in the dark regarding critical security information necessary for making informed decisions about potential risks. This is partly due to the lack of transparency and system security awareness end-users have in such systems. To empower end-users and enhance their cybersecurity situational awareness, it is imperative to thoroughly document and report the runtime security controls in place, as well as the security-relevant aspects of the devices they rely on. While the need for better transparency is obvious, it remains uncertain whether current systems offer adequate security metadata for end-users and how future designs can be improved to ensure better visibility into the security measures implemented. To address this gap, we conducted table-top exercises with ten security and ICT experts to evaluate a typical IoT-Health scenario. These exercises revealed the critical role of security metadata, identified which of them are available to be presented to users, and suggested potential enhancements that could be integrated into system design. We present our observations from the exercises, highlighting experts’ valuable suggestions, concerns, and views, backed by our in-depth analysis. Moreover, as a proof-of-concept of our study, we simulated three relevant use cases to detect cyber risks. This comprehensive analysis underscores critical considerations that can significantly improve future system protocols, ensuring end-users are better equipped to navigate and mitigate security risks effectively. Full article

36 pages, 8047 KiB  
Article
Fed-DTB: A Dynamic Trust-Based Framework for Secure and Efficient Federated Learning in IoV Networks: Securing V2V/V2I Communication
by Ahmed Alruwaili, Sardar Islam and Iqbal Gondal
J. Cybersecur. Priv. 2025, 5(3), 48; https://doi.org/10.3390/jcp5030048 - 19 Jul 2025
Viewed by 707
Abstract
The Internet of Vehicles (IoV) presents a vast opportunity for optimised traffic flow, road safety, and an enhanced user experience, aided by Federated Learning (FL). However, the distributed nature of IoV networks creates certain inherent problems regarding data privacy, security from adversarial attacks, and the handling of available resources. This paper introduces Fed-DTB, a new dynamic trust-based framework for FL that aims to overcome these challenges in the context of IoV. Fed-DTB integrates adaptive trust evaluation that is capable of quickly identifying and excluding malicious clients to maintain the authenticity of the learning process. A performance comparison with previous approaches is shown, where the Fed-DTB method improves accuracy in the first two training rounds and decreases the per-round training time. Fed-DTB is robust to non-IID data distributions and outperforms all other state-of-the-art approaches regarding the final accuracy (87–88%), convergence rate, and adversary detection (99.86% accuracy). The key contributions include (1) a multi-factor trust evaluation mechanism with seven contextual factors, (2) correlation-based adaptive weighting that dynamically prioritises trust factors based on vehicular conditions, and (3) an optimisation-based client selection strategy that maximises collaborative reliability. This work opens up opportunities for more accurate, secure, and private collaborative learning in future intelligent transportation systems with the help of federated learning while overcoming the conventional trade-off of security vs. efficiency. Full article
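The multi-factor trust evaluation and trust-based client selection the abstract lists can be sketched as a weighted score plus a threshold filter. The factor names, weights, and threshold below are invented for illustration; the paper uses seven contextual factors with correlation-derived adaptive weights.

```python
def trust_score(factors, weights):
    """Combine per-client trust factors (each in [0, 1]) into a single
    score using normalised weights."""
    total = sum(weights.values())
    return sum(weights[k] * factors[k] for k in weights) / total

def select_clients(clients, weights, threshold=0.6):
    # Keep only clients whose weighted trust clears the threshold
    return [cid for cid, f in clients.items() if trust_score(f, weights) >= threshold]

# Hypothetical factors for two vehicles; data quality weighted highest
weights = {"data_quality": 2.0, "latency": 1.0, "history": 1.0}
clients = {
    "v1": {"data_quality": 0.9, "latency": 0.8, "history": 1.0},
    "v2": {"data_quality": 0.2, "latency": 0.9, "history": 0.1},
}
```

Here `v1` scores 0.9 and is selected while `v2` scores 0.35 and is excluded, mirroring (in miniature) how low-trust clients are kept out of aggregation rounds.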

24 pages, 1991 KiB  
Article
A Multi-Feature Semantic Fusion Machine Learning Architecture for Detecting Encrypted Malicious Traffic
by Shiyu Tang, Fei Du, Zulong Diao and Wenjun Fan
J. Cybersecur. Priv. 2025, 5(3), 47; https://doi.org/10.3390/jcp5030047 - 17 Jul 2025
Viewed by 609
Abstract
With the increasing sophistication of network attacks, machine learning (ML)-based methods have showcased promising performance in attack detection. However, ML-based methods often suffer from high false-detection rates when tackling encrypted malicious traffic. To break through these bottlenecks, we propose EFTransformer, an encrypted-flow Transformer framework that combines semantic perception with multi-scale feature fusion to detect encrypted malicious traffic robustly and efficiently, making up for the shortcomings of ML methods in modeling ability and feature adequacy. EFTransformer introduces a channel-level extraction mechanism based on quintuples and a noise-aware clustering strategy to enhance the recognition ability of traffic patterns; adopts a dual-channel embedding method, using Word2Vec and FastText to capture global semantics and subword-level changes; and uses a Transformer-based classifier and attention pooling module to achieve dynamic feature-weighted fusion, thereby improving the robustness and accuracy of malicious traffic detection. Our systematic experiments on the ISCX2012 dataset demonstrate that EFTransformer achieves the best detection performance, with an accuracy of up to 95.26%, a false positive rate (FPR) of 6.19%, and a false negative rate (FNR) of only 5.85%. These results show that EFTransformer achieves high detection performance against encrypted malicious traffic. Full article
(This article belongs to the Section Security Engineering & Applications)
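The "channel-level extraction mechanism based on quintuples" amounts to grouping packets into flows keyed by the classic 5-tuple. A minimal sketch (field names and the payload-length feature are illustrative choices, not the paper's exact schema):

```python
from collections import defaultdict

def group_by_quintuple(packets):
    """Group packets into per-flow channels keyed by the quintuple
    (src IP, dst IP, src port, dst port, protocol); each channel here
    collects payload lengths as a stand-in for richer flow features."""
    flows = defaultdict(list)
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key].append(p["payload_len"])
    return dict(flows)

# Three hypothetical packets forming two distinct flows
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5000, "dport": 443, "proto": "TCP", "payload_len": 120},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5000, "dport": 443, "proto": "TCP", "payload_len": 80},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "sport": 6000, "dport": 443, "proto": "TCP", "payload_len": 40},
]
```

Each per-flow sequence would then be tokenized and embedded (the paper uses Word2Vec and FastText) before reaching the Transformer classifier.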

28 pages, 1727 KiB  
Article
Detecting Jamming in Smart Grid Communications via Deep Learning
by Muhammad Irfan, Aymen Omri, Javier Hernandez Fernandez, Savio Sciancalepore and Gabriele Oligeri
J. Cybersecur. Priv. 2025, 5(3), 46; https://doi.org/10.3390/jcp5030046 - 15 Jul 2025
Abstract
Power-Line Communication (PLC) allows data transmission through existing power lines, thus avoiding the expensive deployment of ad hoc network infrastructure. However, power line networks remain largely unattended, which allows tampering by malicious actors. In fact, an attacker can easily inject a malicious signal (jamming) with the aim of disrupting ongoing communications. In this paper, we propose a new solution to detect jamming attacks before they significantly affect the quality of the communication link, thus allowing the detection of a jammer (geographically) far away from a receiver. We consider two scenarios as a function of the receiver's ability to know in advance the impact of the jammer on the received signal. In the first scenario (jamming-aware), we leverage a classifier based on a Convolutional Neural Network, trained on both jammed and non-jammed signals. In the second scenario (jamming-unaware), we consider a one-class classifier based on autoencoders, allowing us to treat jamming detection as a classical anomaly detection problem. Our solution detects jamming attacks on PLC networks with an accuracy greater than 99% even when the jammer is 68 m away from the receiver, while requiring training only on traffic acquired during regular operation of the target PLC network.
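The jamming-unaware branch reduces to thresholding the autoencoder's reconstruction error: calibrate on jamming-free traffic, then flag anything the model cannot reconstruct well. A minimal sketch of that decision rule, with invented error values standing in for the real CNN autoencoder's outputs:

```python
import statistics

def detection_threshold(train_errors, k=3.0):
    # Calibrate on reconstruction errors from jamming-free traffic only:
    # mean + k standard deviations is a common anomaly cutoff.
    mu = statistics.mean(train_errors)
    sigma = statistics.stdev(train_errors)
    return mu + k * sigma

def is_jammed(error, threshold):
    # A frame whose reconstruction error exceeds the cutoff is anomalous.
    return error > threshold

# Toy reconstruction errors from normal PLC frames.
normal_errors = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.12]
thr = detection_threshold(normal_errors)
```

The key property matching the abstract: only regular-operation traffic is needed for training, since jammed samples never enter the calibration set.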

36 pages, 1120 KiB  
Article
Triple-Shield Privacy in Healthcare: Federated Learning, p-ABCs, and Distributed Ledger Authentication
by Sofia Sakka, Nikolaos Pavlidis, Vasiliki Liagkou, Ioannis Panges, Despina Elizabeth Filippidou, Chrysostomos Stylios and Anastasios Manos
J. Cybersecur. Priv. 2025, 5(3), 45; https://doi.org/10.3390/jcp5030045 - 12 Jul 2025
Abstract
The growing influence of technology in the healthcare industry has led to the creation of innovative applications that improve convenience, accessibility, and diagnostic accuracy. However, health applications face significant challenges concerning user privacy and data security, as they handle extremely sensitive personal and medical information. Privacy-Enhancing Technologies (PETs), such as privacy-preserving Attribute-Based Credentials (p-ABCs), Differential Privacy, and Federated Learning, have emerged as crucial tools to tackle these challenges. Despite their potential, PETs are not widely adopted due to technical and implementation obstacles. This research introduces a comprehensive framework for protecting health applications from privacy and security threats, with a specific emphasis on gamified mental health apps designed to manage Attention Deficit Hyperactivity Disorder (ADHD) in children. Acknowledging the heightened sensitivity of mental health data, especially in applications for children, our framework prioritizes user-centered design and strong privacy measures. We propose a blockchain-based identity management system to ensure secure and transparent credential management and incorporate Federated Learning to enable privacy-preserving AI-driven predictions. These advancements ensure compliance with data protection regulations, such as the GDPR, while meeting the needs of various stakeholders, including children, parents, educators, and healthcare professionals.
(This article belongs to the Special Issue Data Protection and Privacy)
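The Federated Learning component keeps raw health records on-device and aggregates only model parameters. A minimal sketch of the standard FedAvg aggregation rule (the general technique, not this paper's specific pipeline; the clinic counts and weights are invented):

```python
def fed_avg(client_updates):
    # Weighted federated averaging (FedAvg). client_updates is a list of
    # (num_samples, weights) pairs, where weights is a flat parameter list.
    # Raw records never leave the client; only these vectors are combined.
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    return [sum(n * w[i] for n, w in client_updates) / total
            for i in range(dim)]

# Two hypothetical clinics with different amounts of local screening data.
global_model = fed_avg([(100, [0.2, 0.4]), (300, [0.6, 0.8])])
```

Weighting by sample count means the clinic with more data pulls the global model proportionally harder, without ever disclosing the data itself.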

13 pages, 1053 KiB  
Opinion
A Framework for the Design of Privacy-Preserving Record Linkage Systems
by Zixin Nie, Benjamin Tyndall, Daniel Brannock, Emily Gentles, Elizabeth Parish and Alison Banger
J. Cybersecur. Priv. 2025, 5(3), 44; https://doi.org/10.3390/jcp5030044 - 9 Jul 2025
Abstract
Record linkage can enhance the utility of data by bringing together data from different sources, increasing the available information about data subjects and providing more holistic views. Doing so, however, can increase privacy risks. To mitigate these risks, a family of methods known as privacy-preserving record linkage (PPRL) was developed, using techniques such as cryptography, de-identification, and the strict separation of roles to ensure that data subjects' privacy remains protected throughout the linkage process and that the resulting linked data pose no additional privacy risks. Building privacy protections into the architecture of the system (for instance, ensuring that data flows between parties do not allow the transmission of private information) is just as important as the technology used to obfuscate private information. In this paper, we present a technology-agnostic framework for designing PPRL systems that is focused on privacy protection, defining key roles, providing a system architecture with data flows, detailing system controls, and discussing privacy evaluations that ensure the system protects privacy. We hope that this framework can both help elucidate how currently deployed PPRL systems protect privacy and help developers design future PPRL systems.
(This article belongs to the Section Privacy)
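The framework is technology-agnostic, but one common PPRL building block it abstracts over is keyed hashing of normalized quasi-identifiers, so that data holders exchange unlinkable tokens rather than names. A sketch of that idea (the key and fields are invented; real systems add salting, Bloom-filter encodings, or similar hardening):

```python
import hashlib
import hmac

def linkage_token(name, dob, secret_key):
    # Normalize quasi-identifiers, then derive a keyed hash (HMAC-SHA256).
    # Only the designated linkage party holds secret_key, so data holders
    # cannot build a dictionary of tokens from guessed identities.
    normalized = f"{name.strip().lower()}|{dob}"
    return hmac.new(secret_key, normalized.encode(), hashlib.sha256).hexdigest()

key = b"shared-linkage-key"  # hypothetical key material
t1 = linkage_token(" Jane Doe ", "1980-01-02", key)
t2 = linkage_token("jane doe", "1980-01-02", key)
```

Normalization before hashing is what makes the same person match across sources; the role separation the paper emphasizes determines who is allowed to hold the key and who only ever sees tokens.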

22 pages, 818 KiB  
Article
Towards Reliable Fake News Detection: Enhanced Attention-Based Transformer Model
by Jayanti Rout, Minati Mishra and Manob Jyoti Saikia
J. Cybersecur. Priv. 2025, 5(3), 43; https://doi.org/10.3390/jcp5030043 - 9 Jul 2025
Abstract
The widespread rise of misinformation across digital platforms has increased the demand for accurate and efficient Fake News Detection (FND) systems. This study introduces an enhanced transformer-based architecture for FND, developed through comprehensive ablation studies and empirical evaluations on multiple benchmark datasets. The proposed model combines improved multi-head attention, dynamic positional encoding, and a lightweight classification head to effectively capture nuanced linguistic patterns while maintaining computational efficiency. To ensure robust training, techniques such as label smoothing, learning rate warm-up, and reproducibility protocols were incorporated. The model demonstrates strong generalization across three diverse datasets: FakeNewsNet, ISOT, and LIAR, achieving an average accuracy of 79.85%. Specifically, it attains 80% accuracy on FakeNewsNet, 100% on ISOT, and 59.56% on LIAR. With just 3.1 to 4.3 million parameters, the model achieves an 85% reduction in size compared to full-sized BERT architectures. These results highlight the model's effectiveness in balancing high accuracy with resource efficiency, making it suitable for real-world applications such as social media monitoring and automated fact-checking. Future work will explore multilingual extensions, cross-domain generalization, and integration with multimodal misinformation detection systems.
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
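Among the training techniques listed, label smoothing is simple to illustrate: the one-hot target is softened so the model is penalized for over-confident predictions. A sketch of the target construction (eps = 0.1 is a common default, not necessarily this paper's setting):

```python
def smoothed_targets(true_class, num_classes, eps=0.1):
    # Replace the one-hot target with a softened distribution: the true
    # class gets 1 - eps and the remaining eps mass is spread evenly
    # across the other classes.
    off = eps / (num_classes - 1)
    return [1.0 - eps if c == true_class else off for c in range(num_classes)]

# Binary fake-vs-real news labels: class 1 ("real") smoothed to 0.9.
targets = smoothed_targets(true_class=1, num_classes=2)
```

Training then minimizes cross-entropy against these softened targets instead of the hard labels.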

25 pages, 3917 KiB  
Article
Energy Consumption Framework and Analysis of Post-Quantum Key-Generation on Embedded Devices
by J. Cameron Patterson, William J. Buchanan and Callum Turino
J. Cybersecur. Priv. 2025, 5(3), 42; https://doi.org/10.3390/jcp5030042 - 8 Jul 2025
Abstract
The emergence of quantum computing and Shor's algorithm necessitates an imminent shift from current public key cryptography techniques to post-quantum-robust techniques. NIST has responded by standardising Post-Quantum Cryptography (PQC) algorithms, with ML-KEM (FIPS-203) slated to replace Elliptic Curve Diffie-Hellman (ECDH) for key exchange. A key practical concern for PQC adoption is energy consumption. This paper introduces a new framework for measuring the energy consumption of PQC key generation on a Raspberry Pi, using both the available traditional methods and the newly standardised ML-KEM algorithm via the widely used OpenSSL library.
(This article belongs to the Section Cryptography and Cryptology)
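The core accounting in such a framework is energy = power x time, averaged over many key generations. A simplified harness sketch; the callable workload and the constant power figure are stand-ins (the paper's framework invokes OpenSSL key generation and measures the Pi's actual power draw):

```python
import time

def average_keygen_energy(keygen, runs, power_watts):
    # Estimate per-keypair energy as (average wall time) x (power draw).
    # keygen      : zero-argument callable performing one key generation
    # power_watts : board power during the run, from an external meter
    # Returns joules per key generation.
    start = time.perf_counter()
    for _ in range(runs):
        keygen()
    elapsed = time.perf_counter() - start
    return (elapsed / runs) * power_watts

# Stand-in CPU workload instead of an actual OpenSSL ML-KEM keygen call;
# 3.5 W is a plausible but invented Raspberry Pi power figure.
energy_j = average_keygen_energy(lambda: sum(i * i for i in range(1000)),
                                 runs=100, power_watts=3.5)
```

Averaging over many runs amortizes timer resolution and per-call overhead, which matters because a single key generation completes in microseconds to milliseconds.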

24 pages, 2288 KiB  
Systematic Review
A Systematic Review on Hybrid AI Models Integrating Machine Learning and Federated Learning
by Jallal-Eddine Moussaoui, Mehdi Kmiti, Khalid El Gholami and Yassine Maleh
J. Cybersecur. Priv. 2025, 5(3), 41; https://doi.org/10.3390/jcp5030041 - 2 Jul 2025
Abstract
Cyber threats are growing in scale and complexity, outpacing the capabilities of traditional security systems. Machine learning (ML) models offer enhanced detection accuracy but often rely on centralized data, raising privacy concerns. Federated learning (FL), by contrast, enables decentralized model training but suffers from scalability and latency issues. Hybrid AI models, which integrate ML and FL techniques, have emerged as a promising solution to balance performance, privacy, and scalability in cybersecurity. This systematic review investigates the current landscape of hybrid AI models, evaluating their strengths and limitations across five key dimensions: accuracy, privacy preservation, scalability, explainability, and robustness. Findings indicate that hybrid models consistently outperform standalone approaches, yet challenges remain in real-time deployment and interpretability. Future research should focus on improving explainability, optimizing communication protocols, and integrating secure technologies such as blockchain to enhance real-world applicability.

14 pages, 752 KiB  
Article
A Framework for Compliance with Regulation (EU) 2024/1689 for Small and Medium-Sized Enterprises
by Sotirios Stampernas and Costas Lambrinoudakis
J. Cybersecur. Priv. 2025, 5(3), 40; https://doi.org/10.3390/jcp5030040 - 1 Jul 2025
Abstract
The European Union's Artificial Intelligence Act (EU AI Act) is expected to be a major legal breakthrough in taming AI's negative aspects by setting common rules and obligations for companies active in the EU Single Market. Globally, there is a surge in investment, from both governments and private firms, to encourage research, development, and innovation in AI. The EU recognizes that the new Regulation (EU) 2024/1689 is difficult for start-ups and SMEs to cope with, and it has announced the future release of tools to ease that difficulty. To facilitate the active participation of SMEs in the AI arena, we propose a framework that helps them comply with the challenging EU AI Act throughout the development life cycle of an AI system. We use the spiral SDLC model and map its phases and development tasks to the legal provisions of Regulation (EU) 2024/1689. Furthermore, the framework can be used to promote innovation, improve personnel expertise, reduce costs, and help companies avoid the substantial fines described in the Act.
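A phase-to-provision mapping of this kind can be represented as a simple lookup table. The pairing below is purely illustrative and is not the paper's actual mapping; the cited provisions (Articles 9-15) are, however, the high-risk-system obligations of Regulation (EU) 2024/1689:

```python
# Illustrative only: spiral-model phases keyed to high-risk-system
# obligations in Regulation (EU) 2024/1689. Not the paper's mapping.
SPIRAL_TO_AI_ACT = {
    "objective setting":       ["Art. 9 risk management system"],
    "risk analysis":           ["Art. 9 risk management system",
                                "Art. 15 accuracy, robustness, cybersecurity"],
    "development and testing": ["Art. 10 data and data governance",
                                "Art. 12 record-keeping"],
    "planning next iteration": ["Art. 11 technical documentation",
                                "Art. 14 human oversight"],
}

def provisions_for(phase):
    # Look up the provisions a development phase must satisfy;
    # unknown phases return an empty list.
    return SPIRAL_TO_AI_ACT.get(phase.lower(), [])
```

In a real compliance tool each entry would carry concrete development tasks and evidence artifacts, so that finishing a spiral iteration doubles as assembling the documentation the Act requires.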

29 pages, 2303 KiB  
Article
Denial-of-Service Attacks on Permissioned Blockchains: A Practical Study
by Mohammad Pishdar, Yixing Lei, Khaled Harfoush and Jawad Manzoor
J. Cybersecur. Priv. 2025, 5(3), 39; https://doi.org/10.3390/jcp5030039 - 30 Jun 2025
Abstract
Hyperledger Fabric (HLF) is a leading permissioned blockchain platform designed for enterprise applications. However, it faces significant security risks from Denial-of-Service (DoS) attacks targeting its core components. This study systematically investigated network-level DoS attack vectors against HLF, with a focus on threats to its ordering service, Membership Service Provider (MSP), peer nodes, consensus protocols, and architectural dependencies. We performed experiments on an HLF test bed to demonstrate how compromised components can be exploited to launch DoS attacks and degrade the performance and availability of the blockchain network. Key attack scenarios included manipulating block sizes to induce latency, discarding blocks to disrupt consensus, issuing malicious certificates via the MSP, colluding peers to sabotage validation, flooding external clients to overwhelm resources, misconfiguring Raft consensus parameters, and disabling CouchDB to cripple data access. The experimental results reveal severe impacts on availability, including increased latency, decreased throughput, and inaccessibility of the ledger. Our findings emphasize the need for proactive monitoring and robust defense mechanisms to detect and mitigate DoS threats. Finally, we discuss future research directions, including lightweight machine learning tailored to HLF, enhanced monitoring that aggregates logs from multiple sources, and collaboration with industry stakeholders to deploy pilot studies of security-enhanced HLF in operational environments.
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
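The block-size attack intuition can be shown with a toy simulation, not HLF code: Fabric's orderer cuts a block when it reaches a maximum message count or when a batch timeout expires, so inflating the maximum count forces lightly loaded transactions to wait for the timeout. The parameters mirror the BatchSize/BatchTimeout concepts, but all values are invented:

```python
def commit_latencies(arrival_times, max_msgs, batch_timeout):
    # Simplified model of block cutting: a block is cut when it holds
    # max_msgs transactions or batch_timeout has elapsed since the first
    # transaction in the pending batch. Returns per-transaction waits
    # between arrival and block cut.
    latencies, batch = [], []
    for t in arrival_times:
        if batch and (len(batch) == max_msgs or t - batch[0] >= batch_timeout):
            cut = batch[0] + batch_timeout if t - batch[0] >= batch_timeout else t
            latencies += [cut - a for a in batch]
            batch = []
        batch.append(t)
    if batch:  # flush the final batch at its timeout
        cut = batch[0] + batch_timeout
        latencies += [cut - a for a in batch]
    return latencies

# One transaction every 0.1 s; a sane block size vs. a tampered, inflated one.
arrivals = [0.1 * i for i in range(20)]
sane = commit_latencies(arrivals, max_msgs=5, batch_timeout=2.0)
tampered = commit_latencies(arrivals, max_msgs=500, batch_timeout=2.0)
```

With the inflated limit, no batch ever fills, so every transaction waits for the full timeout, which is exactly the latency degradation the experiments measure.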

26 pages, 1774 KiB  
Article
Evaluating End-User Defensive Approaches Against Phishing Using Education and Simulated Attacks in a Croatian University
by Zlatan Morić, Vedran Dakić, Mladen Plećaš and Ivana Ogrizek Biškupić
J. Cybersecur. Priv. 2025, 5(3), 38; https://doi.org/10.3390/jcp5030038 - 27 Jun 2025
Abstract
This study investigates the effectiveness of two cybersecurity awareness interventions (phishing simulations and organized online training) in enhancing end-user resilience to phishing attacks in a Croatian university setting. Three controlled phishing simulations and one targeted instructional module were executed across several organizational departments. The study assesses behavioral responses, compromise rates, and statistical associations with demographic variables, including age, department, and educational background. Although educational instruction yielded marginally fewer compromised users, statistical analysis revealed no meaningful difference between the two methods. The third phishing simulation, executed over a pre-holiday timeframe, showed a significantly elevated compromise rate, underscoring the influence of temporal and organizational context on employee alertness. These findings highlight the shortcomings of standalone awareness assessments and stress the necessity of ongoing, contextualized, and integrated cybersecurity training. They also offer practical guidance for developing more effective phishing defense strategies within organizational environments.
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
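The "no meaningful difference" finding comes from a significance test on compromise counts. For a 2x2 table (intervention group vs. compromised/not), Pearson's chi-square statistic has a closed form; a sketch with invented counts, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    # Pearson chi-square for a 2x2 contingency table:
    #                  compromised   not compromised
    #   trained             a              b
    #   simulated-only      c              d
    # Values above ~3.84 are significant at the 0.05 level (1 d.o.f.).
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts with similar compromise rates across groups.
stat = chi_square_2x2(12, 88, 15, 85)
```

A statistic this far below 3.84 is consistent with the study's conclusion that the two interventions did not differ meaningfully.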

19 pages, 1686 KiB  
Article
A Trust-Aware Incentive Mechanism for Federated Learning with Heterogeneous Clients in Edge Computing
by Jiantao Xu, Chen Zhang, Liu Jin and Chunhua Su
J. Cybersecur. Priv. 2025, 5(3), 37; https://doi.org/10.3390/jcp5030037 - 25 Jun 2025
Abstract
Federated learning enables privacy-preserving model training across distributed clients, yet real-world deployments face statistical, system, and behavioral heterogeneity, which degrades performance and increases vulnerability to adversarial clients. Existing incentive mechanisms often neglect participant credibility, leading to unfair rewards and reduced robustness. To address these issues, we propose a Trust-Aware Incentive Mechanism (TAIM), which evaluates client reliability through a multi-dimensional trust model incorporating participation frequency, gradient consistency, and contribution effectiveness. A trust-weighted reward allocation is formulated via a Stackelberg game, and a confidence-based soft filtering algorithm is introduced to mitigate the impact of unreliable updates. Experiments on FEMNIST, CIFAR-10, and Sent140 demonstrate that TAIM improves accuracy by up to 4.1%, reduces performance degradation under adaptive attacks by over 35%, and ensures fairer incentive distribution with a Gini coefficient below 0.3. TAIM offers a robust and equitable FL framework suitable for heterogeneous edge environments.
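The pieces named in the abstract (multi-dimensional trust, trust-weighted rewards, Gini fairness) can be sketched in a few lines. The weights, client values, and budget below are invented, and the real mechanism derives its allocation from a Stackelberg game rather than this direct proportional split:

```python
def trust_score(participation, consistency, contribution, w=(0.3, 0.4, 0.3)):
    # Multi-dimensional trust: each dimension is assumed normalized to
    # [0, 1]; the weights are illustrative, not calibrated values.
    return w[0] * participation + w[1] * consistency + w[2] * contribution

def allocate_rewards(budget, scores):
    # Trust-weighted split of a fixed reward budget across clients.
    total = sum(scores)
    return [budget * s / total for s in scores]

def gini(xs):
    # Gini coefficient of a reward distribution (0 = perfectly equal).
    xs = sorted(xs)
    n = len(xs)
    return sum((2 * i + 1 - n) * x for i, x in enumerate(xs)) / (n * sum(xs))

# Three hypothetical edge clients: reliable, flaky, and mostly reliable.
scores = [trust_score(1.0, 0.90, 0.8),
          trust_score(0.5, 0.40, 0.3),
          trust_score(0.9, 0.95, 0.7)]
rewards = allocate_rewards(100.0, scores)
```

The Gini coefficient of the resulting allocation is the fairness metric the paper reports staying below 0.3.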

17 pages, 6523 KiB  
Article
Enhancing User Experience with Visual Controls for Local Differential Privacy
by Xueting Li, Shiyao Dong and Amin Milani Fard
J. Cybersecur. Priv. 2025, 5(3), 36; https://doi.org/10.3390/jcp5030036 - 22 Jun 2025
Abstract
While Local Differential Privacy (LDP) offers strong privacy guarantees for IoT data collection, users often struggle to understand its implications and control their privacy settings. This paper presents a user-centric approach to implementing LDP in smart home environments, focusing on voice command privacy. We analyze privacy control patterns across major smart home platforms and propose a novel interface that translates complex LDP parameters into four intuitive privacy levels. The interface combines visual controls with concrete examples showing how privacy transformations affect voice commands. By mapping mathematical privacy parameters to user-friendly settings while maintaining theoretical guarantees, our approach explores making differential privacy more accessible in IoT environments. We validated our design through a usability study to understand its strengths in accessibility and key areas for refinement.
(This article belongs to the Special Issue Data Protection and Privacy)
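The core mapping (four intuitive levels onto an LDP budget epsilon) can be sketched with classic binary randomized response, which satisfies epsilon-local differential privacy. The level-to-epsilon values below are illustrative, not the paper's calibration:

```python
import math
import random

# Four intuitive privacy levels mapped to an LDP budget epsilon;
# smaller epsilon means stronger privacy. Values are illustrative.
PRIVACY_LEVELS = {"maximum": 0.5, "high": 1.0, "balanced": 2.0, "minimal": 4.0}

def randomized_response(true_bit, level, rng=random.random):
    # Classic binary randomized response: report the truth with
    # probability e^eps / (e^eps + 1), otherwise flip the bit.
    eps = PRIVACY_LEVELS[level]
    p_truth = math.exp(eps) / (math.exp(eps) + 1)
    return true_bit if rng() < p_truth else 1 - true_bit
```

The interface's job is then to show users what each level does to a concrete voice command, while the mathematical guarantee rides on epsilon alone.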
