Search Results (226)

Search Parameters:
Journal = JCP

17 pages, 1027 KiB  
Article
AI-Driven Security for Blockchain-Based Smart Contracts: A GAN-Assisted Deep Learning Approach to Malware Detection
by Imad Bourian, Lahcen Hassine and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 53; https://doi.org/10.3390/jcp5030053 - 1 Aug 2025
Abstract
In the modern era, the use of blockchain technology has been growing rapidly, and Ethereum smart contracts play an important role in securing decentralized application systems. However, these smart contracts are also susceptible to a large number of vulnerabilities, which pose significant threats to intelligent systems and IoT applications, leading to data breaches and financial losses. Traditional detection techniques, such as manual analysis and static automated tools, suffer from high false-positive rates and undetected security vulnerabilities. To address these problems, this paper proposes an Artificial Intelligence (AI)-based security framework that integrates Generative Adversarial Network (GAN)-based feature selection and deep learning techniques to classify and detect malware attacks on smart contract execution in decentralized blockchain networks. After an exhaustive pre-processing phase yielding a dataset of 40,000 malware and benign samples, the proposed model is evaluated and compared with related studies on a number of performance metrics, including training accuracy, training loss, and classification metrics (accuracy, precision, recall, and F1-score). Our combined approach achieved a remarkable accuracy of 97.6%, demonstrating its effectiveness in detecting malware and protecting blockchain systems. Full article
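The evaluation above reports accuracy, precision, recall, and F1-score. As a minimal illustration of those metrics (not the authors' code), they can be computed from the binary confusion counts as follows:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (1 = malware)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```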
28 pages, 6624 KiB  
Article
YoloMal-XAI: Interpretable Android Malware Classification Using RGB Images and YOLO11
by Chaymae El Youssofi and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 52; https://doi.org/10.3390/jcp5030052 - 1 Aug 2025
Abstract
As Android malware grows increasingly sophisticated, traditional detection methods struggle to keep pace, creating an urgent need for robust, interpretable, and real-time solutions to safeguard mobile ecosystems. This study introduces YoloMal-XAI, a novel deep learning framework that transforms Android application files into RGB image representations by mapping DEX (Dalvik Executable), Manifest.xml, and Resources.arsc files to distinct color channels. Evaluated on the CICMalDroid2020 dataset using YOLO11 pretrained classification models, YoloMal-XAI achieves 99.87% accuracy in binary classification and 99.56% in multi-class classification (Adware, Banking, Riskware, SMS, and Benign). Compared to ResNet-50, GoogLeNet, and MobileNetV2, YOLO11 offers competitive accuracy with at least 7× faster training over 100 epochs. Against YOLOv8, YOLO11 achieves comparable or superior accuracy while reducing training time by up to 3.5×. Cross-corpus validation using Drebin and CICAndMal2017 further confirms the model’s generalization capability on previously unseen malware. An ablation study highlights the value of integrating DEX, Manifest, and Resources components, with the full RGB configuration consistently delivering the best performance. Explainable AI (XAI) techniques—Grad-CAM, Grad-CAM++, Eigen-CAM, and HiRes-CAM—are employed to interpret model decisions, revealing the DEX segment as the most influential component. These results establish YoloMal-XAI as a scalable, efficient, and interpretable framework for Android malware detection, with strong potential for future deployment on resource-constrained mobile devices. Full article
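The core representation trick above maps three APK components to colour channels. A minimal sketch of that channel-mapping idea follows; the image side length and padding scheme are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def bytes_to_rgb(dex: bytes, manifest: bytes, resources: bytes, side: int = 64) -> np.ndarray:
    """Map three APK components to the R, G, B channels of one square image.

    Each byte stream is truncated or zero-padded to side*side bytes, so the
    DEX, Manifest, and Resources components each occupy one colour channel,
    as in the YoloMal-XAI idea.
    """
    n = side * side
    def channel(data: bytes) -> np.ndarray:
        buf = np.frombuffer(data[:n].ljust(n, b"\x00"), dtype=np.uint8)
        return buf.reshape(side, side)
    return np.stack([channel(dex), channel(manifest), channel(resources)], axis=-1)
```

The resulting (side, side, 3) array can be fed to any image classifier, which is what lets an off-the-shelf detector like YOLO11 be reused for malware classification.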

20 pages, 1059 KiB  
Article
The Knowledge Sovereignty Paradigm: Mapping Employee-Driven Information Governance Following Organisational Data Breaches
by Jeferson Martínez Lozano, Kevin Restrepo Bedoya and Juan Velez-Ocampo
J. Cybersecur. Priv. 2025, 5(3), 51; https://doi.org/10.3390/jcp5030051 - 31 Jul 2025
Abstract
This study explores the emergent dynamics of knowledge sovereignty within organisations following data breach incidents. Using qualitative analysis based on Benoit’s image restoration theory, this study shows that employees do more than relay official messages—they actively shape information governance after a cyberattack. Employees adapt Benoit’s response strategies (denial, evasion of responsibility, reducing offensiveness, corrective action, and mortification) based on how authentic they perceive the organisation’s response, their identification with the company, and their sense of fairness in crisis management. This investigation substantively extends extant crisis communication theory by showing how knowledge sovereignty is shaped through negotiation, as employees manage their dual role as breach victims and organisational representatives. The findings suggest that employees are key actors in post-breach information governance, and that their authentic engagement is critical to organisational recovery after cybersecurity incidents. Full article
36 pages, 856 KiB  
Systematic Review
Is Blockchain the Future of AI Alignment? Developing a Framework and a Research Agenda Based on a Systematic Literature Review
by Alexander Neulinger, Lukas Sparer, Maryam Roshanaei, Dragutin Ostojić, Jainil Kakka and Dušan Ramljak
J. Cybersecur. Priv. 2025, 5(3), 50; https://doi.org/10.3390/jcp5030050 - 29 Jul 2025
Abstract
Artificial intelligence (AI) agents are increasingly shaping vital sectors of society, including healthcare, education, supply chains, and finance. As their influence grows, AI alignment research plays a pivotal role in ensuring these systems are trustworthy, transparent, and aligned with human values. Leveraging blockchain technology, proven over the past decade in enabling transparent, tamper-resistant distributed systems, offers significant potential to strengthen AI alignment. However, despite its potential, the current AI alignment literature has yet to systematically explore the effectiveness of blockchain in facilitating secure and ethical behavior in AI agents. While existing systematic literature reviews (SLRs) in AI alignment address various aspects of AI safety and AI alignment, this SLR specifically examines the gap at the intersection of AI alignment, blockchain, and ethics. To address this gap, this SLR explores how blockchain technology can overcome the limitations of existing AI alignment approaches. We searched for studies containing keywords from AI, blockchain, and ethics domains in the Scopus database, identifying 7110 initial records on 28 May 2024. We excluded studies which did not answer our research questions and did not discuss the thematic intersection between AI, blockchain, and ethics to a sufficient extent. The quality of the selected studies was assessed on the basis of their methodology, clarity, completeness, and transparency, resulting in a final number of 46 included studies, the majority of which were journal articles. Results were synthesized through quantitative topic analysis and qualitative analysis to identify key themes and patterns. 
The contributions of this paper include the following: (i) presentation of the results of an SLR conducted to identify, extract, evaluate, and synthesize studies on the symbiosis of AI alignment, blockchain, and ethics; (ii) summary and categorization of the existing benefits and challenges in incorporating blockchain for AI alignment within the context of ethics; (iii) development of a framework that will facilitate new research activities; and (iv) establishment of the state of evidence with in-depth assessment. The proposed blockchain-based AI alignment framework in this study demonstrates that integrating blockchain with AI alignment can substantially enhance robustness, promote public trust, and facilitate ethical compliance in AI systems. Full article

18 pages, 2539 KiB  
Article
Empowering End-Users with Cybersecurity Situational Awareness: Findings from IoT-Health Table-Top Exercises
by Fariha Tasmin Jaigirdar, Carsten Rudolph, Misita Anwar and Boyu Tan
J. Cybersecur. Priv. 2025, 5(3), 49; https://doi.org/10.3390/jcp5030049 - 25 Jul 2025
Abstract
End-users in a decision-oriented Internet of Things (IoT) healthcare system are often left in the dark regarding critical security information necessary for making informed decisions about potential risks. This is partly due to the lack of transparency and system security awareness end-users have in such systems. To empower end-users and enhance their cybersecurity situational awareness, it is imperative to thoroughly document and report the runtime security controls in place, as well as the security-relevant aspects of the devices they rely on. While the need for better transparency is obvious, it remains uncertain whether current systems offer adequate security metadata for end-users and how future designs can be improved to ensure better visibility into the security measures implemented. To address this gap, we conducted table-top exercises with ten security and ICT experts to evaluate a typical IoT-Health scenario. These exercises revealed the critical role of security metadata, identified the available metadata to be presented to users, and suggested potential enhancements that could be integrated into system design. We present our observations from the exercises, highlighting experts' valuable suggestions, concerns, and views, backed by our in-depth analysis. Moreover, as a proof-of-concept of our study, we simulated three relevant use cases to detect cyber risks. This comprehensive analysis underscores critical considerations that can significantly improve future system protocols, ensuring end-users are better equipped to navigate and mitigate security risks effectively. Full article

36 pages, 8047 KiB  
Article
Fed-DTB: A Dynamic Trust-Based Framework for Secure and Efficient Federated Learning in IoV Networks: Securing V2V/V2I Communication
by Ahmed Alruwaili, Sardar Islam and Iqbal Gondal
J. Cybersecur. Priv. 2025, 5(3), 48; https://doi.org/10.3390/jcp5030048 - 19 Jul 2025
Abstract
The Internet of Vehicles (IoV) presents a vast opportunity for optimised traffic flow, road safety, and an enhanced user experience, aided by Federated Learning (FL). However, the distributed nature of IoV networks creates inherent problems regarding data privacy, security against adversarial attacks, and the management of available resources. This paper introduces Fed-DTB, a new dynamic trust-based framework for FL that aims to overcome these challenges in the context of IoV. Fed-DTB integrates adaptive trust evaluation that can quickly identify and exclude malicious clients to maintain the integrity of the learning process. A performance comparison with previous approaches shows that Fed-DTB improves accuracy in the first two training rounds and decreases the per-round training time. Fed-DTB is robust to non-IID data distributions and outperforms all other state-of-the-art approaches in final accuracy (87–88%), convergence rate, and adversary detection (99.86% accuracy). The key contributions include (1) a multi-factor trust evaluation mechanism with seven contextual factors, (2) correlation-based adaptive weighting that dynamically prioritises trust factors based on vehicular conditions, and (3) an optimisation-based client selection strategy that maximises collaborative reliability. This work opens up opportunities for more accurate, secure, and private collaborative learning in future intelligent transportation systems with the help of federated learning, while overcoming the conventional trade-off between security and efficiency. Full article
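The multi-factor trust evaluation with correlation-based adaptive weighting can be pictured as a weighted aggregation of per-factor scores, with weights renormalised from each factor's correlation with observed client reliability. The sketch below is an illustration under those assumptions, not the Fed-DTB algorithm itself:

```python
import numpy as np

def trust_scores(factors: np.ndarray, correlations: np.ndarray) -> np.ndarray:
    """Aggregate per-client trust from a (clients x factors) score matrix.

    Weights are the absolute correlations of each factor with past client
    reliability, renormalised to sum to 1 -- a stand-in for Fed-DTB's
    correlation-based adaptive weighting.
    """
    w = np.abs(np.asarray(correlations, dtype=float))
    w = w / w.sum()
    return np.asarray(factors, dtype=float) @ w

def select_clients(factors, correlations, threshold: float = 0.5):
    """Keep only clients whose aggregated trust exceeds the threshold."""
    scores = trust_scores(factors, correlations)
    return [i for i, s in enumerate(scores) if s > threshold]
```

In a real deployment the factor matrix would hold the seven contextual factors per vehicle, and the threshold would feed the optimisation-based client selection step.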

24 pages, 1991 KiB  
Article
A Multi-Feature Semantic Fusion Machine Learning Architecture for Detecting Encrypted Malicious Traffic
by Shiyu Tang, Fei Du, Zulong Diao and Wenjun Fan
J. Cybersecur. Priv. 2025, 5(3), 47; https://doi.org/10.3390/jcp5030047 - 17 Jul 2025
Abstract
With the increasing sophistication of network attacks, machine learning (ML)-based methods have shown promising performance in attack detection. However, ML-based methods often suffer from high false rates when tackling encrypted malicious traffic. To break through these bottlenecks, we propose EFTransformer, an encrypted-flow Transformer framework that combines semantic perception with multi-scale feature fusion to robustly and efficiently detect encrypted malicious traffic, making up for the shortcomings of ML in modeling ability and feature adequacy. EFTransformer introduces a channel-level extraction mechanism based on quintuples and a noise-aware clustering strategy to enhance the recognition of traffic patterns; adopts a dual-channel embedding method, using Word2Vec and FastText to capture global semantics and subword-level changes; and uses a Transformer-based classifier and an attention pooling module to achieve dynamic feature-weighted fusion, thereby improving the robustness and accuracy of malicious traffic detection. Our systematic experiments on the ISCX2012 dataset demonstrate that EFTransformer achieves the best detection performance, with an accuracy of up to 95.26%, a false positive rate (FPR) of 6.19%, and a false negative rate (FNR) of only 5.85%. These results show that EFTransformer achieves high detection performance against encrypted malicious traffic. Full article
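The attention pooling step mentioned above collapses a sequence of token embeddings into one flow representation by softmax-weighting the tokens. A minimal sketch of that mechanism (with a generic learned query vector, not EFTransformer's actual parameters):

```python
import numpy as np

def attention_pool(tokens: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Softmax-weighted pooling of token embeddings.

    tokens: (seq_len, dim) embeddings; query: (dim,) learned vector.
    Returns a single (dim,) flow representation -- the dynamic
    feature-weighted fusion idea, shown here in its simplest form.
    """
    logits = tokens @ query
    logits -= logits.max()          # subtract max for numerical stability
    weights = np.exp(logits)
    weights /= weights.sum()        # attention weights sum to 1
    return weights @ tokens
```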
(This article belongs to the Section Security Engineering & Applications)

28 pages, 1727 KiB  
Article
Detecting Jamming in Smart Grid Communications via Deep Learning
by Muhammad Irfan, Aymen Omri, Javier Hernandez Fernandez, Savio Sciancalepore and Gabriele Oligeri
J. Cybersecur. Priv. 2025, 5(3), 46; https://doi.org/10.3390/jcp5030046 - 15 Jul 2025
Abstract
Power-Line Communication (PLC) allows data transmission through existing power lines, thus avoiding the expensive deployment of ad hoc network infrastructures. However, power line networks remain vastly unattended, which allows tampering by malicious actors. In fact, an attacker can easily inject a malicious signal (jamming) with the aim of disrupting ongoing communications. In this paper, we propose a new solution to detect jamming attacks before they significantly affect the quality of the communication link, thus allowing the detection of a jammer (geographically) far away from a receiver. We consider two scenarios as a function of the receiver’s ability to know in advance the impact of the jammer on the received signal. In the first scenario (jamming-aware), we leverage a classifier based on a Convolutional Neural Network, which has been trained on both jammed and non-jammed signals. In the second scenario (jamming-unaware), we consider a one-class classifier based on autoencoders, allowing us to address the challenge of jamming detection as a classical anomaly detection problem. Our proposed solution can detect jamming attacks on PLC networks with an accuracy greater than 99% even when the jammer is 68 m away from the receiver while requiring training only on traffic acquired during the regular operation of the target PLC network. Full article
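The jamming-unaware branch treats detection as classical anomaly detection: a model trained only on normal traffic reconstructs new samples, and a reconstruction error above a threshold calibrated on benign data flags jamming. The toy stand-in below uses mean-based reconstruction instead of the paper's autoencoder; only the thresholding logic is being illustrated:

```python
import numpy as np

class MeanReconstructor:
    """Toy anomaly detector: 'reconstructs' a signal as the training mean.

    Stands in for a real autoencoder; the part being illustrated is the
    one-class workflow -- calibrate a threshold on benign traffic only,
    then flag samples with high reconstruction error as jammed.
    """
    def fit(self, normal: np.ndarray, percentile: float = 99.0):
        self.mean_ = normal.mean(axis=0)
        errors = np.linalg.norm(normal - self.mean_, axis=1)
        self.threshold_ = np.percentile(errors, percentile)
        return self

    def is_jammed(self, sample: np.ndarray) -> bool:
        return np.linalg.norm(sample - self.mean_) > self.threshold_
```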

36 pages, 1120 KiB  
Article
Triple-Shield Privacy in Healthcare: Federated Learning, p-ABCs, and Distributed Ledger Authentication
by Sofia Sakka, Nikolaos Pavlidis, Vasiliki Liagkou, Ioannis Panges, Despina Elizabeth Filippidou, Chrysostomos Stylios and Anastasios Manos
J. Cybersecur. Priv. 2025, 5(3), 45; https://doi.org/10.3390/jcp5030045 - 12 Jul 2025
Abstract
The growing influence of technology in the healthcare industry has led to the creation of innovative applications that improve convenience, accessibility, and diagnostic accuracy. However, health applications face significant challenges concerning user privacy and data security, as they handle extremely sensitive personal and medical information. Privacy-Enhancing Technologies (PETs), such as Privacy-Attribute-based Credentials, Differential Privacy, and Federated Learning, have emerged as crucial tools to tackle these challenges. Despite their potential, PETs are not widely utilized due to technical and implementation obstacles. This research introduces a comprehensive framework for protecting health applications from privacy and security threats, with a specific emphasis on gamified mental health apps designed to manage Attention Deficit Hyperactivity Disorder (ADHD) in children. Acknowledging the heightened sensitivity of mental health data, especially in applications for children, our framework prioritizes user-centered design and strong privacy measures. We suggest an identity management system based on blockchain technology to ensure secure and transparent credential management and incorporate Federated Learning to enable privacy-preserving AI-driven predictions. These advancements ensure compliance with data protection regulations, like GDPR, while meeting the needs of various stakeholders, including children, parents, educators, and healthcare professionals. Full article
(This article belongs to the Special Issue Data Protection and Privacy)

13 pages, 1053 KiB  
Opinion
A Framework for the Design of Privacy-Preserving Record Linkage Systems
by Zixin Nie, Benjamin Tyndall, Daniel Brannock, Emily Gentles, Elizabeth Parish and Alison Banger
J. Cybersecur. Priv. 2025, 5(3), 44; https://doi.org/10.3390/jcp5030044 - 9 Jul 2025
Abstract
Record linkage can enhance the utility of data by bringing data together from different sources, increasing the available information about data subjects and providing more holistic views. Doing so, however, can increase privacy risks. To mitigate these risks, a family of methods known as privacy-preserving record linkage (PPRL) was developed, using techniques such as cryptography, de-identification, and the strict separation of roles to ensure data subjects’ privacy remains protected throughout the linkage process, and the resulting linked data poses no additional privacy risks. Building privacy protections into the architecture of the system (for instance, ensuring that data flows between different parties in the system do not allow for transmission of private information) is just as important as the technology used to obfuscate private information. In this paper, we present a technology-agnostic framework for designing PPRL systems that is focused on privacy protection, defining key roles, providing a system architecture with data flows, detailing system controls, and discussing privacy evaluations that ensure the system protects privacy. We hope that the framework presented in this paper can both help elucidate how currently deployed PPRL systems protect privacy and help developers design future PPRL systems. Full article
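One widely used PPRL building block (a common technique in the family the paper describes, not necessarily the one this framework prescribes) is keyed hashing of normalised quasi-identifiers, so a linkage unit can match tokens without ever seeing the underlying values:

```python
import hashlib
import hmac
import unicodedata

def linkage_token(name: str, dob: str, secret: bytes) -> str:
    """Derive a privacy-preserving linkage token from quasi-identifiers.

    Identifiers are normalised (case, accents, whitespace) so trivially
    different spellings still match, then keyed with HMAC-SHA256 so the
    token is useless to anyone without the shared secret.
    """
    def norm(s: str) -> str:
        s = unicodedata.normalize("NFKD", s).encode("ascii", "ignore").decode()
        return " ".join(s.lower().split())
    message = f"{norm(name)}|{norm(dob)}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()
```

The strict separation of roles the paper emphasises matters here: the party holding the secret key must be different from the party comparing tokens, or the privacy guarantee collapses.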
(This article belongs to the Section Privacy)

22 pages, 818 KiB  
Article
Towards Reliable Fake News Detection: Enhanced Attention-Based Transformer Model
by Jayanti Rout, Minati Mishra and Manob Jyoti Saikia
J. Cybersecur. Priv. 2025, 5(3), 43; https://doi.org/10.3390/jcp5030043 - 9 Jul 2025
Abstract
The widespread rise of misinformation across digital platforms has increased the demand for accurate and efficient Fake News Detection (FND) systems. This study introduces an enhanced transformer-based architecture for FND, developed through comprehensive ablation studies and empirical evaluations on multiple benchmark datasets. The proposed model combines improved multi-head attention, dynamic positional encoding, and a lightweight classification head to effectively capture nuanced linguistic patterns, while maintaining computational efficiency. To ensure robust training, techniques such as label smoothing, learning rate warm-up, and reproducibility protocols were incorporated. The model demonstrates strong generalization across three diverse datasets, namely FakeNewsNet, ISOT, and LIAR, achieving an average accuracy of 79.85%. Specifically, it attains 80% accuracy on FakeNewsNet, 100% on ISOT, and 59.56% on LIAR. With just 3.1 to 4.3 million parameters, the model achieves an 85% reduction in size compared to full-sized BERT architectures. These results highlight the model's effectiveness in balancing high accuracy with resource efficiency, making it suitable for real-world applications such as social media monitoring and automated fact-checking. Future work will explore multilingual extensions, cross-domain generalization, and integration with multimodal misinformation detection systems. Full article
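Of the training techniques listed, label smoothing is simple enough to show concretely: the hard one-hot target is softened so the true class gets 1 - eps and the remaining mass is spread over the other classes. An illustrative sketch (the standard trick, not this paper's code):

```python
import numpy as np

def smooth_labels(labels: np.ndarray, num_classes: int, eps: float = 0.1) -> np.ndarray:
    """Convert hard class indices to label-smoothed one-hot targets.

    The true class receives probability 1 - eps; the remaining eps is
    spread uniformly over the other classes, which discourages the model
    from becoming overconfident.
    """
    off = eps / (num_classes - 1)
    targets = np.full((len(labels), num_classes), off)
    targets[np.arange(len(labels)), labels] = 1.0 - eps
    return targets
```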
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)

25 pages, 3917 KiB  
Article
Energy Consumption Framework and Analysis of Post-Quantum Key-Generation on Embedded Devices
by J. Cameron Patterson, William J. Buchanan and Callum Turino
J. Cybersecur. Priv. 2025, 5(3), 42; https://doi.org/10.3390/jcp5030042 - 8 Jul 2025
Abstract
The emergence of quantum computing and Shor’s algorithm necessitates an imminent shift from current public key cryptography techniques to post-quantum-robust techniques. The NIST has responded by standardising Post-Quantum Cryptography (PQC) algorithms, with ML-KEM (FIPS-203) slated to replace ECDH (Elliptic Curve Diffie-Hellman) for key exchange. A key practical concern for PQC adoption is energy consumption. This paper introduces a new framework for measuring PQC energy consumption on a Raspberry Pi when performing key generation. The framework uses both the available traditional methods and the newly standardised ML-KEM algorithm via the commonly utilised OpenSSL library. Full article
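Energy measurement requires hardware instrumentation, but the shape of such a framework can be sketched as a repeated-measurement harness around a key-generation callable. Here timing is a stand-in for the paper's energy readings, and the keygen callable is a placeholder (the paper drives real ML-KEM key generation through OpenSSL):

```python
import statistics
import time

def measure_keygen(keygen, runs: int = 50):
    """Time repeated key-generation calls; return (mean, stdev) in seconds.

    In an energy framework like the paper's, each timing window would be
    paired with current/voltage samples to derive joules per keypair.
    'keygen' is any zero-argument callable -- e.g. a hypothetical wrapper
    that invokes OpenSSL's ML-KEM key generation.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        keygen()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)
```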
(This article belongs to the Section Cryptography and Cryptology)

24 pages, 2288 KiB  
Systematic Review
A Systematic Review on Hybrid AI Models Integrating Machine Learning and Federated Learning
by Jallal-Eddine Moussaoui, Mehdi Kmiti, Khalid El Gholami and Yassine Maleh
J. Cybersecur. Priv. 2025, 5(3), 41; https://doi.org/10.3390/jcp5030041 - 2 Jul 2025
Abstract
Cyber threats are growing in scale and complexity, outpacing the capabilities of traditional security systems. Machine learning (ML) models offer enhanced detection accuracy but often rely on centralized data, raising privacy concerns. Federated learning (FL), by contrast, enables decentralized model training but suffers from scalability and latency issues. Hybrid AI models, which integrate ML and FL techniques, have emerged as a promising solution to balance performance, privacy, and scalability in cybersecurity. This systematic review investigates the current landscape of hybrid AI models, evaluating their strengths and limitations across five key dimensions: accuracy, privacy preservation, scalability, explainability, and robustness. Findings indicate that hybrid models consistently outperform standalone approaches, yet challenges remain in real-time deployment and interpretability. Future research should focus on improving explainability, optimizing communication protocols, and integrating secure technologies such as blockchain to enhance real-world applicability. Full article
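Most hybrid ML/FL designs surveyed build on federated averaging as the aggregation step: each client trains locally and the server averages parameters weighted by local sample counts. A minimal FedAvg sketch (the textbook baseline, not any one reviewed system):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Sample-size-weighted average of client model parameters (FedAvg).

    client_weights: list of 1-D parameter vectors, one per client;
    client_sizes: number of local training samples each client used.
    Clients with more data pull the global model further toward them.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()
```

The privacy benefit the review highlights comes from this shape: only parameter vectors cross the network, never the raw training data.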

14 pages, 752 KiB  
Article
A Framework for Compliance with Regulation (EU) 2024/1689 for Small and Medium-Sized Enterprises
by Sotirios Stampernas and Costas Lambrinoudakis
J. Cybersecur. Priv. 2025, 5(3), 40; https://doi.org/10.3390/jcp5030040 - 1 Jul 2025
Abstract
The European Union's Artificial Intelligence Act (EU AI Act) is expected to be a major legal breakthrough in the attempt to tame AI's negative aspects by setting common rules and obligations for companies active in the EU Single Market. Globally, there is a surge in investments, from both governments and private firms, to encourage research, development, and innovation in AI. The EU recognizes that the new Regulation (EU) 2024/1689 is difficult for start-ups and SMEs to cope with, and it has announced that tools will be released in the near future to ease that difficulty. To facilitate the active participation of SMEs in the AI arena, we propose a framework that can assist them in complying with the challenging EU AI Act during the development life cycle of an AI system. We use the spiral SDLC model and map its phases and development tasks to the legal provisions of Regulation (EU) 2024/1689. Furthermore, the framework can be used to promote innovation, improve personnel expertise, reduce costs, and help companies avoid the substantial fines described in the Act. Full article

29 pages, 2303 KiB  
Article
Denial-of-Service Attacks on Permissioned Blockchains: A Practical Study
by Mohammad Pishdar, Yixing Lei, Khaled Harfoush and Jawad Manzoor
J. Cybersecur. Priv. 2025, 5(3), 39; https://doi.org/10.3390/jcp5030039 - 30 Jun 2025
Abstract
Hyperledger Fabric (HLF) is a leading permissioned blockchain platform designed for enterprise applications. However, it faces significant security risks from Denial-of-Service (DoS) attacks targeting its core components. This study systematically investigated network-level DoS attack vectors against HLF, with a focus on threats to its ordering service, Membership Service Provider (MSP), peer nodes, consensus protocols, and architectural dependencies. In this research, we performed experiments on an HLF test bed to demonstrate how compromised components can be exploited to launch DoS attacks and degrade the performance and availability of the blockchain network. Key attack scenarios included manipulating block sizes to induce latency, discarding blocks to disrupt consensus, issuing malicious certificates via MSP, colluding peers to sabotage validation, flooding external clients to overwhelm resources, misconfiguring Raft consensus parameters, and disabling CouchDB to cripple data access. The experimental results reveal severe impacts on the availability, including increased latency, decreased throughput, and inaccessibility of the ledger. Our findings emphasize the need for proactive monitoring and robust defense mechanisms to detect and mitigate DoS threats. Finally, we discuss some future research directions, including lightweight machine learning tailored to HLF, enhanced monitoring by aggregating logs from multiple sources, and collaboration with industry stakeholders to deploy pilot studies of security-enhanced HLF in operational environments. Full article
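The block-size manipulation attack exploits how an ordering service cuts blocks: a block is emitted when either the batch fills or the batch timeout fires, so inflating the maximum batch size forces lightly loaded channels to wait out the full timeout. A toy fluid model of that trade-off (an illustration of the mechanism, not Hyperledger Fabric code):

```python
def block_cut_latency(tx_rate: float, max_batch: int, batch_timeout: float) -> float:
    """Seconds until the orderer cuts a block, in a simple fluid model.

    A block is cut when max_batch transactions have arrived or the batch
    timeout expires, whichever comes first. Raising max_batch on a quiet
    channel therefore pushes block latency up toward the full timeout --
    the effect the block-size manipulation experiment demonstrates.
    """
    time_to_fill = max_batch / tx_rate if tx_rate > 0 else float("inf")
    return min(time_to_fill, batch_timeout)
```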
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
