J. Cybersecur. Priv., Volume 5, Issue 4 (December 2025) – 36 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
42 pages, 1665 KB  
Article
Exploring Determinants of Information Security Systems Adoption in Saudi Arabian SMEs: An Integrated Multitheoretical Model
by Ali Abdu M Dighriri, Sarvjeet Kaur Chatrath and Masoud Mohammadian
J. Cybersecur. Priv. 2025, 5(4), 113; https://doi.org/10.3390/jcp5040113 - 18 Dec 2025
Abstract
High cybersecurity risks and attacks cause information theft, unauthorized access to data and information, reputational damage, and financial loss in small and medium enterprises (SMEs). This creates a need for SMEs to adopt information security systems through innovation and compliance with information security policies. This study seeks to develop an integrated research model assessing the adoption of InfoSec systems in SMEs based on three existing theories, namely the technology acceptance model (TAM), theory of reasoned action (TRA), and unified theory of acceptance and use of technology (UTAUT). A thorough review of the literature identified prior experience, enjoyment of new InfoSec technology, top management support, IT infrastructure, security training, legal-governmental regulations, and attitude as potential determinants of adoption of InfoSec systems. A self-developed and self-administered questionnaire was distributed to 418 employees, mid-level managers, and top-level managers working in SMEs operating in Riyadh, Saudi Arabia. The study found that prior experience, top management support, IT infrastructure, security training, and legal-governmental regulations have a positive impact on attitude toward InfoSec systems, which in turn positively influences the adoption of InfoSec systems. Gender, education, and occupation significantly moderated the impact of some determinants on attitude and, consequently, the adoption of InfoSec systems. Such an integrated framework offers actionable insights and recommendations, including enhancing information security awareness and compliance with information security policies, as well as increasing profitability within SMEs. The study findings make considerable theoretical contributions to the development of knowledge and deliver practical contributions towards the status of SMEs in Saudi Arabia. Full article
(This article belongs to the Section Security Engineering & Applications)
22 pages, 2261 KB  
Article
Statistical and Multivariate Analysis of the IoT-23 Dataset: A Comprehensive Approach to Network Traffic Pattern Discovery
by Humera Ghani, Shahram Salekzamankhani and Bal Virdee
J. Cybersecur. Priv. 2025, 5(4), 112; https://doi.org/10.3390/jcp5040112 - 16 Dec 2025
Viewed by 101
Abstract
The rapid expansion of Internet of Things (IoT) technologies has introduced significant challenges in understanding the complexity and structure of network traffic data, which is essential for developing effective cybersecurity solutions. This research presents a comprehensive statistical and multivariate analysis of the IoT-23 dataset to identify meaningful network traffic patterns and assess the effectiveness of various analytical methods for IoT security research. The study applies descriptive statistics, inferential analysis, and multivariate techniques, including Principal Component Analysis (PCA), DBSCAN clustering, and factor analysis (FA), to the publicly available IoT-23 dataset. Descriptive analysis reveals clear evidence of non-normal distributions: for example, the features src_bytes, dst_bytes, and src_pkts have skewness values of −4.21, −3.87, and −2.98, and kurtosis values of 38.45, 29.67, and 18.23, respectively. These values indicate highly skewed, heavy-tailed distributions with frequent outliers. Correlation analysis revealed a strong positive correlation (0.97) between orig_bytes and resp_bytes, and a strong negative correlation (−0.76) between duration and resp_bytes, while inferential statistics indicate that linear regression provides optimal modeling of data relationships. Key findings show that PCA is highly effective, capturing 99% of the dataset’s variance and enabling significant dimensionality reduction. DBSCAN clustering identifies six distinct clusters, highlighting diverse network traffic behaviors within IoT environments. In contrast, FA explains only 11.63% of the variance, indicating limited suitability for this dataset. These results establish important benchmarks for future IoT cybersecurity research and demonstrate the superior effectiveness of PCA and DBSCAN for analyzing complex IoT network traffic data. The findings offer practical guidance for researchers in selecting appropriate statistical methods for IoT dataset analysis, ultimately supporting the development of more robust cybersecurity solutions. Full article
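As a concrete illustration of the statistical workflow summarized above, the sketch below applies skewness/kurtosis computation, variance-preserving PCA, and DBSCAN clustering to numeric flow features. It is not the authors' code; the CSV file name, the assumption that the named columns are numeric, and the DBSCAN parameters are placeholders.

```python
# Illustrative sketch only (not the paper's implementation).
import pandas as pd
from scipy import stats
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

df = pd.read_csv("iot23_flows.csv")                 # hypothetical export of IoT-23 flows
num = df.select_dtypes("number").dropna()

for col in ["src_bytes", "dst_bytes", "src_pkts"]:  # features named in the abstract
    print(col, stats.skew(num[col]), stats.kurtosis(num[col]))

X = StandardScaler().fit_transform(num)
pca = PCA(n_components=0.99).fit(X)                 # keep enough components for 99% of variance
print("components retained:", pca.n_components_)

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(pca.transform(X))  # parameters are assumptions
print("clusters (excluding noise):", len(set(labels)) - (1 if -1 in labels else 0))
```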
(This article belongs to the Special Issue Intrusion/Malware Detection and Prevention in Networks—2nd Edition)
24 pages, 588 KB  
Article
Quantifying Privacy Risk of Mobile Apps as Textual Entailment Using Language Models
by Chris Y. T. Ma
J. Cybersecur. Priv. 2025, 5(4), 111; https://doi.org/10.3390/jcp5040111 - 12 Dec 2025
Viewed by 144
Abstract
Smart phones have become an integral part of our lives in modern society, as we carry and use them throughout the day. However, this “body part” may maliciously collect and leak our personal information without our knowledge. When we install mobile applications on our smart phones and grant their permission requests, these apps can use the sensors embedded in the smart phones and the stored data to gather and infer our personal information, preferences, and habits. In this paper, we present our preliminary results on quantifying the privacy risk of mobile applications by assessing whether requested permissions are necessary based on app descriptions, through textual entailment decided by language models (LMs). We observe that despite incorporating various improvements of LMs proposed in the literature for natural language processing (NLP) tasks, the performance of the trained model remains far from ideal. Full article
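The entailment framing described above can be approximated with an off-the-shelf NLI model: the app description is the premise and a statement that a permission is needed is the hypothesis. This is a hedged sketch using a public Hugging Face checkpoint, not the authors' trained model; the example texts are invented.

```python
# Minimal sketch of permission-necessity as textual entailment (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A simple flashlight app that turns the camera LED on and off."   # app description (toy)
hypothesis = "This app needs access to the user's precise location."        # permission claim (toy)

inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

for label, p in zip(model.config.id2label.values(), probs.tolist()):
    print(label, round(p, 3))   # low entailment probability suggests an unnecessary permission
```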
(This article belongs to the Section Privacy)
29 pages, 3212 KB  
Article
Leveraging Static Analysis for Feedback-Driven Security Patching in LLM-Generated Code
by Kamel Alrashedy, Abdullah Aljasser, Pradyumna Tambwekar and Matthew Gombolay
J. Cybersecur. Priv. 2025, 5(4), 110; https://doi.org/10.3390/jcp5040110 - 5 Dec 2025
Viewed by 467
Abstract
Large language models (LLMs) have shown remarkable potential for automatic code generation. Yet, these models share a weakness with their human counterparts: inadvertently generating code with security vulnerabilities that could allow unauthorized attackers to access sensitive data or systems. In this work, we propose Feedback-Driven Security Patching (FDSP), wherein LLMs automatically refine vulnerable generated code. The key to our approach is a unique framework that leverages automatic static code analysis to enable the LLM to create and implement potential solutions to code vulnerabilities. Further, we curate a novel benchmark, PythonSecurityEval, that can accelerate progress in the field of code generation by covering diverse, real-world applications, including databases, websites, and operating systems. Our proposed FDSP approach achieves the strongest improvements, reducing vulnerabilities by up to 33% when evaluated with Bandit and 12% with CodeQL and outperforming baseline refinement methods. Full article
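The feedback loop at the heart of FDSP can be pictured roughly as follows: run a static analyzer on the generated code and, if it reports issues, hand them back to the LLM as repair instructions. This is a hedged sketch of that general idea, not the released FDSP code; `ask_llm` is a hypothetical helper, and the Bandit JSON fields shown are the ones the tool normally emits.

```python
# Rough sketch of a static-analysis feedback loop (illustrative only).
import json
import subprocess
import tempfile

def bandit_issues(code: str) -> list[dict]:
    """Run Bandit on a code string and return the reported issues."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(["bandit", "-f", "json", path], capture_output=True, text=True)
    return json.loads(proc.stdout).get("results", [])

def refine(code: str, ask_llm, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        issues = bandit_issues(code)
        if not issues:
            break                                   # no remaining findings
        feedback = "\n".join(i["issue_text"] for i in issues)
        code = ask_llm(
            "Patch the following security issues and return only the fixed code:\n"
            f"{feedback}\n\n{code}"
        )
    return code
```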
(This article belongs to the Section Security Engineering & Applications)
27 pages, 56691 KB  
Article
MalVis: Large-Scale Bytecode Visualization Framework for Explainable Android Malware Detection
by Saleh J. Makkawy, Michael J. De Lucia and Kenneth E. Barner
J. Cybersecur. Priv. 2025, 5(4), 109; https://doi.org/10.3390/jcp5040109 - 4 Dec 2025
Viewed by 343
Abstract
As technology advances, developers continually create innovative solutions to enhance smartphone security. However, the rapid spread of Android malware poses significant threats to devices and sensitive data. The Android Operating System (OS)’s open-source nature and Software Development Kit (SDK) availability mainly contribute to this alarming growth. Conventional malware detection methods, such as signature-based, static, and dynamic analysis, face challenges in detecting obfuscated techniques, including encryption, packing, and compression, in malware. Although developers have created several visualization techniques for malware detection using deep learning (DL), they often fail to accurately identify the critical malicious features of malware. This research introduces MalVis, a unified visualization framework that integrates entropy and N-gram analysis to emphasize meaningful structural and anomalous operational patterns within the malware bytecode. By addressing significant limitations of existing visualization methods, such as insufficient feature representation, limited interpretability, small dataset sizes, and restricted data access, MalVis delivers enhanced detection capabilities, particularly for obfuscated and previously unseen (zero-day) malware. The framework leverages the MalVis dataset introduced in this work, a publicly available large-scale dataset comprising more than 1.3 million visual representations in nine malware classes and one benign class. A comprehensive comparative evaluation was performed against existing state-of-the-art visualization techniques using leading convolutional neural network (CNN) architectures, MobileNet-V2, DenseNet201, ResNet50, VGG16, and Inception-V3. To further boost classification performance and mitigate overfitting, the outputs of these models were combined using eight distinct ensemble strategies. To address the issue of imbalanced class distribution in the multiclass dataset, we employed an undersampling technique to ensure balanced learning across all types of malware. MalVis achieved superior results, with 95% accuracy, 90% F1-score, 92% precision, 89% recall, 87% Matthews Correlation Coefficient (MCC), and 98% Receiver Operating Characteristic Area Under Curve (ROC-AUC). These findings highlight the effectiveness of MalVis in providing interpretable and accurate representation features for malware detection and classification, making it valuable for research and real-world security applications. Full article
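To make the entropy-plus-N-gram idea concrete, the sketch below derives two simple representations from raw bytecode: sliding-window Shannon entropy and a log-scaled byte-bigram matrix that can serve as an image channel. It illustrates the general technique, not the MalVis pipeline; the input file name and window size are assumptions.

```python
# Illustrative entropy and 2-gram views of a bytecode file (not the MalVis code).
import numpy as np

def window_entropy(data: bytes, win: int = 256) -> np.ndarray:
    """Shannon entropy (bits) over consecutive byte windows."""
    values = []
    for i in range(0, max(len(data) - win, 1), win):
        chunk = np.frombuffer(data[i:i + win], dtype=np.uint8)
        counts = np.bincount(chunk, minlength=256).astype(float)
        p = counts[counts > 0] / counts.sum()
        values.append(-(p * np.log2(p)).sum())
    return np.array(values)

def bigram_image(data: bytes) -> np.ndarray:
    """256x256 matrix of byte-bigram counts, log-scaled for display."""
    arr = np.frombuffer(data, dtype=np.uint8)
    img = np.zeros((256, 256), dtype=np.float32)
    np.add.at(img, (arr[:-1], arr[1:]), 1.0)
    return np.log1p(img)

raw = open("classes.dex", "rb").read()      # hypothetical Android bytecode file
print(window_entropy(raw).mean(), bigram_image(raw).shape)
```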
(This article belongs to the Section Security Engineering & Applications)
27 pages, 9777 KB  
Article
Towards an End-to-End (E2E) Adversarial Learning and Application in the Physical World
by Dudi Biton, Jacob Shams, Satoru Koda, Asaf Shabtai, Yuval Elovici and Ben Nassi
J. Cybersecur. Priv. 2025, 5(4), 108; https://doi.org/10.3390/jcp5040108 - 1 Dec 2025
Viewed by 272
Abstract
The traditional process for learning patch-based adversarial attacks, conducted in the digital domain and later applied in the physical domain (e.g., via printed stickers), may suffer reduced performance due to adversarial patches’ limited transferability between domains. Given that previous studies have considered using film projectors to apply adversarial attacks, we ask: Can adversarial learning (i.e., patch generation) be performed entirely in the physical domain using a film projector? In this work, we propose the Physical-domain Adversarial Patch Learning Augmentation (PAPLA) framework, a novel end-to-end (E2E) framework that shifts adversarial learning from the digital domain to the physical domain using a film projector. We evaluate PAPLA in multiple scenarios, including controlled laboratory and realistic outdoor settings, demonstrating its ability to ensure attack success compared to conventional digital learning–physical application (DL-PA) methods. We also analyze how environmental factors such as projection surface color, projector strength, ambient light, distance, and the target object’s angle relative to the camera affect patch effectiveness. Finally, we demonstrate the feasibility of the attack against a parked car and a stop sign in a real-world outdoor environment. Our results show that under specific conditions, E2E adversarial learning in the physical domain eliminates transferability issues and ensures evasion of object detectors. We also discuss the challenges and opportunities of adversarial learning in the physical domain and identify where this approach is more effective than using a sticker. Full article
(This article belongs to the Section Privacy)
14 pages, 7840 KB  
Article
Evaluating Privacy Technologies in Digital Payments: A Balanced Framework
by Ioannis Fragkiadakis, Stefanos Gritzalis and Costas Lambrinoudakis
J. Cybersecur. Priv. 2025, 5(4), 107; https://doi.org/10.3390/jcp5040107 - 1 Dec 2025
Viewed by 321
Abstract
Privacy enhancement technologies play a significant role in the development of digital payment systems. At present, multiple innovative digital payment solutions have been introduced and may be implemented globally soon. As cyber threats continue to increase in complexity, security is a crucial factor to consider before adopting any technology. In addition to prioritizing security in the development of digital payment systems, it is essential to address user privacy concerns. Modern digital payment solutions offer numerous advantages over traditional systems; however, they also introduce new considerations that must be accounted for during implementation. These considerations go beyond legislative requirements and encompass new payment methods, including transactions made through mobile devices regardless of internet connectivity. A range of regulations and guidelines exist to ensure user privacy in financial transactions, with the General Data Protection Regulation (GDPR) being particularly notable, while technical reports have thoroughly examined the differences between various privacy-enhancing technologies. Additionally, it is important to note that all legal payment systems are required to maintain information for audit purposes. This paper introduces a comprehensive framework that integrates all critical considerations for selecting appropriate privacy enhancement technologies within digital payment systems, employing a detailed scoring system designed for convenience and adaptability so that it can also be used for purposes such as auditing. Thus, the proposed scoring framework integrates security, GDPR compliance, audit, privacy-preserving technical measures, and operational constraints to assess privacy technologies for digital payments. Full article
(This article belongs to the Section Privacy)
29 pages, 4103 KB  
Article
Bridging Cybersecurity Practice and Law: A Hands-On, Scenario-Based Curriculum Using the NICE Framework to Foster Skill Development
by Colman McGuan, Aadithyan Vijaya Raghavan, Komala M. Mandapati, Chansu Yu, Brian E. Ray, Debbie K. Jackson and Sathish Kumar
J. Cybersecur. Priv. 2025, 5(4), 106; https://doi.org/10.3390/jcp5040106 - 1 Dec 2025
Viewed by 423
Abstract
In an increasingly interconnected world, cybersecurity professionals play a pivotal role in safeguarding organizations from cyber threats. To secure their cyberspace, organizations are forced to adopt a cybersecurity framework such as the NIST National Initiative for Cybersecurity Education Workforce Framework for Cybersecurity (NICE Framework). Although these frameworks are a good starting point for businesses and offer critical information to identify, prevent, and respond to cyber incidents, they can be difficult to navigate and implement, particularly for small-medium businesses (SMBs). To help overcome this issue, this paper identifies the most frequent attack vectors targeting SMBs (Objective 1) and proposes a practical model of both technical and non-technical tasks, knowledge, skills, and abilities (TKSA) from the NICE Framework for those attacks (Objective 2). This research develops a scenario-based curriculum. Immersing learners in realistic cyber threat scenarios enhances their practical understanding of and preparedness for responding to cybersecurity incidents (Objective 3). Finally, this work integrates practical experience and real-life skill development into the curriculum (Objective 4). SMBs can use the model as a guide to evaluate and equip their existing workforce or to assist in hiring new employees. In addition, educational institutions can use the model to develop scenario-based learning modules to adequately equip the emerging cybersecurity workforce for SMBs. Trainees will have the opportunity to practice both technical and legal issues in a simulated environment, thereby strengthening their ability to identify, mitigate, and respond to cyber threats effectively. We piloted these learning modules as a semester-long course titled “Hack Lab” for both Computer Science (CS) and Law students at CSU during Spring 2024 and Spring 2025. According to the self-assessment survey at the end of the semester, students demonstrated substantial gains in confidence across four key competencies (identifying vulnerabilities and using tools, applying cybersecurity laws, recognizing steps in incident response, and explaining organizational response preparation), with an average improvement of +2.8 on a 1–5 scale. Separately, overall course evaluations averaged 4.4 for CS students and 4.0 for Law students on a 1–5 scale (against college averages of 4.21 and 4.19, respectively). Law students reported that the hands-on labs were difficult but were the most impactful experience, and they demonstrated a notable improvement in identifying vulnerabilities and understanding response processes. Full article
(This article belongs to the Section Security Engineering & Applications)
64 pages, 12541 KB  
Article
A Game-Theoretic Approach for Quantification of Strategic Behaviors in Digital Forensic Readiness
by Mehrnoush Vaseghipanah, Sam Jabbehdari and Hamidreza Navidi
J. Cybersecur. Priv. 2025, 5(4), 105; https://doi.org/10.3390/jcp5040105 - 26 Nov 2025
Viewed by 518
Abstract
Small and Medium-sized Enterprises (SMEs) face disproportionately high risks from Advanced Persistent Threats (APTs), which often evade traditional cybersecurity measures. Existing frameworks catalogue adversary tactics and defensive solutions but provide limited quantitative guidance for allocating limited resources under uncertainty, a challenge amplified by the growing use of AI in both offensive operations and digital forensics. This paper proposes a game-theoretic model for improving digital forensic readiness (DFR) in SMEs. The approach integrates the MITRE ATT&CK and D3FEND frameworks to map APT behaviors to defensive countermeasures and defines 32 custom DFR metrics, weighted using the Analytic Hierarchy Process (AHP), to derive utility functions for both attackers and defenders. The main analysis considers a non-zero-sum attacker–defender bimatrix game and yields a single Nash equilibrium in which the attacker concentrates on Impact-oriented tactics and the defender on Detect-focused controls. In a synthetic calibration across ten organizational profiles, the framework achieves a median readiness improvement of 18.0% (95% confidence interval: 16.3% to 19.7%) relative to pre-framework baselines, with targeted improvements in logging and forensic preservation typically reducing key attacker utility components by around 15–30%. A zero-sum variant of the game is also analyzed as a robustness check and exhibits consistent tactical themes, but all policy conclusions are drawn from the empirical non-zero-sum model. Despite relying on expert-driven AHP weights and synthetic profiles, the framework offers SMEs actionable, equilibrium-informed guidance for strengthening forensic preparedness against advanced cyber threats. Full article
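To illustrate the equilibrium computation described above in miniature, the snippet below solves a 2x2 attacker-defender bimatrix game with the nashpy library. The payoff values are invented placeholders, not the paper's AHP-weighted utilities.

```python
# Toy non-zero-sum attacker-defender game (placeholder payoffs, illustrative only).
import numpy as np
import nashpy as nash

# Rows: attacker tactics (e.g., Exfiltration, Impact); columns: defender controls (e.g., Detect, Isolate).
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])       # attacker utility
B = np.array([[-3.0, -1.0],
              [-4.0,  0.0]])     # defender utility (not simply -A, hence non-zero-sum)

game = nash.Game(A, B)
for attacker_strategy, defender_strategy in game.support_enumeration():
    print("attacker:", attacker_strategy, "defender:", defender_strategy)
```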
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
20 pages, 724 KB  
Article
A Lightweight Multimodal Framework for Misleading News Classification Using Linguistic and Behavioral Biometrics
by Mahmudul Haque, A. S. M. Hossain Bari and Marina L. Gavrilova
J. Cybersecur. Priv. 2025, 5(4), 104; https://doi.org/10.3390/jcp5040104 - 25 Nov 2025
Viewed by 323
Abstract
The widespread dissemination of misleading news presents serious challenges to public discourse, democratic institutions, and societal trust. Misleading-news classification (MNC) has been extensively studied through deep neural models that rely mainly on semantic understanding or large-scale pretrained language models. However, these methods often lack interpretability and are computationally expensive, limiting their practical use in real-time or resource-constrained environments. Existing approaches can be broadly categorized into transformer-based text encoders, hybrid CNN–LSTM frameworks, and fuzzy-logic fusion networks. To advance research on MNC, this study presents a lightweight multimodal framework that extends the Fuzzy Deep Hybrid Network (FDHN) paradigm by introducing a linguistic and behavioral biometric perspective to MNC. We reinterpret the FDHN architecture to incorporate linguistic cues such as lexical diversity, subjectivity, and contradiction scores as behavioral signatures of deception. These features are processed and fused with semantic embeddings, resulting in a model that captures both what is written and how it is written. The design of the proposed method balances feature complexity and model generalizability. Experimental results demonstrate that the inclusion of lightweight linguistic and behavioral biometric features significantly enhances model performance, yielding a test accuracy of 71.91 ± 0.23% and a macro F1 score of 71.17 ± 0.26%, outperforming the state-of-the-art method. The findings of the study underscore the utility of stylistic and affective cues in MNC while highlighting the need for model simplicity to maintain robustness and adaptability. Full article
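The "what is written plus how it is written" fusion can be approximated as below: compute simple linguistic cues (type-token ratio for lexical diversity, TextBlob subjectivity) and concatenate them with a sentence embedding. This is a hedged sketch of the general idea, not the FDHN architecture; the sentence-transformers checkpoint is an assumption.

```python
# Illustrative fusion of linguistic cues with a semantic embedding (not the FDHN code).
import numpy as np
from textblob import TextBlob
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed lightweight encoder

def fused_features(text: str) -> np.ndarray:
    tokens = text.lower().split()
    lexical_diversity = len(set(tokens)) / max(len(tokens), 1)   # type-token ratio
    subjectivity = TextBlob(text).sentiment.subjectivity          # 0 = objective, 1 = subjective
    semantic = encoder.encode(text)                               # dense sentence embedding
    return np.concatenate([[lexical_diversity, subjectivity], semantic])

print(fused_features("Shocking! Scientists finally admit the truth they hid for years.").shape)
```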
(This article belongs to the Special Issue Multimedia Security and Privacy)
30 pages, 388 KB  
Systematic Review
Privacy in Flux: A 35-Year Systematic Review of Legal Evolution, Effectiveness, and Global Challenges (U.S./E.U. Focus with International Comparisons)
by Kong Phang and Jihene Kaabi
J. Cybersecur. Priv. 2025, 5(4), 103; https://doi.org/10.3390/jcp5040103 - 22 Nov 2025
Viewed by 802
Abstract
Privacy harms have expanded alongside rapid technological change, challenging the adequacy of existing regulatory frameworks. This systematic review (1990–2025) maps documented privacy harms to specific legal mechanisms and observed enforcement outcomes across jurisdictions, using PRISMA-guided methods and ROBIS risk-of-bias assessment. We synthesize evidence on major regimes (e.g., GDPR, COPPA, CCPA, HIPAA, GLBA) and conduct comparative legal analysis across the U.S., E.U., and underexplored regions in Asia, Latin America, and Africa. Key findings indicate increased recognition of data subject rights, persistent gaps in cross-border data governance, and emerging risks from AI/ML/LLMs, IoT, and blockchain, including data breaches, algorithmic discrimination, and surveillance. While regulations have advanced, enforcement variability and fragmented standards limit effectiveness. We propose strategies for harmonization and risk-based, technology-neutral safeguards. While focusing on the U.S. sectoral and E.U. comprehensive models, we include targeted comparisons with Canada (PIPEDA), Australia (Privacy Act/APPs), Japan (APPI), India (DPDPA), Africa (POPIA/NDPR/Kenya DPA), and ASEAN interoperability instruments. This review presents an evidence-based framework for understanding the interplay between evolving harms, emerging technologies, and legal protections, and identifies priorities for strengthening global privacy governance. Full article
(This article belongs to the Special Issue Data Protection and Privacy)
16 pages, 822 KB  
Article
Deep Learning Approaches for Multi-Class Classification of Phishing Text Messages
by Miriam L. Munoz and Muhammad F. Islam
J. Cybersecur. Priv. 2025, 5(4), 102; https://doi.org/10.3390/jcp5040102 - 21 Nov 2025
Viewed by 423
Abstract
Phishing attacks, particularly Smishing (SMS phishing), have become a major cybersecurity threat, with attackers using social engineering tactics to take advantage of human vulnerabilities. Traditional detection models often struggle to keep up with the evolving sophistication of these attacks, especially on devices with constrained computational resources. This research proposes a chain transformer model that integrates GPT-2 for synthetic data generation and BERT for embeddings to detect Smishing within a multiclass dataset, including minority smishing variants. By utilizing compact, open-source transformer models designed to balance accuracy and efficiency, this study explores improved detection of phishing threats on text-based platforms. Experimental results demonstrate an accuracy rate exceeding 97% in detecting phishing attacks across multiple categories. The proposed chained transformer model achieved an F1-score of 0.97, precision of 0.98, and recall of 0.96, indicating strong overall performance. Full article
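A minimal chained pipeline in the spirit of this abstract might look as follows: GPT-2 generates extra samples for a minority class, BERT [CLS] embeddings represent each message, and a simple classifier is trained on top. The prompts, toy labels, and classifier choice are assumptions for illustration, not the authors' configuration.

```python
# Sketch of a GPT-2 augmentation + BERT embedding chain (illustrative only).
import torch
from transformers import pipeline, AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

generator = pipeline("text-generation", model="gpt2")
synthetic = generator("URGENT: your parcel is on hold, confirm at",   # seed for minority-class samples
                      max_new_tokens=30, num_return_sequences=3, do_sample=True)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return bert(**batch).last_hidden_state[:, 0, :].numpy()   # [CLS] embeddings

texts = ["Your account is locked, verify now at hxxp://example",   # toy smishing
         "Are we still on for lunch at 1pm?",                      # toy ham
         "You won a prize! Reply with your card number",           # toy smishing
         "Meeting moved to room 204 tomorrow"]                     # toy ham
labels = [1, 0, 1, 0]
clf = LogisticRegression(max_iter=1000).fit(embed(texts), labels)
```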
22 pages, 1015 KB  
Systematic Review
Gaps in AI-Compliant Complementary Governance Frameworks’ Suitability (for Low-Capacity Actors), and Structural Asymmetries (in the Compliance Ecosystem)—A Systematic Review
by William Walter Finch and Marya Butt
J. Cybersecur. Priv. 2025, 5(4), 101; https://doi.org/10.3390/jcp5040101 - 18 Nov 2025
Viewed by 1165
Abstract
This review examines AI governance centered on Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the EU Artificial Intelligence Act), alongside comparable instruments (ISO/IEC 42001, NIST AI RMF, OECD Principles, ALTAI). Using a hybrid systematic–scoping method, it maps obligations across actor roles and risk tiers, with particular attention to low-capacity actors, especially SMEs and public authorities. Across the surveyed literature, persistent gaps emerge in enforceability, proportionality, and auditability, compounded by frictions between the AI Act and GDPR and fragmented accountability along the value chain. Rather than introducing a formal model, this paper develops a conceptual lens—compliance asymmetry—to interrogate the structural frictions between regulatory ambition and institutional capacity. This framing enables the identification of normative and operational gaps that must be addressed in future model design. Full article
(This article belongs to the Section Security Engineering & Applications)
19 pages, 1939 KB  
Article
The Use of Artificial Intelligence in Cybercrime: Impact Analysis in Ecuador and Mitigation Strategies
by Carlos Varela Enríquez, Renato Toasa and Maryory Urdaneta
J. Cybersecur. Priv. 2025, 5(4), 100; https://doi.org/10.3390/jcp5040100 - 17 Nov 2025
Viewed by 739
Abstract
This article analyzes how artificial intelligence (AI) is influencing the evolution of cybercrime in Ecuador. The use of AI tools to create new threats, such as intelligent malware, automated phishing, and financial fraud, is on the rise. The main problem is the increasing sophistication of AI-driven cyberattacks and the limited preventive response capacity in Ecuador. In Ecuador, cybercrime rose by more than 7% in 2024 compared to 2023, and by nearly 130% between 2020 and 2021. This research focuses on exploring mitigation strategies based on international frameworks such as NIST and ISO, as well as developing measures through training and knowledge transfer. The results obtained are expected to help identify the main trends in AI-driven cyberthreats and propose a set of technical, legal, and training measures to strengthen public and private institutions in Ecuador. It is important to emphasize that the implementation of international standards, national policies, and specialized training is essential to address emerging cybersecurity risks in Ecuador. Full article
37 pages, 754 KB  
Article
Zero Trust in Practice: A Mixed-Methods Study Under the TOE Framework
by Angélica Pigola and Fernando de Souza Meirelles
J. Cybersecur. Priv. 2025, 5(4), 99; https://doi.org/10.3390/jcp5040099 - 14 Nov 2025
Viewed by 598
Abstract
This study examines the adoption and implementation of the Zero Trust (ZT) cybersecurity paradigm using the Technology–Organization–Environment (TOE) framework. While ZT is gaining traction as a security model, many organizations struggle to align strategic intent with effective implementation. We adopted a sequential mixed-methods design combining 27 semi-structured interviews with cybersecurity professionals and a survey of 267 experts across industries. The qualitative phase used an inductive approach to identify organizational challenges, whereas the quantitative phase employed Partial Least Squares Structural Equation Modeling (PLS-SEM) to test the hypothesized relationships. Results show that information security culture and investment significantly influence both strategic alignment and the technical implementation of ZT. Implementation acted as an intermediary mechanism through which these organizational factors affected governance and compliance outcomes. Strategic commitment alone was insufficient to drive effective implementation without strong cultural support. Qualitative insights underscored the importance of leadership engagement, cross-functional collaboration, and legacy infrastructure readiness in shaping outcomes. The findings emphasize the need for cultural alignment, targeted investments, and process maturity to ensure successful ZT adoption. Organizations can leverage these insights to prioritize resources, strengthen governance, and reduce implementation friction. This research is among the first to empirically investigate ZT implementation through the TOE lens. It contributes to cybersecurity management literature by integrating strategic, cultural, and operational dimensions of ZT adoption and offers practical guidance for decision-makers seeking to institutionalize Zero Trust principles. Full article
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)
40 pages, 5207 KB  
Article
Integrated Analysis of Malicious Software: Insights from Static and Dynamic Perspectives
by Maria-Mădălina Andronache, Alexandru Vulpe and Corneliu Burileanu
J. Cybersecur. Priv. 2025, 5(4), 98; https://doi.org/10.3390/jcp5040098 - 10 Nov 2025
Viewed by 1118
Abstract
Malware remains one of the most persistent and evolving threats to cybersecurity, necessitating robust analysis techniques to understand and mitigate its impact. This study presents a comprehensive analysis of selected malware samples using both static and dynamic analysis techniques. In the static phase, file structure, embedded strings, and code signatures were examined, while in the dynamic analysis phase, the malware was executed in a virtual sandbox environment to observe process creation, network communication, and file system changes. By combining these two approaches, various types of malware files could be characterized and have their key elements revealed. This improved the understanding of the code capabilities and evasive behaviors of malicious files. The goal of these analyses was to create a database of malware profiles and of tools that can be utilized to identify and analyze malware. The results demonstrate that integrating static and dynamic methodologies improves the accuracy of malware profiling and supports more effective threat detection and incident response strategies. Full article
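For the static phase in particular, a first pass over a Windows PE sample often looks like the sketch below: enumerate sections (high entropy hints at packing), list imported DLLs, and pull printable strings. This is a generic illustration with an assumed file name, not the toolchain built in the study.

```python
# Generic static triage of a PE sample (illustrative only).
import re
import pefile

path = "sample.exe"                                   # hypothetical sample
pe = pefile.PE(path)

for section in pe.sections:
    name = section.Name.rstrip(b"\x00").decode(errors="replace")
    print(name, hex(section.VirtualAddress), round(section.get_entropy(), 2))

if hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
    for entry in pe.DIRECTORY_ENTRY_IMPORT:
        print("imports from:", entry.dll.decode(errors="replace"))

data = open(path, "rb").read()
strings = re.findall(rb"[ -~]{6,}", data)             # printable ASCII runs of length >= 6
print("embedded strings found:", len(strings))
```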
(This article belongs to the Special Issue Intrusion/Malware Detection and Prevention in Networks—2nd Edition)
26 pages, 599 KB  
Article
Identifying and Modeling Barriers to Compliance with the NIS2 Directive: A DEMATEL Approach
by Konstantina Mentzelou, Panos T. Chountalas, Fotis C. Kitsios, Anastasios I. Magoutas and Thomas K. Dasaklis
J. Cybersecur. Priv. 2025, 5(4), 97; https://doi.org/10.3390/jcp5040097 - 7 Nov 2025
Viewed by 1533
Abstract
The implementation of the NIS2 Directive expands the scope of cybersecurity regulation across the European Union, placing new demands on both essential and important entities. Despite its importance, organizations face multiple barriers that undermine compliance, including lack of awareness, technical complexity, financial constraints, and regulatory uncertainty. This study identifies and models these barriers to provide a clearer view of the systemic challenges of NIS2 implementation. Building on a structured literature review, fourteen barriers were defined and validated through expert input. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method was then applied to examine their interdependencies and to map causal relationships. The analysis highlights lack of awareness and the evolving threat landscape as key drivers (i.e., causal factors) that reinforce each other. Technical complexity and financial constraints act as mediators transmitting the influence of these causal factors toward operational and governance failures. Operational disruptions, high reporting costs, and inadequate risk assessment emerge as the most dependent outcomes (i.e., effect factors), absorbing the impact of the driving and mediating factors. The findings suggest that interventions targeted at awareness-building, resource allocation, and risk management capacity have the greatest leverage for improving compliance and resilience. By clarifying the cause-and-effect dynamics among barriers, this study supports policymakers and managers in designing more effective strategies for NIS2 implementation and contributes to current debates on cybersecurity governance in critical infrastructures. Full article
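The DEMATEL computation itself is compact, as the worked sketch below shows for a toy three-barrier example (the matrix is invented, not the study's expert data): normalize the direct-influence matrix, derive the total-relation matrix, and read off prominence (R+C) and relation (R-C) to separate cause from effect factors.

```python
# Standard DEMATEL steps on a toy direct-influence matrix (illustrative values).
import numpy as np

A = np.array([[0, 3, 2],
              [1, 0, 3],
              [2, 1, 0]], dtype=float)       # expert ratings of barrier i's influence on barrier j

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s                                    # normalized direct-influence matrix
T = D @ np.linalg.inv(np.eye(len(A)) - D)    # total-relation matrix T = D (I - D)^-1

R, C = T.sum(axis=1), T.sum(axis=0)
for i, (prominence, relation) in enumerate(zip(R + C, R - C)):
    role = "cause" if relation > 0 else "effect"
    print(f"barrier {i}: prominence={prominence:.2f}, relation={relation:+.2f} ({role})")
```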
(This article belongs to the Section Security Engineering & Applications)
26 pages, 1484 KB  
Article
Enhancing Ransomware Threat Detection: Risk-Aware Classification via Windows API Call Analysis and Hybrid ML/DL Models
by Sarah Alhuwayshil, Sundaresan Ramachandran and Kyounggon Kim
J. Cybersecur. Priv. 2025, 5(4), 96; https://doi.org/10.3390/jcp5040096 - 5 Nov 2025
Viewed by 1073
Abstract
Ransomware attacks pose a serious threat to computer networks, causing widespread disruption to individual, corporate, governmental, and critical national infrastructures. To mitigate their impact, extensive research has been conducted to analyze ransomware operations. However, most prior studies have focused on decryption, post-infection response, or general family-level classification for performance evaluation, with limited attention to linking classification accuracy to each family’s threat level and behavioral patterns. In this study, we propose a classification framework for the most dangerous ransomware families targeting Windows systems, correlating model performance with defined threat levels (high, medium, and low) based on API call patterns. Two independent datasets were used, extracted from VirusTotal and Cuckoo Sandbox, and a cross-source evaluation strategy was applied, alternating training and testing roles between datasets to assess generalization ability and minimize source bias. The results show that the proposed approach, particularly when using XGBoost and LightGBM, achieved accuracy rates ranging from 84 to 100% across datasets. These findings confirm the effectiveness of our method in accurately classifying ransomware families while accounting for their severity and behavioral characteristics. Full article
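The cross-source protocol can be pictured roughly as follows: bag-of-API-calls features, train on reports from one sandbox source, test on the other, then swap. File names, column names, and hyperparameters below are assumptions, not the authors' setup.

```python
# Sketch of cross-source training/testing on API-call features (illustrative only).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

train = pd.read_csv("virustotal_reports.csv")     # hypothetical columns: api_calls, family
test = pd.read_csv("cuckoo_reports.csv")

vec = CountVectorizer(token_pattern=r"[A-Za-z0-9_]+")      # API names as tokens
X_train, X_test = vec.fit_transform(train["api_calls"]), vec.transform(test["api_calls"])

le = LabelEncoder().fit(pd.concat([train["family"], test["family"]]))
model = XGBClassifier(n_estimators=300, max_depth=6, eval_metric="mlogloss")
model.fit(X_train, le.transform(train["family"]))

print("cross-source accuracy:", accuracy_score(le.transform(test["family"]), model.predict(X_test)))
```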
(This article belongs to the Collection Machine Learning and Data Analytics for Cyber Security)
25 pages, 847 KB  
Systematic Review
AI-Augmented SOC: A Survey of LLMs and Agents for Security Automation
by Siddhant Srinivas, Brandon Kirk, Julissa Zendejas, Michael Espino, Matthew Boskovich, Abdul Bari, Khalil Dajani and Nabeel Alzahrani
J. Cybersecur. Priv. 2025, 5(4), 95; https://doi.org/10.3390/jcp5040095 - 5 Nov 2025
Viewed by 4633
Abstract
The increasing volume, velocity, and sophistication of cyber threats have placed immense pressure on modern Security Operations Centers (SOCs). Traditional rule-based and manual processes are proving insufficient, leading to alert fatigue, delayed responses, high false-positive rates, analyst dependency, and escalating operational costs. Recent advancements in Artificial Intelligence (AI) offer new opportunities to transform SOC workflows through automation and augmentation. Large Language Models (LLMs) and autonomous AI agents have shown strong potential in enhancing capabilities such as log summarization, alert triage, threat intelligence, incident response, report generation, asset discovery, and vulnerability management. This paper reviews recent developments in the application of LLMs and AI agents across these SOC functions, introducing a taxonomy that organizes their roles and capabilities within operational pipelines. While these technologies improve detection accuracy, response time, and analyst support, challenges persist, including model interpretability, adversarial robustness, integration with legacy systems, and the risk of hallucinations or data leakage. A detailed capability-maturity model outlines the levels of integration with SOC tasks. This survey synthesizes trends, identifies persistent limitations, and outlines future directions for trustworthy, explainable, and safe AI integration in SOC environments. Full article
(This article belongs to the Section Security Engineering & Applications)
17 pages, 911 KB  
Article
Anomaly Detection Against Fake Base Station Threats Using Machine Learning
by Amanul Islam, Sourav Purification and Sang-Yoon Chang
J. Cybersecur. Priv. 2025, 5(4), 94; https://doi.org/10.3390/jcp5040094 - 3 Nov 2025
Viewed by 1093
Abstract
Mobile networking in 4G and 5G remains vulnerable against fake base stations. A fake base station can inject and manipulate the radio resource control (RRC) communication protocol to disable the user equipment’s connectivity. To motivate our research, we empirically show that such a fake base station can cause an indefinite hold of the user equipment’s connectivity using our fake base station prototype against an off-the-shelf phone. To defend against this threat, we design and build an anomaly detection system to detect fake base station threats. It detects any base station’s deviations from the 4G/5G RRC protocol, which supports both the connectivity provision case (all works well and the user receives connectivity) and the connection-release case (connectivity cannot be provided at the time and connections are therefore released). Our scheme, based on unsupervised machine learning, dynamically and automatically controls and sets the detection parameters, which vary with mobility and the communication channel, and utilizes richer information to improve its effectiveness. Using software-defined radios and srsRAN, we implement a prototype of our scheme from sensing to data collection to machine-learning-based detection processing. Our empirical evaluations demonstrate the detection effectiveness and adaptability; i.e., our scheme accurately detects fake base stations deviating from the set protocol in mobile scenarios by adapting its model parameters. Our scheme achieves 100% accuracy in static scenarios against the fake base station threats. If the dynamic control is disabled, i.e., not adapting to mobility and different channel environments, the accuracy drops to 65–76%, but our scheme adjusts the model via dynamic training to recover to 100% accuracy. Full article
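In spirit, the detection side reduces to fitting an unsupervised model on features of observed RRC exchanges and refitting it as conditions change. The sketch below uses an Isolation Forest as a stand-in detector; the feature choice, contamination rate, and refit policy are assumptions, not the paper's design.

```python
# Simplified unsupervised anomaly detector for base-station behavior (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

def fit_detector(benign: np.ndarray):
    """benign: rows = observed RRC exchanges, columns = e.g. message timing/counts, signal metrics."""
    scaler = StandardScaler().fit(benign)
    model = IsolationForest(contamination=0.01, random_state=0).fit(scaler.transform(benign))
    return scaler, model

def looks_fake(scaler, model, features: np.ndarray) -> bool:
    return model.predict(scaler.transform(features.reshape(1, -1)))[0] == -1

# Dynamic control, loosely: when mobility or channel conditions shift, refit on a fresh
# window of recent observations instead of keeping a stale model.
```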
(This article belongs to the Section Security Engineering & Applications)
17 pages, 680 KB  
Article
Perceiving Digital Threats and Artificial Intelligence: A Psychometric Approach to Cyber Risk
by Diana Carbone, Francesco Marcatto, Francesca Mistichelli and Donatella Ferrante
J. Cybersecur. Priv. 2025, 5(4), 93; https://doi.org/10.3390/jcp5040093 - 3 Nov 2025
Viewed by 708
Abstract
The rapid digitalization of work and daily life has introduced a wide range of online threats, from common hazards such as malware and phishing to emerging challenges posed by artificial intelligence (AI). While technical aspects of cybersecurity have received extensive attention, less is known about how individuals perceive digital risks and how these perceptions shape protective behaviors. Building on the psychometric paradigm, this study investigated the perception of seven digital threats among a sample of 300 Italian workers employed in IT and non-IT sectors. Participants rated each hazard on dread and unknown risk dimensions and reported their cybersecurity expertise. Optimism bias and proactive awareness were also measured. Cluster analyses revealed four profiles based on different levels of dread and unknown risk ratings. The four profiles also differed in reported levels of expertise, optimism bias, and proactive awareness. Notably, AI was perceived as the least familiar and most uncertain hazard across groups, underscoring its salience in shaping digital risk perceptions. These findings highlight the heterogeneity of digital risk perception and suggest that tailored communication and training strategies, rather than one-size-fits-all approaches, are essential to fostering safer online practices. Full article
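The profile-extraction step can be illustrated in a few lines: standardize the dread and unknown-risk ratings and cluster respondents into four groups. The file and column names are assumptions; the study's own cluster-analysis settings are not reproduced here.

```python
# Illustrative clustering of psychometric ratings into four profiles (not the study's code).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

ratings = pd.read_csv("risk_ratings.csv")   # hypothetical columns: dread, unknown_risk (per respondent)
X = StandardScaler().fit_transform(ratings[["dread", "unknown_risk"]])

ratings["profile"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(ratings.groupby("profile")[["dread", "unknown_risk"]].mean())
```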
(This article belongs to the Section Security Engineering & Applications)
20 pages, 520 KB  
Review
Hashing in the Fight Against CSAM: Technology at the Crossroads of Law and Ethics
by Evangelia Daskalaki, Emmanouela Kokolaki and Paraskevi Fragopoulou
J. Cybersecur. Priv. 2025, 5(4), 92; https://doi.org/10.3390/jcp5040092 - 31 Oct 2025
Viewed by 1499
Abstract
Hashes are vital in limiting the spread of child sexual abuse material online, yet their use introduces unresolved technical, legal, and ethical challenges. This paper bridges a critical gap by analyzing both cryptographic and perceptual hashing, not only in terms of detection capabilities, but also their vulnerabilities and implications for privacy governance. Unlike prior work, it reframes CSAM detection as a multidimensional issue, at the intersection of cybersecurity, data protection law, and digital ethics. Three key contributions are made: first, a comparative evaluation of hashing techniques, revealing weaknesses, such as susceptibility to media edits, collision attacks, hash inversion, and data leakage; second, a call for standardized benchmarks and interoperable evaluation protocols to assess system robustness; and third, a legal argument that perceptual hashes qualify as personal data under EU law, with implications for transparency and accountability. Ethically, the paper underscores the tension faced by service providers in balancing user privacy with the duty to detect CSAM. It advocates for detection systems that are not only technically sound, but also legally defensible and ethically governed. By integrating technical analysis with legal insight, this paper offers a comprehensive framework for evaluating CSAM detection, within the broader context of digital safety and privacy. Full article
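The core contrast between the two hashing families can be demonstrated in a few lines: a cryptographic digest changes completely after a trivial edit, while a perceptual hash stays within a small Hamming distance. The snippet uses Pillow and the third-party imagehash package on an assumed benign test image; it is a generic illustration, not a CSAM-detection tool.

```python
# Cryptographic vs. perceptual hashing under a minor image edit (illustrative only).
import hashlib
import imagehash
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")                             # hypothetical benign test image
edited = img.filter(ImageFilter.GaussianBlur(radius=1))   # small visual edit

print("SHA-256 equal after edit:",
      hashlib.sha256(img.tobytes()).hexdigest() == hashlib.sha256(edited.tobytes()).hexdigest())

print("perceptual (pHash) Hamming distance:", imagehash.phash(img) - imagehash.phash(edited))
```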
(This article belongs to the Section Cryptography and Cryptology)
25 pages, 1777 KB  
Article
TwinGuard: Privacy-Preserving Digital Twins for Adaptive Email Threat Detection
by Taiwo Oladipupo Ayodele
J. Cybersecur. Priv. 2025, 5(4), 91; https://doi.org/10.3390/jcp5040091 - 29 Oct 2025
Viewed by 730
Abstract
Email continues to serve as a primary vector for cyber-attacks, with phishing, spoofing, and polymorphic malware evolving rapidly to evade traditional defences. Conventional email security systems, often reliant on static, signature-based detection, struggle to identify zero-day exploits and protect user privacy in increasingly data-driven environments. This paper introduces TwinGuard, a privacy-preserving framework that leverages digital twin technology to enable adaptive, personalised email threat detection. TwinGuard constructs dynamic behavioural models tailored to individual email ecosystems, facilitating proactive threat simulation and anomaly detection without accessing raw message content. The system integrates a BERT–LSTM hybrid for semantic and temporal profiling, alongside federated learning, secure multi-party computation (SMPC), and differential privacy to enable collaborative intelligence while preserving confidentiality. Empirical evaluations were conducted using both synthetic AI-generated email datasets and real-world datasets sourced from Hugging Face and Kaggle. TwinGuard achieved 98% accuracy, 97% precision, and a false positive rate of 3%, outperforming conventional detection methods. The framework offers a scalable, regulation-compliant solution that balances security efficacy with strong privacy protection in modern email ecosystems. Full article
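A minimal reading of the BERT-LSTM hybrid is sketched below: BERT token embeddings carry the semantics of a message, an LSTM summarizes the sequence, and a sigmoid head emits a threat score. Dimensions and the checkpoint are assumptions; the federated-learning, SMPC, and differential-privacy layers of TwinGuard are omitted.

```python
# Conceptual BERT-LSTM scorer (illustrative only; privacy machinery omitted).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertLstmScorer(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.lstm(tokens)
        return torch.sigmoid(self.head(h_n[-1]))          # probability the email is malicious

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["Your mailbox is full, verify your password now"], return_tensors="pt", truncation=True)
print(BertLstmScorer()(batch["input_ids"], batch["attention_mask"]))
```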
(This article belongs to the Special Issue Cybersecurity in the Age of AI and IoT: Challenges and Innovations)
22 pages, 1339 KB  
Article
AI-Powered Security for IoT Ecosystems: A Hybrid Deep Learning Approach to Anomaly Detection
by Deepak Kumar, Priyanka Pramod Pawar, Santosh Reddy Addula, Mohan Kumar Meesala, Oludotun Oni, Qasim Naveed Cheema, Anwar Ul Haq and Guna Sekhar Sajja
J. Cybersecur. Priv. 2025, 5(4), 90; https://doi.org/10.3390/jcp5040090 - 27 Oct 2025
Cited by 2 | Viewed by 938
Abstract
The rapid expansion of the Internet of Things (IoT) has introduced new vulnerabilities that traditional security mechanisms often fail to address effectively. Signature-based intrusion detection systems cannot adapt to zero-day attacks, while rule-based solutions lack scalability for the diverse and high-volume traffic in IoT environments. To strengthen the security framework for IoT, this paper proposes a deep learning-based anomaly detection approach that integrates Convolutional Neural Networks (CNNs) and Bidirectional Gated Recurrent Units (BiGRUs). The model is further optimized using the Moth–Flame Optimization (MFO) algorithm for automated hyperparameter tuning. To mitigate class imbalance in benchmark datasets, we employ Generative Adversarial Networks (GANs) for synthetic sample generation alongside Z-score normalization. The proposed CNN–BiGRU + MFO framework is evaluated on two widely used datasets, UNSW-NB15 and UCI SECOM. Experimental results demonstrate superior performance compared to several baseline deep learning models, achieving improvements across accuracy, precision, recall, F1-score, and ROC–AUC. These findings highlight the potential of combining hybrid deep learning architectures with evolutionary optimization for effective and generalizable intrusion detection in IoT systems. Full article
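An architecture-only sketch of a CNN-BiGRU detector is given below; the layer sizes and feature count are placeholders, and the Moth-Flame Optimization tuning and GAN-based balancing described in the abstract are not reproduced.

```python
# CNN-BiGRU intrusion-detection skeleton (placeholder hyperparameters, illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 42                                   # assumed flow-feature count

model = models.Sequential([
    layers.Input(shape=(n_features, 1)),          # each record treated as a 1-D feature sequence
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.GRU(64)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),        # attack vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```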
(This article belongs to the Special Issue Cybersecurity in the Age of AI and IoT: Challenges and Innovations)
29 pages, 1032 KB  
Article
Between Firewalls and Feelings: Modelling Trust and Commitment in Digital Banking Platforms
by Ruhunage Panchali Dias, Zazli Lily Wisker and Noor H. S. Alani
J. Cybersecur. Priv. 2025, 5(4), 89; https://doi.org/10.3390/jcp5040089 - 20 Oct 2025
Viewed by 1747
Abstract
Digital banking has become part of everyday life in Aotearoa–New Zealand, offering convenience but also raising questions of trust, security, and long-term commitment. This study examines how service quality, security and privacy, user experience, emotional attachment, and perceived risk shape customer trust and commitment in digital banking platforms. Trust is positioned as a key mediating factor, guided by the Technology Acceptance Model, Commitment–Trust Theory, Service Quality Theory, and Perceived Risk Theory. An online survey of 111 digital banking users from diverse backgrounds was conducted, and Hayes’s PROCESS Model 4 was applied to test both direct and indirect relationships. The results show that security/privacy and emotional attachment are the strongest predictors of commitment, while service quality and user experience contribute indirectly through trust. This study makes three contributions. First, it explains customer commitment rather than intention. Second, it compares the indirect paths through trust from service quality, security and privacy, user experience, and emotional attachment within one model using bias-corrected bootstrap confidence intervals. Third, in a sample with many experienced users, perceived risk shows no indirect effect, which suggests a boundary condition for risk-focused models. Full article
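For readers unfamiliar with the Model 4 logic, a simple-mediation indirect effect can be approximated with ordinary regressions and a percentile bootstrap, as sketched below (the study itself uses Hayes's PROCESS macro with bias-corrected intervals; the variable and file names here are assumptions).

```python
# Worked sketch of a simple-mediation (X -> M -> Y) indirect effect via bootstrap.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey.csv")   # hypothetical columns: security_privacy, trust, commitment

def indirect_effect(data: pd.DataFrame) -> float:
    a = sm.OLS(data["trust"], sm.add_constant(data["security_privacy"])).fit().params["security_privacy"]
    b = sm.OLS(data["commitment"],
               sm.add_constant(data[["trust", "security_privacy"]])).fit().params["trust"]
    return a * b                  # indirect path X -> M -> Y

rng = np.random.default_rng(0)
boot = [indirect_effect(df.iloc[rng.integers(0, len(df), len(df))]) for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% percentile CI [{lo:.3f}, {hi:.3f}]")
```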
(This article belongs to the Section Security Engineering & Applications)

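The indirect-effect tests referred to above (Hayes’s PROCESS Model 4 with bootstrap confidence intervals) can be approximated in plain NumPy: regress the mediator on the predictor, regress the outcome on both, and bootstrap the product of the two path coefficients. The sketch below uses synthetic data and a simple percentile bootstrap, so it illustrates the logic only and is not the authors' analysis (PROCESS additionally supports covariates and bias-corrected intervals).

```python
# Simplified percentile-bootstrap test of an indirect effect X -> M -> Y (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 111                                       # sample size matching the study; data here is synthetic
x = rng.normal(size=n)                        # e.g., perceived service quality
m = 0.5 * x + rng.normal(size=n)              # e.g., trust (mediator)
y = 0.6 * m + 0.1 * x + rng.normal(size=n)    # e.g., commitment (outcome)

def ols_slope(pred, resp):
    """Slope of a simple regression of resp on pred (with intercept)."""
    X = np.column_stack([np.ones_like(pred), pred])
    return np.linalg.lstsq(X, resp, rcond=None)[0][1]

def indirect(x, m, y):
    a = ols_slope(x, m)                                    # path a: X -> M
    X2 = np.column_stack([np.ones(len(x)), m, x])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][1]           # path b: M -> Y controlling for X
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                            # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An indirect effect is taken as supported when the bootstrap interval excludes zero, which is the criterion the study applies to each mediated path.
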
14 pages, 389 KB  
Article
A Similarity Measure for Linking CoinJoin Output Spenders
by Michael Herbert Ziegler, Mariusz Nowostawski and Basel Katt
J. Cybersecur. Priv. 2025, 5(4), 88; https://doi.org/10.3390/jcp5040088 - 18 Oct 2025
Viewed by 1064
Abstract
This paper introduces a novel similarity measure to link transactions that spend outputs of CoinJoin transactions, termed CoinJoin Spending Transactions (CSTs), by analyzing their on-chain properties, addressing the challenge of preserving user privacy in blockchain systems. Despite the adoption of privacy-enhancing techniques like CoinJoin, users remain vulnerable to transaction linkage through shared output patterns. The proposed method leverages timestamp analysis of mixed outputs and employs a one-sided Chamfer distance to quantify similarities between CSTs, enabling the identification of transactions associated with the same user. The approach is evaluated across three major CoinJoin implementations (Dash, Whirlpool, and Wasabi 2.0), demonstrating its effectiveness in detecting linked CSTs. Additionally, the work improves transaction classification rules for Wasabi 2.0 by introducing criteria for uncommon denomination outputs, reducing false positives. Results show that multiple CSTs spending shared CoinJoin outputs are prevalent, highlighting the practical significance of the similarity measure. The findings underscore the ongoing privacy risks posed by transaction linkage, even within privacy-focused protocols. This work contributes to the understanding of CoinJoin’s limitations and offers insights for developing more robust privacy mechanisms in decentralized systems. To the authors’ knowledge, this is the first work analyzing the linkage between CSTs. Full article
(This article belongs to the Section Privacy)

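The one-sided Chamfer distance used by the similarity measure has a compact form: for two sets of spend timestamps A and B, it averages, over every element of A, the absolute distance to that element's nearest neighbour in B, so it is deliberately asymmetric. A minimal NumPy sketch is given below; the way CoinJoin outputs are grouped into timestamp sets and the linking thresholds are specific to the paper and are not reproduced here.

```python
# One-sided Chamfer distance between two sets of timestamps (illustrative sketch).
import numpy as np

def one_sided_chamfer(a, b):
    """Mean distance from each element of `a` to its nearest element of `b` (asymmetric)."""
    a = np.asarray(a, dtype=float).reshape(-1, 1)
    b = np.asarray(b, dtype=float).reshape(1, -1)
    return float(np.abs(a - b).min(axis=1).mean())

# Example: hypothetical timestamps (seconds) at which mixed outputs of two candidate CSTs were spent.
cst_1 = [120.0, 340.5, 900.2]
cst_2 = [118.7, 338.9, 1500.0]
print(one_sided_chamfer(cst_1, cst_2))   # small values suggest similar spending behaviour
print(one_sided_chamfer(cst_2, cst_1))   # note: generally differs from the reverse direction
```
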
56 pages, 732 KB  
Review
The Erosion of Cybersecurity Zero-Trust Principles Through Generative AI: A Survey on the Challenges and Future Directions
by Dan Xu, Iqbal Gondal, Xun Yi, Teo Susnjak, Paul Watters and Timothy R. McIntosh
J. Cybersecur. Priv. 2025, 5(4), 87; https://doi.org/10.3390/jcp5040087 - 15 Oct 2025
Viewed by 4265
Abstract
Generative artificial intelligence (AI) and persistent empirical gaps are reshaping the cyber threat landscape faster than Zero-Trust Architecture (ZTA) research can respond. We reviewed 10 recent ZTA surveys and 136 primary studies (2022–2024) and found that 98% provided only partial or no real-world validation, leaving several core controls largely untested. Our critique therefore proceeds on two axes: first, mainstream ZTA research is empirically under-powered and operationally unproven; second, generative-AI attacks exploit these very weaknesses, accelerating policy bypass and detection failure. To expose this compounding risk, we contribute the Cyber Fraud Kill Chain (CFKC), a seven-stage attacker model (target identification, preparation, engagement, deception, execution, monetization, and cover-up) that maps specific generative techniques to the NIST SP 800-207 components they erode. The CFKC highlights how synthetic identities, context manipulation, and adversarial telemetry drive up false-negative rates, extend dwell time, and sidestep audit trails, thereby undermining the Zero-Trust principles of “verify explicitly” and “assume breach”. Existing guidance offers no systematic countermeasures for AI-scaled attacks, and compliance regimes struggle to audit content that AI can mutate on demand. Finally, we outline research directions for adaptive, evidence-driven ZTA, and we argue that incremental extensions of current ZTA are insufficient; only a generative-AI-aware redesign will sustain defensive parity in the coming threat cycle. Full article
(This article belongs to the Section Security Engineering & Applications)

26 pages, 1417 KB  
Article
A Unified, Threat-Validated Taxonomy for Hardware Security Assurance
by Shao-Fang Wen and Arvind Sharma
J. Cybersecur. Priv. 2025, 5(4), 86; https://doi.org/10.3390/jcp5040086 - 13 Oct 2025
Viewed by 840
Abstract
Hardware systems are foundational to critical infrastructure, embedded devices, and consumer products, making robust security assurance essential. However, existing hardware security standards remain fragmented, inconsistent in scope, and difficult to integrate, creating gaps in protection and inefficiencies in assurance planning. This paper proposes a unified, standard-aligned, and threat-validated taxonomy of Security Objective Domains (SODs) for hardware security assurance. The taxonomy was inductively derived from 1287 requirements across ten internationally recognized standards using AI-assisted clustering and expert validation, resulting in 22 domains structured by the Boundary-Driven System of Interest model. Each domain was then validated against 167 documented hardware-related threats from CWE/CVE databases, regulatory advisories, and incident reports. This threat-informed mapping enables quantitative analysis of assurance coverage, prioritization of high-risk areas, and identification of cross-domain dependencies. The framework harmonizes terminology, reduces redundancy, and addresses assurance gaps, offering a scalable basis for sector-specific profiles, automated compliance tooling, and evidence-driven risk management. Looking forward, the taxonomy can be extended with sector-specific standards, expanded threat datasets, and integration of weighted severity metrics such as CVSS to further enhance risk-based assurance. Full article
(This article belongs to the Section Security Engineering & Applications)

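The quantitative coverage analysis mentioned above reduces to counting, per Security Objective Domain, how many documented threats map onto it and flagging domains that receive little or none. The toy sketch below uses invented domain names and threat identifiers purely to illustrate that computation; it is not the paper's taxonomy or threat dataset.

```python
# Toy coverage computation: threats mapped to Security Objective Domains (SODs).
# Domain names and threat IDs below are invented for illustration only.
from collections import Counter

threat_to_domains = {
    "CWE-1191": ["Debug Interface Protection"],
    "CWE-1300": ["Side-Channel Resistance", "Key Management"],
    "CVE-2023-0001": ["Firmware Integrity"],
    "CVE-2023-0002": ["Firmware Integrity", "Secure Boot"],
}
all_domains = ["Debug Interface Protection", "Side-Channel Resistance",
               "Key Management", "Firmware Integrity", "Secure Boot",
               "Supply Chain Provenance"]

# Count how many documented threats touch each domain.
coverage = Counter(d for domains in threat_to_domains.values() for d in domains)
for domain in all_domains:
    hits = coverage.get(domain, 0)
    flag = "  <-- no documented threat mapped" if hits == 0 else ""
    print(f"{domain:28s} {hits}{flag}")
```
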
20 pages, 2594 KB  
Article
Evaluating the Generalization Gaps of Intrusion Detection Systems Across DoS Attack Variants
by Roshan Jameel, Khyati Marwah, Sheikh Mohammad Idrees and Mariusz Nowostawski
J. Cybersecur. Priv. 2025, 5(4), 85; https://doi.org/10.3390/jcp5040085 - 11 Oct 2025
Viewed by 921
Abstract
Intrusion Detection Systems (IDS) play a vital role in safeguarding networks, yet their effectiveness is often challenged as cyberattacks evolve in new and unexpected ways. Machine learning models, although very powerful, usually perform well only on data that closely resembles what they were trained on; when faced with unfamiliar traffic, they often misclassify it. In this work, we examine this generalization gap by training IDS models on one Denial-of-Service (DoS) variant, DoS Hulk, and testing them against other variants such as GoldenEye, Slowloris, and Slowhttptest. Our approach combines careful preprocessing, dimensionality reduction with Principal Component Analysis (PCA), and model training using Random Forests and Deep Neural Networks. To better understand model behavior, we tuned decision thresholds beyond the default of 0.5 and found that small adjustments can significantly affect results. We also applied Shapley Additive Explanations (SHAP) to shed light on which features the models rely on, revealing a tendency to focus on fixed components that do not generalize well. Finally, using Uniform Manifold Approximation and Projection (UMAP), we visualized feature distributions and observed overlaps between the training and testing datasets, but these did not translate into improved detection performance. Our findings highlight an important lesson: visual or apparent similarity between datasets does not guarantee generalization, and building robust IDS requires exposure to diverse attack patterns during training. Full article

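The evaluation protocol sketched in the abstract, training on one DoS variant and testing on another with PCA features, a Random Forest, and a tuned decision threshold, can be outlined roughly as follows. The file names, label encoding, component count, and 0.3 threshold are illustrative assumptions, not the authors' settings.

```python
# Sketch of cross-variant evaluation: train on one DoS variant, test on another.
# File paths, label column, PCA size, and the 0.3 threshold are illustrative assumptions;
# labels are assumed to be numeric, 0 (benign) / 1 (attack), and all features numeric.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

train = pd.read_csv("dos_hulk.csv")        # hypothetical training split (DoS Hulk)
test = pd.read_csv("dos_slowloris.csv")    # hypothetical unseen variant (Slowloris)
label = "Label"

scaler = StandardScaler().fit(train.drop(columns=[label]))
pca = PCA(n_components=20).fit(scaler.transform(train.drop(columns=[label])))

X_train = pca.transform(scaler.transform(train.drop(columns=[label])))
X_test = pca.transform(scaler.transform(test.drop(columns=[label])))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, train[label])

# Tune the decision threshold instead of using the default 0.5.
threshold = 0.3
probs = clf.predict_proba(X_test)[:, 1]
preds = (probs >= threshold).astype(int)
print(classification_report(test[label], preds))
```

The gap the paper measures is the drop in these test metrics relative to a held-out split of the training variant itself.
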
15 pages, 577 KB  
Article
Blockchain-Enabled GDPR Compliance Enforcement for IIoT Data Access
by Amina Isazade, Ali Malik and Mohammed B. Alshawki
J. Cybersecur. Priv. 2025, 5(4), 84; https://doi.org/10.3390/jcp5040084 - 3 Oct 2025
Viewed by 1015
Abstract
The General Data Protection Regulation (GDPR) imposes additional demands and obligations on service providers that handle and process personal data. In this paper, we examine how advanced cryptographic techniques can be employed to develop a privacy-preserving solution for ensuring GDPR compliance in Industrial Internet of Things (IIoT) systems. The primary objective is to ensure that sensitive data from IIoT devices is encrypted and accessible only to authorized entities, in accordance with Article 32 of the GDPR. The proposed system combines Decentralized Attribute-Based Encryption (DABE) with smart contracts on a blockchain to create a decentralized way of managing access to IIoT systems. The proposed system is applied to an IIoT use case in which industrial sensors collect operational data that is encrypted under DABE. The encrypted data is stored in the IPFS decentralized storage system. The access policy and IPFS hash are stored in the blockchain’s smart contracts, allowing only authorized and compliant entities to retrieve the data based on matching attributes. This decentralized design ensures that information remains encrypted and secure until it is retrieved by legitimate entities, whose access rights are automatically enforced by smart contracts. The implementation and evaluation of the proposed system are presented and discussed, with results demonstrating its promise. Full article
(This article belongs to the Special Issue Data Protection and Privacy)

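The access flow summarized above can be mocked end to end in a few lines: encrypt the sensor reading, store the ciphertext off-chain under a content identifier, and have an on-chain record release that identifier only to callers whose attributes satisfy the policy. The sketch below substitutes symmetric Fernet encryption for DABE, a SHA-256 digest for the IPFS CID, and an in-memory dictionary for the smart contract, so it illustrates only the control flow, not the cryptography or the chain integration used in the paper.

```python
# Conceptual mock of the access flow: DABE replaced by Fernet, IPFS by a dict keyed on a
# SHA-256 digest, and the smart contract by an in-memory policy record. Illustration only.
import hashlib, json
from cryptography.fernet import Fernet

key = Fernet.generate_key()
reading = json.dumps({"sensor": "press-07", "temp_c": 81.4}).encode()
ciphertext = Fernet(key).encrypt(reading)

off_chain_store = {}                                   # stand-in for IPFS
cid = hashlib.sha256(ciphertext).hexdigest()           # stand-in for the IPFS CID
off_chain_store[cid] = ciphertext

contract = {                                           # stand-in for the smart contract
    "cid": cid,
    "policy": {"role": "maintenance_engineer", "site": "plant_a"},
}

def request_access(attributes):
    """Release the content identifier only if every policy attribute matches."""
    if all(attributes.get(k) == v for k, v in contract["policy"].items()):
        return contract["cid"]
    raise PermissionError("attributes do not satisfy the access policy")

cid_out = request_access({"role": "maintenance_engineer", "site": "plant_a"})
print(Fernet(key).decrypt(off_chain_store[cid_out]))   # authorized party recovers the reading
```

In the actual system, the attribute check and ciphertext release are enforced on-chain, and DABE ensures that even a leaked ciphertext cannot be decrypted without keys matching the policy.
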