J. Cybersecur. Priv., Volume 6, Issue 2 (April 2026) – 13 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
23 pages, 831 KB  
Article
Security Aspects of Zones and Conduits in IEC 62443
by Martin Gilje Jaatun, Mary Ann Lundteigen, Christoph Thieme, Lars Halvdan Flå, Karin Bernsmed, Roald Lygre and Fredrik Gratte
J. Cybersecur. Priv. 2026, 6(2), 52; https://doi.org/10.3390/jcp6020052 - 12 Mar 2026
Abstract
The IEC 62443 standard defines that, based on risk assessment, different parts of an Industrial Automation and Control System (IACS) may have different security levels, and that parts with the same security level can be designated as separate zones. Furthermore, communication between different zones, both intra-IACS and inter-IACS, can be done via conduits. In this article, we argue that zones and particularly conduits can benefit from more detailed discussions of their architecture and implementation. Consequently, as novel contributions we (1) describe detailed principles for implementing conduits; (2) outline a process for connecting zones with potentially different Security Levels (SLs), expressed in the form of a flow chart; and (3) discuss challenges related to the application of zones and conduits in practice.
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
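The paper's flow chart for connecting zones is not reproduced on this listing page, but the underlying rule it builds on can be sketched in a few lines (a minimal illustration under the simple assumption that a conduit must satisfy the highest SL of the zones it connects; the function name and this heuristic are not the paper's exact procedure):

```python
# Sketch: deciding the target Security Level (SL) a conduit must meet
# when it connects IEC 62443 zones. ASSUMPTION: we use the simple
# "conduit inherits the highest SL among connected zones" heuristic;
# the paper's actual flow chart is more detailed.

def required_conduit_sl(zone_sls):
    """Return the SL a conduit should be engineered to, given the SLs
    of the zones it connects (IEC 62443 defines SL 0..4)."""
    if not zone_sls:
        raise ValueError("a conduit must connect at least one zone")
    if any(sl not in range(5) for sl in zone_sls):
        raise ValueError("IEC 62443 defines security levels 0-4")
    return max(zone_sls)

# A conduit between an SL2 control zone and an SL3 safety zone
# must be engineered to SL3.
print(required_conduit_sl([2, 3]))
```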
32 pages, 4199 KB  
Article
Beyond Semantic Noise: A Dual-Verification Framework for Thai–English Code-Mixed Malicious Script Detection via XAI-Guided Selective Integration
by Prasert Teppap, Wirot Ponglangka, Panudech Tipauksorn and Prasert Luekhong
J. Cybersecur. Priv. 2026, 6(2), 51; https://doi.org/10.3390/jcp6020051 - 9 Mar 2026
Abstract
In the evolving cybersecurity landscape, detecting Thai-English code-mixed malicious scripts within high-trust domains such as governmental and academic portals presents a significant defensive challenge. While Transformer-based architectures excel in semantic parsing, they often exhibit ‘Structural Bias,’ misinterpreting the high-entropy syntax of benign legacy HyperText Markup Language (HTML) as malicious obfuscation due to inherent ‘Attention Deficit’ in token-limited models. To address this, we propose an Explainable AI (XAI)-Driven Hybrid Architecture grounded in a ‘Selective Integration’ strategy. Unlike traditional hybrid models, our framework mathematically formalizes the fusion process by synergizing context-aware WangChanBERTa embeddings with orthogonal structural statistics through Dempster-Shafer Theory and Conditional Mutual Information (CMI). The proposed model was validated on a high-fidelity corpus, achieving a state-of-the-art F1-score of 0.9908, significantly outperforming standalone Transformers, Random Forest, and unsupervised baselines. XAI diagnostics revealed a ‘Dual-Validation’ mechanism where structural features act as an epistemic anchor. This mechanism effectively triggers a ‘Semantic Veto’ to filter hallucinations caused by benign complexity, achieving a remarkably low False Positive Rate (FPR) of 0.0116. Our findings demonstrate that hybridization is most effective when engineered features provide mathematical orthogonality to semantic embeddings. This work offers a robust, theoretically grounded framework for securing critical digital infrastructures in low-resource linguistic environments.
(This article belongs to the Collection Machine Learning and Data Analytics for Cyber Security)
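The Dempster-Shafer fusion step named in the abstract can be illustrated with the standard two-source combination rule over a {malicious, benign} frame (a generic textbook sketch with illustrative mass values, not the paper's actual assignments):

```python
# Sketch of Dempster's rule of combination for two evidence sources
# (e.g. semantic embeddings vs. structural statistics). Masses may be
# assigned to the whole frame to express ignorance. Numbers are
# illustrative placeholders only.

def combine(m1, m2):
    """Dempster's rule over focal sets given as frozensets."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict  # normalisation constant
    return {s: w / k for s, w in combined.items()}

MAL, BEN = frozenset({"mal"}), frozenset({"ben"})
ANY = MAL | BEN
semantic   = {MAL: 0.6, BEN: 0.1, ANY: 0.3}   # transformer's belief
structural = {MAL: 0.5, BEN: 0.2, ANY: 0.3}   # structural-feature belief
fused = combine(semantic, structural)
print(round(fused[MAL], 3))  # fused mass on "malicious"
```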
18 pages, 1286 KB  
Article
Performance Evaluation of Advanced Encryption Standard and Blowfish Encryption on WearOS: Implications for Wearable Device Security
by Sirapat Boonkrong and Papitchaya Kaensawan
J. Cybersecur. Priv. 2026, 6(2), 50; https://doi.org/10.3390/jcp6020050 - 7 Mar 2026
Abstract
In this study, we evaluated the performance of the Advanced Encryption Standard (AES)-128, AES-256, and Blowfish algorithms on WearOS for messages ranging from 8 to 128 bytes, which are typical message sizes for contemporary smartwatch applications. Using a WearOS emulator, we measured encryption time, memory usage, central processing unit (CPU) utilization, and battery consumption across 16 message sizes with 10 repetitions per configuration. The AES-128 algorithm consistently outperformed the others with approximately 1.0 ms of encryption time at 128 bytes, less than 6 KB memory, and less than 39% peak CPU utilization. The AES-256 algorithm added 25–30% processing overhead and higher energy consumption with negligible extra memory cost. The Blowfish algorithm consumed approximately three times more memory and exhibited the highest battery consumption per operation. It also scales poorly due to its 64-bit block size and large key scheduling approach. In addition, all performance differences are highly statistically significant (p < 0.001). Given the widespread hardware AES acceleration on WearOS devices and memory constraints, AES-128 is recommended as the default symmetric encryption algorithm for confidentiality in smartwatch applications.
(This article belongs to the Section Cryptography and Cryptology)
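The scaling point about Blowfish's 64-bit block size can be made concrete with a little arithmetic: for the same payload, Blowfish processes twice as many cipher blocks as AES (a back-of-the-envelope sketch; the paper's conclusions rest on its WearOS measurements, not on this calculation):

```python
# Back-of-the-envelope: cipher blocks processed per message.
# AES uses 128-bit (16-byte) blocks, Blowfish 64-bit (8-byte) blocks,
# so Blowfish touches twice as many blocks for the same payload.

def blocks(message_len, block_size):
    # Padded block count (PKCS#7-style: a full extra block when the
    # message length is already a multiple of the block size)
    return message_len // block_size + 1

for size in (8, 64, 128):
    aes = blocks(size, 16)
    bf = blocks(size, 8)
    print(f"{size:3d} bytes -> AES: {aes} blocks, Blowfish: {bf} blocks")
```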
36 pages, 2037 KB  
Article
Operational Threat Modeling of Adversarial Disturbances in Continuous-Variable Quantum Communication
by José R. Rosas-Bustos, Jesse Van Griensven Thé, Roydon Andrew Fraser, Nadeem Said, Sebastian Ratto Valderrama, Mark Pecen, Alexander Truskovsky and Andy Thanos
J. Cybersecur. Priv. 2026, 6(2), 49; https://doi.org/10.3390/jcp6020049 - 7 Mar 2026
Abstract
Continuous-variable quantum communication (CVQC) relies on finite-window estimation of phase space moments, making receiver decisions sensitive to finite measurement resolution, calibration uncertainty, and confidence-calibrated tolerances. This paper develops a receiver-centric threat modeling framework for structured (including adversarial) physical-layer disturbances under finite-sample inference. We introduce an operational taxonomy (reconnaissance, exploratory, and denial-of-service) defined by statistical visibility relative to acceptance regions rather than by assumed physical mechanisms. Using an effective estimator-space Gaussian model r̂ = Gr̂ + ξ with additive covariance N, we show how distinct mechanisms can be observationally equivalent within finite tolerances, and we propose a protocol-agnostic scalar severity coordinate ΔE based on the covariance trace. We derive χ²-based missed-detection expressions and a soft detectability boundary scaling as 1/n, and we corroborate the predicted P_miss(ν) behavior via Monte Carlo simulations across representative block sizes. The resulting framework clarifies the delimitation from conventional CV-QKD excess noise parameterization and provides a structured basis for monitoring-layer design and comparative threat-taxonomy mapping.
(This article belongs to the Section Security Engineering & Applications)
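The finite-sample missed-detection idea can be illustrated with a small Monte Carlo experiment (a toy sketch: Gaussian quadrature samples, a fixed variance-tolerance acceptance band, and an assumed variance inflation standing in for a disturbance; none of the paper's χ² expressions or parameters are reproduced here):

```python
# Monte Carlo sketch of finite-sample missed detection: a receiver
# accepts a block of n quadrature samples if the sample variance sits
# inside a fixed tolerance band around the calibrated value (1.0). A
# disturbance inflates the true variance; P_miss is the chance the
# disturbed block still passes. Illustrative parameters only.

import random
import statistics

random.seed(1)

def p_miss(n, inflation=1.15, tol=0.10, trials=1000):
    misses = 0
    for _ in range(trials):
        block = [random.gauss(0.0, inflation ** 0.5) for _ in range(n)]
        if abs(statistics.pvariance(block) - 1.0) < tol:  # accepted
            misses += 1
    return misses / trials

# Larger blocks make the same disturbance statistically visible,
# so the miss probability drops as n grows.
print(p_miss(50), p_miss(1000))
```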
34 pages, 2208 KB  
Article
Small Language Models for Phishing Website Detection: Cost, Performance, and Privacy Trade-Offs
by Georg Goldenits, Philip König, Sebastian Raubitzek and Andreas Ekelhart
J. Cybersecur. Priv. 2026, 6(2), 48; https://doi.org/10.3390/jcp6020048 - 5 Mar 2026
Abstract
Phishing websites pose a major cybersecurity threat, exploiting unsuspecting users and causing significant financial and organisational harm. Traditional machine learning approaches for phishing detection often require extensive feature engineering, continuous retraining, and costly infrastructure maintenance. At the same time, proprietary large language models (LLMs) have demonstrated strong performance in phishing-related classification tasks, but their operational costs and reliance on external providers limit their practical adoption in many business environments. This paper presents a detection pipeline for malicious websites and investigates the feasibility of Small Language Models (SLMs) using raw HTML code and URLs. A key advantage of these models is that they can be deployed on local infrastructure, providing organisations with greater control over data and operations. We systematically evaluate 15 commonly used SLMs, ranging from 1 billion to 70 billion parameters, benchmarking their classification accuracy, computational requirements, and cost-efficiency. Our results highlight the trade-offs between detection performance and resource consumption. While SLMs underperform compared to state-of-the-art proprietary LLMs, the gap is moderate: the best SLM achieves an F1-score of 0.893 (Llama3.3:70B), compared to 0.929 for GPT-5.2, indicating that open-source models can provide a viable and scalable alternative to external LLM services.
(This article belongs to the Section Privacy)
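The front end of such a pipeline, prompt construction from raw URL and HTML plus verdict parsing, can be sketched independently of any particular SLM runtime (everything below is an assumption for illustration: the prompt wording, the character budget, and the fact that the model call itself is left out; the paper's pipeline is not reproduced):

```python
# Sketch of an SLM-based phishing check: truncate raw HTML to fit a
# small context window, build a prompt with the URL, and parse a
# binary verdict from free-form model output. The model call itself
# is omitted -- in a real deployment it would hit a locally hosted SLM.

MAX_HTML_CHARS = 4000  # assumption: crude budget for small contexts

def build_prompt(url, html):
    snippet = html[:MAX_HTML_CHARS]
    return (
        "Classify the following website as PHISHING or BENIGN.\n"
        f"URL: {url}\n"
        f"HTML:\n{snippet}\n"
        "Answer with one word."
    )

def parse_verdict(model_output):
    text = model_output.strip().lower()
    if "phishing" in text:
        return "phishing"
    if "benign" in text:
        return "benign"
    return "unknown"  # refuse to guess on malformed output

prompt = build_prompt("http://paypa1-login.example", "<html>...</html>")
print(parse_verdict("This looks like PHISHING."))
```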
20 pages, 2485 KB  
Article
Gated Residual Chebyshev KAN for Lightweight IoT DDoS Detection
by Fray L. Becerra-Suarez, Edwin Valencia-Castillo, Ana G. Borrero-Ramírez and Manuel G. Forero
J. Cybersecur. Priv. 2026, 6(2), 47; https://doi.org/10.3390/jcp6020047 - 4 Mar 2026
Abstract
Distributed denial-of-service (DDoS) attacks have become a critical threat to Internet of Things (IoT) infrastructures due to their high traffic dynamics, strong class imbalance, and strict resource constraints at the edge. This paper proposes ChebyKANRes, a lightweight intrusion detection model that combines Chebyshev polynomial expansions to parameterize learnable univariate transformations, a gating mechanism to modulate feature flow, and residual connections to stabilize optimization in deeper KAN-style stacks. Experiments were conducted on the CICIoT2023 dataset focusing on benign traffic and 12 DDoS subtypes, using a reproducible pipeline with stratified splitting, cross-validation (k = 5), and early stopping. The proposed model consistently improves multi-class performance (Accuracy: 0.9983) over an optimized MLP baseline (Accuracy: 0.9641), while maintaining a compact size suitable for edge deployment (≈123 k parameters; ~0.47 MB). Within CICIoT2023 and the evaluated split/training protocol, the proposed ChebyKANRes configuration shows improved imbalance-robust multiclass detection while maintaining a compact model size and comparable batch inference time.
(This article belongs to the Section Security Engineering & Applications)
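The Chebyshev parameterisation at the heart of KAN-style layers rests on the standard recurrence T₀(x) = 1, T₁(x) = x, Tₙ₊₁(x) = 2xTₙ(x) − Tₙ₋₁(x); a univariate "edge" then learns one coefficient per basis polynomial (a generic sketch of the idea with placeholder weights, not the paper's layer):

```python
# Sketch: a Chebyshev-parameterised univariate transform, the building
# block of KAN-style layers. Each input feature (scaled to [-1, 1]) is
# expanded into Chebyshev polynomials T_0..T_d via the recurrence
# T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), and the transform is a learned
# weighted sum of those basis values. Weights here are placeholders.

def chebyshev_basis(x, degree):
    basis = [1.0, x]
    for _ in range(degree - 1):
        basis.append(2.0 * x * basis[-1] - basis[-2])
    return basis[: degree + 1]

def edge(x, weights):
    """Learnable univariate function phi(x) = sum_n w_n * T_n(x)."""
    return sum(w * t
               for w, t in zip(weights, chebyshev_basis(x, len(weights) - 1)))

print(chebyshev_basis(0.5, 3))  # T_0..T_3 evaluated at x = 0.5
```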
13 pages, 481 KB  
Article
A Conceptual Framework for a Morphological Scenario Library and Playbook Mapping in Cognitive Warfare Defense
by Dojin Ryu
J. Cybersecur. Priv. 2026, 6(2), 46; https://doi.org/10.3390/jcp6020046 - 3 Mar 2026
Abstract
Cognitive warfare is a hybrid threat that combines information manipulation with psychological influence, often amplified by digital platforms and synthetic media. Conventional cybersecurity tooling is optimized for technical intrusion and offers limited support for anticipating and responding to influence operations. This paper presents a conceptual framework that structures cognitive warfare threats with General Morphological Analysis (GMA) and links plausible configurations to indicator profiles and response playbooks. We first conduct a PRISMA-informed literature review (2018–2025) to derive a five-dimensional taxonomy (actor, tactic, medium, target, objective). We then apply cross-consistency assessment to remove implausible state-pair combinations and obtain a reduced library of internally consistent scenarios. To support analyst-guided triage, we outline an AI-enabled workflow that maps observable signals to taxonomy states, matches events to scenarios, and prioritizes responses via an auditable, policy-set risk score. Finally, we illustrate the framework on three publicly documented cases and show how each case maps to scenario vectors, indicators, and playbooks. No end-to-end system implementation or performance metrics are reported; the contribution is the structured scenario library and the traceable mapping from observations to response guidance.
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
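The GMA mechanics, enumerating every configuration over the five taxonomy dimensions and pruning those containing an excluded state pair, can be sketched directly (the states and the single exclusion below are toy placeholders, not the paper's taxonomy):

```python
# Sketch of General Morphological Analysis (GMA) with cross-consistency
# assessment: enumerate all configurations over five dimensions and
# discard any containing an excluded state pair. States and exclusions
# are toy placeholders.

from itertools import combinations, product

dimensions = {
    "actor":     ["state", "proxy"],
    "tactic":    ["disinfo", "impersonation"],
    "medium":    ["social", "synthetic_media"],
    "target":    ["public", "military"],
    "objective": ["polarize", "demoralize"],
}

# Cross-consistency: state pairs judged implausible together.
excluded = {frozenset({"proxy", "military"})}

def consistent(config):
    return not any(frozenset(pair) in excluded
                   for pair in combinations(config, 2))

all_configs = list(product(*dimensions.values()))
library = [c for c in all_configs if consistent(c)]
print(len(all_configs), len(library))  # total vs. pruned library size
```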
21 pages, 2858 KB  
Article
Generation of Distances Between Feature Vectors Derived from a Siamese Neural Network for Continuous Authentication
by Sergey Davydenko, Pavel Laptev and Evgeny Kostyuchenko
J. Cybersecur. Priv. 2026, 6(2), 45; https://doi.org/10.3390/jcp6020045 - 3 Mar 2026
Abstract
Continuous authentication is a promising method for protecting computer systems in the event of compromise of primary authentication factors, such as passwords or tokens. Systems employing continuous authentication that rely on biometrics may not be restricted to a single biometric characteristic; rather, they can simultaneously utilize multiple characteristics and subsequently arrive at a conclusive decision based on their collective analysis outcomes. One of the significant challenges researchers encounter when investigating effective fusion in decision-making is the lack of data. At present, data generation primarily involves the creation of feature vectors or attack simulation. This paper introduces a method for directly generating distances derived from a Siamese neural network, utilizing the probability density function of an existing distribution. Through statistical analysis, we successfully generated 5000 samples that correspond to the initial distribution, which were then employed to discover the threshold values at which the false acceptance rate (FAR) and false rejection rate (FRR) were less than 1%. The methods developed can be further applied to identify the most efficient strategies for integrating the results of continuous authentication in systems that incorporate multiple biometric characteristics.
(This article belongs to the Special Issue Cyber Security and Digital Forensics—3rd Edition)
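The threshold-search step can be illustrated with synthetic distances (a toy sketch: Gaussians stand in for the paper's fitted genuine/impostor distance densities, and a simple sweep finds thresholds where both FAR and FRR stay below 1%; the distribution parameters are assumptions):

```python
# Sketch: generate synthetic Siamese-network distances from assumed
# genuine/impostor distributions (Gaussians as stand-ins for fitted
# densities), then sweep a decision threshold until both FAR (impostors
# accepted) and FRR (genuines rejected) fall below 1%.

import random

random.seed(7)
N = 5000
genuine  = [random.gauss(0.30, 0.05) for _ in range(N)]   # small distances
impostor = [random.gauss(0.80, 0.08) for _ in range(N)]   # large distances

def rates(threshold):
    far = sum(d <= threshold for d in impostor) / N
    frr = sum(d > threshold for d in genuine) / N
    return far, frr

ok = [t / 100 for t in range(100) if max(rates(t / 100)) < 0.01]
print(f"thresholds with FAR, FRR < 1%: {ok[0]:.2f}..{ok[-1]:.2f}")
```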
43 pages, 2473 KB  
Article
A Lightweight Post-Quantum Anonymous Attestation Framework for Traceable and Comprehensive Privacy Preservation in VANETs
by Esti Rahmawati Agustina, Kalamullah Ramli, Ruki Harwahyu, Teddy Surya Gunawan, Muhammad Salman, Andriani Adi Lestari and Arif Rahman Hakim
J. Cybersecur. Priv. 2026, 6(2), 44; https://doi.org/10.3390/jcp6020044 - 2 Mar 2026
Abstract
Vehicular ad hoc networks (VANETs) require authentication systems that balance privacy, scalability, and post-quantum security. While lattice-based V-LDAA offers quantum resistance, it faces challenges in signature size, traceability, and integration. We propose post-quantum traceable direct anonymous attestation (PQ-TDAA), combining National Institute of Standards and Technology (NIST)-standard Dilithium2 and Falcon-512 signatures with adapted Beullens-style blind signatures and Fiat–Shamir simplified Schnorr proofs, reducing proof size by 69.2% (8 kB vs. V-LDAA’s 26 kB) and supporting European Telecommunications Standards Institute Technical Specification (ETSI TS) 102 941-compliant traceability through Road Side Unit (RSU)-assisted verification. Evaluated using SageMath, Python 3.11, and NS-3, PQ-TDAA-Falcon-512 achieves 8.1 ms and 49.7 ms end-to-end delays at 10 and 20 vehicles, respectively, with 64.7 Mbps goodput on congested 802.11p channels, showing promise for densities of ≤50 vehicles and advantages over Dilithium2. Real-world validation on ARM Cortex-A76 (Raspberry Pi 5, emulating automotive OBUs) yields sub-0.5 ms V2V cycles within 100 ms beacon intervals, supporting practical embedded deployment. Future work will extend PQ-TDAA to emerging 5G and NR-V2X settings, integrate more realistic mobility and channel models through coupled NS-3 and SUMO co-simulation, and investigate side-channel resistance for enhanced scalability and robustness in real deployments.
(This article belongs to the Special Issue Applied Cryptography)
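The quoted 69.2% proof-size reduction follows directly from the two sizes given in the abstract, 1 − 8/26 ≈ 0.692 (a one-line sanity check of the arithmetic, nothing more):

```python
# Sanity check of the abstract's proof-size figure:
# PQ-TDAA proofs are ~8 kB versus ~26 kB for V-LDAA.
reduction = 1 - 8 / 26
print(f"{reduction:.1%}")  # 69.2%
```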
41 pages, 815 KB  
Article
XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(2), 43; https://doi.org/10.3390/jcp6020043 - 2 Mar 2026
Abstract
High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical–regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO/IEC 42001. The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic, intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest, toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. By linking explanation reports, drift statistics and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations.
(This article belongs to the Section Security Engineering & Applications)
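The "tamper-evident evidence bundles" mentioned in the abstract can be illustrated with a standard hash-chained log (a minimal sketch using SHA-256 over toy compliance artefacts; the paper's actual bundle format is not reproduced here):

```python
# Sketch of a tamper-evident evidence log: each entry stores the hash
# of the previous entry, so altering any earlier record invalidates
# every later hash. Entries here are toy compliance artefacts.

import hashlib
import json

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "record": entry["record"]},
                          sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"event": "model_trained", "f1": 0.91})
append(log, {"event": "shap_report", "top_feature": "bytes_out"})
print(verify(log))              # True on an intact chain
log[0]["record"]["f1"] = 0.99   # tamper with an earlier record...
print(verify(log))              # ...and verification fails: False
```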
23 pages, 614 KB  
Article
Two-Factor Cancelable Biometric Key Binding via Euclidean Challenge–Response Pair Mechanism
by Michael Logan Garrett, Mahafujul Alam, Michael Partridge and Julie Heynssens
J. Cybersecur. Priv. 2026, 6(2), 42; https://doi.org/10.3390/jcp6020042 - 2 Mar 2026
Abstract
This work proposes a lightweight biometric key-binding scheme that adapts a PUF-style challenge–response mechanism to face geometry: a two-factor password and session nonce generate random challenge points, Gray-coded Euclidean distances to facial landmarks form responses, and a random key is bound by discarding selected positions so only a reduced subset, the nonce, and a key hash are stored. At authentication, a fresh response set is compared to the subset with a Hamming-distance tolerance, and bounded local search corrects residual errors; each successful session rotates the nonce and refreshes the ephemeral key. We frame this as a conceptual exploration of an interpretable, on-device, controlled-capture design niche—a per-session nonce-driven cancelable biometric key-binding mechanism—and we quantify the resulting security–usability trade-offs. Empirically, the scheme works under stable capture conditions with carefully tuned thresholds, and it is naturally suited to tightly controlled deployments (e.g., access kiosks) where it can also incorporate user-driven micro-gestures as an extra behavioral factor. While the construction is fragile under broader variability and leans on the second factor for security, it offers an alternative to existing mechanisms and a clear niche, and we present it as a conceptual exploration showing how CRP mechanisms can inform cancelable biometrics with per-session revocability.
(This article belongs to the Section Security Engineering & Applications)
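The response mechanism can be sketched end to end: quantise Euclidean distances from a challenge point to landmarks, Gray-code them (so adjacent quantisation bins differ by a single bit), and accept a fresh capture within a Hamming tolerance (a toy sketch; the quantisation step, code width, and tolerance are assumptions, not the paper's tuned parameters):

```python
# Sketch of the CRP response mechanism: Euclidean distances from a
# challenge point to facial landmarks are quantised and Gray-coded, and
# a fresh response is accepted if its Hamming distance to the enrolled
# bits is within a tolerance. Step, width, and tolerance are assumed.

import math

def gray(n):
    return n ^ (n >> 1)

def response_bits(challenge, landmarks, step=2.0, width=8):
    bits = ""
    for point in landmarks:
        d = math.dist(challenge, point)
        bits += format(gray(int(d / step)) & (2 ** width - 1), f"0{width}b")
    return bits

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

enrolled = response_bits((10, 10), [(40, 20), (5, 50)])
fresh    = response_bits((10, 10), [(40.5, 20), (5, 50.5)])  # capture jitter
print(hamming(enrolled, fresh) <= 3)  # accept within Hamming tolerance
```

Gray coding matters here because small landmark jitter that crosses a quantisation boundary then flips only one bit instead of several.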
29 pages, 1253 KB  
Article
Enhancing Federated Data Trading via Trustworthy Identity and Access Management Framework
by Kyriakos Stefanidis, Vasilis Bekos and Dimitris Karadimas
J. Cybersecur. Priv. 2026, 6(2), 41; https://doi.org/10.3390/jcp6020041 - 28 Feb 2026
Abstract
Trustworthy Identity and Access Management (IAM) is a foundational requirement for federated data trading platforms, yet existing solutions often rely on centralized Identity Providers (IdPs), lack cross-border interoperability, and offer limited support for user-friendly authorization management. These limitations hinder secure onboarding, fine-grained access control, and regulatory compliance, especially within European Union (EU) data spaces governed by the Electronic Identification, Authentication, and Trust Services (eIDAS) 2.0 framework. This work presents a comprehensive IAM framework designed for federated data trading environments, developed within the EU-funded PISTIS project. The framework is based on Keycloak IAM and offers three major capabilities: (i) a novel IAM architecture tailored to distributed data trading scenarios; (ii) full integration of eIDAS-compliant cross-border authentication and initial support for European Digital Identity (EUDI) Wallets; and (iii) a standalone, web-based Access Policy Editor (APE) that abstracts Keycloak’s policy engine and enables non-technical users to define fine-grained, owner-driven access rules. The approach is evaluated across real-world mobility, energy, and automotive industry pilots, demonstrating its effectiveness in enhancing trust, interoperability, and usability within regulated data-sharing ecosystems.
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
24 pages, 328 KB  
Article
Strengthening Workforce Readiness: Evidence on Work-Based Learning in U.S. Higher Education Cybersecurity Programs
by Oscar A. Aliaga, Noémi Nagy, Bonnie Gómez Torres, Ajara Mahmoud and Courtney N. Callahan
J. Cybersecur. Priv. 2026, 6(2), 40; https://doi.org/10.3390/jcp6020040 - 25 Feb 2026
Abstract
This study provides a foundational review of work-based learning (WBL) opportunities offered by colleges and universities to students in higher education cybersecurity (CS) programs in the United States, with the goal of mapping the WBL practices across institutional and program contexts. Integrating WBL into CS curricula is widely recognized as an effective way to strengthen essential skills and address employer concerns about the gap between academic preparation and labor market needs. We first outline the characteristics of institutions and CS programs offering WBL. Next, we examine the range of WBL experiences designed to enhance students’ professional competencies. Finally, we explore characteristics of the partnerships between higher education and industry that support these initiatives. Using a status survey approach, we collected responses from 92 higher education institutions offering CS programs. We analyzed the data using descriptive statistics and linear regression models to explore patterns of association between the type and number of WBL opportunities available to students, institutional characteristics related to the total number of WBL offerings, and program features associated with WBL intensity across Awareness, Exploration, and Direct Experience levels of intensity. Findings reveal a diverse array of WBL opportunities, with notable growth across credential levels. Notably, certificates and associate degrees place particular emphasis on WBL. Both institutional characteristics and program features explain, albeit partially, the number of WBL opportunities implemented and the intensity levels of those WBL offerings. However, results also indicate an ambivalent connection to employers, despite their critical role in providing hands-on, problem-solving experiences. Based on these insights, we recommend expanding WBL beyond internships, strengthening institutional–industry partnerships, and fostering employer engagement through structured WBL collaboration models. These strategies aim to improve workforce readiness and create a more inclusive, scalable system of experiential learning in cybersecurity education.
(This article belongs to the Section Security Engineering & Applications)