J. Cybersecur. Priv., Volume 6, Issue 1 (February 2026) – 25 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
25 pages, 1561 KB  
Article
DIGITRACKER: An Efficient Tool Leveraging Loki for Detecting, Mitigating Cyber Threats and Empowering Cyber Defense
by Mohammad Meraj Mirza, Rayan Saad Alsuwat, Yasser Musaed Alqurashi, Abdullah Adel Alharthi, Abdulrahman Matar Alsuwat, Osama Mohammed Alasamri and Nasser Ahmed Hussain
J. Cybersecur. Priv. 2026, 6(1), 25; https://doi.org/10.3390/jcp6010025 - 2 Feb 2026
Abstract
Cybersecurity teams rely on signature-based scanners such as Loki, a command-line malware-scanning tool, to identify Indicators of Compromise (IOCs), malicious artifacts, and YARA-rule matches. However, the raw Loki log output, delivered as CSV or plaintext, is challenging to interpret without additional visualization and correlation tools. Therefore, this research discusses the creation of a web-based dashboard that displays results from the Loki scanner. The project focuses on processing and displaying information collected from Loki’s scans, which are available in log files or CSV format. DIGITRACKER was developed as a proof-of-concept (PoC) to process these data and present them in a user-friendly, visually appealing way, enabling system administrators and cybersecurity teams to monitor potential threats and vulnerabilities effectively. By leveraging modern web technologies and dynamic data visualization, the tool enhances the user experience, transforming raw scan results into a well-organized, interactive dashboard. This approach simplifies the often-complicated task of manual log analysis, making it easier to interpret output data and supporting low-budget or resource-constrained cybersecurity teams by transforming raw logs into actionable insights. The project demonstrates the dashboard’s effectiveness in identifying and addressing threats, providing valuable tools for cybersecurity system administrators. Moreover, our evaluation shows that DIGITRACKER can process scan logs containing hundreds of IOC alerts within seconds and supports multiple concurrent users with minimal latency overhead. In test scenarios, integrated Loki scans completed successfully, and the end-to-end pipeline from scan completion to dashboard visualization incurred an average latency of under 20 s. These results demonstrate improved threat visibility, support structured triage workflows, and enhance analysts’ task management.
Overall, the system provides a practical, extensible PoC that bridges the gap between command-line scanners and operational security dashboards, with new scan results displayed on the dashboard faster than manual log analysis. By streamlining analysis and enabling near-real-time monitoring, the PoC tool DIGITRACKER empowers cyber defense initiatives and enhances overall system security. Full article
(This article belongs to the Special Issue Cybersecurity Risk Prediction, Assessment and Management)
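The log-processing step the abstract describes can be sketched with Python's standard library. The column layout and severity levels below are assumptions for illustration; the abstract does not specify Loki's exact CSV schema.

```python
import csv
import io
from collections import Counter

# Hypothetical Loki-style CSV export; the real column layout depends on the
# scanner version, so TIME, HOST, LEVEL, MODULE, MESSAGE is an assumption.
SAMPLE = """\
20260201T10:00:01,host-a,ALERT,FileScan,YARA rule match
20260201T10:00:02,host-a,WARNING,FileScan,Suspicious file size anomaly
20260201T10:00:03,host-b,NOTICE,Init,Scan started
20260201T10:00:04,host-b,ALERT,ProcessScan,IOC hash match
"""

def summarize(csv_text):
    """Group rows by severity level so a dashboard can chart them."""
    fields = ["time", "host", "level", "module", "message"]
    rows = [dict(zip(fields, r)) for r in csv.reader(io.StringIO(csv_text))]
    return Counter(row["level"] for row in rows)

print(summarize(SAMPLE))  # Counter({'ALERT': 2, 'WARNING': 1, 'NOTICE': 1})
```

A dashboard front end would then render these per-severity counts, rather than leaving analysts to grep the raw log.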

29 pages, 679 KB  
Article
Digital Boundaries and Consent in the Metaverse: A Comparative Review of Privacy Risks
by Sofia Sakka, Vasiliki Liagkou, Afonso Ferreira and Chrysostomos Stylios
J. Cybersecur. Priv. 2026, 6(1), 24; https://doi.org/10.3390/jcp6010024 - 2 Feb 2026
Abstract
The Metaverse presents significant opportunities for educational advancement by facilitating immersive, personalized, and interactive learning experiences through technologies such as virtual reality (VR), augmented reality (AR), extended reality (XR), and artificial intelligence (AI). However, this potential is compromised if digital environments fail to uphold individuals’ privacy, autonomy, and equity. Despite their widespread adoption, the privacy implications of these environments remain inadequately understood, both in terms of technical vulnerabilities and legislative challenges, particularly regarding user consent management. Contemporary Metaverse systems collect highly sensitive information, including biometric signals, spatial behavior, motion patterns, and interaction data, often surpassing the granularity captured by traditional social networks. The lack of privacy-by-design solutions, coupled with the complexity of underlying technologies such as VR/AR infrastructures, 3D tracking systems, and AI-driven personalization engines, makes these platforms vulnerable to security breaches, data misuse, and opaque processing practices. This study presents a structured literature review and comparative analysis of privacy risks, consent mechanisms, and digital boundaries in Metaverse platforms, with particular attention to educational contexts. We argue that privacy-aware design is essential not only for ethical compliance but also for supporting the long-term sustainability goals of digital education. Our findings aim to inform and support the development of secure, inclusive, and ethically grounded immersive learning environments by providing insights into systemic privacy and policy shortcomings. Full article
(This article belongs to the Special Issue Current Trends in Data Security and Privacy—2nd Edition)

14 pages, 286 KB  
Article
Trusted Yet Flexible: High-Level Runtimes for Secure ML Inference in TEEs
by Nikolaos-Achilleas Steiakakis and Giorgos Vasiliadis
J. Cybersecur. Priv. 2026, 6(1), 23; https://doi.org/10.3390/jcp6010023 - 27 Jan 2026
Viewed by 188
Abstract
Machine learning inference is increasingly deployed on shared and cloud infrastructures, where both user inputs and model parameters are highly sensitive. Confidential computing promises to protect these assets using Trusted Execution Environments (TEEs), yet existing TEE-based inference systems remain fundamentally constrained: they rely almost exclusively on low-level, memory-unsafe languages to enforce confinement, sacrificing developer productivity, portability, and access to modern ML ecosystems. At the same time, mainstream high-level runtimes, such as Python, are widely considered incompatible with enclave execution due to their large memory footprints and unsafe model-loading mechanisms that permit arbitrary code execution. To bridge this gap, we present the first Python-based ML inference system that executes entirely inside Intel SGX enclaves while safely supporting untrusted third-party models. Our design enforces standardized, declarative model representations (ONNX), eliminating deserialization-time code execution and confining model behavior through interpreter-mediated execution. The entire inference pipeline (including model loading, execution, and I/O) remains enclave-resident, with cryptographic protection and integrity verification throughout. Our experimental results show that Python incurs modest overheads for small models (≈17%) and outperforms a low-level baseline on larger workloads (97% vs. 265% overhead), demonstrating that enclave-resident high-level runtimes can achieve competitive performance. Overall, our findings indicate that Python-based TEE inference is practical and secure, enabling the deployment of untrusted models with strong confidentiality and integrity guarantees while maintaining developer productivity and ecosystem advantages. Full article
(This article belongs to the Section Security Engineering & Applications)

41 pages, 1318 KB  
Article
Probabilistic Bit-Similarity-Based Key Agreement Protocol Employing Fuzzy Extraction for Secure and Lightweight Wireless Sensor Networks
by Sofia Sakka, Vasiliki Liagkou, Yannis Stamatiou and Chrysostomos Stylios
J. Cybersecur. Priv. 2026, 6(1), 22; https://doi.org/10.3390/jcp6010022 - 22 Jan 2026
Viewed by 141
Abstract
Wireless sensor networks comprise many resource-constrained nodes that must protect both local readings and routing metadata. The sensors collect data from the environment or from the individual to whom they are attached and transmit it to the nearest gateway node via a wireless network for further delivery to external users. Due to wireless communication, the transmitted messages may be intercepted, rerouted, or even modified by an attacker. Consequently, security and privacy issues are of utmost importance, and the nodes must be protected against unauthorized access during transmission over a public wireless channel. To address these issues, we propose the Probabilistic Bit-Similarity-Based Key Agreement Protocol (PBS-KAP). This novel method enables two nodes to iteratively converge on a symmetric session key, without transmitting it or relying on pre-installed keys, using probabilistic similarity alignment with explicit key confirmation (MAC). Optimized Garbled Circuits facilitate secure computation with minimal computational and communication overhead, while Secure Sketches combined with Fuzzy Extractors correct residual errors and amplify entropy, producing reliable and uniformly random session keys. The resulting protocol balances security, privacy, and usability, standing as a practical solution for real-world WSN and IoT applications without imposing excessive computational or communication burdens. Security relies on standard computational assumptions via a one-time elliptic-curve-based base Oblivious Transfer, followed by an IKNP Oblivious Transfer extension and a small garbled threshold circuit. No pre-deployed long-term keys are required; after the bootstrap, only symmetric operations are used. We analyze confidentiality in the semi-honest model and, under the semi-honest OT/GC assumption, prove session-key secrecy and indistinguishability; full entity authentication, though feasible, requires an additional Authenticated Key Exchange (AKE) binding step or malicious-secure OT/GC. Full article
(This article belongs to the Special Issue Data Protection and Privacy)
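The "explicit key confirmation (MAC)" step mentioned in the abstract can be sketched with Python's standard hmac module. The role labels and transcript binding below are illustrative assumptions, not the protocol's exact specification.

```python
import hmac
import hashlib
import secrets

def confirm_tag(session_key: bytes, role: bytes, transcript: bytes) -> bytes:
    # Bind the tag to the party's role and the protocol transcript so a tag
    # cannot be reflected back to its sender; the label scheme is illustrative.
    return hmac.new(session_key, role + transcript, hashlib.sha256).digest()

# Both nodes hold (what should be) the same session key after fuzzy extraction.
key = secrets.token_bytes(32)
transcript = b"pbs-kap-session-001"

tag_a = confirm_tag(key, b"A", transcript)
tag_b = confirm_tag(key, b"B", transcript)

# Each node verifies the peer's tag by recomputing it; compare_digest gives a
# constant-time comparison so timing does not leak where a mismatch occurs.
assert hmac.compare_digest(tag_a, confirm_tag(key, b"A", transcript))
assert tag_a != tag_b  # role separation: tags differ even with the same key
```

If the fuzzy-extraction stage produced divergent keys, the recomputed tag would not match and key agreement would be aborted.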

21 pages, 13708 KB  
Article
Image Encryption Using Chaotic Box Partition–Permutation and Modular Diffusion with PBKDF2 Key Derivation
by Javier Alberto Vargas Valencia, Mauricio A. Londoño-Arboleda, Hernán David Salinas Jiménez, Carlos Alberto Marín Arango and Luis Fernando Duque Gómez
J. Cybersecur. Priv. 2026, 6(1), 21; https://doi.org/10.3390/jcp6010021 - 22 Jan 2026
Viewed by 120
Abstract
This work presents a hybrid chaotic–cryptographic image encryption method that integrates a physical two-dimensional delta-kicked oscillator with a PBKDF2-HMAC-SHA256 key derivation function (KDF). The user-provided key material—a 12-character, human-readable key and four salt words—is transformed by the KDF into 256 bits of high-entropy data, which is then converted into 96 balanced decimal digits to seed the chaotic system. Encryption operates in the real number domain through a chaotic partition–permutation stage followed by modular diffusion. Experimental results confirm perfect reversibility, high randomness (Shannon entropy of 7.9981), and negligible adjacent-pixel correlation. The method resists known- and chosen-plaintext attacks, showing no statistical dependence between plain and cipher images. Differential analysis yields NPCR ≈ 99.6% and UACI ≈ 33.9%, demonstrating complete diffusion. The PBKDF2-based key derivation expands the effective key space to 2^256, eliminates weak-key conditions, and ensures full reproducibility. The proposed approach bridges deterministic chaos and modern cryptography, offering a secure, verifiable framework for protecting sensitive images. Full article
(This article belongs to the Section Cryptography and Cryptology)
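The key-derivation stage described above can be sketched with Python's standard library. The iteration count and the digit-mapping rule below are illustrative assumptions; the abstract does not state how the paper obtains its 96 balanced decimal digits, so this mapping is only one plausible scheme.

```python
import hashlib

# A 12-character human-readable key plus salt words, stretched by
# PBKDF2-HMAC-SHA256 into 256 bits of key material. The iteration count
# is an assumption, not the paper's parameter.
password = b"correct-hors"              # 12-character key (example value)
salt = b"alpha beta gamma delta"        # four salt words (example values)

derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=100_000)
assert len(derived) == 32               # 256 bits of high-entropy data

# One simple way to obtain 96 decimal digits: interpret the bytes as a big
# integer and zero-pad its decimal expansion (the paper's exact "balanced"
# digit mapping may differ).
digits = str(int.from_bytes(derived, "big")).zfill(96)[-96:]
print(len(digits))  # 96
```

Because PBKDF2 is deterministic for fixed inputs, the same key and salt always reproduce the same seed digits, which is what makes the scheme fully reproducible.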

48 pages, 10884 KB  
Article
A Practical Incident-Response Framework for Generative AI Systems
by Derrisa Tuscano and Jules Pagna Disso
J. Cybersecur. Priv. 2026, 6(1), 20; https://doi.org/10.3390/jcp6010020 - 19 Jan 2026
Viewed by 432
Abstract
Generative Artificial Intelligence (GenAI) systems have introduced new classes of security incidents that traditional response frameworks were not designed to manage, ranging from model manipulation and data exfiltration to misinformation cascades and prompt-based privilege escalation. This study proposes a Practical Incident-Response Framework for Generative AI Systems (GenAI-IRF) that bridges established cybersecurity standards with emerging AI assurance principles. Using a Design Science Research (DSR) approach, this study identifies six recurrent incident archetypes and formalises a structured playbook aligned with NIST SP 800-61r3, NIST AI 600-1, MITRE ATLAS, and the OWASP LLM Top-10. The artefact was evaluated through controlled, scenario-based simulations and expert reviews involving AI-security practitioners from the academic, finance, and technology sectors. The results suggest high inter-rater reliability (κ = 0.88), strong usability (SUS = 86.4), and improved incident resolution times compared to baseline procedures. The findings demonstrate how traditional response models can be adapted to GenAI contexts using taxonomy-driven analysis, artefact-centred validation, and practitioner feedback. This framework provides a practical foundation for security teams seeking to operationalise AI incident response and contributes to the emerging body of work on trustworthy and resilient AI systems. Full article
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)

2 pages, 149 KB  
Correction
Correction: Iavich et al. Post-Quantum Digital Signature: Verkle-Based HORST. J. Cybersecur. Priv. 2025, 5, 28
by Maksim Iavich, Tamari Kuchukhidze and Razvan Bocu
J. Cybersecur. Priv. 2026, 6(1), 19; https://doi.org/10.3390/jcp6010019 - 19 Jan 2026
Viewed by 119
Abstract
In the original publication [...] Full article
24 pages, 588 KB  
Article
An Improved Detection of Cross-Site Scripting (XSS) Attacks Using a Hybrid Approach Combining Convolutional Neural Networks and Support Vector Machine
by Abdissamad Ayoubi, Loubna Laaouina, Adil Jeghal and Hamid Tairi
J. Cybersecur. Priv. 2026, 6(1), 18; https://doi.org/10.3390/jcp6010018 - 17 Jan 2026
Viewed by 284
Abstract
Cross-site scripting (XSS) attacks are among the major threats facing web security, resulting from the diversity and complexity of HTML formats. Research has shown that some text-processing-based methods are limited in their ability to detect this type of attack. This article proposes an approach aimed at improving the detection of such attacks while taking into account the limitations of certain techniques. It combines the effectiveness of deep learning, represented by convolutional neural networks (CNNs), with the accuracy of classification methods, represented by support vector machines (SVMs). It takes advantage of the ability of CNNs to effectively detect complex visual patterns in the face of injection variations and of the SVM’s powerful classification capability, as XSS attacks often use obfuscation or encryption techniques that are difficult to detect with textual methods alone. This work relies on a dataset that focuses specifically on XSS attacks, available on Kaggle, containing 13,686 sentences in script form covering both benign and malicious cases: 6313 benign and 7373 malicious. The model was trained on 80% of these data, while the remaining 20% was allocated for testing. Computer vision techniques were used to analyze visual patterns in the images and extract distinctive features, moving from a textual representation to a visual one in which each character is converted into its ASCII code and then into a grayscale pixel. To visually distinguish the characteristics of normal and malicious code strings and the differences in their visual representation, a CNN model was used in the analysis. The convolution and subsampling (pooling) layers extract significant patterns at different levels of abstraction, while the final output is converted into a feature vector that can be exploited by a classification algorithm such as an optimized SVM. The experimental results showed excellent performance, with an accuracy of 99.7%, and the model generalizes effectively without overfitting or loss of performance. This significantly enhances the security of web applications by providing robust protection against complex XSS threats. Full article
(This article belongs to the Section Security Engineering & Applications)
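The text-to-image transformation the abstract describes, converting each character to its ASCII code and then to a grayscale pixel, can be sketched as follows. The 16×16 grid size and the zero-padding policy are assumptions for illustration; the paper's actual image dimensions are not given in the abstract.

```python
# Each character becomes its ASCII code, used directly as a grayscale
# intensity, and the sequence is padded/truncated to a fixed square that a
# CNN can consume. SIDE = 16 is an assumed size, not the paper's.
SIDE = 16

def to_grayscale_grid(payload: str, side: int = SIDE):
    codes = [min(ord(c), 255) for c in payload][: side * side]
    codes += [0] * (side * side - len(codes))      # zero-pad to fixed size
    return [codes[r * side : (r + 1) * side] for r in range(side)]

grid = to_grayscale_grid("<script>alert(1)</script>")
print(len(grid), len(grid[0]))   # 16 16
print(grid[0][:8])               # [60, 115, 99, 114, 105, 112, 116, 62]
```

The resulting pixel grid makes structural motifs such as `<script>` tags visible as repeating intensity patterns, which is what the CNN's convolution layers pick up.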

22 pages, 840 KB  
Article
A Comparative Evaluation of Snort and Suricata for Detecting Data Exfiltration Tunnels in Cloud Environments
by Mahmoud H. Qutqut, Ali Ahmed, Mustafa K. Taqi, Jordan Abimanyu, Erika Thea Ajes and Fatima Alhaj
J. Cybersecur. Priv. 2026, 6(1), 17; https://doi.org/10.3390/jcp6010017 - 8 Jan 2026
Viewed by 533
Abstract
Data exfiltration poses a major cybersecurity challenge because it involves the unauthorized transfer of sensitive information. Intrusion Detection Systems (IDSs) are vital security controls for identifying such attacks; however, their effectiveness in cloud computing environments remains limited, particularly against covert channels such as Internet Control Message Protocol (ICMP) and Domain Name System (DNS) tunneling. This study compares two widely used IDSs, Snort and Suricata, in a controlled cloud computing environment. The assessment focuses on their ability to detect data exfiltration techniques implemented via ICMP and DNS tunneling, using DNSCat2 and Iodine. We evaluate detection performance using standard classification metrics, including Recall, Precision, Accuracy, and F1-Score. Our experiments were conducted on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instances, where IDS instances monitored simulated exfiltration traffic generated by DNSCat2, Iodine, and Metasploit. Network traffic was mirrored via AWS Virtual Private Cloud (VPC) Traffic Mirroring, with the ELK Stack integrated for centralized logging and visual analysis. The findings indicate that Suricata outperformed Snort in detecting DNS-based exfiltration, underscoring the advantages of multi-threaded architectures for managing high-volume cloud traffic. For DNS tunneling, Suricata achieved 100% detection (recall) for both DNSCat2 and Iodine, whereas Snort achieved 85.7% and 66.7%, respectively. Neither IDS detected ICMP tunneling via Metasploit under its default configuration, with both recording 0% recall, highlighting the limitations of signature-based detection in isolation. These results emphasize the need to combine signature-based and behavior-based analytics, supported by centralized logging frameworks, to strengthen cloud-based intrusion detection and enhance forensic visibility. Full article
(This article belongs to the Special Issue Cloud Security and Privacy)
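The metrics the study reports follow from standard confusion-matrix definitions. The counts below are hypothetical, chosen only so that recall reproduces the 85.7% figure reported for Snort on DNSCat2; they are not the paper's data.

```python
# Standard classification metrics from raw confusion counts.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Hypothetical detector: caught 12 of 14 tunneling sessions, 1 false alarm.
p, r, a, f1 = metrics(tp=12, fp=1, fn=2, tn=85)
print(f"recall={r:.3f} precision={p:.3f} f1={f1:.3f}")
# recall=0.857 precision=0.923 f1=0.889
```

A 0% recall, as both IDSs recorded for ICMP tunneling, means tp = 0: every tunneling session was a false negative regardless of how clean the precision looks.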

16 pages, 834 KB  
Article
Learning to Hack, Playing to Learn: Gamification in Cybersecurity Courses
by Pierre-Emmanuel Arduin and Benjamin Costé
J. Cybersecur. Priv. 2026, 6(1), 16; https://doi.org/10.3390/jcp6010016 - 7 Jan 2026
Viewed by 612
Abstract
Cybersecurity education requires practical activities such as malware analysis, phishing detection, and Capture the Flag (CTF) challenges. These exercises enable students to actively apply theoretical concepts in realistic scenarios, fostering experiential learning. This article introduces an innovative pedagogical approach relying on gamification in cybersecurity courses, combining technical problem-solving with human factors such as social engineering and risk-taking behavior. By integrating interactive challenges into the courses, engagement and motivation have been enhanced, while addressing both technological and managerial dimensions of cybersecurity. Observations from course implementation indicate that students demonstrate higher involvement when participating in supervised offensive security tasks and social engineering simulations within controlled environments. These findings highlight the potential of gamified strategies to strengthen cybersecurity competencies and promote ethical awareness, paving the way for future research on long-term cybersecurity learning outcomes. Full article

15 pages, 471 KB  
Article
Theoretical Vulnerabilities in Quantum Integrity Verification Under Bell-Hidden Variable Convergence
by Jose R. Rosas-Bustos, Jesse Van Griensven Thé, Roydon Andrew Fraser, Sebastian Ratto Valderrama, Nadeem Said and Andy Thanos
J. Cybersecur. Priv. 2026, 6(1), 15; https://doi.org/10.3390/jcp6010015 - 7 Jan 2026
Viewed by 376
Abstract
This paper identifies theoretical vulnerabilities in quantum integrity verification by demonstrating that Bell inequality (BI) violations, central to the detection of quantum entanglement, can align with predictions from hidden variable theories (HVTs) under specific measurement configurations. By invoking a Heisenberg-inspired measurement resolution constraint and finite-resolution positive operator-valued measures (POVMs), we identify “convergence vicinities” where the statistical outputs of quantum and classical models become operationally indistinguishable. These results do not challenge Bell’s theorem itself; rather, they expose a vulnerability in quantum integrity frameworks that treat observed Bell violations as definitive, experiment-level evidence of nonclassical entanglement correlations. We support our theoretical analysis with simulations and experimental results from IBM quantum hardware. Our findings call for more robust quantum-verification frameworks, with direct implications for the security of quantum computing, quantum-network architectures, and device-independent cryptographic protocols (e.g., device-independent quantum key distribution (DIQKD)). Full article
(This article belongs to the Section Cryptography and Cryptology)
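For context, the CHSH form of the Bell inequality underlying this analysis can be evaluated directly: local hidden-variable models obey |S| ≤ 2, while quantum mechanics reaches 2√2 at the optimal analyzer angles. The singlet-state correlation function and angles below are textbook values, not the paper's specific measurement configurations; finite measurement resolution (the paper's concern) blurs the gap between the two bounds.

```python
import math

def E(a, b):
    # Ideal singlet-state correlation for analyzer angles a and b.
    return -math.cos(a - b)

# Optimal CHSH angle settings for the two parties.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: S = E(a1,b1) - E(a1,b2) + E(a2,b1) + E(a2,b2)
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2), exceeding the classical bound of 2
```

The paper's "convergence vicinities" are regimes where measurement resolution keeps the observed |S| close enough to 2 that the quantum and hidden-variable predictions become operationally indistinguishable.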

36 pages, 5962 KB  
Article
Evaluation of Anomaly-Based Network Intrusion Detection Systems with Unclean Training Data for Low-Rate Attack Detection
by Angela Oryza Prabowo, Deka Julian Arrizki, Baskoro Adi Pratomo, Ahmad Ibnu Fajar, Krisna Badru Wijaya, Hudan Studiawan, Ary Mazharuddin Shiddiqi and Siti Hajar Othman
J. Cybersecur. Priv. 2026, 6(1), 14; https://doi.org/10.3390/jcp6010014 - 6 Jan 2026
Viewed by 440
Abstract
Anomaly-based network intrusion detection systems (NIDSs) complement signature-based detection methods to identify unknown (zero-day) attacks. The integration of machine and deep learning enhanced the efficiency of such NIDSs. However, since anomaly-based NIDSs heavily depend on the quality of the training data, the presence of malicious traffic in the training set can significantly degrade the model’s performance. Purging the training data of such traffic is often impractical. This study investigates performance degradation caused by increasing amounts of malicious traffic in the training data. We introduced varying portions of malicious traffic into the training sets of machine and deep learning models to determine which approach is most resilient to unclean training data. Our experiments revealed that Autoencoders, using a byte frequency feature set, achieved the highest F2 score (0.8989), with only a minor decrease of 0.0009 when trained on the most contaminated dataset. This performance drop was the smallest compared to other algorithms tested, including an Isolation Forest, a Local Outlier Factor, a One-Class Support Vector Machine, and Long Short-Term Memory. Full article
(This article belongs to the Special Issue Intrusion/Malware Detection and Prevention in Networks—2nd Edition)
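The byte-frequency feature set that performed best in this study can be sketched minimally: each payload maps to a 256-dimensional vector of byte frequencies. Normalization by payload length is an assumption here; the paper's exact feature construction may differ.

```python
from collections import Counter

def byte_frequency(payload: bytes):
    """256-dimensional vector of normalized byte frequencies."""
    counts = Counter(payload)           # iterating bytes yields int values
    n = len(payload) or 1               # guard against empty payloads
    return [counts.get(b, 0) / n for b in range(256)]

vec = byte_frequency(b"GET / HTTP/1.1")
print(len(vec))             # 256
print(round(sum(vec), 6))   # 1.0 for any non-empty payload
```

An autoencoder trained on such vectors learns to reconstruct the byte distribution of benign traffic; payloads whose reconstruction error is large are flagged as anomalous, which is why modest training-set contamination shifts the learned distribution only slightly.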

38 pages, 1444 KB  
Review
A Comprehensive Review: The Evolving Cat-and-Mouse Game in Network Intrusion Detection Systems Leveraging Machine Learning
by Qutaiba Alasad, Meaad Ahmed, Shahad Alahmed, Omer T. Khattab, Saba Alaa Abdulwahhab and Jiann-Shuin Yuan
J. Cybersecur. Priv. 2026, 6(1), 13; https://doi.org/10.3390/jcp6010013 - 4 Jan 2026
Viewed by 601
Abstract
Machine learning (ML) techniques have significantly enhanced decision support systems, rendering them more accurate, efficient, and faster. ML classifiers used to secure networks, however, face a disproportionate risk from sophisticated adversarial attacks compared with other areas, such as spam filtering, intrusion, and virus detection, and this introduces a continuous competition between attackers and defenders. Attackers probe ML models with inputs specifically crafted to evade them and induce inaccurate predictions. This paper presents a comprehensive review of attack and defensive techniques in ML-based NIDSs, highlighting the serious challenges these systems face in preserving robustness against adversarial attacks. Based on our analysis, ML-based NIDSs, despite their currently superior performance, require urgent attention to develop more robust techniques that can withstand such attacks. Finally, we discuss the existing approaches to generating adversarial attacks, reveal the limitations of current defensive approaches, and highlight the most recent advancements, such as hybrid defensive techniques that integrate multiple strategies to prevent adversarial attacks in NIDSs, along with the challenges that remain. Full article

35 pages, 6609 KB  
Article
Fairness-Aware Face Presentation Attack Detection Using Local Binary Patterns: Bridging Skin Tone Bias in Biometric Systems
by Jema David Ndibwile, Ntung Ngela Landon and Floride Tuyisenge
J. Cybersecur. Priv. 2026, 6(1), 12; https://doi.org/10.3390/jcp6010012 - 4 Jan 2026
Viewed by 239
Abstract
While face recognition systems are increasingly deployed in critical domains, they remain vulnerable to presentation attacks and exhibit significant demographic bias, particularly affecting African populations. This paper presents a fairness-aware Presentation Attack Detection (PAD) system using Local Binary Patterns (LBPs) with novel ethnicity-aware processing techniques specifically designed for African contexts. Our approach introduces three key technical innovations: (1) adaptive preprocessing with differentiated Contrast-Limited Adaptive Histogram Equalization (CLAHE) parameters and gamma correction optimized for different skin tones, (2) group-specific decision threshold optimization using Equal Error Rate (EER) minimization for each ethnic group, and (3) three novel statistical methods for PAD fairness evaluation, namely Coefficient of Variation analysis, McNemar’s significance testing, and bootstrap confidence intervals, representing the first application of these techniques in Presentation Attack Detection. Comprehensive evaluation on the Chinese Academy of Sciences Institute of Automation-SURF Cross-ethnicity Face Anti-spoofing (CASIA-SURF CeFA) dataset demonstrates significant bias reduction: a 75.6% reduction in the accuracy gap between African and East Asian subjects (from 3.07% to 0.75%), elimination of statistically significant bias across all ethnic group comparisons, and strong overall performance, with 95.12% accuracy and 98.55% AUC. Our work establishes a comprehensive methodology for measuring and mitigating demographic bias in PAD systems while maintaining security effectiveness, contributing both technical innovations and statistical frameworks for inclusive biometric security research. Full article
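One of the fairness statistics the paper applies, the coefficient of variation (CV) of per-group accuracy, is easy to compute with Python's statistics module: a lower CV means more uniform performance across groups. The accuracy figures below are illustrative only, chosen to mirror the reported narrowing of the accuracy gap from 3.07 to 0.75 percentage points; they are not the paper's per-group numbers.

```python
import statistics

def coefficient_of_variation(values):
    # Population standard deviation relative to the mean; dimensionless,
    # so groups with different accuracy scales remain comparable.
    return statistics.pstdev(values) / statistics.mean(values)

before = [92.00, 95.07]   # per-group accuracy pre-mitigation (illustrative)
after = [94.50, 95.25]    # per-group accuracy post-mitigation (illustrative)

cv_before = coefficient_of_variation(before)
cv_after = coefficient_of_variation(after)
print(round(cv_before, 4), round(cv_after, 4))  # smaller CV -> fairer
```

Using a relative measure rather than the raw accuracy gap keeps the comparison meaningful when overall accuracy also shifts between the two configurations.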

21 pages, 1428 KB  
Review
Encryption for Industrial Control Systems: A Survey of Application-Level and Network-Level Approaches in Smart Grids
by Mahesh Narayanan, Muhammad Asfand Hafeez and Arslan Munir
J. Cybersecur. Priv. 2026, 6(1), 11; https://doi.org/10.3390/jcp6010011 - 4 Jan 2026
Viewed by 506
Abstract
Industrial Control Systems (ICS) are fundamental to the operation, monitoring, and automation of critical infrastructure in sectors such as energy, water utilities, manufacturing, transportation, and oil and gas. According to the Purdue Model, ICS encompasses tightly coupled operational technology (OT) and information technology (IT) layers that are becoming increasingly interconnected. Smart grids represent a critical class of ICS; thus, this survey examines encryption and relevant protocols in smart grid communications, with findings extendable to other ICS. Encryption techniques implemented at both the protocol and network layers are among the most effective cybersecurity strategies for protecting communications in increasingly interconnected ICS environments. This paper provides a comprehensive survey of encryption practices within the smart grid as the primary ICS application domain, focusing on protocol-level solutions (e.g., DNP3, IEC 60870-5-104, IEC 61850, ICCP/TASE.2, Modbus, OPC UA, and MQTT) and network-level mechanisms (e.g., VPNs, IPsec, and MACsec). We evaluate these technologies in terms of security, performance, and deployability in legacy and heterogeneous systems that include renewable energy resources. Key implementation challenges are explored, including real-time operational constraints, cryptographic key management, interoperability across platforms, and alignment with NERC CIP, IEC 62351, and IEC 62443. The survey highlights emerging trends such as lightweight Transport Layer Security (TLS) for constrained devices, post-quantum cryptography, and Zero Trust architectures. Our goal is to provide a practical resource for building resilient smart grid security frameworks, with takeaways that generalize to other ICS. Full article
(This article belongs to the Special Issue Security of Smart Grid: From Cryptography to Artificial Intelligence)

18 pages, 1420 KB  
Article
FedPrIDS: Privacy-Preserving Federated Learning for Collaborative Network Intrusion Detection in IoT
by Sameer Mankotia, Daniel Conte de Leon and Bhaskar P. Rimal
J. Cybersecur. Priv. 2026, 6(1), 10; https://doi.org/10.3390/jcp6010010 - 2 Jan 2026
Viewed by 503
Abstract
One of the major challenges for effective intrusion detection systems (IDSs) is continuously and efficiently incorporating changes in cyber-attack tactics, techniques, and procedures in the Internet of Things (IoT). Semi-automated cross-organizational sharing of IDS data is a potential solution. However, a major barrier to IDS data sharing is privacy. In this article, we describe the design, implementation, and evaluation of FedPrIDS: a privacy-preserving federated learning system for collaborative network intrusion detection in IoT. We performed an experimental evaluation of FedPrIDS using three public network-based intrusion datasets: CIC-IDS-2017, UNSW-NB15, and Bot-IoT. Based on the attack-type labels in these datasets, we created five fictitious organizations (Financial, Technology, Healthcare, Government, and University) and evaluated IDS accuracy before and after intelligence sharing. In our evaluation, FedPrIDS showed (1) a detection accuracy net gain of 8.5% to 14.4% over a comparative non-federated approach, with ranges depending on the organization type, where the organization type determines its estimated most likely attack types, privacy thresholds, and data quality measures; (2) a federated detection accuracy across attack types of 90.3% on CIC-IDS-2017, 89.7% on UNSW-NB15, and 92.1% on Bot-IoT; (3) maintained privacy of shared NIDS data via federated machine learning; and (4) reduced inter-organizational communication overhead by an average of 50%, with convergence within 20 training rounds. Full article
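The aggregation step at the heart of such a system can be sketched as plain federated averaging, where the server combines client model weights weighted by local dataset size without ever seeing raw traffic. This is a generic illustration, not a reproduction of FedPrIDS's actual privacy mechanisms:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate flattened model weights from clients,
    weighted by local dataset size; raw training data never leaves a client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical flattened model weights from three organizations.
weights = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.7]]
sizes = [1000, 3000, 1000]  # local training-set sizes
print(fedavg(weights, sizes))  # global model, weighted toward the largest client
```

In a full deployment, the averaged weights would be redistributed to clients for the next of the roughly 20 training rounds reported above.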
(This article belongs to the Section Security Engineering & Applications)

26 pages, 1079 KB  
Article
Secure Local Communication Between Browser Clients and Resource-Constrained Embedded IoT Devices
by Christian Schwinne and Jan Pelzl
J. Cybersecur. Priv. 2026, 6(1), 9; https://doi.org/10.3390/jcp6010009 - 1 Jan 2026
Viewed by 287
Abstract
This contribution outlines a completely new, fully local approach for secure web-based device control on the basis of browser inter-window messaging. Modern smart home IoT (Internet of Things) devices are commonly controlled with proprietary mobile applications via remote servers, which can have significant adverse implications for the end user. Given that many IoT devices in use today are limited in both available memory and processing speed, standard approaches such as HTTPS-based transport security are not always feasible, creating a need for more suitable alternatives for such constrained devices. The proposed local method for lightweight and secure web-based device control using inter-window messaging leverages existing standard web technologies to enable a maximum degree of privacy, choice, and sustainability within the smart home ecosystem. The implemented proof of concept shows that it is feasible to meet essential security objectives in a local web IoT control context while utilizing less than a kilobyte of additional memory compared to an unsecured solution, thereby promoting sustainability by hardening the control protocols of existing devices with too few resources to implement standard web cryptography. In this way, the present work contributes to achieving the vision of a fully open and secure local smart home. Full article
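The abstract does not give the protocol details, but the essential security objective (authenticated, replay-resistant control messages that fit a constrained device's memory budget) can be sketched with a pre-shared key and an HMAC. All names and message formats below are hypothetical:

```python
import hashlib
import hmac
import os

PSK = os.urandom(32)  # pre-shared key provisioned during device setup (assumption)

def sign(message: bytes, nonce: bytes) -> bytes:
    """Authenticate a control message; binding a fresh nonce prevents replay."""
    return hmac.new(PSK, nonce + message, hashlib.sha256).digest()

def verify(message: bytes, nonce: bytes, tag: bytes) -> bool:
    """Constant-time comparison avoids leaking tag bytes through timing."""
    return hmac.compare_digest(sign(message, nonce), tag)

nonce = os.urandom(16)           # challenge issued by the device
msg = b'{"relay": "on"}'         # control command from the browser client
tag = sign(msg, nonce)
print(verify(msg, nonce, tag))                   # True: untampered command
print(verify(b'{"relay": "off"}', nonce, tag))   # False: modified command rejected
```

Symmetric message authentication like this needs only a hash primitive and a few dozen bytes of state, which is why it suits devices that cannot afford a full TLS stack.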
(This article belongs to the Section Security Engineering & Applications)

15 pages, 714 KB  
Article
An In-Depth Measurement of Security and Privacy Risks in the Free Live Sports Streaming Ecosystem
by Nithiya Muruganandham, Yogesh Sharma and Sina Keshvadi
J. Cybersecur. Priv. 2026, 6(1), 8; https://doi.org/10.3390/jcp6010008 - 1 Jan 2026
Viewed by 771
Abstract
Free live sports streaming (FLS) services attract millions of users who, driven by the excitement of live events, often engage with these high-risk platforms. Although these platforms are widely perceived as risky, the specific threats they pose have lacked large-scale empirical analysis. This paper addresses this gap through a comprehensive study of the FLS ecosystem, conducted during two major international sporting events (UCL playoffs and NHL Stanley Cup Playoffs, 2024–2025 season). We analyze the infrastructure, security threats, and privacy violations that define this space. Analysis of 260 unique domains uncovers systemic security risks, including drive-by downloads delivering persistent malware, and widespread privacy violations, such as invasive device fingerprinting that disregards regulations like the General Data Protection Regulation (GDPR). Furthermore, we map the ecosystem’s resilient infrastructure, identifying eight clusters of co-owned domains. These findings imply that effective countermeasures must move beyond traditional blocking and target the centralized infrastructure and ephemeral nature of the FLS ecosystem. Full article
(This article belongs to the Section Privacy)

11 pages, 370 KB  
Communication
Engineering Explainable AI Systems for GDPR-Aligned Decision Transparency: A Modular Framework for Continuous Compliance
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(1), 7; https://doi.org/10.3390/jcp6010007 - 30 Dec 2025
Viewed by 673
Abstract
Explainability is increasingly expected to support not only interpretation, but also accountability, human oversight, and auditability in high-risk Artificial Intelligence (AI) systems. However, in many deployments, explanations are generated as isolated technical reports, remaining weakly connected to decision provenance, governance actions, audit logs, and regulatory documentation. This short communication introduces XAI-Compliance-by-Design, a modular engineering framework for explainable artificial intelligence (XAI) systems that routes explainability outputs and related technical traces into structured, audit-ready evidence throughout the AI lifecycle, designed to align with key obligations under the European Union Artificial Intelligence Act (EU AI Act) and the General Data Protection Regulation (GDPR). The framework specifies (i) a modular architecture that separates technical evidence generation from governance consumption through explicit interface points for emitting, storing, and querying evidence, and (ii) a Technical–Regulatory Correspondence Matrix—a mapping table linking regulatory anchors to concrete evidence artefacts and governance triggers. As this communication does not report measured results, it also introduces an Evidence-by-Design evaluation protocol defining measurable indicators, baseline configurations, and required artefacts to enable reproducible empirical validation in future work. Overall, the contribution is a practical blueprint that clarifies what evidence must be produced, where it is generated in the pipeline, and how it supports continuous compliance and auditability efforts without relying on post hoc explanations. Full article
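The routing of an explainability output into structured, audit-ready evidence can be sketched as a minimal emitter. The event names and article mappings below are illustrative assumptions, not the framework's actual Technical–Regulatory Correspondence Matrix:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical correspondence entries: technical event -> regulatory anchors.
CORRESPONDENCE = {
    "explanation_generated": ["EU AI Act Art. 13", "GDPR Art. 22"],
    "human_override": ["EU AI Act Art. 14"],
}

def emit_evidence(event: str, payload: dict) -> dict:
    """Wrap a technical trace as a structured evidence record with a
    content digest (tamper check) and the regulatory anchors it serves."""
    body = json.dumps(payload, sort_keys=True)
    return {
        "event": event,
        "regulatory_anchors": CORRESPONDENCE.get(event, []),
        "payload": payload,
        "digest": hashlib.sha256(body.encode()).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = emit_evidence("explanation_generated",
                       {"model": "credit_scoring_v3", "method": "SHAP"})
print(record["regulatory_anchors"])
```

The point of such a record is that governance tooling can later query evidence by regulatory anchor rather than reconstructing it post hoc from scattered logs.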
(This article belongs to the Special Issue Data Protection and Privacy)

29 pages, 1277 KB  
Review
A Survey on Acoustic Side-Channel Attacks: An Artificial Intelligence Perspective
by Benjamin Quattrone and Youakim Badr
J. Cybersecur. Priv. 2026, 6(1), 6; https://doi.org/10.3390/jcp6010006 - 29 Dec 2025
Viewed by 783
Abstract
Acoustic Side-Channel Attacks (ASCAs) exploit the sound produced by keyboards and other devices to infer sensitive information without breaching software or network defenses. Recent advances in deep learning, large language models, and signal processing have greatly expanded the feasibility and accuracy of these attacks. To clarify the evolving threat landscape, this survey systematically reviews ASCA research published between January 2020 and February 2025. We categorize modern ASCA methods into three levels of text reconstruction (individual keystrokes, short text such as words and phrases, and long-text regeneration) and analyze the signal processing, machine learning, and language-model decoding techniques that enable them. We also evaluate how environmental factors such as microphone placement, ambient noise, and keyboard design influence attack performance, and we examine the challenges of generalizing laboratory-trained models to real-world settings. This survey makes three primary contributions: (1) it provides the first structured taxonomy of ASCAs based on text generation granularity and decoding methodology; (2) it synthesizes cross-study evidence on environmental and hardware factors that fundamentally shape ASCA performance; and (3) it consolidates emerging countermeasures, including Generative Adversarial Network-based noise masking, cryptographic defenses, and environmental mitigation, while identifying open research gaps and future threats posed by voice-enabled IoT and prospective quantum side-channels. Together, these insights underscore the need for interdisciplinary, multi-layered defenses against rapidly advancing ASCA techniques. Full article

31 pages, 4683 KB  
Article
From Context to Action: Establishing a Pre-Chain Phase Within the Cyber Kill Chain
by Robert Kopal, Bojan Alikavazović and Zlatan Morić
J. Cybersecur. Priv. 2026, 6(1), 5; https://doi.org/10.3390/jcp6010005 - 26 Dec 2025
Viewed by 608
Abstract
The Cyber Kill Chain (CKC) is a prevalent concept in cyber defense; nevertheless, its emphasis on post-reconnaissance phases limits the capacity to foresee attacker activities outside the organizational boundary. This study introduces and empirically substantiates a pre-chain phase, referred to as contextual anticipation, which broadens the temporal framework of the CKC by methodically identifying subtle yet actionable signals prior to reconnaissance. The methodology combines the STEMPLES+ framework for socio-technical scanning with General Morphological Analysis (GMA), generating internally coherent scenarios that are translated into Indicators of Threats (IOT). These indicators connect contextual triggers to threshold-based monitoring activities and established courses of action, forming a reproducible and auditable relationship between foresight analysis and operational defense. Three illustrative cases (a banking merger, the distribution of a phishing kit in underground marketplaces, and wartime contribution scams) showed that contextual anticipation consistently provided quantifiable lead-time benefits ranging from several days to six weeks. This proactive stance enabled measures such as registrar takedowns, targeted awareness campaigns, and anticipatory monitoring before distribution and exploitation. By formalizing CKC-0 as an integrated socio-technical phase, the research enhances cybersecurity practice by demonstrating how diffuse contextual drivers can be converted into organized, actionable mechanisms for proactive resilience. Full article
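The link from contextual trigger to threshold-based monitoring and a pre-agreed course of action can be sketched as a small data structure. Names and thresholds are illustrative, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A pre-chain Indicator of Threat: a contextual signal tied to a
    threshold and a pre-agreed course of action (all values hypothetical)."""
    name: str
    threshold: int
    course_of_action: str
    observed: int = 0

    def record(self, count):
        """Accumulate observations; return the course of action once the
        threshold is crossed, None while the signal stays below it."""
        self.observed += count
        if self.observed >= self.threshold:
            return self.course_of_action
        return None

# Example: lookalike-domain registrations spiking ahead of a banking merger.
ioc = Indicator("lookalike_domain_registrations", threshold=5,
                course_of_action="initiate registrar takedown")
print(ioc.record(3))  # None: below threshold, keep monitoring
print(ioc.record(4))  # threshold crossed: pre-agreed action is returned
```

Encoding the trigger-to-action mapping up front is what makes the anticipation phase auditable: each takedown or awareness campaign traces back to a named indicator and threshold.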
(This article belongs to the Section Security Engineering & Applications)

23 pages, 1828 KB  
Article
Homomorphic Encryption for Confidential Statistical Computation: Feasibility and Challenges
by Yesem Kurt Peker and Rahul Raj
J. Cybersecur. Priv. 2026, 6(1), 4; https://doi.org/10.3390/jcp6010004 - 25 Dec 2025
Viewed by 612
Abstract
Statistical confidentiality focuses on protecting data to preserve its analytical value while preventing identity exposure, ensuring privacy and security in any system handling sensitive information. Homomorphic encryption allows computations on encrypted data without revealing it to anyone other than an owner or an authorized collector. When combined with other techniques, homomorphic encryption offers an ideal solution for ensuring statistical confidentiality. TFHE (Fast Fully Homomorphic Encryption over the Torus) is a fully homomorphic encryption scheme that supports efficient homomorphic operations on Booleans and integers. Building on TFHE, Zama’s Concrete project offers an open-source compiler that translates high-level Python code (version 3.9 or higher) into secure homomorphic computations. This study examines the feasibility of the Concrete compiler to perform core statistical analyses on encrypted data. We implement traditional algorithms for core statistical measures including the mean, variance, and five-point summary on encrypted datasets. Additionally, we develop a bitonic sort implementation to support the five-point summary. All implementations are executed within the Concrete framework, leveraging its built-in optimizations. Their performance is systematically evaluated by measuring circuit complexity, programmable bootstrapping (PBS) count, compilation time, and execution time. We compare these results to findings from previous studies wherever possible. The results show that the complexity of sorting and statistical computations on encrypted data with the Concrete implementation of TFHE increases rapidly, and the size and range of data that can be accommodated are small for most applications. Nevertheless, this work reinforces the theoretical promise of Fully Homomorphic Encryption (FHE) for statistical analysis and highlights a clear path forward: the development of optimized, FHE-compatible algorithms. Full article
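Bitonic sort suits FHE because its compare-and-swap schedule is fixed in advance and never branches on the data, so it maps directly onto a homomorphic circuit. A plaintext sketch for power-of-two inputs (our own illustration, not the paper's Concrete implementation):

```python
def compare_and_swap(a, i, j, ascending):
    """Oblivious compare-and-swap: the same operation runs regardless of the
    values, which is why bitonic networks map well onto encrypted circuits."""
    lo, hi = min(a[i], a[j]), max(a[i], a[j])
    a[i], a[j] = (lo, hi) if ascending else (hi, lo)

def bitonic_sort(a):
    """In-place bitonic sorting network; len(a) must be a power of two."""
    n = len(a)
    k = 2
    while k <= n:          # size of the bitonic sequences being merged
        j = k // 2
        while j >= 1:      # comparison distance within each merge stage
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    compare_and_swap(a, i, partner, (i & k) == 0)
            j //= 2
        k *= 2

data = [7, 3, 5, 8, 1, 6, 2, 4]
bitonic_sort(data)
print(data)  # sorted; the five-point summary then falls out by indexing
```

Under FHE, the min/max pair would be realized with homomorphic comparisons, each costing programmable bootstrapping operations, which is why the PBS count grows quickly with input size.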
(This article belongs to the Special Issue Data Protection and Privacy)

20 pages, 953 KB  
Article
Digital Resilience and the “Awareness Gap”: An Empirical Study of Youth Perceptions of Hate Speech Governance on Meta Platforms in Hungary
by Roland Kelemen, Dorina Bosits and Zsófia Réti
J. Cybersecur. Priv. 2026, 6(1), 3; https://doi.org/10.3390/jcp6010003 - 24 Dec 2025
Viewed by 796
Abstract
Online hate speech poses a growing socio-technological threat that undermines democratic resilience and obstructs progress toward Sustainable Development Goal 16 (SDG 16). This study examines the regulatory and behavioral dimensions of this phenomenon through a combined legal analysis of platform governance and an empirical survey conducted on Meta platforms, based on a sample of young Hungarians (N = 301, aged 14–34). It focuses on Hungary as a relevant case study of a Central and Eastern European (CEE) state. Countries in this region, due to their shared historical development, face similar societal challenges that are also reflected in the online sphere. The combination of high social media penetration, a highly polarized political discourse, and the tensions between platform governance and EU law, namely the Digital Services Act (DSA), makes the Hungarian context particularly suitable for examining digital resilience and the legal awareness of young users. The results reveal a significant “awareness gap”: while a majority of young users can intuitively identify overt hate speech, their formal understanding of platform rules is minimal. Furthermore, their sanctioning preferences often diverge from Meta’s actual policies, indicating a lack of clarity and predictability in platform governance. This gap signals a structural weakness that erodes user trust. The legal analysis highlights the limited enforceability and opacity of content moderation mechanisms, even under the DSA framework. The empirical findings show that current self-regulation models fail to empower users with the necessary knowledge. The contribution of this study is to empirically identify and critically reframe this “awareness gap”. Moving beyond a simple knowledge deficit, we argue that the gap is a symptom of a deeper legitimacy crisis in platform governance. It reflects a rational user response, manifesting as digital resignation, to opaque, commercially driven, and unaccountable moderation systems. By integrating legal and behavioral insights with critical platform studies, this paper argues that achieving SDG 16 requires a dual strategy: (1) fundamentally increasing transparency and accountability in content governance to rebuild user trust, and (2) enhancing user-centered digital and legal literacy through a shared responsibility model. Such a strategy must involve both public and private actors in a coordinated, rights-based approach. Ultimately, this study calls for policy frameworks that strengthen democratic resilience not only through better regulation, but by empowering citizens to become active participants, rather than passive subjects, in the governance of online spaces. Full article
(This article belongs to the Special Issue Multimedia Security and Privacy)

21 pages, 483 KB  
Article
Using Secure Multi-Party Computation to Create Clinical Trial Cohorts
by Rafael Borges, Bruno Ferreira, Carlos Machado Antunes, Marisa Maximiano, Ricardo Gomes, Vítor Távora, Manuel Dias, Ricardo Correia Bezerra and Patrício Domingues
J. Cybersecur. Priv. 2026, 6(1), 2; https://doi.org/10.3390/jcp6010002 - 24 Dec 2025
Viewed by 518
Abstract
The increasing volume of digital medical data offers substantial research opportunities, though its full utilization is hindered by ongoing privacy and security obstacles. This proof-of-concept study explores and confirms the viability of using Secure Multi-Party Computation (SMPC) to ensure the protection and integrity of sensitive patient data, allowing the construction of clinical trial cohorts. Our findings reveal that SMPC facilitates collaborative data analysis on distributed, private datasets with negligible computational costs and optimized data partition sizes. The established architecture incorporates patient information via a blockchain-based decentralized healthcare platform and employs the MPyC library in Python for secure computations on Fast Healthcare Interoperability Resources (FHIR)-format data. The outcomes affirm SMPC’s capacity to maintain patient privacy during cohort formation with minimal overhead. A key contribution of this work is eliminating the need for complex cryptographic key management while maintaining patient privacy, illustrating the potential of SMPC-based methodologies to expand access to medical research data by reducing implementation barriers. Full article
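The core primitive can be illustrated in the clear: additive secret sharing lets parties compute a joint sum without revealing individual inputs. The sketch below (our own simplification, not the MPyC-based implementation, with hypothetical per-hospital counts) tallies eligible patients across three sites:

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(value, n_parties, rng):
    """Split a value into n additive shares that sum to it modulo PRIME;
    any subset of fewer than n shares is uniformly random."""
    shares = [rng.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

rng = random.Random(42)
# Each hospital holds a private count of trial-eligible patients.
counts = [17, 25, 9]
shares = [share(c, 3, rng) for c in counts]

# Party p sums the p-th share from every hospital; no party sees a raw count.
partials = [sum(s[p] for s in shares) % PRIME for p in range(3)]
print(sum(partials) % PRIME)  # joint cohort size: 51
```

Because shares are combined with plain modular addition, no shared decryption key ever exists, which is the key-management advantage the study highlights.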
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)

23 pages, 1267 KB  
Article
Huffman Tree and Binary Conversion for Efficient and Secure Data Encryption and Decryption
by Suchart Khummanee, Thanapat Cheawchanwattana, Chanwit Suwannapong, Sarutte Atsawaraungsuk and Kritsanapong Somsuk
J. Cybersecur. Priv. 2026, 6(1), 1; https://doi.org/10.3390/jcp6010001 - 22 Dec 2025
Viewed by 406
Abstract
This study proposes Huffman Tree and Binary Conversion (HTB), a preprocessing algorithm that transforms the Huffman tree into a binary representation before the encryption process. HTB improves the structural readiness of plaintext by combining the Huffman code with a deterministic binary representation of the Huffman tree. The binary representation of the Huffman tree and the compressed information are then encrypted with standard cryptographic algorithms. Six datasets, divided into two groups (short and long texts), were chosen to evaluate compression behavior and processing cost, and AES and RSA were combined with the proposed method to analyze the encryption and decryption cycles. The experimental results show that HTB introduces a small linear-time overhead; that is, it is slightly slower than applying the Huffman code alone. Across these datasets, HTB maintained a consistently low processing cost, with processing time below one millisecond in both the encoding and decoding processes. For long texts, the structural conversion cost becomes amortized across larger encoded messages, and the reduction in plaintext size leads to fewer encryption blocks for both AES and RSA. The reduced plaintext size lowers the number of AES encryption blocks by approximately 30–45% and decreases the number of encryption and decryption rounds in RSA. The encrypted binary representation of the Huffman tree also decreased structural ambiguity and reduced the potential exposure of frequency-related metadata. Although HTB does not replace cryptographic security, it enhances the structural consistency of compression. Therefore, the proposed method demonstrates scalability, predictable overhead, and improved suitability for cryptographic workflows. Full article
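The idea of pairing a Huffman-coded payload with a deterministic bit serialization of the tree can be sketched as follows. This is our own minimal illustration (a standard pre-order encoding); HTB's exact conversion is not specified in the abstract:

```python
import heapq
from collections import Counter

def build_codes(text):
    """Build a Huffman tree and its code table from symbol frequencies.
    The index in each heap entry is a deterministic tie-breaker."""
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, i, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (left, right)))
    tree = heap[0][2]
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(tree, "")
    return tree, codes

def serialize_tree(node):
    """Pre-order bit encoding: '1' + 8-bit symbol for a leaf, '0' for an
    internal node. Both this string and the payload would be encrypted."""
    if isinstance(node, tuple):
        return "0" + serialize_tree(node[0]) + serialize_tree(node[1])
    return "1" + format(ord(node), "08b")

tree, codes = build_codes("mississippi")
payload = "".join(codes[c] for c in "mississippi")
print(len(serialize_tree(tree)) + len(payload))  # total bits handed to the cipher
```

Encrypting the serialized tree alongside the payload is what hides the frequency-related metadata that a plaintext Huffman table would otherwise leak.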
(This article belongs to the Section Cryptography and Cryptology)
