Search Results (3,777)

Search Parameters:
Keywords = cybersecurity

24 pages, 4155 KB  
Article
Federated Learning and Data Mining-Based Botnet Attack Detection Framework for Internet of Things
by Kalupahana Liyanage Kushan Sudheera, Lokuge Lehele Gedara Madhuwantha Priyashan, Oruthota Arachchige Sanduni Pavithra, Malwaththe Widanalage Tharindu Aththanayake, Piyumi Bhagya Sudasinghe, Wijethunga Gamage Chatum Aloj Sankalpa, Gammana Guruge Nadeesha Sandamali and Peter Han Joo Chong
Sensors 2026, 26(5), 1573; https://doi.org/10.3390/s26051573 - 2 Mar 2026
Abstract
Botnet attacks in Internet of Things (IoT) environments often occur as multi-stage campaigns, making early and reliable detection difficult across distributed and privacy-sensitive networks. Centralized detection approaches are often limited by heterogeneous traffic characteristics, severe data imbalance, and the need to aggregate large volumes of raw network data, raising scalability and privacy concerns. To address these challenges, this paper proposes FDA, a federated learning-based and data mining-driven framework for stage-aware botnet attack detection in IoT networks. FDA operates at network gateways, where anomalous traffic is first detected and then abstracted into compact and interpretable patterns using Frequent Itemset Mining (FIM). This pattern-based representation reduces noise and local traffic bias, enabling more robust learning across different IoT networks. Lightweight neural network models are trained locally at gateways, and a global model is learned through federated aggregation of model parameters, avoiding direct sharing of raw network data while enabling gateways to collaboratively learn evolving attack patterns across different IoT networks. Experimental results show that FDA achieves anomaly detection F1-scores above 99% across all gateways and multi-stage botnet attack classification F1-scores in the range of 48–49%, which are comparable to centralized machine-learning baselines while operating under decentralized and privacy-preserving constraints. Overall, FDA provides a practical, privacy-preserving, and effective solution for distributed botnet attack stage detection in real-world IoT deployments.
(This article belongs to the Special Issue Feature Papers in Communications Section 2025–2026)
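The Frequent Itemset Mining step this abstract relies on can be illustrated with a toy Apriori-style miner over per-flow attribute sets. This is a generic sketch, not the paper's implementation; all attribute names are made up for illustration:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Toy Apriori-style miner: return every itemset whose support
    (fraction of transactions containing it) is >= min_support."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    k = 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        frequent = []
        for c in candidates:
            support = sum(1 for t in transactions if c <= t) / len(transactions)
            if support >= min_support:
                result[c] = support
                frequent.append(c)
        # Join step: build (k+1)-itemsets from the frequent k-itemsets.
        k += 1
        candidates = list({a | b for a, b in combinations(frequent, 2) if len(a | b) == k})
    return result

# Flows abstracted as sets of discrete traffic attributes (illustrative names).
flows = [
    frozenset({"dst_port_23", "tcp", "high_rate"}),
    frozenset({"dst_port_23", "tcp", "high_rate"}),
    frozenset({"dst_port_80", "tcp"}),
]
patterns = frequent_itemsets(flows, min_support=0.5)
```

In a gateway setting, patterns like these would replace raw flows as the local training representation, which is what lets the federated rounds exchange model parameters rather than traffic data.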
43 pages, 2158 KB  
Article
A Lightweight Post-Quantum Anonymous Attestation Framework for Traceable and Comprehensive Privacy Preservation in VANETs
by Esti Rahmawati Agustina, Kalamullah Ramli, Ruki Harwahyu, Teddy Surya Gunawan, Muhammad Salman, Andriani Adi Lestari and Arif Rahman Hakim
J. Cybersecur. Priv. 2026, 6(2), 44; https://doi.org/10.3390/jcp6020044 - 2 Mar 2026
Abstract
Vehicular ad hoc networks (VANETs) require authentication systems that balance privacy, scalability, and post-quantum security. While lattice-based V-LDAA offers quantum resistance, it faces challenges in signature size, traceability, and integration. We propose post-quantum traceable direct anonymous attestation (PQ-TDAA), combining National Institute of Standards and Technology (NIST)-standard Dilithium2 and Falcon-512 signatures with adapted Beullens-style blind signatures and Fiat–Shamir simplified Schnorr proofs, reducing proof size by 69.2% (8 kB vs. V-LDAA’s 26 kB) and supporting European Telecommunications Standards Institute Technical Specification (ETSI TS) 102 941-compliant traceability through Road Side Unit (RSU)-assisted verification. Evaluated using SageMath, Python 3.11, and NS-3, PQ-TDAA-Falcon-512 achieves 8.1 ms and 49.7 ms end-to-end delays at 10 and 20 vehicles, respectively, with 64.7 Mbps goodput on congested 802.11p channels, showing promise for densities of ≤50 vehicles and advantages over Dilithium2. Real-world validation on ARM Cortex-A76 (Raspberry Pi 5, emulating automotive OBUs) yields sub-0.5 ms V2V cycles within 100 ms beacon intervals, supporting practical embedded deployment. Future work will extend PQ-TDAA to emerging 5G and NR-V2X settings, integrate more realistic mobility and channel models through coupled NS-3 and SUMO co-simulation, and investigate side-channel resistance for enhanced scalability and robustness in real deployments.
(This article belongs to the Special Issue Applied Cryptography)
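The Fiat–Shamir Schnorr proofs mentioned in this abstract turn an interactive identification protocol into a non-interactive proof of knowledge by deriving the challenge from a hash of the transcript. A toy sketch over a tiny Schnorr group (the group parameters here are demo-sized and wildly insecure; the paper's lattice-based constructions are not shown):

```python
import hashlib

# Toy Schnorr group: p prime, q | p-1, g of order q (insecure demo sizes).
p, q, g = 23, 11, 2

def H(*parts):
    """Fiat–Shamir challenge: hash of the transcript, reduced mod q."""
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, nonce):
    """Non-interactive proof of knowledge of x, where y = g^x mod p."""
    y = pow(g, x, p)
    r = nonce % q                  # in practice: fresh cryptographic randomness
    t = pow(g, r, p)               # commitment
    c = H(g, y, t)                 # challenge derived by hashing, no verifier round-trip
    s = (r + c * x) % q            # response
    return y, t, s

def verify(y, t, s):
    c = H(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=7, nonce=5)
assert verify(y, t, s)
```

The verification identity g^s = t · y^c mod p holds because s = r + c·x; the same transform scales to the larger groups and lattice settings the paper targets.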
34 pages, 13258 KB  
Article
A Robust Image Encryption Framework Using Deep Feature Extraction and AES Key Optimization
by Sahara A. S. Almola, Hameed A. Younis and Raidah S. Khudeyer
Cryptography 2026, 10(2), 16; https://doi.org/10.3390/cryptography10020016 - 2 Mar 2026
Abstract
This article presents a novel framework for encrypting color images to enhance digital data security using deep learning and artificial intelligence techniques. The system employs a two-model neural architecture: the first, a Convolutional Neural Network (CNN), verifies sender authenticity during user authentication, while the second extracts unique fingerprint features. These features are converted into high-entropy encryption keys using Particle Swarm Optimization (PSO), minimizing key similarity and ensuring that no key is reused or transmitted. Keys are generated in real time simultaneously at both the sender and receiver ends, preventing interception or leakage and providing maximum confidentiality. Encrypted images are secured using the Advanced Encryption Standard (AES-256) with keys uniquely bound to each user’s biometric identity, ensuring personalized privacy. Evaluation using security and encryption metrics yielded strong results: entropy of 7.9991, correlation coefficient below 0.00001, NPCR of 99.66%, UACI of 33.9069%, and key space of 2^256. Although the final encryption employs an AES-256 key (key space of 2^256), this key is derived from a much larger deep-key space of 2^8192 generated by multi-layer neural feature extraction and optimized via PSO, thereby significantly enhancing the overall cryptographic strength. The system also demonstrated robustness against common attacks, including noise and cropping, while maintaining recoverable original content. Furthermore, the neural models achieved classification accuracy exceeding 99.83% with an error rate below 0.05%, confirming the framework’s reliability and practical applicability. This approach provides a secure, dynamic, and efficient image encryption paradigm, combining biometric authentication and AI-based feature extraction for advanced cybersecurity applications.
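The NPCR and UACI figures quoted in this abstract are standard differential-sensitivity metrics between two cipher images: NPCR counts the fraction of pixel positions that differ, and UACI averages the absolute intensity change relative to the 8-bit maximum. A pure-Python sketch on flat pixel lists (the 4-pixel "images" below are invented for illustration):

```python
def npcr(c1, c2):
    """Number of Pixels Change Rate: % of positions where ciphers differ."""
    assert len(c1) == len(c2)
    return 100.0 * sum(a != b for a, b in zip(c1, c2)) / len(c1)

def uaci(c1, c2):
    """Unified Average Changing Intensity for 8-bit pixels."""
    assert len(c1) == len(c2)
    return 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (255 * len(c1))

# Two illustrative 4-pixel cipher images (flattened).
a = [0, 255, 128, 64]
b = [255, 0, 128, 65]
```

Values near 99.6% (NPCR) and 33.4% (UACI) are the theoretical expectations for ideally randomized 8-bit ciphers, which is why the paper's 99.66% and 33.91% read as strong results.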
23 pages, 919 KB  
Article
A Hybrid Deep Learning Architecture for Intrusion Detection Deploying Multi-Scale Feature Interaction and Temporal Modeling
by Eva Jakubcova, Maros Jakubec and Peter Pocta
AI 2026, 7(3), 87; https://doi.org/10.3390/ai7030087 - 2 Mar 2026
Abstract
Network intrusion detection is a core component of modern cybersecurity, but it remains challenging due to highly imbalanced traffic, heterogeneous feature types, and the presence of short-term temporal dependencies in network flows. Traditional machine learning models often rely on handcrafted features and struggle with complex attack patterns, while deep learning approaches may become overly complex or difficult to interpret. In this paper, we propose a neural intrusion detection method that combines structured feature preprocessing with a compact hybrid architecture. Numerical and categorical traffic features are processed separately using robust normalisation and trainable embeddings, and then merged into a unified representation. The proposed model builds on a multi-scale feature interaction block, followed by channel-wise attention and a single bidirectional gated recurrent unit layer with attention pooling to capture short-term temporal behavior. The method is evaluated on two widely used benchmark datasets, i.e., the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. Experimental results show that the proposed approach consistently outperforms the classical machine learning baselines and achieves competitive or superior performance compared to the recent deep learning methods proposed in the literature. The results confirm that the proposed architectural choices effectively capture both feature interactions and temporal patterns in network traffic.
(This article belongs to the Section AI Systems: Theory and Applications)
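The attention-pooling step named in this abstract collapses a sequence of recurrent hidden states into one vector: each time step gets a learned score, softmax normalises the scores into weights, and the pooled output is the weighted sum. A plain-Python sketch (the scoring vector and hidden states below are illustrative constants, not trained parameters):

```python
import math

def attention_pool(hidden, w):
    """Pool a sequence of hidden vectors into one vector.
    hidden: list of T vectors (lists of floats); w: scoring vector."""
    scores = [sum(hi * wi for hi, wi in zip(h, w)) for h in hidden]  # e_t = h_t . w
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]                         # numerically stable softmax
    total = sum(exps)
    alphas = [e / total for e in exps]                               # attention weights, sum to 1
    dim = len(hidden[0])
    return [sum(alphas[t] * hidden[t][d] for t in range(len(hidden)))
            for d in range(dim)]

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # T=3 hidden states, dimension 2
pooled = attention_pool(H, w=[1.0, 1.0])
```

Compared with taking only the last hidden state, this lets the classifier weight whichever time steps in a flow window carry the attack-relevant behavior.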

3 pages, 131 KB  
Editorial
Cybersecurity in the IoT
by Christos Tryfonopoulos and Nicholas Kolokotronis
Future Internet 2026, 18(3), 127; https://doi.org/10.3390/fi18030127 - 2 Mar 2026
Abstract
The Internet of Things (IoT) has evolved into a vast ecosystem of massively interconnected devices delivering intelligent services across consumer, commercial, and industrial environments [...]
(This article belongs to the Special Issue Cybersecurity in the IoT)
41 pages, 815 KB  
Article
XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(2), 43; https://doi.org/10.3390/jcp6020043 - 2 Mar 2026
Abstract
High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical–regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO/IEC 42001. The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic, intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest, toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. By linking explanation reports, drift statistics and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations.
(This article belongs to the Section Security Engineering & Applications)
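A tamper-evident evidence bundle of the kind this abstract describes can be approximated with a hash chain: each logged artefact records the digest of its predecessor, so editing any past entry invalidates every later digest. A minimal sketch with illustrative field names (not the paper's schema):

```python
import hashlib
import json

def append_entry(chain, artefact):
    """Append an artefact whose digest covers the previous entry's digest."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = {"prev": prev, "artefact": artefact}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "digest": digest})

def verify_chain(chain):
    """Recompute every digest; any edit to a past entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "artefact": entry["artefact"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

log = []
append_entry(log, {"type": "shap_report", "model": "rf-v1"})
append_entry(log, {"type": "drift_stat", "psi": 0.07})
assert verify_chain(log)
log[0]["artefact"]["model"] = "rf-v2"   # tampering with history...
assert not verify_chain(log)            # ...is detected on verification
```

Anchoring each compliance artefact this way is what makes a decision dossier auditable after the fact rather than merely logged.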
38 pages, 3007 KB  
Systematic Review
Generative AI Integration in Education: Theoretical Review and Future Directions Informed by the ADO Framework
by Raghu Raman, Krishnashree Achuthan and Prema Nedungadi
Information 2026, 17(3), 241; https://doi.org/10.3390/info17030241 - 2 Mar 2026
Abstract
The accelerated integration of Generative Artificial Intelligence (GenAI) tools such as ChatGPT is transforming learner engagement, instructional design, and institutional governance in education. This systematic literature review synthesizes theory-driven scholarship on GenAI adoption and pedagogical use through the Antecedents–Decisions–Outcomes (ADO) framework, examining how cognitive, motivational, technological, and institutional factors collectively shape implementation and learning outcomes. Drawing primarily on the Technology Acceptance Model (TAM), Self-Determination Theory (SDT), and Institutional Theory, the review integrates complementary insights from Constructivist Learning and Diffusion of Innovations perspectives to conceptualize how antecedents influence decision-making and outcomes across educational settings. The findings indicate that learner motivation, perceived usefulness, digital literacy, and institutional readiness constitute key antecedents affecting GenAI adoption. Decision processes—spanning instructional design, ethical regulation, and pedagogical adaptation—mediate how these antecedents translate into practice. Outcomes reveal a dual trajectory: GenAI enhances personalization, feedback, and self-regulated learning, yet introduces challenges related to ethical ambiguity and overreliance. The review offers a conceptually integrated synthesis that bridges motivational, technological, and organizational perspectives, advancing a theoretical roadmap for ethical and sustainable GenAI adoption. For educators and policymakers, the findings emphasize transparent governance, faculty capacity-building, and equitable access to ensure that innovation remains aligned with pedagogical integrity and human-centered values.
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)

14 pages, 392 KB  
Review
Distributed Trust in the Age of Malware Blockchain Applications
by Paul A. Gagniuc, Maria-Iuliana Dascălu and Ionel-Bujorel Păvăloiu
Algorithms 2026, 19(3), 185; https://doi.org/10.3390/a19030185 - 2 Mar 2026
Abstract
Blockchain technology is redefining the foundations of cybersecurity by introducing decentralized, tamper-resistant mechanisms for data integrity, trust management, and malware intelligence sharing. Traditional detection systems, which are dependent on centralized control and opaque validation, remain vulnerable to data manipulation and systemic compromise. The integration of blockchain transforms these paradigms because it provides verifiable provenance, distributed consensus, and autonomous enforcement through smart contracts. This review synthesizes fifteen years of progress (2010–2025) at the intersection of blockchain and malware detection and discusses core architectures, consensus protocols, and cryptographic properties that underpin decentralized defenses. The review follows a structured literature review methodology, which focuses on blockchain architectures, consensus protocols, and malware-detection pipelines reported in the cybersecurity literature. It also analyzes blockchain detection pipelines, performance tradeoffs, and data protection mechanisms in distributed learning systems and artificial intelligence models. Special attention is given to scalability constraints, regulatory compliance, and interoperability challenges that shape adoption. The review identifies three dominant design patterns: (i) decentralized threat-intelligence sharing with provenance guarantees, (ii) consensus-driven validation of malware artifacts, and (iii) on-chain trust and reputation mechanisms for detector accountability. Through the union of blockchain, artificial intelligence, edge computation, and federated learning, cybersecurity attains an auditable and adaptive architecture resilient to adversarial threats. The study concludes that blockchain provides a verifiable trust infrastructure for malware detection, but its practical deployment requires faster transaction validation and stronger protection of sensitive data; future research should address performance optimization and regulatory compliance.
31 pages, 3616 KB  
Article
A Hybrid Ensemble Framework for Rare Event Detection in Large-Scale Tabular Data
by Natalya Maxutova, Akmaral Kassymova, Kuanysh Kadirkulov, Aisulu Ismailova, Gulkiz Zhidekulova, Zhanar Azhibekova, Jamalbek Tussupov, Quvvatali Rakhimov and Zhanat Kenzhebayeva
Computers 2026, 15(3), 151; https://doi.org/10.3390/computers15030151 - 1 Mar 2026
Abstract
Rare event detection in large tabular data remains a computationally challenging problem due to class imbalance, heterogeneous feature distributions, and unstable thresholds. Traditional machine learning approaches based on individual models and fixed thresholds often exhibit limited robustness and reproducibility in such settings. This paper proposes a hybrid ensemble framework for rare event detection that integrates heterogeneous machine learning models through threshold-aware probabilistic aggregation. The framework combines gradient-boosted decision trees, regularized linear models, and neural networks, leveraging their complementary inductive biases. To ensure reproducibility and robust performance evaluation under severe class imbalance, a leakage-controlled evaluation protocol is employed, including rootwise summation, probability calibration, and validation-based threshold optimization. The proposed approach is evaluated on a large tabular dataset containing approximately 50,000 observations. Experimental results demonstrate improved rare event detection and robust generalization performance compared to individual baseline models. Explainability is achieved through Shapley Additive Explanations (SHAP)-based attribution analysis and clustering in the explanation space, enabling transparent analysis of ensemble decision-making behavior. The proposed framework represents a general-purpose computational solution for rare event detection and can be applied to a wide range of data-driven decision-making and anomaly detection problems.
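The validation-based threshold optimization named in this abstract can be sketched directly: sweep candidate thresholds over calibrated validation probabilities and keep the one that maximises F1, which matters under class imbalance because the default 0.5 cut-off typically misses rare positives. The scores below are made up for illustration:

```python
def best_f1_threshold(probs, labels, grid=None):
    """Return (threshold, f1) maximising F1 on validation data."""
    grid = grid if grid is not None else sorted(set(probs))
    best = (0.5, -1.0)
    for thr in grid:
        pred = [p >= thr for p in probs]
        tp = sum(p and y for p, y in zip(pred, labels))
        fp = sum(p and not y for p, y in zip(pred, labels))
        fn = sum((not p) and y for p, y in zip(pred, labels))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best[1]:
            best = (thr, f1)
    return best

# Rare positives cluster well below 0.5: a fixed 0.5 threshold misses two of them.
probs  = [0.05, 0.10, 0.30, 0.35, 0.90]
labels = [0,    0,    1,    1,    1]
thr, f1 = best_f1_threshold(probs, labels)
```

The chosen threshold is a fitted quantity, which is why the paper insists it be selected on a validation split rather than on the test set.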

39 pages, 1457 KB  
Review
Algorithmic Challenges and Regulatory Frameworks of Artificial Intelligence in Mexico: A Prospective Analysis from the Perspective of Digital Governance Theory
by Eduardo Arguijo, Yenny Villuendas-Rey, Arturo Cruz-Jiménez, Jonatan Mireles-Hernández, Oscar Camacho-Nieto and Mario Aldape-Pérez
Computers 2026, 15(3), 150; https://doi.org/10.3390/computers15030150 - 1 Mar 2026
Abstract
The rapid integration of artificial intelligence (AI) has heightened the need for evidence-based regulatory frameworks to effectively address its legal, ethical, and societal consequences. This research carefully analyzes the prevailing landscape of AI-related legislation in Mexico. The study conducts a comprehensive review of legislative initiatives related to AI regulation submitted to Mexican legislative bodies, encompassing those approved or pending in commissions. This process leads to the identification and categorization of outstanding initiatives across seven policy areas: Congress, Education, Health, Intellectual Property, Justice, AI Promotion, and AI Regulation. As a principal contribution, this work offers the first exhaustive mapping and thematic classification of legislative activity related to AI in Mexico. Furthermore, the analysis identifies systemic regulatory deficiencies, such as the lack of AI-specific legislation, the limited scope of existing data protection laws in relation to AI systems, and an absence of technical provisions concerning ethical design, algorithmic transparency, cybersecurity, and accountability frameworks. By showcasing these deficiencies, the study contributes a diagnostic framework for evaluating AI governance readiness in emerging economies. The findings emphasize the importance of establishing a comprehensive, technically sound, and internationally harmonized regulatory framework to reduce AI-related risks while promoting responsible innovation in Mexico.
(This article belongs to the Section AI-Driven Innovations)

32 pages, 2913 KB  
Article
Integrating Generative Design and Artificial Intelligence for Optimized Energy-Efficient Composite Facades in Next-Generation Smart Buildings
by Mohammad Q. Al-Jamal, Ayoub Alsarhan, Mahmoud AlJamal, Qasim Aljamal, Bashar S. Khassawneh, Amina Salhi and Hanan Hayat
Sustainability 2026, 18(5), 2379; https://doi.org/10.3390/su18052379 - 1 Mar 2026
Abstract
The pursuit of energy efficiency and sustainability in the built environment has placed façade systems at the forefront of innovation in architectural design. This study proposes an integrated framework that combines generative design techniques with artificial intelligence (AI) to optimize composite façade configurations for next-generation smart buildings. Using parametric modeling, a wide design space of façade geometries and material compositions was generated, capturing trade-offs between thermal performance, daylight, structural strength, and aesthetic variability. Artificial intelligence algorithms, particularly machine learning models, are trained on simulation-derived performance datasets to rapidly predict key indicators such as energy consumption, thermal transmittance (U-value), and solar heat gain coefficients. The proposed approach achieved a predictive accuracy of 99.85%, enabling efficient exploration of optimal solutions across high-dimensional design alternatives. A multi-objective optimization strategy was further implemented to balance energy efficiency with structural and aesthetic constraints, producing façade configurations that outperform conventional designs. The findings demonstrate that integrating generative design with AI-based prediction not only accelerates the façade design process but also provides actionable pathways toward net-zero energy buildings. This research highlights the transformative potential of AI-driven generative workflows in advancing sustainable architecture and delivering intelligent, adaptive and performance-oriented façades for future urban environments.
(This article belongs to the Special Issue Building a Sustainable Future: Sustainability and Innovation in BIM)
24 pages, 4005 KB  
Article
Explainable Firewall Penetration Testing Method Employing Machine Learning
by Algimantas Venčkauskas, Jevgenijus Toldinas and Nerijus Morkevičius
Electronics 2026, 15(5), 1030; https://doi.org/10.3390/electronics15051030 - 1 Mar 2026
Abstract
Cyber adversaries are becoming more sophisticated, creating complex security challenges as digital services expand. The reliability of the firewall is of the utmost importance in the context of network security since it serves as the first line of protection. Penetration testing is an approach used to evaluate the reliability of a firewall and improve security by uncovering exploitable flaws. Frequently, penetration testing solutions are developed using machine learning, and it is of the utmost importance to explain the obtained results during the penetration testing. The emergence of explainable AI (XAI) addresses transparency in ML models, which is essential for informed cybersecurity decisions. Additionally, effective penetration testing reports are crucial for organizations, helping them comprehend and address vulnerabilities with tailored mitigation strategies. This study contributes to firewall security by developing an explainable penetration testing method, which includes two machine learning classification models: a binary model for detecting attacks and a multiclass model for identifying attack types with an explainability feature. This research introduces a novel explainability method that emphasizes significant features related to attack types based on multiclass predictions and proposes an approach using the extended System Security Assurance Ontology (SSAO) to clarify vulnerabilities and suggest alternative mitigation strategies. After evaluating numerous ML algorithms for the CIC-IDS2017 dataset, the Fine Tree model was considered to have the greatest performance. For the binary model, it achieved a validation accuracy of 99.7%, while for the multiclass model, it achieved a validation accuracy of 99.6%. Both models were used to test the firewall for vulnerabilities. Firewall penetration testing using the binary model achieves an accuracy of 82.1%, while the multiclass model achieves an accuracy of 78.7%. Full article
(This article belongs to the Special Issue Recent Advances in Information Security and Data Privacy, 2nd Edition)
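The idea of surfacing which features drive a flagged attack type can be approximated without a full XAI library: rank the features of a flagged sample by how far each deviates from the benign baseline, in units of benign standard deviation. This z-score sketch is a generic stand-in, not the paper's explainability method, and all feature names are hypothetical:

```python
import statistics

def top_deviating_features(sample, benign_rows, names, k=2):
    """Rank features of `sample` by |z-score| against benign traffic."""
    ranked = []
    for j, name in enumerate(names):
        col = [row[j] for row in benign_rows]
        mu = statistics.mean(col)
        sd = statistics.pstdev(col) or 1.0   # guard against zero variance
        ranked.append((abs((sample[j] - mu) / sd), name))
    return [name for _, name in sorted(ranked, reverse=True)[:k]]

# Hypothetical flow features: packet rate, SYN flag count, payload entropy.
benign = [[10, 1, 0.2], [12, 1, 0.3], [11, 2, 0.25]]
names = ["pkt_rate", "syn_flags", "entropy"]
alert = [120, 1, 0.26]   # pkt_rate wildly above the benign baseline
suspects = top_deviating_features(alert, benign, names)
```

A report generator can then attach mitigation text to the top-ranked features per predicted attack class, which is the role the SSAO ontology plays in the paper.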

17 pages, 1099 KB  
Article
LLM Security and Safety: Insights from Homotopy-Inspired Prompt Obfuscation
by Luis Eduardo Lazo Vera, Hamed Jelodar and Roozbeh Razavi-Far
AI 2026, 7(3), 83; https://doi.org/10.3390/ai7030083 - 1 Mar 2026
Abstract
In this study, we propose a homotopy-inspired prompt obfuscation framework to enhance understanding of security and safety vulnerabilities in Large Language Models (LLMs). By systematically applying carefully engineered prompts, we demonstrate how latent model behaviors can be influenced in unexpected ways. Our experiments encompassed 15,732 prompts, including 10,000 high-priority cases, across LLaMA, DeepSeek, and KIMI for code generation, with Claude used for verification. The results reveal critical insights into current LLM safeguards, highlighting the need for more robust defense mechanisms, reliable detection strategies, and improved resilience. Importantly, this work provides a principled framework for analyzing and mitigating potential weaknesses, with the goal of advancing safe, responsible, and trustworthy AI technologies.

20 pages, 1045 KB  
Systematic Review
Cybersecurity of Cyber-Physical Systems in the Quantum Era: A Systematic Literature Review-Based Approach
by Siler Amador, César Pardo and Raúl Mazo
Future Internet 2026, 18(3), 125; https://doi.org/10.3390/fi18030125 - 28 Feb 2026
Abstract
The convergence of cyber-physical systems (CPSs), operational technologies (OTs), industrial control systems (ICSs), and quantum computing poses unprecedented challenges for the security and resilience of critical infrastructures (CIs). As quantum capabilities progress, classical cryptographic mechanisms such as RSA and ECC face increasing risks from quantum algorithms (Shor and Grover), while CPS and OT remain constrained by long life cycles, heterogeneity, and limited upgrade capabilities. This study conducts a systematic literature review (SLR) following a GQM-PICO-PRISMA methodological framework to examine 66 primary studies, selected from 1,522 records identified in seven scientific databases and published between 2005 and 2025. The review identifies dominant research domains, ranging from IoT/IIoT security to machine learning-based intrusion detection in CPS/OT environments, and synthesizes key challenges. Findings reveal significant fragmentation in CPS taxonomies, limited integration of post-quantum cryptography (PQC) into OT/ICS protocols, a scarcity of real-world datasets, and insufficient quantum threat modeling (QTM). This work consolidates and structures prior evidence into a literature-derived classification of quantum-era CPS/OT cybersecurity topics and distills a prioritized research agenda for advancing quantum-resilient architectures.
(This article belongs to the Section Cybersecurity)
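As a rough illustration of the quantum risk the abstract mentions (not taken from the paper itself): Shor's algorithm breaks RSA and ECC outright, while Grover's algorithm searches an unstructured space of size 2^n in roughly 2^(n/2) steps, effectively halving the security level of symmetric keys. The sketch below encodes that halving rule; the function name and the simplification are assumptions for illustration only.

```python
# Hypothetical illustration: Grover's quadratic speedup roughly halves the
# effective security of a symmetric key, which is why post-quantum guidance
# favours 256-bit symmetric keys. (Shor affects RSA/ECC far more severely.)
def effective_security_bits(key_bits: int, quantum: bool = True) -> int:
    """Classical brute force needs ~2^key_bits tries; Grover ~2^(key_bits/2)."""
    return key_bits // 2 if quantum else key_bits

# AES-128 retains only ~64-bit effective security against a Grover attacker.
print(effective_security_bits(128))  # 64
print(effective_security_bits(256))  # 128
```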
32 pages, 2478 KB  
Article
Blockchain Security Using Confidentiality, Integrity, and Availability for Secure Communication
by Chukwuebuka Francis Ikenga-Metuh and Abel Yeboah-Ofori
Blockchains 2026, 4(1), 3; https://doi.org/10.3390/blockchains4010003 - 28 Feb 2026
Viewed by 45
Abstract
Background: Blockchain technology has emerged as a transformative communication solution for securing distributed systems. However, several vulnerabilities exist during transactions, including latency and network congestion issues during mempool processing, topology weaknesses, cross-chain bridge exploits, and cryptographic weaknesses. These vulnerabilities have led to attacks that threaten system integrity, including Block Extractable Value (BEV) attacks, Maximal Extractable Value (MEV) attacks, sandwich attacks, liquidation attacks, and Decentralized Finance (DeFi) reordering attacks, among others. Thus, implementing a robust security framework based on the Confidentiality, Integrity, and Availability (CIA) triad remains critical for addressing modern blockchain threats. Objective: This paper examines blockchain technology and its various vulnerabilities and attacks to determine how criminals exploit the system during transactions, and evaluates the impact on users. It then implements a blockchain attack in a “MasterChain” virtual environment to demonstrate how vulnerable spots can be practically exploited, and discusses the application of the CIA security triad through modern cryptographic primitives. Methods: The approach follows Hevner’s design science framework, which emphasizes creating innovative artifacts that address identified problems while contributing to the knowledge base through rigorous evaluation. We developed the MasterChain tool in Python with Flask for distributed node communication, using the Elliptic Curve Digital Signature Algorithm (ECDSA) over the Standards for Efficient Cryptography prime 256-bit Koblitz curve (secp256k1) for digital signatures and Secure Hash Algorithm 3 (SHA-3, Keccak-256) hashing for block integrity. Results: The results show how the CIA triad has been implemented to provide secure communication through ECDSA-based transactions, SHA-3 chain integrity verification, and a multi-node distributed architecture, respectively. The performance analysis shows that ECDSA provides 256-bit security with 64-byte signatures, compared to 256-byte signatures for 2048-bit Rivest–Shamir–Adleman (RSA), achieving a 75% reduction in bandwidth overhead. SHA-3 is immune to length extension attacks while maintaining collision resistance equivalent to SHA-256. Conclusions: The MasterChain framework provides a practical foundation for implementing blockchain security that addresses both classical and emerging vulnerabilities. The adoption of ECDSA and SHA-3 (Keccak-256) positions the system favourably for modern blockchain applications, while providing insights into the cryptographic trade-offs between performance, security, and compatibility. Full article
(This article belongs to the Special Issue Feature Papers in Blockchains 2025)
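The hash-chained integrity verification the abstract describes can be sketched in a few lines. This is a minimal illustration, not the MasterChain implementation: each block's hash covers its payload and the previous block's hash, so tampering with any block invalidates every later link. Note that Python's `hashlib.sha3_256` is standardized SHA-3, whereas Ethereum-style "Keccak-256" uses pre-standard padding; the chaining principle is identical.

```python
import hashlib

# Minimal sketch (an assumption, not the paper's code) of SHA-3 block chaining.
def block_hash(prev_hash: str, payload: str) -> str:
    """Hash the previous block's hash together with this block's payload."""
    return hashlib.sha3_256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    """Build a list of (payload, hash) pairs, each hash linked to the last."""
    chain, prev = [], "00" * 32  # all-zero genesis predecessor
    for p in payloads:
        h = block_hash(prev, p)
        chain.append((p, h))
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every link; any modified payload breaks verification."""
    prev = "00" * 32
    for payload, h in chain:
        if block_hash(prev, payload) != h:
            return False
        prev = h
    return True

chain = build_chain(["tx1", "tx2", "tx3"])
print(verify_chain(chain))     # True
# Tamper with the second block's payload while keeping its stored hash:
tampered = [chain[0], ("txX", chain[1][1]), chain[2]]
print(verify_chain(tampered))  # False
```

Because each hash feeds into the next, an attacker who alters one transaction must recompute every subsequent hash, which is what gives the chain its integrity guarantee.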