Search Results (3,784)

Search Parameters:
Keywords = cybersecurity

21 pages, 348 KB  
Article
Sandwich Results for Holomorphic Functions Related to an Integral Operator
by Amal Mohammed Darweesh, Adel Salim Tayyah, Sarem H. Hadi and Alina Alb Lupaş
Fractal Fract. 2026, 10(3), 171; https://doi.org/10.3390/fractalfract10030171 - 4 Mar 2026
Abstract
In this paper, we introduce a new logarithmic integral operator that unifies differentiation and fractional integration within the complex domain, and we apply it to analytic functions represented by alternating power series. The method demonstrates that the coefficients can be reorganized in a controlled manner without affecting convergence or analytic behavior. Using this framework, we derive third-order differential subordination and superordination results, which naturally lead to corresponding sandwich-type results. The findings confirm that the introduced operator offers an effective analytical tool for studying distortion, growth, and mapping properties of analytic functions, with promising potential for future applications in fluid mechanics. Full article
24 pages, 4572 KB  
Article
Mitigating Machine-in-the-Loop Drone Attacks on Satellite Links via Atmospheric Scintillation Analysis
by Rajnish Kumar and Shlomi Arnon
Electronics 2026, 15(5), 1076; https://doi.org/10.3390/electronics15051076 - 4 Mar 2026
Abstract
The emergence of quantum computing poses a significant threat to the security of traditional encryption methods employed in satellite communication. To mitigate this vulnerability and enhance cybersecurity in the next generation of communication systems, a novel physical-layer solution is presented. This approach centers on enhancing satellite link security through the analysis of stochastic atmospheric scintillation, facilitated by machine learning (ML). The proposed method safeguards ground stations against Machine-in-the-Middle (MITM) attacks perpetrated from aerial platforms (AP) such as drones or Unmanned Aerial Vehicles (UAVs). The underlying principle leverages the distinct statistical parameters inherent to received signals. These parameters are contingent upon the specific propagation channel, which is influenced by rapid tropospheric scintillation. As signals from legitimate satellites and malicious drones traverse separate spatial paths within the dynamic atmosphere, they exhibit demonstrably divergent scintillation statistics. Wavelet filtering is employed to extract these statistics from the incoming signal. The extracted data is subsequently processed through an ML algorithm, enabling the differentiation between satellite signals and potential spoofing signals emanating from drones. Extensive simulations have been conducted, illustrating the efficacy and robustness of the proposed architecture, consistently achieving an authentication rate exceeding 98% across diverse scenarios. Additionally, experimental results obtained from measurement data collected from Nilesat and Eutelsat satellites at a ground station in Israel provide empirical validation for this innovative approach. Full article
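The discrimination principle this abstract describes can be sketched numerically: under a weak-turbulence assumption, log-normal intensity fluctuations give the long satellite path a measurably higher scintillation index than a short drone path. Everything below (the sigma values, the threshold, and the log-normal model itself) is illustrative, not taken from the paper, which uses wavelet filtering and a trained ML classifier rather than a fixed threshold.

```python
import math
import random
import statistics

def scintillation_index(intensity):
    """SI = E[I^2] / E[I]^2 - 1, the standard intensity-fluctuation measure."""
    m1 = statistics.fmean(intensity)
    m2 = statistics.fmean(x * x for x in intensity)
    return m2 / (m1 * m1) - 1.0

def lognormal_intensity(sigma, n, seed):
    """Weak-turbulence model: I = exp(2*chi) with chi ~ N(-sigma^2, sigma^2),
    which keeps E[I] = 1 while fluctuations grow with turbulence strength sigma."""
    rng = random.Random(seed)
    return [math.exp(2 * rng.gauss(-sigma * sigma, sigma)) for _ in range(n)]

# Hypothetical channels: long satellite path (more turbulence) vs. nearby drone.
si_sat = scintillation_index(lognormal_intensity(sigma=0.30, n=20000, seed=1))
si_drone = scintillation_index(lognormal_intensity(sigma=0.05, n=20000, seed=2))

THRESHOLD = 0.05  # illustrative decision boundary on the scintillation index
def classify(si):
    return "satellite" if si > THRESHOLD else "suspect-drone"
```

Because the two propagation paths traverse different atmospheric volumes, their scintillation statistics diverge enough that even this crude single-feature rule separates them; the paper's ML pipeline generalizes the idea to richer wavelet-derived statistics.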

25 pages, 2523 KB  
Review
Risks Related to Advanced Bridge Monitoring Technologies
by Michal Miške, Pasquale Daponte, Luca De Vito and Lucia Figuli
Sensors 2026, 26(5), 1603; https://doi.org/10.3390/s26051603 - 4 Mar 2026
Abstract
Bridge monitoring has undergone a significant transformation with the integration of advanced technologies, including structural health monitoring systems, Internet of Things sensors, unmanned aerial vehicles, artificial intelligence, and cloud computing. These technologies enable continuous real-time data acquisition, processing, and early detection of structural degradation. However, their deployment also introduces a range of emerging risks that require careful consideration. This paper presents descriptive risk listings and proposes a comprehensive risk-governance framework for advanced bridge monitoring using SWOT analysis. The framework integrates a unified risk taxonomy and assessment that links sensor and AI performance with cyber threat modeling and data governance requirements. Application of the framework to two real deployments, the Jindo Bridge SHM program and the Stava Bridge digital-twin implementation, shows how it converts heterogeneous measurements into actionable input for bridge lifecycle management. Compared with prior studies that primarily catalog risks, the contribution of the paper is an interdisciplinary, operationalizable method that couples reliability, security, and governance into a single process, thereby ensuring that advanced technologies enhance, rather than erode, the safety and resilience of bridge infrastructure. Full article
(This article belongs to the Special Issue Feature Papers in Physical Sensors 2025)

21 pages, 1976 KB  
Review
Clinical Trial Design and Regulatory Requirements for Artificial Intelligence as a Medical Device: A PRISMA-ScR–Guided Scoping Review of Global Guidance and Evidence (2017–2025)
by Umamaheswari Shanmugam, Mohan Kumar Rajendran, Jawahar Natarajan and Veera Venkata Satyanarayana Reddy Karri
J. Clin. Med. 2026, 15(5), 1937; https://doi.org/10.3390/jcm15051937 - 4 Mar 2026
Abstract
Background: Artificial Intelligence as a Medical Device (AIaMD) introduces regulatory, methodological, ethical, and clinical challenges that are not fully addressed by traditional device trial frameworks. Given rapidly evolving and jurisdiction-specific guidance, a consolidated mapping of trial design expectations and regulatory requirements is needed. Objective: To map regulatory requirements and clinical trial design approaches for AIaMD across major jurisdictions and to identify key methodological and implementation gaps relevant to adaptive/continuously learning systems. Methods: A scoping review was conducted in accordance with the PRISMA-ScR reporting guideline. Peer-reviewed literature (2017–2025) was searched in PubMed, Embase, Web of Science, and the Cochrane Library. Gray literature was identified from major regulators and policy bodies (FDA, EMA, MHRA, PMDA, WHO, CDSCO). Eligible records addressed AIaMD clinical evaluation, trial design, regulatory pathways, post-market surveillance, or reporting standards. Data were charted using a predefined extraction framework and synthesized descriptively with thematic analysis across regulatory, methodological, ethical, and clinical implementation domains. Results: Included sources demonstrate substantial heterogeneity in evidence expectations and AI-specific pathways across jurisdictions. Recurrent themes include the need for predefined change management, performance monitoring and drift controls, dataset representativeness and bias evaluation, transparency and versioning, cybersecurity, and real-world evidence integration. Reporting frameworks (SPIRIT-AI, CONSORT-AI, MI-CLAIM) are frequently cited as mechanisms to improve reproducibility and regulatory readiness. Conclusions: Evidence and regulatory expectations for AIaMD remain fragmented. 
Harmonization of terminology, trial design principles, and post-market governance—supported by standardized reporting—would improve clinical validity, safety assurance, and scalability across regions. This review has several limitations. As a scoping synthesis, it prioritizes breadth of coverage rather than quantitative meta-analysis. Included sources vary in methodological rigor and reporting detail, and evolving regulatory guidance may change rapidly over time. Nevertheless, integrating peer-reviewed and regulatory evidence provides a comprehensive overview of current expectations and emerging gaps. In conclusion, effective evaluation of AIaMD requires a shift from static, one-time validation toward continuous lifecycle oversight that integrates adaptive trial designs, transparent reporting standards, bias surveillance, and structured post-market monitoring. Regulatory heterogeneity currently poses significant barriers to multinational development; however, coordinated adoption of standardized evidence frameworks and collaborative governance mechanisms may reduce duplication while preserving patient safety. By translating methodological principles into operational guidance, this review aims to support regulators, sponsors, and clinical investigators in designing trials that are both scientifically rigorous and practically implementable for continuously learning systems. Full article

6 pages, 215 KB  
Proceeding Paper
Measuring Risk in Cybersecurity via Likelihood
by Pablo Corona-Fraga, Vanessa Díaz-Rodriguez and Jesús M. Niebla-Zatarain
Eng. Proc. 2026, 123(1), 39; https://doi.org/10.3390/engproc2026123039 - 4 Mar 2026
Abstract
Cybersecurity risk is commonly expressed as impact × probability, yet probability is rarely defensible because incidents are underreported, data are heterogeneous, and adversary behavior changes quickly. We present a preliminary, data-driven framework to estimate cyber likelihood without relying on naive event frequencies. The approach fuses incident narratives, threat intelligence, vulnerabilities, and control mappings into an organization-specific cyber-exposure profile represented as a typed knowledge graph and a normalized metric vector. Four measurable variables—Exposure, Traceability, Motivation, and System Update—are computed from standardized sensors spanning attack surface, observability, asset value, and patch velocity, then combined into a refreshable likelihood score for monitoring and control prioritization and to support transparent, repeatable risk governance. Unsupervised NLP (TF–IDF, latent semantic representations, and spherical clustering) supports construct discovery and profile population. Full article
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
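The four measurable variables this abstract names (Exposure, Traceability, Motivation, System Update) suggest a simple aggregation sketch. The weights, the inversion of the two protective indicators, and the weighted-geometric-mean rule below are all assumptions for illustration; the paper does not publish its exact formula in this abstract.

```python
import math

def likelihood_score(exposure, traceability, motivation, system_update,
                     weights=(0.35, 0.15, 0.30, 0.20)):
    """
    Combine four normalized indicators into one likelihood score in [0, 1].
    Traceability and System Update are treated as protective, so they enter
    inverted: better observability and faster patching lower the likelihood.
    (Illustrative weights and aggregation rule -- not the paper's formula.)
    """
    factors = (exposure, 1.0 - traceability, motivation, 1.0 - system_update)
    for f in factors:
        if not 0.0 <= f <= 1.0:
            raise ValueError("indicators must be normalized to [0, 1]")
    # Weighted geometric mean: any single near-zero factor pulls the score down.
    return math.prod(f ** w for f, w in zip(factors, weights))
```

A refreshable score of this shape supports the monitoring use the abstract describes: recompute it whenever a sensor (attack surface, patch velocity, etc.) updates, and reprioritize controls on the delta.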
37 pages, 2784 KB  
Article
FedSMOTE-DP: Privacy-Aware Federated Ensemble Learning for Intrusion Detection in IoMT Networks
by Theyab Alsolami and Mohammad Ilyas
Sensors 2026, 26(5), 1592; https://doi.org/10.3390/s26051592 - 3 Mar 2026
Abstract
The Internet of Medical Things (IoMT) transforms healthcare through interconnected medical devices but faces significant cybersecurity threats, particularly intrusion and exfiltration attacks. Centralized intrusion detection systems (IDSs) require data aggregation, presenting privacy and scalability risks. This paper proposes FedEnsemble-DP, a privacy-aware Federated Learning (FL) framework for decentralized intrusion detection in IoMT networks. The framework integrates three data balancing scenarios (Raw Imbalanced, Local SMOTE, Centralized SMOTE) with Differential Privacy (DP) and Secure Aggregation mechanisms. Extensive experiments on WUSTL-EHMS-2020 and CIC-IoMT-2024 datasets under non-IID settings (Dirichlet α = 0.3) demonstrate that models with strong privacy guarantees (ε = 3.0) frequently match or exceed non-private baselines. Key findings show Local SMOTE with ε = 3.0 achieved 94.60% accuracy and 0.9598 AUC, while Raw Imbalanced with ε = 3.0 attained 94.50% accuracy and 0.9494 AUC. Even with strict privacy (ε = 3.0), these results surpassed the non-private baseline (93.20% accuracy) in the raw scenario. Centralized SMOTE showed effectiveness but introduced training instability. These results indicate that local data balancing combined with calibrated DP noise can yield high detection performance while preserving privacy, effectively bridging security-performance and data confidentiality requirements in distributed healthcare networks. Full article
(This article belongs to the Special Issue Blockchain Technology for Internet of Things)
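The privacy mechanism the abstract pairs with data balancing can be sketched as one round of clipped, Gaussian-noised federated averaging. The clipping norm, delta, and noise calibration below are generic differential-privacy choices, not FedSMOTE-DP's exact accounting.

```python
import math
import random

def clip_update(update, clip_norm):
    """Clip a client's model update to L2 norm clip_norm (standard DP step)."""
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [v * scale for v in update]

def dp_federated_average(client_updates, clip_norm=1.0, epsilon=3.0,
                         delta=1e-5, seed=0):
    """
    One round of federated averaging under the Gaussian mechanism, with
    noise std from the classic bound sigma = sqrt(2 ln(1.25/delta)) * C / eps.
    (Sketch only: the paper's accounting and secure aggregation may differ.)
    """
    rng = random.Random(seed)
    n = len(client_updates)
    dim = len(client_updates[0])
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * clip_norm / epsilon
    # Sum the clipped updates, add noise once to the sum, then average.
    return [(sum(u[j] for u in clipped) + rng.gauss(0.0, sigma)) / n
            for j in range(dim)]
```

The abstract's key observation (that epsilon = 3.0 models match non-private baselines) corresponds to this sigma being small relative to the aggregated signal once enough clients contribute per round.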
13 pages, 481 KB  
Article
A Conceptual Framework for a Morphological Scenario Library and Playbook Mapping in Cognitive Warfare Defense
by Dojin Ryu
J. Cybersecur. Priv. 2026, 6(2), 46; https://doi.org/10.3390/jcp6020046 - 3 Mar 2026
Abstract
Cognitive warfare is a hybrid threat that combines information manipulation with psychological influence, often amplified by digital platforms and synthetic media. Conventional cybersecurity tooling is optimized for technical intrusion and offers limited support for anticipating and responding to influence operations. This paper presents a conceptual framework that structures cognitive warfare threats with General Morphological Analysis (GMA) and links plausible configurations to indicator profiles and response playbooks. We first conduct a PRISMA-informed literature review (2018–2025) to derive a five-dimensional taxonomy (actor, tactic, medium, target, objective). We then apply cross-consistency assessment to remove implausible state-pair combinations and obtain a reduced library of internally consistent scenarios. To support analyst-guided triage, we outline an AI-enabled workflow that maps observable signals to taxonomy states, matches events to scenarios, and prioritizes responses via an auditable, policy-set risk score. Finally, we illustrate the framework on three publicly documented cases and show how each case maps to scenario vectors, indicators, and playbooks. No end-to-end system implementation or performance metrics are reported; the contribution is the structured scenario library and the traceable mapping from observations to response guidance. Full article
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
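The morphological step the abstract describes (enumerate all state combinations across the five taxonomy dimensions, then prune with cross-consistency assessment) can be sketched directly. The state values and the inconsistent pairs below are invented for illustration; the paper derives its own taxonomy from a PRISMA-informed review.

```python
from itertools import product

# Illustrative (not the paper's) state values for the five dimensions.
taxonomy = {
    "actor": ["state", "proxy", "hacktivist"],
    "tactic": ["disinformation", "deepfake", "astroturfing"],
    "medium": ["social_media", "messaging_app"],
    "target": ["electorate", "military"],
    "objective": ["polarize", "demoralize"],
}

# Cross-consistency assessment: state pairs judged implausible together.
inconsistent_pairs = {
    ("hacktivist", "deepfake"),        # assumed: lacks production capability
    ("messaging_app", "astroturfing"),  # assumed: tactic needs public feeds
}

def consistent(scenario):
    """A scenario survives if none of its state pairs is marked inconsistent."""
    states = set(scenario)
    return not any(set(pair) <= states for pair in inconsistent_pairs)

all_scenarios = list(product(*taxonomy.values()))
library = [s for s in all_scenarios if consistent(s)]
```

Even two pruning rules cut the space noticeably (here 72 raw combinations reduce to 52), which is the point of the reduced, internally consistent scenario library the paper maps to indicators and playbooks.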

24 pages, 4155 KB  
Article
Federated Learning and Data Mining-Based Botnet Attack Detection Framework for Internet of Things
by Kalupahana Liyanage Kushan Sudheera, Lokuge Lehele Gedara Madhuwantha Priyashan, Oruthota Arachchige Sanduni Pavithra, Malwaththe Widanalage Tharindu Aththanayake, Piyumi Bhagya Sudasinghe, Wijethunga Gamage Chatum Aloj Sankalpa, Gammana Guruge Nadeesha Sandamali and Peter Han Joo Chong
Sensors 2026, 26(5), 1573; https://doi.org/10.3390/s26051573 - 2 Mar 2026
Abstract
Botnet attacks in Internet of Things (IoT) environments often occur as multi-stage campaigns, making early and reliable detection difficult across distributed and privacy-sensitive networks. Centralized detection approaches are often limited by heterogeneous traffic characteristics, severe data imbalance, and the need to aggregate large volumes of raw network data, raising scalability and privacy concerns. To address these challenges, this paper proposes FDA, a federated learning-based and data mining-driven framework for stage-aware botnet attack detection in IoT networks. FDA operates at network gateways, where anomalous traffic is first detected and then abstracted into compact and interpretable patterns using Frequent Itemset Mining (FIM). This pattern-based representation reduces noise and local traffic bias, enabling more robust learning across different IoT networks. Lightweight neural network models are trained locally at gateways, and a global model is learned through federated aggregation of model parameters, avoiding direct sharing of raw network data while enabling gateways to collaboratively learn evolving attack patterns across different IoT networks. Experimental results show that FDA achieves anomaly detection F1-scores above 99% across all gateways and multi-stage botnet attack classification F1-scores in the range of 48–49%, which are comparable to centralized machine-learning baselines while operating under decentralized and privacy-preserving constraints. Overall, FDA provides a practical, privacy-preserving, and effective solution for distributed botnet attack stage detection in real-world IoT deployments. Full article
(This article belongs to the Special Issue Feature Papers in Communications Section 2025–2026)
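The Frequent Itemset Mining step that abstracts anomalous flows into compact patterns can be sketched with brute-force support counting. The flow tags and support threshold below are illustrative; a real gateway deployment would use an optimized FIM algorithm such as Apriori or FP-Growth.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=2):
    """
    Return itemsets (up to max_size items) whose support meets min_support.
    Brute-force stand-in for the FIM abstraction step: patterns, not raw
    flows, become the model's (and the federation's) unit of learning.
    """
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, max_size + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    n = len(transactions)
    return {iset: c / n for iset, c in counts.items() if c / n >= min_support}

# Toy "anomalous flow" transactions tagged with coarse features (invented).
flows = [
    {"scan", "telnet"}, {"scan", "telnet"}, {"scan", "http"},
    {"ddos", "udp"}, {"scan", "telnet", "udp"},
]
patterns = frequent_itemsets(flows, min_support=0.6)
```

Only recurring patterns survive the threshold, which is how this representation reduces noise and local traffic bias before the federated aggregation step.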
43 pages, 2158 KB  
Article
A Lightweight Post-Quantum Anonymous Attestation Framework for Traceable and Comprehensive Privacy Preservation in VANETs
by Esti Rahmawati Agustina, Kalamullah Ramli, Ruki Harwahyu, Teddy Surya Gunawan, Muhammad Salman, Andriani Adi Lestari and Arif Rahman Hakim
J. Cybersecur. Priv. 2026, 6(2), 44; https://doi.org/10.3390/jcp6020044 - 2 Mar 2026
Abstract
Vehicular ad hoc networks (VANETs) require authentication systems that balance privacy, scalability, and post-quantum security. While lattice-based V-LDAA offers quantum resistance, it faces challenges in signature size, traceability, and integration. We propose post-quantum traceable direct anonymous attestation (PQ-TDAA), combining National Institute of Standards and Technology (NIST)-standard Dilithium2 and Falcon-512 signatures with adapted Beullens-style blind signatures and Fiat–Shamir simplified Schnorr proofs, reducing proof size by 69.2% (8 kB vs. V-LDAA’s 26 kB) and supporting European Telecommunications Standards Institute Technical Specification (ETSI TS) 102 941-compliant traceability through Road Side Unit (RSU)-assisted verification. Evaluated using SageMath, Python 3.11, and NS-3, PQ-TDAA-Falcon-512 achieves 8.1 ms and 49.7 ms end-to-end delays at 10 and 20 vehicles, respectively, with 64.7 Mbps goodput on congested 802.11p channels, showing promise for densities of ≤50 vehicles and advantages over Dilithium2. Real-world validation on ARM Cortex-A76 (Raspberry Pi 5, emulating automotive OBUs) yields sub-0.5 ms V2V cycles within 100 ms beacon intervals, supporting practical embedded deployment. Future work will extend PQ-TDAA to emerging 5G and NR-V2X settings, integrate more realistic mobility and channel models through coupled NS-3 and SUMO co-simulation, and investigate side-channel resistance for enhanced scalability and robustness in real deployments. Full article
(This article belongs to the Special Issue Applied Cryptography)
34 pages, 13258 KB  
Article
A Robust Image Encryption Framework Using Deep Feature Extraction and AES Key Optimization
by Sahara A. S. Almola, Hameed A. Younis and Raidah S. Khudeyer
Cryptography 2026, 10(2), 16; https://doi.org/10.3390/cryptography10020016 - 2 Mar 2026
Abstract
This article presents a novel framework for encrypting color images to enhance digital data security using deep learning and artificial intelligence techniques. The system employs a two-model neural architecture: the first, a Convolutional Neural Network (CNN), verifies sender authenticity during user authentication, while the second extracts unique fingerprint features. These features are converted into high-entropy encryption keys using Particle Swarm Optimization (PSO), minimizing key similarity and ensuring that no key is reused or transmitted. Keys are generated in real time simultaneously at both the sender and receiver ends, preventing interception or leakage and providing maximum confidentiality. Encrypted images are secured using the Advanced Encryption Standard (AES-256) with keys uniquely bound to each user's biometric identity, ensuring personalized privacy. Evaluation using security and encryption metrics yielded strong results: entropy of 7.9991, correlation coefficient below 0.00001, NPCR of 99.66%, UACI of 33.9069%, and key space of 2^256. Although the final encryption employs an AES-256 key (key space of 2^256), this key is derived from a much larger deep-key space of 2^8192 generated by multi-layer neural feature extraction and optimized via PSO, thereby significantly enhancing the overall cryptographic strength. The system also demonstrated robustness against common attacks, including noise and cropping, while maintaining recoverable original content. Furthermore, the neural models achieved classification accuracy exceeding 99.83% with an error rate below 0.05%, confirming the framework's reliability and practical applicability. This approach provides a secure, dynamic, and efficient image encryption paradigm, combining biometric authentication and AI-based feature extraction for advanced cybersecurity applications. Full article
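The key-binding idea in this abstract (both ends derive the same AES-256 key from biometric features, so the key itself is never transmitted) can be sketched with a hash in place of the paper's CNN-plus-PSO pipeline. The quantization scheme, the inputs, and the feature values below are all assumptions for illustration.

```python
import hashlib

def derive_aes256_key(feature_vector, user_id, session_nonce):
    """
    Map a normalized biometric feature vector to a 256-bit AES key.
    Both endpoints that hold the same features, identity, and nonce
    derive the identical key independently, so nothing key-like is
    sent over the channel. (Hash stand-in for the paper's PSO step.)
    """
    # Quantize features so small sensor noise maps to identical bytes.
    quantized = bytes(int(round(f * 255)) % 256 for f in feature_vector)
    material = user_id.encode() + session_nonce + quantized
    return hashlib.sha256(material).digest()  # 32 bytes = AES-256 key

features = [0.12, 0.87, 0.40, 0.09]  # hypothetical normalized CNN features
key_a = derive_aes256_key(features, "alice", b"nonce-001")
key_b = derive_aes256_key(features, "alice", b"nonce-001")
key_c = derive_aes256_key(features, "alice", b"nonce-002")
```

Including a per-session nonce is what keeps keys from being reused across sessions, matching the abstract's claim that no key is reused or transmitted; real biometric key derivation would additionally need an error-tolerant scheme (e.g., a fuzzy extractor) rather than naive rounding.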

23 pages, 919 KB  
Article
A Hybrid Deep Learning Architecture for Intrusion Detection Deploying Multi-Scale Feature Interaction and Temporal Modeling
by Eva Jakubcova, Maros Jakubec and Peter Pocta
AI 2026, 7(3), 87; https://doi.org/10.3390/ai7030087 - 2 Mar 2026
Abstract
Network intrusion detection is a core component of modern cybersecurity, but it remains challenging due to highly imbalanced traffic, heterogeneous feature types, and the presence of short-term temporal dependencies in network flows. Traditional machine learning models often rely on handcrafted features and struggle with complex attack patterns, while deep learning approaches may become overly complex or difficult to interpret. In this paper, we propose a neural intrusion detection method that combines structured feature preprocessing with a compact hybrid architecture. Numerical and categorical traffic features are processed separately using robust normalisation and trainable embeddings, and then merged into a unified representation. The proposed model builds on a multi-scale feature interaction block, followed by channel-wise attention and a single bidirectional gated recurrent unit layer with attention pooling to capture short-term temporal behavior. The method is evaluated on two widely used benchmark datasets, CIC-IDS2017 and CSE-CIC-IDS2018. Experimental results show that the proposed approach consistently outperforms classical machine learning baselines and achieves competitive or superior performance compared to recent deep learning methods proposed in the literature. The results confirm that the proposed architectural choices effectively capture both feature interactions and temporal patterns in network traffic. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
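The preprocessing split this abstract describes (robust normalisation for numeric features, trainable embeddings for categorical ones) can be sketched as follows. The median/IQR scaler and the reserve-zero-for-unseen vocabulary convention are common choices, not necessarily the paper's exact ones, and the sample values are invented.

```python
import statistics

def robust_normalise(values):
    """Median/IQR scaling: far less sensitive to the heavy-tailed outliers
    common in flow statistics than min-max or mean/std normalisation."""
    med = statistics.median(values)
    q = statistics.quantiles(values, n=4)  # returns [Q1, median, Q3]
    iqr = (q[2] - q[0]) or 1.0             # guard against zero spread
    return [(v - med) / iqr for v in values]

def build_vocab(categories):
    """Map categorical values (e.g. protocol names) to embedding indices,
    reserving index 0 for values unseen at inference time."""
    return {c: i + 1 for i, c in enumerate(sorted(set(categories)))}

durations = [0.1, 0.2, 0.2, 0.3, 120.0]      # one extreme outlier
protocols = ["tcp", "udp", "tcp", "icmp", "tcp"]
norm = robust_normalise(durations)
vocab = build_vocab(protocols)
idx = [vocab.get(p, 0) for p in protocols]   # embedding lookup indices
```

The integer indices are what a trainable embedding layer consumes; concatenating the embedded categoricals with the normalised numerics yields the unified representation the abstract feeds into the multi-scale interaction block.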

3 pages, 131 KB  
Editorial
Cybersecurity in the IoT
by Christos Tryfonopoulos and Nicholas Kolokotronis
Future Internet 2026, 18(3), 127; https://doi.org/10.3390/fi18030127 - 2 Mar 2026
Abstract
The Internet of Things (IoT) has evolved into a vast ecosystem of massively interconnected devices delivering intelligent services across consumer, commercial, and industrial environments [...] Full article
(This article belongs to the Special Issue Cybersecurity in the IoT)
41 pages, 815 KB  
Article
XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(2), 43; https://doi.org/10.3390/jcp6020043 - 2 Mar 2026
Abstract
High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles, and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical–regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO/IEC 42001. The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic, intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. 
By linking explanation reports, drift statistics and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations. Full article
(This article belongs to the Section Security Engineering & Applications)
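The "tamper-evident evidence bundle" idea in this abstract can be sketched as a hash-chained log: each compliance artefact commits to the previous entry's digest, so any retroactive edit breaks verification. This is a minimal stdlib stand-in, not the article's implementation; the artefact fields are invented.

```python
import hashlib
import json

class EvidenceLog:
    """Hash-chained log of compliance artefacts (explanation reports, drift
    statistics, decisions). Each entry's digest covers the previous digest
    plus the artefact, so the whole chain is verifiable end to end."""

    def __init__(self):
        self.entries = []
        self._last = b"\x00" * 32  # genesis digest

    def append(self, artefact: dict) -> str:
        payload = json.dumps(artefact, sort_keys=True).encode()
        digest = hashlib.sha256(self._last + payload).hexdigest()
        self.entries.append({"artefact": artefact, "digest": digest})
        self._last = bytes.fromhex(digest)
        return digest

    def verify(self) -> bool:
        """Recompute the chain from genesis; any edited entry breaks it."""
        prev = b"\x00" * 32
        for e in self.entries:
            payload = json.dumps(e["artefact"], sort_keys=True).encode()
            if hashlib.sha256(prev + payload).hexdigest() != e["digest"]:
                return False
            prev = bytes.fromhex(e["digest"])
        return True

log = EvidenceLog()
log.append({"event": "shap_report", "model": "rf-v1", "drift": 0.02})
log.append({"event": "decision", "outcome": "alert", "basis": "human review"})
```

Linking each digest to a regulatory provision (as the article's correspondence matrix does) is then a matter of metadata on the artefact; the chain is what makes the resulting decision dossier auditable rather than merely logged.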

38 pages, 3007 KB  
Systematic Review
Generative AI Integration in Education: Theoretical Review and Future Directions Informed by the ADO Framework
by Raghu Raman, Krishnashree Achuthan and Prema Nedungadi
Information 2026, 17(3), 241; https://doi.org/10.3390/info17030241 - 2 Mar 2026
Abstract
The accelerated integration of Generative Artificial Intelligence (GenAI) tools such as ChatGPT is transforming learner engagement, instructional design, and institutional governance in education. This systematic literature review synthesizes theory-driven scholarship on GenAI adoption and pedagogical use through the Antecedents–Decisions–Outcomes (ADO) framework, examining how cognitive, motivational, technological, and institutional factors collectively shape implementation and learning outcomes. Drawing primarily on the Technology Acceptance Model (TAM), Self-Determination Theory (SDT), and Institutional Theory, the review integrates complementary insights from Constructivist Learning and Diffusion of Innovations perspectives to conceptualize how antecedents influence decision-making and outcomes across educational settings. The findings indicate that learner motivation, perceived usefulness, digital literacy, and institutional readiness constitute key antecedents affecting GenAI adoption. Decision processes—spanning instructional design, ethical regulation, and pedagogical adaptation—mediate how these antecedents translate into practice. Outcomes reveal a dual trajectory: GenAI enhances personalization, feedback, and self-regulated learning, yet introduces challenges related to ethical ambiguity and overreliance. The review offers a conceptually integrated synthesis that bridges motivational, technological, and organizational perspectives, advancing a theoretical roadmap for ethical and sustainable GenAI adoption. For educators and policymakers, the findings emphasize transparent governance, faculty capacity-building, and equitable access to ensure that innovation remains aligned with pedagogical integrity and human-centered values. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
14 pages, 392 KB  
Review
Distributed Trust in the Age of Malware Blockchain Applications
by Paul A. Gagniuc, Maria-Iuliana Dascălu and Ionel-Bujorel Păvăloiu
Algorithms 2026, 19(3), 185; https://doi.org/10.3390/a19030185 - 2 Mar 2026
Abstract
Blockchain technology is redefining the foundations of cybersecurity by introducing decentralized, tamper-resistant mechanisms for data integrity, trust management, and malware intelligence sharing. Traditional detection systems, dependent on centralized control and opaque validation, remain vulnerable to data manipulation and systemic compromise. Blockchain integration transforms these paradigms by providing verifiable provenance, distributed consensus, and autonomous enforcement through smart contracts. This review synthesizes fifteen years of progress (2010–2025) at the intersection of blockchain and malware detection, discussing the core architectures, consensus protocols, and cryptographic properties that underpin decentralized defenses. The review follows a structured literature review methodology, focusing on blockchain architectures, consensus protocols, and malware-detection pipelines reported in the cybersecurity literature. It also analyzes detection pipelines, performance tradeoffs, and data protection mechanisms in distributed learning systems and artificial intelligence models. Special attention is given to the scalability constraints, regulatory compliance, and interoperability challenges that shape adoption. The review identifies three dominant design patterns: (i) decentralized threat-intelligence sharing with provenance guarantees, (ii) consensus-driven validation of malware artifacts, and (iii) on-chain trust and reputation mechanisms for detector accountability. By uniting blockchain, artificial intelligence, edge computing, and federated learning, cybersecurity attains an auditable and adaptive architecture resilient to adversarial threats. The study concludes that blockchain provides a verifiable trust infrastructure for malware detection, but its practical deployment requires faster transaction validation and stronger protection of sensitive data; future research should address performance optimization and regulatory compliance. Full article
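The provenance guarantee behind the first design pattern — tamper-evident sharing of malware reports — can be illustrated with a minimal hash-chained ledger sketch. This is an illustration of the general mechanism, not a system from the reviewed literature; all function and field names here are hypothetical:

```python
import hashlib
import json

def block_hash(block):
    # Deterministically hash the block's contents, excluding its own hash field.
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_report(chain, artifact_digest, verdict):
    # Each malware report links to the previous block's hash, so the shared
    # ledger carries verifiable provenance for every recorded verdict.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "artifact": artifact_digest, "verdict": verdict}
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify_chain(chain):
    # Recompute every hash and check each back-link; any tampering with a
    # past report breaks the chain from that point onward.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_report(chain, hashlib.sha256(b"sample-1").hexdigest(), "malicious")
append_report(chain, hashlib.sha256(b"sample-2").hexdigest(), "benign")
print(verify_chain(chain))          # the untampered chain verifies
chain[0]["verdict"] = "benign"      # retroactively alter a recorded verdict
print(verify_chain(chain))          # verification now fails
```

Real deployments replace the local list with a consensus-replicated ledger and add signatures, but the integrity check — recomputing hashes and back-links — is the same.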
