Search Results (3,767)

Search Parameters:
Keywords = cybersecurity

31 pages, 3616 KB  
Article
A Hybrid Ensemble Framework for Rare Event Detection in Large-Scale Tabular Data
by Natalya Maxutova, Akmaral Kassymova, Kuanysh Kadirkulov, Aisulu Ismailova, Gulkiz Zhidekulova, Zhanar Azhibekova, Jamalbek Tussupov, Quvvatali Rakhimov and Zhanat Kenzhebayeva
Computers 2026, 15(3), 151; https://doi.org/10.3390/computers15030151 (registering DOI) - 1 Mar 2026
Abstract
Rare event detection in large tabular data remains a computationally challenging problem due to class imbalance, heterogeneous feature distributions, and unstable thresholds. Traditional machine learning approaches based on individual models and fixed thresholds often exhibit limited robustness and reproducibility in such settings. This paper proposes a hybrid ensemble framework for rare event detection that integrates heterogeneous machine learning models through threshold-aware probabilistic aggregation. The framework combines gradient-boosted decision trees, regularized linear models, and neural networks, leveraging their complementary inductive biases. To ensure reproducibility and robust performance evaluation under severe class imbalance, a leaky-controlled evaluation protocol is employed, including rootwise summation, probability calibration, and validation-based threshold optimization. The proposed approach is evaluated on a large tabular dataset containing approximately 50,000 observations. Experimental results demonstrate improved rare event detection and robust generalization performance compared to individual baseline models. Explainability is achieved through Shapley Additive Explanations (SHAP)-based attribution analysis and clustering in the explanation space, enabling transparent analysis of ensemble decision-making behavior. The proposed framework represents a general-purpose computational solution for rare event detection and can be applied to a wide range of data-driven decision-making and anomaly detection problems. Full article
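The validation-based threshold optimization this abstract mentions can be illustrated with a short sketch. This is an interpretation, not the authors' implementation; the grid search and the F1 objective are assumptions:

```python
import numpy as np

def optimal_threshold(y_val, p_val, grid=None):
    """Grid-search the decision threshold that maximizes F1 on validation data.

    Under severe class imbalance the default 0.5 cutoff is rarely optimal,
    so the cutoff is tuned on held-out labels instead.
    """
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        pred = (p_val >= t).astype(int)
        tp = int(np.sum((pred == 1) & (y_val == 1)))
        fp = int(np.sum((pred == 1) & (y_val == 0)))
        fn = int(np.sum((pred == 0) & (y_val == 1)))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = float(t), f1
    return best_t, best_f1
```

In practice the tuned threshold would be applied to calibrated ensemble probabilities on the test split, never re-fit on test labels.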

39 pages, 1457 KB  
Review
Algorithmic Challenges and Regulatory Frameworks of Artificial Intelligence in Mexico: A Prospective Analysis from the Perspective of Digital Governance Theory
by Eduardo Arguijo, Yenny Villuendas-Rey, Arturo Cruz-Jiménez, Jonatan Mireles-Hernández, Oscar Camacho-Nieto and Mario Aldape-Pérez
Computers 2026, 15(3), 150; https://doi.org/10.3390/computers15030150 (registering DOI) - 1 Mar 2026
Abstract
The rapid integration of artificial intelligence (AI) has heightened the need for evidence-based regulatory frameworks to effectively address its legal, ethical, and societal consequences. This research carefully analyzes the prevailing landscape of AI-related legislation in Mexico. The study conducts a comprehensive review of legislative initiatives related to AI regulation submitted to Mexican legislative bodies, encompassing those approved or pending in commissions. This process leads to the identification and categorization of outstanding initiatives across seven policy areas: Congress, Education, Health, Intellectual Property, Justice, AI Promotion, and AI Regulation. As a principal contribution, this work offers the first exhaustive mapping and thematic classification of legislative activity related to AI in Mexico. Furthermore, the analysis identifies systemic regulatory deficiencies, such as the lack of AI-specific legislation, the limited scope of existing data protection laws in relation to AI systems, and an absence of technical provisions concerning ethical design, algorithmic transparency, cybersecurity, and accountability frameworks. By showcasing these deficiencies, the study contributes a diagnostic framework for evaluating AI governance readiness in emerging economies. The findings emphasize the importance of establishing a comprehensive, technically sound, and internationally harmonized regulatory framework to reduce AI-related risks while promoting responsible innovation in Mexico. Full article
(This article belongs to the Section AI-Driven Innovations)

32 pages, 2913 KB  
Article
Integrating Generative Design and Artificial Intelligence for Optimized Energy-Efficient Composite Facades in Next-Generation Smart Buildings
by Mohammad Q. Al-Jamal, Ayoub Alsarhan, Mahmoud AlJamal, Qasim Aljamal, Bashar S. Khassawneh, Amina Salhi and Hanan Hayat
Sustainability 2026, 18(5), 2379; https://doi.org/10.3390/su18052379 (registering DOI) - 1 Mar 2026
Abstract
The pursuit of energy efficiency and sustainability in the built environment has placed façade systems at the forefront of innovation in architectural design. This study proposes an integrated framework that combines generative design techniques with artificial intelligence (AI) to optimize composite façade configurations for next-generation smart buildings. Using parametric modeling, a wide design space of façade geometries and material compositions was generated, capturing trade-offs between thermal performance, daylight, structural strength, and aesthetic variability. Artificial intelligence algorithms, particularly machine learning models, are trained on simulation-derived performance datasets to rapidly predict key indicators such as energy consumption, thermal transmittance (U-value) and solar heat gain coefficients. The proposed approach achieved a predictive accuracy of 99.85%, enabling efficient exploration of optimal solutions across high-dimensional design alternatives. A multi-objective optimization strategy was further implemented to balance energy efficiency with structural and aesthetic constraints, producing façade configurations that outperform conventional designs. The findings demonstrate that integrating generative design with AI-based prediction not only accelerates the façade design process but also provides actionable pathways toward net-zero energy buildings. This research highlights the transformative potential of AI-driven generative workflows in advancing sustainable architecture and delivering intelligent, adaptive and performance-oriented façades for future urban environments. Full article
(This article belongs to the Special Issue Building a Sustainable Future: Sustainability and Innovation in BIM)
24 pages, 4005 KB  
Article
Explainable Firewall Penetration Testing Method Employing Machine Learning
by Algimantas Venčkauskas, Jevgenijus Toldinas and Nerijus Morkevičius
Electronics 2026, 15(5), 1030; https://doi.org/10.3390/electronics15051030 (registering DOI) - 1 Mar 2026
Abstract
Cyber adversaries are becoming more sophisticated, creating complex security challenges as digital services expand. The reliability of the firewall is of the utmost importance in the context of network security since it serves as the first line of protection. Penetration testing is an approach used to evaluate the reliability of a firewall and improve security by uncovering exploitable flaws. Frequently, penetration testing solutions are developed using machine learning, and it is of the utmost importance to explain the obtained results during the penetration testing. The emergence of explainable AI (XAI) addresses transparency in ML models, which is essential for informed cybersecurity decisions. Additionally, effective penetration testing reports are crucial for organizations, helping them comprehend and address vulnerabilities with tailored mitigation strategies. This study contributes to firewall security by developing an explainable penetration testing method, which includes two machine learning classification models: a binary model for detecting attacks and a multiclass model for identifying attack types with an explainability feature. This research introduces a novel explainability method that emphasizes significant features related to attack types based on multiclass predictions and proposes an approach using the extended System Security Assurance Ontology (SSAO) to clarify vulnerabilities and suggest alternative mitigation strategies. After evaluating numerous ML algorithms for the CIC-IDS2017 dataset, the Fine Tree model was considered to have the greatest performance. For the binary model, it achieved a validation accuracy of 99.7%, while for the multiclass model, it achieved a validation accuracy of 99.6%. Both models were used to test the firewall for vulnerabilities. Firewall penetration testing using the binary model achieves an accuracy of 82.1%, while the multiclass model achieves an accuracy of 78.7%. Full article
(This article belongs to the Special Issue Recent Advances in Information Security and Data Privacy, 2nd Edition)

17 pages, 1099 KB  
Article
LLM Security and Safety: Insights from Homotopy-Inspired Prompt Obfuscation
by Luis Eduardo Lazo Vera, Hamed Jelodar and Roozbeh Razavi-Far
AI 2026, 7(3), 83; https://doi.org/10.3390/ai7030083 (registering DOI) - 1 Mar 2026
Abstract
In this study, we propose a homotopy-inspired prompt obfuscation framework to enhance understanding of security and safety vulnerabilities in Large Language Models (LLMs). By systematically applying carefully engineered prompts, we demonstrate how latent model behaviors can be influenced in unexpected ways. Our experiments encompassed 15,732 prompts, including 10,000 high-priority cases, across LLaMA, DeepSeek, and KIMI for code generation, with Claude used for verification. The results reveal critical insights into current LLM safeguards, highlighting the need for more robust defense mechanisms, reliable detection strategies, and improved resilience. Importantly, this work provides a principled framework for analyzing and mitigating potential weaknesses, with the goal of advancing safe, responsible, and trustworthy AI technologies. Full article

20 pages, 1045 KB  
Systematic Review
Cybersecurity of Cyber-Physical Systems in the Quantum Era: A Systematic Literature Review-Based Approach
by Siler Amador, César Pardo and Raúl Mazo
Future Internet 2026, 18(3), 125; https://doi.org/10.3390/fi18030125 (registering DOI) - 28 Feb 2026
Abstract
The convergence of cyber-physical systems (CPSs), operational technologies (OTs), industrial control systems (ICSs), and quantum computing poses unprecedented challenges for the security and resilience of critical infrastructures (CIs). As quantum capabilities progress, classical cryptographic mechanisms such as RSA and ECC face increasing risks from quantum algorithms (Shor and Grover), while CPS and OT remain constrained by long life cycles, heterogeneity, and limited upgrade capabilities. This study conducts a systematic literature review (SLR) following a GQM-PICO-PRISMA methodological framework to examine 66 primary studies, selected from 1,522 records identified in seven scientific databases and published between 2005 and 2025. The review identifies dominant research domains, ranging from IoT/IIoT security to machine learning-based intrusion detection in CPS/OT environments, and synthesizes key challenges. Findings reveal significant fragmentation in CPS taxonomies, limited integration of post-quantum cryptography (PQC) into OT/ICS protocols, a scarcity of real-world datasets, and insufficient quantum threat modeling (QTM). This work consolidates and structures prior evidence into a literature-derived classification of quantum-era CPS/OT cybersecurity topics and distills a prioritized research agenda for advancing quantum-resilient architectures. Full article
(This article belongs to the Section Cybersecurity)
32 pages, 2478 KB  
Article
Blockchain Security Using Confidentiality, Integrity, and Availability for Secure Communication
by Chukwuebuka Francis Ikenga-Metuh and Abel Yeboah-Ofori
Blockchains 2026, 4(1), 3; https://doi.org/10.3390/blockchains4010003 (registering DOI) - 28 Feb 2026
Abstract
Background: Blockchain technology has emerged as a transformative communication solution for securing distributed systems. However, several vulnerabilities exist during transactions, including latency and network congestion issues during mempool processing, topology weaknesses, cross-chain bridge exploits, and cryptographic weaknesses. These vulnerabilities have led to attacks that have threatened system integrity, including Block Extractable Value (BEV) attacks, Maximal Extractable Value (MEV) attacks, sandwich attacks, liquidation, and Decentralized Finance (DeFi) reordering attacks, among others. Thus, implementing a robust security framework based on the Confidentiality, Integrity, and Availability (CIA) triad remains critical for addressing modern blockchain technology threats. Objective: This paper examines blockchain technology, its various vulnerabilities, and attacks to determine how criminals exploit the system during transactions. Further, it evaluates its impact on users. Then, implement a blockchain attack in a “MasterChain” virtual environment to demonstrate how vulnerable spots can be practically exploited and discuss the application of the CIA security triad through modern cryptographic primitives. Methods: The approach considers Hevner’s design science framework, which emphasizes creating innovative artifacts that address identified problems while contributing to the knowledge base through rigorous evaluation. Furthermore, we developed a MasterChain tool using Python with Flask for distributed node communication, utilizing the Elliptic Curve Digital Signature Algorithm (ECDSA) with the Standards for Efficient Cryptography Prime 256-bit Koblitz curve 1 (secp256k1) for digital signatures and Secure Hash (SHA-3) (Keccak-256) hashing for block integrity. 
Results: The CIA triad has been implemented to provide secure communication through ECDSA-based transactions, SHA-3 chain integrity verification, and a multi-node distributed architecture, respectively. The performance analysis shows that ECDSA provides 256-bit security with 64-byte signatures compared to 2048-bit Rivest–Shamir–Adleman (RSA)'s 256-byte signatures, achieving a 75% reduction in bandwidth overhead. SHA-3 provides immunity to length extension attacks while maintaining equivalent collision resistance to SHA-256. Conclusions: The MasterChain framework provides a practical foundation for implementing blockchain security that addresses both classical and emerging vulnerabilities. The adoption of ECDSA and SHA-3 (Keccak-256) positions the system favourably for modern blockchain applications, while providing insights into the cryptographic trade-offs between performance, security, and compatibility. Full article
(This article belongs to the Special Issue Feature Papers in Blockchains 2025)
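The hash-chained integrity verification this abstract describes can be sketched in a few lines. Note that this sketch uses Python's standard-library SHA3-256, which differs from the Keccak-256 variant named in the paper (the two use different padding), and the block layout is an invented toy structure, purely illustrative:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically with SHA3-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha3_256(payload).hexdigest()

def verify_chain(chain):
    """Integrity check: each block must reference the previous block's hash,
    so tampering with any earlier block invalidates the whole chain."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# A two-block toy chain (hypothetical fields, not the MasterChain schema).
genesis = {"index": 0, "data": "genesis", "prev_hash": "0" * 64}
block1 = {"index": 1, "data": "tx: A->B 5", "prev_hash": block_hash(genesis)}
chain = [genesis, block1]
```

A full implementation would add ECDSA signatures over each transaction; the chaining alone only covers the integrity leg of the CIA triad.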
28 pages, 1696 KB  
Article
Few-Shot Open-Set Ransomware Detection Through Meta-Learning and Energy-Based Modeling
by Yun-Yi Fan, Cheng-Yu Chiang and Jung-San Lee
Appl. Sci. 2026, 16(5), 2364; https://doi.org/10.3390/app16052364 (registering DOI) - 28 Feb 2026
Abstract
As network communication technologies rapidly advance, ransomware has emerged as a significant cybersecurity threat that organizations cannot ignore. Static analysis enables rapid identification of ransomware by examining file structure and code characteristics before execution. However, existing classifiers are predominantly designed under the closed-set assumption, causing them to misclassify novel variants into known families. Furthermore, ransomware datasets typically exhibit long-tailed distributions with emerging families having very few available samples, making it difficult for models to learn discriminative features. To address these challenges, we propose Few-Shot Open-Set Ransomware Detection through Meta-learning and Energy-based Modeling (MEM), a unified open-set recognition framework based on static analysis of Portable Executable features. By integrating Model-agnostic Meta-learning (MAML), the model rapidly adapts to new families with limited samples. The Energy Function quantifies the confidence of predictions in distinguishing between known samples and unknown ones, while Focal Loss dynamically adjusts sample weights to reduce bias introduced by imbalanced distributions. The experimental results demonstrate that MEM achieves higher classification accuracy and better rejection performance of unknown samples than existing open-set recognition methods. Full article
(This article belongs to the Special Issue New Advances in Cybersecurity Technology and Cybersecurity Management)
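The energy function and focal loss that this abstract combines each have compact standard formulations; the following is a generic sketch of those formulations (the logit shapes and the α, γ defaults are assumptions, not values taken from the paper):

```python
import numpy as np

def energy_score(logits):
    """Free energy E(x) = -logsumexp(logits): lower energy means the
    sample looks more like the known (in-distribution) families."""
    m = logits.max(axis=-1, keepdims=True)
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))

def reject_unknown(logits, threshold):
    """Flag a sample as an unknown family when its energy exceeds a
    validation-tuned threshold."""
    return energy_score(logits) > threshold

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights easy
    examples so the few samples from rare families dominate the gradient."""
    p_t = np.where(y == 1, p, 1.0 - p)
    a_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t)))
```

Confident predictions produce sharply peaked logits and hence low energy; near-uniform logits produce high energy and get rejected as unknown.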
10 pages, 847 KB  
Proceeding Paper
Enhancing Precision Farming Security Through IoT-Driven Adaptive Anomaly Detection Using a Hybrid CNN–PSO–GA Framework
by Faruk Salihu Umar and Nurudeen Mahmud Ibrahim
Biol. Life Sci. Forum 2025, 54(1), 29; https://doi.org/10.3390/blsf2025054029 (registering DOI) - 28 Feb 2026
Abstract
The adoption of Internet of Things (IoT) technologies has significantly enhanced precision farming by enabling continuous environmental monitoring and data-driven agricultural management. However, the increasing reliance on distributed sensor networks introduces critical challenges, including sensor faults, data anomalies, and cyber-physical security threats, which can undermine system reliability and decision accuracy. This study proposes an IoT-driven anomaly detection framework for smart agriculture that integrates a Convolutional Neural Network (CNN) optimized using a hybrid Particle Swarm Optimization and Genetic Algorithm (PSO–GA). The CNN learns complex spatio-temporal patterns from multivariate sensor data, while the PSO–GA strategy automatically tunes CNN hyperparameters to improve detection accuracy and model stability. To enhance adaptability under dynamic agricultural conditions, the proposed framework incorporates an online learning mechanism that incrementally updates the CNN model using newly arriving sensor data, enabling continuous adaptation to environmental changes and concept drift without full model retraining. Experiments conducted on a publicly available smart agriculture dataset demonstrate that the proposed CNN–PSO–GA framework achieves an accuracy of 74%, precision of 74%, recall of 100%, and an F1-score of 85%, outperforming baseline methods such as One-Class Support Vector Machine and Isolation Forest, particularly in reducing missed anomaly events. The results confirm the robustness, adaptability, and reliability of the proposed approach. Overall, the framework provides a practical and scalable solution for enhancing security, resilience, and operational effectiveness in precision farming systems. Full article
(This article belongs to the Proceedings of The 3rd International Online Conference on Agriculture)
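As a quick arithmetic check, the reported precision (74%) and recall (100%) are consistent with the reported F1-score of 85%, since F1 is the harmonic mean of the two:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the paper's reported values: f1_score(0.74, 1.0) ≈ 0.851,
# matching the reported 85%.
```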

39 pages, 3580 KB  
Review
Application of AI in Cyberattack Detection: A Review
by Yaw Jantuah Boateng, Nusrat Jahan Mim, Nasrin Akhter, Ranesh Naha, Aniket Mahanti and Alistair Barros
Sensors 2026, 26(5), 1518; https://doi.org/10.3390/s26051518 (registering DOI) - 28 Feb 2026
Abstract
In today's fast-changing digital environment, cyber-physical systems face escalating security challenges due to increasingly sophisticated cyberattacks. Artificial Intelligence (AI) has emerged as a powerful enabler of modern cyberattack detection, offering scalable, accurate, and adaptive solutions to counter dynamic threats. This paper provides a comprehensive review of recent advancements in AI-based cyberattack detection, focusing on Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), Federated Learning (FL), and emerging techniques such as generative AI, neuro-symbolic AI, swarm intelligence, lightweight AI, and quantum computing. We evaluate the strengths and limitations of these approaches, highlighting their performance on benchmark datasets. The review discusses traditional signature-based Intrusion Detection Systems (IDS) and their limitations against novel attack patterns, contrasted with AI-driven anomaly-based and hybrid detection methods that improve detection rates for unknown and zero-day attacks. Key challenges, including computational costs, data quality, privacy concerns, and model interpretability, are analysed alongside the role of Explainable AI (XAI) in enhancing trust and transparency. The impact of computational resources, dataset representativeness, and evaluation metrics on AI model performance is also explored. Furthermore, we investigate the potential of lightweight AI for resource-constrained environments like IoT and edge devices, and quantum computing's role in advancing detection efficiency and cryptographic security. The paper also draws attention to future research directions, particularly the development of up-to-date datasets, integration of hybrid quantum–classical models, and optimisation of asynchronous FL protocols to address evolving cybersecurity challenges.
This study aims to inspire innovation in AI-driven cyberattack detection, fostering robust, interpretable, and efficient solutions for securing complex digital environments. Full article
(This article belongs to the Section Communications)

36 pages, 3241 KB  
Article
An Anti-Sheriff Cybersecurity Audit Model: From Compliance Checklists to Intelligence-Supported Cyber Risk Auditing
by Ndaedzo Rananga and H. S. Venter
Appl. Sci. 2026, 16(5), 2315; https://doi.org/10.3390/app16052315 - 27 Feb 2026
Abstract
The increasing adoption of data-driven techniques in cybersecurity has introduced new opportunities to enhance detection, response, and automation capabilities within the cybersecurity ecosystem; however, cybersecurity auditing remains constrained by traditional compliance-oriented approaches that rely heavily on binary, checklist-based evaluations. Such approaches often reinforce a policing or "sheriff-style" perception of auditing, emphasizing enforcement rather than enablement, risk insight, and organizational improvement. Of primary concern is that the "sheriff-style" cybersecurity audit approach often fails to accurately portray the true state of an organization's cybersecurity posture, providing a misleading sense of assurance based solely on formal compliance and the existence of controls. This study proposes an Anti-Sheriff Cybersecurity Audit Model that moves beyond cybersecurity control checklists by integrating intelligence-informed risk assessments with structured human judgment to support a more robust, adaptive, and risk-oriented auditing process. Grounded in design science research (DSR), the proposed approach combines conventional binary compliance verification with intelligence-derived risk indicators and governance-based maturity assessments to evaluate cybersecurity controls across technical, operational, and organizational dimensions. The approach aligns with established standards and frameworks, including International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) 27001, the National Institute of Standards and Technology (NIST), and the Center for Internet Security (CIS) benchmarks, while extending their application beyond static compliance validation. A fictional case study is used to demonstrate the model's applicability and to illustrate how hybrid scoring can reveal residual risk not captured by conventional cybersecurity audits.
The findings indicate that combining intelligence-informed analytics with structured human judgment enhances audit depth, interpretability, and business relevance. The proposed approach, therefore, provides a foundation for evolving cybersecurity auditing from just periodic compliance assessments, toward a continuous, risk-informed, and governance-aligned assurance system. Full article
(This article belongs to the Special Issue Progress in Information Security and Privacy)

26 pages, 3199 KB  
Article
EDAER: Entropy-Driven Approach for Entity and Relation Extraction in Chinese Cyber Threat Intelligence
by Yong Li, Xiuping Li, Yangbai Zhang, Zhiqiang Liu, Xiaowei Li, Qi Xu and Xiaolin Chang
Entropy 2026, 28(3), 261; https://doi.org/10.3390/e28030261 - 27 Feb 2026
Abstract
Cyber threat intelligence (CTI) has been explored to strengthen system security via taking raw threat data from various data sources and transforming it into actionable insights that enable organizations to predict, detect, and respond to cyber threats. Named entity recognition (NER) and relation extraction (RE) are the key tasks of CTI data mining. However, current CTI NER and/or RE research is mainly focused on English CTI, which is not directly transferable to Chinese CTI due to fundamental linguistic and terminological differences. Moreover, the existing limited studies on Chinese CTI do not effectively address uncertainty in predictions in low-resource scenarios where entities and relations are sparse. This work aims to improve the performance of NER and RE tasks in low-resource Chinese CTI scenarios, and we make two major contributions. The first is that we construct a Chinese CTI dataset, which includes 16 types of entities and 9 types of relations—more than those of the existing open-source dataset on Chinese CTI. The second is that we propose an entropy-driven approach for entity and relation (EDAER) extraction. EDAER is the first to combine the techniques of RoBERTa_wwm, Mamba, RDCNN and CRF to perform NER tasks. In addition, EDAER is the first to apply entropy to quantify the uncertainty of the model’s predictions in NER and RE tasks in Chinese CTI scenarios. Moreover, EDAER is the first to apply contrastive learning techniques in Chinese CTI scenarios to learn meaningful features by maximizing the similarity between positive samples and minimizing the similarity between negative samples. Extensive experimental results on public and our built datasets demonstrate that our proposed approach performs the best. 
These results show that (1) RoBERTa_wwm significantly outperforms BERT on both NER and RE tasks; (2) Mamba outperforms BiLSTM on the NER task; (3) the entropy-based dynamic gating mechanism contributes to performance improvements in both NER and RE tasks; and (4) the uncertainty-guided contrastive learning mechanism is helpful for performance improvement in the NER task. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications, 2nd Edition)
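The uncertainty quantification EDAER relies on reduces, at its core, to the Shannon entropy of the model's predictive distribution; a minimal sketch of that core (the gating that consumes this value is the paper's contribution, the code below is only an illustration):

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy of a softmax distribution; higher entropy means a
    more uncertain prediction, which an entropy-driven gate can down-weight."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())
```

A one-hot distribution has entropy near zero, while a uniform distribution over k classes has the maximum entropy ln(k).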

35 pages, 13595 KB  
Review
A Comprehensive Survey on 5G RedCap: Technologies, Security Vulnerabilities, and Attack Vectors
by Pavan Raja I, Kurunandan Jain, Hari N. N, Sethu Subramanian N and Prabhakar Krishnan
Future Internet 2026, 18(3), 118; https://doi.org/10.3390/fi18030118 - 27 Feb 2026
Abstract
While 5G addresses extreme performance tiers, RedCap in 3GPP Releases 17 and 18 fills critical mid-tier performance gaps for diverse applications such as industrial sensors and consumer wearables. The existing academic literature remains fragmented, focusing on isolated metrics rather than a holistic synthesis, and there is a significant need to integrate technical specifications with empirical industry data. This survey systematically reviews the Release 17/18 specifications, integrating literature from 2021 to 2025. We consolidate academic simulations and empirical industry reports to facilitate a rigorous comparative analysis across critical performance indicators. We evaluate the complexity reduction achieved via bandwidth limitation, antenna reduction, and half-duplex FDD (HD-FDD), and provide a comprehensive security threat matrix mapping vulnerabilities such as RACH spoofing and paging suppression to countermeasures. RedCap cannot match eMBB throughput or NB-IoT's battery life; consequently, legacy LPWA remains more suitable for simple, decade-long sensing tasks. This work contributes a novel use-case taxonomy and a security analysis, providing practitioners with actionable insights into complexity trade-offs and network security risks. Future research should prioritize AI-driven management and "zero-maintenance" IoT through advanced power-saving innovations. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of AI, IoT, and Edge Computing)

45 pages, 9338 KB  
Review
Wireless and Emerging Technologies to Meet E-Government Demands: Applications, Benefits, and Challenges
by Hussein Mohammed Barakat, Sawal Hamid Md Ali, Zixin Xu, Sallar S. Murad, Mohammad D. Soltani, Salman Yussof and Bha-Aldan M. Oraibi
Information 2026, 17(3), 225; https://doi.org/10.3390/info17030225 - 27 Feb 2026
Abstract
The rapid digitization of public services has positioned e-government as a cornerstone of modern governance, relying increasingly on advanced wireless and emerging technologies to support scalable, resilient, and data-driven operations. Despite extensive adoption efforts, a comprehensive investigation and systematic analysis of how emerging technologies collectively serve e-government across key domains remains limited. In particular, existing studies often address technologies in isolation, leaving gaps in understanding their integrated roles in smart cities, sustainability initiatives, cybersecurity frameworks, and evolving energy paradigms. This paper investigates the use of wireless and emerging technologies within e-government ecosystems and examines their employment and benefits across diverse public-sector applications. The study analyzes how these technologies contribute to service delivery, operational coordination, and policy execution, while critically discussing the technical, organizational, and regulatory challenges associated with their deployment. Furthermore, the impacts of these challenges on performance, security, and long-term viability are assessed to guide researchers, system designers, and policymakers. By consolidating fragmented research and highlighting cross-domain interactions, this work offers a structured perspective on the role of wireless and emerging technologies in shaping the next generation of e-government systems. Full article
(This article belongs to the Section Wireless Technologies)

27 pages, 1644 KB  
Article
Artificial Intelligence as an Emerging Risk Dimension in Corporate Sustainability Reporting: A Legal and Governance Perspective
by Andreja Primec, Jernej Belak and Matic Čufar
Sustainability 2026, 18(5), 2278; https://doi.org/10.3390/su18052278 - 27 Feb 2026
Abstract
(1) Background: As digital technologies become integral to business operations, risks associated with artificial intelligence (AI), data governance, and cybersecurity are emerging as material concerns in the context of corporate sustainability. This article examines the extent to which current sustainability reporting standards (CSRD/ESRS, GRI, ISSB, and SASB) address risks associated with AI and broader digital transitions. (2) Methods: This study employs a qualitative content analysis of twenty corporate sustainability reports across digitally intensive sectors, complemented by a doctrinal legal analysis of the relevant normative instruments governing sustainability and digital risk reporting. (3) Results: This study finds that while references to AI risks are increasingly present, they often lack depth, standardisation, and connection to legal accountability mechanisms. Key gaps include the underreporting of human rights impacts linked to algorithmic decision-making and limited disclosure on internal AI governance structures. (4) Conclusions: The article argues that to uphold the principles of transparency, due diligence, and stakeholder accountability, sustainability reporting frameworks must evolve to explicitly capture the risks associated with AI transitions. The findings call for regulatory and standard-setting bodies to establish more explicit guidance on AI risk disclosures, and for corporate directors to integrate these dimensions into their risk governance and reporting duties. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)
