Search Results (114)

Search Parameters:
Keywords = hardware defenses

28 pages, 1543 KB  
Article
Green Computing for Critical Infrastructure: A Sustainability-First AI Framework for Energy-Efficient Anomaly Detection in Industrial Control Systems
by Muhammad Muzamil Aslam, Ali Tufail, Yepeng Ding, Liyanage Chandratilak De Silva, Rosyzie Anna Awg Haji Mohd Apong and Megat F. Zuhairi
Technologies 2026, 14(5), 267; https://doi.org/10.3390/technologies14050267 - 29 Apr 2026
Viewed by 129
Abstract
Industrial Control Systems (ICSs) face dual imperatives: protecting critical infrastructure from escalating cybersecurity threats while reducing the environmental impact of AI-powered defense mechanisms. Current deep learning anomaly detection approaches achieve strong security performance but consume substantial computational resources, creating an environmental paradox in which AI solutions designed to protect infrastructure contribute to carbon emissions at scale. This competition between cybersecurity effectiveness and sustainability objectives intensifies as regulatory frameworks increasingly mandate both security resilience and environmental accountability. This research presents Green-USAD, a sustainability-first AI framework that inverts traditional design paradigms by integrating energy efficiency as a primary architectural constraint from inception rather than applying compression retrospectively. The proposed approach advances green computing for critical infrastructure through four key contributions: (1) a compressed architecture with validation-guided convergence protocols achieving competitive detection performance with minimal computational overhead; (2) a multi-objective optimization framework using the Analytic Hierarchy Process to systematically balance security and sustainability requirements; (3) a hardware-validated energy measurement methodology addressing reproducibility challenges in green AI literature; and (4) a comprehensive evaluation demonstrating cross-dataset generalization and edge-deployment viability. Validation on ICS benchmarks demonstrates that sustainability-first design achieves substantial energy reduction while maintaining operational detection accuracy, with measured training consumption below 1% of conventional approaches and proportional carbon emission reductions. Comparative analysis against post hoc compression baselines establishes fundamental advantages of design-from-inception over train-then-compress paradigms.
Edge device deployment on resource-constrained hardware confirms real-world applicability for distributed industrial environments. Results establish that robust cybersecurity and environmental sustainability represent unified rather than competing objectives when intelligent systems are designed with sustainability as a foundational principle. Full article
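The Analytic Hierarchy Process mentioned in contribution (2) derives criterion weights from a pairwise-comparison matrix. As a minimal sketch, assuming three hypothetical criteria (detection accuracy, energy use, latency) and invented Saaty-scale judgments that are not from the paper, the standard row-geometric-mean approximation of the principal eigenvector looks like this:

```python
import math

# Hypothetical pairwise-comparison matrix over three criteria:
# detection accuracy, energy use, inference latency (Saaty's 1-9 scale).
# A[i][j] > 1 means criterion i is judged more important than criterion j.
A = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]

def ahp_weights(matrix):
    """Approximate the principal eigenvector by row geometric means
    (a standard AHP shortcut), then normalize so the weights sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(A)  # accuracy receives the largest weight
```

In practice a consistency-ratio check (CR < 0.1) would precede trusting the weights; the paper's actual criteria and judgments may differ.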
35 pages, 2859 KB  
Article
Laser Linewidth Effects in Continuous-Variable QKD: Simulation-Based Analysis and Optimization Guidelines for Defense-Grade Secure System
by Seyed Saman Mahjour and Fernando M. Araújo-Moreira
Photonics 2026, 13(5), 432; https://doi.org/10.3390/photonics13050432 - 27 Apr 2026
Viewed by 142
Abstract
Continuous-Variable Quantum Key Distribution (CV-QKD) offers practical advantages for secure communication, but laser linewidth-induced phase noise remains a critical performance limitation. This work presents a comprehensive simulation-based analysis quantifying the impact of laser linewidth on secret key rate (SKR) in Gaussian-modulated coherent-state CV-QKD systems. We develop a detailed noise model incorporating detector electronics, Raman scattering, phase recovery, ADC quantization, and laser relative intensity noise. Through systematic parameter sweeps spanning linewidths from 10 Hz to 250 kHz, modulation variances from 1 to 20 SNU, and fiber distances up to 100 km, we identify three distinct operational regimes and optimization strategies for both transmitted local oscillator (TLO) and local–local oscillator (LLO) configurations under homodyne and heterodyne detection. Results show that metropolitan-scale links (50 km) require linewidths below 5 kHz to maintain secure operation, with performance decreasing beyond 25 kHz. We demonstrate that modulation variance must be jointly optimized with laser quality, with optimal values decreasing from 3–4 SNU at narrow linewidths to 2–2.5 SNU at moderate linewidths. The analysis reveals asymmetric sensitivity in LLO systems where local oscillator linewidth degrades performance more strongly than signal laser linewidth. These quantitative findings provide practical design guidelines for achieving secure CV-QKD operation over metropolitan distances with realistic hardware constraints, supporting deployment of defense-grade quantum communication networks. Full article
(This article belongs to the Special Issue Quantum Optics: Communication, Sensing, Computing, and Simulation)
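The linewidth-to-excess-noise mechanism this abstract studies can be sketched with the common Lorentzian phase-diffusion model; the symbol rate and modulation variance below are illustrative assumptions, not values from the paper:

```python
import math

def phase_variance(linewidth_hz: float, symbol_rate_hz: float) -> float:
    # A Lorentzian laser's phase performs a random walk; the variance
    # accumulated over one symbol period T = 1/R is 2*pi*linewidth*T.
    return 2.0 * math.pi * linewidth_hz / symbol_rate_hz

def phase_excess_noise(v_mod_snu: float, sigma2: float) -> float:
    # Residual phase noise converts modulation variance into excess noise:
    # eps = 2*V_mod*(1 - exp(-sigma2/2)), approximately V_mod*sigma2
    # when sigma2 is small.
    return 2.0 * v_mod_snu * (1.0 - math.exp(-sigma2 / 2.0))

# Illustrative numbers: 5 kHz combined linewidth, 100 MBd, V_mod = 3 SNU.
sigma2 = phase_variance(5e3, 100e6)
eps = phase_excess_noise(3.0, sigma2)
```

This toy calculation shows why narrow linewidths matter: the phase-noise contribution to excess noise scales linearly with linewidth and with modulation variance, consistent with the abstract's finding that the two must be optimized jointly.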
39 pages, 1037 KB  
Article
IoT-Oriented Digital Signature Defense Against Single-Trace Belief Propagation Attacks in Post-Quantum Cryptography
by Maksim Iavich and Nursulu Kapalova
J. Cybersecur. Priv. 2026, 6(3), 77; https://doi.org/10.3390/jcp6030077 - 27 Apr 2026
Viewed by 387
Abstract
Post-quantum cryptographic implementations in Internet-of-Things (IoT) devices are significantly threatened by physical side-channel attacks, where practical attack risks are increased by physical accessibility and resource limitations. In particular, recent work has shown that belief propagation-based attacks can recover secret keys from lattice-based digital signatures using only a single side-channel trace of the Number Theoretic Transform (NTT). This work introduces the Quantum-Randomized Number Theoretic Transform (QR-NTT), an implementation-level defense mechanism that integrates quantum-derived entropy directly into the execution flow of lattice-based signature algorithms. Rather than treating randomness as a static input, QR-NTT uses quantum entropy to introduce controlled variability in execution ordering, arithmetic factor usage, and memory access behavior while preserving mathematical correctness and constant-time execution. The proposed framework is designed for embedded platforms and remains compatible with existing post-quantum cryptographic standards and IoT communication protocols. A complete implementation on an ARM Cortex-M4 platform, coupled with commercial quantum random number generator (QRNG) hardware, demonstrates that QR-NTT significantly degrades the effectiveness of template matching and belief propagation attacks. Experimental evaluation shows a reduction in single-trace attack success rates from over 90% to below 3% and an increase of approximately two orders of magnitude in the number of traces required for successful key recovery. These security gains are achieved with moderate overheads of 18.3% in execution time and 1.8 KB of additional memory while remaining well within practical IoT constraints. The results indicate that quantum-derived entropy can be leveraged as a practical implementation-level defense against physical attacks, complementing algorithmic post-quantum security. 
QR-NTT demonstrates a viable path toward strengthening the real-world resilience of post-quantum IoT systems without sacrificing deployability. Full article
(This article belongs to the Section Cryptography and Cryptology)
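The core QR-NTT idea, randomizing butterfly execution order while preserving the transform's result, can be illustrated on a toy 8-point NTT mod 17. The parameters are illustrative and far smaller than a real lattice-signature ring, and a plain PRNG stands in for the quantum entropy source:

```python
import random

Q, N, W = 17, 8, 9   # toy parameters: 9 has multiplicative order 8 mod 17

def dif_ntt(a, rng=None):
    """Iterative Gentleman-Sande (decimation-in-frequency) NTT over Z_Q.
    If rng is given, the independent butterflies inside each stage are
    executed in a random order -- the shuffling idea behind QR-NTT."""
    a = list(a)
    m = N
    while m >= 2:
        half, step = m // 2, N // m
        # Butterflies of one stage touch disjoint index pairs (lo, hi),
        # so any execution order within the stage yields the same result.
        jobs = [(s + i, s + i + half, i * step)
                for s in range(0, N, m) for i in range(half)]
        if rng is not None:
            rng.shuffle(jobs)          # quantum entropy would drive this
        for lo, hi, e in jobs:
            u, v = a[lo], a[hi]
            a[lo] = (u + v) % Q
            a[hi] = (u - v) * pow(W, e, Q) % Q
        m = half
    return a                           # output in bit-reversed order

x = [1, 2, 3, 4, 5, 6, 7, 8]
reference = dif_ntt(x)                 # fixed, in-order execution
shuffled = dif_ntt(x, random.Random(0))
```

Because each butterfly reads and writes only its own pair of coefficients, shuffled and in-order runs agree exactly, which is the "mathematical correctness preserved" property the abstract claims; the paper's actual defense additionally randomizes arithmetic factor usage and memory access behavior.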
21 pages, 8107 KB  
Systematic Review
A Systematic Review of Kernel-Level Security Mechanisms, Vulnerability Detection and Mitigation in Modern Operating Systems
by Zeeshan Ali, Naeem Aslam, Andrea Marotta, Walter Tiberti and Dajana Cassioli
Sensors 2026, 26(8), 2452; https://doi.org/10.3390/s26082452 - 16 Apr 2026
Viewed by 571
Abstract
Kernel attacks are still one of the most severe threats to modern operating systems (OS) due to the kernel’s privileged control over hardware, memory, and process management. This study reviews significant kernel-level security mechanisms for vulnerability detection, as well as the prevention and mitigation of exploitation in today’s OSs. Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, a total of 30 high-quality, peer-reviewed studies were examined and analyzed in detail using the Critical Appraisal Skills Programme (CASP) quality framework. The discussion of leading research directions emanates from three central questions: What are the predominant kernel attack vectors? How are currently available protection and detection techniques assessed? What are the emerging research directions? The study identifies the following as the principal sources of kernel compromise: memory corruption, privilege escalation, rootkits, and race condition exploits. It also identifies several techniques for kernel hardening, such as Mandatory Access Control (MAC), the use of SELinux and AppArmor, kernel integrity monitoring, secure and measured boot, fuzz testing, and hardware-assisted protection. Some of these, including machine learning-based detection and live kernel patching, show considerable promise for proactive defense against zero-day vulnerabilities. Issues regarding scalability, detection accuracy, and securing containerized and virtualized environments remain to be solved. This paper aims to provide a relevant, structured, and up-to-date synthesis of kernel security research and offer valuable guidance on the development of robust, adaptive, and novel OS defense mechanisms. Full article
(This article belongs to the Section Sensor Networks)
28 pages, 1349 KB  
Review
Adversarial Robustness in Quantum Machine Learning: A Scoping Review
by Yanche Ari Kustiawan and Khairil Imran Ghauth
Computers 2026, 15(4), 233; https://doi.org/10.3390/computers15040233 - 9 Apr 2026
Viewed by 608
Abstract
Quantum machine learning (QML) is emerging as a promising paradigm at the intersection of quantum computing and artificial intelligence, yet its security under adversarial conditions remains insufficiently understood. This scoping review aims to systematically map empirical research on adversarial robustness in QML and to identify dominant threat models, defense strategies, evaluation approaches, practical constraints, and future research directions. Following PRISMA-ScR guidelines, four major databases were searched, resulting in 53 eligible empirical studies published between 2020 and 2026. The findings show that most research concentrates on input-level evasion attacks, particularly adversarial examples, and primarily evaluates robustness in classification-oriented models such as variational quantum circuits and quantum neural networks. Defense strategies are largely adapted from classical adversarial training and noise-based mitigation, with limited deployment on real quantum hardware. Robustness assessment is predominantly empirical, relying on accuracy degradation and attack success rate, while formal certification methods remain less common. The literature also highlights substantial constraints related to hardware limitations, NISQ noise, computational cost, and dataset scale. Overall, the evidence indicates that adversarial robustness research in QML is expanding but remains methodologically concentrated, underscoring the need for standardized benchmarking, scalable defenses, and hardware-validated robustness evaluation frameworks. Full article
19 pages, 712 KB  
Article
Federated Learning-Driven Protection Against Adversarial Agents in a ROS2 Powered Edge-Device Swarm Environment
by Brenden Preiss and George Pappas
AI 2026, 7(4), 127; https://doi.org/10.3390/ai7040127 - 1 Apr 2026
Viewed by 753
Abstract
Federated learning (FL) enables collaborative model training across distributed devices and robotic systems while preserving data privacy, making it well-suited for swarm robotics and edge-device-powered intelligence. However, FL remains vulnerable to adversarial behaviors such as data and model poisoning, particularly in real-world deployments where detection methods must operate under strict computational and communication constraints. This paper presents a practical, real-world federated learning framework that enhances robustness to adversarial agents in a ROS2-based edge-device swarm environment. The proposed system integrates the Federated Averaging (FedAvg) algorithm with a lightweight average cosine similarity-based filtering method to detect and suppress harmful model updates during aggregation. Unlike prior work that primarily evaluates poisoning defenses in simulated environments, this framework is implemented and evaluated on physical hardware, consisting of a laptop-based aggregator and multiple Raspberry Pi worker nodes. A convolutional neural network (CNN) based on the MobileNetV3-Small architecture is trained on the MNIST dataset, with one worker executing a sign-flipping model poisoning attack. Experimental results show that FedAvg alone fails to maintain meaningful model accuracy under adversarial conditions, resulting in near-random classification performance with a final global model accuracy of 11% and a loss of 2.3. In contrast, integrating cosine similarity filtering effectively detects the sign-flipping model poisoning attack in the evaluated ROS2 swarm experiment, allowing the global model to maintain around 90% accuracy and a loss of around 0.37 despite the presence of an attacker, close to the 93% baseline accuracy that FedAvg alone achieves under no attack, with only a minimal increase in loss.
In the presence of an attacker, the proposed method also maintains a global-model false positive rate (FPR) of around 0.01 and false negative rate (FNR) of around 0.10, a minimal difference from the baseline FedAvg-only results of around 0.008 FPR and 0.07 FNR. Additionally, the proposed FedAvg + cosine similarity filtering method maintains computational cost similar to baseline FedAvg with no attacker: baseline results show an average runtime of about 34 min, while the proposed method averages about 35 min. The average size of the global model shared among workers also remains consistent at around 7.15 megabytes, showing little to no increase in message payload size between the baseline and the proposed method. These results demonstrate that computationally lightweight cosine similarity-based detection methods can be effectively deployed in real-world, resource-constrained robotic swarm environments, providing a practical path toward improving robustness in federated learning deployments beyond simulation-based evaluation. Full article
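The cosine-similarity filter described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold, the toy update vectors, and the choice of comparing each update against the coordinate-wise mean are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two flat parameter vectors
    # (assumes non-zero vectors).
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def filtered_fedavg(updates, threshold=0.0):
    """Average only the updates whose cosine similarity to the
    coordinate-wise mean of all submitted updates exceeds `threshold`."""
    n, d = len(updates), len(updates[0])
    mean = [sum(u[i] for u in updates) / n for i in range(d)]
    kept = [u for u in updates if cosine(u, mean) > threshold]
    return [sum(u[i] for u in kept) / len(kept) for i in range(d)]

honest = [[1.0, 2.0, 3.0], [1.1, 1.9, 3.2], [0.9, 2.1, 2.8]]
poisoned = [-1.0, -2.0, -3.0]              # sign-flipped attacker update
agg = filtered_fedavg(honest + [poisoned])  # attacker's update is rejected
```

A sign-flipped update points opposite to the honest consensus, so its cosine similarity to the mean is negative and it is dropped before averaging, which is why this defense is effective against exactly the attack the experiment runs.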
26 pages, 423 KB  
Article
Hardware-Anchored ES-SPA: A Dynamic Zero-Trust Architecture for Secure eSIM Provisioning in 6G IoT via Moving Target Defense
by Hari N. N., Kurunandan Jain, Prabu P and Prabhakar Krishnan
Future Internet 2026, 18(4), 187; https://doi.org/10.3390/fi18040187 - 1 Apr 2026
Viewed by 598
Abstract
The rapid evolution of 6G networks and large-scale Internet of Things (IoT) deployments intensifies security and privacy challenges in embedded SIM (eSIM) Remote SIM Provisioning (RSP), particularly during the bootstrap and profile delivery phases. Traditional perimeter-based and VPN-centric approaches expose static attack surfaces, making provisioning workflows vulnerable to denial-of-service (DoS) attacks, reconnaissance, and profile lock-in risks. This paper presents MTD-SDP-eSIM, a hardware-anchored Zero Trust Architecture that secures eSIM provisioning by integrating the embedded Universal Integrated Circuit Card (eUICC) as a root of trust with Software-Defined Perimeter (SDP), Software-Defined Networking (SDN), and Moving Target Defense (MTD). The framework introduces Hardware-Anchored Single Packet Authorization (ES-SPA), which cryptographically binds initial access to tamper-resistant eUICC credentials and enforces an authenticate-before-connect model. A unified Zero Trust controller dynamically orchestrates SDP access control, SDN-based micro-segmentation, and MTD-driven Network Address Shuffling during high-risk provisioning phases. This framework is validated on a high-fidelity 6G testbed built using ns-3, Open5GS, and P4-programmable switches. Experimental results demonstrate a 90% DoS survival rate during provisioning, a 35% scalability improvement over VPN-based baselines, and a 75% reduction in profile lock-in failures through runtime deletion verification. These findings confirm that anchoring dynamic network defenses in hardware-rooted identity significantly enhances the resilience, scalability, and privacy of eSIM provisioning for massive 6G IoT deployments. Full article
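The MTD-driven Network Address Shuffling component might be sketched along these lines. This is a hypothetical illustration, not the paper's ES-SPA design: the address pool, epoch scheme, and HMAC-based derivation are all assumptions.

```python
import hashlib
import hmac
import ipaddress

def epoch_address(secret: bytes, epoch: int,
                  pool_base: str = "10.0.0.0", pool_size: int = 256) -> str:
    """Derive the provisioning endpoint's address for a given time epoch.
    Authorized clients holding the shared secret can follow the shuffle,
    while a scanner sees the endpoint move every epoch."""
    digest = hmac.new(secret, epoch.to_bytes(8, "big"), hashlib.sha256).digest()
    offset = int.from_bytes(digest[:4], "big") % pool_size
    return str(ipaddress.ip_address(pool_base) + offset)

addr_now = epoch_address(b"shared-secret", epoch=1000)
addr_next = epoch_address(b"shared-secret", epoch=1001)
```

In the paper's architecture the secret would be anchored in the eUICC rather than a software key, and shuffling would be activated only during high-risk provisioning phases.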
7 pages, 1907 KB  
Proceeding Paper
Adaptive Phishing Detection and Mitigation System Using Huawei Mind Reinforcement Learning with Human Feedback
by Jesher Immanuel B. Hael, Mark Daniel S. Ortiz and Dionis A. Padilla
Eng. Proc. 2026, 134(1), 13; https://doi.org/10.3390/engproc2026134013 - 30 Mar 2026
Viewed by 289
Abstract
Phishing remains a persistent cybersecurity threat, exploiting social engineering to bypass traditional defenses. We developed a phishing detection system that integrates baseline supervised learning with Reinforcement Learning from Human Feedback (RLHF) to improve adaptability against evolving attack strategies. Implemented using the Huawei MindRLHF framework and deployed on Raspberry Pi hardware, the system was evaluated using a dataset of 135,325 email samples consisting of both phishing and legitimate messages. The baseline supervised model achieved 94.3% accuracy, while the RLHF-enhanced model, through 74 iterations, achieved improved adaptability, reaching 96.8% accuracy with balanced precision and recall. A multi-component reward function was designed to incorporate correct classification, human agreement, confidence matching, and consistency, enabling the model to refine its decision boundaries beyond automated optimization. Real-time monitoring and feedback were facilitated through a hardware-integrated LCD interface. The results confirm enhanced detection accuracy and reduced error rates, demonstrating the system's viability for deployment. The findings highlight the potential of human-centered RLHF to improve the resilience and scalability of phishing mitigation systems against emerging cyber threats. Full article
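A multi-component reward of the kind the abstract describes might be combined as below. The four components match the abstract (correct classification, human agreement, confidence matching, consistency), but the weights and exact functional forms are invented for illustration, not taken from the paper:

```python
def reward(pred: int, label: int, human_label: int,
           confidence: float, prev_pred: int) -> float:
    """Hypothetical multi-component reward for a phishing classifier.
    Weights (1.0, 0.5, 0.25, 0.25) are illustrative, not from the paper."""
    r = 1.0 if pred == label else -1.0                 # correct classification
    r += 0.5 if pred == human_label else -0.5          # human agreement
    target_conf = 1.0 if pred == label else 0.0        # confidence matching:
    r += 0.25 * (1.0 - abs(confidence - target_conf))  # reward calibration
    r += 0.25 if pred == prev_pred else 0.0            # consistency over time
    return r

good = reward(pred=1, label=1, human_label=1, confidence=0.95, prev_pred=1)
bad = reward(pred=0, label=1, human_label=1, confidence=0.95, prev_pred=1)
```

The shaping intent is that a correct, human-endorsed, well-calibrated, stable prediction scores strictly higher than a confident miss, so policy optimization pushes the decision boundary toward human judgment rather than raw accuracy alone.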
47 pages, 646 KB  
Review
Securing Unmanned Devices in Critical Infrastructure: A Survey of Hardware, Network, and Swarm Intelligence
by Kubra Kose, Nuri Alperen Kose and Fan Liang
Electronics 2026, 15(6), 1204; https://doi.org/10.3390/electronics15061204 - 13 Mar 2026
Viewed by 1360
Abstract
As Unmanned Aerial Vehicles (UAVs) become integral to critical infrastructure, ranging from precision agriculture to emergency disaster recovery, their security becomes a matter of systemic resilience. This paper provides a comprehensive thematic survey of the security landscape for unmanned devices, bridging the gap between low-level hardware vulnerabilities and high-level mission failures. We propose a multidimensional taxonomy that categorizes challenges into hardware roots of trust, swarm intelligence threats, and domain-specific applications. A primary focus is placed on the Resource–Security Paradox, where the energy cost of heavy cryptographic or AI defenses directly reduces flight endurance, creating a trade-off that adversaries exploit through battery-exhaustion attacks. Beyond standard threats, we analyze emerging risks in additive manufacturing supply chains, the “Sim-to-Real” gap in AI-driven perception, and the legal necessity of Digital Forensic Readiness (DFR) for post-incident attribution. Through a systematic review of defensive frameworks, including lightweight encryption, Mamba-KAN anomaly detection, and blockchain-anchored logging, we evaluate the effectiveness of current solutions against complex adversarial models. Finally, we identify critical research gaps, providing a roadmap for security-by-design in the next generation of critical infrastructure swarms. Full article
(This article belongs to the Special Issue Computer Networking Security and Privacy)
17 pages, 3378 KB  
Article
Securing Virtual Reality: Threat Models, Vulnerabilities, and Defense Strategies
by Andrija Bernik, Igor Tomicic and Petra Grd
Virtual Worlds 2026, 5(1), 13; https://doi.org/10.3390/virtualworlds5010013 - 10 Mar 2026
Viewed by 615
Abstract
As virtual reality technologies evolve toward widespread adoption in education, industry, and social communication, their increasing complexity exposes new and often overlooked security challenges. Immersive environments collect continuous multimodal data, including motion tracking, gaze, voice, and biometric indicators that extend far beyond traditional computing attack surfaces. This paper synthesizes recent research (2023–2025) on cybersecurity, privacy, and behavioral safety in virtual reality (VR) systems, identifies the main vulnerabilities, and proposes a unified defense architecture: the three-layer VR Security Framework (TVR-Sec). Through comparative review and conceptual integration of 31 peer-reviewed studies, three interdependent protection domains emerged: (1) System Integrity, securing hardware, firmware, and network communications against spoofing and malware; (2) User Privacy, ensuring the ethical management of biometric and behavioral data through federated learning and consent-based control; and (3) Socio-Behavioral Safety, addressing harassment, manipulation, and psychological exploitation in shared virtual spaces. The framework situates VR security as a multidimensional adaptive process that combines technical hardening with human-centered defense and ethical design. By aligning cyber–human protections through an AI-driven monitoring and policy engine, TVR-Sec advances a holistic paradigm for securing future immersive ecosystems. Full article
16 pages, 396 KB  
Review
Security Threats and AI-Based Detection Techniques in IoT Chips
by Hiba El Balbali and Anas Abou El Kalam
Chips 2026, 5(1), 9; https://doi.org/10.3390/chips5010009 - 4 Mar 2026
Viewed by 774
Abstract
The rapid expansion of the Internet of Things (IoT) has opened resource-limited devices to novel physical threats, such as Side-Channel Attacks (SCAs) and Hardware Trojans (HTs). Traditional security mechanisms are often incapable of withstanding such hardware-based attacks, specifically on low-power System-on-Chip (SoC) designs where static defenses can incur 2× to 3× overhead in silicon area and power. Herein, we examine the gap between hardware security and embedded AI. We present a comprehensive survey of the current hardware threat landscape and analyze the emergence of “Secure-by-Design” paradigms, specifically focusing on the integration of Edge AI and TinyML as active, on-chip intrusion detection mechanisms. This review presents a critical analysis of trade-offs for running lightweight ML models on hardware by comparing state-of-the-art approaches. Our analysis highlights that optimized architectures, such as Mamba-Enhanced Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs), can achieve detection accuracies exceeding 99% against SCAs and above 92% against stealthy Hardware Trojans, while offering up to 75% lower power consumption compared to standard deep learning baselines. Finally, open challenges such as adversarial attacks on defense models are briefly discussed, and focus is placed on future directions toward constructing secure chips based on robust, AI-driven technology. Full article
(This article belongs to the Special Issue Emerging Issues in Hardware and IC System Security)
49 pages, 943 KB  
Review
A Review of Resilient IoT Systems: Trends, Challenges, and Future Directions
by Bandar Alotaibi
Appl. Sci. 2026, 16(4), 2079; https://doi.org/10.3390/app16042079 - 20 Feb 2026
Cited by 1 | Viewed by 885
Abstract
The Internet of Things (IoT) is increasingly embedded in critical infrastructures across healthcare, energy, transportation, and industrial automation, yet its pervasiveness introduces substantial security and resilience challenges. This paper presents a comprehensive review of recent advances in IoT resilience, focusing on developments reported between 2022 and 2025. A layered taxonomy is proposed to organize resilience strategies across hardware, network, learning, application, and governance layers, addressing adversarial, environmental, and hybrid stressors. The survey systematically classifies and compares more than forty representative studies encompassing deep learning under adversarial attack, generative and ensemble intrusion detection, hardware and protocol-level defenses, federated and distributed learning, and trust and governance-based approaches. A comparative analysis shows that while adversarial training, GAN-based augmentation, and decentralized learning improve robustness, their evidence is often confined to specific datasets or attack scenarios, with limited validation in large-scale deployments. The study highlights challenges in benchmarking adaptivity, cross-layer integration, and explainable resilience, concluding with future directions for creating antifragile IoT systems that can self-heal and adapt to evolving cyber–physical threats. Full article
27 pages, 1193 KB  
Review
A Survey of Emerging DDoS Threats in New Power Systems
by Fan Luo, Siqin Fan and Guolin Shao
Sensors 2026, 26(4), 1097; https://doi.org/10.3390/s26041097 - 8 Feb 2026
Viewed by 660
Abstract
Distributed Denial-of-Service (DDoS) attacks remain the most pervasive and operationally disruptive cyber threat and are routinely weaponized in interstate conflict (e.g., Russia–Ukraine and Stuxnet). Although attack-chain models are standard for Advanced Persistent Threat (APT) analysis, they have seldom been applied to DDoS, which is often framed as a single-step volumetric assault. However, ubiquitous intelligence and ambient connectivity increasingly enable DDoS campaigns to unfold as multi-stage operations rather than isolated floods. In parallel, large language models (LLMs) create new opportunities to strengthen traditional DDoS defenses through richer contextual understanding. Reviewing incidents from 2019 to 2024, we propose a three-phase DDoS attack chain—preparation, development, and execution—that captures contemporary tactics and their dependencies on novel hardware, network architectures, and application protocols. We classify these patterns, contrast them with conventional DDoS, survey current defenses (anycast and scrubbing, BGP Flowspec, programmable data planes, adaptive ML detection, API hardening), and outline research directions in cross-layer telemetry, adversarially robust learning, automated mitigation orchestration, and cooperative takedown. Full article

44 pages, 1387 KB  
Review
FPGA-Based Reconfigurable System: Research Progress and New Trend on High-Reliability Key Problems
by Zeyu Li, Pinle Qin, Rui Chai, Yuchen Hao, Dongmei Zhang and Hui Li
Electronics 2026, 15(3), 548; https://doi.org/10.3390/electronics15030548 - 27 Jan 2026
Cited by 1 | Viewed by 1096
Abstract
FPGA-based reconfigurable systems play a vital role in many critical domains by virtue of their unique advantages. They can effectively adapt to dynamically changing application scenarios, while featuring high parallelism and low power consumption. As a result, they have been widely adopted in key sectors such as aerospace, the nuclear industry, and weapons equipment, where high performance and stability are of utmost importance. However, these systems face significant challenges. The continuous and drastic reduction in chip process size has led to increasingly complex and delicate internal circuit structures and physical characteristics. Meanwhile, the operating environments are often harsh and unpredictable. Additionally, the adoption of untrusted third-party foundries to reduce development costs further compounds these issues. Collectively, these factors make such systems highly susceptible to reliability threats, including environmental radiation, aging degradation, and malicious hardware attacks. These problems severely impact the stable operation and functionality of the systems. Therefore, ensuring the highly reliable operation of reconfigurable systems has become a critical issue that urgently needs to be addressed. There is a pressing need to summarize their technical characteristics, research status, and development trends comprehensively and in depth. In response, this paper conducts relevant research. By systematically reviewing 183 domestic and international research papers published between 2012 and 2024, it first provides a detailed analysis of the root causes of reliability issues in reconfigurable systems, thoroughly exploring their underlying mechanisms. Second, it focuses on the key technologies for achieving high reliability, encompassing four types of fault-tolerant design technologies, three types of aging mitigation technologies, and two types of hardware attack defense technologies. The paper comprehensively summarizes relevant research findings and the latest advancements in this field, offering a wealth of references for related research. Finally, it conducts a detailed comparative analysis and summary of the research hotspots in the field of high-reliability reconfigurable systems. It objectively evaluates the achievements and shortcomings of current research efforts and delves into the development trends of key technologies for high-reliability reconfigurable systems, providing clear directions for future research and practical applications.
(This article belongs to the Special Issue New Trends in Cybersecurity and Hardware Design for IoT)

25 pages, 1436 KB  
Article
Entropy-Augmented Forecasting and Portfolio Construction at the Industry-Group Level: A Causal Machine-Learning Approach Using Gradient-Boosted Decision Trees
by Gil Cohen, Avishay Aiche and Ron Eichel
Entropy 2026, 28(1), 108; https://doi.org/10.3390/e28010108 - 16 Jan 2026
Viewed by 732
Abstract
This paper examines whether information-theoretic complexity measures enhance industry-group return forecasting and portfolio construction within a machine-learning framework. Using daily data for 25 U.S. GICS industry groups spanning more than three decades, we augment gradient-boosted decision tree models with Shannon entropy and fuzzy entropy computed from recent return dynamics. Models are estimated at weekly, monthly, and quarterly horizons using a strictly causal rolling-window design and translated into two economically interpretable allocation rules: a maximum-profit strategy and a minimum-risk strategy. Results show that the top-performing strategy, the weekly maximum-profit model augmented with Shannon entropy, achieves an accumulated return exceeding 30,000%, substantially outperforming both the baseline model and the fuzzy-entropy variant. On monthly and quarterly horizons, entropy and fuzzy entropy generate smaller but robust improvements by maintaining lower volatility and better downside protection. Industry allocations display stable and economically interpretable patterns: profit-oriented strategies concentrate primarily in cyclical and growth-sensitive industries such as semiconductors, automobiles, technology hardware, banks, and energy, while minimum-risk strategies consistently favor defensive industries including utilities, food, beverage and tobacco, real estate, and consumer staples. Overall, the results demonstrate that entropy-based complexity measures improve both economic performance and interpretability, yielding industry-rotation strategies that are simultaneously more profitable, more stable, and more transparent.
(This article belongs to the Special Issue Entropy, Artificial Intelligence and the Financial Markets)
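As a quick illustration of the kind of complexity feature the abstract above describes, the sketch below estimates the Shannon entropy of a rolling window of daily returns from a histogram. This is a minimal, hypothetical example: the binning scheme, the 60-day window length, and the `shannon_entropy` helper name are assumptions for illustration, not details taken from the article.

```python
import numpy as np

def shannon_entropy(returns, bins=10):
    """Shannon entropy (in bits) of a window of returns, estimated
    from a histogram. Higher values indicate a more uniform, less
    predictable return distribution."""
    counts, _ = np.histogram(returns, bins=bins)
    probs = counts / counts.sum()
    probs = probs[probs > 0]  # drop empty bins to avoid log(0)
    return float(-(probs * np.log2(probs)).sum())

# Example: entropy of the most recent 60-day window of simulated
# daily returns, usable as one feature at a rebalancing date.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=250)
window = daily_returns[-60:]
print(shannon_entropy(window))
```

In a rolling-window design like the one described, such a scalar would be recomputed at each date from past returns only and appended to the feature set fed to the gradient-boosted tree model.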
