Review

Security Threats and AI-Based Detection Techniques in IoT Chips

by
Hiba El Balbali
* and
Anas Abou El Kalam
LaRTID Laboratory, National School of Applied Sciences, Cadi Ayyad University, Marrakech 40000, Morocco
*
Author to whom correspondence should be addressed.
Submission received: 31 December 2025 / Revised: 18 February 2026 / Accepted: 27 February 2026 / Published: 4 March 2026
(This article belongs to the Special Issue Emerging Issues in Hardware and IC System Security)

Abstract

The rapid expansion of the Internet of Things (IoT) has exposed resource-limited devices to novel physical threats, such as Side-Channel Attacks (SCAs) and Hardware Trojans (HTs). Traditional security mechanisms are often incapable of withstanding such hardware-based attacks, particularly on low-power System-on-Chip (SoC) platforms, where static defenses can incur 2× to 3× overhead in silicon area and power. Herein, we formulate and discuss the gap between hardware security and embedded AI. We present a comprehensive survey of the current hardware threat landscape and analyze the emergence of “Secure-by-Design” paradigms, focusing specifically on the integration of Edge AI and TinyML as active, on-chip intrusion detection mechanisms. This review provides a critical analysis of the trade-offs of running lightweight ML models on hardware by comparing state-of-the-art approaches. Our analysis highlights that optimized architectures, such as Mamba-enhanced Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs), can achieve detection accuracies exceeding 99% against SCAs and above 92% against stealthy Hardware Trojans, while offering up to 75% lower power consumption than standard deep learning baselines. Finally, open challenges such as adversarial attacks on defense models are briefly discussed, with a focus on future directions toward constructing secure chips based on robust, AI-driven technology.

1. Introduction

The IoT’s rapid growth has radically transformed the technology landscape, extending connectivity from cloud infrastructures to billions of edge devices across industries such as smart healthcare, industrial automation, and smart agriculture [1,2]. As these ecosystems grow, the processing paradigm is shifting from centralized cloud computing to Edge Computing, where data is processed locally to lower latency and bandwidth consumption [3]. However, this widespread deployment puts resource-constrained IoT nodes, usually low-power SoCs or microcontrollers, at serious security risk. In contrast to secure data centers, IoT devices are frequently placed in hostile, physically accessible environments, which makes them prime targets for physical-layer attacks.
Although traditional software-oriented security solutions center on network intrusion detection and secure data transmission through cryptographic protocols, they tend to ignore hardware-level vulnerabilities. An aggressive attacker can exploit the hardware implementation of these cryptographic protocols to launch Side-Channel Attacks (SCAs), deriving secret keys from power dissipation or electromagnetic radiation [4]. Moreover, globalization of the semiconductor industry has introduced the threat of Hardware Trojan (HT) attacks, malicious components inserted at the circuit-design level [5]. Classical cryptographic solutions tend to impose overheads in power dissipation and silicon area on resource-constrained chips, demanding adaptive, lightweight security solutions [6].
In response to these growing challenges, the notion of Secure-by-Design is evolving toward active, intelligent defense strategies. The rise of Edge AI and TinyML is shifting intelligence onto the chip itself to identify anomalies or intrusions in real time [7], enabling innovations such as on-SoC detection of SCA patterns or Trojans without any dependence on the internet or the cloud.
The inclusion of AI-powered security solutions in hardware nevertheless faces a number of challenges, owing to the intricate balance between accuracy and the strict hardware constraints of IoT, including latency, power consumption, and silicon area [8]. Although a number of surveys have considered IoT security or TinyML, a comprehensive review combining these technologies with a special emphasis on hardware implementation is still lacking.
This review is intended to provide an overall summary of current research in the hardware security of IoT chips based on three interrelated aspects:
1. Hardware-level attacks, including Hardware Trojans and side-channel vulnerabilities;
2. AI-based detection techniques appropriate for on-chip and edge implementation;
3. Emerging Secure-by-Design architectures that embed security in the hardware foundation of IoT systems.
Through this survey, we identify major challenges, categorize existing solutions, and present further research directions for strengthening hardware security in this era of pervasive connectivity.
This review distinguishes itself by providing original, author-elaborated taxonomies and comparative frameworks rather than a simple enumeration of existing works. Specifically, we present a novel hierarchical classification of hardware threats and a critical synthesis of AI-based detection techniques, visualized through a new performance-vs-overhead trade-off analysis. These contributions aim to provide a structured and analytical roadmap for securing resource-constrained IoT chips.
The remainder of this paper is organized as follows: Section 2 provides a comprehensive taxonomy of the hardware threat landscape, framing vulnerability types into Side-Channel Attacks (SCAs), Hardware Trojans, and Fault Injection Attacks (FIAs), with a special emphasis on the impact of low-power designs. Section 3 explores the convergence of Edge AI and hardware security, analyzing specific Deep Learning architectures and TinyML implementation strategies for constructing real-time On-Chip Intrusion Detection Systems (OC-IDS). Finally, Section 4 discusses emerging Secure-by-Design architectures, identifies open challenges such as adversarial attacks, and outlines future research directions for next-generation IoT security.

2. The Hardware Threat Landscape in IoT

The proliferation of the Internet of Things has fundamentally shifted the security perimeter from centralized cloud infrastructure to billions of distributed, resource-constrained devices. Typically built around low-power SoCs and microcontrollers, these devices operate in physically accessible and very often hostile environments, placing them in a unique position of vulnerability to hardware-level attacks that simply bypass traditional software-based defenses. The root of the problem in securing these devices is the intrinsic trade-off between security and resource consumption. The necessity to keep power consumption and silicon area low, while maximizing energy efficiency, frequently makes designers turn to lightweight cryptographic primitives or security protocols that are simplified, thus inadvertently introducing physical layer vulnerabilities that can be exploited.
This section presents a comprehensive taxonomy of the hardware threat landscape, framing the vulnerability types into Side-Channel Attacks (SCAs), Hardware Trojans, and Fault Injection Attacks (FIAs), with a special emphasis on the effect of low-power designs on these vulnerabilities. Figure 1 illustrates our proposed hierarchical taxonomy, classifying the primary hardware threats into Side-Channel Analysis, Hardware Trojans, and Fault Injection.

2.1. Side-Channel Attacks

Side-Channel Attacks correlate physical phenomena emanating from a device with the intermediate values of the data being processed. In CMOS (Complementary Metal-Oxide-Semiconductor) SoCs, they target secret-key processing activities in cryptographic algorithms, whose unintended leakage propagates through power consumption, electromagnetic radiation, timing, and other channels.

2.1.1. Power Analysis Attacks

Power analysis is the most widely used vector against IoT devices because of the direct proportionality between dynamic power consumption and the switching activity of a digital circuit [9]. In CMOS technology, the total power consumption $P_{total}$ is the sum of the static leakage $P_{static}$ and the dynamic power $P_{dynamic}$. The dynamic component, which is exploitable by attackers, is expressed as [10]:
$$P_{dynamic} = \alpha C_L V_{dd}^2 f,$$
where $\alpha$ is the switching activity factor, $C_L$ is the load capacitance, $V_{dd}$ is the supply voltage, and $f$ is the clock frequency. Since $\alpha$ depends directly on the data being manipulated (0→1 or 1→0 transitions), variations in the current consumption $I_{dd}$ reveal information about the internal state of the processor [11].
  • Simple Power Analysis (SPA): SPA involves the visual analysis of a power consumption trace to deduce the flow of the program executed by the processor. For example, in a typical RSA (Rivest-Shamir-Adleman) implementation, a squaring operation consumes a noticeably different amount of energy than a multiplication; the private exponent can therefore be extracted from any implementation that is not constant-time.
  • Differential and Correlation Power Analysis (DPA/CPA): DPA and CPA are statistical attacks that can extract secret keys even from noisy measurements where SPA fails. CPA is the more robust variant: it applies the Pearson Correlation Coefficient to map hypothetical power consumption, derived from a leakage model such as Hamming Weight or Hamming Distance, onto the actual recorded traces. The correlation ρ is computed for every key guess over a set of N traces, and the correct sub-key is revealed at the point where the correlation is maximized [12].
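To make the statistical machinery concrete, the following Python sketch (an illustrative toy, not a production attack tool) implements the core CPA loop: for each key-byte guess, a hypothetical leakage vector is built from a simplified Hamming-weight model of the intermediate value pt XOR k, and its Pearson correlation against every sample point of the recorded traces is computed. The function names and the simplified leakage model are our own assumptions for illustration.

```python
import numpy as np

def hamming_weight(vals):
    # Number of set bits in each byte value
    bits = np.unpackbits(np.asarray(vals, dtype=np.uint8)[:, None], axis=1)
    return bits.sum(axis=1)

def cpa_recover_key_byte(traces, plaintexts):
    """Rank key-byte guesses by maximum absolute Pearson correlation.

    traces:     (N, T) float array, N power traces of T samples each
    plaintexts: (N,) byte array of known plaintexts
    Leakage hypothesis (toy model): HW(plaintext XOR key_guess).
    """
    best_guess, best_rho = 0, -1.0
    tc = traces - traces.mean(axis=0)           # center traces per sample point
    t_norm = np.sqrt((tc * tc).sum(axis=0))     # per-sample L2 norms
    for guess in range(256):
        hyp = hamming_weight(np.asarray(plaintexts) ^ guess).astype(np.float64)
        hc = hyp - hyp.mean()
        denom = np.sqrt(hc @ hc) * t_norm
        # |Pearson correlation| of the hypothesis against every time sample
        rho = np.abs(hc @ tc) / np.where(denom == 0.0, 1.0, denom)
        if rho.max() > best_rho:
            best_rho, best_guess = rho.max(), guess
    return best_guess, best_rho
```

In a real attack the hypothesis would target a non-linear intermediate (e.g., the AES S-box output) so that wrong guesses decorrelate sharply; the linear XOR model above is kept only to keep the sketch short.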

2.1.2. Electromagnetic (EM) Analysis

Whereas power analysis requires physical access to the power or ground lines of the chip, Electromagnetic Analysis is semi-invasive or non-invasive. Current variations in the metal wires of an SoC generate magnetic fields, as described by Maxwell's equations, and sensitive near-field probes can detect these emissions. The threat posed by electromagnetic analysis on advanced SoCs is that it enables attackers to localize leakage to particular crypto co-processors or logic areas on the chip.
The fundamental advantage of electromagnetic analysis over traditional Power Analysis lies in its high spatial resolution. In complex, multi-core IoT architectures, the total power consumption is an aggregate of the current drawn by the CPU, peripherals, radio modules, and memory interfaces. This aggregation creates significant “algorithmic noise” or “global noise” that can mask the minute leakage of the cryptographic operation. Electromagnetic analysis circumvents this limitation by enabling Side-Channel Cartography: attackers can physically scan the chip surface to locate the specific coordinates of the cryptographic co-processor (e.g., an AES (Advanced Encryption Standard) engine or an ECC (Elliptic Curve Cryptography) accelerator). By positioning the probe exclusively over this “hotspot,” the Signal-to-Noise Ratio (SNR) is drastically improved, effectively isolating the leakage of interest from unrelated background activity [13].
Nevertheless, recent breakthroughs have amplified this risk even further. While traditional electromagnetic analysis required costly oscilloscopes and motorized probing tables, Software Defined Radios (SDRs) are increasingly employed to record EM leakage from IoT components at a fraction of the cost. Moreover, with the advent of Deep Learning-based Side-Channel Analysis (DLSCA), CNNs have proven effective at key recovery from electromagnetic emanations even in the presence of misalignment and jitter, which were previously considered effective countermeasures [14]. This synergy between high-precision probes and AI-driven signal processing makes EM analysis a critical threat vector for modern, nanometer-scale SoCs where component density is high.

2.1.3. Micro-Architectural Timing Attacks

Timing attacks rely on data-dependent variations in the execution time, fundamentally exploiting the lack of constant-time execution in modern computing architectures. Though classical timing attacks focused on variable-time cryptographic primitives, contemporary micro-architectural side-channel attacks rely on the optimization capabilities of the processor, such as the presence of the cache memory or the branch predictor [15]. In these scenarios, an attacker triggers a cryptographic operation and measures the execution time to determine whether a specific memory line was accessed (Cache Hit) or fetched from main memory (Cache Miss), thereby leaking information about the secret key index.
For more complex IoT gateways, such as those built on Cortex-A or RISC-V application processors, attacks like Spectre and Meltdown exploit speculative execution to access privileged memory locations. Even low-end Cortex-M microcontrollers, however, are susceptible to cache-based timing attacks if lookup table accesses depend on the secret key [15]. In addition to cache- and speculation-based timing channels, timing leakage may also arise from pipeline hazards, instruction-level parallelism, and memory access contention, which introduce subtle yet exploitable execution time variations. Recent research highlights that in multi-core IoT SoCs, contention on the shared system bus or interconnect allows a malicious process on one core to monitor the memory access patterns of a secure process on another core [4].
Furthermore, the increasing heterogeneity of IoT platforms, combining general-purpose cores with hardware accelerators, amplifies the attack surface by introducing new micro-architectural states observable through timing measurements. Specific studies on RISC-V architectures have shown that utilizing custom vector extensions or multi-threading without strict isolation can leak cycle-accurate timing information to concurrent threads [16]. As a result, timing attacks remain a persistent and evolving threat across both high-performance IoT gateways and resource-constrained embedded devices, reinforcing the need for constant-time implementations and hardware-aware security countermeasures.
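To illustrate the basic principle behind data-dependent timing, the following Python sketch (a didactic toy, not tied to any specific architecture or library) contrasts an early-exit comparison, whose iteration count leaks how many leading bytes of a guess are correct, with a constant-time variant that always inspects every byte:

```python
def leaky_compare(secret, guess):
    """Early-exit comparison: the iteration count (a timing proxy)
    grows with the number of correct leading bytes in the guess."""
    ops = 0
    for s, g in zip(secret, guess):
        ops += 1
        if s != g:
            return False, ops   # bails out at the first mismatch
    return True, ops

def constant_time_compare(secret, guess):
    """Accumulate differences with XOR/OR so every byte is always
    examined, removing the data-dependent timing channel."""
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0
```

Counting loop iterations stands in for wall-clock time; on real hardware an attacker would observe the same structure through cycle counters or cache behavior. Against the leaky variant, a secret can be recovered byte-by-byte by picking, at each position, the guess that maximizes the observed "time," which is exactly why cryptographic libraries mandate constant-time comparisons.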

2.2. Hardware Trojans (HTs)

The globalization and horizontal specialization of the semiconductor supply chain have created a drastic shift in the security paradigm: the HT threat. Historically, chip design, fabrication, and packaging were carried out in trusted, vertically integrated environments. Today, financial pressure forces most Internet of Things companies to adopt a fabless model, in which Intellectual Property (IP) cores are purchased from third-party vendors and the physical implementation is carried out at offshore fabs. This fragmentation leaves multiple points of entry for tampering with the Integrated Circuit, undermining the hardware root of trust.
A Hardware Trojan is fundamentally defined by its stealthy nature; it remains dormant during standard testing procedures and activates only under rare conditions. Structurally, an HT is made of two distinct mechanisms, as illustrated in our proposed architectural model in Figure 2: the Trigger and the Payload.
The Trigger: It is the activation logic that monitors internal signals for a specific condition.
  • Combinational Triggers: The trigger fires when a specific combination of logic values appears on internal nodes, for example, when a particular 128-bit value appears on the data bus. The probability of the trigger condition being satisfied is extremely low ($P_{trigger} < 10^{-20}$), and hence it is unlikely to occur during verification or random test vectors [17].
  • Sequential Triggers: “Time-bombs” or state machines that fire after a sequence of events, such as reaching a preset threshold (for example, after thousands of hours of operation) [18]. Sequential triggers are particularly dangerous in an IoT context because they allow an attacker to coordinate simultaneous failures across many deployed devices.
The Payload: It defines the malicious activity that is executed once the trigger condition is met. The effects of exploits can be divided into three classes:
  • Denial of Service (DoS): The payload could turn off an essential clock tree, reset the processor, or blow a non-volatile fuse, effectively destroying the chip (“Kill Switch”) [19].
  • Information Leakage: The Trojan leaks critical resources such as AES keys using covert channels. These may be physical, such as using power or emanations to escape software firewalls, or logical, such as bits embedded in unused packet headers [20].
  • Functional Modification: The Trojan makes a slight change to the results of the computational processes, like reversing a bit within the Random Number Generator (RNG) that reduces entropy, resulting in predictable cryptographic keys.

2.3. Physical and Fault Injection Attacks

Aside from passive side-channel monitoring and deliberately implanted Trojans, another threatening aspect of IoT SoC security is active physical attack. Active physical attacks intentionally alter the operating conditions of a chip to make it fail or behave erroneously; this is commonly referred to as a Fault Injection Attack (FIA). FIAs are particularly damaging to cryptographic implementations because a single, carefully placed fault can corrupt an intermediate calculation so that the secret key is disclosed via Differential Fault Analysis (DFA) [21]. Common FIA techniques include:
  • Power Glitching: Briefly lowering or raising the supply voltage, causing the processor to skip or incorrectly execute instructions; commonly used to bypass security checks or sabotage key-dependent operations [22].
  • Clock Glitching: Adding a brief, unexpected pulse to the clock signal. Like power glitching, it can upset the sequential activity and timing of the IC [23].
  • Optical Fault Injection: This method involves the use of focused light (such as a laser) to introduce charge carriers into the silicon substrate to disturb the state of the transistor. It involves decapsulation but allows for high control over the fault.
The taxonomy in Table 1 is hardware-centric and, therefore, covers the major physical attacks targeting IoT chips discussed above, including physical mechanisms exploited by those attacks, attacker capabilities required to mount them, and their consequences for the system’s confidentiality, integrity, and availability.
The attack surface on the hardware side is intricate and multi-level, ranging from the supply chain level (HTs) to the operating environment level (SCA and FIAs). The integration of these attack surfaces demands a new paradigm of proactive, real-time, and on-chip security monitoring, which is discussed in the following section.

3. Edge AI and TinyML for On-Chip Security

The limitations of static hardware defenses, such as shielding, dual-rail logic, and randomized clocking, lie in their significant overhead: they can incur 2× to 3× redundancy in silicon area and power. Moreover, these protection mechanisms are static. Once an attacker devises a new method, for example an alternative signal-processing algorithm for SCA, they cannot adjust or change accordingly. This requires a shift towards “Dynamic Secure-by-Design” architectures, in which the system itself observes its own physical security.
The convergence of Edge AI and the Tiny Machine Learning (TinyML) movement offers the technological foundation upon which such a shift is feasible. By moving intelligence from the cloud to the extreme edge, such as the sensor or SoC itself, it becomes feasible to create an On-Chip Intrusion Detection System (OC-IDS) that detects complex patterns of attack in real time.

3.1. Deep Learning Architectures for On-Chip Security

The application of AI to SCA detection and analysis has transformed the field. Machine learning serves two main roles in this setting: offense, where the attacker profiles the victim device to harvest the secret key, and defense, where an attack trace is detected in real time [24].
Our focus here is on the defensive application, where detection of an ongoing Side Channel Attack is performed through examination of physical traces of side channels that occur during cryptographic operations.
Commonly used ML algorithms include [8,25,26]:
– Convolutional Neural Networks (1D-CNN): Originally designed for image processing, CNNs are very effective for analyzing 1D time-series data, such as power or EM signals. The convolutional layers act as feature detectors, finding patterns, such as a trigger pattern, independent of their position. This translation invariance makes 1D-CNNs robust to “jittering,” a desynchronization technique commonly deployed as a side-channel countermeasure [27].
A convolution operation for the $l^{th}$ layer, given an input power trace $X$, is expressed as [28]:
$$y_i^l = \sigma\left( \sum_{k=0}^{K-1} w_k \cdot x_{i+k}^{l-1} + b \right),$$
where $w_k$ represents the kernel weight, $b$ the bias, and $\sigma$ the activation function. The network's capacity to learn non-linear decision boundaries enables it to differentiate between the power profile of a secure AES operation and that of a compromised one.
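The convolution above can be reproduced in a few lines of NumPy; the sketch below (illustrative only: single channel, valid padding, stride 1, with a function name of our own choosing) applies one kernel across a trace exactly as the equation prescribes:

```python
import numpy as np

def conv1d(x, w, b, activation=np.tanh):
    """One 1D convolution layer: y_i = activation(sum_k w_k * x_{i+k} + b).

    x: (T,) input trace, w: (K,) kernel, b: scalar bias.
    Valid padding and stride 1, so the output has T - K + 1 samples.
    """
    K = len(w)
    # Slide the kernel over the trace, one dot product per output sample
    out = np.array([w @ x[i:i + K] + b for i in range(len(x) - K + 1)])
    return activation(out)
```

For instance, a kernel shaped like a known trigger waveform produces a strong response wherever that pattern occurs in the trace, regardless of its position; this shift invariance is precisely what defeats jitter-based desynchronization.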
– Recurrent Neural Networks (RNNs) and LSTMs: While CNNs excel at extracting spatial features from static snapshots of data, Recurrent Neural Networks, and specifically Long Short-Term Memory (LSTM) networks, are better suited to sequentially collected data. In the context of hardware security this is a highly desirable property, since physical leakage and Trojan events are essentially time-series events. Traditional RNNs, however, suffer from the vanishing-gradient problem, which prevents them from establishing long-term correlations. LSTMs overcome this issue through their gating mechanism, consisting of input, forget, and output gates, which controls the flow of information even across thousands of clock cycles.
This temporal memory makes LSTMs well suited to detecting Sequential Hardware Trojans or advanced Side-Channel Attacks in which the leakage does not occur immediately but accumulates over the course of several operations [29]; for instance, a Trojan might activate only when certain non-contiguous instructions are executed.
– Autoencoders (AEs): In the case of zero-day attacks or stealthy Hardware Trojans, no labeled attack data may exist; a model cannot be trained for an attack it has never seen. In this situation, unsupervised machine learning, and in particular Autoencoders (AEs), becomes the method of choice [30].
An Autoencoder is trained only on benign data: it learns to compress the input into a latent space and reconstruct it. At inference time, normal data yields a low reconstruction error, whereas an attack in the input is poorly reproduced and yields a large reconstruction error.
The reconstruction loss $L$ is measured using the Mean Squared Error (MSE) [31]:
$$L = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - \hat{x}_i \right)^2,$$
If $L > \tau$ (a predefined threshold), the system raises an intrusion alarm. This method is particularly common for detecting Hardware Trojans that run infrequently, because their activation necessarily leaves a power/thermal trace that the AE has not learned to reconstruct [32].
In Table 2, we categorize TinyML models according to their suitability against specific hardware attacks. The table highlights the critical trade-offs between the type of physical input data required and the resulting implementation cost in terms of silicon area and power consumption. This taxonomy shows that there is no universal model: an algorithm must be selected according to the attack vector and the resources available on the IoT node.
While not all listed models are natively TinyML, several can be adapted to TinyML constraints through model compression, quantization, and lightweight architectural variants, enabling on-chip deployment under strict power and memory budgets.
To quantify the effectiveness of these AI-driven approaches, Table 3 compares the performance of recent state-of-the-art implementations. It highlights the achievable accuracy and hardware efficiency of various TinyML models against specific physical threats.
As synthesized in our comparative analysis, in Table 3, the landscape of hardware security has shifted from heuristic methods to data-driven precision. Our analysis reveals three critical trends defining the state-of-the-art. First, in the domain of Side-Channel Analysis, supervised architectures, specifically 1D-CNNs and MLPs, have reached a level of maturity where they effectively neutralize traditional countermeasures. As shown in the table, these models maintain detection accuracies exceeding 99% against variable-clock implementations (jitter) and can prevent key recovery with fewer than 3000 traces, outperforming classical template attacks.
Second, for Hardware Trojan detection, our compilation highlights a decisive transition from supervised to unsupervised learning (e.g., Autoencoders, GNNs). This shift is strategic; by modeling the “Golden” behavior of the chip rather than specific triggers, these approaches achieve detection rates of >92% against “unknown” or zero-day Trojans without requiring exhaustive labeled datasets.
Finally, and perhaps most significantly, the implementation metrics we collated confirm the viability of the TinyML paradigm, as we visually represented in Figure 3. Unlike earlier iterations that required off-chip processing, recent lightweight models demonstrate negligible logic utilization and power overheads often below 1%. Collectively, these empirical results confirm that TinyML is not merely a theoretical concept but a high-performance, resource-efficient solution for real-time chip protection, paving the way for the hardware-aware implementation strategies discussed next.

Critical Analysis and Comparative Insights

While the metrics synthesized in Table 3 demonstrate the potential of TinyML, a deeper cross-comparison reveals significant architectural trade-offs that are often understated in individual studies.
The Accuracy–Latency Trade-off:
The highest detection rates are achieved by Deep Learning models (CNNs), but our analysis shows that their inference latency is non-negligible. While effective for post-processing traces, the more complex models may incur delays that are unacceptable for real-time blocking of instruction-level Trojans. The state of the art therefore diverges: high-security applications favor deep models for their resilience to jitter attacks, whereas ultra-low-power nodes increasingly adopt optimized MLPs or hybrid models to guarantee sub-microsecond response times.
The Generalization Challenge:
Furthermore, although the shift to unsupervised learning addresses the problem of unknown attacks, it raises another issue: the False Positive Rate (FPR) of the classifier. Unlike a traditional supervised classifier, which has a well-defined decision boundary trained on known attacks, an autoencoder classifier based on reconstruction error can be affected by process variations. Recent works [14,32] point out that to achieve the detection rates above 92% reported in Table 3, the thresholds must be carefully calibrated, often requiring a “Golden Chip” reference, which may not be available in a mass-production environment.
The Simulation-to-Silicon Gap:
Finally, a critical analysis of the implementation methodologies shows a mismatch between the simulation results and silicon-proven metrics. A major part of the reviewed literature confirms overheads based on the synthesized netlist instead of physical measurements. Comparing these to the FPGA-validated implementations in [35,37] suggests that the “negligible” power overhead (<1%) can only be achieved by using aggressive quantization and pruning. This highlights that future research needs to focus on “Hardware-in-the-Loop” validation to fill the gap between theoretical algorithmic complexity and the physical IoT environment.

3.2. TinyML Implementation Strategies for On-Chip Security

Although designing an appropriate neural network architecture is essential, deploying such networks on resource-constrained IoT silicon poses a different problem altogether. TinyML has evolved to bridge the gap between complex machine learning models and the strict power, area, and latency budgets of microcontrollers and low-power SoCs.

3.2.1. Model Compression and Optimization

Standard deep learning models are floating-point-intensive and require megabytes of parameters; hence, they are not suitable for Cortex-M class devices with limited Flash and SRAM. In the post-training phase, certain optimization techniques need to be applied in order to enable OC-IDS.
Several compression techniques are employed:
  • Quantization: This involves reducing the precision of the model’s weights and activations from standard 32-bit floating-point to lower bit-widths, such as 8-bit integers or even binary. Quantization significantly reduces memory footprint and allows for faster, more energy-efficient inference using integer arithmetic units, which are common in low-power microcontrollers [38].
  • Pruning: This technique removes redundant weights or connections from the neural network, effectively reducing the number of operations required for inference. Structured pruning, which removes entire channels or layers, is preferred for hardware implementation as it results in a more regular, hardware-friendly architecture [39].
  • Knowledge Distillation: A smaller, “student” model is trained to mimic the output of a larger, more complex “teacher” model. This allows the deployment of a highly compact model that retains much of the accuracy of the original, resource-intensive model [40].
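As a minimal illustration of the first of these techniques, the sketch below performs symmetric post-training quantization of a float32 weight tensor to int8, with the scale chosen so that the largest weight magnitude maps to ±127. The function names are ours and are not taken from any specific TinyML framework:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a float32 weight tensor
    to int8; returns the quantized weights and the scale needed to
    dequantize them (w ≈ q * scale)."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float weights
    return q.astype(np.float32) * scale
```

Storage drops 4× (int8 vs. float32), inference can run on the integer MAC units common in low-power microcontrollers, and the per-weight dequantization error is bounded by half the scale step.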

3.2.2. Hardware-Aware Deployment Architectures

The physical placement of the TinyML inference engine determines the security system’s response time and robustness. Three primary architectural paradigms dominate the current state-of-the-art.
  • Software-Based Execution (In-Core): In this architecture, the TinyML model runs as a background task on the main application processor, using ARM’s CMSIS-NN library, for example. One of its main advantages is Zero hardware overhead, as existing devices can be patched with AI security via firmware updates. However, the security task competes with the user application for CPU cycles, potentially introducing latency. Furthermore, if the OS is compromised by a logical attack, the security monitor itself may be disabled.
  • Co-Processor/Accelerator (NPU): Modern IoT SoCs increasingly integrate dedicated Neural Processing Units (NPUs) or DSPs. Offloading the intrusion detection model to an NPU decouples security from the main application logic. An NPU can monitor the power rail continuously without waking the main CPU. Studies have shown that dedicated RISC-V-based accelerators with custom vector extensions can execute SCA detection models faster than software implementations, enabling cycle-accurate detection of anomalies [41].
  • Embedded FPGA (eFPGA) Overlay: For critical infrastructure, heterogeneous SoCs containing eFPGA fabrics offer the highest performance. The neural network is synthesized directly into logic gates. This is particularly effective for identifying Hardware Trojans at runtime, where the detection logic must operate at the same clock speed as the malicious trigger to prevent the payload from executing [42].
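To make the in-core option concrete, the sketch below mimics the arithmetic of an integer-only dense layer as used by fixed-point inference kernels (the weights, shift values, and window size are hypothetical; real firmware would call vendor kernels such as CMSIS-NN rather than NumPy):

```python
import numpy as np

def int8_dense(x_q, w_q, bias_q, shift):
    """int8 x int8 -> int32 accumulate, then shift-based requantization,
    mirroring how fixed-point kernels avoid floating-point entirely."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32) + bias_q
    return np.clip(acc >> shift, -128, 127).astype(np.int8)

rng = np.random.default_rng(42)
# Hypothetical tiny MLP: 32 power samples -> 8 hidden units -> 1 anomaly score
w1 = rng.integers(-64, 64, size=(32, 8), dtype=np.int8)
w2 = rng.integers(-64, 64, size=(8, 1), dtype=np.int8)

def anomaly_score(window_q):
    h = int8_dense(window_q, w1, np.int32(0), shift=7)
    return int(int8_dense(h, w2, np.int32(0), shift=7)[0])

window = rng.integers(-128, 128, size=32, dtype=np.int8)  # one sampled window
score = anomaly_score(window)  # stays within the int8 range by construction
```

The same arithmetic maps directly onto an NPU or eFPGA fabric; only the placement of the multiply-accumulate units changes across the three paradigms.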

4. Secure-by-Design Architectures and Future Directions

The transition from static, reactive hardware protection to dynamic, AI-driven defense mechanisms marks a pivotal evolution in IoT security. However, this paradigm shift introduces new architectural requirements and research challenges that must be addressed to realize the vision of fully autonomous “Secure-by-Design” systems.
Future IoT SoCs can no longer treat security as an afterthought or a peripheral add-on; instead, the Root of Trust (RoT) must be intrinsic to the micro-architecture itself. This necessitates the adoption of open-source hardware instruction set architectures (ISAs), such as RISC-V, which offer unprecedented transparency. Unlike proprietary cores, where the RTL (Register Transfer Level) logic is a “black box,” open-source cores allow designers to formally verify the hardware for hidden Trojans before fabrication. Recent research on RISC-V multicore processors reinforces this need, demonstrating that open architectures facilitate the implementation of isolated execution environments, which are essential for preventing the micro-architectural timing leakages discussed in Section 2 [15,41].
Furthermore, the integration of Embedded FPGA (eFPGA) fabrics into standard SoC dies is emerging as a critical enabler for patchable hardware. In this heterogeneous architecture, if a security vulnerability or a hardware bug is discovered post-deployment, the eFPGA logic can be reconfigured over-the-air to instantiate a new hardware monitor or a patched crypto-engine, extending the device’s lifecycle in the field. As highlighted in [42], such reconfigurable hardware is particularly effective for identifying Hardware Trojans at runtime, allowing the detection logic to operate at the same clock speed as the malicious trigger to prevent the payload from executing.
Despite the promise of AI-driven security, the reliance on machine learning creates a new attack surface: the neural network itself. A major open challenge is the susceptibility of TinyML models to Adversarial Attacks. Research has shown that imperceptible perturbations added to a power trace can fool a CNN into classifying a malicious DPA attack as benign behavior. For instance, the authors in [20] demonstrated that attackers can use Reinforcement Learning to generate adversarial triggers that evade static hardware Trojan detection, necessitating “Adversarial Training” techniques to harden On-Chip IDSs against such manipulation.
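The vulnerability can be illustrated on a deliberately simple linear detector (a toy sketch of the gradient-sign perturbation idea; the actual attacks in [20] target deep models using Reinforcement Learning):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)   # weights of a toy linear "malicious trace" detector
x = w * 0.2 / (w @ w)      # a trace the detector scores at exactly +0.2 (flagged)

def is_flagged(trace):
    return float(w @ trace) > 0.0

# For a linear score w.x, the gradient with respect to the input is w itself,
# so the classic FGSM step is a small move against sign(w).
eps = 0.005
x_adv = x - eps * np.sign(w)

flagged_before = is_flagged(x)     # the raw attack trace is detected
flagged_after = is_flagged(x_adv)  # the small perturbation drives the score negative
```

Adversarial training counters this by including such perturbed traces, correctly labeled, in the training set, at the cost of extra training time and often a small drop in clean accuracy.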
Additionally, the field faces a significant Data Scarcity problem. Hardware Trojans and successful fault injections are rare, “black swan” events, leading to highly imbalanced datasets. As discussed in previous works on AI-based IDS [1,2], the quality and balance of training data are determinant factors for detection accuracy. Consequently, the community must move towards utilizing Generative Adversarial Networks (GANs) to synthesize realistic attack traces, thereby augmenting the limited datasets available for rare hardware anomalies.
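Even before resorting to full GANs, the imbalance can be reduced with cheap trace-level augmentation; the sketch below oversamples a rare Trojan class using random time-shifts and Gaussian noise (all shapes, counts, and noise parameters are illustrative, not taken from the cited works):

```python
import numpy as np

rng = np.random.default_rng(7)

def augment_traces(rare_traces, target_count, noise_std=0.02, max_shift=5):
    """Oversample a rare class by jittering amplitude and time alignment."""
    out = []
    while len(out) < target_count:
        base = rare_traces[rng.integers(len(rare_traces))]
        shifted = np.roll(base, rng.integers(-max_shift, max_shift + 1))
        out.append(shifted + rng.normal(0, noise_std, size=base.shape))
    return np.stack(out)

benign = rng.normal(0.0, 1.0, size=(1000, 128))  # abundant benign traces
trojan = rng.normal(0.5, 1.0, size=(8, 128))     # rare "black swan" events

balanced_trojan = augment_traces(trojan, target_count=1000)
print(balanced_trojan.shape)  # (1000, 128): classes are now balanced
```

A GAN generator would replace `augment_traces`, learning the trace distribution instead of applying fixed jitter, which matters when the rare events differ from benign behavior in subtle, structured ways.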
Finally, as we approach the era of Post-Quantum Cryptography (PQC), the resource trade-offs discussed in this review will become even more acute. PQC algorithms require significantly larger keys and more memory than current ECC or RSA standards. Implementing AI-based monitoring alongside heavy PQC engines on a battery-powered sensor will require breakthroughs in ultra-low-power computing, likely driving the adoption of neuromorphic processors (Spiking Neural Networks) that compute asynchronously. In conclusion, the secure IoT chip of the future will be a heterogeneous, self-aware entity, combining transparent RISC-V cores, reconfigurable logic for resilience, and neuromorphic AI for real-time immunity, capable of evolving to meet threats that were unknown at the time of its manufacturing.

5. Conclusions

This review has underscored the critical vulnerability of resource-constrained IoT nodes in an era defined by a fragmented, globalized semiconductor supply chain. As analyzed in Section 2, the threat landscape has expanded beyond traditional software exploits to include sophisticated physical attacks, ranging from Side-Channel Analysis and Fault Injection to stealthy Hardware Trojans that target the very silicon root of trust. We have demonstrated that traditional, static countermeasures, such as shielding or redundancy, are often prohibitively expensive, imposing silicon area and power penalties typically exceeding 200%, rendering them insufficient for low-end IoT devices.
The convergence of Hardware Security and Edge AI, specifically through the TinyML paradigm, represents a transformative solution to these challenges. By embedding intelligence directly onto the SoC, it becomes possible to transition from passive protection to active, real-time intrusion detection. Our comparative analysis confirms that On-Chip Intrusion Detection Systems have matured significantly; active deep learning models now demonstrate success rates of over 98% in neutralizing Side-Channel Attacks and are capable of preventing key recovery with fewer than 3000 traces. Furthermore, unsupervised learning approaches show promise for detecting unknown Hardware Trojans with accuracies averaging 93%. However, the viability of these systems relies heavily on rigorous optimization, through quantization and the use of specialized accelerators, to meet the strict latency and energy budgets of the edge.
Ultimately, the future of IoT security lies in a Secure-by-Design philosophy where security is not an add-on, but an architectural imperative. The next generation of smart chips must be self-aware, capable of monitoring their own physical integrity, and resilient enough to adapt to threats that were unknown at the time of fabrication. Achieving this will require sustained collaboration across the stack, from open-source RISC-V hardware verification to robust, adversarial-resistant AI models, ensuring that the massive deployment of the Internet of Things does not compromise the safety of critical digital infrastructure.

Author Contributions

Conceptualization, H.E.B.; methodology, H.E.B.; writing—original draft preparation, H.E.B.; writing—review and editing, H.E.B. and A.A.E.K.; visualization, H.E.B.; supervision, A.A.E.K.; project administration, H.E.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. El Balbali, H.; Abou El Kalam, A. Towards Robust IoT Security: The Impact of Data Quality and Imbalanced Data on AI-Based IDS. Int. J. Adv. Comput. Sci. Appl. 2025, 16, 851–865.
  2. El Balbali, H.; Abou El Kalam, A. AI-Driven Big Data Quality Improvement for Efficient Threat Detection in Agricultural IoT Systems. In Proceedings of the International Conference on Advanced Intelligent Systems for Sustainable Development; Springer: Berlin/Heidelberg, Germany, 2023.
  3. Lu, S.; Shi, W. Vehicle Computing: Vision and challenges. J. Inf. Intell. 2023, 1, 23–35.
  4. Yuan, J.; Zhang, J.; Qiu, P.; Wei, X.; Liu, D. A Survey of Side-Channel Attacks and Mitigation for Processor Interconnects. Appl. Sci. 2024, 14, 6699.
  5. Kuang, S.; Quan, Z.; Xie, G.; Cai, X.; Chen, X.; Li, K. NtNDet: Hardware Trojan detection based on pre-trained language models. Expert Syst. Appl. 2025, 271, 126666.
  6. Biryukov, A.; Perrin, L. State of the Art in Lightweight Symmetric Cryptography; IACR: Beijing, China, 2017.
  7. Ray, P.P. A review on TinyML: State-of-the-art and prospects. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 1595–1623.
  8. Capogrosso, L.; Cunico, F.; Cheng, O.S.; Fummi, F.; Cristani, M. A Machine Learning-Oriented Survey on Tiny Machine Learning. IEEE Access 2024, 12, 23406–23426.
  9. Liptak, C.; Mal-Sarkar, S.; Kumar, S.A.P. Power Analysis Side Channel Attacks and Countermeasures for the Internet of Things. In Proceedings of the 2022 IEEE Physical Assurance and Inspection of Electronics (PAINE); IEEE: Piscataway, NJ, USA, 2022.
  10. Crowe, J.; Hayes-Gill, B. Choosing a means of implementation. In Introduction to Digital Electronics; Elsevier: Amsterdam, The Netherlands, 1998.
  11. Ali, A.; Becher, A.; Ziener, D. Backing the Wrong Horse: How Bit-Level Netlist Augmentation can Counter Power Side Channel Attacks. arXiv 2025, arXiv:2510.04640.
  12. Differential Power Analysis. In Power Analysis Attacks; Springer: Berlin/Heidelberg, Germany, 2007.
  13. Chen, Y.; Yu, J.; Kong, L.; Zhu, Y. A Comprehensive Survey of Side-Channel Sound-Sensing Methods. IEEE Internet Things J. 2025, 12, 1554–1578.
  14. Rezaeezade, A.; Basurto-Becerra, A.; Weissbart, L.; Perin, G. One for All, All for Ascon: Ensemble-based Deep Learning Side-channel Analysis. In Proceedings of the International Conference on Applied Cryptography and Network Security; Springer: Berlin/Heidelberg, Germany, 2024.
  15. Zhang, J.; Chen, C.; Cui, J.; Li, K. Timing Side-channel Attacks and Countermeasures in CPU Microarchitectures. ACM Comput. Surv. 2024, 56, 1–40.
  16. Zulberti, L.; Nannipieri, P.; Fanucci, L. A Script-Based Cycle-True Verification Framework to Speed-Up Hardware and Software Co-Design of System-on-Chip exploiting RISC-V Architecture. In Proceedings of the 2021 16th International Conference on Design & Technology of Integrated Systems in Nanoscale Era (DTIS); IEEE: Piscataway, NJ, USA, 2021.
  17. Lee, D.; Lee, J.; Jung, Y.; Kauh, J.; Song, T. Robust Hardware Trojan Detection Method by Unsupervised Learning of Electromagnetic Signals. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2024, 32, 2327–2340.
  18. Wang, J.; Hassan, G.M.; Akhtar, N. A Survey of Neural Trojan Attacks and Defenses in Deep Learning. arXiv 2022, arXiv:2202.07183.
  19. Dhavlle, A.; Hassan, R.; Mittapalli, M.; Dinakarrao, S.M.P. Design of Hardware Trojans and its Impact on CPS Systems: A Comprehensive Survey. In Proceedings of the 2021 IEEE International Symposium on Circuits and Systems; IEEE: Piscataway, NJ, USA, 2022.
  20. Gohil, V.; Guo, H.; Patnaik, S.; Rajendran, J. ATTRITION: Attacking Static Hardware Trojan Detection Techniques Using Reinforcement Learning. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security; ACM: New York, NY, USA, 2022.
  21. Ghosal, A.K.; Sardar, A.; Chowdhury, D.R. Differential fault analysis attack-tolerant hardware implementation of AES. J. Supercomput. 2024, 80, 4648–4681.
  22. Shuvo, A.M.; Zhang, T.; Farahmandi, F.; Tehranipoor, M. A Comprehensive Survey on Non-Invasive Fault Injection Attacks. 2023. Available online: https://ia.cr/2023/1769 (accessed on 26 February 2026).
  23. Breier, J.; Hou, X. How Practical Are Fault Injection Attacks, Really? IEEE Access 2022, 10, 113122–113130.
  24. Alabdulwahab, S.; Cheong, M.; Seo, A.; Kim, Y.-T.; Son, Y. Enhancing deep learning-based side-channel analysis using feature engineering in a fully simulated IoT system. Expert Syst. Appl. 2025, 266, 126079.
  25. Abdollahi, M.; Chegini, M.; Hasanzadeh, M.; Hesar, S.; Patooghy, J.A.; Baniasadi, A. NoCSNet: Network-on-Chip Security Assessment Under Thermal Attacks Using Deep Neural Network. In Proceedings of the 2024 17th IEEE/ACM International Workshop on Network on Chip Architectures (NoCArc); IEEE: Piscataway, NJ, USA, 2024.
  26. Gourousis, T.; Zhang, Z.; Yan, M.; Zhang, M.; Mittal, A.; Shrivastava, A. Identification of Stealthy Hardware Trojans through On-Chip Temperature Sensing and an Autoencoder-Based Machine Learning Algorithm. In Proceedings of the IEEE International Midwest Symposium on Circuits and Systems (MWSCAS); IEEE: Piscataway, NJ, USA, 2023.
  27. Ahmed, A.A.; Islam, S.; Aman, A.H.M.; Safie, N. Design of Convolutional Neural Networks Architecture for Non-Profiled Side-Channel Attack Detection. Telecommun. Eng. 2023, 29, 76–81.
  28. ScienceDirect Convolution Formula. 2006. Available online: https://www.sciencedirect.com/topics/computer-science/convolution-formula (accessed on 26 February 2026).
  29. Dofe, J.; Danesh, W.; More, V.; Chaudhari, A. Natural Language Processing for Hardware Security: Case of Hardware Trojan Detection in FPGAs. Cryptography 2024, 8, 36.
  30. Berahmand, K.; Daneshfar, F.; Salehi, E.S.; Li, Y.; Xu, Y. Autoencoders and their applications in machine learning: A survey. Artif. Intell. Rev. 2024, 57, 28.
  31. Apxml Reconstruction Loss Functions. Available online: https://apxml.com/courses/autoencoders-representation-learning/chapter-2-classic-autoencoder-architecture/reconstruction-loss-functions (accessed on 26 February 2026).
  32. Michelucci, U. An Introduction to Autoencoders. arXiv 2022, arXiv:2201.03898.
  33. Li, Z.; Du, C.; Duan, X. Efficient AES Side-Channel Attacks Based on Residual Mamba Enhanced CNN. Entropy 2025, 27, 853.
  34. Pu, K.; Dang, H.; Kong, F.; Zhang, J.; Wang, W. A Quantitative Analysis of Non-Profiled Side-Channel Attacks Based on Attention Mechanism. Electronics 2023, 12, 3279.
  35. Chinbat, M.; Wu, L.; Zhang, X.; Yang, Y.; Wei, M. Comparative Deep Learning-Based Side-Channel Analysis of an FPGA-Based CRYSTALS-Kyber NTT Accelerator. Cryptography 2025, 9, 64.
  36. Wang, J.; Zhai, G.; Gao, H.; Xu, L.; Li, X.; Li, Z.; Huang, Z.; Xie, C. A Hardware Trojan Detection and Diagnosis Method for Gate-Level Netlists Based on Machine Learning and Graph Theory. Electronics 2024, 13, 59.
  37. Diavastos, B.A.A.; Peh, L.-S.; Carlson, T.E. Secure Run-Time Hardware Trojan Detection Using Lightweight Analytical Models. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2024, 43, 431–441.
  38. Li, Z.; Li, H.; Meng, L. Model Compression for Deep Neural Networks: A Survey. Computers 2023, 12, 60.
  39. Wang, M.; Zhao, Y.; Liu, J.J.; Chen, C.Z.; Gu, R.J.; Guo, X.Z. Large Multimodal Model Compression via Iterative Efficient Pruning and Distillation. In Companion Proceedings of the ACM Web Conference 2024; ACM: New York, NY, USA, 2024.
  40. Hong, Y.-W.; Leu, J.-S.; Faisal, M.; Prakaso, S.W. Analysis of Model Compression Using Knowledge Distillation. IEEE Access 2022, 10, 85095–85105.
  41. Kieu, D.-N.-B. Research on RISC-V-Based Multicore Processor for Multi-Threading. Ph.D. Thesis, The University of Electro-Communications, Tokyo, Japan, 2025.
  42. Dharavathu, A. Towards Reconfigurable Hardware for In-Field Hardware Bug Patches. Master’s Thesis, University of Calgary, Calgary, AB, Canada, 2024.
Figure 1. Our comprehensive visualisation of hardware security threats targeting IoT devices.
Figure 2. Our proposed general architecture of a hardware Trojan.
Figure 3. Comparative analysis of performance metrics across different hardware platforms.
Table 1. Our proposed hardware-centric taxonomy of major physical attack classes targeting IoT chips, detailing the exploited physical mechanisms, primary hardware targets, attacker requirements, and security impact in terms of confidentiality, integrity, and availability.
| Threat | Attack Vector | Physical Mechanism/Vulnerability | Primary Target | Attacker Requirements | Security Impact |
|---|---|---|---|---|---|
| SCA | Power Analysis (SPA) | Visual inspection of power traces (P_total) to identify instruction sequences | Crypto Co-processors (RSA, ECC) | Low: Oscilloscope, Shunt Resistor | Confidentiality: Recovery of coarse-grained secrets (e.g., RSA exponent). |
| SCA | Power Analysis (DPA/CPA) | Statistical correlation (Pearson) between hypothetical leakage models (Hamming Weight) and actual power consumption. | AES/Symmetric Engines | Medium: Oscilloscope, statistical post-processing (e.g., Pearson correlation) | Confidentiality: Full key extraction from noisy traces. |
| SCA | Electromagnetic Analysis (EMA) | Detection of magnetic near-fields generated by current loops in metal interconnects (Maxwell’s Laws). | Localized Logic Blocks (Specific Co-processor) | Medium: EM Probes, SDR, XYZ Table | Confidentiality: Spatial localization of leakage, bypassing global noise. |
| SCA | Micro-architectural Timing | Exploitation of data-dependent execution times caused by shared resources. | Cache Memory, Branch Predictor, Pipeline | Low: Remote code execution or shared OS | Privacy: Inference of memory access patterns or key-dependent lookups |
| Hardware Trojans | Combinational Trigger | Activation via rare logic states on internal nets (P_trigger ≈ 0). | System Bus, Internal Data Paths | High: Foundry access or Design House infiltration | Integrity/Availability: Silent dormancy until specific input pattern occurs. |
| Hardware Trojans | Sequential Trigger | “Time-bomb” activation based on state machines or counters (e.g., clock cycles). | Counter Registers, RTC (Real-Time Clock) | High: Supply Chain manipulation | Availability: Synchronized fleet failure after deployment duration. |
| Hardware Trojans | Payload: Denial of Service | Modification of critical control signals to freeze or destroy the chip. | Clock Tree, Reset Logic, Fuses | Inherited from trigger mechanism | Availability: Permanent (“Kill Switch”) or temporary device failure. |
| Hardware Trojans | Payload: Info Leakage | Modulation of side-channels (Thermal, Power, Delay) to transmit secrets covertly. | Power Management Unit (PMU), GPIO | Inherited from trigger mechanism | Confidentiality: Exfiltration of keys via covert channels. |
| Hardware Trojans | Payload: Parametric | Altering transistor sizing or doping to degrade performance or entropy. | Analog Front-End, RNG (TRNG) | High: Foundry manipulation | Integrity: Weakened cryptography due to predictable RNG. |
| FIA | Voltage Glitching | Undervolting/overvolting to violate setup/hold time constraints of flip-flops. | Power Regulation (LDO), Control Logic | Low: FPGA, Voltage Glitcher | Integrity: Instruction skipping (bypassing authentication). |
| FIA | Clock Glitching | Injecting transient pulses into the clock signal to corrupt instruction fetch/decode. | Clock Distribution Network | Low: FPGA, Direct Pin Access | Integrity: Altering control flow or loop parameters. |
| FIA | Optical/Laser Faults | Photoelectric effect induces localized charge carriers, causing bit-flips in memory | SRAM, Register File, Flash Memory | High: Laser Station, Decapsulation equipment | Confidentiality: Differential Fault Analysis (DFA) to recover keys. |
Table 2. Our proposed taxonomy of TinyML Algorithms for On-Chip IoT Security: Target threats, input modalities, and implementation overhead.
| Model | Security Application | Input Data | Implementation Cost | Key Strength for IoT |
|---|---|---|---|---|
| 1D-CNN | SCA Detection: Identifying DPA/CPA patterns amidst noise. | Raw Power Traces, EM Emanations. | High: Requires hardware accelerator or heavy DSP. | Robust against trace misalignment and jitter. |
| MLP | Fault Injection: Classifying glitch shapes (e.g., voltage droop). | Glitch Detector outputs, On-chip Voltmeter. | Medium: Heavy memory usage for weights if FC. | Simple architecture, easy to parallelize on SIMD. |
| LSTM/GRU | Hardware Trojans: Detecting sequential triggers or complex payloads. | Sequence of Power, Op-codes, or PC values. | High: Complex memory management (gating). | Able to learn long-term dependencies (cycles). |
| AE/VAE | Zero-Day Anomaly: Detecting unknown attacks without labels. | Performance Counters (HPC), Thermal Maps. | Medium/High: Inference is light, training is heavy. | Does not require a database of known attacks. |
| Random Forest | Logic Locking: Modeling PUF responses or verifying stability. | Challenge–Response Pairs (CRPs). | Low: Can be implemented as IF-ELSE statements. | Extremely fast inference and interpretable logic. |
| SVM | Malware Detection: Identifying micro-architectural anomalies. | Cache hits/misses, Branch stats. | Low: Efficient if using linear kernels. | Effective in high-dimensional spaces with small data. |
| SNN | Always-On Monitoring: Wake-up trigger for coarse anomalies. | Event-based sensor spikes (asynchronous). | Ultra-Low: Event-driven (power only on spikes). | Extreme energy efficiency, mimics biology. |
Table 3. State-of-the-art comparison of AI-driven security solutions for hardware threats.
| Ref. | Target Threat | AI Model/Architecture | Hardware Platform | Accuracy/Success Rate | Overhead/Efficiency |
|---|---|---|---|---|---|
| [24] | SCA (Hiding Countermeasures) | ANN/GRU (Gated Recurrent Unit) | Simulated IoT System (RISC-V/ARM) | 98.8% (Detection Accuracy) | High energy efficiency via feature engineering |
| [17] | Hardware Trojan (HT) | Deep SVDD (Unsupervised EM Analysis) | FPGA Implementation | 92.87% (Average Accuracy) | Superior generalization vs. benchmarks |
| [33] | SCA (AES/Masking) | Mamba-Enhanced CNN (Hybrid Attention) | ASIC/ASCAD Benchmark | >99% (Trace Classification) | Robust against trace jitter/misalignment |
| [14] | SCA (Lightweight Ascon) | Ensemble MLP/CNN | Microcontroller (STM32/Arduino) | 100% Key Recovery (<3k traces) | Efficient on 32-bit Cortex-M4 devices |
| [34] | Non-Profiled SCA | Attention-Based CNN | 8-bit AVR Microcontroller | 86% (Success Rate) | Effective on long, noisy power traces |
| [35] | SCA on PQC (Kyber) | Optimized CNN/MLP | FPGA Accelerator | 96.6% (Key Classif.) | 10.3× faster training than standard DL |
| [36] | Gate-Level Trojans | Graph Neural Network (GNN) | Netlist | >97% (F1-Score) | +25% True Negative Rate improvement |
| [37] | Runtime Trojans | Lightweight Analytical Model | CPU | 100% TPR/0 False Pos. | Negligible power overhead (0.005%) |

Share and Cite

MDPI and ACS Style

El Balbali, H.; Abou El Kalam, A. Security Threats and AI-Based Detection Techniques in IoT Chips. Chips 2026, 5, 9. https://doi.org/10.3390/chips5010009

