  • Indexed in Scopus
  • Time to First Decision: 23 days


All Articles (114)

Hyperdimensional computing (HDC) provides a highly efficient alternative to neural networks for intracranial electroencephalography (iEEG) seizure detection on edge devices with strict resource limits. While sparse HDC can significantly reduce energy use, current hardware fails to capitalize on it for two reasons. First, existing designs do not optimize the encoding architecture specifically for sparse execution, leaving potential energy savings unrealized. Second, researchers often ignore the “area” problem: the large physical space that high-dimensional vectors occupy on a chip, which must be addressed to make these devices small enough for practical edge use. This work presents a sparse HDC accelerator that bridges these gaps through three key contributions. First, we streamline the sparse encoding architecture to improve energy and area efficiency by integrating a compressed item memory (CompIM) and simplified spatial bundling. Second, to address the area bottleneck and enable true edge deployment, we systematically explore area trade-offs via sequentialization techniques, evaluating both channel folding (CF) and vector folding (VF). Third, we push efficiency further with an item-memory-free (IM-free) architecture: by replacing the baseline segmented shift binding with a standard shift binding scheme and using raw local binary pattern (LBP) codes directly as shift amounts, we bypass the CompIM entirely for simultaneous area and energy savings. Because this optimization incurs a drop in detection accuracy, we ultimately present two tailored configurations. First, our energy-optimized IM-free design achieves a 5.55× area and 3.08× energy improvement over the sparse HDC baseline, alongside 8.20× and 13.37× improvements over the dense baseline. Second, to prioritize clinical performance, our balanced streamlined design uses a channel folding factor (CFF) of 4 to preserve higher accuracy. This balanced approach achieves a 5.97× area and a 4.66× energy improvement over the dense baseline, at the cost of a 4× latency increase.
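The IM-free encoding step can be sketched in software: the raw LBP code of a channel window serves directly as the cyclic shift amount applied to a shared sparse base hypervector, so no item-memory lookup is needed, and the channel outputs are bundled with a component-wise OR. This is an illustrative behavioral model only; the dimensionality, sparsity level, and 7-sample LBP window below are assumptions, not the paper's hardware parameters.

```python
import numpy as np

D = 1024          # hypervector dimensionality (illustrative; real designs use larger D)
SPARSITY = 16     # number of active (1) components in each sparse HV (assumed)

rng = np.random.default_rng(0)

def lbp_code(window):
    """6-bit local binary pattern: sign of consecutive sample differences."""
    diffs = np.diff(window) > 0
    return int(diffs.dot(1 << np.arange(diffs.size)))

def encode_channel(base_hv, window):
    """IM-free binding: cyclically shift the shared base HV by the raw LBP code,
    bypassing any item-memory lookup."""
    return np.roll(base_hv, lbp_code(window))

def spatial_bundle(channel_hvs):
    """Sparse spatial bundling: component-wise OR across channels."""
    return np.bitwise_or.reduce(channel_hvs)

# Shared sparse base hypervector (stands in for the compressed item memory).
base_hv = np.zeros(D, dtype=np.uint8)
base_hv[rng.choice(D, SPARSITY, replace=False)] = 1

# 4 channels, 7 samples each -> one 6-bit LBP code per channel.
windows = rng.standard_normal((4, 7))
bundled = spatial_bundle(np.stack([encode_channel(base_hv, w) for w in windows]))
```

Note that a cyclic shift preserves the number of active components, which is why binding by shift keeps the hypervectors sparse and the switching activity low.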

23 April 2026

Overview figure. (a) Intracranial electroencephalography (iEEG) seizure detection system with hyperdimensional computing (HDC) and local binary pattern (LBP) codes; (b) item memory and spatial HDC encoder with our 3 contributions: (1) streamlined architecture with compressed item memory (CompIM) and simplified spatial bundling, (2) channel folding with a channel folding factor (CFF), and (3) the item-memory-free (IM-free) architecture; (c) switching activity comparison of dense high-dimensional vectors (HVs) and sparse HVs.

The rapid expansion of the Internet of Things (IoT) has exposed resource-limited devices to novel physical threats, such as Side-Channel Attacks (SCAs) and Hardware Trojans (HTs). Traditional security mechanisms are often incapable of withstanding such hardware-based attacks, particularly on low-power Systems-on-Chip (SoCs), where static defenses can incur 2× to 3× overhead in silicon area and power. Herein, we formulate and discuss the gap between hardware security and embedded AI. We present a comprehensive survey of the current hardware threat landscape and analyze the emergence of “Secure-by-Design” paradigms, focusing specifically on the integration of Edge AI and TinyML as active, on-chip intrusion detection mechanisms. This review critically analyzes the trade-offs of running lightweight ML models on hardware by comparing state-of-the-art approaches. Our analysis highlights that optimized architectures, such as Mamba-Enhanced Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs), can achieve detection accuracies exceeding 99% against SCAs and above 92% against stealthy Hardware Trojans, while offering up to 75% lower power consumption than standard deep learning baselines. Finally, we briefly discuss open challenges, such as adversarial attacks on the defense models themselves, and highlight future directions toward secure chips built on robust, AI-driven technology.

4 March 2026

Our comprehensive visualisation of hardware security threats targeting IoT devices.

For sensing applications, complementary metal oxide semiconductor (CMOS) image sensors (CIS) with a lateral overflow integration capacitor (LOFIC) are in high demand. The LOFIC CIS achieves high-dynamic-range (HDR) imaging by combining a low-conversion-gain (LCG) signal, which accommodates a large maximum signal charge, with a high-conversion-gain (HCG) signal, which provides a low electron-referred noise floor. However, the LOFIC CIS faces challenges in power consumption and circuit area when reading both HCG and LCG signals. To address these issues, this study proposes a readout circuit composed of area-efficient MOS capacitors using a folding DC operating point technique and an in-column signal selector for on-chip HDR merging of HCG and LCG signals. A 10-bit test chip was fabricated in a 0.18 µm CMOS process with MOS capacitors. The fabricated chip maintains high linearity, achieving an integral nonlinearity (INL) of +7.17/−6.93 LSB for the HCG signal and +7.95/−7.41 LSB for the LCG signal. Furthermore, the proposed design achieves a 14.92% reduction in the average power consumption of the total readout circuit and a 36.5% reduction in readout circuit area.
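The on-chip HDR merger can be understood as a per-pixel selector: use the low-noise HCG code while it is below saturation, otherwise rescale the LCG code by the conversion-gain ratio into the same signal domain. The sketch below is a behavioral model with illustrative gains and threshold values, not the chip's actual in-column circuit parameters.

```python
import numpy as np

# Illustrative conversion gains and ADC range; these are NOT the fabricated chip's values.
HCG, LCG = 120.0, 15.0          # high / low conversion gain (uV per electron, assumed)
GAIN_RATIO = HCG / LCG          # LCG-to-HCG rescaling factor (8x here)
ADC_MAX = 1023                  # 10-bit ADC code range
HCG_SAT = 900                   # selector threshold: HCG codes above this are near saturation

def hdr_merge(hcg_code, lcg_code):
    """In-column selector logic: keep the low-noise HCG code when unsaturated,
    otherwise fall back to the LCG code rescaled into the HCG signal domain."""
    hcg_code = np.asarray(hcg_code, dtype=np.float64)
    lcg_code = np.asarray(lcg_code, dtype=np.float64)
    use_hcg = hcg_code < HCG_SAT
    return np.where(use_hcg, hcg_code, lcg_code * GAIN_RATIO)

# Dark pixel: the HCG read wins; bright pixel: HCG saturates, so rescaled LCG is used.
merged = hdr_merge([100, ADC_MAX], [12, 500])
```

Rescaling by the gain ratio before selection is what keeps the merged transfer curve linear across the HCG/LCG crossover, which is the property the reported INL figures quantify.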

24 February 2026

Circuit schematic, pixel operation timing, and potential diagram of the LOFIC CIS. (a) Pixel circuit schematic of the LOFIC CIS. (b) Pixel operation timing of the LOFIC CIS. (c) Potential diagram of the LOFIC CIS.

The fast expansion of the Internet of Things (IoT) has increased the need for strong security measures to protect the enormous network of interconnected devices. This paper proposes a unique approach that combines optimization, intuitive design principles, and Least Weighted Elliptic Curve Cryptography (LWECC) to improve IoT device security while reducing power consumption. The proposed optimization strategy focuses on lowering computational overhead, which is critical for IoT devices with limited energy and processing power. By carefully selecting appropriate elliptic curves and optimizing the cryptographic algorithms, the proposed method significantly reduces the energy required for cryptographic operations, ensuring that IoT devices can continue to function without compromising security. Furthermore, by selecting elliptic curves with minimal attack vulnerability, LWECC provides an additional layer of protection. This technique ensures that IoT devices remain highly resilient even in the face of emerging threats, reducing the chance of security breaches while preserving functionality and avoiding excessive power use. Experimental results show a power consumption of only 0.156 W and 0.25 W for the memory and router topologies, respectively, with an error margin of 0.01. The stated error margin pertains to the simulation-based evaluation of transmission-level data handling within the LWECC-enabled memory/router pipeline, rather than to the risk of physical memory-cell failure or fabrication yield. The value represents the maximum packet/data-stream loss observed during encrypted data transfer, not hardware memory reliability.
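The abstract's energy argument rests on scalar multiplication dominating ECC cost: double-and-add performs roughly one doubling per key bit plus one addition per set bit, so curve and key-length choices directly bound the work per cryptographic operation. The toy model below makes that operation count concrete; the tiny curve over p = 97 is purely illustrative (it is not an LWECC or production curve), and the code omits all side-channel hardening a real implementation would need.

```python
# Toy affine elliptic-curve arithmetic over F_p. The curve and generator are
# illustrative examples only, chosen so the arithmetic fits in a few lines.
P, A, B = 97, 2, 3              # curve: y^2 = x^3 + 2x + 3 (mod 97)
G = (3, 6)                      # a point on the toy curve (6^2 = 36 = 3^3 + 2*3 + 3 mod 97)
INF = None                      # point at infinity (group identity)

def point_add(p1, p2):
    """Affine point addition, covering identity, inverse, doubling, and generic cases."""
    if p1 is INF:
        return p2
    if p2 is INF:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                        # p2 is the inverse of p1
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P    # tangent slope (doubling)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P           # chord slope (addition)
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def scalar_mult(k, point):
    """Double-and-add: ~log2(k) doublings plus one addition per set key bit,
    which is why key length and curve choice bound the energy per operation."""
    result, addend = INF, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result
```

For example, a 163-bit key costs on the order of 163 doublings plus at most 163 additions per scalar multiplication, whereas a 256-bit key costs proportionally more, which is the trade-off a power-constrained curve selection exploits.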

23 February 2026

OMNI memory-partitioning architecture illustrating structured sparsity patterns that define multi-bank access behavior for the proposed LWECC-secured memory and router.

Chips - ISSN 2674-0729