Neuromorphic Memory and Computing-in-Memory Architectures: From Devices to Systems

A special issue of Micromachines (ISSN 2072-666X). This special issue belongs to the section "A: Physics".

Deadline for manuscript submissions: 10 September 2026 | Viewed by 1380

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576, Singapore
Interests: nanostructured materials; 2D materials; memristors; optoelectronic synaptic devices; brain-inspired computing

Guest Editor
State Key Laboratory of Precision Measurement Technology and Instruments, School of Precision Instruments and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China
Interests: low-dimensional materials; memory; sensor; logic device; neuromorphic device; in-memory computing; in-sensor computing; hardware neural network

Special Issue Information

Dear Colleagues,

Neuromorphic memory and computing-in-memory architectures are poised to reshape the future of intelligent information processing. By emulating the parallel processing and adaptive learning capabilities of biological neural networks, neuromorphic memory devices deliver energy-efficient data storage and dynamic synaptic functionality. In parallel, computing-in-memory architectures overcome the von Neumann bottleneck by integrating logic and memory within the same physical units, offering substantial advances in latency reduction, scalability, and power efficiency.

Rapid progress in emerging memory technologies—including filament-free and non-volatile defect-engineered memristors, optoelectronic and photoresponsive memories, and photonic–electronic hybrid synaptic devices—is accelerating the development of highly dense and reliable neuromorphic hardware. At the system and algorithm levels, innovations such as high-linearity analog matrix-vector multiplication, error-resilient crossbar architectures, neuromorphic accelerators, and full-stack algorithm–hardware co-optimization are driving real-world deployment for edge intelligence, autonomous navigation, human–machine interaction, and cognitive sensing.

This Special Issue provides a dedicated forum for breakthroughs spanning materials, devices, architectures, and system-level integration for neuromorphic memory and computing-in-memory technologies. We invite submissions that explore new device physics, scalable fabrication strategies, low-power and high-precision neuromorphic computing algorithms, heterogeneous integration, 3D stacking, and cross-domain intelligence. Interdisciplinary studies bridging materials science, semiconductor engineering, and artificial intelligence are strongly encouraged.
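As a concrete illustration of the analog matrix-vector multiplication that crossbar-based computing-in-memory performs, the sketch below models a memristive array in numpy: stored weights are conductances, inputs are row voltages, and each column current is the Kirchhoff sum of per-cell Ohm's-law products. The array size and value ranges are illustrative, not taken from any specific device.

```python
import numpy as np

# Toy model of analog matrix-vector multiplication on a memristive crossbar:
# weights are stored as conductances G (siemens), inputs are row voltages V,
# and each column current is the Kirchhoff sum I_j = sum_i G[i, j] * V[i].
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # 4x3 conductance array (illustrative)
V = np.array([0.1, 0.2, 0.0, 0.3])         # read voltages per row (volts)

I = V @ G  # column currents: one analog multiply-accumulate per column

# The same result, written out as the explicit Kirchhoff sum:
I_explicit = np.array([sum(G[i, j] * V[i] for i in range(4)) for j in range(3)])
assert np.allclose(I, I_explicit)
print(I)
```

The point of the in-memory formulation is that the multiply and the accumulate happen physically at the array, so no weight data moves across a memory bus during the operation.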

Dr. Jinyong Wang
Prof. Dr. Jing Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Micromachines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2100 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neuromorphic devices
  • memory devices
  • computing-in-memory
  • memristor
  • synaptic electronics
  • analog neural computing
  • crossbar arrays
  • von Neumann bottleneck
  • edge intelligence
  • optoelectronic memory
  • photonic–electronic hybrid computing
  • hardware AI accelerators
  • intelligent sensing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research


16 pages, 63805 KB  
Article
A Low-Offset Sense Amplifier with Self-Adaptive Calibration and Dynamic Body-Biased Mitigation Technology for Enhanced SRAM Read Performance
by Yulan Liu, Yibo Hu, Han Xiao, Yuanzhen Liu and Jing Chen
Micromachines 2026, 17(5), 591; https://doi.org/10.3390/mi17050591 (registering DOI) - 11 May 2026
Viewed by 4
Abstract
Offset voltage (VOS) is a critical parameter of sense amplifiers (SAs), determining both the read reliability and performance of SRAM. This paper proposes SC-DISBSA, a low-VOS SA that combines self-adaptive calibration with dynamic body bias technology. Based on the linear relationship between the transfer gate voltage and VOS, a three-step self-adaptive calibration algorithm is established. Supported by the calibration control circuit, this approach quantitatively calibrates circuit mismatch while dynamic body bias further suppresses remaining variations. Under a 28 nm CMOS process, the VOS standard deviation (σOS) of SC-DISBSA remains below 3.1 mV across a 0.7 V to 1.1 V supply range, representing reductions of 49.9% and 69.3% compared to the voltage-latch SA (VLSA) and current-latch SA (CLSA), respectively. In the typical case (TT/0.9 V/27 °C) with a BL differential (ΔVBL) of 6σOS, SC-DISBSA reduces the required bitline discharge delay by 51.7% and reduces average read sensing power by 24.9% compared to VLSA. By adopting a non-conventional bitline power supply strategy, SC-DISBSA decreases worst-case (FF/1.1 V/125 °C) static power by 36.8% relative to VLSA. Additionally, it reduces gate area by 18.9%. Overall, SC-DISBSA effectively optimizes SRAM read latency and power efficiency.
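To see why offset calibration pays off statistically, the toy Monte Carlo sketch below models raw SA offset as Gaussian mismatch and a calibration loop that cancels it in fixed steps, leaving only a quantization residue. This is not the paper's SC-DISBSA circuit or its numbers; the 10 mV raw sigma and 2 mV calibration step are assumed purely for illustration.

```python
import numpy as np

# Toy Monte Carlo estimate of sense-amplifier offset spread (sigma_OS).
# Latch-type SA offset is dominated by input-pair mismatch; we model the
# raw offset as Gaussian and a stepwise calibration that subtracts the
# nearest multiple of the step, leaving only the quantization residue.
rng = np.random.default_rng(1)
n = 100_000
step_mv = 2.0                               # calibration step size (assumed)
raw_os = rng.normal(0.0, 10.0, size=n)      # raw offset, sigma = 10 mV (assumed)
residual = raw_os - step_mv * np.round(raw_os / step_mv)

print(f"raw sigma_OS       = {raw_os.std():.2f} mV")
print(f"calibrated sigma_OS = {residual.std():.2f} mV")
```

Because the residue of a wide Gaussian quantized to step s is roughly uniform on [-s/2, s/2], the calibrated spread approaches s/sqrt(12), which is why finer calibration steps translate directly into lower σOS and hence smaller required bitline differentials.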

19 pages, 4431 KB  
Article
A Parameter-Agnostic Adaptive Compensation in Memristor-Based Neuromorphic Systems for Parasitic Resistance
by Texu Liu, Hanbo Ren, Peiwen Tong, Wei Wang, Qingjiang Li, Meng Xia, Yi Sun, Rongrong Cao, Bing Song, Zhiwei Li and Haijun Liu
Micromachines 2026, 17(4), 481; https://doi.org/10.3390/mi17040481 - 16 Apr 2026
Viewed by 358
Abstract
Memristor-based neuromorphic computing offers a promising pathway for efficient in-memory processing. However, the scalability and reliability of such systems are severely compromised by parasitic resistances (including line and input resistances) in crossbar arrays, which cause significant IR-drop during vector–matrix multiplication (VMM). Existing research often suffers from high computational latency or relies on the precise extraction of parasitic parameters, which is impractical and computationally expensive for large-scale integration. To overcome these limitations, we propose a Parameter-Agnostic Adaptive Compensation (PAAC) method based on a distributed linear approximation model. By analyzing the circuit characteristics, we addressed the challenge of coupling between parasitic effects and output current, deriving a simplified linear relationship that requires no prior knowledge of specific resistance values. The PAAC method involves only a single-step pre-calibration experiment to determine a global compensation factor, achieving an ultra-low computational complexity during inference. We validated the method using a comprehensive two-stage strategy: board-level hardware experiments confirmed its feasibility by reducing current distortion from 71% to 2%, while extensive large-scale HSPICE simulations verified its scalability, restoring classification accuracy from 89% to 95%. This work provides a robust, low-overhead solution that eliminates the dependency on precise parameter modeling, facilitating the realization of large-scale, high-precision neuromorphic hardware.
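The sketch below captures the spirit of a parameter-agnostic, single-factor compensation: a hidden parasitic attenuation distorts each cell's effective read voltage, and one global factor, calibrated from a single known input, rescales the column outputs. This is a deliberately simplified model, not the paper's PAAC circuit analysis; the attenuation law, array size, and values are all assumptions for illustration.

```python
import numpy as np

# Toy model: line resistance attenuates the read voltage seen by each cell,
# more strongly the farther the cell is from the drivers. The compensator
# never sees the attenuation model; it only fits one global scalar alpha
# from a single calibration measurement with a known input vector.
rng = np.random.default_rng(2)
rows, cols = 32, 16
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))   # programmed conductances

i_idx = np.arange(rows)[:, None]
j_idx = np.arange(cols)[None, :]
atten = 1.0 - 2e-3 * (i_idx + j_idx)             # hidden parasitic attenuation

def measure(v):
    """Distorted column currents, as the physical array would produce them."""
    return ((v[:, None] * atten) * G).sum(axis=0)

def ideal(v):
    """Ideal VMM using the programmed conductances (the target result)."""
    return v @ G

# Single-step pre-calibration with a known input: fit one global factor.
v_cal = np.ones(rows)
alpha = ideal(v_cal).sum() / measure(v_cal).sum()

v = rng.uniform(0.0, 0.3, size=rows)             # an inference input
err_raw = np.abs(measure(v) - ideal(v)) / ideal(v)
err_comp = np.abs(alpha * measure(v) - ideal(v)) / ideal(v)
print(f"mean relative error before: {err_raw.mean():.1%}, after: {err_comp.mean():.1%}")
```

The global factor removes the mean IR-drop error at essentially zero inference cost (one multiply per output); the residual is the position-dependent spread around that mean, which is what a distributed model like the paper's addresses more finely.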

Review


34 pages, 517 KB  
Review
A Review of Embedded Artificial Intelligence Research (2023–2026): Technological Advancements, Representative Advances, and Future Prospects
by Zhaoyun Zhang
Micromachines 2026, 17(5), 586; https://doi.org/10.3390/mi17050586 (registering DOI) - 9 May 2026
Viewed by 349
Abstract
Since the publication of the "Review of Embedded Artificial Intelligence Research" in 2023, driven by innovations in hardware architectures, advances in lightweight algorithms, and the maturation of edge–cloud collaboration technologies, embedded artificial intelligence (embedded AI) has progressed from "technically feasible" to "large-scale deployment". As a continuation of that review, this article systematically surveys the core advances in embedded AI from 2023 to 2026. At the hardware level, it examines engineering progress in non-von Neumann architectures such as compute-in-memory and neuromorphic chips, as well as heterogeneous integration technologies. At the algorithmic level, it covers dynamic adaptive lightweighting, specialized edge-side optimization of large models (including on-device large language model fine-tuning and edge diffusion models), and lightweight multimodal approaches. In terms of deployment paradigms, it discusses edge-side full training, federated edge learning, edge–cloud collaborative intelligence, and emerging paradigms. At the application level, it illustrates the "perception–decision–execution" pipeline in industrial IoT, wearable healthcare, autonomous driving, embodied intelligence, and smart agriculture. The article also analyzes core challenges including ultra-low-power design for extreme scenarios, cross-platform standardization, edge-side data security and privacy, and model robustness in complex environments. Based on these findings, four research directions are proposed to guide future work.
