Article
Peer-Review Record

High-Density Neuromorphic Inference Platform (HDNIP) with 10 Million Neurons

Electronics 2025, 14(17), 3412; https://doi.org/10.3390/electronics14173412
by Yue Zuo 1, Ning Ning 1, Ke Cao 2, Rui Zhang 2, Cheng Fu 1, Shengxin Wang 2, Liwei Meng 1, Ruichen Ma 1, Guanchao Qiao 1, Yang Liu 1 and Shaogang Hu 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 14 July 2025 / Revised: 13 August 2025 / Accepted: 19 August 2025 / Published: 27 August 2025
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors presented a novel chip architecture that realizes high neuron density for neuromorphic inference applications. The proposed architecture eliminates on-chip SRAM and uses NVM (i.e., ReRAM) for weight storage and near-memory computation.

Below are my questions on the technical details:

  1. For NVM selection, can the authors provide specific reasons why ReRAM was chosen for the demonstration? From Figure 5b, the energy consumption and density of ReRAM are superior to those of other NVMs, but it suffers from low endurance. Did the authors consider endurance when selecting this NVM?
  2. How should the plots in Figure 8 be read? What do the squares represent, and why do the software and emulation results not show any trends?
  3. For the simulated tasks, what would the corresponding throughput, power consumption, and area be if the same workloads were implemented on a state-of-the-art GPU/TPU?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The authors presented a paper titled “High-Density Neuromorphic Inference Platform (HDNIP) with 10 Million Neurons”. After carefully reviewing this work, I found the following:

- The work achieves some improvements over existing designs through novel strategies such as the elimination of on-chip SRAM, ReRAM-based NMC, input reuse, and adaptive TDM. However, the proposed neuromorphic chip supports only simple LIF neurons. How would the authors implement STDP mechanisms and other SNN models? Is their chip capable of executing these models?

Evidently, with support for simple LIF neurons, a large-scale SNN can be realized. However, more complex tasks require chips with greater capabilities, and this is a current limitation of such chips. How will the authors cope with this limitation?
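
(For context, the distinction the reviewer draws between inference-only LIF dynamics and on-chip plasticity can be illustrated with a minimal sketch; all function names, parameters, and constants below are illustrative and are not taken from the HDNIP design.)

```python
import numpy as np

# Minimal discrete-time LIF update (inference only): membrane potentials
# leak, integrate weighted input spikes, and emit a spike on threshold
# crossing. All constants are illustrative.
def lif_step(v, in_spikes, weights, v_th=1.0, leak=0.9, v_reset=0.0):
    v = leak * v + weights @ in_spikes          # leaky integration
    out_spikes = (v >= v_th).astype(float)      # threshold crossing
    v = np.where(out_spikes > 0, v_reset, v)    # reset fired neurons
    return v, out_spikes

# Pair-based STDP update (plasticity): pre-before-post potentiates,
# post-before-pre depresses, using decaying spike traces. Unlike the LIF
# step above, this writes to the weight memory on every update.
def stdp_step(w, pre_trace, post_trace, pre_spikes, post_spikes,
              a_plus=0.01, a_minus=0.012, trace_decay=0.95):
    pre_trace = trace_decay * pre_trace + pre_spikes
    post_trace = trace_decay * post_trace + post_spikes
    w += a_plus * np.outer(post_spikes, pre_trace)   # potentiation
    w -= a_minus * np.outer(post_trace, pre_spikes)  # depression
    return w, pre_trace, post_trace
```

Unlike the inference-only LIF step, the plasticity routine requires frequent writes to the weight memory, which is where NVM endurance and on-chip learning support become limiting factors.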

Some important works are: 

Richter, O., Wu, C., Whatley, A. M., Köstinger, G., Nielsen, C., Qiao, N., & Indiveri, G. (2024). DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor. Neuromorphic computing and engineering, 4(1), 014003.

Sripad, A., Sanchez, G., Zapata, M., Pirrone, V., Dorta, T., Cambria, S., ... & Madrenas, J. (2018). SNAVA—A real-time multi-FPGA multi-model spiking neural network simulation architecture. Neural Networks, 97, 28-45.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The paper can be accepted.
