High-Density Neuromorphic Inference Platform (HDNIP) with 10 Million Neurons
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The authors presented a novel chip architecture that realizes high neuron density for neuromorphic inference applications. The proposed architecture eliminates the on-chip SRAM and uses NVM (i.e., ReRAM) for weight storage and near-memory computation.
Below are my questions on the technical details:
- For NVM selection, can the authors provide specific reasons why ReRAM was chosen for the demonstration? From Figure 5b, the energy consumption and density of ReRAM are superior to those of other NVMs, but it suffers from low endurance. Did the authors consider endurance in this NVM selection?
- How should the plots in Figure 8 be read? What do the squares represent, and why do the software and emulation results not show any trends?
- For the simulated tasks, what would the corresponding throughput, power consumption, and area be if they were implemented on a state-of-the-art GPU/TPU?
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The authors presented a paper titled “High-Density Neuromorphic Inference Platform (HDNIP) with 10 Million Neurons”. After carefully reviewing this work, I found the following:
- Although the work achieves some improvements over existing designs through novel strategies such as the elimination of on-chip SRAM, ReRAM-based NMC, input reuse, and adaptive TDM, the proposed neuromorphic chip supports only simple LIF neurons. How do the authors implement STDP mechanisms and other SNN models? Is their chip capable of executing these models?
Evidently, with simple LIF neurons a large-scale SNN can be supported, but more complex tasks require chips with greater capabilities. This is a current limitation of such chips. How will the authors cope with this limitation?
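For readers unfamiliar with the distinction the reviewer draws, the sketch below contrasts the two mechanisms: a discrete-time LIF update (inference-only dynamics of the kind the chip supports) and a pair-based STDP weight update (on-chip learning, which the reviewer asks about). This is a minimal illustrative sketch; all function names and parameter values (tau_m, v_th, a_plus, a_minus, tau_stdp) are assumptions for illustration only and are not taken from the HDNIP design.

```python
import numpy as np

def lif_step(v, i_syn, tau_m=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron (illustrative parameters)."""
    # Leaky integration toward the input current, then threshold and reset.
    v = v + dt / tau_m * (-v + i_syn)
    spiked = v >= v_th
    v = np.where(spiked, v_reset, v)
    return v, spiked

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    """Pair-based STDP weight update (hypothetical learning-rule parameters)."""
    # Potentiate when the presynaptic spike precedes the postsynaptic one,
    # depress otherwise; weights are clipped to [0, 1].
    dt_spike = t_post - t_pre
    dw = np.where(dt_spike > 0,
                  a_plus * np.exp(-dt_spike / tau_stdp),
                  -a_minus * np.exp(dt_spike / tau_stdp))
    return np.clip(w + dw, 0.0, 1.0)

# Example usage: one inference step, then a hypothetical learning update.
v, spiked = lif_step(v=np.zeros(4), i_syn=np.array([0.5, 1.2, 0.1, 2.0]))
w = stdp_update(w=np.full(4, 0.5), t_pre=10.0,
                t_post=np.array([12.0, 8.0, 15.0, 11.0]))
```

Supporting only the first function in hardware is sufficient for large-scale inference, whereas STDP-style learning additionally requires per-synapse spike-timing bookkeeping and in-place weight writes, which is the capability gap the reviewer raises.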
Some important works are:
Richter, O., Wu, C., Whatley, A. M., Köstinger, G., Nielsen, C., Qiao, N., & Indiveri, G. (2024). DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor. Neuromorphic Computing and Engineering, 4(1), 014003.
Sripad, A., Sanchez, G., Zapata, M., Pirrone, V., Dorta, T., Cambria, S., ... & Madrenas, J. (2018). SNAVA—A real-time multi-FPGA multi-model spiking neural network simulation architecture. Neural Networks, 97, 28-45.
Author Response
Please see the attachment.
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The paper can be accepted.