Search Results (3)

Search Parameters:
Keywords = Neuromorphic System-on-a-Chip (NeuSoC)

17 pages, 4725 KiB  
Article
Optimal Mapping of Spiking Neural Network to Neuromorphic Hardware for Edge-AI
by Chao Xiao, Jihua Chen and Lei Wang
Sensors 2022, 22(19), 7248; https://doi.org/10.3390/s22197248 - 24 Sep 2022
Cited by 11 | Viewed by 4212
Abstract
Neuromorphic hardware, the new generation of non-von Neumann computing systems, implements spiking neurons and synapses to execute spiking neural network (SNN)-based applications. Its energy efficiency makes neuromorphic hardware well suited to the power-constrained environments in which sensors and edge nodes of the Internet of Things (IoT) operate. Mapping SNNs onto neuromorphic hardware is challenging because a non-optimized mapping may result in high network-on-chip (NoC) latency and energy consumption. In this paper, we propose NeuMap, a simple and fast toolchain for mapping SNNs onto multicore neuromorphic hardware. NeuMap first obtains the communication patterns of an SNN analytically, which simplifies the mapping process. It then exploits localized connections, groups adjacent layers into sub-networks, and partitions each sub-network into multiple clusters while meeting the hardware resource constraints. Finally, a meta-heuristic algorithm searches the reduced search space for the best cluster-to-core mapping scheme. We evaluate NeuMap and two prior works (SpiNeMap and SNEAP) on six realistic SNN-based applications. The experimental results show that, compared to SpiNeMap and SNEAP, NeuMap reduces average energy consumption by 84% and 17% and spike latency by 55% and 12%, respectively.
(This article belongs to the Section Internet of Things)
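The cluster-to-core search stage described in the abstract can be sketched with a toy model. The code below is a minimal illustration, not NeuMap itself: it assumes a 2D NoC mesh with Manhattan (XY-routing) hop counts, models inter-cluster traffic as spike counts, and uses simulated annealing as a stand-in for the unspecified meta-heuristic; the cost model, cooling schedule, and all names are assumptions.

```python
import math
import random

def manhattan(a, b):
    """Hop count between two cores on a 2D NoC mesh under XY routing."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def mapping_cost(placement, traffic):
    """Total spike-hop cost: spikes exchanged between clusters, weighted by hops."""
    return sum(n * manhattan(placement[src], placement[dst])
               for (src, dst), n in traffic.items())

def anneal(clusters, cores, traffic, steps=20000, t0=10.0, seed=0):
    """Search cluster-to-core assignments by simulated annealing,
    keeping the best placement seen."""
    rng = random.Random(seed)
    placement = dict(zip(clusters, cores))      # arbitrary initial assignment
    cost = mapping_cost(placement, traffic)
    best, best_cost = dict(placement), cost
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9    # linear cooling schedule
        a, b = rng.sample(clusters, 2)          # propose swapping two cores
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = mapping_cost(placement, traffic)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                     # accept (always, if better)
            if cost < best_cost:
                best, best_cost = dict(placement), cost
        else:
            placement[a], placement[b] = placement[b], placement[a]  # revert
    return best, best_cost
```

On a 2x2 mesh with chain-structured traffic A→B→C→D, the search pulls communicating clusters onto adjacent cores, which is the latency/energy effect the paper targets.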

15 pages, 1546 KiB  
Article
Topology-Aware Mapping of Spiking Neural Network to Neuromorphic Processor
by Chao Xiao, Yao Wang, Jihua Chen and Lei Wang
Electronics 2022, 11(18), 2867; https://doi.org/10.3390/electronics11182867 - 10 Sep 2022
Cited by 4 | Viewed by 2690
Abstract
Neuromorphic processors, the new generation of brain-inspired non-von Neumann computing systems, are designed to better support the execution of spiking neural networks (SNNs). A neuromorphic processor typically consists of multiple cores and adopts a Network-on-Chip (NoC) as the communication framework. However, an unoptimized mapping of SNNs onto the processor generates a large volume of spike messages on the NoC, increasing both energy consumption and spike latency. To address this problem, we present a fast toolchain, NeuToMa, for mapping SNNs onto the neuromorphic processor. NeuToMa exploits the global topology of SNNs and uses a group-optimization strategy to partition them into multiple clusters, significantly reducing NoC traffic. NeuToMa then dispatches the clusters to neuromorphic cores, minimizing the average hop count of spike messages and balancing the NoC workload. The experimental results show that, compared with the state-of-the-art technique, NeuToMa reduces spike latency and energy consumption by up to 55% and 86%, respectively.
(This article belongs to the Section Artificial Intelligence)
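The dispatch step above, minimizing the average hop of spike messages, can be approximated by a greedy placement heuristic. The sketch below is illustrative only, not the authors' algorithm: it assumes Manhattan hop counts on a mesh and places the heaviest-communicating clusters first, each on the free core with the lowest traffic-weighted distance to its already-placed partners.

```python
def hops(a, b):
    """Manhattan hop count between two mesh cores."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_place(clusters, traffic, cores):
    """Place clusters one by one, heaviest communicators first, each on the
    free core that minimizes spike-hops to already-placed partners."""
    volume = {c: 0 for c in clusters}           # total traffic per cluster
    for (a, b), n in traffic.items():
        volume[a] += n
        volume[b] += n
    placement, free = {}, list(cores)
    for c in sorted(clusters, key=lambda k: -volume[k]):
        def cost(core):
            total = 0
            for (a, b), n in traffic.items():
                other = b if a == c else a if b == c else None
                if other in placement:          # only count placed partners
                    total += n * hops(core, placement[other])
            return total
        best = min(free, key=cost)              # first free core on ties
        placement[c] = best
        free.remove(best)
    return placement
```

On a 1x3 line of cores with traffic A↔B of 10 spikes and A↔C of 1, the heavy A–B pair lands on adjacent cores while the light partner C takes the remaining slot, which is the hop-minimizing behavior the abstract describes.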

24 pages, 2283 KiB  
Review
Towards Neuromorphic Learning Machines Using Emerging Memory Devices with Brain-Like Energy Efficiency
by Vishal Saxena, Xinyu Wu, Ira Srivastava and Kehan Zhu
J. Low Power Electron. Appl. 2018, 8(4), 34; https://doi.org/10.3390/jlpea8040034 - 2 Oct 2018
Cited by 28 | Viewed by 10897
Abstract
The ongoing revolution in Deep Learning is redefining the nature of computing, driven by a growing volume of pattern-classification and cognitive tasks. Specialized digital hardware for deep learning still predominates, owing to the flexibility of software implementations and the maturity of the algorithms. However, cognitive computing is increasingly desired at the edge, i.e., on energy-constrained hand-held devices, where digital von Neumann architectures are energy prohibitive. Recent explorations in digital neuromorphic hardware have shown promise but offer a neurosynaptic density too low for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-Chip (NeuSoC) architectures promise machine-learning capability in a chip-scale form factor, with several orders of magnitude improvement in energy efficiency. Practical demonstrations of such architectures have been limited, however, because the performance of emerging memory devices falls short of the idealized behavior expected of memristor-based analog synapses, or weights, and novel machine-learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realizing large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with 'brain-like' energy efficiency. Full article
(This article belongs to the Special Issue CMOS Low Power Design)
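The role of the memristor-based analog synapses mentioned in the abstract can be illustrated numerically: in a crossbar array, each synaptic weight is stored as a conductance, and a column's output current is the dot product of the input voltages with that column's conductances (Ohm's law plus Kirchhoff's current law), feeding a spiking neuron. The sketch below is a hypothetical, idealized model that ignores the device non-idealities the review discusses; the leaky integrate-and-fire parameters are arbitrary.

```python
def crossbar_mac(conductances, voltages):
    """Idealized analog multiply-accumulate in a memristive crossbar:
    column current j = sum_i G[i][j] * V[i] (Ohm's law summed by
    Kirchhoff's current law). Rows are inputs, columns are outputs."""
    n_rows = len(voltages)
    return [sum(conductances[i][j] * voltages[i] for i in range(n_rows))
            for j in range(len(conductances[0]))]

def lif_step(v, i_in, leak=0.9, v_th=1.0):
    """One timestep of a leaky integrate-and-fire neuron driven by a
    column current: decay the membrane, integrate the input, spike
    and reset when the threshold is crossed."""
    v = leak * v + i_in
    if v >= v_th:
        return 0.0, 1   # reset membrane, emit a spike
    return v, 0
```

A single matrix-vector product thus happens in one analog step inside the array, which is where the claimed orders-of-magnitude energy advantage over digital von Neumann data movement comes from.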
