Search Results (2)

Search Parameters:
Keywords = spiking context guided network

15 pages, 4923 KB  
Article
LDD: High-Precision Training of Deep Spiking Neural Network Transformers Guided by an Artificial Neural Network
by Yuqian Liu, Chujie Zhao, Yizhou Jiang, Ying Fang and Feng Chen
Biomimetics 2024, 9(7), 413; https://doi.org/10.3390/biomimetics9070413 - 6 Jul 2024
Cited by 1 | Viewed by 2178
Abstract
The rise of large-scale Transformers has led to challenges regarding computational costs and energy consumption. In this context, spiking neural networks (SNNs) offer potential solutions due to their energy efficiency and processing speed. However, inaccurate surrogate gradients and feature space quantization make it difficult to directly train deep SNN Transformers. To tackle these challenges, we propose a method (called LDD) to align ANN and SNN features across different abstraction levels in a Transformer network. LDD incorporates structured feature knowledge from ANNs to guide SNN training, preserving crucial information and addressing inaccuracies in surrogate gradients by designing layer-wise distillation losses. The proposed approach outperforms existing methods on the CIFAR10 (96.1%), CIFAR100 (82.3%), and ImageNet (80.9%) datasets, and enables training of the deepest SNN Transformer network using ImageNet.
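The abstract's core idea, aligning intermediate ANN features with time-averaged SNN features layer by layer, can be illustrated with a short sketch. The following is a minimal PyTorch illustration assuming an MSE alignment between each selected ANN feature map and the corresponding time-averaged SNN feature map; the function and tensor names are hypothetical and are not taken from the LDD code.

    import torch
    import torch.nn.functional as F

    def layerwise_distillation_loss(ann_feats, snn_feats, weights=None):
        # ann_feats: list of ANN feature maps, one tensor [B, C, H, W] per layer
        # snn_feats: list of SNN feature maps [T, B, C, H, W] with a time axis
        # weights:   optional per-layer loss weights (an illustrative knob)
        if weights is None:
            weights = [1.0] * len(ann_feats)
        loss = 0.0
        for w, a, s in zip(weights, ann_feats, snn_feats):
            s_mean = s.mean(dim=0)                            # average spikes over time steps
            loss = loss + w * F.mse_loss(s_mean, a.detach())  # ANN acts as a fixed teacher
        return loss

Detaching the ANN features keeps the teacher fixed, so gradients flow only into the SNN branch; summing a weighted term per layer is what makes the distillation "layer-wise".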

18 pages, 1234 KB  
Article
Energy-Efficient Spiking Segmenter for Frame and Event-Based Images
by Hong Zhang, Xiongfei Fan and Yu Zhang
Biomimetics 2023, 8(4), 356; https://doi.org/10.3390/biomimetics8040356 - 10 Aug 2023
Cited by 14 | Viewed by 2810
Abstract
Semantic segmentation predicts dense pixel-wise semantic labels and is crucial for autonomous environment perception systems. For applications on mobile devices, current research focuses on energy-efficient segmenters for both frame- and event-based cameras. However, no artificial neural network (ANN) can currently perform efficient segmentation on both types of images. This paper introduces the spiking neural network (SNN), a bionic model that is energy-efficient when implemented on neuromorphic hardware, and develops a Spiking Context Guided Network (Spiking CGNet) with substantially lower energy consumption and comparable performance for both frame- and event-based images. First, this paper proposes a spiking context guided block that can extract local features and context information with spike computations. On this basis, the directly trained SCGNet-S and SCGNet-L are established for both frame- and event-based images. Our method is verified on the frame-based dataset Cityscapes and the event-based dataset DDD17. On the Cityscapes dataset, SCGNet-S achieves results comparable to the ANN CGNet with 4.85× better energy efficiency. On the DDD17 dataset, Spiking CGNet outperforms other spiking segmenters by a large margin.
(This article belongs to the Special Issue Design and Control of a Bio-Inspired Robot)
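For readers curious how a context guided block might be "spiked", below is a self-contained PyTorch sketch assuming a CGNet-style layout: a local 3×3 convolution, a dilated "surrounding" convolution, and a sigmoid-gated global context branch, with the activations replaced by LIF neurons trained via a surrogate gradient. All class names and hyperparameters here are illustrative assumptions, not the authors' SCGNet implementation; in particular, the sigmoid gate is carried over from the original ANN CGNet rather than being a spike-only computation.

    import torch
    import torch.nn as nn

    class SurrogateSpike(torch.autograd.Function):
        # Heaviside spike in the forward pass, rectangular surrogate
        # gradient in the backward pass (a common choice for direct SNN training).
        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v >= 1.0).float()                 # fire when membrane crosses threshold 1.0
        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            return grad_out * ((v - 1.0).abs() < 0.5).float()

    class LIF(nn.Module):
        # Per-step leaky integrate-and-fire neuron; the caller carries the membrane state.
        def __init__(self, tau=2.0):
            super().__init__()
            self.decay = 1.0 / tau
        def forward(self, x, v):
            v = v + self.decay * (x - v)              # leaky integration of input current
            s = SurrogateSpike.apply(v)
            return s, v * (1.0 - s)                   # hard reset where a spike fired

    class SpikingCGBlock(nn.Module):
        # Hypothetical spiking variant of CGNet's context guided block.
        def __init__(self, channels, dilation=2):
            super().__init__()
            half = channels // 2
            self.f_loc = nn.Conv2d(channels, half, 3, padding=1, bias=False)  # local features
            self.f_sur = nn.Conv2d(channels, channels - half, 3, padding=dilation,
                                   dilation=dilation, bias=False)             # surrounding context
            self.bn = nn.BatchNorm2d(channels)
            self.lif = LIF()
            self.f_glo = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                       nn.Conv2d(channels, channels, 1),
                                       nn.Sigmoid())                          # global channel gate
        def forward(self, x_seq):                     # x_seq: [T, B, C, H, W] spike inputs
            out, v = [], 0.0
            for x in x_seq:                           # unroll over time steps
                joi = self.bn(torch.cat([self.f_loc(x), self.f_sur(x)], dim=1))
                s, v = self.lif(joi, v)               # spikes replace the PReLU activations
                out.append(s * self.f_glo(s))         # re-weight channels by global context
            return torch.stack(out)

Because the inter-layer traffic is binary spikes unrolled over T time steps, the multiply-accumulate operations reduce to additions on neuromorphic hardware, which is the source of the energy savings the abstract describes.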
