Search Results (2)

Search Parameters:
Keywords = VLSI retina

18 pages, 3575 KB  
Article
A High-Speed Low-Cost VLSI System Capable of On-Chip Online Learning for Dynamic Vision Sensor Data Classification
by Wei He, Jinguo Huang, Tengxiao Wang, Yingcheng Lin, Junxian He, Xichuan Zhou, Ping Li, Ying Wang, Nanjian Wu and Cong Shi
Sensors 2020, 20(17), 4715; https://doi.org/10.3390/s20174715 - 21 Aug 2020
Cited by 7 | Viewed by 6184
Abstract
This paper proposes a high-speed, low-cost VLSI system capable of on-chip online learning for classifying address-event representation (AER) streams from dynamic vision sensor (DVS) retina chips. The proposed system executes a lightweight statistical algorithm that extracts simple binary features from AER streams and classifies them with a Random Ferns classifier. Its multi-level pipelines and parallel processing circuits achieve a high throughput of up to 1 spike event per clock cycle for AER data processing. Thanks to the lightweight algorithm, the hardware system is realized in a low-cost, memory-centric paradigm. In addition, the system is capable of on-chip online learning, allowing it to adapt flexibly to different in-situ application scenarios. The extra overheads for on-chip learning in terms of time and resource consumption are quite low, as the training procedure of the Random Ferns is simple and requires few auxiliary learning circuits. An FPGA prototype of the proposed VLSI system was implemented on a Xilinx Zynq-7045 platform, consuming 9.5–96.7% of its memory and less than 11% of its computational and logic resources. Running at a clock frequency of 100 MHz, it achieved a peak processing throughput of up to 100 Meps (mega events per second) with an estimated power consumption of 690 mW, yielding a high energy efficiency of 145 Meps/W, or 145 events/μJ. We tested the prototype system on the MNIST-DVS, Poker-DVS, and Posture-DVS datasets and obtained classification accuracies of 77.9%, 99.4%, and 99.3%, respectively. Compared to prior works, our VLSI system achieves higher processing speeds, higher computing efficiency, comparable accuracy, and lower resource costs.
(This article belongs to the Special Issue Sensor Fusion for Object Detection, Classification and Tracking)
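
The abstract's key point is that Random Ferns training reduces to counter increments, which is what makes on-chip online learning cheap. The following is a minimal Python sketch of that idea, not the paper's hardware design: the `RandomFerns` class, the fern count, fern size, and feature dimensionality are illustrative assumptions, and the AER-stream feature extraction is abstracted away as a given binary vector.

```python
import numpy as np

class RandomFerns:
    """Minimal Random Ferns classifier with online (incremental) training.

    Each fern groups `fern_size` binary features; their bits form an index
    into a per-fern table of class counts. Training is a single counter
    increment per fern, which is why online learning is inexpensive.
    Inference combines the ferns semi-naive-Bayes style (log-probs add).
    NOTE: a software sketch under assumed parameters, not the authors' VLSI design.
    """

    def __init__(self, n_classes, n_ferns=50, fern_size=10, n_features=1024, seed=0):
        rng = np.random.default_rng(seed)
        # Each fern observes a random subset of the binary feature vector.
        self.feature_idx = rng.integers(0, n_features, size=(n_ferns, fern_size))
        self.bit_weights = 1 << np.arange(fern_size)                 # bits -> leaf index
        # Ones-initialisation acts as a Laplace prior (avoids log(0)).
        self.counts = np.ones((n_ferns, 1 << fern_size, n_classes))

    def _fern_indices(self, binary_features):
        bits = binary_features[self.feature_idx]                     # (n_ferns, fern_size)
        return bits @ self.bit_weights                               # (n_ferns,)

    def train_one(self, binary_features, label):
        # Online learning: one memory increment per fern.
        idx = self._fern_indices(binary_features)
        self.counts[np.arange(len(idx)), idx, label] += 1

    def predict(self, binary_features):
        idx = self._fern_indices(binary_features)
        leaf = self.counts[np.arange(len(idx)), idx]                 # (n_ferns, n_classes)
        # Class-conditional leaf probabilities P(leaf | class) per fern.
        p = leaf / self.counts.sum(axis=1)
        # Semi-naive Bayes: ferns treated as independent, so log-probs add.
        return int(np.argmax(np.log(p).sum(axis=0)))
```

In use, calling `train_one` once per labeled sample and `predict` on incoming samples mirrors the online setting; the hardware analog of `train_one` is essentially a memory write, which is consistent with the low learning overhead the abstract reports.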

17 pages, 9104 KB  
Article
Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot
by Erik Vanhoutte, Stefano Mafrica, Franck Ruffier, Reinoud J. Bootsma and Julien Serres
Sensors 2017, 17(3), 571; https://doi.org/10.3390/s17030571 - 11 Mar 2017
Cited by 18 | Viewed by 7758
Abstract
For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight, and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M²APix) analog silicon retina, in this article we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6 × 10⁻⁷ to 1.6 × 10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). The contrast "time of travel" between two adjacent light-sensitive pixels was determined both by thresholding and by cross-correlating the two pixels' signals, with a measurement frequency of up to 5 kHz for the 10 local motion sensors of the M²APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources.
(This article belongs to the Special Issue UAV-Based Remote Sensing)
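
To make the "time of travel" principle concrete, here is a minimal NumPy sketch of the cross-correlation variant described in the abstract: the lag that maximizes the cross-correlation of two adjacent pixels' signals is taken as the contrast's travel time, and the optical flow is the inter-pixel viewing angle divided by that delay. The function name, arguments, and simple peak-picking are illustrative assumptions, not the authors' embedded implementation.

```python
import numpy as np

def time_of_travel_of(sig_a, sig_b, fs_hz, inter_pixel_angle_deg, max_lag=None):
    """Estimate local optical flow (deg/s) from two adjacent photoreceptor
    signals via the cross-correlation "time of travel" method.

    NOTE: illustrative sketch. A real implementation would band-pass filter
    the pixel signals and reject low-confidence peaks, which this omits.
    """
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")       # peak at lag > 0: b lags a
    lags = np.arange(-len(a) + 1, len(b))
    if max_lag is not None:                      # restrict to plausible delays
        keep = np.abs(lags) <= max_lag
        corr, lags = corr[keep], lags[keep]
    lag = lags[np.argmax(corr)]
    if lag == 0:
        return float("nan")                      # no measurable delay: reject
    dt = lag / fs_hz                             # time of travel (s), signed
    return inter_pixel_angle_deg / dt            # optical flow (deg/s)
```

With illustrative numbers, a 2 kHz sampling rate, a 4° inter-pixel angle, and a contrast delayed by 8 samples give 4 / (8/2000) = 1000°/s, the top of the measured range; the sign of the lag encodes the motion direction.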
