Open Access Article
Sensors 2019, 19(8), 1751; https://doi.org/10.3390/s19081751

Bandwidth Modeling of Silicon Retinas for Next Generation Visual Sensor Networks

Wireless and Multimedia Networking Research Group, Faculty of Science, Engineering and Computing, Kingston University, Penrhyn Rd, Kingston upon Thames KT1 2EE, UK
* Author to whom correspondence should be addressed.
Received: 23 January 2019 / Revised: 23 March 2019 / Accepted: 3 April 2019 / Published: 12 April 2019
(This article belongs to the Special Issue Visual Sensor Networks and Related Applications)

Abstract

Silicon retinas, also known as Dynamic Vision Sensors (DVS) or event-based visual sensors, offer great advantages in terms of low power consumption, low bandwidth, wide dynamic range and very high temporal resolution. Owing to these advantages over conventional vision sensors, DVS devices are attracting increasing attention in applications such as drone surveillance, robotics and high-speed motion photography. The output of such sensors is a sequence of events rather than a series of frames, as with classical cameras. Estimating the data rate of the event stream produced by such sensors is needed for the appropriate design of transmission systems that include them. In this work, we propose to exploit information about the scene content and sensor speed to support such estimation, and we identify suitable metrics to quantify the complexity of the scene for this purpose. According to the results of this study, the event rate shows an exponential relationship with the metric associated with the complexity of the scene and a linear relationship with the speed of the sensor. Based on these results, we propose a two-parameter model of the dependency of the event rate on scene complexity and sensor speed. The model achieves a prediction accuracy of approximately 88.4% for the outdoor environment and an overall prediction accuracy of approximately 84%.
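The dependencies described in the abstract (exponential in scene complexity, linear in sensor speed) can be combined in a two-parameter form such as R(C, v) = a · v · exp(b · C). This functional form and the parameter names are an assumption for illustration, not necessarily the exact model of the paper; the sketch below fits the two parameters from samples by log-linear least squares:

```python
import numpy as np

def event_rate(complexity, speed, a, b):
    """Hypothetical two-parameter event-rate model:
    linear in sensor speed, exponential in scene complexity."""
    return a * speed * np.exp(b * complexity)

# Fit a and b from (complexity, speed, rate) samples.
# Taking logs linearizes the model: log(R / v) = log(a) + b * C.
C = np.array([0.2, 0.4, 0.6, 0.8])      # scene-complexity metric
v = np.array([1.0, 2.0, 1.5, 2.5])      # sensor speed
R = 3.0 * v * np.exp(1.2 * C)           # synthetic rates (a=3.0, b=1.2)

coef = np.polyfit(C, np.log(R / v), 1)  # returns [b, log(a)]
b_hat, a_hat = coef[0], np.exp(coef[1])
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with measured event rates, one would fit the same log-linear regression per environment (e.g. indoor vs. outdoor).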
Keywords: neuromorphic engineering; dynamic and active-pixel vision sensor; scene complexity; neuromorphic event rate; gradient approximation; scene texture; Sobel; Roberts; Prewitt

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Khan, N.; Martini, M.G. Bandwidth Modeling of Silicon Retinas for Next Generation Visual Sensor Networks. Sensors 2019, 19, 1751.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.