Article

Embedded Object Detection with Custom LittleNet, FINN and Vitis AI DCNN Accelerators

Michal Machura, Michal Danilowicz and Tomasz Kryjak *

Embedded Vision Systems Group, Department of Automatic Control and Robotics, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland

* Author to whom correspondence should be addressed.
Academic Editors: Aatmesh Shrivastava, Vishal Saxena and Xinfei Guo
J. Low Power Electron. Appl. 2022, 12(2), 30; https://doi.org/10.3390/jlpea12020030
Received: 4 April 2022 / Revised: 6 May 2022 / Accepted: 13 May 2022 / Published: 20 May 2022
(This article belongs to the Special Issue Hardware for Machine Learning)
Object detection is an essential component of many systems used, for example, in advanced driver assistance systems (ADAS) or advanced video surveillance systems (AVSS). Currently, the highest detection accuracy is achieved by solutions based on deep convolutional neural networks (DCNNs). Unfortunately, this comes at the cost of high computational complexity; hence, work on the widely understood acceleration of these algorithms is very important and timely. In this work, we compare three different DCNN hardware accelerator implementation methods: coarse-grained (a custom accelerator called LittleNet), fine-grained (FINN) and sequential (Vitis AI). We evaluate the approaches in terms of object detection accuracy, throughput and energy usage on the VOT and VTB datasets, and we also discuss the limitations of each method considered. We describe the whole DNN implementation process, including architecture design, training, quantisation and hardware implementation. We used two custom DNN architectures to obtain higher accuracy, higher throughput and lower energy consumption. The first was implemented in SystemVerilog and the second with the FINN tool from AMD Xilinx. Both approaches were then compared with the Vitis AI tool from AMD Xilinx. The final implementations were tested on the Avnet Ultra96-V2 development board with a Zynq UltraScale+ MPSoC ZU3EG device. For the two different DNN architectures, we achieved a throughput of 196 fps for our custom accelerator and 111 fps for FINN. The same networks implemented with Vitis AI achieved 123.3 fps and 53.3 fps, respectively.
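The quantisation step mentioned in the abstract typically maps floating-point network weights to low-bit integers before hardware deployment. The following is a minimal illustrative sketch of symmetric, per-tensor uniform quantisation, a common scheme in FPGA DCNN flows; it is not the authors' exact method, and the function names are hypothetical:

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantisation of a list of float weights.

    Maps floats to signed integers in [-(2^(bits-1) - 1), 2^(bits-1) - 1]
    using a single per-tensor scale factor, a scheme commonly used
    when preparing DCNNs for fixed-point hardware accelerators.
    """
    qmax = 2 ** (bits - 1) - 1                     # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    # Round to the nearest integer level and clamp to the valid range.
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from integer levels."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.01]
q, s = quantize(w, bits=8)       # q = [50, -127, 1], s = 0.01
w_hat = dequantize(q, s)         # close to w, within about scale/2
```

The reconstruction error of each weight is bounded by half the scale factor, which is why lowering the bit width (as done aggressively in FINN-style fine-grained accelerators) trades accuracy for smaller, faster hardware.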
Keywords: DCNN; AI; FPGA; FINN; Vitis AI; GCIoU; hardware accelerator; object detection
MDPI and ACS Style

Machura, M.; Danilowicz, M.; Kryjak, T. Embedded Object Detection with Custom LittleNet, FINN and Vitis AI DCNN Accelerators. J. Low Power Electron. Appl. 2022, 12, 30. https://doi.org/10.3390/jlpea12020030


