Open Access Article

Mixed YOLOv3-LITE: A Lightweight Real-Time Object Detection Method

1 The Institute of Geospatial Information, Strategic Support Force Information Engineering University, Zhengzhou 450001, China
2 Beijing Institute of Remote Sensing Information, Beijing 100192, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(7), 1861; https://doi.org/10.3390/s20071861
Received: 16 February 2020 / Revised: 25 March 2020 / Accepted: 26 March 2020 / Published: 27 March 2020
(This article belongs to the Special Issue Visual Sensor Networks for Object Detection and Tracking)
Embedded and mobile smart devices face problems related to limited computing power and excessive power consumption. To address these problems, we propose Mixed YOLOv3-LITE, a lightweight real-time object detection network that can be used with devices lacking a graphics processing unit (GPU) and with mobile devices. Using YOLO-LITE as the backbone network, Mixed YOLOv3-LITE supplements it with residual blocks (ResBlocks) and parallel high-to-low-resolution subnetworks, fully utilizes shallow network features while increasing network depth, and uses "shallow and narrow" convolution layers to build the detector, thereby achieving an optimal balance between detection precision and speed on non-GPU computers and portable terminal devices. The experimental results obtained in this study reveal that the size of the proposed Mixed YOLOv3-LITE network model is 20.5 MB, which is 91.70%, 38.07%, and 74.25% smaller than YOLOv3, tiny-YOLOv3, and SlimYOLOv3-spp3-50, respectively. The mean average precision (mAP) achieved using the PASCAL VOC 2007 dataset is 48.25%, which is 14.48% higher than that of YOLO-LITE. When the VisDrone 2018-Det dataset is used, the mAP achieved with the Mixed YOLOv3-LITE network model is 28.50%, which is 18.50% and 2.70% higher than that of tiny-YOLOv3 and SlimYOLOv3-spp3-50, respectively. These results show that Mixed YOLOv3-LITE achieves higher efficiency and better performance on mobile terminals and other resource-constrained devices.
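To illustrate the kind of building block the abstract refers to, the following PyTorch-style sketch shows a lightweight residual block that combines a narrow 1x1/3x3 bottleneck with a skip connection, in the spirit of the "shallow and narrow" convolutions plus ResBlocks described above. This is an illustrative sketch only, not the authors' released code; the class name, channel widths, and activation choice are assumptions made for the example.

```python
# Illustrative sketch only: a narrow residual block for a lightweight detector.
# The structure (1x1 reduce -> 3x3 expand -> skip connection) is assumed for
# illustration and is not taken from the Mixed YOLOv3-LITE source code.
import torch
import torch.nn as nn


class LiteResBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        hidden = channels // 2  # narrow bottleneck keeps parameters and FLOPs low
        self.conv1 = nn.Conv2d(channels, hidden, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(hidden)
        self.conv2 = nn.Conv2d(hidden, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        out = self.act(self.bn1(self.conv1(x)))
        out = self.act(self.bn2(self.conv2(out)))
        return x + out  # residual connection preserves shallow features


if __name__ == "__main__":
    block = LiteResBlock(64)
    x = torch.randn(1, 64, 104, 104)
    print(block(x).shape)  # torch.Size([1, 64, 104, 104])
```

The skip connection is what lets a deeper stack of such blocks reuse shallow feature maps without a large growth in model size, which is the trade-off the paper targets for non-GPU and mobile deployment.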
Keywords: object detection; computer vision; convolutional neural network; embedded system; real-time performance

MDPI and ACS Style

Zhao, H.; Zhou, Y.; Zhang, L.; Peng, Y.; Hu, X.; Peng, H.; Cai, X. Mixed YOLOv3-LITE: A Lightweight Real-Time Object Detection Method. Sensors 2020, 20, 1861.

