Open Access Article
Sensors 2018, 18(12), 4272; https://doi.org/10.3390/s18124272

An Improved YOLOv2 for Vehicle Detection

Sang, J. 1,2,*, Wu, Z. 1,2, Guo, P. 1,2, Hu, H. 1,2, Xiang, H. 1,2, Zhang, Q. 1,2 and Cai, B. 1,2
1 Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 40004, China
2 School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Received: 26 October 2018 / Revised: 23 November 2018 / Accepted: 30 November 2018 / Published: 4 December 2018
(This article belongs to the Special Issue Deep Learning-Based Image Sensors)

Abstract

Vehicle detection is one of the important applications of object detection in intelligent transportation systems. It aims to extract specific vehicle-type information from pictures or videos containing vehicles. To solve the problems of existing vehicle detection methods, such as the lack of vehicle-type recognition, low detection accuracy, and slow detection speed, a new vehicle detection model named YOLOv2_Vehicle, based on YOLOv2, is proposed in this paper. The k-means++ clustering algorithm was used to cluster the vehicle bounding boxes in the training dataset, and six anchor boxes of different sizes were selected. Considering that vehicles of different scales may influence the detection model, normalization was applied to improve the loss calculation for the width and height of bounding boxes. To improve the feature extraction ability of the network, a multi-layer feature fusion strategy was adopted, and the repeated convolutional layers in the higher layers were removed. Experimental results on the Beijing Institute of Technology (BIT)-Vehicle validation dataset demonstrated that the mean Average Precision (mAP) could reach 94.78%. The proposed model also showed excellent generalization ability on the CompCars test dataset, where the "vehicle face" differs considerably from that of the training dataset. Comparison experiments proved that the proposed method is effective for vehicle detection. In addition, network visualization demonstrated the excellent feature extraction ability of the proposed model.
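To illustrate the anchor-selection step described in the abstract, the following is a minimal sketch (not the authors' released code) of clustering ground-truth box sizes with a k-means-style loop and the 1 − IoU distance commonly used for YOLOv2 anchors. The function names, random initialization (the paper uses k-means++ seeding), and the assumption that box widths/heights are already normalized are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between boxes and centroids using only (width, height),
    i.e. boxes aligned at a common corner. boxes: (N, 2), centroids: (k, 2) -> (N, k)."""
    inter = (np.minimum(boxes[:, None, 0], centroids[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centroids[None, :, 1]))
    area_b = (boxes[:, 0] * boxes[:, 1])[:, None]
    area_c = (centroids[:, 0] * centroids[:, 1])[None, :]
    return inter / (area_b + area_c - inter)

def kmeans_anchors(boxes, k=6, iters=100, seed=0):
    """Cluster (w, h) pairs into k anchor boxes; distance = 1 - IoU."""
    rng = np.random.default_rng(seed)
    # Simple random initialization for brevity; k-means++ seeding (as in the paper)
    # would instead spread the initial centroids according to distance.
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU (lowest 1 - IoU).
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids

# Example usage: train_wh would hold (width, height) pairs from the
# BIT-Vehicle training annotations (hypothetical variable name).
# anchors = kmeans_anchors(train_wh, k=6)
```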
Keywords: vehicle detection; object detection; YOLOv2; convolutional neural network
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Sang, J.; Wu, Z.; Guo, P.; Hu, H.; Xiang, H.; Zhang, Q.; Cai, B. An Improved YOLOv2 for Vehicle Detection. Sensors 2018, 18, 4272.
