Article

Vehicle Detection Using Deep Learning Technique in Tunnel Road Environments

JongBae Kim
Department of Software Engineering, Sejong Cyber University, Seoul 05000, Korea
Symmetry 2020, 12(12), 2012; https://doi.org/10.3390/sym12122012
Submission received: 13 November 2020 / Revised: 3 December 2020 / Accepted: 4 December 2020 / Published: 5 December 2020
(This article belongs to the Section Computer)

Abstract

This paper proposes a real-time method for detecting a vehicle driving ahead on a tunnel road. Unlike the general road environment, the tunnel environment has irregular and significantly lower illumination, with interference from tunnel lighting and from light reflected off driving vehicles. Environmental constraints are also severe owing to pollution from vehicle exhaust gas. The proposed method detects vehicles in real time in tunnel images using a model trained in advance with deep learning techniques. To detect the vehicle region in the tunnel environment, brightness smoothing and noise removal processes are carried out. Training images are generated by labeling the vehicle regions with the ground-truth method, and the YOLO v2 model, which showed the best performance among the deep learning algorithms compared, is applied; its training parameters are refined through experiments. Applied to various tunnel road environments, the proposed method achieves a vehicle detection rate of approximately 87% and a detection accuracy of approximately 94%.

1. Introduction

Recently, various technologies for autonomous vehicles have emerged. Support systems for safe driving, such as lane keeping, omnidirectional vehicle distance estimation, side vehicle detection, and headway maintenance, have been realized by combining various in-vehicle sensors [1,2,3]. This paves the way for the realization of fully autonomous driving. Among the sensors installed in vehicles to support autonomous driving, the charge-coupled device (CCD) vision sensor is the most important [4,5,6,7,8,9]. Most driving tasks depend on the driver's vision: road environment information is analyzed through visual information, the situation is recognized, and the steering action is finally determined. Thus, image recognition through vision sensors is important for the safety support of autonomous vehicles. Among the sensors supporting safe driving, the one most commonly installed on vehicles is the black-box device, a front and rear video recording device [10,11].
However, the current vehicle black-box system is used simply as a video recording device for accident identification. If black-box products were equipped with recognition functions, they could also support safe driving. Some products include image processing functions such as lane keeping, detection of the vehicle ahead starting to move, and traffic sign recognition. However, existing black-box systems capable of recognizing road conditions can be applied only in environments where lighting or road conditions do not change significantly. In poorly lit environments that differ from the general road environment, such as tunnels or bridges, the correct road situation cannot be recognized. Traffic accidents have been continuously occurring in tunnels in Korea, and the number of deaths is also increasing [12]. Figure 1 shows the scene of a traffic accident in a tunnel [13]. As shown in Figure 1, most traffic accidents in tunnels are caused by collisions with vehicles in front.
According to the traffic accident analysis system of the Korea Road Traffic Authority [12], the status of traffic accidents in tunnels from 2010 to 2019 is shown in Figure 2. The number of injured persons has risen with the increase in tunnel traffic accidents over the past four years. In addition, according to the analysis of tunnel traffic accident types from 2015 to 2019 in Figure 3, the ratio of vehicle-to-vehicle accidents was above 88%, owing to the nature of the tunnel. Figure 4 and Figure 5 break down tunnel traffic accidents by the road traffic regulation violated and by vehicle type. Approximately 60% of the tunnel traffic accidents in the last five years were due to driver negligence, while 25% were caused by failure to maintain a safe distance between vehicles. More than 76% of the vehicles involved were passenger vehicles.
This analysis of traffic accident big data [12] shows that it is necessary to guide vehicle drivers in tunnels to maintain a safe distance from the vehicle driving ahead and to remain attentive to the scene in front. A support system that informs the driver of the presence or absence of a vehicle ahead can therefore reduce the number of traffic accidents in tunnels.
In the last five years, 3218 tunnel traffic accidents occurred in Korea, in which 7472 people were killed or injured, i.e., approximately 2.32 casualties per tunnel accident. This risk is very high compared to the 1.52 casualties per accident in general traffic accidents. Therefore, a safe driving support system that informs the driver of a vehicle running in a tunnel, in real time, whether a vehicle is driving ahead can greatly reduce the number of traffic accidents in tunnels. Various methods for the detection and recognition of vehicles on roads, involving various sensors, have been proposed [14,15,16]. However, the deep learning models [17,18,19] applicable to vehicle recognition with image processing are mainly trained on vehicle images acquired in daytime road environments, and their vehicle recognition rate is very low in tunnel-like environments. Therefore, in this paper, we propose an omnidirectional vehicle detection method for the tunnel environment. The tunnel environment has various brightness levels and colors depending on the characteristics of the lighting installed in the tunnel. In this study, to minimize the effect of the tunnel illumination, the brightness of the image is smoothed and the effect of noise is minimized. Images of cars driving in the tunnel are learned using a deep learning model, and we propose a method to detect a vehicle running in a tunnel using the learned model.

2. Proposed Method

In this paper, we propose a real-time detection method for a vehicle in a tunnel environment. Figure 6 shows images of vehicles in a tunnel road environment.
Tunnel images have low illumination compared to general road images, diffuse reflections frequently occur due to the tunnel lighting, and the images contain noise from vehicle exhaust. In addition, it is challenging to detect the vehicle area visually at the entrance and exit of the tunnel owing to the sudden change in illumination. Therefore, in this study, a deep learning technique is applied to learn the vehicles running in the tunnel, and brightness balancing and noise removal steps are implemented to minimize the effects of the various tunnel illumination lights and of noise on the tunnel image. The input tunnel images are acquired from a black box installed in the vehicle.

2.1. Overview

We propose a method for the real-time detection of vehicles in black-box images acquired on tunnel roads. In contrast to general roads, tunnel image quality is degraded by irregular lighting, diffuse reflection from the tunnel lighting, light reflected from the surfaces of driving vehicles, and vehicle exhaust gas. Images acquired on a tunnel road therefore include haze, light leakage, and blurring, and a vehicle detection method based on color and shape produces detection errors when applied to them. In the proposed method, image brightness equalization and noise removal are therefore performed before vehicle detection.
Figure 7 shows a flowchart of the vehicle detection process proposed in this paper. The black-box image is a 1920 × 1080 pixel, full-color high-definition (HD) image, which takes a long time to process. In this study, the image was reduced to half size by applying bilinear interpolation. In addition, illuminance smoothing was performed to correct the brightness of the image, and noise was removed by applying an average-value filter to the pixel values. In the subsequent step, the execution time was minimized by selectively using only the middle area of the image, where the vehicle driving ahead appears, rather than the entire input image; a sketch of this front end follows. In the training stage, the YOLO v2 model was trained on images previously labeled with the ground-truth method. In the final vehicle detection step, the resulting vehicle detector was used to detect the position of the vehicle in the tunnel image.
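This front end is straightforward to prototype. The following is a minimal sketch in Python with OpenCV (the paper's experiments used MATLAB); the mean-filter kernel size and the bounds of the middle region are assumptions, since the paper does not specify them, and the function name is illustrative.

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Halve the frame with bilinear interpolation, smooth noise with a
    mean filter, and crop the middle band where the vehicle ahead appears."""
    h, w = frame_bgr.shape[:2]
    # 1920 x 1080 -> 960 x 540 via bilinear interpolation, as in the paper.
    small = cv2.resize(frame_bgr, (w // 2, h // 2), interpolation=cv2.INTER_LINEAR)
    # Average-value (mean) filter; the 3 x 3 kernel size is an assumption.
    denoised = cv2.blur(small, (3, 3))
    # Keep only the middle band of the image; the band limits are assumptions.
    sh = denoised.shape[0]
    return denoised[sh // 4 : 3 * sh // 4, :]
```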

2.2. Pre-Processing

In this step, to effectively detect vehicles in a tunnel image, the processing amount is reduced and the image quality is improved: the image size is reduced to lower the computational load, the brightness is corrected, and the noise generated by exhaust gas and by diffuse reflection from the tunnel lighting is removed. A black box, a video recording device for vehicles, is used to record the driving situation of the vehicle. Consequently, vehicle black-box devices require a wide angle of view and high image quality to store road images. To this end, most black-box devices are equipped with a CCD sensor that provides HD-level quality. The size of the image acquired from the black-box device is 1920 × 1080 pixels (24-bit red-green-blue (RGB) color). Processing the high-resolution image yields a high vehicle detection rate and an accurately detected position, but increases the amount of calculation; moreover, using high-resolution images to learn vehicles from tunnel road images increases the learning time. In the proposed method, the input HD black-box image is therefore reduced to half its original size through bilinear interpolation, which outputs smoother images than the nearest-neighbor interpolation method.

As stated above, most tunnel environments have low illumination compared to the general road environment, so the first pre-processing step improves the brightness of the input image. In the tunnel environment, the image quality is strongly degraded by the haze attributed to light scattering from vehicle exhaust gas, road dust, and tunnel lighting; this haze component is recorded in the black-box image as noise. Therefore, in the pre-processing step, the brightness of the image is improved and the included haze is minimized. The tunnel image I obtained from the black box can be expressed by
I(x) = J(x)t(x) + L(1 − t(x)),  (1)
where x denotes the two-dimensional (2D) image pixel coordinates, I(x) is the observed image, J(x) is the original image, L is the atmospheric light, and t(x) is the transmission map describing the portion of light that reaches the camera. The original image J(x) can thus be recovered from Equation (1) given the atmospheric light and the transmission map. The original image is estimated using the Retinex theory [20,21,22] to remove the noise: the atmospheric light L is first estimated using a dark channel prior, the transmission map t(x) is then estimated from L, and finally the noise-free original image J(x) is recovered from the estimated L and t(x).
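As an illustration, the sketch below estimates L with a dark channel prior, derives t(x) from it, and inverts Equation (1) for J(x). The patch size, the brightest-0.1%-pixels rule for L, and the omega and t_min constants follow common dark-channel practice and are assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    # Per-pixel minimum over the RGB channels, then a local minimum filter.
    min_rgb = img.min(axis=2).astype(np.float32)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def dehaze(I: np.ndarray, omega: float = 0.95, t_min: float = 0.1) -> np.ndarray:
    """I: float32 RGB image in [0, 1]. Returns the estimated J of Equation (1)."""
    dark = dark_channel(I)
    # Atmospheric light L: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    L = I[idx].mean(axis=0)
    # Transmission map t(x) = 1 - omega * dark_channel(I / L).
    t = 1.0 - omega * dark_channel(I / L)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Invert Equation (1): J(x) = (I(x) - L(1 - t(x))) / t(x).
    return np.clip((I - L * (1.0 - t)) / t, 0.0, 1.0)
```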
Figure 8 shows the result of estimating the image with improved brightness and removed noise from the tunnel image. Figure 8b shows the inverted image obtained by taking the complement of the image in Figure 8a, acquired from the black box. Figure 8c shows the dark channel image, representing the lowest brightness among the RGB channels of the image in (b). Using the image in Figure 8b, the atmospheric light L is estimated using the method reported by Dubok et al. [23], and the transmission map t(x) (Figure 8d) is estimated using L. From the atmospheric light L and the transmission map t(x), an image with smoothed brightness and removed noise is generated, as shown in Figure 8e. Finally, the complement is taken again to obtain the pre-processed image (Figure 8f).
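The Figure 8 sequence (complement, dehaze, complement again) can thus be expressed as a thin wrapper around the dehazing sketch above; the function name is illustrative.

```python
def enhance_tunnel_frame(I: np.ndarray) -> np.ndarray:
    inverted = 1.0 - I            # Figure 8b: complement of the input
    restored = dehaze(inverted)   # Figure 8c-e: dark channel, L, t(x), J(x)
    return 1.0 - restored         # Figure 8f: complement back
```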

2.3. Vehicle Detection

In this step, the vehicle region is detected in the tunnel image using the YOLO v2 model, with the vehicle detector created in the preceding training step. Rather than constructing a new model from scratch, the YOLO v2 vehicle detector was created by modifying a pre-trained model; ResNet-50 was used as the pre-trained model [24]. ResNet-50 is a 50-layer convolutional neural network trained on over one million images from the ImageNet database and can classify images into approximately 1000 categories; its input size is 224 × 224. The YOLO v2 network consists of two sub-networks: a feature extraction network and a detection network. The feature extraction network used in this study is the pre-trained ResNet-50 CNN model, and the detection network consists of several convolutional layers and a YOLO v2-specific layer. The parameters of the YOLO v2 network are the network input size, the anchor boxes, and the feature extraction network. The network input size was set to [224 224 3], and the number of anchor boxes was set to 11. The feature extraction network was taken up to the 40th rectified linear unit (ReLU) activation layer of ResNet-50. Figure 9 shows the neural network structure of the YOLO v2 model used for vehicle region detection by the proposed method.
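The paper implements this configuration in MATLAB. As a rough structural illustration only, the following PyTorch sketch builds a feature extractor truncated inside ResNet-50 and a small YOLO v2-style convolutional head predicting 11 anchor boxes per grid cell; the truncation point and head width are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_ANCHORS, NUM_CLASSES = 11, 1  # a single class: vehicle

# ImageNet-pretrained backbone, truncated after the third residual stage,
# which yields a 1024-channel, 14 x 14 feature map for a 224 x 224 input.
backbone = resnet50(weights="IMAGENET1K_V1")
features = nn.Sequential(*list(backbone.children())[:-3])

# YOLO v2-style detection head: per anchor, (tx, ty, tw, th, objectness)
# plus class scores, predicted at every grid cell.
head = nn.Sequential(
    nn.Conv2d(1024, 512, kernel_size=3, padding=1),
    nn.BatchNorm2d(512),
    nn.ReLU(inplace=True),
    nn.Conv2d(512, NUM_ANCHORS * (5 + NUM_CLASSES), kernel_size=1),
)

x = torch.randn(1, 3, 224, 224)   # the paper's input size [224 224 3]
out = head(features(x))           # -> (1, 66, 14, 14) grid of predictions
```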

3. Experimental Results

To evaluate the proposed method, an experiment was carried out on 1920 × 1080 24-bit color images from driving videos acquired with a car black box in various tunnels. The experiment was carried out using MATLAB. For the training data, we estimated the anchor box sizes and the number of anchors of the YOLO v2 model that most effectively represent the vehicle region, based on the sizes and the width/height ratios of the vehicle regions designated by the ground-truth method. During model training, it is important to use anchors that match the size distribution of the vehicle regions in the training images when extracting feature information. The anchor box sizes and the number of candidates most effective for the vehicle region sizes are set through cross-comparison of the labeled vehicle regions with the vehicle regions detected during the experiment; a sketch of this estimation follows. Figure 10 shows the size and width/length ratio of the vehicle regions in the experimental data, and Figure 11 shows the mean intersection-over-union (IoU) of the detected vehicle regions according to the number of YOLO anchor boxes.
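A common way to produce this analysis is k-means clustering of the ground-truth box sizes under an IoU distance; a minimal numpy sketch follows, with illustrative function names.

```python
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (N, 2) boxes and (K, 2) anchors given as (width, height)."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def estimate_anchors(boxes: np.ndarray, k: int = 11, iters: int = 100):
    """k-means on box (width, height) with 1 - IoU as the distance."""
    rng = np.random.default_rng(0)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        assign = iou_wh(boxes, anchors).argmax(axis=1)  # nearest anchor by IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    mean_iou = iou_wh(boxes, anchors).max(axis=1).mean()
    return anchors, mean_iou  # plotting mean_iou vs. k gives a Figure 11-style curve
```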
According to the experiment, the number of anchor boxes that most effectively detects the vehicle area in the training data was 11, and the YOLO learner was used with this setting. For the network training, stochastic gradient-descent optimization was used, the initial learning rate was 0.0001, the mini-batch size for each training iteration was set to 64, and the maximum number of iterations was set to 30. Of the data, 70% were used for training, 15% for validation, and the remaining 15% for testing. Figure 12 shows the precision and recall obtained with the YOLO v2 vehicle detector generated by the training; the average precision of the resulting vehicle detector was approximately 95%. Figure 13 shows the results of applying feature information-based vehicle detectors, namely Aggregated Channel Features (ACF) [24,25], Fast R-CNN [26,27], and the Single Shot Detector (SSD) [28,29], as well as the proposed method, to detect vehicles in various tunnel environments. The proposed method provides good results for the detection of vehicles in a tunnel environment; however, a vehicle cannot be correctly detected in road portions where a sudden change in illuminance occurs, such as the tunnel entrance and exit.
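For reference, the average precision summarizing a precision-recall curve such as Figure 12 can be computed as below. This is a generic sketch, not the paper's evaluation code; it assumes each detection has already been marked as matched or unmatched against the ground truth at the 50% overlap criterion described later in this section.

```python
import numpy as np

def average_precision(scores, matched, num_gt: int) -> float:
    """scores: detection confidences; matched: bool flags (True = true positive);
    num_gt: total number of ground-truth vehicle boxes."""
    order = np.argsort(scores)[::-1]            # sort detections by confidence
    m = np.asarray(matched, dtype=bool)[order]
    tp = np.cumsum(m)
    fp = np.cumsum(~m)
    recall = tp / num_gt
    precision = tp / (tp + fp)
    # Step-wise integration of the precision-recall curve.
    return float(np.sum(np.diff(recall, prepend=0.0) * precision))
```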
The ACF-based vehicle detector decomposes the training vehicle images into 10 feature channels and reduces them in multiple steps to calculate the features of the vehicle region. To classify the vehicle region features with the AdaBoost algorithm, only the regions where vehicle features are located are classified in stages using several weak classifiers; the maximum number of weak classifiers was set to 2048 and the number of learning iterations to 10. The Fast R-CNN-based vehicle detector uses a deep convolutional neural network based on regions of interest; VGG-16 was used as the pre-trained model, the mini-batch size was set to 16, the initial learning rate to 0.0001, and the maximum number of epochs to 30. The SSD-based vehicle detector uses the pre-trained ResNet-50 model for feature extraction and stochastic gradient descent with momentum for learning; the initial learning rate was set to 0.0001, the mini-batch size to 16, and the maximum number of epochs to 30. According to the experiment, the proposed method provided good results for vehicle detection in tunnels.
The ACF-based vehicle detection method [27] could not detect vehicles at a distance or at the entrance and exit of a tunnel. The Fast R-CNN-based method [28] showed the lowest detection rate, owing to its use of a vehicle model learned on general roads rather than in tunnels. The SSD-based method [29] could not detect vehicles located at a distance. The proposed method achieves a relatively high vehicle detection rate regardless of the distance.
Comparing the vehicle detection rates in various tunnel environments shows an accuracy improvement of approximately 10.7% with the introduction of the pre-processing; Table 1 compares the vehicle detection rates with and without the pre-processing step. A vehicle cannot be detected when two or more vehicles overlap owing to lane changes while driving, nor when a vehicle drives closely behind a large bus or truck. According to the experiment, the average vehicle detection rate of the proposed method was approximately 86.8%, and the vehicle detection accuracy was approximately 94.1%. A detection was judged successful if the detected region overlapped the vehicle region labeled in advance by the ground-truth method by approximately 50% or more, i.e., the standard intersection-over-union test written out below.
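A minimal corner-format implementation of this criterion, with illustrative names:

```python
def iou(a, b) -> float:
    """a, b: boxes as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def detection_success(pred, gt, threshold: float = 0.5) -> bool:
    """The paper's success criterion: approximately 50% or more overlap."""
    return iou(pred, gt) >= threshold
```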

4. Conclusions and Future Work

In this paper, we proposed a method to detect a vehicle driving ahead in a tunnel environment. In the proposed scheme, a vehicle detector was created with a YOLO v2 learner trained on road images acquired in various tunnel environments. To increase the accuracy of vehicle detection in the tunnel environment, detection performance was improved by applying noise reduction and illuminance smoothing to the tunnel image in advance. Among the several deep learning detectors applied, the YOLO v2 network was the most effective for vehicle detection in a tunnel environment. However, detecting vehicles at the entrance and exit of the tunnel remained challenging owing to the sudden change in brightness. We intend to continue with studies on vehicle detection using Kalman filters, estimation of the distance between vehicles in the tunnel, and discrimination of braking through the detection of brake lights.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1F1A106890011).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Butakov, V.A.; Ioannou, P. Personalized Driver/Vehicle Lane Change Models for ADAS. IEEE Trans. Veh. Technol. 2015, 64, 4422–4431. [Google Scholar] [CrossRef]
  2. Li, L.; Wen, D.; Zheng, N.-N.; Shen, L.-C. Cognitive Cars: A New Frontier for ADAS Research. IEEE Trans. Intell. Transp. Syst. 2011, 13, 395–407. [Google Scholar] [CrossRef]
  3. Noy, I.Y.; Shinar, D.; Horrey, W.J. Automated driving: Safety blind spots. Saf. Sci. 2018, 102, 68–78. [Google Scholar] [CrossRef]
  4. Xing, Y.; Lv, C.; Wang, H.; Wang, H.; Ai, Y.; Cao, D.; Velenis, E.; Wang, F.-Y. Driver Lane Change Intention Inference for Intelligent Vehicles: Framework, Survey, and Challenges. IEEE Trans. Veh. Technol. 2019, 68, 4377–4390. [Google Scholar] [CrossRef] [Green Version]
  5. Al-Sultan, S.; Al-Bayatti, A.H.; Zedan, H. Context-aware driver behavior detection system in intelligent transportation systems. IEEE Trans. Veh. Technol. 2013, 62, 4264–4275. [Google Scholar] [CrossRef]
  6. Winner, H.; Hakuli, S.; Singer, C. Handbook of Driver Assistance Systems: Basic Information Components and Systems for Active Safety and Comfort; Springer International Publishing: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  7. Zhu, L.; Yu, F.R.; Wang, Y.; Ning, B.; Tang, T. Big Data Analytics in Intelligent Transportation Systems: A Survey. IEEE Trans. Intell. Transp. Syst. 2019, 20, 383–398. [Google Scholar] [CrossRef]
  8. King, R. Traffic management in a connected or autonomous vehicle environment. In Proceedings of the Autonomous Passenger Vehicles, London, UK, 14 May 2015; pp. 1–20. [Google Scholar] [CrossRef]
  9. An, S.-H.; Lee, B.-H.; Shin, D.-R. A survey of intelligent transportation systems. In Proceedings of the 2011 Third International Conference on Computational Intelligence, Communication Systems and Networks, Bali, Indonesia, 26–28 July 2011; pp. 332–337. [Google Scholar] [CrossRef]
  10. Prasad, M.J.; Arundathi, S.; Anil, N.; Harshikha; Kariyappa, B.S. Automobile black box system for accident analysis. In Proceedings of the 2014 International Conference on Advances in Electronics Computers and Communications, Bangalore, India, 10–11 October 2014; pp. 1–5. [Google Scholar] [CrossRef]
  11. Chet, N.C. Design of black box for moving vehicle warning system. In Proceedings of the Student Conference on Research and Development, Putrajaya, Malaysia, 25–26 August 2003; pp. 193–196. [Google Scholar]
  12. Korea Traffic Accident Analysis Systems. Available online: http://taas.koroad.or.kr/ (accessed on 10 September 2020).
  13. Traffic Accident Press Release in Tunnel (Up-Down, Left-Right). Available online: https://www.socialfocus.co.kr/news/articleView.html?idxno=7398 (accessed on 14 September 2020).
  14. Veres, M.; Moussa, M. Deep Learning for Intelligent Transportation Systems: A Survey of Emerging Trends. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3152–3168. [Google Scholar] [CrossRef]
  15. Ma, X.; Dai, Z.; He, Z.; Ma, J.; Wang, Y.; Wang, Y. Learning traffic as images: A deep convolutional neural network for large-scale transportation network speed prediction. Sensors 2017, 17, 818. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Peng, X.; Li, Y.; Wei, X.; Luo, J.; Murphey, Y.L. Traffic sign recognition with transfer learning. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–7. [Google Scholar]
  17. Lee, K.B.; Shin, H.S. An Application of a Deep Learning Algorithm for Automatic Detection of Unexpected Accidents Under Bad CCTV Monitoring Conditions in Tunnels. In Proceedings of the 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML), Istanbul, Turkey, 26–28 August 2019; pp. 7–11. [Google Scholar]
  18. Zhao, K.; Zhu, X.; Jiang, H.; Zhang, C.; Wang, Z.; Fu, B. Dynamic loss for one-stage object detectors in computer vision. Electron. Lett. 2018, 54, 1433–1434. [Google Scholar] [CrossRef]
  19. Pan, M.; Liu, Y.; Cao, J.; Li, Y.; Li, C.; Chen, C.-H. Visual Recognition Based on Deep Learning for Navigation Mark Classification. IEEE Access 2020, 8, 32767–32775. [Google Scholar] [CrossRef]
  20. Dollar, P.; Appel, R.; Belongie, S.; Perona, P. Fast Feature Pyramids for Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1532–1545. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  23. He, R.; Guo, X.; Shi, Z. SIDE—A Unified Framework for Simultaneously Dehazing and Enhancement of Nighttime Hazy Images. Sensors 2020, 20, 5300. [Google Scholar] [CrossRef] [PubMed]
  24. Yoon, J.; Choe, Y. Retinex Based Image Enhancement via General Dictionary Convolutional Sparse Coding. Appl. Sci. 2020, 10, 4395. [Google Scholar] [CrossRef]
  25. Ye, X.; Yang, H.; Li, C.; Jia, Y.; Li, P. A Gray Scale Correction Method for Side-Scan Sonar Images Based on Retinex. Remote Sens. 2019, 11, 1281. [Google Scholar] [CrossRef] [Green Version]
  26. Park, D.; Park, H.; Han, D.K.; Ko, H. Single Image Dehazing with Image Entropy and Information Fidelity. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 4037–4041. [Google Scholar]
  27. Kim, J. Efficient Vehicle Detection and Distance Estimation Based on Aggregated Channel Features and Inverse Perspective Mapping from a Single Camera. Symmetry 2019, 11, 1205. [Google Scholar] [CrossRef] [Green Version]
  28. Hsu, S.-C.; Huang, C.-L. Vehicle detection using simplified fast R-CNN. In Proceedings of the 2018 International Workshop on Advanced Image Technology (IWAIT), Chiang Mai, Thailand, 7–9 January 2018; pp. 1–3. [Google Scholar]
  29. Chen, Q.; Huang, N.; Zhou, J.; Tan, Z. An SSD Algorithm Based on Vehicle Counting Method. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 7673–7677. [Google Scholar]
Figure 2. Traffic accidents in tunnels in the last 10 years (Korea Road Traffic Authority, traffic accident analysis system).
Figure 3. Current status of traffic accidents in tunnels over the past five years according to the accident type.
Figure 4. Traffic accidents in tunnels by violation of laws and regulations in the past five years.
Figure 5. Traffic accidents in tunnels in the last five years according to the vehicle type (Road Traffic Authority, traffic accident analysis system).
Figure 6. Examples of a vehicle driving ahead in a tunnel environment.
Figure 7. Flow chart of the proposed method.
Figure 8. Results of the pre-processing step. (a) Input image; (b) complement image of (a); (c) dark channel image; (d) transmission map; (e) image with the brightness improved and noise removed; (f) pre-processed image.
Figure 9. Neural network structure of the YOLO v2 model.
Figure 10. Accumulated graph of the size and width/length ratio of the vehicle area of the training data.
Figure 11. Comparison of the mean intersection-over-union of the detected vehicle regions according to the number of anchor boxes.
Figure 12. Recall and precision results of the vehicle detection experiment by the proposed method.
Figure 13. Vehicle detection results obtained by various vehicle detection models. (a) ACF-based vehicle detection; (b) Fast R-CNN-based vehicle detection; (c) SSD-based vehicle detection; (d) proposed method.
Table 1. Comparison of vehicle detection rates (%) with and without pre-processing steps.

Tunnel Scene    Without Pre-Processing    With Pre-Processing
#1              81.4                      87.6
#2              82.1                      88.2
#3              79.4                      85.0
#4              79.6                      86.5