Sensors
  • Article
  • Open Access

14 January 2024

Analyzing Performance of YOLOx for Detecting Vehicles in Bad Weather Conditions

1 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
2 Institute of Information and Communication, Yeungnam University, Gyeongsan 38541, Republic of Korea
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Design, Communication, and Control of Autonomous Vehicle Systems

Abstract

Recent advancements in computer vision technology, developments in sensors and sensor-collecting approaches, and the use of deep and transfer learning approaches have accelerated the development of autonomous vehicles. On-road vehicle detection has become a task of significant importance, especially due to the exponential growth of research on autonomous vehicles during the past few years. With high-end computing resources, a large number of deep learning models have recently been trained and tested for on-road vehicle detection. Vehicle detection can become challenging under varying light and weather conditions such as night, snow, sand, rain, and fog. In addition, vehicle detection should be fast enough to work in real time. This study investigates the use of a recent YOLO version, YOLOx, to detect vehicles in bad weather conditions including rain, fog, snow, and sandstorms. The model is tested on the publicly available benchmark dataset DAWN, whose images cover four bad weather conditions with different illuminations, backgrounds, and numbers of vehicles per frame. The efficacy of the model is evaluated in terms of precision, recall, and mAP. The results show the better performance of YOLOx-s over the YOLOx-m and YOLOx-l variants. YOLOx-s has 0.8983 and 0.8656 mAP for snow and sandstorms, respectively, while its mAP for rain and fog is 0.9509 and 0.9524, respectively. The models perform better in rainy and foggy weather than in snow and sandstorms. Further experiments indicate that enhancing image quality using multiscale retinex improves YOLOx performance.

1. Introduction

The last decade witnessed automation in several areas such as industry and engineering. The autonomous vehicle (AV) paradigm emerged as a potential solution to overcome the limitations of human drivers. In addition, AVs facilitate safe, comfortable, and eco-friendly future transportation. Road accidents currently cause an estimated 1.3 million deaths and approximately 3.5 million injuries, as stated by the World Health Organization [1]. Such accidents are predominantly caused by human errors of judgment, slow reflexes due to intoxication, fatigue, and drowsiness, and traffic violations [2,3]. The prime objective of AVs is to avoid such errors and violations and to eradicate or mitigate the chances of accidents; the final goal is to replace the human driver completely. The AV paradigm aims to reduce these accidents gradually through different levels of automation, as defined by the Society of Automotive Engineers (SAE). With each level of automation, the objective is to reduce human errors and mitigate the probability of accidents. Finally, with full automation (SAE level 5), humans will be fully replaced and AVs will make all decisions. Test drives are now underway for level 4 (high automation) by companies such as Tesla, Waymo, and Audi.
AVs are equipped with a large range and variety of sensors, both in terms of hardware and function. Such embedded sensors continuously collect data about the surrounding environment. The data collected by these sensors are later used for decision making, employing artificial intelligence (AI)-based frameworks. Radars, light detection and ranging (LiDAR), video cameras, infrared sensors, and thermal cameras are the commonly used AV sensors. The video camera provides continuous streams of the traffic environment and is the common choice for environment perception during the daytime. Vehicle detection using video data is an important task for AVs, enabling determination of drivable space, path planning, traffic flow estimation, vehicle counting, etc. However, vehicle detection using video data can become very challenging in varying traffic environments where multiple vehicles and multiple categories of vehicles are to be detected. Even worse, bad weather conditions like snow, haze, and sandstorms make vehicle detection much more challenging.
Two desirable properties of a vehicle detection approach are its capacity to operate in real time and its ability to perform detection under changing illumination and bad weather conditions [4]. Several studies have presented approaches for vehicle detection in different weather conditions [5]. Such studies utilize different kinds of information for vehicle detection, such as vehicle shadow [6], taillights [7], edge and color information [8,9], symmetry [10], etc. For vehicle detection, different machine learning and deep learning models have been adopted. Machine learning models require feature extraction; consequently, a large variety of features has been investigated, such as HOG [11], DPM [12], Haar or Haar-like [13], wavelet, SURF [14], LBP, PCA, and SIFT [15] features. These features are investigated with different machine learning models like AdaBoost, k nearest neighbor, decision tree, etc. However, the support vector machine model is reported to produce better results.
Several pre-trained deep learning models have been introduced during the past few years to reduce the training requirements for vision tasks, such as VGG16, ResNet, R-CNN, etc. The You Only Look Once (YOLO) series is one of the widely adopted models for object detection and has been well investigated in the literature. The YOLOx series is relatively new, and this study considers it in the context of bad weather conditions. This study makes the following contributions:
  • YOLOx has not been extensively studied, especially in the context of vehicle detection in bad weather conditions like snow and sandstorms. This study adopts three variants of YOLOx, namely the s, m, and l variants, for performance analysis in rainy, foggy, sandstorm, and snowy weather conditions.
  • Two strategies are adopted for experiments: using the original weather-affected dataset and using enhanced images. For image enhancement, a multiscale retinex approach is used in this study.
  • For reproducibility, experiments involve the publicly available DAWN dataset. Performance comparison is carried out using precision, recall, mean average precision, etc., and the performance of YOLOx is also compared with existing studies.
The rest of the paper is organized into four sections. Section 2 presents the related work. The methodology is presented in Section 3 while the experimental results and discussions are presented in Section 4. In the end, Section 5 concludes this study.

3. Methodology

This study uses the YOLOx model for vehicle detection in bad weather conditions, where visibility and illumination are low, and investigates the efficiency of three variants of YOLOx. Figure 1 shows the workflow of the adopted methodology for vehicle detection using YOLOx.
Figure 1. Architecture of the adopted methodology for vehicle detection.

3.1. YOLOx

YOLOx is an anchor-free improvement in the YOLO series that also adopts a decoupled head and a leading label assignment strategy [32]. YOLOv3, which is widely used for vehicle detection, as well as YOLOv4 and YOLOv5, are anchor-based approaches. The anchor-free detector of YOLOx reduces the number of design parameters, while the decoupled head improves convergence speed. Training time is improved by the SimOTA advanced label assignment. This study utilizes the pre-trained models, which can classify 80 different classes.

3.2. Dataset

For experiments, the publicly available dataset DAWN is used [33]. The DAWN dataset contains 1000 images from real traffic conditions in different weather conditions. It contains images of rain, fog, snow, and sandstorms. Data collection scenarios involve urban, highway, and freeway and contain single to multiple vehicles in a single image. Four types of vehicles are present in the dataset including cars (the predominant class), motorcycles, buses, and trucks. For each scenario, the number of images is different, as shown in Table 1.
Table 1. Images for each category in the dataset.

3.3. Vehicle Detection

The methodology adopted for vehicle detection in sand and snowstorms is shown in Figure 1. The images for snow, rain, fog, and sandstorms are input for preprocessing. The collected images have different dimensions, so image resizing is performed to transform each image to 640 × 640 dimensions, with rescaling carried out to maintain the height-to-width ratio. The processed images are then fed into a pre-trained YOLOx model for vehicle detection. The model’s performance is analyzed in terms of mean average precision and the precision–recall curve.
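The aspect-preserving resize described above can be sketched as follows; `letterbox_params` is an illustrative helper, not part of the YOLOx codebase, computing the scale and padding needed to fit an arbitrary frame into a 640 × 640 input.

```python
# Minimal sketch of the preprocessing step: scale an image of arbitrary size
# to fit a 640x640 input while preserving the height-to-width ratio, and pad
# the remainder. Function and variable names are illustrative.

def letterbox_params(width, height, target=640):
    """Return (new_w, new_h, pad_x, pad_y) for an aspect-preserving resize."""
    scale = min(target / width, target / height)
    new_w, new_h = int(round(width * scale)), int(round(height * scale))
    pad_x = (target - new_w) // 2
    pad_y = (target - new_h) // 2
    return new_w, new_h, pad_x, pad_y

# Example: a 1280x720 frame scales to 640x360 and is padded vertically.
print(letterbox_params(1280, 720))  # (640, 360, 0, 140)
```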

4. Results and Discussions

This section discusses the results of the ‘s’, ‘m’, and ‘l’ variants of the YOLOx series for vehicle detection in four bad weather conditions: rain, snow, fog, and sandstorms.

4.1. Experimental Setup

Experiments are performed on an Intel Core i7 machine running the Windows 10 operating system with 16 GB RAM. MATLAB 2022b is used for model implementation. The numbers of snow- and sandstorm-affected images used for model testing are different.

4.2. Results for Snow

Table 2 shows the precision results of all three variants of YOLOx for vehicle detection under snowstorm conditions. The precision is calculated for each class in the dataset. True positives are determined using the intersection over union (IoU): a detection counts as a true positive if its IoU with the ground truth is ≥ 0.5.
Table 2. Results for class-wise precision of YOLOx for snow.
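The IoU criterion used above can be sketched in a few lines of Python; the function names are illustrative and not taken from the evaluation code used in this study.

```python
# Hedged sketch of how a detection is scored as a true positive: compute the
# intersection over union (IoU) of a predicted box and a ground-truth box
# (both as [x1, y1, x2, y2]) and compare it against the 0.5 threshold.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, gt, threshold=0.5):
    return iou(pred, gt) >= threshold

print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # overlap 50, union 150 -> ~0.333
```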
Figure 2 illustrates the precision–recall curves for the YOLOx models in snowstorms. The plot indicates that the performance of YOLOx-s is better than the other variants for car, motorcycle, and truck detection. On the other hand, YOLOx-m shows better performance for bus detection. YOLOx-l has marginally lower precision for truck and car detection than YOLOx-s and YOLOx-m, respectively. The main emphasis is on the precision–recall curve of the ‘car’ class as it has the highest number of images. Precision for YOLOx-s is 0.9588 for the car class, which is better than the 0.9167 and 0.9317 from the YOLOx-m and YOLOx-l models, respectively.
Figure 2. Precision–recall curve for YOLOx in snow storms, (a) YOLOx-s, (b) YOLOx-m, and (c) YOLOx-l.

4.3. Results for Sand Storms

Class-wise precision results for the YOLOx model in sandstorms are given in Table 3. In contrast to results for snow conditions, where YOLOx-s performed better for car detection, YOLOX-l shows better performance for car detection in sandstorms with a 0.8543 precision compared to 0.8526 and 0.8501 from YOLOx-m and YOLOx-s, respectively. However, for motorcycle, bus, and truck detection in sandstorms, the precision scores for YOLOx-s are better. Figure 3 shows precision–recall curves for all models in sandstorms. It shows the mixed performance of models for different classes in the case of vehicle detection in sandstorms.
Table 3. Precision and recall of YOLOx series for sandstorms.
Figure 3. Precision–recall curve for YOLOx in sandstorms, (a) YOLOx-s, (b) YOLOx-m, and (c) YOLOx-l.

4.4. Results for Rainy Conditions

Table 4 shows the results of all three variants of YOLOx for vehicle detection under rainy conditions in terms of class-wise precision and recall. Average precision and average recall are calculated over all images. True positives are determined using the intersection over union (IoU): a prediction is correct if IoU ≥ 0.5.
Table 4. Class-wise precision and recall of YOLOx series for rain conditions.
Figure 4 shows the precision–recall curves for the YOLOx models in rainy conditions; Figure 4a–c show the curves for YOLOx-s, YOLOx-m, and YOLOx-l, respectively. They indicate that the performance of YOLOx-s is better than the other variants. The main emphasis is on the precision–recall curve of the ‘car’ class as it has the highest number of samples in the dataset. Precision for YOLOx-s is 0.9283 for the car class, which is better than the 0.9090 and 0.9124 for the YOLOx-m and YOLOx-l models, respectively. Motorcycle objects are few, which can yield high precision and better precision–recall curves.
Figure 4. Precision–recall curves for YOLOx-s for rainy conditions, (a) YOLOx-s, (b) YOLOx-m, and (c) YOLOx-l.

4.5. Results for Foggy Conditions

Results for the YOLOx models in foggy weather conditions are given in Table 5. YOLOx-s again achieves a better precision score of 0.9617 than the other variants for the car class; the same holds for the other classes. Figure 5 shows the precision–recall curves for all models in foggy conditions, indicating better performance of YOLOx-s compared to the other models.
Table 5. Class-wise precision and recall of YOLOx series for fog conditions.
Figure 5. Precision-recall curve for YOLOx in foggy conditions, (a) Curve for YOLOx-s model, (b) Curve for YOLOx-m model, and (c) Curve for YOLOx-l model.

4.6. Results Regarding Mean Average Precision

Besides the precision and precision–recall curve, the mAP metric is a widely used performance evaluation metric for object detection. It is especially important when the detection task involves multi-class detection. This study also uses mAP which is calculated using
$$\mathrm{mAP} = \frac{1}{N}\sum_{i=1}^{N} AP_i$$
where $N$ indicates the total number of classes, four in this case, and $AP_i$ indicates the average precision of class $i$.
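In code, this computation is a simple average over the per-class AP values; the AP numbers below are illustrative placeholders, not results from this study.

```python
# mAP = (1/N) * sum of AP_i over the N classes, here the four vehicle classes.

def mean_average_precision(ap_per_class):
    """Average the per-class AP values into a single mAP score."""
    return sum(ap_per_class.values()) / len(ap_per_class)

# Placeholder AP values for the four classes in the dataset:
aps = {"car": 0.96, "motorcycle": 0.90, "bus": 0.88, "truck": 0.86}
print(round(mean_average_precision(aps), 4))  # 0.9
```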
Table 6 shows mAP for all models for rain, fog, sand, and snow storms, demonstrating the better performance of the YOLOx-s model over other models.
Table 6. Experimental results for YOLOx concerning mAP.
For snowstorm weather, the performance of YOLOx-s is marginally better with an mAP of 0.8983 than YOLOx-m, which has an mAP of 0.8930. However, for sandstorms, it shows much better performance with 0.8656 mAP compared to 0.8476 and 0.8130 mAP scores of YOLOx-m and YOLOx-l, respectively. Similarly, YOLOx-s shows superior performance for rainy and foggy conditions with 0.9509 and 0.9524 mAP which is better than other variants.
Figure 6 demonstrates the precision–recall curves of all YOLOx variants for snow and sandstorms. Two important observations can be made: first, the YOLOx-s model proves more precise, with a better precision–recall curve; second, the models perform better in snowstorms than in sandstorms, where Figure 6 shows comparatively poor performance.
Figure 6. Precision–recall curve of all models for all classes in snow and sand.
Figure 7 illustrates the precision–recall curves of all YOLOx variants for rainy and foggy weather. Two noteworthy points emerge: first, the YOLOx-s model performs better for object detection; second, performance is better in foggy conditions, with Figure 7 showing comparatively poor results in rain.
Figure 7. Precision–recall curve of all models for all classes in rain and fog.
Table 7 shows the results regarding the number of ground truth objects and detected objects by each model for snow and sandstorms. As stated earlier, the images predominantly contain the ‘car’ class, followed by truck, bus, and motorcycle. The results indicate that the YOLOx-m and YOLOx-l detect a higher number of objects than the YOLOx-s model. However, these detections contain many false positive samples which degrade their overall performance.
Table 7. Results for original and detected vehicles from YOLOx.
Results regarding the number of ground truth objects and detected objects by each model for rainy and foggy conditions are provided in Table 8. As stated earlier, predominantly the images contain the ‘car’ class, followed by truck, bus, and motorcycle. The results indicate that the YOLOx-m and YOLOx-l detect a higher number of objects than the YOLOx-s model. However, these detections contain many false positive samples which degrade their overall performance.
Table 8. Original and detected vehicles from YOLOx.
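The effect described above can be sketched numerically: given counts of ground-truth objects, detections, and true positives, surplus detections become false positives that lower precision while adding little recall. The counts below are illustrative, not taken from Tables 7 and 8.

```python
# Why extra detections hurt: precision and recall computed from raw counts.

def precision_recall(num_gt, num_detected, num_true_positive):
    fp = num_detected - num_true_positive   # surplus detections are false positives
    fn = num_gt - num_true_positive         # missed ground-truth objects
    precision = num_true_positive / (num_true_positive + fp)
    recall = num_true_positive / (num_true_positive + fn)
    return precision, recall

# A model that outputs more boxes (many of them false positives) gains little
# recall but loses precision (illustrative counts):
print(precision_recall(num_gt=100, num_detected=95, num_true_positive=90))
print(precision_recall(num_gt=100, num_detected=120, num_true_positive=92))
```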

4.7. Performance of YOLO with Enhanced Images

The images in the dataset contain noise introduced by weather conditions like snow and rain, which affects the performance of YOLO. To improve the model’s performance, image enhancement is carried out before the images are fed to YOLO, as shown in Figure 8. In addition to the steps followed for vehicle detection using YOLO, an image enhancement strategy is adopted to improve image quality. This step enhances image quality using color restoration by multiscale retinex (MSR), adopted from [34].
Figure 8. Architecture of the methodology involving image enhancement for vehicle detection.
Contrary to single-scale retinex, which requires a trade-off between range compression and color rendition, MSR affords a better trade-off between local dynamic range and color. The following equation is used in MSR
$$R_{\mathrm{MSR}_i} = \sum_{n=1}^{N} w_n R_{n_i} = \sum_{n=1}^{N} w_n \left[\log I_i(x,y) - \log\big(F_n(x,y) * I_i(x,y)\big)\right]$$
where $N$ and $w_n$ denote the number of scales and the weight of scale $n$, respectively, $*$ denotes convolution, and $F_n(x,y) = C_n \exp\left[-(x^2+y^2)/2\sigma_n^2\right]$ is the Gaussian surround function.
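A per-pixel sketch of the MSR computation, assuming the $N$ Gaussian-blurred versions of the channel (the $F_n * I_i$ terms) have already been computed; the convolution itself and the color restoration step are omitted, and all input values and weights are illustrative.

```python
# R_MSR at one pixel of one channel: a weighted sum over scales of
# log(intensity) - log(Gaussian-blurred intensity at that scale).
import math

def msr_pixel(intensity, blurred, weights):
    """Sum over n of w_n * (log I - log(F_n * I)) at a single pixel."""
    return sum(w * (math.log(intensity) - math.log(b))
               for w, b in zip(weights, blurred))

# Three scales with equal weights, a common choice for MSR:
value = msr_pixel(intensity=120.0, blurred=[100.0, 90.0, 80.0],
                  weights=[1/3, 1/3, 1/3])
print(round(value, 4))
```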
MSR image enhancement helps improve light and color transformation which is expected to improve the performance of YOLO vehicle detection. A few images before and after restoration are presented in Figure 9.
Figure 9. Original and enhanced images using multiscale retinex, (a,b) rain-affected, (c,d) fog-affected, (e–h) enhanced images for rain and fog, (i,j) snow-affected, (k,l) sand-affected, and (m–p) enhanced images for snow and sand.
After improving the image quality, the same procedure is followed for vehicle detection as was carried out on the original images using YOLOx. Precision–recall curves for all variants of the YOLOx model are presented in comparison to the curves for the original images. Figure 10 shows a performance comparison of the YOLO variants before and after image enhancement. The results indicate improved performance, showing the potential of image enhancement to improve the vehicle detection performance of the YOLO model.
Figure 10. Precision–recall curves for YOLO variants using enhanced images, (a) YOLOx-s model performance for all classes, (b) YOLOx-m model performance for all classes, and (c) YOLOx-l model performance for all classes.

4.8. Speed and Floating Point Operations per Second

Speed and the number of floating point operations (FLOPs) are regarded as important parameters for evaluating computational performance and are considered a better measure than instructions per second. Table 9 shows the speed, number of parameters, and GFLOPs of the three YOLO variants used in this study.
Table 9. Speed, FLOPs, and number of parameters for YOLO models.
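As a rough illustration of how these quantities relate, inference latency can be approximated from a model’s per-image cost in GFLOPs and a device’s sustained throughput; all numbers below are hypothetical, not measurements from this study.

```python
# Back-of-envelope relation: latency ~ per-image FLOPs / sustained throughput.
# This ignores memory bandwidth and utilization, so it is an optimistic bound.

def est_latency_ms(gflops_per_image, device_gflops_per_s):
    return 1000.0 * gflops_per_image / device_gflops_per_s

# A hypothetical 26.8 GFLOP model on a device sustaining 2000 GFLOP/s:
print(round(est_latency_ms(26.8, 2000.0), 2))  # 13.4 ms per image
```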

4.9. Performance Comparison with Existing Studies

A comparative analysis with existing studies is also carried out to evaluate the efficacy of YOLOx. Table 10 compares [30,31] with the YOLOx variants for foggy conditions. The results show that YOLOx performs better. However, it must be noted that the CF-YOLO model is tested in hard foggy conditions, where vehicle detection is more difficult than in the light or medium fog conditions adopted in [31].
Table 10. Performance analysis of YOLOx concerning existing works.

4.10. Discussions

This study leverages the YOLOx model, a recent addition to the YOLO series, for vehicle detection in bad weather conditions. During the experiments, several important points were observed, which are discussed here.
The first problem originates from car-carrying trailers. As shown in Figure 11, YOLOx treats both as different vehicles and detects them separately. At the same time, the car carrying the trailer is also detected as a ‘truck’ which is a false positive. The yellow rectangle indicates YOLOx detection while the cyan rectangle shows the ground truths.
Figure 11. Car-carrying trailer detected as ‘truck’ in snow.
Secondly, another type of false positive from the YOLOx models comes in the form of a rickshaw classified as a ‘truck’. One such sample is shown in Figure 12, where the rickshaw is labeled as a truck.
Figure 12. A sample of false positive from YOLOx in snow.
Thirdly, several cases are observed where the YOLOx model detects only part of a vehicle as the vehicle, meaning that boundary identification is not proper. For example, Figure 13 shows one such instance where the wheel section of a truck is identified as the truck by the model.
Figure 13. Wrong detections from YOLOx in a sandstorm.
Finally, the performance of YOLOx for vehicle detection is affected primarily by the higher number of vehicles in a scene rather than by sand or snowstorms. For example, Figure 14a shows detection in snowy conditions where the road is covered by snow and moving traffic affects visibility. It can be observed that several vehicles are not detected by the model.
Figure 14. Missed vehicle from YOLOx in bad weather, (a) snow, and (b) sandstorm.
Similarly, Figure 14b shows a large number of vehicles in sandstorms. It can be seen that the visibility is better compared to Figure 14a. However, the model misses more than 15 vehicles on the scene. The model specifically shows inefficacy in detecting partially occluded and partial vehicles. It emphasizes the need for image correction and enhancement approaches to improve the detection performance of the YOLOx model.
Experiments involving rainy and foggy conditions also highlight several problems of the YOLOx model for vehicle detection. The first is that vehicle detection by YOLOx is affected mostly by the higher number of vehicles in a scene rather than by weather conditions. Figure 15a shows detection in rainy conditions where the road is covered by water and moving traffic affects visibility. The yellow rectangles indicate YOLOx detections while the cyan rectangles show the ground truths. It can be observed that nine vehicles are missed by the model. In Figure 15b, the visibility is better; however, the number of vehicles is higher. In this case as well, the model misses more than 15 vehicles. The model shows inefficacy in detecting partially occluded and partial vehicles.
Figure 15. Detections from YOLOx in bad weather, (a) Wrong detections in medium rain, and (b) Wrong detections in light rain.
Secondly, some obvious vehicles are missed by the model. Figure 16 contains only one vehicle; due to the water splash, the vehicle is occluded and the model is unable to detect it, although it is clearly visible.
Figure 16. Missed vehicle from YOLOx in bad weather.
Figure 17 shows another case of vehicle detection affected by rain. Where the windshield is partially blocked by rainwater, no vehicles are detected. Visibility is severely affected and vehicles are not visible except for their tail lights; YOLOx performs poorly and cannot detect them. This emphasizes the need for image correction and enhancement approaches to improve the detection performance of the YOLOx model.
Figure 17. No detection in screen covered by rainwater.
Finally, a few samples are observed where YOLOx detects vehicles that are not in the ground truth data. For example, Figure 18 shows one such instance where a car is detected by the YOLOx. The detected car is a reflection of an on-road car in the window of a house.
Figure 18. YOLOx car detection in window glass.
The YOLO model has shown promising results for object detection and is considered more efficient than CNN, Faster R-CNN, and other similar models owing to its end-to-end training, providing more robust and accurate results. However, weather-affected images contain noise in the form of raindrops, fog, or sand grains, which results in poor image quality. It is observed that the results of the YOLOx variants degrade under low contrast and poor lighting conditions; poor lighting also reduces contrast, which affects the model’s performance for vehicle detection.

5. Conclusions

The objective of this study is to analyze the efficacy of the YOLOx model for vehicle detection in bad weather conditions, particularly rain, fog, snow, and sandstorms. For experiments, the publicly available benchmark dataset DAWN is used, and the ‘s’, ‘m’, and ‘l’ variants of YOLOx are utilized. The results show that YOLOx-s often performs better for different classes of vehicles than its counterparts. It has a 0.8983 mAP for snowy conditions and achieves a 0.8656 mAP for sandstorms. Similarly, experimental results for rainy and foggy conditions demonstrate a better performance of YOLOx-s over the other two models, with an mAP of 0.9509 in rain and 0.9524 in fog. Overall, the models perform better in snowstorms than in sandstorms, and all models tend to perform better in fog than in rain. It is also observed that the performance of YOLOx degrades when an image has a higher number of vehicles, partially occluded vehicles, or low visibility, indicating the scope of image enhancement approaches for better performance. The models produced degraded results under poor lighting conditions where contrast is low; we intend to perform further experiments with image enhancement approaches. Moreover, the categorization of weather conditions into light, medium, and hard should also be taken into account, as weather intensity has a huge impact on YOLO performance. Performance comparison with other models such as Faster R-CNN will also be considered.

Author Contributions

Conceptualization, I.A.; Data curation, S.H.; Formal analysis, S.H.; Funding acquisition, Y.P.; Investigation, Y.P.; Methodology, I.A.; Project administration, G.K.; Software, G.K.; Validation, G.K.; Visualization, S.H.; Writing—original draft, I.A.; Writing—review & editing, Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A2B5B02086773) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03039493). This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2022R1I1A1A01070998).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Health Organization. Road Traffic Injuries. Available online: https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries (accessed on 25 September 2023).
  2. National Highway Traffic Safety Administration. National motor vehicle crash causation survey: Report to congress. Natl. Highw. Traffic Saf. Adm. Tech. Rep. Dot 2008, 811, 059. [Google Scholar]
  3. Ashraf, I.; Hur, S.; Shafiq, M.; Park, Y. Catastrophic factors involved in road accidents: Underlying causes and descriptive analysis. PLoS ONE 2019, 14, e0223473. [Google Scholar] [CrossRef] [PubMed]
  4. Xia, Y.; Shi, X.; Song, G.; Geng, Q.; Liu, Y. Towards improving quality of video-based vehicle counting method for traffic flow estimation. Signal Process. 2016, 120, 672–681. [Google Scholar] [CrossRef]
  5. Wang, Z.; Zhan, J.; Duan, C.; Guan, X.; Lu, P.; Yang, K. A review of vehicle detection techniques for intelligent vehicles. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 3811–3831. [Google Scholar] [CrossRef]
  6. Chen, X.; Chen, H.; Xu, H. Vehicle detection based on multifeature extraction and recognition adopting RBF neural network on ADAS system. Complexity 2020, 2020, 1–11. [Google Scholar] [CrossRef]
  7. Satzoda, R.K.; Trivedi, M.M. Looking at vehicles in the night: Detection and dynamics of rear lights. IEEE Trans. Intell. Transp. Syst. 2016, 20, 4297–4307. [Google Scholar] [CrossRef]
  8. Mu, K.; Hui, F.; Zhao, X.; Prehofer, C. Multiscale edge fusion for vehicle detection based on difference of Gaussian. Optik 2016, 127, 4794–4798. [Google Scholar] [CrossRef]
  9. Shao, H.X.; Duan, X.M. Video Vehicle Detection Method Based on Multiple Color Space Information Fusion; Advanced Materials Research; Trans Tech Publications: Zurich, Switzerland, 2012; Volume 546, pp. 721–726. [Google Scholar]
  10. Teoh, S.S.; Bräunl, T. Symmetry-based monocular vehicle detection system. Mach. Vis. Appl. 2012, 23, 831–842. [Google Scholar] [CrossRef]
  11. Cao, X.; Wu, C.; Yan, P.; Li, X. Linear SVM classification using boosting HOG features for vehicle detection in low-altitude airborne videos. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 2421–2424. [Google Scholar]
  12. Niknejad, H.T.; Takeuchi, A.; Mita, S.; McAllester, D. On-road multivehicle tracking using deformable object model and particle filter with improved likelihood estimation. IEEE Trans. Intell. Transp. Syst. 2012, 13, 748–758. [Google Scholar] [CrossRef]
  13. Wen, X.; Shao, L.; Fang, W.; Xue, Y. Efficient feature selection and classification for vehicle detection. IEEE Trans. Circuits Syst. Video Technol. 2014, 25, 508–517. [Google Scholar]
  14. Hsieh, J.W.; Chen, L.C.; Chen, D.Y. Symmetrical SURF and its applications to vehicle detection and vehicle make and model recognition. IEEE Trans. Intell. Transp. Syst. 2014, 15, 6–20. [Google Scholar] [CrossRef]
  15. Chen, X.; Meng, Q. Vehicle detection from UAVs by using SIFT with implicit shape model. In Proceedings of the 2013 IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 3139–3144. [Google Scholar]
  16. Song, H.; Liang, H.; Li, H.; Dai, Z.; Yun, X. Vision-based vehicle detection and counting system using deep learning in highway scenes. Eur. Transp. Res. Rev. 2019, 11, 51. [Google Scholar] [CrossRef]
  17. Wang, H.; Yu, Y.; Cai, Y.; Chen, X.; Chen, L.; Liu, Q. A comparative study of state-of-the-art deep learning algorithms for vehicle detection. IEEE Intell. Transp. Syst. Mag. 2019, 11, 82–95. [Google Scholar] [CrossRef]
  18. Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A review on deep learning techniques applied to semantic segmentation. arXiv 2017, arXiv:1704.06857. [Google Scholar]
  19. Hassaballah, M.; Kenk, M.A.; Muhammad, K.; Minaee, S. Vehicle detection and tracking in adverse weather using a deep learning framework. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4230–4242. [Google Scholar] [CrossRef]
  20. Chen, X.Z.; Chang, C.M.; Yu, C.W.; Chen, Y.L. A real-time vehicle detection system under various bad weather conditions based on a deep learning model without retraining. Sensors 2020, 20, 5731. [Google Scholar] [CrossRef] [PubMed]
  21. Ghosh, R. On-road vehicle detection in varying weather conditions using faster R-CNN with several region proposal networks. Multimed. Tools Appl. 2021, 80, 25985–25999. [Google Scholar] [CrossRef]
  22. Canziani, A.; Paszke, A.; Culurciello, E. An analysis of deep neural network models for practical applications. arXiv 2016, arXiv:1605.07678. [Google Scholar]
  23. Wang, Z.; Zhan, J.; Li, Y.; Zhong, Z.; Cao, Z. A new scheme of vehicle detection for severe weather based on multi-sensor fusion. Measurement 2022, 191, 110737. [Google Scholar] [CrossRef]
  24. Wu, B.F.; Juang, J.H. Adaptive vehicle detector approach for complex environments. IEEE Trans. Intell. Transp. Syst. 2012, 13, 817–827. [Google Scholar] [CrossRef]
  25. Singh, A.; Kumar, D.P.; Shivaprasad, K.; Mohit, M.; Wadhawan, A. Vehicle detection and accident prediction in sand/dust storms. In Proceedings of the 2021 International Conference on Computing Sciences (ICCS), Phagwara, India, 4–5 December 2021; pp. 107–111. [Google Scholar]
  26. Humayun, M.; Ashfaq, F.; Jhanjhi, N.Z.; Alsadun, M.K. Traffic management: Multi-scale vehicle detection in varying weather conditions using yolov4 and spatial pyramid pooling network. Electronics 2022, 11, 2748. [Google Scholar] [CrossRef]
  27. Li, W. Vehicle detection in foggy weather based on an enhanced YOLO method. In Proceedings of the 2022 International Conference on Machine Vision, Automatic Identification and Detection (MVAID 2022), Nanjing, China, 8–10 April 2022; Journal of Physics: Conference Series. IOP Publishing: Bristol, UK, 2022; Volume 2284, p. 012015. [Google Scholar]
  28. Sun, Z.; Liu, C.; Qu, H.; Xie, G. PVformer: Pedestrian and vehicle detection algorithm based on Swin transformer in rainy scenes. Sensors 2022, 22, 5667. [Google Scholar] [CrossRef]
  29. Tao, H.; Duan, Q.; Lu, M.; Hu, Z. Learning Discriminative Feature Representation with Pixel-level Supervision for Forest Smoke Recognition. Pattern Recognit. 2023, 143, 109761. [Google Scholar] [CrossRef]
  30. Ding, Q.; Li, P.; Yan, X.; Shi, D.; Liang, L.; Wang, W.; Xie, H.; Li, J.; Wei, M. CF-YOLO: Cross Fusion YOLO for Object Detection in Adverse Weather With a High-Quality Real Snow Dataset. IEEE Trans. Intell. Transp. Syst. 2023, 24, 10749–10759. [Google Scholar] [CrossRef]
  31. Liu, W.; Ren, G.; Yu, R.; Guo, S.; Zhu, J.; Zhang, L. Image-adaptive YOLO for object detection in adverse weather conditions. Proc. Aaai Conf. Artif. Intell. 2022, 36, 1792–1800. [Google Scholar] [CrossRef]
  32. Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. Yolox: Exceeding yolo series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
  33. Kenk, M.A.; Hassaballah, M. DAWN: Vehicle detection in adverse weather nature dataset. arXiv 2020, arXiv:2008.05402. [Google Scholar]
  34. Petro, A.B.; Sbert, C.; Morel, J.M. Multiscale retinex. Image Process. Line 2014, 71–88. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
