Communication

Short Communication: Detecting Heavy Goods Vehicles in Rest Areas in Winter Conditions Using YOLOv5

Margrit Kasper-Eulaers, Nico Hahn, Stian Berger, Tom Sebulonsen, Øystein Myrland and Per Egil Kummervold
Capia AS, 9008 Tromsø, Norway
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Algorithms 2021, 14(4), 114; https://doi.org/10.3390/a14040114
Submission received: 28 February 2021 / Revised: 26 March 2021 / Accepted: 29 March 2021 / Published: 31 March 2021
(This article belongs to the Special Issue Machine-Learning in Computer Vision Applications)

Abstract
The proper planning of rest periods in response to the availability of parking spaces at rest areas is an important issue for haulage companies as well as traffic and road administrations. We present a case study of how You Only Look Once (YOLO)v5 can be implemented to detect heavy goods vehicles at rest areas during winter to allow for the real-time prediction of parking spot occupancy. Snowy conditions and the polar night in winter typically pose some challenges for image recognition, hence we use thermal network cameras. As these images typically have a high number of overlaps and cut-offs of vehicles, we applied transfer learning to YOLOv5 to investigate whether the front cabin and the rear are suitable features for heavy goods vehicle recognition. Our results show that the trained algorithm can detect the front cabin of heavy goods vehicles with high confidence, while detecting the rear seems more difficult, especially when located far away from the camera. In conclusion, we firstly show an improvement in detecting heavy goods vehicles using their front and rear instead of the whole vehicle, when winter conditions result in challenging images with a high number of overlaps and cut-offs, and secondly, we show thermal network imaging to be promising in vehicle detection.

1. Introduction

To improve road safety, drivers of heavy goods vehicles must comply with strict rules regarding driving time and rest periods. Due to these regulations and contractual delivery agreements, heavy goods vehicle traffic is highly schedule driven. Arriving at a crowded rest area after a long journey can lead to drivers exceeding the permitted driving time or having to rest outside of designated areas. As both can lead to increased traffic risk, the Barents Intelligent Transport System has initiated a pilot project with the aim of automatically reporting and forecasting the current and future availability of parking spaces at rest areas in the Barents region. The pilot project ran from January to April 2021 in two rest areas, one in northern Norway and one in northern Sweden. A crucial part of this pilot project was the detection of heavy goods vehicles in images from a thermal network camera. In this short communication, we propose a feasible solution for heavy goods vehicle detection. Computer vision algorithms have been implemented for various tasks in traffic monitoring for many years, e.g., traffic sign recognition [1,2,3,4,5,6,7]; intelligent traffic light systems [8]; vehicle speed monitoring [9]; traffic violation monitoring [10]; vehicle tracking [11,12,13]; vehicle classification [14,15,16,17,18,19,20,21,22,23,24,25,26]; vehicle counting systems on streets and highways [27,28,29,30,31]; parking spot detection from the point of view of the car for parking assistants [32,33]; and parking spot monitoring [34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49]. Most previous studies on parking spot monitoring use data from parking areas for passenger cars, which have marked parking spots for each car, or from settings in which the cars park in one row along a street [34,35,36,37,38,39,40,41,42,43,44,45,46,47,48]. This differs from the setting of the two rest areas in our study, which are primarily used by heavy goods vehicles. Passenger cars also pass through these rest areas, but only a few of them and generally only for brief stops. In winter, the markings of the parking spots are covered by snow and ice and therefore not visible, so heavy goods vehicles do not park in a line or in marked parking spots. This leads to several challenges in detecting heavy goods vehicles: the vehicles face the camera from different angles (front, back, side); the size of the vehicles differs depending on their distance to the camera; there is a high overlap of vehicles in the camera image; and many vehicles are cut off (see Figure 1 for examples).
In this paper, we used the latest version of the You Only Look Once (YOLO) object detection algorithm [50] to detect vehicles. As computer vision practitioners, our focus was on the application of the algorithm, data acquisition and data annotation. The remainder of this paper is organised as follows. Section 2 describes the selection of the algorithm and the dataset. The training and results are described in Section 3, ideas for further improvement and development are discussed in Section 4, followed by a conclusion in Section 5.

2. Materials and Methods

2.1. Selection of Algorithm

The decision to use convolutional neural networks was made due to their ease of use. There are a number of pre-trained models that can be tuned for a variety of tasks. They are also readily available, computationally inexpensive and show good performance metrics. Object recognition systems from the YOLO family [51,52] are often used for vehicle recognition tasks, e.g., [27,28,29,37], and have been shown to outperform other target recognition algorithms [53,54]. YOLOv5 has proven to significantly improve the processing time of deeper networks [50]. This attribute will gain importance as the project moves on to bigger datasets and real-time detection. YOLOv5 was pre-trained on the Common Objects in Context (COCO) dataset, an extensive dataset for object recognition, segmentation and labelling. This dataset contains over 200,000 labelled images with 80 different classes, including the classes car and truck [50,55]. Therefore, YOLOv5 can be used as-is to detect heavy goods vehicles and can serve as a starting point for an altered model that detects heavy goods vehicle features such as their front and rear.
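As an illustration of this starting point, the COCO-pretrained model can be loaded through PyTorch Hub and applied to a single frame; in the sketch below the file name and confidence threshold are placeholders rather than values used in the project.

```python
# Sketch: run the COCO-pretrained YOLOv5 model on a single frame via PyTorch Hub
# (file name and confidence threshold are placeholders).
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25                      # assumed confidence threshold

results = model("frame.jpg")           # 'frame.jpg' stands in for one captured thermal image

# Keep only the COCO classes relevant here: 'car' and 'truck'.
detections = results.pandas().xyxy[0]
vehicles = detections[detections["name"].isin(["car", "truck"])]
print(vehicles[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```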

2.2. The Rest Area Dataset

At each rest area, a thermal imaging network camera is installed in a fixed position facing the main parking area. One of the cameras was installed in front of a pole, which appears as a grey area in the centre of the image. The thermal network cameras have an uncooled microbolometer image sensor with a thermal sensitivity (noise equivalent temperature difference) of <50 mK and a thermal sensor resolution of 640 × 480 pixels. The FFmpeg library [56] was used to capture images from the video streams, with settings that capture frames showing a scene change of more than 2%. This is close to the threshold at which random sensor noise also triggers a capture [57]. The captured frames have dimensions of 640 × 480 pixels. Figure 2 shows images from the camera under different light and weather conditions.
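The capture step can be reproduced with FFmpeg's select filter; in the sketch below the stream URL and output directory are placeholders, and the 2% scene-change threshold corresponds to the setting described above.

```python
# Sketch of frame capture via FFmpeg's scene-change select filter
# (the stream URL and output directory are placeholders).
import subprocess
from pathlib import Path

STREAM_URL = "rtsp://camera.example/stream"   # placeholder, not the project's camera address
OUT_DIR = Path("frames")
OUT_DIR.mkdir(exist_ok=True)

# select='gt(scene,0.02)' keeps only frames whose scene-change score exceeds 2%;
# -vsync vfr drops the timestamps of the discarded frames.
subprocess.run([
    "ffmpeg", "-i", STREAM_URL,
    "-vf", "select='gt(scene,0.02)'",
    "-vsync", "vfr",
    str(OUT_DIR / "frame_%06d.png"),
], check=True)
```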
Between 15 January 2021 and 22 February 2021, 100,179 images were collected from the cameras. During this period, data collection from both cameras was partially interrupted; these interruptions lasted from a few hours to several days. The longest interruption occurred at rest area B, where the camera was offline for the first days of the data collection period, so less data were available from rest area B. One consequence of the sensitive setting of the motion detector was that it reacted to temperature changes caused by wind. Therefore, many of the images differed from each other only in their grey scale, due to temperature changes rather than changes in vehicle position. The two rest areas in this study are mainly used for long breaks (about 8 h), so there are long periods of inactivity in any specific parking space. A total of 427 images were selected for annotation and split into training, validation and test datasets. To prevent the model from being tested on images that are very similar to the images in the training dataset, the data were split chronologically. Table 1 shows how the data were split.
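A chronological split of this kind can be implemented by sorting the annotated images by capture time before assigning them to the datasets; the sketch below assumes timestamped file names and uses illustrative split fractions rather than the exact counts in Table 1.

```python
# Sketch of the chronological split; assumes timestamped file names so that a lexicographic
# sort is chronological (naming scheme and split fractions are illustrative; see Table 1 for
# the actual counts and periods).
from pathlib import Path

images = sorted(Path("annotated/rest_area_A").glob("*.png"))   # e.g. 20210115_083000.png

n = len(images)
train = images[: int(0.60 * n)]                  # earliest images
val = images[int(0.60 * n): int(0.85 * n)]       # middle of the collection period
test = images[int(0.85 * n):]                    # most recent images, unseen during training
```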
The data were annotated using bounding boxes for three classes: truck_front, truck_back and car (see Figure 3 for examples). We chose to focus on the driver’s cabin in frontal and side view (truck_front) and on the rear of the truck facing the camera (truck_back), because bounding boxes around whole vehicles overlapped too much. This also makes it possible to recognise vehicles whose front or rear is cut off. In the 427 annotated images, 768 objects were labelled as truck_front, 378 as truck_back and 17 as car.
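For reference, YOLO-format annotations store one line per object, consisting of a class index and a normalised, centre-based bounding box. The sketch below shows this conversion for the three classes used here; the pixel coordinates are invented for illustration.

```python
# Sketch of converting a pixel-space bounding box to a YOLO-format label line; the class
# indices follow the three classes used here, while the example box is invented.
CLASSES = {"truck_front": 0, "truck_back": 1, "car": 2}

def to_yolo_line(cls, xmin, ymin, xmax, ymax, img_w=640, img_h=480):
    """Return 'class x_center y_center width height' with coordinates normalised to [0, 1]."""
    xc = (xmin + xmax) / 2 / img_w
    yc = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{CLASSES[cls]} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

print(to_yolo_line("truck_front", 120, 200, 310, 330))   # one line of the image's .txt label file
```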
The 264 images from the training dataset were augmented to 580 images. For each image, a maximum of 3 augmented versions were generated by randomly applying horizontal mirroring, resizing (cropping from 19% minimum zoom to 67% maximum zoom) and changes in the grey scale (brightness variations between ±35%). Examples of augmented images are shown in Figure 4.
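Such a pipeline can be expressed, for example, with the albumentations library; the sketch below approximates the augmentations described above (the library choice and the exact crop parameterisation are assumptions).

```python
# Approximate re-implementation of the augmentation pipeline with the albumentations library
# (the library choice and the exact crop parameterisation are assumptions).
import albumentations as A

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        # Bounding-box-safe random crop, resized back to the sensor resolution.
        A.RandomSizedBBoxSafeCrop(height=480, width=640, p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.35, contrast_limit=0.0, p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Up to three augmented versions per training image, e.g.:
# augmented = [augment(image=img, bboxes=boxes, class_labels=labels) for _ in range(3)]
```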

3. Experiments

3.1. Training

The model was trained using Google Colab, which provides free access to powerful GPUs and requires no configuration. We used a notebook developed by Roboflow.ai [58] which is based on YOLOv5 [50] and uses pre-trained COCO weights. We added the rest area dataset and adjusted the number of epochs as well as the batch size to train the upper layers of the model to detect our classes. Training a model for 500 epochs takes about 120 min. The improvement in our model can be seen in the graphs in Figure 5, which display different performance metrics for both the training and validation sets.
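In such notebooks, the training step reduces to a single call of the YOLOv5 repository's train.py; the sketch below shows a typical invocation, where the batch size, image size and dataset configuration file name are illustrative rather than the exact project settings.

```python
# Notebook cell for a YOLOv5 training run (batch size, image size and the dataset
# configuration file name are assumed, not necessarily the project's settings).
!python train.py --img 640 --batch 16 --epochs 500 --data rest_area.yaml --weights yolov5s.pt --cache
```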
There are three different types of loss shown in Figure 5: box loss, objectness loss and classification loss. The box loss represents how well the algorithm can locate the centre of an object and how well the predicted bounding box covers it. Objectness is essentially a measure of the probability that an object exists in a proposed region of interest; if the objectness score is high, the image window is likely to contain an object. Classification loss gives an idea of how well the algorithm can predict the correct class of a given object.
The model improved swiftly in terms of precision, recall and mean average precision before plateauing after about 150 epochs. The box, objectness and classification losses of the validation data also showed a rapid decline until around epoch 150. We used early stopping to select the best weights.

3.2. Experimental Analysis

After training our model, we made predictions for the new and unseen pictures in our test set. The examples in Figure 6 show that the algorithm can detect the front of a truck with a high degree of certainty. However, it has difficulty recognising the rear of a truck, especially when it is located far away from the camera. It also detects a car as a truck_front in two of the images.
It can be seen that the algorithm currently struggles to correctly differentiate between cars and cabins, and this becomes worse the more truck fronts are present in an image. It is also difficult for the algorithm to correctly recognise truck rears in an image. Strategies to overcome these shortcomings are proposed in Section 4.
To evaluate the model trained with the rest area dataset, we compared it to YOLOv5 [50] without any additional training, using only the COCO weights, as a baseline model. This model includes, amongst others, the classes car and truck; however, it does not distinguish between truck_front and truck_back. Table 2 shows the accuracy of the baseline and the altered model for the four available classes.
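The comparison can be run with the repository's detect.py on the same test images, once with the COCO weights and once with the fine-tuned weights; the paths and confidence threshold below are illustrative.

```python
# Notebook cells comparing the COCO baseline with the fine-tuned model on the test images
# (paths and the confidence threshold are illustrative).

# Baseline: COCO weights only; relevant classes are 'car' and 'truck'.
!python detect.py --weights yolov5s.pt --source rest_area/test/images --conf-thres 0.25

# Altered model: weights fine-tuned on the rest area dataset (truck_front, truck_back, car).
!python detect.py --weights runs/train/exp/weights/best.pt --source rest_area/test/images --conf-thres 0.25
```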
The baseline model, which is trained on heavy goods vehicles as a whole, had difficulties detecting them on the test images of the rest area dataset. It either did not recognise the trucks or it did so with much less certainty than the altered model with the two new classes. The additional training also improved the detection of heavy goods vehicles in images in which the cabin was cut off. Some example detections of the two models on the test data are shown in Figure 7.

4. Discussion

We see the greatest potential for improving performance in adjusting the physical data collection and in improving the data annotation.
For most applications, the physical data collection setup cannot be changed. However, as this is a pilot project running in only two rest areas, there is the possibility of changing the physical setup for data collection if more rest areas are added. Our recommendation is to continue using thermal network cameras: it is not possible to read number plates or identify detailed human characteristics in their images, so the data are automatically anonymised. Furthermore, the camera delivered usable images for all light and weather conditions that occurred during the project period. However, we suggest using a wider-angle camera to capture more (and more complete) heavy goods vehicles, avoiding obstacles in the camera’s field of view and increasing the resolution of the images.
The three classes in the dataset used in this paper are very unbalanced. Cars are highly underrepresented, reflecting the fact that the rest areas are mainly used by trucks. One strategy to deal with this is to train on only two classes, truck_front and truck_back, or to give the car class more weight by adding more images with cars. The performance in recognising cars could be increased by adding images with cars from other publicly available datasets. However, there are also only half as many truck_back labels as truck_front labels. We assume that performance can be increased by collecting and labelling more images, especially by balancing the number of images from both rest areas and increasing the number of labels in the two smaller classes, car and truck_back.
In addition, we suggest reviewing the data augmentation strategies and using a higher augmentation rate to benefit more from the positive effects of augmentation [59].
One way to deal with a static obstacle, such as the pole located in the middle of the images of one rest area, could be to crop it out of the image: a truck cabin with the obstacle removed has more features in common with cabins without obstacles than the occluded cabin does (Figure 8B is more similar to Figure 8C,D than Figure 8A is to Figure 8C,D).
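A straightforward way to test this idea is to blank out the known pole region before images enter the pipeline; in the sketch below, the rectangle coordinates are placeholders that depend on the camera installation.

```python
# Sketch of masking a static obstacle (the pole) before further processing; the rectangle
# coordinates are placeholders that depend on the camera installation.
import cv2

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# Fill the pole region with the mean grey value of the image so it no longer cuts an
# artificial edge through truck cabins.
x0, y0, x1, y1 = 300, 0, 340, 480   # assumed pole region (pixels)
img[y0:y1, x0:x1] = int(img.mean())

cv2.imwrite("frame_masked.jpg", img)
```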
Currently, heavy goods vehicles with both the cabin and the rear outside the image, or which are obscured by other vehicles, are rarely detected by our algorithm. To get closer to the goal of detecting all heavy goods vehicles in the picture, we first propose to further specialise our current model: instead of training it to detect cabins in frontal and side view, it could be trained to detect them only in frontal view (windscreen, front lights and number plate facing the camera). Secondly, we propose adding an additional model to the analysis. This additional model could either detect other characteristic features of heavy goods vehicles that are easily visible from the side, such as wheels, or classify the images into categories indicating the number of heavy goods vehicles. Knowing how many of the individual features of a heavy goods vehicle are detected in an image allows us to combine this information to estimate the number of heavy goods vehicles and, in turn, to predict occupancy rates.
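As a simple illustration of combining feature counts, the sketch below derives a lower bound on the number of heavy goods vehicles from the per-class detections; the rule is a hypothetical heuristic for illustration, not the project's estimator.

```python
# Sketch of combining per-feature detections into an occupancy estimate; the rule below is a
# hypothetical heuristic for illustration, not the project's estimator.
def estimate_hgv_count(detections):
    """detections: list of (class_name, confidence) pairs for one image."""
    fronts = sum(1 for name, _ in detections if name == "truck_front")
    backs = sum(1 for name, _ in detections if name == "truck_back")
    # Every parked heavy goods vehicle should show at least a cabin or a rear, so the larger
    # of the two counts gives a lower bound on the number of vehicles in view.
    return max(fronts, backs)

print(estimate_hgv_count([("truck_front", 0.81), ("truck_front", 0.77), ("truck_back", 0.55)]))  # -> 2
```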

5. Conclusions

Section 4 shows that there are many steps that still need to be taken to improve the detection of heavy goods vehicles in rest areas. However, we already showed that when analysing images from small angle cameras to detect objects that occur in groups and have a high number of overlaps and cut-offs, the model can be improved by detecting certain characteristic features instead of the whole object. Furthermore, the usage of thermal network cameras has proven to be valuable given the purpose of the project and the dark and snowy winter conditions in northern Scandinavia. We are confident that with a bigger training set and the implementation of the changes suggested in Section 4, the algorithm can be improved even further.

Author Contributions

Creation of model and experiments, N.H.; research and investigation, M.K.-E.; computing resources and automated data collection, S.B.; data curation and writing, N.H. and M.K.-E.; mentoring, Ø.M. and P.E.K.; project administration, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Norwegian county municipalities Troms og Finnmark and Nordland, the Norwegian Public Roads Administration, Kolarctic CBC, the county administrative boards of Länsstyrelsen Norrbotten and Länsstyrelsen Västerbotten in Sweden and the Swedish Transport Administration as part of the Kolarctic project Barents Intelligent Transport System (ITS). The Kolarctic project is part of the regional Barents cooperation in transport.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank the funders of this project (Norwegian county municipalities Troms og Finnmark and Nordland, Norwegian Public Roads Administration, Kolarctic CBC, the county administrative boards of Länsstyrelsen Norrbotten and Länsstyrelsen Västerbotten, Swedish Transport Administration) for initiating this project, organising a cross-border collaboration, setting up cameras and ensuring access to them.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Zhang, J.; Huang, M.; Jin, X.; Li, X. A Real-Time Chinese Traffic Sign Detection Algorithm Based on Modified YOLOv2. Algorithms 2017, 10, 127. [Google Scholar] [CrossRef] [Green Version]
  2. Haque, W.A.; Arefin, S.; Shihavuddin, A.; Hasan, M.A. DeepThin: A novel lightweight CNN architecture for traffic sign recognition without GPU requirements. Expert Syst. Appl. 2021, 168, 114481. [Google Scholar] [CrossRef]
  3. Zhang, B.; Wang, G.; Wang, H.; Xu, C.; Li, Y.; Xu, L. Detecting Small Chinese Traffic Signs via Improved YOLOv3 Method. Math. Probl. Eng. 2021, 2021. [Google Scholar] [CrossRef]
  4. Zhou, K.; Zhan, Y.; Fu, D. Learning Region-Based Attention Network for Traffic Sign Recognition. Sensors 2021, 21, 686. [Google Scholar] [CrossRef] [PubMed]
  5. Sun, C.; Ai, Y.; Wang, S.; Zhang, W. Dense-RefineDet for Traffic Sign Detection and Classification. Sensors 2020, 20, 6570. [Google Scholar] [CrossRef]
  6. Du, L.; Ji, J.; Pei, Z.; Zheng, H.; Fu, S.; Kong, H.; Chen, W. Improved detection method for traffic signs in real scenes applied in intelligent and connected vehicles. IET Intell. Transp. Syst. 2020, 14, 1555–1564. [Google Scholar] [CrossRef]
  7. Yazdan, R.; Varshosaz, M. Improving traffic sign recognition results in urban areas by overcoming the impact of scale and rotation. ISPRS J. Photogramm. Remote Sens. 2021, 171, 18–35. [Google Scholar] [CrossRef]
  8. Nodado, J.T.G.; Morales, H.C.P.; Abugan, M.A.P.; Olisea, J.L.; Aralar, A.C.; Loresco, P.J.M. Intelligent Traffic Light System Using Computer Vision with Android Monitoring and Control. In Proceedings of the TENCON 2018—2018 IEEE Region 10 Conference, Jeju, Korea, 28–31 October 2018; pp. 2461–2466. [Google Scholar] [CrossRef]
  9. Poddar, M.; Giridhar, M.K.; Prabhu, A.S.; Umadevi, V. Automated traffic monitoring system using computer vision. In Proceedings of the 2016 International Conference on ICT in Business Industry & Government (ICTBIG), Indore, India, 18–19 November 2016; pp. 1–5. [Google Scholar]
  10. Wu, W.; Bulan, O.; Bernal, E.A.; Loce, R.P. Detection of Moving Violations. In Computer Vision and Imaging in Intelligent Transportation Systems; Loce, R.P., Bala, R., Trivedi, M., Eds.; Wiley-IEEE Press: Hoboken, NJ, USA, 2017; Chapter 5; pp. 101–130. [Google Scholar]
  11. Al-qaness, M.A.A.; Abbasi, A.A.; Fan, H.; Ibrahim, R.A.; Alsamhi, S.H.; Hawbani, A. An improved YOLO-based road traffic monitoring system. Computing 2021, 103, 211–230. [Google Scholar] [CrossRef]
  12. Xu, T.; Zhang, Z.; Wu, X.; Qi, L.; Han, Y. Recognition of lane-changing behaviour with machine learning methods at freeway off-ramps. Phys. A Stat. Mech. Appl. 2021, 567. [Google Scholar] [CrossRef]
  13. Rosenbaum, D.; Kurz, F.; Thomas, U.; Suri, S.; Reinartz, P. Towards automatic near real-time traffic monitoring with an airborne wide angle camera system. Eur. Transp. Res. Rev. 2009, 1, 11–21. [Google Scholar] [CrossRef] [Green Version]
  14. Zhu, E.; Xu, M.; Pi, D.C. Vehicle Type Recognition Algorithm Based on Improved Network in Network. Complexity 2021, 2021. [Google Scholar] [CrossRef]
  15. Awang, S.; Azmi, N.M.A.N.; Rahman, M.A. Vehicle Type Classification Using an Enhanced Sparse-Filtered Convolutional Neural Network With Layer-Skipping Strategy. IEEE Access 2020, 8, 14265–14277. [Google Scholar] [CrossRef]
  16. Sun, W.; Zhang, X.; Shi, S.; He, X. Vehicle classification approach based on the combined texture and shape features with a compressive DL. IET Intell. Transp. Syst. 2019, 13, 1069–1077. [Google Scholar] [CrossRef]
  17. Kang, Q.; Zhao, H.; Yang, D.; Ahmed, H.S.; Ma, J. Lightweight convolutional neural network for vehicle recognition in thermal infrared images. Infrared Phys. Technol. 2020, 104. [Google Scholar] [CrossRef]
  18. Sun, W.; Zhang, X.; He, X.; Jin, Y.; Zhang, X. A Two-Stage Vehicle Type Recognition Method Combining the Most Effective Gabor Features. CMC-Comput. Mater. Contin. 2020, 65, 2489–2510. [Google Scholar] [CrossRef]
  19. Uus, J.; Krilavičius, T. Detection of Different Types of Vehicles from Aerial Imagery. 2019. Available online: https://www.vdu.lt/cris/handle/20.500.12259/102060 (accessed on 28 March 2021).
  20. Adu-Gyamfi, Y.O.; Asare, S.K.; Sharma, A.; Titus, T. Automated Vehicle Recognition with Deep Convolutional Neural Networks. Transp. Res. Rec. 2017, 2645, 113–122. [Google Scholar] [CrossRef] [Green Version]
  21. Huttunen, H.; Yancheshmeh, F.S.; Chen, K. Car type recognition with Deep Neural Networks. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 19–22 June 2016. [Google Scholar] [CrossRef] [Green Version]
  22. Zhou, Y.; Cheung, N.M. Vehicle classification using transferable deep neural network features. arXiv 2016, arXiv:1601.01145. [Google Scholar]
  23. Moussa, G. Vehicle Type Classification with Geometric and Appearance Attributes. World Acad. Sci. Eng. Technol. Int. J. Civ. Environ. Struct. Constr. Archit. Eng. 2014, 8, 277–282. [Google Scholar]
  24. Asaidi, H.; Aarab, A.; Bellouki, M. Shadow Elimination and Vehicles Classification Approaches in Traffic Video Surveillance Context. J. Vis. Lang. Comput. 2014, 25, 333–345. [Google Scholar] [CrossRef]
  25. Han, D.; Leotta, M.J.; Cooper, D.B.; Mundy, J.L. Vehicle Class Recognition from Video-Based on 3D Curve Probes. In Proceedings of the 2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, Beijing, China, 15–16 October 2005; pp. 285–292. [Google Scholar] [CrossRef] [Green Version]
  26. Ferryman, J.M.; Worrall, A.D.; Sullivan, G.D.; Baker, K.D. A Generic Deformable Model for Vehicle Recognition. In BMVC; Citeseer: Princeton, NJ, USA, 1995; Volume 1, p. 2. [Google Scholar]
  27. Fachrie, M. A Simple Vehicle Counting System Using Deep Learning with YOLOv3 Model. Jurnal RESTI (Rekayasa Sistem Dan Teknologi Informasi) 2020, 4, 462–468. [Google Scholar] [CrossRef]
  28. Song, H.; Liang, H.; Li, H.; Dai, Z.; Yun, X. Vision-based vehicle detection and counting system using deep learning in highway scenes. Eur. Transp. Res. Rev. 2019, 11. [Google Scholar] [CrossRef] [Green Version]
  29. Alghyaline, S.; El-Omari, N.; Al-Khatib, R.M.; Al-Kharbshh, H. RT-VC: An Efficient Real-Time Vehicle Counting Approach. J. Theor. Appl. Inf. Technol. 2019, 97, 2062–2075. [Google Scholar]
  30. Iftikhar, Z.; Dissanayake, P.; Vial, P. Computer Vision Based Traffic Monitoring System for Multi-track Freeways. In Intelligent Computing Methodologies; Huang, D.S., Jo, K.H., Wang, L., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 339–349. [Google Scholar]
  31. Kun, A.J.; Vamossy, Z. Traffic monitoring with computer vision. In Proceedings of the 2009 7th International Symposium on Applied Machine Intelligence and Informatics, Herlany, Slovakia, 30–31 January 2009; pp. 131–134. [Google Scholar] [CrossRef]
  32. Jiang, S.; Jiang, H.; Ma, S.; Jiang, Z. Detection of Parking Slots Based on Mask R-CNN. Appl. Sci. 2020, 10, 4295. [Google Scholar] [CrossRef]
  33. Kim, S.; Kim, J.; Ra, M.; Kim, W.Y. Vacant Parking Slot Recognition Method for Practical Autonomous Valet Parking System Using around View Image. Symmetry 2020, 12, 1725. [Google Scholar] [CrossRef]
  34. Zhang, C.; Du, B. Image-Based Approach for Parking-Spot Detection with Occlusion Handling. J. Transp. Eng. Part Syst. 2020, 146. [Google Scholar] [CrossRef]
  35. Tătulea, P.; Călin, F.; Brad, R.; Brâncovean, L.; Greavu, M. An Image Feature-Based Method for Parking Lot Occupancy. Future Internet 2019, 11, 169. [Google Scholar] [CrossRef] [Green Version]
  36. Cai, B.Y.; Alvarez, R.; Sit, M.; Duarte, F.; Ratti, C. Deep Learning-Based Video System for Accurate and Real-Time Parking Measurement. IEEE Internet Things J. 2019, 6, 7693–7701. [Google Scholar] [CrossRef] [Green Version]
  37. Ding, X.; Yang, R. Vehicle and Parking Space Detection Based on Improved YOLO Network Model. J. Phys. Conf. Ser. 2019, 1325, 012084. [Google Scholar] [CrossRef]
  38. Acharya, D.; Yan, W.; Khoshelham, K. Real-Time Image-Based Parking Occupancy Detection Using Deep Learning. Research@Locate 2018, pp. 33–40. Available online: https://www.researchgate.net/publication/323796590 (accessed on 28 March 2021).
  39. Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C.; Meghini, C.; Vairo, C. Deep Learning for Decentralized Parking Lot Occupancy Detection. Expert Syst. Appl. 2016, 72. [Google Scholar] [CrossRef]
  40. Masmoudi, I.; Wali, A.; Jamoussi, A.; Alimi, M.A. Trajectory analysis for parking lot vacancy detection system. IET Intell. Transp. Syst. 2016, 10, 461–468. [Google Scholar] [CrossRef]
  41. Valipour, S.; Siam, M.; Stroulia, E.; Jagersand, M. Parking-stall vacancy indicator system, based on deep convolutional neural networks. In Proceedings of the 2016 IEEE 3rd World Forum on Internet of Things (WF-IoT), Reston, VA, USA, 12–14 December 2016; pp. 655–660. [Google Scholar] [CrossRef] [Green Version]
  42. Menéndez, J.M.; Postigo, C.; Torres, J. Vacant parking area estimation through background subtraction and transience map analysis. IET Intell. Transp. Syst. 2015, 9. [Google Scholar] [CrossRef] [Green Version]
  43. De Almeida, P.R.; Oliveira, L.S.; Britto, A.S., Jr.; Silva, E.J., Jr.; Koerich, A.L. PKLot—A Robust Dataset for Parking Lot Classification. Expert Syst. Appl. 2015, 42. [Google Scholar] [CrossRef] [Green Version]
  44. Jermsurawong, J.; Ahsan, U.; Haidar, A.; Dong, H.; Mavridis, N. One-Day Long Statistical Analysis of Parking Demand by Using Single-Camera Vacancy Detection. J. Transp. Syst. Eng. Inf. Technol. 2014, 14, 33–44. [Google Scholar] [CrossRef]
  45. Fabian, T. A Vision-Based Algorithm for Parking Lot Utilization Evaluation Using Conditional Random Fields. In Proceedings of the International Symposium on Visual Computing, Crete, Greece, 29–31 July 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 222–233. [Google Scholar]
  46. Huang, C.; Tai, Y.; Wang, S. Vacant Parking Space Detection Based on Plane-Based Bayesian Hierarchical Framework. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1598–1610. [Google Scholar] [CrossRef]
  47. Ichihashi, H.; Notsu, A.; Honda, K.; Katada, T.; Fujiyoshi, M. Vacant parking space detector for outdoor parking lot by using surveillance camera and FCM classifier. In Proceedings of the 2009 IEEE International Conference on Fuzzy Systems, Jeju, Korea, 20–24 August 2009; pp. 127–134. [Google Scholar] [CrossRef]
  48. Bong, D.; Ting, K.C.; Lai, K.C. Integrated Approach in the Design of Car Park Occupancy Information System (COINS). IAENG Int. J. Comput. Sci. 2008, 35, 1. [Google Scholar]
  49. Funck, S.; Mohler, N.; Oertel, W. Determining car-park occupancy from single images. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 325–328. [Google Scholar] [CrossRef]
  50. Jocher, G.; Stoken, A.; Borovec, J.; NanoCode012; ChristopherSTAN; Changyu, L.; Laughing; tkianai; yxNONG; Hogan, A.; et al. Ultralytics/yolov5: v4.0—nn.SiLU() Activations, Weights & Biases Logging, PyTorch Hub Integration. 2021. Available online: https://doi.org/10.5281/zenodo.4418161 (accessed on 28 March 2021).
  51. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. Available online: https://doi.org/10.1109/CVPR.2016.91 (accessed on 3 February 2021).
  52. Joseph, R.; Santosh, D.; Ross, G.; Ali, F. YOLO: Real-Time Object Detection. 2016. Available online: https://pjreddie.com/darknet/yolo/ (accessed on 3 March 2021).
  53. Benjdira, B.; Khursheed, T.; Koubaa, A.; Ammar, A.; Ouni, K. Car Detection using Unmanned Aerial Vehicles: Comparison between Faster R-CNN and YOLOv3. arXiv 2018, arXiv:1812.10968. [Google Scholar]
  54. Ouyang, L.; Wang, H. Vehicle target detection in complex scenes based on YOLOv3 algorithm. IOP Conf. Ser. Mater. Sci. Eng. 2019, 569, 052018. [Google Scholar] [CrossRef]
  55. Lin, T.; Maire, M.; Belongie, S.J.; Bourdev, L.D.; Girshick, R.B.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. arXiv 2014, arXiv:1405.0312. [Google Scholar]
  56. Bellard, F. FFmpeg. Available online: https://ffmpeg.org/ (accessed on 3 February 2021).
  57. Bellard, F. FFmpeg Filters Documentation: select, aselect. Available online: https://ffmpeg.org/ffmpeg-filters.html#select_002c-aselect (accessed on 3 February 2021).
  58. Roboflow. How to Train YOLOv5 on Custom Objects. 2016. Available online: https://colab.research.google.com/drive/1gDZ2xcTOgR39tGGs-EZ6i3RTs16wmzZQ (accessed on 3 February 2021).
  59. Zoph, B.; Cubuk, E.D.; Ghiasi, G.; Lin, T.Y.; Shlens, J.; Le, Q.V. Learning Data Augmentation Strategies for Object Detection. arXiv 2019, arXiv:1906.11172. [Google Scholar]
Figure 1. Typical examples of images from the thermal network cameras at the two rest areas.
Figure 2. Example images from the thermal network camera taken at daylight (A,B), in the dark (C,D), when it was raining 1.1 mm (A), when it was snowing 1.5 mm (C) and without any precipitation (B,D).
Figure 3. Examples of annotated images with bounding boxes for the three classes: truck_front, truck_back and car.
Figure 4. Examples of augmented training data illustrating the horizontal mirroring, resizing and changes of the grey scale.
Figure 5. Plots of box loss, objectness loss, classification loss, precision, recall and mean average precision (mAP) over the training epochs for the training and validation set.
Figure 6. Images from the test dataset showing the performance for detecting the three classes truck_front, truck_back and car.
Figure 7. Images from the test dataset evaluated by the baseline model, YOLOv5, on the left and by the altered model, YOLOv5 trained with additional data from the rest area dataset, on the right.
Figure 8. (A)—Cabin with obstacle; (B)—Cabin A with removed obstacle; (C,D)—Cabin front without any manipulations.
Table 1. Split of the data into training, validation and test dataset.
Dataset      Rest Area   Augmentation   #Images   Period
Training     A           yes            329       15 January 2021 to 25 January 2021
Training     B           yes            251       2 February 2021 to 17 February 2021
Validation   A           no             64        25 January 2021 to 16 February 2021
Validation   B           no             39        17 February 2021 to 18 February 2021
Test         A           no             30        16 February 2021 to 22 February 2021
Test         B           no             30        18 February 2021 to 22 February 2021
Table 2. Accuracy of the baseline model, You Only Look Once (YOLO)v5, for the classes truck and car, and the altered model, YOLOv5, trained with additional data from the rest area dataset for the classes truck_front; truck_back; and car.
Class         YOLOv5   YOLOv5 + Rest Area Data
truck         0.42     -
truck_front   -        0.63
truck_back    -        0.52
car           0.78     0.93
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
