Abstract
This paper presents research on the collection, analysis, and evaluation of the fundamental data needed for road traffic systems. The basis for the research, analysis, planning and projections for traffic systems are traffic counts and data collection related to traffic volume and type. The quality and accuracy of this data are very important for traffic planning or optimization. Therefore, the purpose of this research is to apply advanced methods of automatic counting of motorized traffic and to evaluate the impact of this data on the measurement of important traffic indicators. The accuracy of measurements arising from the traditional method of data collection through manual counting will be compared with the most advanced methods of automatic counting through cameras. For this purpose, an analytical algorithm for the recognition and processing of data related to road users as a function of the time of day was applied. The program was written in the programming language Python, and the accuracy of the data and its effect on the results of qualitative traffic indicators were analyzed using the Synchro software model. The developed program is capable of recognizing and classifying different types of vehicles in traffic, such as motorbikes, motorcycles, cars, pick-ups, trucks, vans and buses, as well as counting the traffic volume over time. The results obtained from these two models show the advantages of applying advanced methods of data collection and processing related to dynamic traffic processes, as well as the quality in terms of the impact on the measurement of qualitative traffic indicators. A comparison of the quality of results for the different time intervals and varying levels of visibility in traffic is presented using tables and graphs. At nighttime, when visibility was poor, the discrepancy between the manual and automatic counting methods was around 9.5%. 
However, when visibility was good, the difference between manual counting and the automated program was 4.87% for the period 19:00–19:15 and 4.54% for the period 05:00–05:15. The discrepancy was especially noticeable when distinguishing between vehicle categories, due to limitations in the accuracy of recognizing and measuring the dimensions of these vehicles. The difference between the two counting models has only a minor effect on qualitative traffic indicators such as approach LOS, progression factor, v/s ratio, v/c ratio, clearance time, lane group flow, adjusted flow, saturated flow, and approach delay.
1. Introduction
Planning a road traffic network, designing traffic systems, or modelling these systems in order to manage traffic problems starts with collecting data about traffic volume or basic data related to road users. The demand for motorized traffic has increased enormously in urban centers, and this is accompanied by major problems in traffic operation, manifested in a decrease in the quality of motorized and non-motorized movement, traffic delays, and other safety problems []. The main indicators that are related to traffic demand and are important for modelling, optimization, and the analysis of proposed solutions are based on these traffic measurements. In this regard, the quality and accuracy of these basic data are very important if the subsequent processes of traffic planning or optimization are to yield acceptable results. Given the increased demand for motorized movement and the need to analyze its temporal and spatial variation, there is a need for a new, more favorable measurement model such as automatic counting, which saves time compared to manual counting while delivering results of comparable quality. The accuracy of vehicle categorization, especially of heavy vehicles, is very important because of its impact on the measurement of qualitative traffic indicators. The objective of the study is to enhance the accuracy of qualitative traffic indicators by exploring the effect of the techniques employed for the baseline measurements.
Consequently, we will answer three questions:
- What is the difference in data accuracy between the manual and automatic counting models?
- What is the impact of the basic data obtained from two models in the determination of qualitative traffic indicators?
- How accurate is the data obtained by automatic counting in conditions of poor visibility at night?
We address these questions below, with concrete results obtained by applying a data comparison model. Traffic processes are complex and dynamic; therefore, frequent traffic counts in relation to time and space are needed, especially in urban areas, because the forms of traffic control, optimization, or supply must be adapted to the traffic demand []. Until recently, traffic volume counts were usually realized through a manual method. Now, with the advent of advanced automatic techniques, special devices are being applied through which traffic counts are realized automatically. Therefore, the analysis and evaluation of the quality of the basic information (traffic counts) obtained through cameras, and of the processing of this information in comparison to manual traffic counting, has been the target of this research. Opportunities to bring engineering support specialties together through technology may be found by utilizing the performance technology and database capabilities of technology information management systems []. YOLO uses a similar scheme, dividing an image into an M × M grid of equally sized cells, where each cell is responsible for detecting and localizing any objects present in its region []. These cells also predict the bounding box coordinates of an object relative to the coordinates of the cell, as well as the object class and the probability of the object being present in the cell. This process reduces the computational burden since both detection and recognition are handled by the cells of the image; however, it can lead to many repetitive predictions, as numerous cells may detect the same object with distinct bounding box predictions [].
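A common remedy for these duplicate predictions is non-maximum suppression (NMS), which keeps only the highest-confidence box among heavily overlapping candidates. A minimal sketch follows; the boxes and scores are illustrative, not taken from the study:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Visit candidates in order of decreasing confidence and discard any box
    # that overlaps an already-kept box by more than the threshold
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep

# Three detections of the same vehicle plus one distinct detection
boxes = [(10, 10, 60, 60), (12, 11, 62, 61), (11, 9, 59, 58), (100, 100, 150, 150)]
scores = [0.9, 0.75, 0.6, 0.8]
print(nms(boxes, scores))  # [0, 3]
```

OpenCV also ships this operation as `cv2.dnn.NMSBoxes`, which a production pipeline would typically use instead of a hand-rolled loop.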
Some authors [] have proposed a scheme as part of a research project that concluded shortly before the public release of YOLO version 4, which includes new methods that likely improve classification, detection, and counting rates []. Abdelwahab [] evaluated the proposed method in three experiments using five videos representing different circumstances; in all experiments, a Gaussian mixture model (GMM) was used to create the background model for the region of interest (ROI) only []. Zheng and Li [] provided a new method for the rapid detection and classification of traffic anomalies and improved the accuracy of detection and classification []. Although manual methods for traffic counting have been shown to be successful in terms of the quality of the data obtained, they require time and commitment, a large number of staff, and data processing time equivalent to the counting time. Therefore, the application of automatic counting methods, which are efficient in terms of both the time required to obtain and process the data and the quality of this information, is an enormous contribution to the planning, design, and efficiency of traffic management. Automated traffic data collection has long been sought in transportation applications to reduce cost and improve efficiency compared to manual data collection [].
Approach and Methodology
For research purposes, an algorithmic model was applied to recognize the different categories of vehicles in traffic, enabling data processing. The program was built in the programming language “Python”, and the accuracy of the data was analyzed through the software model “Synchro” to assess the impact on the results of qualitative traffic indicators. Object detection is the process of predicting the class of one or more objects within an image and drawing a bounding box around them (Figure 1). It is commonly used to identify specific objects, such as vehicles, in a given frame []. Deep learning is a form of artificial intelligence in which the system learns features directly from data, without hand-engineered features. In particular, convolutional neural networks (CNNs) are widely used to perform object detection [].
Figure 1.
Object detection process of predicting the class of one or more objects within an image.
Object detection models generally have two components:
- An encoder takes a frame (image) as input and applies a series of layers and blocks to it in order to extract statistical features that can be used to identify and label the objects within the frame, such as vehicles;
- The encoder will send the data to a decoder, which will then use the information to generate bounding boxes and labels for every object present [,,].
The object tracker utilizes the Euclidean distance concept to monitor an object [].
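A tracker of this kind can be sketched as a centroid tracker: each new detection is matched to the nearest existing track by the Euclidean distance between box centroids. The class below is a minimal illustration; the distance threshold and ID scheme are assumptions, not the study's actual implementation:

```python
import math

class CentroidTracker:
    """Assigns persistent IDs to detections by nearest-centroid matching."""

    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}          # id -> (cx, cy) of last known position
        self.max_distance = max_distance

    def update(self, boxes):
        assigned = {}
        unmatched_ids = set(self.objects)
        for (xmin, ymin, xmax, ymax) in boxes:
            cx, cy = (xmin + xmax) / 2, (ymin + ymax) / 2
            # Find the closest existing object within the distance threshold
            best_id, best_d = None, self.max_distance
            for oid in unmatched_ids:
                ox, oy = self.objects[oid]
                d = math.hypot(cx - ox, cy - oy)
                if d < best_d:
                    best_id, best_d = oid, d
            if best_id is None:          # no match: a new vehicle enters the frame
                best_id = self.next_id
                self.next_id += 1
            else:
                unmatched_ids.discard(best_id)
            self.objects[best_id] = (cx, cy)
            assigned[best_id] = (cx, cy)
        # Drop objects that were not matched in this frame (vehicle left the scene)
        for oid in unmatched_ids:
            del self.objects[oid]
        return assigned

tracker = CentroidTracker()
print(tracker.update([(0, 0, 10, 10)]))   # {0: (5.0, 5.0)}
print(tracker.update([(4, 2, 14, 12)]))   # same vehicle moved slightly, keeps ID 0
```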
The authors of [] formulated a fast algorithm that counts vehicles in traffic videos without requiring vehicle tracking. A reference model was generated for a small area of the video frame to enable this, and the proposed algorithm increased the speed of video processing by utilizing every third frame instead of every frame [].
The authors of [] implemented a virtual detection line in each of the traffic lanes to count the number of cars; these lines determined the points at which the vehicles had entirely crossed them []. In that research, they offered an alternative, new approach for counting the number of vehicles crossing the road in video sequences by recognizing motion using incremental subspace learning []. YOLOv4 is an integrated algorithm combining detection and recognition, which can directly obtain the location and category of the target from an image; it is an improved version of YOLOv1, YOLOv2, and YOLOv3 []. One of the primary innovations of the work by the authors of [] is the edge-based technique for preprocessing and filling in missing sensor data, which takes into consideration both temporal and geographical information. YOLOv4 achieves better accuracy in handling small targets due to the addition of CSPNet in its CNN design. YOLOv4-Tiny is a simplified version of YOLOv4 with reduced accuracy, because its backbone network is relatively shallow and unable to extract higher-level semantic features []. In order to meet the requirements of high detection speed and accuracy, the proposed target detection algorithm of [] improved YOLOv4-Tiny. YOLOv4 divides an image into an M × M grid before it is fed into the neural network, with each grid cell responsible for predicting objects. Each cell is capable of predicting up to B bounding boxes, each of which has an associated confidence score []. Each box contains five variables (the box center coordinates, its width and height, and a confidence score), which are defined in Equation (1).
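The virtual-detection-line idea described above can be sketched in a few lines: a vehicle is counted once its tracked centroid moves from one side of the line to the other between consecutive frames. The line position and centroid trajectories below are illustrative, not measured data:

```python
def count_line_crossings(tracks, line_y):
    """Count vehicles whose centroid crosses a horizontal virtual line.

    tracks: dict mapping vehicle id -> list of (cx, cy) centroids per frame.
    A vehicle is counted once, when its y-coordinate passes line_y.
    """
    count = 0
    for positions in tracks.values():
        for (_, y_prev), (_, y_curr) in zip(positions, positions[1:]):
            if (y_prev < line_y) != (y_curr < line_y):
                count += 1
                break   # count each vehicle only once
    return count

tracks = {
    0: [(50, 80), (52, 95), (53, 110)],   # crosses the line at y = 100
    1: [(120, 40), (121, 55)],            # stays above the line
}
print(count_line_crossings(tracks, line_y=100))  # 1
```

A per-lane count would simply maintain one such line (and one counter) per lane.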
2. Model Design and Automatic Measurements through Cameras
For the needs of the model design and to achieve the goal of this research, traffic counts were made in the road segments between the 10 intersections of the national road N9 in Fushe Kosove, Figure 2.
Figure 2.
Road network maps where traffic counts were made.
The counts were realized within a time interval of 24 h, applying the methods of manual and automatic counting through cameras. Then, a comparison of the data obtained through these two methods was made in terms of the quality of this data and the impact they may have on the application of other important models for the management of traffic problems. Figure 3 shows the flow chart of the methodological approach for the traffic count and qualitative indicator measurement.
Figure 3.
Process and methodology used for the traffic count.
Parts of the analytical pseudo Algorithms 1 and 2 are shown below.

Algorithm 1: Finding the left of the frame (image).
Input:
Output:
# Finding the position of the vehicle

Algorithm 2: Part of the algorithm for counting and categorizing vehicles during automatic counting through cameras.

import cv2

class RealTime(Processing):
    def __init__(self):
        super().__init__()

    def realTime(self):
        # Open the video capture device and set its frame rate
        cap = cv2.VideoCapture(0)
        cap.set(cv2.CAP_PROP_FPS, 800)
        while True:
            # Read the video frame and resize it
            ret, frame = cap.read()
            frame = cv2.resize(frame, (self.width, self.height))
            # Create a blob from the image
            blob = cv2.dnn.blobFromImage(frame, self.scale, (self.width, self.height),
                                         (self.mean, self.mean, self.mean),
                                         swapRB=True, crop=False)
            # Set the input of the neural network
            self.net.setInput(blob)
            # Get the output layers of the network
            layers = self.net.getUnconnectedOutLayersNames()
            # Feed the data to the network and get the output
            outs = self.net.forward(layers)
            # Call the postProcess() function from the Processing class
            objects = self.postProcess(frame, outs)
            # Draw the boxes and counting texts in the frame
            # (classes: Kamionete, Autobus, Kamion, Veture, Biciklete, Kembesore)
            for obj in objects:
                label = obj[0]
                confidence = obj[1]
                xmin = obj[2]
                ymin = obj[3]
                xmax = obj[4]
                ymax = obj[5]
                cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
                cv2.putText(frame, label + " " + str(round(confidence, 2)),
                            (xmin, ymin - 5), cv2.FONT_HERSHEY_SIMPLEX,
                            0.5, (0, 255, 0), 2)
            # Write the counting data into a csv file
            self.writeData(objects)
            # Show the frame
            cv2.imshow("Real Time", frame)
            # Press 'q' to quit
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        # Release the video capture device and destroy all windows
        cap.release()
        cv2.destroyAllWindows()
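The writeData() call in Algorithm 2 persists the per-frame detections to a CSV file. A minimal sketch of such a method using Python's csv module is given below; the column layout and file name are assumptions, not the study's actual format:

```python
import csv
from datetime import datetime

def write_data(objects, path="counts.csv"):
    """Append one row per detected object: timestamp, label, confidence, box."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for label, confidence, xmin, ymin, xmax, ymax in objects:
            writer.writerow([datetime.now().isoformat(timespec="seconds"),
                             label, round(confidence, 2),
                             xmin, ymin, xmax, ymax])

# Hypothetical detection: a car ("Veture") with 91% confidence
write_data([("Veture", 0.91, 10, 20, 60, 80)])
```

Storing a timestamp per row is what later allows the counts to be aggregated into the 15 min intervals used in the analysis.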
Measurement data from recordings through cameras and the application, for several 15 min intervals, for different visibility conditions, for one intersection, are presented in Figure 4, Figure 5 and Figure 6 as well as Table 1, Table 2 and Table 3.
Figure 4.
Traffic count from camera and data processing in program (05:00–05:15).
Figure 5.
Traffic count from camera and data processing in program (time: 19:00–19:15).
Figure 6.
Traffic count from camera and data processing in program (time: 21:00–21:15).
Table 1.
Results obtained from camera count in the morning.
Table 2.
Results obtained from camera count during the day.
Table 3.
Results obtained from camera count in the evening.
The results of automatic measurements from the video recording for the period 05:00–05:15, for different categories of vehicles in traffic are presented in Table 1.
The results of automatic measurements from the video recording for the period 19:00–19:15 (Figure 5), for different categories of vehicles in traffic are presented in Table 2.
The results of automatic measurements from the video recording (Figure 6) for the period 21:00–21:25, for different categories of vehicles in traffic, are presented in Table 3.
Figure 7.
Road traffic network in “Synchro” model.
The summary results, as shown in Table 4 and Table 5 and Figure 8, Figure 9 and Figure 10, prove the quality of the data obtained from the automatic count in terms of accuracy of measurements and adequate categorization of vehicles participating in the traffic.
Table 4.
Results obtained from camera count in the different time periods.
Table 5.
Result obtained by “Synchro” model.
Figure 8.
Difference between automatic and manual vehicle counts (time 21:10–21:25).
Figure 9.
Difference between automatic and manual vehicle counts (time 19:00–19:15).
Figure 10.
Difference between automatic and manual counts of vehicles (time 05:00–05:15).
Comparing the results obtained from the automatic method with the manual counts performed by the counting personnel, the largest deviations in data quality are observed in the period of limited overnight visibility (21:10–21:25).
In this period, the counting data had a deviation of about 9.5%, while in the other measurement periods, when visibility was good, the differences between the manual counting method and the automatic method through the applied program showed a deviation of about 4.87% for the period 19:00–19:15 and 4.54% for the period 05:00–05:15. Camera position is also important for the quality of automatic measurements and accurate vehicle categorization; in this case, the cameras were mounted at an angle to the direction of vehicle movement, so the accuracy loss was likely greater. These deviations are especially pronounced in the process of categorizing vehicles, due to the reduced opportunities for accurate recognition and measurement of the dimensions of these vehicles.
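The deviations quoted above follow from a simple relative-difference calculation between the two counts per interval. The vehicle totals below are hypothetical placeholders, chosen only for illustration, since the paper reports percentages rather than raw totals here:

```python
def deviation_percent(manual, automatic):
    """Relative difference of the automatic count from the manual reference."""
    return abs(manual - automatic) / manual * 100

# Hypothetical 15-25 min totals (manual, automatic) for illustration
intervals = {
    "05:00-05:15": (220, 210),
    "19:00-19:15": (410, 390),
    "21:10-21:25": (200, 181),
}
for name, (manual, auto) in intervals.items():
    print(f"{name}: {deviation_percent(manual, auto):.2f}%")
```

With these placeholder totals the three intervals come out at roughly 4.5%, 4.9%, and 9.5%, matching the order of magnitude reported in the text.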
3. Impact on Qualitative Traffic Indicators
The analysis and evaluation of the qualitative indicators of traffic operation are based on the basic data obtained from research, traffic surveys, or recordings; therefore, it is very important that these data be of the highest quality. In the concrete case of this research, based on the maximum difference of 9.5% between the sets of data obtained from manual and automatic counts, in the period with limited visibility at night (21:10–21:25), we used the “Synchro” model to analyze the impact of this difference on the results of important indicators of traffic operation such as the lane utilization factor, saturated flow, adjusted flow, lane group flow, v/s ratio, v/c ratio, progression factor, approach delay, and approach LOS. The construction of the road network in the “Synchro” model, including traffic modes and the traffic demands determined by the manual and automatic measurement methods, as well as the simulation of traffic operation, is presented in Figure 7 and Figure 11.
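As a simple illustration of two of these indicators, the flow ratio v/s and the volume-to-capacity ratio v/c for a signalized lane group follow directly from the demand volume, the saturation flow rate, and the effective green share; the numbers below are hypothetical, not taken from the Synchro runs:

```python
def flow_ratio(volume, sat_flow):
    # v/s: demand volume as a fraction of the saturation flow rate
    return volume / sat_flow

def vc_ratio(volume, sat_flow, green, cycle):
    # Lane-group capacity c = s * g/C, so v/c = v / (s * g/C)
    capacity = sat_flow * green / cycle
    return volume / capacity

v, s = 450, 1800          # veh/h demand, veh/h saturation flow (hypothetical)
g, C = 30, 90             # effective green (s) and cycle length (s)
print(round(flow_ratio(v, s), 3))      # 0.25
print(round(vc_ratio(v, s, g, C), 3))  # 0.75
```

This makes clear why a counting error propagates directly into v/s and v/c: both scale linearly with the measured volume, so a 9.5% count deviation shifts these ratios by at most the same 9.5%.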
Figure 11.
Road traffic simulation in “Sim” model.
The results of the qualitative traffic indicators obtained from the “Synchro” model are shown in Table 5. Referring to these results, no significant changes were observed in the traffic indicators analyzed through the “Synchro” model (Table 5); therefore, we can conclude that both the quality and the accuracy of the data obtained from the automatic method through the cameras and the analytical algorithm formulated in the programming language “Python” are high. As such, we recommend this model for traffic counts and the categorization of vehicles circulating in traffic, because even in the most extreme conditions, with the possibility of greater accuracy loss at night (conditions of limited visibility), the impact of these losses on the analysis of important traffic indicators is negligible.
4. Conclusions
The analysis and evaluation of the qualitative indicators of traffic operation are based on the basic data obtained from research, traffic surveys, or recordings; therefore, it is very important that these data be of the highest quality. Referring to the results obtained from the automatic method (the Python algorithm application) and the manual counts performed by the counting personnel, the largest deviations in data quality were observed in the period of limited overnight visibility (21:10–21:25). In this period, the counting data had a deviation of about 9.5%, while in the other measurement periods, when visibility was good, the differences between the manual counting method and the automatic method showed a deviation of about 4.87% for the period 19:00–19:15 and 4.54% for the period 05:00–05:15. These deviations are especially pronounced in the process of categorizing vehicles, due to the reduced opportunities for accurate recognition and measurement of the dimensions of these vehicles. No significant changes were observed in the results of the traffic indicators analyzed through the “Synchro” model; therefore, we can conclude that both the quality and the accuracy of the data obtained from the automatic method through the cameras and the analytical algorithm formulated in the programming language “Python” are high. This model provides accurate results even during traffic counts in conditions of poor visibility at night. As such, we recommend this model for traffic counts and the categorization of vehicles circulating in traffic because, even in the most extreme conditions, with the possibility of greater accuracy loss at night (conditions of limited visibility), the impact of these losses on the analysis of important traffic indicators is negligible.
The authors’ next study will focus on developing a methodology for measuring the accuracy of traffic counts and traffic quality indicators in adverse weather conditions such as snow, rain, and fog using edge computing and IoT.
Author Contributions
Conceptualization, G.H.; methodology, G.H.; software, A.F.; validation, G.H. and X.B.; formal analysis, X.B.; investigation, G.H.; resources, G.H.; data curation, G.H.; writing—original draft preparation, X.B.; writing—review and editing, X.B. and G.H.; visualization, X.B. and G.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
| LOS | level of service |
| N9 | national road of Kosovo |
| CNN | convolutional neural network |
| Blob | binary large object |
| v/s | the highest flow ratio for a given signal phase |
| v/c | volume capacity ratio |
References
- Gëzim, H.; Ahmet, S.; Ramë, L.; Xhevahir, B. Mathematical Model for Velocity Calculation of Three Types of Vehicles in the Case of Pedestrian Crash. Stroj. Časopis-J. Mech. Eng. 2018, 68, 95–110. [Google Scholar] [CrossRef]
- Hoxha, G. Urban Mobility; Dispensë për Përdorim të Brendshëm: Prishtinë, Kosovo, 2022. [Google Scholar]
- Dinh, H.; Tang, H. Simple method for camera calibration of roundabout traffic scenes using a single circle. IET Intell. Transp. Syst. 2014, 8, 175–182. [Google Scholar] [CrossRef]
- Vehicle Counting, Classification & Detection using OpenCV & Python. Available online: https://techvidvan.com/tutorials/opencv-vehicle-detection-classification-counting (accessed on 5 January 2023).
- Sonnleitner, E.; Barth, O.; Palmanshofer, A.; Kurz, M. Traffic, Measurement and Congestion Detection Based on Real-Time Highway Video Data. Appl. Sci. 2020, 10, 6270. [Google Scholar] [CrossRef]
- Abdelwahab, M.A. Fast approach for efficient vehicle counting. Electron. Lett. 2019, 55, 20–22. [Google Scholar] [CrossRef]
- Zheng, L.; Li, J. Application of Fast P2P Traffic Recognition Technology Based on Decision Tree in the Detection of Network Traffic Data. J. Electr. Comput. Eng. 2022, 2022, 8320049. [Google Scholar] [CrossRef]
- Subedi, S.; Tang, H. Development of a multiple-camera 3D vehicle tracking system for traffic data collection at intersections. IET Intell. Transp. Syst. 2018, 13, 614–621. [Google Scholar] [CrossRef]
- Gidaris, S.; Komodakis, N. Object detection via a multi-region & semantic segmentation-aware CNN model. In Proceedings of the IEEE International Conference on Computer Vision, Las Condes, Chile, 11–18 December 2015. [Google Scholar]
- Du, J. Understanding of object detection based on CNN family and YOLO. J. Phys. Conf. Ser. 2018, 1004, 012029. [Google Scholar] [CrossRef]
- Chandra, A.M.; Rawat, A. A Review on YOLO (You Look Only One)-An Algorithm for Real Time Object Detection. J. Eng. Sci. 2020, 11, 554–557. [Google Scholar]
- Hoxha, G.; Shala, A.; Likaj, R. Vehicle Speed Determination in Case of Road Accident by Software Method and Comparing of Results with the Mathematical Model. Stroj. Časopis-J. Mech. Eng. 2017, 67, 51–60. [Google Scholar] [CrossRef]
- Chen, Y.; Lu, J. Multi-Loop Vehicle-Counting Method under Gray Mode and RGB Mode. Appl. Sci. 2021, 11, 6831. [Google Scholar] [CrossRef]
- Harikrishnan, P.M.; Thomas, A.; Nisha, J.S.; Gopi, V.P.; Palanisamy, P. Pixel matching search algorithm for counting moving vehicle in highway traffic videos. Multimed. Tools Appl. 2020, 80, 3153–3172. [Google Scholar]
- Rosas-Arias, L.; Portillo-Portillo, J.; Hernandez-Suarez, A.; Olivares-Mercado, J.; Sanchez-Perez, G.; Toscano-Medina, K.; Perez-Meana, H.; Orozco, A.L.S.; Villalba, L.J.G. Vehicle counting in video sequences: An incremental subspace learning approach. Sensors 2019, 19, 2848. [Google Scholar] [CrossRef] [PubMed]
- YOLOv4 vs YOLOv4-Tiny. Available online: https://medium.com/analytics-vidhya/yolov4-vs-yolov4-tiny-97932b6ec8ec (accessed on 5 January 2023).
- Ojagh, S.; Cauteruccio, F.; Terracina, G.; Liang, S.H. Enhanced air quality prediction by edge-based spatiotemporal data preprocessing. Comput. Electr. Eng. 2021, 96, 107572. [Google Scholar] [CrossRef]
- Yao, J.; Cai, D.; Fan, X.; Li, B. Improving YOLOv4-Tiny’s Construction Machinery and Material Identification Method by Incorporating Attention Mechanism. Mathematics 2022, 10, 1453. [Google Scholar] [CrossRef]
- Fredianelli, L.; Carpita, S.; Bernardini, M.; Del Pizzo, L.G.; Brocchi, F.; Bianco, F.; Licitra, G. Traffic flow detection using camera images and machine learning methods in ITS for noise map and action plan optimization. Sensors 2022, 22, 1929. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).