# Estimation of Lane-Level Traffic Flow Using a Deep Learning Technique


## Featured Application

**This paper proposes an object detection and tracking system that can count vehicles, estimate the velocity of vehicles, and provide traffic flow estimations for traffic monitoring and control applications.**

## Abstract

## 1. Introduction

## 2. Related Works

## 3. Methods

#### 3.1. Process and Flow Chart

#### 3.2. Traffic Flow Calculation Model

Consider $n$ vehicles travelling along the road with velocities $v_1(t)$, $v_2(t)$, …, $v_n(t)$, and the acceleration equation: ${a}_{a}(t)={\dot{v}}_{a}(t)=\frac{d{v}_{a}(t)}{dt}$ [20].

If the leading vehicle travels with velocity $v_1$ and the following vehicle with velocity $v_2$, then according to the “follow the leader” model [20] the acceleration of the following vehicle is expressed as:

$${\dot{v}}_{2}(t)=\lambda \left[{v}_{1}(t)-{v}_{2}(t)\right]$$

where $\lambda$ denotes the sensitivity of the follower to the velocity difference.
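To make the dynamics concrete, the follow-the-leader relation above can be integrated numerically. This is a sketch only: the sensitivity coefficient, time step, and initial velocities are illustrative values, not taken from the paper.

```python
# Forward-Euler integration of the "follow the leader" model [20]:
# each follower's acceleration is proportional to the velocity gap
# to the vehicle directly ahead of it.

def follow_the_leader(v, lam=0.5, dt=0.1, steps=200):
    """Integrate dv_a/dt = lam * (v_{a-1} - v_a).

    v   -- initial velocities [v_1, ..., v_n]; v_1 is the leader and
           is assumed to hold a constant speed.
    lam -- sensitivity coefficient (illustrative value).
    """
    v = list(v)
    for _ in range(steps):
        # Update followers back-to-front so each update uses the
        # leader's velocity from the current time instant.
        for a in range(len(v) - 1, 0, -1):
            v[a] += lam * (v[a - 1] - v[a]) * dt
    return v

# A fast leader and two slower followers converge to the leader's speed.
print(follow_the_leader([30.0, 20.0, 10.0]))
```

As expected from the model, the followers' velocities relax toward the leader's constant speed.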

#### 3.3. Object Identification and Vehicle Tracking

#### 3.3.1. Pretrain and Retrain for Object Identification

#### 3.3.2. Object Tracking by Drawing Virtual Lines and Hot Zones

#### 3.3.3. Estimation of Velocities of Each Vehicle

Once the position of a vehicle has been obtained at two time instants ($t_1$ and $t_2$) using the aforementioned object tracking technique, the velocity of the vehicle can be calculated as follows:

$${v}_{a}=\frac{{p}_{a}({t}_{2})-{p}_{a}({t}_{1})}{{t}_{2}-{t}_{1}},\quad a=1, 2, \ldots, n$$

where

$p_1$, $p_2$, …, $p_n$: the positions of the vehicles;

$v_1$, $v_2$, …, $v_n$: the velocities of the vehicles.
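A minimal sketch of this velocity estimate follows, assuming a hypothetical pixel-to-metre calibration factor (`metres_per_pixel`) that would in practice come from camera calibration; the function name and constants are illustrative, not from the paper.

```python
# Speed from two tracked image positions: distance travelled between
# the two time instants divided by the elapsed time.

def estimate_speed_kmh(p1, p2, t1, t2, metres_per_pixel=0.05):
    """Return speed in km/h given two positions (pixels) and times (s)."""
    dx = (p2[0] - p1[0]) * metres_per_pixel
    dy = (p2[1] - p1[1]) * metres_per_pixel
    dist_m = (dx * dx + dy * dy) ** 0.5
    return dist_m / (t2 - t1) * 3.6  # m/s -> km/h

# 400 px in 1 s at 0.05 m/px is 20 m/s = 72 km/h.
print(estimate_speed_kmh((100, 50), (500, 50), t1=0.0, t2=1.0))  # 72.0
```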

## 4. Experiments

#### 4.1. Digital Image Processing

#### 4.2. Object Detection Results

#### 4.3. Vehicle Counting in Both Northbound and Southbound Directions

#### 4.4. Vehicle Counting in Each Lane in Both Northbound and Southbound Directions

#### 4.5. Velocity Estimation

#### 4.6. Velocity Level

## 5. Discussion

## 6. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Li, B.; Zhang, T.; Xia, T. Vehicle detection from 3D lidar using fully convolutional network. arXiv **2016**, arXiv:1608.07916.
- Tian, B.; Yao, Q.; Gu, Y.; Wang, K.; Li, Y. Video processing techniques for traffic flow monitoring: A survey. In Proceedings of the 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 1103–1108.
- Lv, Y.; Duan, Y.; Kang, W.; Li, Z.; Wang, F.-Y. Traffic flow prediction with big data: A deep learning approach. IEEE Trans. Intell. Transp. Syst. **2015**, 16, 865–873.
- Polson, N.G.; Sokolov, V.O. Deep learning for short-term traffic flow prediction. Transp. Res. Part C Emerg. Technol. **2017**, 79, 1–17.
- McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. **1943**, 5, 115–133.
- Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G. Human-level control through deep reinforcement learning. Nature **2015**, 518, 529.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision **2015**, 115, 211–252.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv **2018**, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv **2020**, arXiv:2004.10934.
- Wang, C.-Y.; Liao, H.-Y.M.; Wu, Y.-H.; Chen, P.-Y.; Hsieh, J.-W.; Yeh, I.-H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 390–391.
- Ciaparrone, G.; Sánchez, F.L.; Tabik, S.; Troiano, L.; Tagliaferri, R.; Herrera, F. Deep learning in video multi-object tracking: A survey. Neurocomputing **2020**, 381, 61–88.
- Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649.
- Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3464–3468.
- Wang, Z.; Zheng, L.; Liu, Y.; Wang, S. Towards real-time multi-object tracking. arXiv **2019**, arXiv:1909.12605.
- Fedorov, A.; Nikolskaia, K.; Ivanov, S.; Shepelev, V.; Minbaleev, A. Traffic flow estimation with data from a video surveillance camera. J. Big Data **2019**, 6, 1–15.
- Santos, A.M.; Bastos-Filho, C.J.; Maciel, A.M.; Lima, E. Counting vehicles with high precision in Brazilian roads using YOLOv3 and Deep SORT. In Proceedings of the 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Porto de Galinhas, Brazil, 7–10 November 2020; pp. 69–76.
- Punn, N.S.; Sonbhadra, S.K.; Agarwal, S. Monitoring COVID-19 social distancing with person detection and tracking via fine-tuned YOLO v3 and Deepsort techniques. arXiv **2020**, arXiv:2005.01385.
- Qiu, Z.; Zhao, N.; Zhou, L.; Wang, M.; Yang, L.; Fang, H.; He, Y.; Liu, Y. Vision-based moving obstacle detection and tracking in paddy field using improved YOLOv3 and Deep SORT. Sensors **2020**, 20, 4082.
- Seibold, B. A mathematical introduction to traffic flow theory. In Proceedings of the Mathematical Approaches for Traffic Flow Management Tutorials, Institute for Pure and Applied Mathematics, UCLA, Los Angeles, CA, USA, 8–11 December 2015.
- Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 740–755.
- Kavitha, T.; Sridharan, D. Security vulnerabilities in wireless sensor networks: A survey. J. Inf. Assur. Secur. **2010**, 5, 31–44.
- Djenna, A.; Harous, S.; Saidouni, D.E. Internet of Things meet Internet of Threats: New concern cyber security issues of critical cyber infrastructure. Appl. Sci. **2021**, 11, 4580.

**Figure 1.** Object tracking procedure of SORT [14].

**Figure 2.** Object tracking procedure of DeepSORT [13].

**Figure 7.** Object detection results on the video taken on National Freeway No. 1 (**a**–**c**). The identified vehicles were wrapped with their respective bounding boxes. The colors of the bounding boxes represent the categories of the vehicles: cars (magenta), trucks (blue), and buses (green).

**Figure 8.** The results of vehicle counting in both the northbound and southbound lanes from the video taken on National Freeway No. 1 (**a**–**c**). The red line represents the virtual line in the northbound direction, while the blue line represents the virtual line in the southbound direction. Note that only the vehicles approaching the virtual lines were tracked (marked with green boxes).

**Figure 9.** The results of vehicle counting in each lane in both the northbound and southbound directions from the video taken on National Freeway No. 1 (**a**,**b**). The red line represents the connected virtual lines in the three northbound lanes, while the blue line represents the connected virtual lines in the three southbound lanes. The green boxes mark the vehicles tracked in the hot zones.

**Table 1.** The architecture of DarkNet-53 [9].

| Repeat | Type | Filters | Size | Output |
|---|---|---|---|---|
| | Convolutional | 32 | 3 × 3 | 256 × 256 |
| | Convolutional | 64 | 3 × 3/2 | 128 × 128 |
| 1× | Convolutional | 32 | 1 × 1 | |
| | Convolutional | 64 | 3 × 3 | |
| | Residual | | | 128 × 128 |
| | Convolutional | 128 | 3 × 3/2 | 64 × 64 |
| 2× | Convolutional | 64 | 1 × 1 | |
| | Convolutional | 128 | 3 × 3 | |
| | Residual | | | 64 × 64 |
| | Convolutional | 256 | 3 × 3/2 | 32 × 32 |
| 8× | Convolutional | 128 | 1 × 1 | |
| | Convolutional | 256 | 3 × 3 | |
| | Residual | | | 32 × 32 |
| | Convolutional | 512 | 3 × 3/2 | 16 × 16 |
| 8× | Convolutional | 256 | 1 × 1 | |
| | Convolutional | 512 | 3 × 3 | |
| | Residual | | | 16 × 16 |
| | Convolutional | 1024 | 3 × 3/2 | 8 × 8 |
| 4× | Convolutional | 512 | 1 × 1 | |
| | Convolutional | 1024 | 3 × 3 | |
| | Residual | | | 8 × 8 |
| | Avgpool | | Global | |
| | Connected | | 1000 | |
| | Softmax | | | |
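As a quick sanity check on the output column of the DarkNet-53 table: each 3 × 3/2 (stride-2) convolution halves the spatial resolution, so a 256 × 256 input passes through the five downsampling stages to reach 8 × 8.

```python
# Trace the spatial size through the five stride-2 convolutions
# listed in the DarkNet-53 architecture table.
size = 256
sizes = []
for _ in range(5):  # the five 3 x 3/2 convolutions
    size //= 2
    sizes.append(size)
print(sizes)  # [128, 64, 32, 16, 8]
```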

| Levels | Speed Range |
|---|---|
| Level 1 | 0 km/h ≤ v_{a} < 20 km/h, a = 1, 2, …, n |
| Level 2 | 20 km/h ≤ v_{a} < 40 km/h, a = 1, 2, …, n |
| Level 3 | 40 km/h ≤ v_{a} < 60 km/h, a = 1, 2, …, n |
| Level 4 | 60 km/h ≤ v_{a} < 80 km/h, a = 1, 2, …, n |
| Level 5 | v_{a} ≥ 80 km/h, a = 1, 2, …, n |
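The five speed levels above can be expressed as a small binning helper; the function name is illustrative, not from the paper.

```python
# Map a speed in km/h to the five velocity levels defined in the
# table: 20 km/h bins, with level 5 covering 80 km/h and above.

def speed_level(v_kmh):
    """Return the velocity level (1-5) for a non-negative speed in km/h."""
    if v_kmh < 0:
        raise ValueError("speed must be non-negative")
    return min(int(v_kmh // 20) + 1, 5)

print([speed_level(v) for v in (5, 25, 55, 79.9, 80, 120)])  # [1, 2, 3, 4, 5, 5]
```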

| Direction | Lane | Actual | Detected | Error |
|---|---|---|---|---|
| Northbound (toward the camera) | Lane 1 | 44 | 56 | 27.3% |
| | Lane 2 | 60 | 59 | 1.7% |
| | Lane 3 | 61 | 76 | 24.6% |
| Southbound (away from the camera) | Lane 4 | 70 | 78 | 11.4% |
| | Lane 5 | 70 | 68 | 2.9% |
| | Lane 6 | 50 | 51 | 2.0% |
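The error percentages in the per-lane counting results follow from |detected − actual| / actual; this sketch reproduces them from the tabulated counts.

```python
# Recompute the per-lane counting errors from the (actual, detected)
# pairs transcribed from the table above.
counts = {
    1: (44, 56), 2: (60, 59), 3: (61, 76),   # northbound lanes
    4: (70, 78), 5: (70, 68), 6: (50, 51),   # southbound lanes
}

for lane, (actual, detected) in counts.items():
    error = abs(detected - actual) / actual * 100
    print(f"Lane {lane}: {error:.1f}%")
```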

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Liu, C.-M.; Juang, J.-C.
Estimation of Lane-Level Traffic Flow Using a Deep Learning Technique. *Appl. Sci.* **2021**, *11*, 5619.
https://doi.org/10.3390/app11125619
