An Automatic Conflict Detection Framework for Urban Intersections Based on an Improved Time Difference to Collision Indicator
Abstract
1. Introduction
2. Related Works
2.1. Deep Learning-Based Vehicle Detection
2.2. Object Tracking
2.3. Traffic Conflict Indicators
3. Deep Learning-Based Vehicle Detection and Trajectory Estimation
3.1. Unmanned Aerial Vehicle-Based Vehicle Detection Using Sparse R-CNN
Sparse R-CNN-Based Vehicle Detection at Intersections
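The body of this subsection is omitted in this excerpt. As a hedged sketch only (the paper does not state its software stack; MMDetection, the config path, and the checkpoint file below are assumptions), a pretrained Sparse R-CNN model can be applied to UAV intersection frames as follows:

```python
# Hypothetical sketch using the open-source MMDetection toolbox (mmdet 2.x API);
# not necessarily the authors' implementation.
from mmdet.apis import init_detector, inference_detector

# Illustrative paths to a Sparse R-CNN config/checkpoint from the MMDetection
# model zoo (assumed; fine-tuning on the UAV dataset of Section 5.1 is omitted).
config = 'configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py'
checkpoint = 'checkpoints/sparse_rcnn_r50_fpn_1x_coco.pth'

model = init_detector(config, checkpoint, device='cuda:0')
# One array of [x1, y1, x2, y2, score] rows per class (car, bus, truck, ...).
result = inference_detector(model, 'frame_0001.jpg')
```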
3.2. Vehicle Trajectory Extraction Algorithm
- Read the video image sequence and the corresponding vehicle detection results.
- Extract the detection results of a single frame.
- If the frame is the first frame, initialize the trackers, i.e., create and number a tracker for every vehicle detected in the first frame of the video; otherwise, create and number trackers only for vehicles located in the designated tracking areas (the entrances and exits of the intersection in all four directions).
- Update the tracker for each vehicle and make a bounding box prediction for the next frame.
- Obtain the detection results for the next frame and compute the intersection-over-union (IoU) between each tracker's predicted bounding box and every detected bounding box, keeping only the pairs whose IoU meets the set threshold.
- Assign a detection result to each existing tracker using a linear assignment algorithm, i.e., match the output of the vehicle detection algorithm to the corresponding vehicle number (a sketch of this association step follows the list).
- Return to Step 2 and repeat until the entire video sequence has been processed.
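A minimal sketch of the association in Steps 5 and 6, assuming axis-aligned [x1, y1, x2, y2] boxes and using SciPy's Hungarian solver for the linear assignment; the per-tracker motion prediction of Step 4 is omitted:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted, detections, iou_threshold=0.3):
    """Match each tracker's predicted box to a next-frame detection.

    Returns (tracker_index, detection_index) pairs whose IoU meets the
    threshold; unmatched trackers and detections are left out.
    """
    if not predicted or not detections:
        return []
    cost = np.zeros((len(predicted), len(detections)))
    for i, p in enumerate(predicted):
        for j, d in enumerate(detections):
            cost[i, j] = -iou(p, d)  # maximize IoU = minimize -IoU
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if -cost[i, j] >= iou_threshold]
```

Negating the IoU turns the maximization into the minimization form expected by `linear_sum_assignment`; pairs below the IoU threshold are discarded, and unmatched detections inside the monitored entry areas would spawn new trackers as in Step 3.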
4. Vehicle Conflict Indicator Based on the Improved Time Difference to Conflict
4.1. The TDTC Calculation Method Applied to Vehicles at Intersections
4.1.1. Time Difference to Conflict Considering Vehicle Size
4.1.2. Calculation of the Vehicle Speed and Direction
4.1.3. Calculation of the Potential Conflict Distance of a Vehicle
4.1.4. Time Difference to Conflict Calculation and Threshold Determination
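Sections 4.1.1–4.1.4 outline the computation. As a hedged illustration only (the paper's exact size-aware formulation is developed in Section 4.1.1 and is not reproduced here), the conventional time difference to conflict for two vehicles approaching a shared conflict point is:

```latex
t_i = \frac{d_i}{v_i}, \qquad
\mathrm{TDTC} = \lvert t_1 - t_2 \rvert
              = \left\lvert \frac{d_1}{v_1} - \frac{d_2}{v_2} \right\rvert
```

where d_i is vehicle i's remaining distance to the potential conflict point (Section 4.1.3) and v_i its speed estimated from the trajectory (Section 4.1.2); a conflict is flagged when TDTC falls below the threshold determined in Section 4.1.4. Considering vehicle size (Section 4.1.1) amounts to measuring d_i to the vehicle's bounding box rather than to a single center point.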
5. Experiments
5.1. Experimental Data
Dataset
5.2. Vehicle Detection Experiments
5.2.1. Setup
5.2.2. Evaluation Metrics
5.2.3. Model Training Results
5.2.4. Comparison of the Actual Detection Effect of the Models
5.3. Experiments of Vehicle Trajectory Extraction
Comparison of Different Trajectory Extraction Algorithms
5.4. Time Difference to Conflict Indicator Comparison
5.4.1. Experimental Results and Analysis
Experimental Results
5.5. Vehicle Conflict Detection Based on Sparse R-CNN with Improved TDTC Metrics
Comprehensive Experiments
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
References
- Tageldin, A.; Sayed, T. Developing evasive action-based indicators for identifying pedestrian conflicts in less organized traffic environments. J. Adv. Transp. 2016, 50, 1193–1208.
- Wang, D.; Wu, C.; Li, C.; Zou, Y.; Zou, Y.; Li, K. Design of Vehicle Accident Alarm System for Sudden Traffic Accidents. World Sci. Res. J. 2021, 7, 169–175.
- Qi, Y.G.; Brian, L.; Guo, S.J. Freeway Accident Likelihood Prediction Using a Panel Data Analysis Approach. J. Transp. Eng. 2007, 133, 149–156.
- Fu, C.; Sayed, T.; Zheng, L. Multi-type Bayesian hierarchical modeling of traffic conflict extremes for crash estimation. Accid. Anal. Prev. 2021, 160, 106309.
- Uzondu, C.; Jamson, S.; Lai, F. Exploratory study involving observation of traffic behaviour and conflicts in Nigeria using the Traffic Conflict Technique. Saf. Sci. 2018, 110, 273–284.
- Vuong, T.Q. Traffic Conflict Technique Development for Traffic Safety Evaluation under Mixed Traffic Conditions of Developing Countries. J. Traffic Transp. Eng. 2017, 5, 228–235.
- Olszewski, P.; Osińska, B.; Szagała, P.; Włodarek, P.; Niesen, S.; Kidholm, O.; Madsen, T.; Van Haperen, W.; Johnsson, C.; Laureshyn, A.; et al. Review of Current Study Methods for VRU Safety. Part 1—Main Report; Warsaw University of Technology: Warsaw, Poland, 2016.
- Hayward, J.C. Near miss determination through use of a scale of danger. In Proceedings of the 51st Annual Meeting of the Highway Research Board, Washington, DC, USA, 17–21 January 1972; pp. 24–34.
- Cooper, P.J. Experience with Traffic Conflicts in Canada with Emphasis on Post Encroachment Time Techniques. In International Calibration Study of Traffic Conflict Techniques; Springer: Berlin/Heidelberg, Germany, 1984; pp. 75–96.
- Cooper, D.; Ferguson, N. A conflict simulation model. Traffic Eng. Control 1976, 17, 306–309.
- Golakiya, H.D.; Chauhan, R.; Dhamaniya, A. Mapping Pedestrian-Vehicle Behavior at Urban Undesignated Mid-Block Crossings under Mixed Traffic Environment—A Trajectory-Based Approach. Transp. Res. Procedia 2020, 48, 1263–1277.
- Zhao, C.; Zheng, H.; Sun, Y.; Liu, B.; Zhou, Y.; Liu, Y.; Zheng, X. Fabrication of Tannin-Based Dithiocarbamate Biosorbent and Its Application for Ni(II) Ion Removal. Water Air Soil Pollut. 2017, 228, 1–15.
- Rifai, M.; Budiman, R.A.; Sutrisno, I.; Khumaidi, A.; Ardhana, V.Y.P.; Rosika, H.; Tibyani Setiyono, M.; Muhammad, F.; Rusmin, M.; Fahrizal, A. Dynamic time distribution system monitoring on traffic light using image processing and convolutional neural network method. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1175, 012005.
- Cao, N.; Huo, W.; Lin, T.; Wu, G. Application of convolutional neural networks and image processing algorithms based on traffic video in vehicle taillight detection. Int. J. Sens. Netw. 2021, 35, 181–192.
- Bautista, C.M.; Dy, C.A.; Manalac, M.I.; Orbe, R.A.; Cordel, M. Convolutional Neural Network for Vehicle Detection in Low Resolution Traffic Videos. In Proceedings of the 2016 IEEE Region 10 Symposium (TENSYMP), Bali, Indonesia, 9–11 May 2016; pp. 277–281.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Bochkovskiy, A.; Wang, C.; Liao, H.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- Xu, Y.; Yu, G.; Wang, Y.; Wu, X. Car Detection from Low-Altitude UAV Imagery with the Faster R-CNN. J. Adv. Transp. 2017, 2017, 1–10.
- Sun, P.; Zhang, R.; Jiang, Y.; Kong, T.; Xu, C.; Zhan, W.; Tomizuka, M.; Li, L.; Yuan, Z.; Wang, C.; et al. Sparse R-CNN: End-to-End Object Detection with Learnable Proposals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 14454–14463.
- Cao, J.; Zhang, J.; Jin, X. A Traffic-Sign Detection Algorithm Based on Improved Sparse R-CNN. IEEE Access 2021, 9, 122774–122788.
- Uijlings, J.R.R.; van de Sande, K.E.A.; Gevers, T.; Smeulders, A.W.M. Selective Search for Object Recognition. Int. J. Comput. Vis. 2013, 104, 154–171.
- Grabner, H.; Bischof, H. On-Line Boosting and Vision. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 260–267.
- Felzenszwalb, P.F.; Girshick, R.B.; McAllester, D.; Ramanan, D. Object Detection with Discriminatively Trained Part-Based Models. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1627–1645.
- Kalal, Z.; Mikolajczyk, K.; Matas, J. Tracking-Learning-Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1409–1422.
- Wu, Y.; Lim, J.; Yang, M. Online Object Tracking: A Benchmark. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2411–2418.
- Yun, S.; Choi, J.; Yoo, Y.; Yun, K. Action-Decision Networks for Visual Tracking with Deep Reinforcement Learning. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1349–1358.
- Fan, H.; Ling, H. SANet: Structure-Aware Network for Visual Tracking. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 2217–2224.
- Li, P.; Wang, D.; Wang, L.; Lu, H. Deep visual tracking: Review and experimental comparison. Pattern Recognit. 2018, 76, 323–338.
- Chen, Y.; Wang, J.; Xia, R.; Zhang, Q.; Cao, Z.; Yang, K. The visual object tracking algorithm research based on adaptive combination kernel. J. Ambient Intell. Humaniz. Comput. 2019, 10, 4855–4867.
- Mahanta, G.B.; Rout, A.; Biswal, B.B.; Deepak, B.B.V.L. An improved multi-objective antlion optimization algorithm for the optimal design of the robotic gripper. J. Exp. Theor. Artif. Intell. 2020, 32, 309–338.
- Ping, C.; Dan, Y. Improved Faster RCNN Approach for Vehicles and Pedestrian Detection. Int. Core J. Eng. 2020, 6, 119–124.
- Zheng, L.; Sayed, T. Bayesian hierarchical modeling of traffic conflict extremes for crash estimation: A non-stationary peak over threshold approach. Anal. Methods Accid. Res. 2019, 24, 100106.
- Puri, A.; Valavanis, K.P.; Kontitsis, M. Statistical profile generation for traffic monitoring using real-time UAV based video data. In Proceedings of the 2007 Mediterranean Conference on Control & Automation, Athens, Greece, 27–29 June 2007; pp. 1–6.
- Meng, X.H.; Zhang, Z.Z.; Shi, Y.Y. Research on Traffic Safety on Freeway Merging Sections Based on TTC and PET. Appl. Mech. Mater. 2014, 587, 2224–2229.
- Jiang, R.; Zhu, S.; Wang, P.; Chen, Q.; Zou, H.; Kuang, S.; Cheng, Z. In Search of the Consequence Severity of Traffic Conflict. J. Adv. Transp. 2020, 2020, 9089817.
- St-Aubin, P.; Saunier, N.; Miranda-Moreno, L. Large-scale automated proactive road safety analysis using video data. Transp. Res. Part C 2015, 58, 363–379.
- Charly, A.; Mathew, T.V. Estimation of traffic conflicts using precise lateral position and width of vehicles for safety assessment. Accid. Anal. Prev. 2019, 132, 105264.
- Goecke, R.; Asthana, A.; Pettersson, N.; Pettersson, L. Visual vehicle egomotion estimation using the Fourier-Mellin transform. In Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 450–455.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
- Yaodong, W.; Yuren, C. A Method of Identifying Serious Conflicts of Motor and Non-motor Vehicles during Passing Maneuvers. J. Transp. Inform. Safety 2015, 4, 61–68.
| Parameter | Range/Value |
|---|---|
| Time | 7:00 a.m.–9:00 a.m. and 5:00 p.m.–7:00 p.m. |
| Weather | No rain, no strong wind |
| Temperature | 18 °C to 26 °C |
| Resolution | 3840 × 2160 |
| Frame rate | 30 frames/s |
| Total duration | About 70 min |
| Number of road sections | 5 |
| Class | RS1 | RS2 | RS3 | RS4 | RS5 | Total |
|---|---|---|---|---|---|---|
| Car | 12,265 | 9754 | 6355 | 8483 | 15,605 | 52,462 |
| Bus | 3082 | 940 | 704 | 181 | 1411 | 6318 |
| Truck | 333 | 287 | 146 | 3177 | 1729 | 5672 |
| Total | 15,680 | 10,981 | 7205 | 11,841 | 18,745 | 64,452 |
| Dataset | Car | Bus | Truck | Sum |
|---|---|---|---|---|
| Training set | 47,090 | 5671 | 5093 | 57,854 |
| Test set | 5372 | 647 | 579 | 6598 |
| Sum | 52,462 | 6318 | 5672 | 64,452 |
| Model | Faster R-CNN | RetinaNet | Sparse R-CNN |
|---|---|---|---|
| Training time | 71 h 50 min | 70 h 48 min | 83 h |
| Inference speed | 28.1 fps | 27.7 fps | 27.5 fps |
| Model | mAP (%) | AP50 (%) | AP75 (%) |
|---|---|---|---|
| Sparse R-CNN | 76.27 | 96.89 | 93.46 |
| Faster R-CNN | 72.47 | 90.12 | 88.90 |
| RetinaNet | 71.79 | 90.15 | 88.84 |
| YOLO | 73.70 | 94.80 | 90.85 |
| Type (Sparse R-CNN) | TP | FP | FN |
|---|---|---|---|
| Bus | 45 | 8 | 3 |
| Car | 325 | 4 | 8 |
| Truck | 23 | 10 | 1 |
| Sum | 393 | 22 | 12 |
| Type (Faster R-CNN) | TP | FP | FN |
|---|---|---|---|
| Bus | 46 | 6 | 2 |
| Car | 327 | 10 | 6 |
| Truck | 24 | 21 | 0 |
| Sum | 397 | 37 | 8 |
| Type (RetinaNet) | TP | FP | FN |
|---|---|---|---|
| Bus | 46 | 4 | 2 |
| Car | 312 | 16 | 21 |
| Truck | 24 | 20 | 0 |
| Sum | 382 | 40 | 23 |
| Model | Precision Ratio P (%) | Recall Ratio R (%) | F1-Score (%) |
|---|---|---|---|
| Sparse R-CNN | 94.70 | 97.04 | 95.85 |
| Faster R-CNN | 91.48 | 98.02 | 94.64 |
| RetinaNet | 90.52 | 94.32 | 92.38 |
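The three TP/FP/FN tables above correspond, in order, to Sparse R-CNN, Faster R-CNN, and RetinaNet; the precision and recall in this table follow directly from their column sums. A quick check of the Sparse R-CNN row:

```python
# Verify the Sparse R-CNN row from its summed detection counts (TP=393, FP=22, FN=12).
def prf(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = prf(tp=393, fp=22, fn=12)
print(f"P={p:.2%}  R={r:.2%}  F1={f1:.2%}")  # P=94.70%  R=97.04%  F1=95.85%
```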
| Algorithm | MIL | CSRT | KCF | Ours |
|---|---|---|---|---|
| Total error (pixels) | 4358.12 | 2640.48 | 895.16 | 646.80 |
| Average error per frame (pixels) | 14.67 | 8.90 | 3.01 | 1.26 |
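This excerpt does not define the pixel-error metric; a plausible reading, stated here as an assumption, is the Euclidean distance between the tracked and annotated box centers, summed over the sequence (total error) and divided by the frame count (average error per frame):

```python
import numpy as np

def center(box):
    """Center (cx, cy) of an [x1, y1, x2, y2] box."""
    return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

def pixel_errors(tracked, ground_truth):
    """Total and per-frame mean center distance in pixels (assumed metric)."""
    errors = [np.linalg.norm(center(t) - center(g))
              for t, g in zip(tracked, ground_truth)]
    return sum(errors), sum(errors) / len(errors)
```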
| Ground Truth | Judged Non-Conflict | Judged Conflict |
|---|---|---|
| Non-conflict samples | 24 | 9 |
| Conflict samples | 5 | 62 |
| Method | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|
| Vehicle size considered | 86.00 | 87.30 | 92.50 | 89.80 |
| Vehicle size not considered | 81.00 | 90.00 | 80.60 | 85.04 |
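Taking conflict as the positive class, the "vehicle size considered" row reproduces (up to the table's rounding) the confusion matrix above:

```python
# Counts read from the confusion matrix: TP=62, FN=5, FP=9, TN=24.
tp, fn, fp, tn = 62, 5, 9, 24
accuracy  = (tp + tn) / (tp + tn + fp + fn)         # 0.8600 -> 86.00%
precision = tp / (tp + fp)                          # 0.8732 -> ~87.3%
recall    = tp / (tp + fn)                          # 0.9254 -> ~92.5%
f1 = 2 * precision * recall / (precision + recall)  # 0.8986 -> ~89.8%
```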
| Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|
| Sparse R-CNN | 82.00 | 87.69 | 85.07 | 86.36 |
| Faster R-CNN | 80.00 | 92.73 | 76.12 | 83.61 |
| RetinaNet | 77.00 | 95.83 | 68.66 | 80.00 |