Multi-Tracking Sensor Architectures for Reconstructing Autonomous Vehicle Crashes: An Exploratory Study
Abstract
1. Introduction
2. Related Works
3. The Proposed Method and System Design
3.1. Process Flow of SMTPE
3.2. Formulas for Developing SMTPE
3.2.1. Tracking Objects
3.2.2. State Update Model
3.2.3. Measurement Model
3.2.4. Generalized Optimal Sub-Pattern Assignment Metric
3.3. Crash Dataset for Tracking the Performance Evaluation and Classification of the Crash Type
- Front crash (the ego vehicle and the other vehicle are traveling in the same direction)
- Head-on crash (the ego vehicle and the other vehicle are traveling in opposite directions)
- Rear-end crash (the ego vehicle and the other vehicle are traveling in the same direction)
- Side-impact crash (the other vehicle strikes the ego vehicle’s left or right side from any direction). A simple rule-based classification sketch follows this list.
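The classification above depends only on the relative travel direction of the two vehicles and on which part of the ego vehicle is struck. The following MATLAB sketch is a minimal, illustrative rule-based labeling under those definitions; the function name classifyCrashType, the impactSide input, and the 150° heading threshold are assumptions for illustration rather than the classifier used in the experiments.

```matlab
% Illustrative sketch only: label a crash type from the relative heading of the
% two vehicles and the struck part of the ego vehicle. The function name, the
% impactSide input, and the 150-degree threshold are assumptions.
function crashType = classifyCrashType(egoYawDeg, otherYawDeg, impactSide)
    % Relative travel direction, wrapped to [0, 180] degrees.
    headingDiff = abs(mod(egoYawDeg - otherYawDeg + 180, 360) - 180);

    if any(strcmp(impactSide, {'left', 'right'}))
        crashType = 'side-impact';      % ego vehicle struck on its left or right side
    elseif headingDiff > 150
        crashType = 'head-on';          % vehicles traveling in roughly opposite directions
    elseif strcmp(impactSide, 'front')
        crashType = 'front';            % same direction, ego vehicle's front involved
    else
        crashType = 'rear-end';         % same direction, ego vehicle's rear involved
    end
end
```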
3.4. Experimental Setup for Multi-Sensor-Based Surround Vehicle Sensor Fusion
- Head-on crash (CIREN accident no. 664, hereafter CIREN-664);
- Rear-end crash (CIREN accident no. 816, hereafter CIREN-816);
- Side-impact crash (CIREN accident no. 226, hereafter CIREN-226).
3.4.1. Sensor Setup
3.4.2. Tracking Architecture (TA) Setup
4. Results
- The 200 ms sensor update rate produced the smallest data size and the fastest processing for all the planned sensor setups, but it missed a few object detections.
- In contrast, the 50 ms sensor update rate produced the largest data size, with detailed detections of the environment in the simulated scenarios.
- Of the three tracking architectures, TA3 (track-to-track fusion using radar, cameras, and LIDAR) required the most processing time, compared with TA1 (centralized fusion using radar and cameras) and TA2 (centralized fusion using cameras and LIDAR). A minimal configuration sketch of these three architectures follows this list.
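The following MATLAB sketch illustrates how the three tracking architectures can be assembled with the Sensor Fusion and Tracking Toolbox. It is a minimal sketch under assumed settings: the assignment threshold, the covariance-intersection fusion option, and the use of the built-in initcvekf in place of the helper filter-initialization functions listed in the tracking architecture setup are illustrative choices, not the exact configuration used in the experiments.

```matlab
% Minimal sketch (assumed settings) of the three tracking architectures.

% TA1: centralized JPDA tracker fed with radar + camera detections,
% initialized with a constant-velocity extended Kalman filter.
ta1 = trackerJPDA('TrackerIndex', 1, ...
                  'FilterInitializationFcn', @initcvekf, ...
                  'AssignmentThreshold', 30);         % assumed gating threshold

% TA2: centralized JPDA tracker fed with LIDAR + camera detections
% (shown with the same constant-velocity EKF initialization for brevity).
ta2 = trackerJPDA('TrackerIndex', 2, 'FilterInitializationFcn', @initcvekf);

% TA3: decentralized architecture; sensor-level trackers feed a track-level fuser.
fuser = trackFuser('FuserIndex', 3, ...
                   'MaxNumSources', 2, ...
                   'SourceConfigurations', {fuserSourceConfiguration(1), fuserSourceConfiguration(2)}, ...
                   'StateFusion', 'Intersection');    % covariance-intersection fusion (assumed)

% Per simulation step (detections and simTime come from the scenario loop):
%   tracks1 = ta1(radarCameraDetections, simTime);
%   tracks2 = ta2(lidarCameraDetections, simTime);
%   fused   = fuser([tracks1; tracks2], simTime);
```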
4.1. Multi-Sensor-Based Object Detection and Evaluation
4.2. Evaluation of the Tracking Performance
5. Discussion
- The size of the sensor-based data varies with the driving scenario’s environment and the duration of the trajectory. The data size should be optimal because sensor-based data are supplied as the input for vehicle control and trajectory planning. The SMTPE development process adopted the concept of using the sampling time for better tracking results [14]. We applied three sensor update rates (200 ms, 100 ms, and 50 ms) to determine the optimal amount of data generated from the vehicle sensors. In this part of the experiments, the 100 ms update rate produced the required object detections at an optimal data size. Working with the optimal data generated from the sensors ensures better input for enhancing tracking performance.
- Vehicle tracking performance depends on sensor-based object detection, and proper positioning of the sensors on the vehicle is essential to achieve good detection coverage for surround vehicle sensor fusion. Our comparison of radar, cameras, and LIDAR shows that cameras achieved lower object detection rates than radar in the simulated crash scenarios. This finding concurs with previous knowledge that a single sensor’s perception has limitations, which can be mitigated by fusing multiple sensors [8]. Additionally, combining radar, cameras, or LIDAR for sensor fusion can enhance detection performance for autopiloting [9]. A minimal sensor-mounting sketch follows this list.
- The SMTPE confirms that the lower the error produced by sensor fusion-based object tracking, the better the tracking architecture. This study used three tracking architectures to examine all the crash scenario types (head-on, rear-end, and side-impact). We evaluated the tracking performance for AV crash reconstruction using the GOSPA metric [21,22,23] and found that the centralized architecture fusing radar and camera data performed best. The proposed SMTPE is a repeatable simulation method that can be used to select the best tracking architecture for future vehicle crash reconstruction and other AV improvement research.
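As a concrete example of the sensor mounting and update-rate choices discussed above, the MATLAB sketch below places a forward radar and a forward camera on the ego vehicle using the front radar and front camera positions from sensor setup S1 and the 100 ms update rate. It is a minimal, assumed setup built on the Automated Driving Toolbox detection generators, not the full multi-sensor script used in the experiments.

```matlab
% Minimal sketch (assumed setup): one forward radar and one forward camera from
% sensor setup S1, both running at the 100 ms update rate.
scenario   = drivingScenario('SampleTime', 0.1);      % 100 ms simulation step
egoVehicle = vehicle(scenario, 'ClassID', 1);         % ego vehicle the sensors are mounted on

frontRadar = radarDetectionGenerator( ...
    'SensorIndex', 1, ...
    'UpdateInterval', 0.1, ...                        % 100 ms sensor update rate
    'SensorLocation', [3.7 0], 'Height', 0.2, ...     % [x y] and z from the S1 setup
    'Yaw', 0, 'MaxRange', 160);

frontCamera = visionDetectionGenerator( ...
    'SensorIndex', 2, ...
    'UpdateInterval', 0.1, ...
    'SensorLocation', [2.95 0], 'Height', 1.1, ...
    'Pitch', 1, 'MaxRange', 250);

% Halving the interval to 0.05 s roughly doubles the logged detection data,
% while 0.2 s reduces it but can miss short-lived objects (see Section 4).
```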
Guidelines
- The datasets, or recordings of datasets from real scenarios, must be appropriate for the planned experiment.
- The initial setup of a scenario must be accurate so that the multi-sensor fusion produces the required, accurate information.
- Detailed knowledge is needed for sensing the surroundings across the different environments of the trajectories. Good expertise in the advantages and limitations of each sensor is also essential for dealing with real-world conditions such as perception at night, in rain or fog, or while driving on a mountainous road. Similarly, experimenting with different sensors (such as radar, cameras, and LIDAR) in multiple orientation-based setups can produce comprehensive results that help select the best experimental setup for the planned research.
- Comparing a set of tracking architectures yields a better architecture than experimenting with a single TA; it is therefore recommended that tracking performance be tested using multiple TA setups.
- Selecting an appropriate tracking algorithm and object tracker requires insight into their functionalities, and experimenting with different tracking algorithms and trackers can help select a better TA. A tracking metric delivers the performance evaluation, so the selected assessment metrics should be well established.
- Determining a suitable tracking threshold value is essential. A performance threshold (such as a 95% confidence level for tracking performance) can be used to identify the best tracking architecture. For example, one researcher may set a threshold of at least 80% tracking accuracy for AV obstacle detection, while another may require a tracking arrangement that achieves at least 90% target tracking accuracy for AV collision avoidance. In this research, our predefined threshold was to accept at most 10% error in a tracking arrangement, setting the GOSPA error, localization error, missed target error, and false detection error at less than or equal to 10%. The tracking performance results satisfied this predefined threshold, with all factors below 5, as shown in Table 6. A minimal sketch of such a threshold check follows this list.
- Finally, it is highly recommended that object tracking performance be evaluated before pursuing any sensor fusion-based AV development to build a higher confidence level.
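To illustrate the threshold-based acceptance check recommended above, the following MATLAB sketch scores a tracking architecture with the GOSPA metric and compares the averaged error factors against a predefined limit. The metric settings, the trackLog/truthLog containers, and the threshold value are assumptions for illustration.

```matlab
% Minimal sketch (assumed settings and variable names): evaluate a tracking
% architecture with the GOSPA metric and apply a predefined acceptance threshold.
gospa = trackGOSPAMetric('Distance', 'posabs', 'CutoffDistance', 30);   % assumed settings

numSteps = numel(trackLog);        % trackLog/truthLog: per-step tracks and truths (assumed logs)
errors   = zeros(numSteps, 4);     % [GOSPA, localization, missed-target, false-track]
for k = 1:numSteps
    [errors(k, 1), errors(k, 2), errors(k, 3), errors(k, 4)] = ...
        gospa(trackLog{k}, truthLog{k});
end

avgErrors = mean(errors, 1);
threshold = 10;                    % predefined acceptance limit for every error factor
if all(avgErrors <= threshold)
    disp('Tracking architecture accepted: all averaged error factors are within the threshold.');
else
    disp('Tracking architecture rejected: at least one averaged error factor exceeds the threshold.');
end
```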
6. Conclusions
- The most important contribution of this research is that, based on our simulations, a centralized multi-sensor surround vehicle fusion tracking architecture (sensor setup S2 with tracking architecture TA1) is best for reconstructing crashes involving AVs. Our evaluations show that this setup can reconstruct crashes with a high degree of accuracy from data obtained under ideal conditions. No extra road infrastructure or external sensor data are required for such reconstruction, which makes autonomous vehicle crash reconstruction feasible in situations and contexts where such infrastructure is not available.
- We proposed a simulation method to select a good multi-tracking sensor architecture. The proposed method is expected to provide better input for future AV development.
- A further implication of this study is that it will be helpful when reconstructing an AV crash for forensic and investigative purposes, where there is a need for assurance that the sensors have captured all the required information, and that no important information has been lost.
- Finally, in addition to the research findings, brief guidelines are provided for repeating and reusing the proposed method in a similar research domain.
Challenges and Future Directions
- Selecting appropriate sensors and sensor arrangements is challenging, including the number of sensors of each type, their positions on the vehicle, and the calibration those sensors require.
- The size of the data generated from sensors depends on the tracking architecture and sampling rate used for simulation. Thus, capturing the required information without data loss is challenging.
- Selecting algorithms, filtering functions, object trackers, and a tracking architecture requires good domain knowledge.
- Creating complete 360-degree surround vehicle sensor fusion is a challenge. If unnecessarily many sensors are used to achieve 360-degree perception, the data produced will be larger, incurring processing and data storage costs. In contrast, detecting the full 360-degree surroundings with only a few sensors requires a wider field of view or a different focal length, which may be impractical. A simple coverage-check sketch follows this list.
- Sensor coverage may overlap during multi-sensor fusion, which can lead to inaccurate object detection.
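The coverage-check sketch referenced above estimates how much of the 360-degree surround a given sensor arrangement covers and how much of it is seen by more than one sensor. The yaw angles and fields of view below are assumed example values, not the exact sensor parameters used in the experiments.

```matlab
% Illustrative sketch (assumed sensor values): estimate 360-degree coverage and
% overlap for a set of sensors from their mounting yaw and horizontal field of view.
sensorYawDeg = [0, 45, -45, 140, -140];     % mounting yaw of each sensor (assumed)
sensorFovDeg = [20, 90, 90, 60, 60];        % horizontal field of view of each sensor (assumed)

binsDeg = -179:1:180;                       % 1-degree bins around the vehicle
covered = zeros(size(binsDeg));             % number of sensors covering each bin
for s = 1:numel(sensorYawDeg)
    % Angular offset of every bin from this sensor's boresight, wrapped to [-180, 180].
    offset  = abs(mod(binsDeg - sensorYawDeg(s) + 180, 360) - 180);
    covered = covered + (offset <= sensorFovDeg(s) / 2);
end

coveragePct = 100 * nnz(covered > 0) / numel(binsDeg);   % surround seen by at least one sensor
overlapPct  = 100 * nnz(covered > 1) / numel(binsDeg);   % surround seen by more than one sensor
fprintf('Coverage: %.1f%%, overlap: %.1f%%\n', coveragePct, overlapPct);
```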
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Qayyum, A.; Usama, M.; Qadir, J.; Al-Fuqaha, A. Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and the Way Forward. IEEE Commun. Surv. Tutor. 2020, 22, 998–1026.
2. Wang, J.; Zhang, L.; Huang, Y.; Zhao, J. Safety of Autonomous Vehicles. J. Adv. Transp. 2020, 2020, 8867757.
3. Liu, P.; Guo, Y.; Liu, P.; Ding, H.; Cao, J.; Zhou, J.; Feng, Z. What can we learn from the AV crashes? An association rule analysis for identifying the contributing risky factors. Accid. Anal. Prev. 2024, 199, 107492.
4. Liu, Q.; Wang, X.; Liu, S.; Yu, C.; Glaser, Y. Analysis of pre-crash scenarios and contributing factors for autonomous vehicle crashes at intersections. Accid. Anal. Prev. 2024, 195, 107383.
5. Kumar, D.; Muhammad, N. Object Detection in Adverse Weather for Autonomous Driving through Data Merging and YOLOv8. Sensors 2023, 23, 8471.
6. Van Brummelen, J.; O’Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C Emerg. Technol. 2018, 89, 384–406.
7. Cheng, H. Autonomous Intelligent Vehicles: Theory, Algorithms, and Implementation; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011.
8. Xiang, C.; Feng, C.; Xie, X.; Shi, B.; Lu, H.; Lv, Y.; Yang, M.; Niu, Z. Multi-Sensor Fusion and Cooperative Perception for Autonomous Driving: A Review. IEEE Intell. Transp. Syst. Mag. 2023, 15, 36–58.
9. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors 2021, 21, 2140.
10. Wang, S.; Li, Y.; Qi, G.; Sheng, A. Optimal Geometry and Motion Coordination for Multisensor Target Tracking with Bearings-Only Measurements. Sensors 2023, 23, 6408.
11. Zhang, Q.; Liu, J.; Jiang, X. Lane Detection Algorithm in Curves Based on Multi-Sensor Fusion. Sensors 2023, 23, 5751.
12. Hou, H.; Shen, C.; Zhang, X.; Gao, W. CSMOT: Make One-Shot Multi-Object Tracking in Crowded Scenes Great Again. Sensors 2023, 23, 3782.
13. Shah, G.A.; Khan, S.; Memon, S.A.; Shahzad, M.; Mahmood, Z.; Khan, U. Improvement in the Tracking Performance of a Maneuvering Target in the Presence of Clutter. Sensors 2022, 22, 7848.
14. Choi, Y.; Lee, W.; Kim, J.; Yoo, J. A Variable-Sampling Time Model Predictive Control Algorithm for Improving Path-Tracking Performance of a Vehicle. Sensors 2021, 21, 6845.
15. MathWorks. Introduction to Track-To-Track Fusion. Available online: https://www.mathworks.com/help/fusion/ug/introduction-to-track-to-track-fusion.html (accessed on 12 January 2024).
16. MathWorks. What Is Track-Level Fusion? | Understanding Sensor Fusion and Tracking, Part 6. 2020. Available online: https://www.mathworks.com/support/search.html/videos/sensor-fusion-part-6-what-is-track-level-fusion-1598607201282.html (accessed on 12 January 2024).
17. National Highway Traffic Safety Administration. Crash Injury Research Engineering Network (Current). 2017–2023. Available online: https://www.nhtsa.gov/research-data/crash-injury-research (accessed on 12 January 2024).
18. Shi, X.; Yang, F.; Tong, F.; Lian, H. A comprehensive performance metric for evaluation of multi-target tracking algorithms. In Proceedings of the 2017 3rd International Conference on Information Management (ICIM), Chengdu, China, 21–23 April 2017; pp. 373–377.
19. Huang, Y.; Zhang, S.; Li, X.; Lu, R.; Sun, Q. Object Tracking Performance Evaluation Method Based on Adaptive Threshold and Background Suppression. In Proceedings of the 2022 IEEE 10th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 17–19 June 2022; pp. 133–138.
20. Ghazali, M.; Gupta, I.; Abdallah, M.B.; Clarke, J.; Indragandhi, V.; Hartavi, A.E. Performance Comparison of Three Rival AI-Powered Intelligent Trajectory Tracking Controllers for an Autonomous Delivery Van. Transp. Res. Procedia 2023, 72, 3039–3045.
21. Rahmathullah, A.S.; García-Fernández, Á.F.; Svensson, L. Generalized optimal sub-pattern assignment metric. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017; pp. 1–8.
22. García-Fernández, Á.F.; Rahmathullah, A.S.; Svensson, L. A Metric on the Space of Finite Sets of Trajectories for Evaluation of Multi-Target Tracking Algorithms. IEEE Trans. Signal Process. 2020, 68, 3917–3928.
23. Su, Z.; Ji, H.; Tian, C.; Zhang, Y. Performance evaluation for multi-target tracking with temporal dimension specifics. Chin. J. Aeronaut. 2024, 37, 446–458.
24. Čehovin, L.; Leonardis, A.; Kristan, M. Visual Object Tracking Performance Measures Revisited. IEEE Trans. Image Process. 2016, 25, 1261–1274.
25. Song, J.; Hyun, S.-H. Extended Kalman Filter-Based Vehicle Tracking Using Uniform Planar Array for Vehicle Platoon Systems. Sensors 2024, 24, 2351.
26. Alai, H.; Rajamani, R. Low-cost camera and 2-D LIDAR fusion for target vehicle corner detection and tracking: Applications to micromobility devices. Mech. Syst. Signal Process. 2024, 206, 110891.
27. Guo, G.; Li, D. Adaptive Sliding Mode Control of Vehicular Platoons With Prescribed Tracking Performance. IEEE Trans. Veh. Technol. 2019, 68, 7511–7520.
28. Ahmad, N.S. Robust H∞-Fuzzy Logic Control for Enhanced Tracking Performance of a Wheeled Mobile Robot in the Presence of Uncertain Nonlinear Perturbations. Sensors 2020, 20, 3673.
29. MathWorks. Extended Kalman Filters. Available online: https://www.mathworks.com/help/fusion/ug/extended-kalman-filters.html (accessed on 1 February 2024).
30. MathWorks. Constant Velocity State Update. Available online: https://www.mathworks.com/help/fusion/ref/constvel.html (accessed on 2 February 2024).
31. MathWorks. Calculate Generalized Optimal Subpattern Assignment Metric. Available online: https://au.mathworks.com/help/fusion/ref/generalizedoptimalsubpatternassignmentmetric.html (accessed on 2 February 2024).
32. Novat, N.; Kidando, E.; Kutela, B.; Kitali, A.E. A comparative study of collision types between automated and conventional vehicles using Bayesian probabilistic inferences. J. Saf. Res. 2023, 84, 251–260.
33. MathWorks. Multi-Object Trackers. Available online: https://au.mathworks.com/help/fusion/multi-object-trackers.html (accessed on 18 January 2024).
34. MathWorks. Introduction to Tracking Metrics. Available online: https://www.mathworks.com/help/fusion/ug/introduction-to-tracking-metrics.html (accessed on 16 February 2024).
References | Key Contribution | Limitation/Efficacy
---|---|---
[18] | Proposed a metric (C-OSPA) to evaluate multi-target tracking performance | Performed mathematical simulations only |
[19] | Developed a tracking performance evaluation method | Used a single sensor for detection |
[20] | Algorithms for intelligent trajectory tracking performance evaluation | The perception mechanism was not explained |
[14] | Improved path-tracking performance by the proposed algorithm | Absent other vehicles in the experiment scenario |
[21,22] | Solved the issues of the OSPA metric and proposed GOSPA metric | Useful for tracking performance evaluation |
[23] | Improved the GOSPA metric using temporal dimension specifics | A potential metric for tracking evaluation |
[24] | Visual tracking measures for tracking performance | Considered only video data |
[25] | Improved the position estimation for vehicle tracking | Only numerical evaluation was performed |
[26] | Sensor fusion-based cost-effective vehicle tracking system | Limited to only front-end target tracking |
Sensor | Location | Position (m) [x, y, z] | Rotation (°) [Roll, Pitch, Yaw] | Max. Range (m)
---|---|---|---|---
Radar | Front | [3.7, 0, 0.2] | [0, 0, 0] | 160
Radar | Front-left | [2.8, 0.9, 0.2] | [0, 0, 45] | 30
Radar | Front-right | [2.8, −0.9, 0.2] | [0, 0, −45] | 30
Camera | Front | [2.95, 0, 1.1] | [0, 1, 0] | 250
Camera | Front-left | [2, 0.9, 0.7] | [0, 1, 65] | 80
Camera | Front-right | [2, −0.9, 0.7] | [0, 1, −65] | 80
Camera | Rear-left | [2.8, 0.9, 0.7] | [0, 1, 140] | 100
Camera | Rear-right | [2.8, −0.9, 0.7] | [0, 1, −140] | 100
LIDAR | Center | [1.5, 0, 1.6] | [0, 0, 0] | 120
Sensor | Location | Position (m) [x, y, z] | Rotation (°) [Roll, Pitch, Yaw] | Max. Range (m)
---|---|---|---|---
Radar | Front | [1.9, 0, 0.2] | [0, 0, 0] | 160
Radar | Front-left | [2.8, 0.9, 0.2] | [0, 0, 60] | 30
Radar | Front-right | [2.8, −0.9, 0.2] | [0, 0, −60] | 30
Radar | Rear-left | [0, 0.9, 0.2] | [0, 0, 120] | 30
Radar | Rear-right | [0, −0.9, 0.2] | [0, 0, −120] | 30
Radar | Rear | [0.95, 0, 0.2] | [0, 0, −180] | 160
Camera | Front | [2.1, 0, 1.1] | [0, 1, 0] | 150
Camera | Rear | [0.56, −0.9, 1.1] | [0, 1, −180] | 150
LIDAR | Center | [1.5, 0, 1.6] | [0, 0, 0] | 120
TA | Fusion Type | Tracker | Filtering Function | Sensors
---|---|---|---|---
TA-1 | Centralized | JPDA Tracker | helperInitializeCVEKFFilter | Radar, Camera
TA-2 | Centralized | JPDA Tracker | helperInitLidarCameraFusionFilter | LIDAR, Camera
TA-3 | Decentralized | JPDA, Track-To-Track Fuser | central2sensor, sensor2central | Radar, Camera, LIDAR
Sensor Update Rate (ms) | Sensor Setup | TA | Sensor Data (KB) CIREN-664 | Sensor Data (KB) CIREN-816 | Sensor Data (KB) CIREN-226
---|---|---|---|---|---
200 | S1 | TA-1 | 584.63 | 326.90 | 1264.81
200 | S1 | TA-2 | 179.57 | 283.05 | 237.63
200 | S1 | TA-3 | 567.09 | 339.10 | 1228.31
200 | S2 | TA-1 | 421.79 | 883.29 | 1374.46
200 | S2 | TA-2 | 136.50 | 196.90 | 136.31
200 | S2 | TA-3 | 436.51 | 875.04 | 1370.06
100 | S1 | TA-1 | 929.62 | 654.08 | 2551.38
100 | S1 | TA-2 | 346.61 | 540.92 | 456.40
100 | S1 | TA-3 | 958.03 | 610.84 | 2457.92
100 | S2 | TA-1 | 852.00 | 1683.90 | 2800.71
100 | S2 | TA-2 | 263.99 | 380.89 | 264.44
100 | S2 | TA-3 | 897.13 | 1692.71 | 2827.99
50 | S1 | TA-1 | 1861.59 | 1300.67 | 4980.17
50 | S1 | TA-2 | 698.62 | 1077.75 | 887.63
50 | S1 | TA-3 | 1877.60 | 1212.71 | 4993.81
50 | S2 | TA-1 | 1664.29 | 3422.20 | 5491.92
50 | S2 | TA-2 | 533.18 | 757.50 | 515.96
50 | S2 | TA-3 | 1705.84 | 3389.04 | 5580.13
Rank | Tracking Architecture | Scenario | Average Localization Errors | Average Missed Target Errors | Average False Target Errors | Average GOSPA Score
---|---|---|---|---|---|---
Best | S2-TA1 (Centralized TA) | CIREN-226 | 2.54 | 0 | 0 | 2.54
Worst | S1-TA2 (Centralized TA) | CIREN-226 | 6.22 | 10.6 | 10.24 | 15.91
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).