Online Quantitative Analysis of Perception Uncertainty Based on High-Definition Map
Abstract
1. Introduction
- (1)
- The online uncertainty assessment model for static perception elements was constructed based on an HD map. The model conducts real-time evaluations by integrating the lane line's topological structure with pixel-level information, enabling assessment of both the existence uncertainty and the spatial uncertainty of perception elements.
- (2)
- The online uncertainty assessment model for dynamic perception elements was constructed. The model leverages the online evaluation of static element uncertainty to infer dynamic element uncertainty. Based on an HD map, the online assessment of overall perception uncertainty was thus realized.
- (3)
- A deep neural network model that performs online recognition of weather and scene factors, such as rain, snow, particulate matter, and illumination, was constructed. This model identifies triggering factors for SOTIF and provides regulatory factors for the online assessment of uncertainty in perception elements, thereby improving evaluation accuracy.
- (4)
- We validated the perception uncertainty obtained using our method on the nuScenes dataset, demonstrating the correctness and accuracy of the proposed algorithm.
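As a minimal illustration of contribution (1), the existence and spatial components could be fused into a single uncertainty score. The fusion rule, parameter names, and the normalization constant below are illustrative assumptions, not the paper's actual formulation:

```python
def combined_uncertainty(p_exist: float, spatial_sigma: float,
                         sigma_max: float = 1.0) -> float:
    """Fuse existence and spatial uncertainty into one score in [0, 1].

    p_exist: estimated probability that the perceived element really exists.
    spatial_sigma: positional standard deviation (m) of the element.
    sigma_max: spatial deviation treated as fully unreliable (assumed value).
    """
    u_exist = 1.0 - p_exist                          # existence uncertainty
    u_spatial = min(spatial_sigma / sigma_max, 1.0)  # normalized spatial uncertainty
    # Probabilistic OR: the element is uncertain if either component is.
    return 1.0 - (1.0 - u_exist) * (1.0 - u_spatial)
```

A perfectly detected, perfectly localized element yields 0; either a low existence probability or a large positional spread pushes the score toward 1.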
2. Related Works
2.1. Perception Sensor Uncertainty Evaluation Methods
2.2. Perception Algorithm Uncertainty Evaluation Methods
3. Methodology
- (1)
- This paper quantitatively assesses perception uncertainty based on the HD map, treating the HD map as accurate prior information. Therefore, the impact of the HD map's update frequency is not considered.
- (2)
- The perception DNN algorithm used in this paper is a multi-task unified network; that is, dynamic and static elements are recognized by the same algorithm. Otherwise, studying the correlation between the uncertainties of dynamic and static perception elements would be meaningless.
3.1. Selection of Environmental Feature Elements
3.2. Online Assessment of Uncertainty in Static Elements in Perception
3.2.1. Uncertainty Evaluation Based on Pixel
Algorithm 1: Lane line uncertainty assessment at the pixel level.
Input: the i-th frame perception static result, and the static feature result in the HD map.
Output: the optimal for static feature matching in the local range of the i-th frame.
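The pixel-level matching step of Algorithm 1 can be sketched as a search for the alignment that minimizes the distance between perceived lane pixels and HD-map lane pixels; the residual error then indicates spatial uncertainty. The one-dimensional lateral search, its range, and its resolution are assumptions for illustration (the paper's matcher may search a richer transform space):

```python
import numpy as np

def match_lane_pixels(perceived: np.ndarray, hd_map: np.ndarray,
                      offsets=np.linspace(-2.0, 2.0, 81)):
    """Return the lateral offset (m) that best aligns perceived lane pixels
    with HD-map lane pixels, plus the residual matching error.

    perceived, hd_map: (N, 2) arrays of (x, y) points in the ego frame.
    offsets: candidate lateral shifts to search (assumed range/step).
    """
    best_offset, best_err = 0.0, float("inf")
    for dy in offsets:
        shifted = perceived + np.array([0.0, dy])
        # Mean nearest-neighbour distance from shifted pixels to map pixels.
        d = np.linalg.norm(shifted[:, None, :] - hd_map[None, :, :], axis=2)
        err = d.min(axis=1).mean()
        if err < best_err:
            best_offset, best_err = dy, err
    return best_offset, best_err
```

A large residual after the best match suggests high spatial uncertainty in the perceived lane lines relative to the HD-map prior.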
3.2.2. Uncertainty Evaluation Based on Lane Line Topological Structure
Algorithm 2: Secondary clustering based on the K-means clustering method.
Input: the clustering results.
Output: results after the secondary clustering.
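One plausible reading of the secondary clustering step is that first-pass K-means clusters whose centroids lie close together are merged into a single cluster. The greedy merge rule and the distance threshold below are assumptions, not the paper's exact procedure:

```python
import numpy as np

def secondary_cluster(clusters, merge_dist=1.5):
    """Merge first-pass clusters whose centroids lie within merge_dist (m).

    clusters: list of (N_i, 2) point arrays from the first K-means pass.
    Returns a list of merged point arrays. The greedy rule and the 1.5 m
    default threshold are illustrative assumptions.
    """
    centroids = [c.mean(axis=0) for c in clusters]
    merged, used = [], set()
    for i in range(len(clusters)):
        if i in used:
            continue
        group = [clusters[i]]
        used.add(i)
        for j in range(i + 1, len(clusters)):
            if j not in used and np.linalg.norm(centroids[i] - centroids[j]) < merge_dist:
                group.append(clusters[j])
                used.add(j)
        merged.append(np.vstack(group))
    return merged
```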
3.3. Online Assessment of Uncertainty in Dynamic Elements in Perception
3.3.1. Online Uncertainty Assessment Model Construction
3.3.2. Offline Uncertainty Assessment Model Construction
Algorithm 3: Object matching algorithm based on IOU from the BEV perspective.
Input: the position coordinates, size, and orientation (r) of the detected objects and the corresponding dataset annotations.
Output: the IOU between the detected objects and the corresponding dataset annotations, of objects matched.
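The matching in Algorithm 3 can be sketched as computing BEV IoU between each detection and each annotation, then greedily pairing them above a threshold. For brevity this sketch uses axis-aligned IoU and ignores the heading angle r, whereas the paper's matcher presumably uses rotated-box IoU; the box encoding and the 0.5 threshold are assumptions:

```python
def bev_iou(box_a, box_b):
    """Axis-aligned IoU of two BEV boxes encoded as (cx, cy, w, l).

    Simplification: the heading angle r is ignored.
    """
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap length
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def match_detections(dets, gts, iou_thresh=0.5):
    """Greedy one-to-one matching of detections to annotations by IoU."""
    matches, used = [], set()
    for i, d in enumerate(dets):
        best_j, best_iou = -1, iou_thresh
        for j, g in enumerate(gts):
            iou = bev_iou(d, g)
            if j not in used and iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j, best_iou))
    return matches
```

Matched pairs yield TPs (and per-object IoU values); unmatched detections and annotations become FPs and FNs, respectively.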
3.4. Online Assessment of Perception Uncertainty Considering Weather and Scene Factors
3.4.1. Weather and Scene Detection
3.4.2. Online Assessment Model Optimization Considering Weather and Scene Factors
4. Experiment
4.1. Experiment Settings
- (1)
- Dataset selection
- (2)
- Selection of deep neural network algorithms
4.2. Dataset Validation Results Based on HD Map
4.2.1. Implementation Details
4.2.2. Experiment Results
- (1)
- Macro-statistics
- (2)
- Specific Scene Analysis
- (1)
- High correlation with good perception effectiveness
- (2)
- High correlation with poor perception effectiveness
- (3)
- Low correlation but can be enhanced through post-processing
- (4)
- Low correlation but cannot be enhanced through post-processing
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Full Name
---|---
NMS | Non-Maximum Suppression
CNN | Convolutional Neural Network
DNN | Deep Neural Network
DE | Deep Ensemble
DBSCAN | Density-Based Spatial Clustering of Applications with Noise
HD map | High-Definition Map
BEV | Bird's Eye View
SVM | Support Vector Machine
AD | Autonomous Driving
MCD | Monte Carlo Dropout
MCMC | Markov Chain Monte Carlo
IOU | Intersection Over Union
FS | Functional Safety
SOTIF | Safety of the Intended Functionality
KNN | K-Nearest Neighbors
RNN | Recurrent Neural Network
RANSAC | Random Sample Consensus
TP | True Positive
FP | False Positive
FN | False Negative
References
- Wang, P.; McKeever, B.; Chan, C.Y. Automated vehicles industry survey of transportation infrastructure needs. Transp. Res. Rec. 2022, 2676, 554–569.
- Hakak, S.; Gadekallu, T.R.; Maddikunta, P.K.R.; Ramu, S.P.; Parimala, M.; De Alwis, C.; Liyanage, M. Autonomous vehicles in 5G and beyond: A survey. Veh. Commun. 2023, 39, 100551.
- Chu, J.; Zhao, T.; Jiao, J.; Yuan, Y.; Jing, Y. SOTIF-Oriented Perception Evaluation Method for Forward Obstacle Detection of Autonomous Vehicles. IEEE Syst. J. 2023, 17, 2319–2330.
- Jiang, K.; Shi, Y.; Wijaya, B.; Yang, M.; Wen, T.; Xiao, Z.; Yang, D. Map Container: A Map-based Framework for Cooperative Perception. arXiv 2022, arXiv:2208.13226.
- Van Brummelen, J.; O'Brien, M.; Gruyer, D.; Najjaran, H. Autonomous vehicle perception: The technology of today and tomorrow. Transp. Res. Part C Emerg. Technol. 2018, 89, 384–406.
- Peng, L.; Li, B.; Yu, W.; Yang, K.; Shao, W.; Wang, H. SOTIF entropy: Online SOTIF risk quantification and mitigation for autonomous driving. arXiv 2023, arXiv:2211.04009.
- Vargas, J.; Alsweiss, S.; Toker, O.; Razdan, R.; Santos, J. An overview of autonomous vehicles sensors and their vulnerability to weather conditions. Sensors 2021, 21, 5397.
- Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140.
- Li, D.; Wang, Y.; Wang, J.; Wang, C.; Duan, Y. Recent advances in sensor fault diagnosis: A review. Sens. Actuators A Phys. 2020, 309, 111990.
- Pao, W.Y.; Li, L.; Howorth, J.; Agelin-Chaab, M.; Roy, L.; Knutzen, J.; Baltazar y Jimenez, A.; Muenker, K. Wind Tunnel Testing Methodology for Autonomous Vehicle Optical Sensors in Adverse Weather Conditions. In Proceedings of the International Stuttgart Symposium, Stuttgart, Germany, 4–5 July 2023; pp. 13–39.
- TayebiHaghighi, S.; Koo, I. Sensor Fault Diagnosis Using a Machine Fuzzy Lyapunov-Based Computed Ratio Algorithm. Sensors 2022, 22, 2974.
- Shanthamallu, U.S.; Spanias, A. Machine and Deep Learning Applications. In Machine and Deep Learning Algorithms and Applications; Springer: Berlin/Heidelberg, Germany, 2022; pp. 59–72.
- Ju, Z.; Zhang, H.; Li, X.; Chen, X.; Han, J.; Yang, M. A survey on attack detection and resilience for connected and automated vehicles: From vehicle dynamics and control perspective. IEEE Trans. Intell. Veh. 2022, 7, 815–837.
- Diehm, A.L.; Hammer, M.; Hebel, M.; Arens, M. Mitigation of crosstalk effects in multi-LiDAR configurations. In Proceedings of the Electro-Optical Remote Sensing XII, Berlin, Germany, 12–13 September 2018; Volume 10796, pp. 13–24.
- Saad, M.A.; Bovik, A.C.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352.
- Einecke, N.; Gandhi, H.; Deigmöller, J. Detection of camera artifacts from camera images. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; pp. 603–610.
- Nizami, I.F.; Rehman, M.U.; Majid, M.; Anwar, S.M. Natural scene statistics model independent no-reference image quality assessment using patch based discrete cosine transform. Multimed. Tools Appl. 2020, 79, 26285–26304.
- Segata, M.; Cigno, R.L.; Bhadani, R.K.; Bunting, M.; Sprinkle, J. A lidar error model for cooperative driving simulations. In Proceedings of the 2018 IEEE Vehicular Networking Conference (VNC), Taipei, Taiwan, 5–7 December 2018; pp. 1–8.
- Javed, A.R.; Usman, M.; Rehman, S.U.; Khan, M.U.; Haghighi, M.S. Anomaly detection in automated vehicles using multistage attention-based convolutional neural network. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4291–4300.
- Pawar, Y.S.; Honnavalli, P.; Eswaran, S. Cyber Attack Detection On Self-Driving Cars Using Machine Learning Techniques. In Proceedings of the 2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT), Bangalore, India, 7–9 October 2022; pp. 1–5.
- Safavi, S.; Safavi, M.A.; Hamid, H.; Fallah, S. Multi-sensor fault detection, identification, isolation and health forecasting for autonomous vehicles. Sensors 2021, 21, 2547.
- Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 36, 3503–3568.
- Feng, D.; Harakeh, A.; Waslander, S.L.; Dietmayer, K. A review and comparative study on probabilistic object detection in autonomous driving. IEEE Trans. Intell. Transp. Syst. 2021, 23, 9961–9980.
- Gawlikowski, J.; Tassi, C.R.N.; Ali, M.; Lee, J.; Humt, M.; Feng, J.; Kruspe, A.; Triebel, R.; Jung, P.; Roscher, R.; et al. A survey of uncertainty in deep neural networks. arXiv 2021, arXiv:2107.03342.
- Choi, J.; Chun, D.; Kim, H.; Lee, H.J. Gaussian yolov3: An accurate and fast object detector using localization uncertainty for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 502–511.
- Mena, J.; Pujol, O.; Vitrià, J. A Survey on Uncertainty Estimation in Deep Learning Classification Systems from a Bayesian Perspective. ACM Comput. Surv. 2022, 54, 1–36.
- Melucci, M. Relevance Feedback Algorithms Inspired by Quantum Detection. IEEE Trans. Knowl. Data Eng. 2016, 28, 1022–1034.
- Gal, Y.; Ghahramani, Z. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 16–24 June 2016.
- Gal, Y.; Ghahramani, Z. A theoretically grounded application of dropout in recurrent neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems 2016, Barcelona, Spain, 5–10 December 2016; Volume 29.
- Ovadia, Y.; Fertig, E.; Ren, J.; Nado, Z.; Sculley, D.; Nowozin, S.; Dillon, J.; Lakshminarayanan, B.; Snoek, J. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Proceedings of the Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
- Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. arXiv 2016, arXiv:1612.01474v3.
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. arXiv 2019, arXiv:1903.11027.
- Kumar, K.M.; Reddy, A.R.M. A fast DBSCAN clustering algorithm by accelerating neighbor searching using Groups method. Pattern Recognit. 2016, 58, 39–48.
- Hahsler, M.; Piekenbrock, M.; Doran, D. dbscan: Fast density-based clustering with R. J. Stat. Softw. 2019, 91, 1–30.
- Derpanis, K.G. Overview of the RANSAC Algorithm. Image 2010, 4, 2–3.
- Ahmed, M.; Seraj, R.; Islam, S.M.S. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics 2020, 9, 1295.
- Peng, L.; Li, J.; Shao, W.; Wang, H. PeSOTIF: A challenging visual dataset for perception SOTIF problems in long-tail traffic scenarios. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–8.
- Zhang, Y.; Zhu, Z.; Zheng, W.; Huang, J.; Huang, G.; Zhou, J.; Lu, J. Beverse: Unified perception and prediction in birds-eye-view for vision-centric autonomous driving. arXiv 2022, arXiv:2205.09743.
Uncertainty | Low | High
---|---|---
Low | |
High | |
Scenes | Normal | Rain | Snow | Particulate | Illumination
---|---|---|---|---|---
Precision | 99.97% | 85.20% | 89.68% | 93.1% | 94.36%
Parameters | | | | | |
---|---|---|---|---|---|---
Threshold | 0.46 | 0.25 | 0.038 | 0.25 | 9 | 0.8

Parameters | | | | | |
---|---|---|---|---|---|---
Threshold | 0.7 | 0.2 | 0.1 | 2 | 0.5 | 3
| Good Static Perception | Poor Static Perception
---|---|---
Good Dynamic Perception | 1955 | 160
Poor Dynamic Perception | 36 | 302
| Good Static Perception | Poor Static Perception
---|---|---
Good Dynamic Perception | 2170 | 138
Poor Dynamic Perception | 70 | 9
| Good Static Perception | Poor Static Perception
---|---|---
Good Dynamic Perception | 1688 | 152
Poor Dynamic Perception | 116 | 188
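The correlation between static and dynamic perception quality in 2x2 tables like those above can be quantified with the phi (Matthews) coefficient. This is one standard choice for binary contingency tables and is offered as an illustrative analysis, not necessarily the statistic used in the paper:

```python
import math

def phi_coefficient(a: int, b: int, c: int, d: int) -> float:
    """Phi (Matthews) correlation for a 2x2 contingency table:

        [[a, b],   rows: dynamic perception good / poor
         [c, d]]   cols: static perception good / poor
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0
```

Applied to the first table (1955, 160, 36, 302), this yields a phi of roughly 0.72, i.e., a strong positive association between static and dynamic perception quality in that setting.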
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, M.; Jiao, X.; Jiang, K.; Cheng, Q.; Yang, Y.; Yang, M.; Yang, D. Online Quantitative Analysis of Perception Uncertainty Based on High-Definition Map. Sensors 2023, 23, 9876. https://doi.org/10.3390/s23249876