AI-Based Vehicle State Estimation Using Multi-Sensor Perception and Real-World Data
Abstract
1. Introduction
1.1. Motivation and Potential of Perception Sensors for Estimating Vehicle Dynamics
1.2. State of the Art
- ADAS functions: These include systems such as lane keeping assist (LKA), traffic sign recognition, adaptive cruise control (ACC), emergency brake assist, and localization and object detection for trajectory planning.
- Vehicle dynamics control (VDC): Systems such as electronic stability control (ESC), the anti-lock braking system (ABS), or other brake control systems.
1.2.1. Discussion of Related Work
| Application Area | Model-Based Approach | AI-Based Approach |
|---|---|---|
| Advanced Driver Assistance Systems (ADAS) | ACC-related estimates based on radar, lidar, and camera data and a Kalman filter [11]; LKA-related estimates based on camera data and a multirate Kalman filter [12]; overview of localization and mapping methods, e.g., for trajectory planning [3] | Vehicle's ego-position estimation based on; lane estimation with a single radar sensor using a deep learning network [16] |
| … | … | … |
| Vehicle Dynamics Control Systems (VDC) | Via camera | Via camera |
1.2.2. Positioning of the Present Work
1.3. Contribution of This Work
- The perception sensors are used for high-dynamic functions of vehicle dynamics control systems instead of less dynamic top-level ADAS applications.
- By using the tire-independent perception sensor information, two major improvements arise for a vehicle dynamics state estimator:
- Ease of application and transferability of the estimator to vehicle platforms of any kind. Trained AI-based vehicle state estimators using perception sensors can be transferred to any other vehicle without the need for adaptations.
- Robustification of vehicle state estimation. Estimation remains possible in driving scenarios in which there is no longer any traction between the tires and the road (e.g., when driving on an icy surface).
- The estimated state variables are the vehicle sideslip angle, a safety-critical quantity, and the vehicle velocity, one of the most essential vehicle states and a basic variable for calculating a number of other VDC quantities.
- The fundamental kinematic relationships between the vehicle states and the perception sensor data are derived to gain a comprehensive understanding of the interactions (a simplified planar example follows this list).
- The performance of the developed toolchain is analyzed using real-world measurement data from test drives and compared with proven model-based methods, such as an Unscented Kalman Filter (UKF).
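To make the kinematic coupling referred to above concrete, the following is a minimal planar sketch, not the full derivation given in Appendix A, and the notation (d_x, d_y for the longitudinal and lateral distance to an object point) is chosen here for illustration. For a stationary object point observed at the body-frame position (d_x, d_y), rigid-body kinematics with yaw rate, velocity over ground v, and sideslip angle β give

$$\dot{d}_x = -v\cos\beta + \dot{\psi}\,d_y, \qquad \dot{d}_y = -v\sin\beta - \dot{\psi}\,d_x$$

The relative motion observed by the perception sensors (optical or scene flow) therefore directly encodes the vehicle states v, β, and the yaw rate without involving any tire model, which is what enables the tire-independent estimation described above.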
2. System Architecture of the AI-Based Vehicle State Estimator Utilizing Perception Sensors
2.1. Detection of Dynamic Interfering Objects
2.1.1. Object Detection in Image Data via YOLO
2.1.2. Object Detection in Point Clouds via Complex YOLO
2.2. Preprocessing the Lidar Point Cloud
2.2.1. Ground Segmentation
2.2.2. Considering a Region of Interest
- The lidar sensor used has a range of up to 200 m; however, points at large distances carry a high degree of uncertainty and noise and should not be used.
- Due to the structural characteristics of a road, the longitudinal environment should be considered out to a longer distance than the lateral environment.
- The maximum height of the considered points should be limited to avoid reflections, for example from moving treetops. A minimal cropping sketch follows this list.
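The sketch below applies such a box-shaped region of interest to a lidar point cloud. The numeric limits are illustrative assumptions, not the values used in the paper, and would need to be tuned to the actual sensor setup.

```python
import numpy as np

# Hypothetical ROI limits in the vehicle frame (metres); chosen for illustration only.
X_MAX, Y_MAX, Z_MAX = 60.0, 20.0, 3.0  # longer longitudinally than laterally, capped height

def crop_to_roi(points: np.ndarray) -> np.ndarray:
    """Keep only lidar points inside a box-shaped region of interest.

    points: (N, 3) array of x (longitudinal), y (lateral), z (height) in metres.
    """
    mask = (
        (np.abs(points[:, 0]) <= X_MAX)    # limit longitudinal range (noise grows with distance)
        & (np.abs(points[:, 1]) <= Y_MAX)  # narrower lateral extent, matching road structure
        & (points[:, 2] <= Z_MAX)          # cap height to avoid reflections from, e.g., treetops
    )
    return points[mask]
```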
2.2.3. Downsampling
2.3. Motion Extraction from Perception Sensors
2.3.1. Relative Kinematics of the Vehicle Environment and Its Dynamics
2.3.2. Motion Extraction from Camera Image Using Optical Flow
2.3.3. Motion Extraction from Lidar Point Clouds Using Scene Flow
2.4. State Estimation Using Recurrent Neural Networks
3. Data Acquisition in Real-World Driving Tests
3.1. Test Vehicle AI for Mobility and Its Hardware Setup
3.2. Overview of the Driving Maneuvers
3.3. Data Synchronization and Analysis
3.3.1. Data Synchronization
3.3.2. Analysis of the Required Sampling Times
4. Implementation and Evaluation
4.1. State Estimator Setups
4.1.1. AI-Based Vehicle State Estimator
4.1.2. Model-Based Benchmark Estimators
4.1.3. Test Scenario
4.2. Results
4.2.1. High Tire–Road Friction
4.2.2. Low Tire–Road Friction
4.2.3. Results Overview
5. Summary and Outlook
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Supplementary Derivations of the Relative Kinematics Equations
Appendix B. Technical Data of the Perception Sensors
| Property | Lidar Sensor | Camera |
|---|---|---|
| Effective Range | | |
| Vertical Field of View | ° | 54° |
| Horizontal Field of View | 360° | 84° |
| Resolution | points per scan (vertical channels, points per channel) | Pixel |
| Frame Rate | | |
| Hardware Interface | Ethernet | USB Type-C |
Appendix C. Lists of Symbols, Nomenclature, and Abbreviations
| Formula Symbol | Description |
|---|---|
| | Vehicle's acceleration (longitudinal or lateral) |
| | Vehicle's sideslip angle |
| | Wheel steering angle |
| | Distance between vehicle and optical flow point (longitudinal and lateral) |
| | Distance between vehicle and object (longitudinal and lateral) |
| | Optical flow vector (2D) |
| | Scene flow vector (3D) |
| | Image matrix (pixels) |
| | Look angle between the horizontal line and the line of sight to an object |
| | Image mask (binary matrix) |
| | Coefficient of friction between tire and road |
| | Cluster of points from point cloud |
| | Wheel speed |
| | Individual point from point cloud (3D vector) |
| | Point cloud (3D) |
| | Vehicle's yaw angle |
| | Distance between vehicle and object point |
| | Vehicle's velocity over ground |
| Abbreviation | Explanation |
|---|---|
| ADAS | Advanced Driver Assistance Systems |
| AFM | AI For Mobility (DLR research vehicle) |
| CNN | Convolutional neural network |
| CoG | Center of gravity |
| DLR | German Aerospace Center |
| ESC | Electronic Stability Control |
| GNSS | Global Navigation Satellite System |
| GRU | Gated Recurrent Unit |
| ICP | Iterative Closest Point |
| IMU | Inertial measurement unit |
| LOS | Line of sight |
| MiDaS | Monocular Depth Estimation via Scale |
| RNN | Recurrent neural network |
| ROI | Region of interest |
| ROS | Robot Operating System |
| STM | Single-track model |
| UKF | Unscented Kalman Filter |
| VDC | Vehicle dynamics control |
| YOLO | You Only Look Once |
| Nomenclature | Explanation |
|---|---|
| | Quantity expressed in the car coordinate system with origin in CoG |
| | Quantity expressed in the geodetic coordinate system |
| | Quantity expressed in the image coordinate system (pixels) |
| | Estimated state |
References
- Pacejka, H. Tire and Vehicle Dynamics, 3rd ed.; Butterworth-Heinemann: Oxford, UK; Waltham, MA, USA; Warrendale, PA, USA, 2012; p. 632. [Google Scholar]
- Rill, G. Tmeasy 6.0—A Handling Tire Model That Incorporates the First Two Belt Eigenmodes; EASDAthens: Athens, Greece, 2020; pp. 676–689. [Google Scholar]
- Das, S. State Estimation with Auto-Calibrated Sensor Setup. Ph.D. Dissertation, KTH Royal Institute of Technology, Stockholm, Sweden, 2024. [Google Scholar]
- Velardocchia, M.; Vigliani, A. Control Systems Integration for Enhanced Vehicle Dynamics. Open Mech. Eng. J. 2013, 7, 58–69. [Google Scholar] [CrossRef]
- Lin, T.-C.; Ji, S.; Dickerson, C.E.; Battersby, D. Coordinated control architecture for motion management in ADAS systems. IEEE/CAA J. Autom. Sin. 2018, 5, 432–444. [Google Scholar] [CrossRef]
- Pandharipande, A.; Cheng, C.-H.; Dauwels, J.; Gurbuz, S.Z.; Ibanez-Guzman, J.; Li, G.; Piazzoni, A.; Wang, P.; Santra, A. Sensing and Machine Learning for Automotive Perception: A Review. IEEE Sens. J. 2023, 23, 11097–11115. [Google Scholar] [CrossRef]
- Wang, Y.; Nguyen, B.M.; Fujimoto, H.; Hori, Y. Vision-based vehicle body slip angle estimation with multi-rate Kalman filter considering time delay. In Proceedings of the 2012 IEEE International Symposium on Industrial Electronics, Hangzhou, China, 28–31 May 2012; pp. 1506–1511. [Google Scholar]
- Schlipsing, M.; Salmen, J.; Lattke, B.; Schroter, K.G.; Winner, H. Roll angle estimation for motorcycles: Comparing video and inertial sensor approaches. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium (IV), Madrid, Spain, 3–7 June 2012; pp. 500–505. [Google Scholar]
- Štironja, V.-J.; Petrović, L.; Peršić, J.; Marković, I.; Petrović, I. RAVE: A Framework for Radar Ego-Velocity Estimation. arXiv 2024, arXiv:2406.18850. [Google Scholar]
- Hayakawa, J.; Dariush, B. Ego-Motion and Surrounding Vehicle State Estimation Using a Monocular Camera; SPIE: Bellingham, WA, USA, 2019; pp. 2550–2556. [Google Scholar]
- Kim, T.-L.; Lee, J.-S.; Park, T.-H. Fusing Lidar, Radar, and Camera Using Extended Kalman Filter for Estimating the Forward Position of Vehicles. In Proceedings of the 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Bangkok, Thailand, 18–20 November 2019; pp. 374–379. [Google Scholar]
- Son, Y.S.; Kim, W.; Lee, S.-H.; Chung, C.C. Robust Multirate Control Scheme with Predictive Virtual Lanes for Lane-Keeping System of Autonomous Highway Driving. IEEE Trans. Veh. Technol. 2014, 64, 3378–3391. [Google Scholar] [CrossRef]
- Sohail, M.; Khan, A.U.; Sandhu, M.; Shoukat, I.A.; Jafri, M.; Shin, H. Radar sensor based machine learning approach for precise vehicle position estimation. Sci. Rep. 2023, 13, 13837. [Google Scholar] [CrossRef]
- Hu, Y.; Li, X.; Kong, D.; Wei, K.; Ni, P.; Hu, J. A Reliable Position Estimation Methodology Based on Multi-Source Information for Intelligent Vehicles in Unknown Environment. IEEE Trans. Intell. Veh. 2023, 9, 1667–1680. [Google Scholar] [CrossRef]
- Yin, W. Machine Learning for Adaptive Cruise Control Target Selection; KTH Royal Institute of Technology: Stockholm, Sweden, 2019. [Google Scholar]
- Choi, J.Y.; Kim, J.S.; Chung, C.C. Radar-Based Lane Estimation with Deep Neural Network for Lane-Keeping System of Autonomous Highway Driving. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–6. [Google Scholar]
- Rai, P.K.; Strokina, N.; Ghabcheloo, R. 4DEgo: Ego-velocity estimation from high-resolution radar data. Front. Signal Process. 2023, 3, 1198205. [Google Scholar] [CrossRef]
- Liang, Y.; Müller, S.; Rolle, D.; Ganesch, D.; Schaffer, I. Vehicle side-slip angle estimation with deep neural network and sensor data fusion. In Proceedings of the 10th International Munich Chassis Symposium 2019, Wiesbaden, Germany, 25–26 June 2020; Springer Fachmedien Wiesbaden: Munich, Germany, 2020; pp. 159–178. [Google Scholar]
- Novotny, G.; Liu, Y.; Morales-Alvarez, W.; Wöber, W.; Olaverri-Monreal, C. Vehicle side-slip angle estimation under snowy conditions using machine learning. Integr. Comput. Eng. 2024, 31, 117–137. [Google Scholar] [CrossRef]
- Liang, Y. Robust Vehicle State Estimation with Multi-Modal Sensors Data Fusion: Vehicle Dynamics Sensors and Environmental Sensors; Technical University Berlin: Berlin, Germany, 2023. [Google Scholar]
- Mitta, N.R. AI-Enhanced Sensor Fusion Techniques for Autonomous Vehicle Perception: Integrating Lidar, Radar, and Camera Data with Deep Learning Models for Enhanced Object Detection, Localization, and Scene Understanding. J. Bioinform. Artif. Intell. 2024, 4, 121–162. [Google Scholar]
- Smith, M.L.; Smith, L.N.; Hansen, M.F. The quiet revolution in machine vision—A state-of-the-art survey paper, including historical review, perspectives, and future directions. Comput. Ind. 2021, 130, 103472. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016; pp. 779–788. [Google Scholar]
- Simon, M.; Milz, S.; Amende, K.; Gross, H.-M. Complex-YOLO: Real-time 3D Object Detection on Point Clouds. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object Detection in 20 Years: A Survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Liao, H.-Y.M. YOLOv1 to YOLOv10: The Fastest and Most Accurate Real-time Object Detection Systems. APSIPA Trans. Signal Inf. Process. 2024, 13. [Google Scholar] [CrossRef]
- Python Lessons Team. GitHub Repository: TensorFlow-2.x-YOLOv3: YOLOv3 Implementation. 2022. Available online: https://github.com/pythonlessons/TensorFlow-2.x-YOLOv3/tree/master/yolov3 (accessed on 3 April 2025).
- Lin, T.-Y.; Maire, M.; Belongie, S.; Bourdev, L.; Girshick, R.; Hays, J.; Perona, P.; Ramanan, D.; Zitnick, C.L.; Dollár, P. Microsoft COCO: Common Objects in Context. arXiv 2015, arXiv:1405.0312. [Google Scholar]
- Zimmer, W.; Ercelik, E.; Zhou, X.; Ortiz, X.J.D.; Knoll, A. A Survey of Robust 3D Object Detection Methods in Point Clouds. arXiv 2022, arXiv:2204.00106. [Google Scholar]
- Nguyen, M. GitHub Repository: Complex-YOLOv4-Pytorch: The PyTorch Implementation of Complex-YOLO for Real-time 3D Object Detection on Point Clouds. 2022. Available online: https://github.com/maudzung/Complex-YOLOv4-Pytorch (accessed on 3 April 2025).
- Grothum, O. Classification of Mobile-Mapping-Pointclouds Based on Machine Learning Algorithms. AGIT J. Für Angew. Geoinformatik 2019, 5, 315–328. [Google Scholar]
- Lee, S.; Lim, H.; Myung, H. Patchwork++: Fast and Robust Ground Segmentation Solving Partial Under-Segmentation Using 3D Point Cloud. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 13276–13283. [Google Scholar]
- Olmin, A.; Lindsten, F. Robustness and Reliability When Training with Noisy Labels. arXiv 2022, arXiv:2110.03321. [Google Scholar]
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2003; p. 655. [Google Scholar]
- Horn, B.K.; Schunck, B.G. Determining Optical Flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar]
- Chiang, B.; Bohg, J. Optical and Scene Flow; Stanford University: Stanford, CA, USA, 2022. [Google Scholar]
- Farnebäck, G. Two-Frame Motion Estimation Based on Polynomial Expansion. In Image Analysis; Lecture Notes in Computer Science, Vol. 2749; Springer: Berlin/Heidelberg, Germany, 2003; pp. 363–370. [Google Scholar]
- Barnum, P.; Hu, B.; Brown, C. Exploring the Practical Limits of Optical Flow. Available online: https://urresearch.rochester.edu/institutionalPublicationPublicView.action?institutionalItemId=182&versionNumber=1 (accessed on 3 July 2025).
- Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 674–679. [Google Scholar]
- Bradski, G. The OpenCV Library. Dr. Dobb’s Journal of Software Tools 2000. [Google Scholar]
- Ranftl, R.; Lasinger, K.; Hafner, D.; Schindler, K.; Koltun, V. Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1623–1637. [Google Scholar]
- Ruggaber, J.; Ahmic, K.; Brembeck, J.; Baumgartner, D.; Tobolář, J. AI-For-Mobility—A New Research Platform for AI-Based Control Methods. Appl. Sci. 2023, 13, 2879. [Google Scholar]
- Joglekar, A.; Joshi, D.; Khemani, R.; Nair, S.; Sahare, S. Depth Estimation Using Monocular Camera. Int. J. Comput. Sci. Inf. Technol. 2011, 2, 1758–1763. [Google Scholar]
- Baur, S.; Emmerichs, D.; Moosmann, F.; Pinggera, P.; Ommer, B.; Geiger, A. SLIM: Self-Supervised LiDAR Scene Flow and Motion Segmentation. In Proceedings of the International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021. [Google Scholar]
- Lin, Y.; Caesar, H. ICP-Flow: LiDAR Scene Flow Estimation with ICP. arXiv 2024, arXiv:2402.17351. [Google Scholar]
- Liu, X.; Qi, C.R.; Guibas, L.J. FlowNet3D: Learning Scene Flow in 3D Point Clouds. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Puy, G.; Boulch, A.; Marlet, R. FLOT: Scene Flow on Point Clouds Guided by Optimal Transport. In Computer Vision—ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020. [Google Scholar]
- Reddy, B.R.G.; Prasad, C.B.; Harika, P.; Sonam, S.K. State Estimation and Tracking using Recurrent Neural Networks. Int. J. Eng. Res. Technol. 2017, 6, 545–549. [Google Scholar]
- Wenjie, X.; Chen, X.; Yau, S. Recurrent Neural Networks are Universal Filters. IEEE Trans. Neural Netw. Learn. Syst. 2020, 34, 992–8006. [Google Scholar]
- Feldkamp, L.A.; Prokhorov, D.V. Recurrent Neural Networks for State Estimation. In Proceedings of the Workshop on Adaptive and Learning Systems, Portland, Oregon, USA, 20–24 July 2003. [Google Scholar]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA; London, UK, 2016; p. 775. [Google Scholar]
- German Aerospace Center (DLR), Department of Vehicle Dynamics and Control Engineering. Introducing the All New AI for Mobility Research Platform. Available online: https://vsdc.de/en/the-basic-vehicle-platform-of-afm/ (accessed on 28 January 2025).
- Open Robotics. ROS 2 Documentation, Message Filters. 2025. Available online: https://docs.ros.org/en/humble/p/message_filters/doc/index.html (accessed on 3 April 2025).
- Riekert, P.; Schunck, T. Zur Fahrmechanik des gummibereiften Kraftfahrzeugs. Ing. Arch. 1940, 11, 210–224. [Google Scholar]
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar]
- Chollet, F. Keras: Deep Learning. 2025. Available online: https://keras.io (accessed on 3 April 2025).
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. White Paper. 2015. Available online: https://www.tensorflow.org/ (accessed on 3 July 2025).
- Modelica Association. Functional Mock-up Interface (FMI) Standard. Available online: https://fmi-standard.org (accessed on 22 May 2025).
- Ruggaber, J.; Brembeck, J. A Novel Kalman Filter Design and Analysis Method Considering Observability and Dominance Properties of Measurands Applied to Vehicle State Estimation. Sensors 2021, 21, 4750. [Google Scholar] [CrossRef]
- Brembeck, J. A Physical Model-Based Observer Framework for Nonlinear Constrained State Estimation Applied to Battery State Estimation. Sensors 2019, 19, 4402. [Google Scholar] [CrossRef]
- Brembeck, J. Nonlinear Constrained Moving Horizon Estimation Applied to Vehicle Position Estimation. Sensors 2019, 19, 2276. [Google Scholar] [CrossRef]
- Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1998. [Google Scholar]
| Description | Variable | Camera Image (Mono) | Lidar Point Cloud |
|---|---|---|---|
| Distance between vehicle and object | | Must be determined by depth estimation | Measured directly |
| Longitudinal and lateral distance between vehicle and object | | Determined using the object's pixel coordinates and the estimated depth (see Section 2.3.2) | Measured directly |
| Change of position of object relative to vehicle | | Related to the optical flow of an object point | Scene flow, the 3D analog of optical flow |
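As a hedged illustration of the camera branch in the table above, the snippet below computes dense optical flow between two consecutive frames using OpenCV's implementation of Farnebäck's method (both cited in the references). The parameter values are common defaults, not necessarily those used in this work, and the per-pixel depth needed to convert flow into metric distances (e.g., via MiDaS) is not shown.

```python
import cv2
import numpy as np

def dense_optical_flow(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> np.ndarray:
    """Per-pixel optical flow between two consecutive camera frames.

    Returns an (H, W, 2) array of pixel displacements (u, v).
    """
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Farnebäck dense flow: pyramid scale 0.5, 3 levels, window 15, 3 iterations,
    # polynomial neighborhood 5, sigma 1.2, no extra flags (illustrative settings).
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
```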
| Parameter | RNN Type | Hidden Layers (Feedforward) | Hidden Layers (Recurrent) | Sequence Length | GRU Units | Dropout Rate | Learning Rate |
|---|---|---|---|---|---|---|---|
| Value | GRU | | | | | | |

| Parameter | Activation Function (State) | Activation Function (Gate) | Activation Function (Output) | Loss Function | Batch Size | Volume of Training Data (Time) | Volume of Training Data (Distance) |
|---|---|---|---|---|---|---|---|
| Value | tanh | sigmoid | linear | Mean Squared Error | 60 | 65 min | 80 km |
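The following is a minimal Keras/TensorFlow sketch consistent with the hyperparameters listed above (GRU cells, tanh state activation, sigmoid gate activation, linear output, mean squared error loss, Adam optimizer, batch size 60). The sequence length, feature count, unit count, dropout rate, and learning rate are placeholders, since their values are not reproduced in the table; this is an illustration under those assumptions, not the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder dimensions (not stated in the table above).
SEQ_LEN, N_FEATURES, N_UNITS = 50, 16, 64

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    # GRU cell: tanh for the state activation, sigmoid for the gates, as in the table.
    layers.GRU(N_UNITS, activation="tanh", recurrent_activation="sigmoid",
               dropout=0.2, return_sequences=False),
    # Linear output layer for the two estimated states (sideslip angle, velocity).
    layers.Dense(2, activation="linear"),
])

# Mean squared error loss and Adam optimizer (cf. Kingma and Ba).
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
# model.fit(x_train, y_train, batch_size=60, ...)  # training call; data pipeline not shown
```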
| Estimator | Fit [%], Sideslip Angle [°] | Fit [%], Velocity [m/s] | RMSE, Sideslip Angle [°] | RMSE, Velocity [m/s] |
|---|---|---|---|---|
| RNN (low-pass filtered) | 73.7 | 49.7 | 0.25 | 1.19 |
| Unscented Kalman Filter | 70.5 | 97.4 | 0.28 | 0.06 |
| Luenberger Observer | 58.6 | 97.1 | 0.38 | 0.06 |
| Estimator | High Tire–Road Friction: Fit [%], Sideslip Angle [°] | Fit [%], Velocity [m/s] | RMSE, Sideslip Angle [°] | RMSE, Velocity [m/s] | Low Tire–Road Friction: Fit [%], Sideslip Angle [°] | Fit [%], Velocity [m/s] | RMSE, Sideslip Angle [°] | RMSE, Velocity [m/s] |
|---|---|---|---|---|---|---|---|---|
| RNN (low-pass filtered) | 65.3 | 71.7 | 0.33 | 1.1 | 49.2 | 30.1 | 0.46 | 1.1 |
| Unscented Kalman Filter | 64.4 | 98.4 | 0.33 | 0.1 | −200 | −56.6 | 2.92 | 1.7 |
| Luenberger Observer | 55.8 | 98.3 | 0.41 | 0.1 | −66.3 | −53.2 | 1.52 | 1.7 |
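For reference, the two evaluation criteria in the tables above can be computed as sketched below. The "Fit [%]" definition is assumed here to be the normalized-root-mean-square fit common in system identification (cf. Ljung); under that assumption negative values, as seen for the Kalman-filter-based estimators on low friction, simply mean the estimate tracks the reference worse than its mean value would.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between reference signal and estimate."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def fit_percent(y_true, y_pred):
    """Assumed fit criterion: 100 * (1 - ||y - yhat|| / ||y - mean(y)||)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    num = np.linalg.norm(y_true - y_pred)
    den = np.linalg.norm(y_true - np.mean(y_true))
    return 100.0 * (1.0 - num / den)
```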