Front–Rear Camera Switching Strategy for Indoor Localization in Automated Valet Parking Systems with Extended Kalman Filter and Fiducial Markers
Abstract
1. Introduction
2. System Overview
2.1. Vehicle System and Configuration
2.2. Localization System Configuration
3. Methodology
- An algorithm that utilizes fiducial markers and an EKF for precise positioning;
- A front–rear camera-switching algorithm designed to compensate for positioning gaps during parking maneuvers.

Furthermore, this section details the performance evaluation of the proposed algorithms through the following:
- Simulations conducted in a digital twin environment using the CAR Learning to Act (CARLA) simulator;
- Real-vehicle experiments.
3.1. Fiducial Marker
3.2. Design of Extended Kalman Filter
3.2.1. Prediction Model
3.2.2. Correction Model
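The prediction and correction steps of the filter can be summarized with the following minimal Python sketch. It assumes a planar state vector (x, y, yaw), a prediction step driven by speed and yaw-rate odometry, and a correction step that consumes a global vehicle pose recovered from a fiducial-marker detection; the state layout, noise matrices, and function names are illustrative assumptions rather than the exact design used here.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

class PoseEKF:
    """Minimal planar EKF with state x = [x, y, yaw] in the global frame."""

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)  # state estimate
        self.P = np.asarray(P0, dtype=float)  # state covariance
        self.Q = Q                            # process noise (prediction model)
        self.R = R                            # measurement noise (marker-based pose)

    def predict(self, v, yaw_rate, dt):
        """Propagate the state with speed/yaw-rate odometry (prediction model)."""
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           wrap_angle(yaw + yaw_rate * dt)])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def correct(self, z_pose):
        """Fuse a global pose measurement [x, y, yaw] derived from a marker detection."""
        H = np.eye(3)                          # the measurement observes the full pose
        innovation = z_pose - H @ self.x
        innovation[2] = wrap_angle(innovation[2])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ innovation
        self.x[2] = wrap_angle(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P
```

In this arrangement the prediction step runs at the odometry rate, while the correction step runs only when a marker is visible, which keeps the estimate from drifting between marker observations.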
3.2.3. Coordinate Transformation for Measurement Model
- $T^{G}_{V}$: The vehicle’s pose relative to the global frame. This is the state vector that we aim to estimate;
- $T^{V}_{C}$: The camera’s pose relative to the vehicle frame. This is a static transformation determined by the camera’s fixed mounting position and orientation, and it is calibrated beforehand;
- $T^{C}_{M}$: The marker’s pose relative to the camera frame. This is a dynamic measurement obtained from the marker recognition algorithm.

These three transforms are chained to form the measurement model, as sketched below.
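As a concrete illustration of how these transforms combine, the sketch below recovers the vehicle’s global pose from a single marker detection using 4×4 homogeneous transforms. The marker’s global pose is assumed to come from a pre-built marker map, and the function names are illustrative, not taken from the paper’s implementation.

```python
import numpy as np

def invert(T):
    """Invert a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def vehicle_pose_from_marker(T_global_marker, T_vehicle_camera, T_camera_marker):
    """
    Recover the vehicle pose in the global frame from one marker detection.

    T_global_marker : marker pose in the global frame (from the marker map)
    T_vehicle_camera: camera pose in the vehicle frame (calibrated extrinsic)
    T_camera_marker : marker pose in the camera frame (marker detector output)

    Solves T_global_vehicle @ T_vehicle_camera @ T_camera_marker = T_global_marker
    for the unknown vehicle pose.
    """
    return T_global_marker @ invert(T_vehicle_camera @ T_camera_marker)
```

The resulting pose (or its planar projection) is what the EKF correction step consumes as a measurement.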
3.3. Front and Rear Camera Switching Algorithm
- Frequent switching in threshold regions based solely on position could lead to system instability;
- Forward perpendicular parking mode is necessary for charging services during parking.

To address these issues, the switching algorithm additionally considers the following:

- The vehicle’s motion direction (forward or reverse);
- Stabilization conditions in threshold regions.
Algorithm 1. Hysteresis-based Camera Switching Logic
Require: the vehicle’s current motion direction and the range of angles defining reverse motion.
Ensure: the selected camera (Front or Rear).
1: if the motion direction lies within the reverse-motion angle range then
2:     return ‘Rear’
3: else if the motion direction indicates forward motion then
4:     return ‘Front’
5: else
6:     return the previously selected camera (hold the current selection in the threshold region)
7: end if
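A minimal Python sketch of this hysteresis logic is given below. The reverse-angle range, the forward-motion flag, and the class interface are illustrative assumptions; the point is only to show how holding the previous selection in the threshold region suppresses frequent camera switching.

```python
class CameraSwitcher:
    """Hysteresis-based front/rear camera selection (illustrative sketch)."""

    def __init__(self, reverse_angle_range=(-180.0, -90.0), initial='Front'):
        # Assumed range of motion-direction angles (deg) that defines reverse motion.
        self.reverse_angle_range = reverse_angle_range
        self.selected = initial  # previously selected camera (hysteresis state)

    def update(self, motion_angle_deg, moving_forward):
        lo, hi = self.reverse_angle_range
        if lo <= motion_angle_deg <= hi:    # clearly reverse motion
            self.selected = 'Rear'
        elif moving_forward:                # clearly forward motion
            self.selected = 'Front'
        # Otherwise the vehicle is in the threshold region: keep the previous
        # selection so the active camera does not oscillate.
        return self.selected
```

For example, `CameraSwitcher().update(-120.0, moving_forward=False)` returns ‘Rear’, and subsequent calls inside the threshold region keep returning ‘Rear’ until a clear forward-motion condition is met.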
4. Experiments and Results
4.1. Experimental Environment Setup
4.2. Simulation
4.2.1. Simulation Setup
4.2.2. Simulation Result
4.3. Real-Car Experiment
4.3.1. Real-Car Experiment Setup
4.3.2. EKF Experiment Results
4.3.3. Camera Switching Algorithm Experiment Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
AVP | Automated Valet Parking |
CAN | Controller Area Network |
CARLA | CAR Learning to Act |
DOF | Degrees of Freedom |
ECU | Electronic Control Unit |
EKF | Extended Kalman Filter |
GPS | Global Positioning System |
INS | Inertial Navigation System |
IVN | In-Vehicle Network |
RMS | Root Mean Square |
ROS | Robot Operating System |
SLAM | Simultaneous Localization and Mapping |
| Method | Max Error (X) | RMS (X) | SD (X) | Rate of Error Reduction (X) | Max Error (Y) | RMS (Y) | SD (Y) | Rate of Error Reduction * (Y) |
|---|---|---|---|---|---|---|---|---|
| INS | 0.2980 | 0.1813 | 0.0995 | - | 0.4878 | 0.2755 | 0.1863 | - |
| EKF compensation | 0.3070 | 0.1813 | 0.1004 | 0% | 0.1777 | 0.0991 | 0.0962 | 64% |
| Method | Max Error (X) | RMS (X) | SD (X) | Rate of Error Reduction (X) | Max Error (Y) | RMS (Y) | SD (Y) | Rate of Error Reduction * (Y) |
|---|---|---|---|---|---|---|---|---|
| INS | 0.8360 | 0.3654 | 0.3581 | - | 1.4490 | 0.5310 | 0.4536 | - |
| EKF compensation | 0.2791 | 0.1455 | 0.1417 | 60% | 0.3261 | 0.1285 | 0.1285 | 76% |
| Method | Max Error (X) | RMS (X) | SD (X) | Rate of Error Reduction (X) | Max Error (Y) | RMS (Y) | SD (Y) | Rate of Error Reduction * (Y) |
|---|---|---|---|---|---|---|---|---|
| Conventional method | 0.4239 | 0.1939 | 0.1852 | - | 0.3261 | 0.1389 | 0.1374 | - |
| Proposed Algorithm | 0.2791 | 0.1455 | 0.1417 | 25% | 0.2987 | 0.1145 | 0.1139 | 17.6% |