Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction
Abstract
1. Introduction
- We design an accurate and robust positioning scheme tailored to railway trains. Because train operation spans large distances, the IMU preintegration factor compensates for the Earth's rotation. By combining chi-square testing with eigenvalue analysis, a non-heuristic threshold detects degeneracy and sets the weight of the LiDAR-inertial odometry factor in the factor graph, yielding high-frequency, highly robust positioning output.
- We propose a kilometer-post-based error correction method. It begins with hierarchical visual landmark detection: a lightweight neural network identifies kilometer posts and their numerical identifiers, which are then combined with LiDAR point cloud data to recover a precise absolute position. Kilometer post constraints are then established to correct the pose.
- We extensively test the proposed method in real railway operating environments. Experimental results demonstrate high-precision positioning under high-speed, long-distance train operation, opening a new avenue for the development of intelligent railway trains.
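The degeneracy-aware weighting described above can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the authors' exact formulation: it takes the Gauss–Newton Hessian JᵀJ of the scan-matching problem, uses the 95% chi-square critical value for 6 degrees of freedom as a non-heuristic eigenvalue threshold, and maps the smallest eigenvalue to a factor weight (the function name `degeneracy_weight` and the proportional mapping are ours).

```python
import numpy as np

# 95% chi-square critical value for 6 degrees of freedom (a 6-DoF pose).
CHI2_95_DOF6 = 12.592

def degeneracy_weight(JtJ, thresh=CHI2_95_DOF6):
    """Map the least-constrained direction of scan matching to a weight
    in (0, 1] for the LiDAR-inertial odometry factor.

    Eigenvalues of the Gauss-Newton Hessian J^T J measure how strongly
    each direction of the pose is constrained by the point cloud; an
    eigenvalue below the chi-square-derived threshold marks that
    direction as degenerate, and the factor is down-weighted.
    """
    eigvals = np.linalg.eigvalsh(np.asarray(JtJ, dtype=float))
    lam_min = eigvals[0]  # eigvalsh returns eigenvalues in ascending order
    return float(np.clip(lam_min / thresh, 1e-6, 1.0))
```

The resulting weight would scale the information matrix of the odometry factor before insertion into the factor graph, so poorly constrained scans (e.g., long featureless sections) contribute less than well-constrained ones.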
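Likewise, fusing a 2D kilometer-post detection with LiDAR points to obtain a precise position can be sketched as follows. This is a hedged sketch rather than the paper's exact procedure: it assumes a pinhole intrinsic matrix `K`, LiDAR points already transformed into the camera frame, and takes the median of projected points that fall inside the detection box (the function name and the median-depth heuristic are ours).

```python
import numpy as np

def landmark_position(points_cam, K, bbox):
    """Estimate a kilometer post's 3D position in the camera frame.

    points_cam : (N, 3) LiDAR points already transformed into the camera
                 frame via the LiDAR-camera extrinsics.
    K          : (3, 3) pinhole intrinsic matrix.
    bbox       : detector output (u_min, v_min, u_max, v_max) in pixels.
    """
    u_min, v_min, u_max, v_max = bbox
    pts = np.asarray(points_cam, dtype=float)
    pts = pts[pts[:, 2] > 0.1]          # keep points in front of the camera
    uvw = (K @ pts.T).T                 # project onto the image plane
    uv = uvw[:, :2] / uvw[:, 2:3]
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    if not inside.any():
        return None                     # no LiDAR return on the post
    # Median over the in-box points is robust to stray background hits.
    return np.median(pts[inside], axis=0)
```

The recovered 3D position, paired with the post's recognized numerical identifier, would then supply the absolute-position constraint used for pose correction.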
2. Related Work
2.1. Train Section Localization
2.2. Train Global Localization
3. Methodology
3.1. Problem Definition
3.2. System Overview
3.3. Section Positioning Odometry
3.3.1. IMU Preintegration Factor
3.3.2. Odometry Factor
3.4. Hierarchical Visual Detector
3.4.1. Detection of Kilometer Post
3.4.2. Acquisition of Absolute Position
3.5. Global Positioning Odometry
4. Experiments
4.1. Experimental Setup
4.2. Ablation Study
4.3. Hierarchical Visual Detector Experiments
4.4. Accuracy Evaluation
4.5. Time Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhang, Y.; An, H. Fusion Application Scheme of GPS+BDS Satellite Positioning Technology in ITCS System. Railw. Signal. Commun. 2021, 59, 37–41. [Google Scholar] [CrossRef]
- Wu, Z.; Ren, X.; Xu, K.; He, G. Research on Modeling and Simulation Method of Train Integrated Positioning Based on BDS/INS System. Railw. Signal. Commun. 2024, 60, 20–26. [Google Scholar] [CrossRef]
- Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast Direct LiDAR-inertial Odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
- Zhang, H.; Huo, J.; Huang, Y.; Wang, D. Perception-Aware Based Quadrotor Translation and Yaw Angle Planner in Unpredictable Dynamic Situations. IEEE Trans. Aerosp. Electron. Syst. 2024, 61, 47–60. [Google Scholar] [CrossRef]
- Cheng, R.; Song, Y.; Chen, D.; Chen, L. Intelligent Localization of a High-Speed Train Using LSSVM and the Online Sparse Optimization Approach. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2071–2084. [Google Scholar] [CrossRef]
- Heirich, O.; Siebler, B. Onboard Train Localization with Track Signatures: Towards GNSS Redundancy. In Proceedings of the 30th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS + 2017), Portland, OR, USA, 25–29 September 2017; pp. 3231–3237. [Google Scholar] [CrossRef]
- Liu, J.; Cai, B.-G.; Wang, J. Track-constrained GNSS/odometer-based train localization using a particle filter. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 8 August 2016; pp. 877–882. [Google Scholar]
- Allotta, B.; D’Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Vettori, G. A localization algorithm for railway vehicles. In Proceedings of the 2015 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) Proceedings, Pisa, Italy, 11–14 May 2015; pp. 681–686. [Google Scholar] [CrossRef]
- Ye, H.; Chen, Y.; Liu, M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150. [Google Scholar] [CrossRef]
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24–30 October 2020; pp. 5135–5142. [Google Scholar] [CrossRef]
- Xu, W.; Zhang, F. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
- Zhang, H.; Wang, D.; Huo, J. A Visual-Inertial Dynamic Object Tracking SLAM Tightly Coupled System. IEEE Sens. J. 2023, 23, 19905–19917. [Google Scholar] [CrossRef]
- Tschopp, F.; Schneider, T.; Palmer, A.W.; Nourani-Vatani, N.; Cadena, C.; Siegwart, R.; Nieto, J. Experimental Comparison of Visual-Aided Odometry Methods for Rail Vehicles. IEEE Robot. Autom. Lett. 2019, 4, 1815–1822. [Google Scholar] [CrossRef]
- Heirich, O.; Robertson, P.; Strang, T. RailSLAM-Localization of rail vehicles and mapping of geometric railway tracks. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 5212–5219. [Google Scholar] [CrossRef]
- Daoust, T.; Pomerleau, F.; Barfoot, T.D. Light at the End of the Tunnel: High-Speed LiDAR-Based Train Localization in Challenging Underground Environments. In Proceedings of the 2016 13th Conference on Computer and Robot Vision (CRV), Victoria, BC, Canada, 1–3 June 2016; pp. 93–100. [Google Scholar] [CrossRef]
- Wang, Y.; Song, W.; Lou, Y.; Zhang, Y.; Huang, F.; Tu, Z.; Liang, Q. Rail Vehicle Localization and Mapping with LiDAR-Vision-Inertial-GNSS Fusion. IEEE Robot. Autom. Lett. 2022, 7, 9818–9825. [Google Scholar] [CrossRef]
- Dai, X.; Song, W.; Wang, Y.; Xu, Y.; Lou, Y.; Tang, W. LiDAR–Inertial Integration for Rail Vehicle Localization and Mapping in Tunnels. IEEE Sens. J. 2023, 23, 17426–17438. [Google Scholar] [CrossRef]
- Zhang, H.; Wang, D.; Huo, J. Mounting Misalignment and Time Offset Self-Calibration Online Optimization Method for Vehicular Visual-Inertial-Wheel Odometer System. IEEE Trans. Instrum. Meas. 2024, 73, 5017113. [Google Scholar] [CrossRef]
- Zhen, W.; Scherer, S. Estimating the Localizability in Tunnel-like Environments using LiDAR and UWB. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4903–4908. [Google Scholar] [CrossRef]
- Kabuka, M.; Arenas, A. Position verification of a mobile robot using standard pattern. IEEE J. Robot. Autom. 1987, 3, 505–516. [Google Scholar]
- Kobayashi, H. A new proposal for self-localization of mobile robot by self-contained 2D barcode landmark. In Proceedings of the SICE Annual Conference (SICE), Akita, Japan, 20–23 August 2012; pp. 2080–2083. [Google Scholar]
- Samperio, R.; Hu, H. Real-time landmark modelling for visual-guided walking robots. Int. J. Comput. Appl. Technol. 2011, 41, 253–261. [Google Scholar]
- Zhao, L.; Hu, Y.; Han, F.; Dou, Z.; Li, S.; Zhang, Y.; Wu, Q. Multi-sensor missile-borne LiDAR point cloud data augmentation based on Monte Carlo distortion simulation. CAAI Trans. Intell. Technol. 2024, 10, 300–316. [Google Scholar]
- Yang, X.; Castillo, R.; Zou, Y.; Wotherspoon, L. Semantic segmentation of bridge point clouds with a synthetic data augmentation strategy and graph-structured deep metric learning. Autom. Constr. 2023, 150, 104838. [Google Scholar]
- Dong, J.; Wang, N.; Fang, H.; Lu, H.; Ma, D.; Hu, H. Automatic augmentation and segmentation system for three-dimensional point cloud of pavement potholes by fusion convolution and transformer. Adv. Eng. Inform. 2024, 60, 102378. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Jocher, G.; Stoken, A.; Borovec, J.; NanoCode012; Chaurasia, A.; Liu, C.; Xie, T.; Abhiram, V.; Laughing; Tkianai; et al. ultralytics/yolov5: v5.0-YOLOv5-P6 1280 Models, AWS, Supervise.ly and YouTube Integrations, Version v5.0; Zenodo: Geneva, Switzerland, 2021. [CrossRef]
- Wang, D.; Shi, X.; Zhang, H.; Huo, J.; Cai, C. Visual Landmark-Aided LiDAR–Inertial Odometry for Rail Vehicle. IEEE Sens. J. 2024, 24, 27653–27665. [Google Scholar] [CrossRef]
- Teng, D.; Zhao, Y.; Zhang, K.F.A.M. The Application Research of Autonomous Train Positioning Image Recognition Technology. Railw. Transp. Econ. 2020, 42, 43–48. [Google Scholar]
- Kaess, M.; Johannsson, H.; Roberts, R.; Ila, V.; Leonard, J.J.; Dellaert, F. iSAM2: Incremental smoothing and mapping using the Bayes tree. Int. J. Robot. Res. 2011, 31, 216–235. [Google Scholar]
- Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: Boston, MA, USA, 2008. [Google Scholar]
- Tang, H.; Zhang, T.; Niu, X.; Fan, J.; Liu, J. Impact of the Earth Rotation Compensation on MEMS-IMU Preintegration of Factor Graph Optimization. IEEE Sens. J. 2022, 22, 17194–17204. [Google Scholar]
- Qin, T.; Li, P.; Shen, S. VINS-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar]
- Lee, J.W.; Komatsu, R.; Shinozaki, M.; Kitajima, T.; Asama, H.; An, Q.; Yamashita, A. Switch-SLAM: Switching-Based LiDAR-Inertial-Visual SLAM for Degenerate Environments. IEEE Robot. Autom. Lett. 2024, 9, 7270–7277. [Google Scholar]
- Zhang, J.; Kaess, M.; Singh, S. On degeneracy of optimization-based state estimation problems. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 809–816. [Google Scholar] [CrossRef]
- Scaramuzza, D.; Fraundorfer, F. Visual Odometry [Tutorial]. IEEE Robot. Autom. Mag. 2011, 18, 80–92. [Google Scholar] [CrossRef]
- Shan, T.; Englot, B.; Ratti, C.; Rus, D. LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May 2021–5 June 2021; pp. 5692–5698. [Google Scholar] [CrossRef]
| Variable Names | Variable Contents |
|---|---|
|  | States related to train positioning |
|  | Error state: the difference between the true state and the nominal state |
|  | LiDAR frame, IMU frame, camera frame; world frame defined by the initial IMU frame |
|  | Extrinsic parameters between coordinate frame A and coordinate frame B |
|  | Logarithmic map from SO(3) to Euclidean space |
Equipment | Description |
---|---|
RS-M1 solid-state LiDAR (RoboSense Technology Co., Ltd., Shenzhen, China) | Irregular scanning pattern, FoV: 120° × 25°, 10 Hz |
NVIDIA Jetson AGX Orin (NVIDIA Corporation, Santa Clara, CA, USA) | 12-core Arm Cortex-A78AE 64-bit CPU, 64 GB RAM |
G90 integrated navigation suite (WHEELTECH Co., Ltd., Dongguan, China) | Contains a 9-axis IMU and a GNSS positioning receiver; used to provide ground-truth trajectories |
ZED2 camera (Stereolabs, San Francisco, CA, USA) | 20 Hz, 960 × 540 resolution, only the left camera is used for image capture |
Dataset | Distance (m) | Duration (s) |
---|---|---|
morn_202411041117 | 3910.54 | 470 |
noon_202411061502 | 8375.53 | 1195 |
noon_202411061331 | 19,020.51 | 1263 |
night_202411061918 | 24,103.19 | 1120 |
Dataset | LE (w/o ERP) | LE (w/ ERP) |
---|---|---|
morn_202411041117 | 16.85 | 14.11 |
noon_202411061502 | 26.87 | 22.08 |
noon_202411061331 | 106.48 | 78.64 |
night_202411061918 | 125.34 | 86.21 |
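The ablation above isolates the effect of Earth rotation compensation in the IMU preintegration factor. A minimal sketch of the gyroscope correction, under simplifying assumptions of our own (constant WGS-84 rate, NED navigation frame, known attitude `R_bn` from navigation to body, transport rate ignored):

```python
import numpy as np

OMEGA_IE = 7.292115e-5  # Earth rotation rate (rad/s), WGS-84

def earth_rate_ned(lat_rad):
    # Earth rotation vector expressed in the local NED navigation frame.
    return OMEGA_IE * np.array([np.cos(lat_rad), 0.0, -np.sin(lat_rad)])

def compensate_gyro(gyro_body, R_bn, lat_rad):
    # Subtract the Earth-rotation component sensed by the gyroscope so the
    # preintegrated rotation reflects motion relative to the navigation
    # frame rather than inertial space. R_bn rotates navigation-frame
    # vectors into the body frame.
    return gyro_body - R_bn @ earth_rate_ned(lat_rad)
```

The uncompensated rate is only about 0.0042°/s, but it integrates into a heading drift that grows with trajectory length, consistent with the widening gap between the two columns on the longer runs.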
Method | ATE Max (m) | ATE RMSE (m) | Length Error (m) |
---|---|---|---|
FAST-LIO2 | 343.91 | 165.86 | 18.77 |
LIO-SAM | 342.13 | 234.39 | 42.46 |
LVI-SAM | 738.26 | 1125.91 | 1187.19 |
Ours | 9.89 | 3.88 | 3.74 |
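The ATE statistics in the table can be computed from time-aligned position sequences with a short routine. This is a sketch that assumes the estimated trajectory has already been associated with, and expressed in the frame of, the ground truth:

```python
import numpy as np

def ate_stats(est_xyz, gt_xyz):
    """Absolute trajectory error between two time-aligned (N, 3) position
    arrays: returns (max error, RMSE) in the arrays' units (meters here)."""
    err = np.linalg.norm(np.asarray(est_xyz) - np.asarray(gt_xyz), axis=1)
    return float(err.max()), float(np.sqrt(np.mean(err ** 2)))
```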
Settings | Module | Time (ms) |
---|---|---|
LIDAR: 10 Hz | Section positioning odometry | 22.99 |
Camera: 20 Hz, 960 × 540 | Global positioning odometry | 89.54 |
IMU: 100 Hz | Hierarchical visual detector | 28.57 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yue, L.; Wang, P.; Mu, J.; Cai, C.; Wang, D.; Ren, H. Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction. Sensors 2025, 25, 4637. https://doi.org/10.3390/s25154637