Laser SLAM Matching Localization Method for Subway Tunnel Point Clouds
Abstract
1. Introduction
- Feature extraction part. First, we preprocess the tunnel point clouds: the principal axis of the point cloud is estimated by PCA (Principal Component Analysis) and corrected to the Y-axis (the tunnel's longitudinal direction). Second, the cross-section of the point cloud is obtained with a pass-through filter and then fitted to a circle. Third, we establish a polar coordinate system and extract the initial feature point set by comparing the polar radius of each point. Finally, a global geometric feature extraction model is constructed, and features are extracted from both the scan point cloud and the map point cloud (a code sketch of this preprocessing step follows the section roadmap below).
- Registration part. To handle the difference in point cloud density between the scan point cloud and the map point cloud, we propose a coarse-to-fine registration strategy. First, coarse registration is split into two steps, rotation and translation: the rotation matrix and translation vector are obtained step by step by constructing constrained registration between feature sets, completing the coarse positioning. With this initial pose, point-to-plane ICP completes the fine registration and solves the final pose, eliminating the mismatching that otherwise occurs in subway tunnels.
- Pose optimization part. Because some scans show a large error in a certain direction, we define scans with small registration error as keyframes. Motion compensation based on a uniform motion model is then applied to the erroneous scans. Finally, the poses are optimized according to the keyframe poses.
- Section 2 (Materials and Methods) details the experimental data acquisition using Innovusion lidar, the global geometric feature extraction model (including cross-section extraction, line/plane feature extraction, and convex hull feature extraction), the coarse-to-fine registration strategy (rotation-translation decomposition and Point-Plane ICP), and the keyframe-based pose optimization method.
- Section 3 (Results) validates the algorithm’s performance through feature extraction visualizations, registration accuracy metrics (RMSE), and comparative experiments with traditional ICP and PCA + ICP methods.
- Section 4 (Discussion and Conclusions) discusses the algorithm’s advantages in addressing geometric similarity challenges in tunnels, compares it with related works, and summarizes the quantitative improvements (3 cm accuracy, 0.7 s processing time) and engineering applications.
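As a minimal illustration of the preprocessing described in the feature extraction part above, the sketch below estimates the principal axis by PCA and rotates the cloud so that axis coincides with Y. It is not the authors' implementation; the function name, the use of NumPy, and the Rodrigues construction are our assumptions.

```python
import numpy as np

def align_tunnel_axis(points: np.ndarray) -> np.ndarray:
    """Rotate a tunnel point cloud so its PCA principal axis lies on the Y-axis.

    points: (N, 3) array; returns the centered, rotated (N, 3) array.
    """
    centered = points - points.mean(axis=0)
    # PCA: the eigenvector with the largest eigenvalue of the covariance
    # matrix approximates the tunnel's longitudinal direction.
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]

    # Rodrigues' formula: rotation taking `axis` onto +Y.
    y = np.array([0.0, 1.0, 0.0])
    v = np.cross(axis, y)
    s, c = np.linalg.norm(v), float(axis @ y)
    if s < 1e-12:                                  # already (anti-)parallel
        return centered if c > 0 else centered @ np.diag([-1.0, -1.0, 1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)
    return centered @ R.T
```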
2. Materials and Methods
2.1. Experimental Data
2.2. Feature Extraction Model
2.2.1. Cross-Section Extraction
1. Point cloud principal axis estimation
2. Tunnel coordinate system establishment
3. Cross-section extraction (a minimal sketch follows this list)
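A minimal cross-section extraction sketch, assuming the cloud has already been aligned so that Y is the tunnel axis (e.g., with the PCA sketch in the Introduction). The slab half-thickness of 0.05 m is our illustrative choice, not a value from the paper.

```python
import numpy as np

def extract_cross_section(points: np.ndarray, y0: float,
                          half_thickness: float = 0.05) -> np.ndarray:
    """Pass-through filter along Y: keep points in the thin slab
    [y0 - half_thickness, y0 + half_thickness], i.e., one cross-section.
    """
    mask = np.abs(points[:, 1] - y0) <= half_thickness
    return points[mask]

# Usage: slide y0 along the tunnel axis to obtain successive cross-sections.
```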
2.2.2. Initial Feature Point Set Extraction
1. Fit circular point cloud
2. Rearrange the point numbering
3. Extract the initial feature point set (a fitting and classification sketch follows this list)
- If a point's polar radius differs from the fitted circle radius by no more than ±0.05 m (2% of the tunnel radius), the point is considered to lie on or near the fitted circle and is classified as a surface point.
- If a point violates this condition while its preceding point satisfies it, the point can be considered the starting point of the plane; its point number is recorded and the point is marked as the plane start point. Given a small threshold, points are then taken backward from the start point, and all points that stay within the threshold are recorded as the plane point set (Figure 8c).
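The fit-and-classify steps above might look like the following sketch: an algebraic (Kåsa) least-squares circle fit in the cross-section plane, followed by classification of each point by the deviation of its polar radius from the fitted radius. The 0.05 m tolerance comes from the text; everything else (names, the Kåsa fit itself) is our assumption, since the outline does not specify the fitting method.

```python
import numpy as np

def fit_circle(xz: np.ndarray):
    """Algebraic (Kasa) least-squares circle fit to (N, 2) cross-section
    points: solve x^2 + z^2 + D*x + E*z + F = 0 for D, E, F."""
    x, z = xz[:, 0], xz[:, 1]
    A = np.column_stack([x, z, np.ones_like(x)])
    b = -(x**2 + z**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius

def classify_points(xz: np.ndarray, tol: float = 0.05):
    """Split cross-section points into surface points (on the fitted
    circle) and off-circle candidates (e.g., the track-bed plane)."""
    center, radius = fit_circle(xz)
    rho = np.linalg.norm(xz - center, axis=1)   # polar radius per point
    on_circle = np.abs(rho - radius) <= tol     # +/- 0.05 m from the text
    return xz[on_circle], xz[~on_circle]
```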
2.2.3. Line and Plane Feature Extraction
1. Line feature extraction (a PCA-based line-fitting sketch follows this list)
2. Plane feature extraction
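The outline does not spell out how line features are extracted; one common approach, sketched below under our own assumptions, fits a candidate point set with PCA and accepts it as a line feature only if the points are sufficiently linear. The linearity threshold of 0.95 is illustrative.

```python
import numpy as np

def fit_line_feature(points: np.ndarray, min_linearity: float = 0.95):
    """Fit a 3D line to (N, 3) candidate points by PCA.

    Returns (point_on_line, unit_direction) if the set is line-like,
    else None. Linearity = share of variance on the dominant axis.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    total = eigvals.sum()
    if total <= 0 or eigvals[-1] / total < min_linearity:
        return None                     # degenerate or not line-like
    return centroid, eigvecs[:, -1]     # eigh sorts eigenvalues ascending
```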
2.2.4. Convex Hull Feature Extraction
- For a point $p_i$, given a neighbor radius of 0.2 m (covering typical protrusions such as brackets), obtain the neighbor point set of $p_i$ through a KD-tree query.
- Estimate the normal vector $\mathbf{n}_i$ of point $p_i$ and the normal vector $\mathbf{n}_j$ of each neighboring point $p_j$ by local surface fitting, and then calculate the angle between them by Equation (9):
  $$\theta_{ij} = \arccos\left(\frac{\mathbf{n}_i \cdot \mathbf{n}_j}{\lVert \mathbf{n}_i \rVert\,\lVert \mathbf{n}_j \rVert}\right) \quad (9)$$
- Calculate the normal vector change rate of point $p_i$ (Equation (8)). Given a threshold of 0.1 rad, points whose change rate exceeds the threshold are considered convex hull features, completing the extraction. Only features on the right tunnel wall and the track midline are extracted, to reduce noise from pipes and improve registration efficiency (Figure 13). A detection sketch follows this list.
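Putting the three steps above together, a hedged sketch using SciPy's KD-tree: normals come from classic PCA normal estimation, and the "normal vector change rate" is read here as the mean angle between a point's normal and its neighbors' normals, since Equation (8) is not reproduced in this outline. The 0.2 m radius and 0.1 rad threshold are from the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def convex_hull_features(points: np.ndarray, radius: float = 0.2,
                         thresh: float = 0.1) -> np.ndarray:
    """Flag convex-hull feature points: mean normal-angle to neighbors
    above `thresh` (rad). Radius 0.2 m and threshold 0.1 rad per the text."""
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, r=radius)  # one query, reused

    # Classic PCA normal estimation: smallest-eigenvalue eigenvector of
    # the local covariance matrix.
    normals = np.zeros_like(points)
    for i, idx in enumerate(neighbors):
        nbrs = points[idx]
        if len(nbrs) < 3:
            normals[i] = (0.0, 0.0, 1.0)          # too sparse; placeholder
            continue
        c = nbrs - nbrs.mean(axis=0)
        _, eigvecs = np.linalg.eigh(c.T @ c)
        normals[i] = eigvecs[:, 0]

    keep = np.zeros(len(points), dtype=bool)
    for i, idx in enumerate(neighbors):
        if len(idx) < 2:
            continue
        # |cos| handles the sign ambiguity of estimated normals.
        cos = np.clip(np.abs(normals[idx] @ normals[i]), 0.0, 1.0)
        keep[i] = np.arccos(cos).mean() > thresh  # mean angle as change rate
    return points[keep]
```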
2.3. Coarse-to-Fine Registration Strategy
2.3.1. Step-by-Step Coarse Registration
1. Solve the rotation transformation by line features
2. Solve the translation transformation by line features (a two-step solving sketch follows this list)
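A hedged sketch of the two-step solve: with matched line directions between scan and map features, the rotation is the classic SVD (Kabsch) solution, and the translation then aligns the rotated feature centroids. How the paper builds the constrained correspondences is not reproduced here; the names and the centroid-based translation are our assumptions.

```python
import numpy as np

def solve_rotation(dirs_src: np.ndarray, dirs_dst: np.ndarray) -> np.ndarray:
    """Kabsch/SVD solution: the rotation R minimizing ||R d_src - d_dst||
    over matched unit line directions, shape (K, 3) each."""
    H = dirs_src.T @ dirs_dst
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def solve_translation(pts_src: np.ndarray, pts_dst: np.ndarray,
                      R: np.ndarray) -> np.ndarray:
    """With R fixed, the translation aligns the rotated source feature
    centroid with the target feature centroid."""
    return pts_dst.mean(axis=0) - R @ pts_src.mean(axis=0)
```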
2.3.2. Point-Plane Iterative Closest Points Registration
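The point-to-plane objective minimizes the distance from each transformed scan point to the tangent plane of its nearest map point. Below is a minimal single-iteration sketch under a small-angle linearization; the names and the use of SciPy's KD-tree are our assumptions, and a full implementation iterates to convergence.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_step(src, dst, dst_normals):
    """One linearized point-to-plane ICP iteration.

    Minimizes sum_i (n_i . (R p_i + t - q_i))^2 with the small-angle
    approximation R ~ I + [w]_x, yielding a linear system in [w; t].
    """
    _, idx = cKDTree(dst).query(src)          # nearest map point per scan point
    q, n = dst[idx], dst_normals[idx]
    A = np.hstack([np.cross(src, n), n])      # rows [(p x n)^T, n^T], (N, 6)
    b = -np.einsum('ij,ij->i', n, src - q)    # residuals n . (p - q), negated
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    R = np.eye(3) + np.array([[0.0, -w[2], w[1]],
                              [w[2], 0.0, -w[0]],
                              [-w[1], w[0], 0.0]])
    # In practice: iterate to convergence and re-project R onto SO(3).
    return R, t
```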
2.4. Pose Optimization
2.4.1. Select Keyframe
2.4.2. Motion Compensation Model
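The motion compensation model is described in the Introduction only as "uniform motion"; one minimal reading, sketched below, interpolates an erroneous scan's pose between its neighboring keyframe poses by timestamp, linearly for translation and by Slerp for rotation. The function name and the use of SciPy's Rotation are our assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def compensate_pose(t, t0, T0, t1, T1):
    """Interpolate a 4x4 pose at time t between keyframe poses
    (t0, T0) and (t1, T1) under a uniform motion assumption."""
    a = (t - t0) / (t1 - t0)                 # normalized time in [0, 1]
    # Translation: linear interpolation between keyframe positions.
    trans = (1 - a) * T0[:3, 3] + a * T1[:3, 3]
    # Rotation: spherical linear interpolation between keyframe rotations.
    key_rots = Rotation.from_matrix(np.stack([T0[:3, :3], T1[:3, :3]]))
    rot = Slerp([t0, t1], key_rots)(t).as_matrix()
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = rot, trans
    return T
```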
2.4.3. Optimized Adjustment
3. Results
3.1. Feature Extraction Results
3.1.1. Cross-Section Extraction Results
3.1.2. Initial Feature Point Set Extraction Results
3.1.3. Line and Plane Feature Extraction Results
3.1.4. Convex Hull Feature Extraction Results
3.2. Registration Results
3.2.1. Coarse Registration Results
3.2.2. Fine Registration Results
3.2.3. Pose Optimization Results
3.2.4. Comparative Experiment
4. Discussion and Conclusions
- Matching and positioning accuracy is high. Although SLAM can perform localization and mapping stably in most environments, it is prone to tracking loss and drift caused by mismatching in environments with similar spatial structures, such as tunnels. The registration algorithm in this paper directly places the scan near the correct position in the coarse stage, avoiding mismatching, and the subsequent pose optimization applies a global correction to the remaining errors, ensuring localization accuracy in structurally similar environments.
- Registration efficiency is high. In the coarse registration stage, the step-by-step strategy transforms the source point cloud to the approximate position of the target point cloud, which provides a good initial value for fine registration and greatly reduces the computation in the iterative process, thus improving registration efficiency.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References