A Lightweight LiDAR–Visual Odometry Based on Centroid Distance in a Similar Indoor Environment
Abstract
1. Introduction
- ① We design a forward-projection and interpolation strategy to construct dense colored point clouds from sparse 16-line LiDAR scans and monocular images. This improves vertical resolution and front-view coverage, enabling scan-to-map pose estimation and reducing z-axis drift in long, narrow indoor scenes (see the first sketch after this list).
- ② We introduce a lightweight multi-modal feature extraction algorithm based on centroid distances, which jointly considers geometric and photometric saliency. A dynamic radius adjustment mechanism adapts the neighborhood size to each point's distance from the LiDAR sensor, ensuring stable feature quality under varying point densities (see the second sketch after this list).
- ③ We incorporate a feature reliability weighting strategy into the back-end joint optimization, assigning higher weights to original LiDAR points and lower weights to interpolated or color-enhanced points (see the third sketch after this list). The proposed system is validated in challenging indoor environments and demonstrates strong localization accuracy and robustness.
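The following is a minimal sketch of the dense colored point cloud construction of contribution ①. It assumes a VLP-16-style sensor (16 rows, ±15° vertical field of view) and a calibrated camera; all function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def to_range_image(points, n_rows=16, n_cols=1800,
                   fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-15.0)):
    """Forward-project a LiDAR scan (N, 3) in the sensor frame into a
    row x col range map: row from elevation angle, column from azimuth.
    Colliding points simply overwrite (nearest-return handling omitted)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    azim = np.arctan2(y, x)                        # [-pi, pi]
    elev = np.arcsin(z / np.maximum(rng, 1e-6))    # elevation angle
    cols = ((azim + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    rows = (fov_up - elev) / (fov_up - fov_down) * (n_rows - 1)
    rows = np.clip(np.round(rows).astype(int), 0, n_rows - 1)
    img = np.full((n_rows, n_cols), np.nan)
    img[rows, cols] = rng
    return img

def densify_rows(img, factor=2):
    """Linearly interpolate new scan rows between vertically adjacent
    valid returns, raising the vertical resolution of the range map."""
    n_rows, n_cols = img.shape
    out = np.full(((n_rows - 1) * factor + 1, n_cols), np.nan)
    out[::factor] = img                            # keep original rows
    for i in range(n_rows - 1):
        a, b = img[i], img[i + 1]
        ok = ~np.isnan(a) & ~np.isnan(b)           # both neighbours valid
        for k in range(1, factor):
            t = k / factor
            row = np.full(n_cols, np.nan)
            row[ok] = (1 - t) * a[ok] + t * b[ok]
            out[i * factor + k] = row
    return out

def colorize(points, image, K, T_cam_lidar):
    """Assign each 3-D point the colour of the pixel it projects onto;
    points behind the camera or outside the image are dropped."""
    pts_h = np.c_[points, np.ones(len(points))]
    pc = (T_cam_lidar @ pts_h.T).T[:, :3]          # LiDAR -> camera frame
    keep = pc[:, 2] > 0.1                          # in front of camera
    uv = (K @ pc[keep].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)      # perspective division
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return points[keep][ok], image[uv[ok, 1], uv[ok, 0]]
```

Interpolated rows would be back-projected to 3-D and merged with the measured returns, with their provenance recorded for the weighting described in contribution ③.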
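Below is a sketch of the centroid-distance saliency of contribution ②, in the spirit of the centroid-distance keypoint detector of Teng et al. cited in the references, with a distance-adaptive search radius. The radius schedule and the geometric/photometric weighting are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def centroid_distance_features(xyz, rgb, n_keypoints=500,
                               r0=0.2, d_ref=5.0, r_min=0.1, r_max=0.8,
                               w_geo=1.0, w_col=1.0):
    """Score each point by the distance between the point and the
    centroid of its neighbourhood, in both geometric and colour space.
    The search radius grows with the point's range from the sensor so
    that sparse far-field regions still gather enough neighbours."""
    tree = cKDTree(xyz)
    dist = np.linalg.norm(xyz, axis=1)
    radii = np.clip(r0 * dist / d_ref, r_min, r_max)  # dynamic radius
    scores = np.zeros(len(xyz))
    for i, (p, c, r) in enumerate(zip(xyz, rgb, radii)):
        idx = tree.query_ball_point(p, r)
        if len(idx) < 5:
            continue                                  # unstable centroid
        geo = np.linalg.norm(p - xyz[idx].mean(axis=0))
        col = np.linalg.norm(c - rgb[idx].mean(axis=0))
        scores[i] = w_geo * geo / r + w_col * col     # radius-normalised
    return np.argsort(scores)[-n_keypoints:]          # most salient points
```

Normalising the geometric term by the radius keeps scores comparable between near (small-radius) and far (large-radius) neighbourhoods; this normalisation is our choice, not necessarily the paper's.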
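The last sketch illustrates the reliability weighting of contribution ③: residuals from measured LiDAR returns are scaled up relative to interpolated or color-enhanced points before entering the joint optimization. The weight values and residual forms are illustrative only.

```python
import numpy as np

# Illustrative reliability weights by point provenance: measured LiDAR
# returns trusted most, interpolated rows least (values assumed, not
# reproduced from the paper). kinds: 0 = original, 1 = colored, 2 = interp.
W_ORIGINAL, W_COLORED, W_INTERP = 1.0, 0.7, 0.4

def weighted_residuals(points, normals, plane_pts, colors, map_colors,
                       kinds, w_geo=1.0, w_col=0.5):
    """Stack point-to-plane and colour residuals, scaled per point by a
    provenance weight, ready for a robust least-squares solver."""
    w = np.where(kinds == 0, W_ORIGINAL,
                 np.where(kinds == 1, W_COLORED, W_INTERP))
    # Signed point-to-plane distance to the matched map plane.
    r_geo = np.einsum('ij,ij->i', points - plane_pts, normals)
    # Photometric residual against the colour stored in the map.
    r_col = np.linalg.norm(colors - map_colors, axis=1)
    return np.concatenate([w * w_geo * r_geo, w * w_col * r_col])
```

These stacked residuals could then be minimized over the pose with, e.g., scipy.optimize.least_squares and a Huber loss, so that down-weighted synthetic points refine the estimate without dominating it.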
2. Related Work
3. System Overview
4. Methodology
4.1. Hybrid Dense Point Cloud Construction Based on Range Map
4.2. Search-Radius-Optimized Centroid-Based Feature Point Extraction
4.3. Geometric and Color Joint Optimization
5. Experiment
5.1. Experiment Setup
5.2. Hybrid Dense Point Cloud Construction
5.3. Results of Feature Point Extraction
5.4. Localization Accuracy on the Simulation Dataset
5.5. Runtime Evaluation on the Simulation Dataset
5.6. Results of Public Benchmark Dataset in Indoor Environment
5.7. Results of Public Benchmark Dataset in Outdoor Environments
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Jiang, W.; Liu, T.; Chen, H.; Song, C.; Chen, Q.; Geng, T. Multi-Frequency Phase Observable-Specific Signal Bias Estimation and Its Application in the Precise Point Positioning with Ambiguity Resolution. GPS Solut. 2023, 27, 4.
- Jiang, W.; Chen, Y.; Chen, Q.; Chen, H.; Pan, Y.; Liu, X.; Liu, T. High Precision Deformation Monitoring with Integrated GNSS and Ground Range Observations in Harsh Environment. Measurement 2022, 204, 112179.
- Zhou, H.; Yao, Z.; Lu, M. Lidar/UWB Fusion Based SLAM With Anti-Degeneration Capability. IEEE Trans. Veh. Technol. 2021, 70, 820–830.
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-Time. In Proceedings of the Robotics: Science and Systems X, Berkeley, CA, USA, 12–16 July 2014.
- Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765.
- Wang, H.; Wang, C.; Chen, C.-L.; Xie, L. F-LOAM: Fast LiDAR Odometry and Mapping. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 4390–4396.
- Lin, J.; Zhang, F. Loam_livox: A Fast, Robust, High-Precision LiDAR Odometry and Mapping Package for LiDARs of Small FoV. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020.
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-Coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142.
- Xu, W.; Zhang, F. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324.
- Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry. IEEE Trans. Robot. 2022, 38, 2053–2073.
- Zhang, J.; Kaess, M.; Singh, S. A Real-Time Method for Depth Enhanced Visual Odometry. Auton. Robot. 2017, 41, 31–43.
- Graeter, J.; Wilczynski, A.; Lauer, M. LIMO: Lidar-Monocular Visual Odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Pire, T.; Baravalle, R.; D’Alessandro, A.; Civera, J. Real-Time Dense Map Fusion for Stereo SLAM. Robotica 2018, 36, 1510–1526.
- Schops, T.; Sattler, T.; Pollefeys, M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 134–144.
- Yan, C.; Qu, D.; Xu, D.; Zhao, B.; Wang, Z.; Wang, D.; Li, X. GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024, Seattle, WA, USA, 16–22 June 2024.
- Zhao, H.; Guan, W.; Lu, P. LVI-GS: Tightly Coupled LiDAR–Visual–Inertial SLAM Using 3-D Gaussian Splatting. IEEE Trans. Instrum. Meas. 2025, 74, 7504810.
- Di Giammarino, L.; Giacomini, E.; Brizi, L.; Salem, O.; Grisetti, G. Photometric LiDAR and RGB-D Bundle Adjustment. IEEE Robot. Autom. Lett. 2023, 8, 4362–4369.
- Zhang, J.; Singh, S. Visual-Lidar Odometry and Mapping: Low-Drift, Robust, and Fast. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 2174–2181.
- Shin, Y.-S.; Park, Y.S.; Kim, A. DVL-SLAM: Sparse Depth Enhanced Direct Visual-LiDAR SLAM. Auton. Robot. 2020, 44, 115–130.
- Shin, Y.-S.; Park, Y.S.; Kim, A. Direct Visual SLAM Using Sparse Depth for Camera-LiDAR System. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 5144–5151.
- Chou, C.-C.; Chou, C.-F. Efficient and Accurate Tightly-Coupled Visual-Lidar SLAM. IEEE Trans. Intell. Transp. Syst. 2022, 23, 14509–14523.
- Chen, W.; Shang, G.; Ji, A.; Zhou, C.; Wang, X.; Xu, C.; Li, Z.; Hu, K. An Overview on Visual SLAM: From Tradition to Semantic. Remote Sens. 2022, 14, 3010.
- Li, Q.; Chen, S.; Wang, C.; Li, X.; Wen, C.; Cheng, M.; Li, J. LO-Net: Deep Real-Time Lidar Odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2019, Long Beach, CA, USA, 15–20 June 2019; pp. 8473–8482.
- Lu, W.; Zhou, Y.; Wan, G.; Hou, S.; Song, S. L3-Net: Towards Learning Based LiDAR Localization for Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2019, Long Beach, CA, USA, 15–20 June 2019; pp. 6389–6398.
- Yin, D.; Zhang, Q.; Liu, J.; Liang, X.; Wang, Y.; Maanpää, J.; Ma, H.; Hyyppä, J.; Chen, R. CAE-LO: LiDAR Odometry Leveraging Fully Unsupervised Convolutional Auto-Encoder for Interest Point Detection and Feature Description. arXiv 2020, arXiv:2001.01354.
- Liu, J.; Zhuo, D.; Feng, Z.; Zhu, S.; Peng, C.; Liu, Z.; Wang, H. DVLO: Deep Visual-LiDAR Odometry with Local-to-Global Feature Fusion and Bi-Directional Structure Alignment. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2024.
- Zhu, J.; Li, H.; Zhang, T. Camera, LiDAR, and IMU Based Multi-Sensor Fusion SLAM: A Survey. Tsinghua Sci. Technol. 2024, 29, 415–429.
- Zhang, Y.; Shi, P.; Li, J. 3D LiDAR SLAM: A Survey. Photogramm. Rec. 2024, 39, 457–517.
- Tuna, T.; Nubert, J.; Nava, Y.; Khattak, S.; Hutter, M. X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments. IEEE Trans. Robot. 2024, 40, 452–471.
- Gao, H.; Zhang, X.; Fang, Y.; Yuan, J. A Line Segment Extraction Algorithm Using Laser Data Based on Seeded Region Growing. Int. J. Adv. Robot. Syst. 2018, 15, 172988141875524.
- Ji, M.; Shi, W.; Cui, Y.; Liu, C.; Chen, Q. Adaptive Denoising-Enhanced LiDAR Odometry for Degeneration Resilience in Diverse Terrains. IEEE Trans. Instrum. Meas. 2024, 73, 8501715.
- Tang, H.; Zhang, T.; Niu, X.; Wang, L.; Wei, L.; Liu, J. FF-LINS: A Consistent Frame-to-Frame Solid-State-LiDAR-Inertial State Estimator. IEEE Robot. Autom. Lett. 2023, 8, 8525–8532.
- Lin, J.; Zhang, F. R3LIVE: A Robust, Real-Time, RGB-Colored, LiDAR-Inertial-Visual Tightly-Coupled State Estimation and Mapping Package. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 10672–10678.
- Huang, S.-S.; Ma, Z.-Y.; Mu, T.-J.; Fu, H.; Hu, S.-M. Lidar-Monocular Visual Odometry Using Point and Line Features. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 1091–1097.
- Ress, V.; Zhang, W.; Skuddis, D.; Haala, N.; Soergel, U. SLAM for Indoor Mapping of Wide Area Construction Environments. arXiv 2024, arXiv:2404.17215.
- Du, W.; Beltrame, G. Real-Time Simultaneous Localization and Mapping with LiDAR Intensity. arXiv 2023, arXiv:2301.09257.
- Boche, S.; Laina, S.B.; Leutenegger, S. Tightly-Coupled LiDAR-Visual-Inertial SLAM and Large-Scale Volumetric Occupancy Mapping. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024.
- Wen, T.; Fang, Y.; Lu, B.; Zhang, X.; Tang, C. LIVER: A Tightly Coupled LiDAR-Inertial-Visual State Estimator With High Robustness for Underground Environments. IEEE Robot. Autom. Lett. 2024, 9, 2399–2406.
- Zhang, H.; Du, L.; Bao, S.; Yuan, J.; Ma, S. LVIO-Fusion: Tightly-Coupled LiDAR-Visual-Inertial Odometry and Mapping in Degenerate Environments. IEEE Robot. Autom. Lett. 2024, 9, 3783–3790.
- Xu, X.; Hu, J.; Zhang, L.; Cao, C.; Yang, J.; Ran, Y.; Tan, Z.; Xu, L.; Luo, M. Detection-First Tightly-Coupled LiDAR-Visual-Inertial SLAM in Dynamic Environments. Measurement 2025, 239, 115506.
- Zhou, Z.; Guo, C.; Pan, Y.; Li, X.; Jiang, W. A 2-D LiDAR-SLAM Algorithm for Indoor Similar Environment With Deep Visual Loop Closure. IEEE Sens. J. 2023, 23, 14650–14661.
- Wang, X.; Zheng, S.; Lin, X.; Zhang, Q.; Liu, X. Robust Loop Closure Detection and Relocalization with Semantic-Line Graph Matching Constraints in Indoor Environments. Int. J. Appl. Earth Obs. Geoinf. 2024, 129, 103844.
- Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
- Zhou, L.; Li, Z.; Kaess, M. Automatic Extrinsic Calibration of a Camera and a 3D LiDAR Using Line and Plane Correspondences. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5562–5569.
- Teng, H.; Chatziparaschis, D.; Kan, X.; Roy-Chowdhury, A.K.; Karydis, K. Centroid Distance Keypoint Detector for Colored Point Clouds. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2–7 January 2023; pp. 1196–1205.
- Alexandre, L.A. Set Distance Functions for 3D Object Recognition. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications; Ruiz-Shulcloper, J., Sanniti Di Baja, G., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8258, pp. 57–64. ISBN 978-3-642-41821-1.
Trajectory error statistics on the simulation dataset (Section 5.4); values in meters, SSE in m²:

Algorithm | RMSE | Mean | Median | Std | SSE
---|---|---|---|---|---
LOAM | 5.514768 | 5.096059 | 5.059721 | 2.107806 | 61,646.471891
LeGO-LOAM | 3.386482 | 3.302808 | 3.213319 | 0.748144 | 26,468.741302
FLOAM | 18.538025 | 17.931672 | 19.400977 | 4.702498 | 875,297.84575
ORB-SLAM2 | 2.457203 | 2.457185 | 2.454733 | 0.009277 | 15,444.806296
VLOAM | 2.173518 | 1.747652 | 1.190177 | 0.692243 | 8678.317338
Ours (no camera) | 2.885563 | 2.813023 | 3.093331 | 0.642941 | 21,473.970550
Our method | 1.301558 | 1.238605 | 1.175269 | 0.399890 | 4368.965482
Runtime on the simulation dataset (Section 5.5):

Algorithm | LOAM | LeGO-LOAM | FLOAM | ORB-SLAM2 | VLOAM | Our method
---|---|---|---|---|---|---
Average time per frame (ms) | 46.4989 | 33.5247 | 35.6841 | 61.6017 | 77.2669 | 68.7036
Results on the public benchmark dataset in indoor environments (Section 5.6):

Sequence | LOAM | LeGO-LOAM | FLOAM | VLOAM | Our method
---|---|---|---|---|---
Corridor | 2.5855 | 2.0781 | 1.9635 | 0.9406 | 0.5224
Rooms | 0.8755 | 1.5308 | 0.9217 | 1.1735 | 0.6624
Results on the public benchmark dataset in outdoor environments (Section 5.7):

Sequence | LOAM | LeGO-LOAM | FLOAM | VLOAM | Our method
---|---|---|---|---|---
Building | 2.5089 | 2.6649 | 2.3709 | 2.1351 | 2.4315
Path | 2.8433 | 2.3384 | 4.8403 | 2.4237 | 2.5488