An Improved SLAM Algorithm for Substation Inspection Robots Based on 3D Lidar and Visual Information Fusion
Abstract
1. Introduction
- (1) In this paper, a 3D laser and vision fusion SLAM algorithm for inspection robots is proposed. The algorithm combines image feature points with the position information of 3D laser points, corrects point cloud distortion by interpolating between the laser pose and the visual pose, and applies a pose-adaptive selection method to improve positioning accuracy and robustness; finally, it constructs a colored laser point cloud map of the substation.
- (2) The proposed VO-Lidar SLAM algorithm was deployed on a substation inspection robot equipped with 3D Lidar and vision sensors and tested on real sites to verify its applicability. Experiments were conducted in three different environments: an indoor scene, an outdoor dataset, and a substation.
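The interpolation and pose-adaptive selection steps described in (1) can be sketched as follows. This is a minimal illustration, not the paper's implementation: rotation interpolation is omitted for brevity, and the residual-based selection criterion is a hypothetical stand-in for the paper's actual rule.

```python
import numpy as np

def interpolate_pose(p0, p1, alpha):
    # Linearly interpolate the translation between two timestamped poses
    # (alpha in [0, 1]); a full implementation would also interpolate the
    # rotations, e.g. via quaternion SLERP.
    return (1.0 - alpha) * np.asarray(p0, float) + alpha * np.asarray(p1, float)

def select_pose(lidar_pose, visual_pose, lidar_residual, visual_residual):
    # Adaptive selection: keep whichever pose estimate currently has the
    # smaller matching residual (scan-matching error vs. feature
    # reprojection error). The criterion here is an assumed placeholder.
    return lidar_pose if lidar_residual <= visual_residual else visual_pose
```

For example, interpolating halfway between poses at the endpoints of a scan gives the robot position at the scan's midpoint timestamp, which is what the distortion correction in Section 3.3 requires.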
2. Related Works
3. Framework of the Study
3.1. Algorithm Fusion Framework
3.2. Improved Visual Interframe Localization
3.2.1. Depth Calculation of Feature Points Based on the Laser Projection Method
- (1) Select the laser points within 180 degrees in front of the robot.
- (2) Convert these laser points from the Lidar coordinate system to the camera coordinate system.
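The two steps above can be sketched as below. This is a simplified illustration under stated assumptions: the lidar x-axis points forward, the extrinsic calibration (R, t) between lidar and camera is known, and the feature-depth association (here, a pinhole projection returning each laser point's camera-frame depth) abstracts the paper's projection method.

```python
import numpy as np

def front_half_space(points_lidar):
    # Step (1): keep laser points within 180 degrees in front of the
    # robot, assuming the lidar x-axis points forward (x > 0).
    return points_lidar[points_lidar[:, 0] > 0]

def lidar_to_camera(points_lidar, R, t):
    # Step (2): rigid transform of Nx3 points into the camera frame,
    # p_c = R @ p_l + t, with (R, t) from extrinsic calibration.
    return points_lidar @ R.T + t

def project_to_pixels(points_cam, fx, fy, cx, cy):
    # Pinhole projection of camera-frame points; the returned depths z
    # are what get assigned to nearby image feature points.
    z = points_cam[:, 2]
    u = fx * points_cam[:, 0] / z + cx
    v = fy * points_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1), z
```

In practice the projected laser points are sparse, so a feature point's depth is taken from its nearest projected neighbors rather than an exact pixel match.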
3.2.2. Visual Inter-Frame Position Calculation
3.3. Laser Vision Double Interpolation to Correct Point Cloud Distortion
3.4. Laser Vision Position Adaptive Selection Methods
3.5. Color Laser Point Cloud Map Construction
4. Experimentation and Analysis
4.1. Indoor Scene Test Results and Analysis
4.2. Outdoor Dataset Test Results and Analysis
4.3. Substation Scenario Test Results and Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Algorithm | Trajectory Length d (m) | Starting Point Coordinates A (x, y, z) | End Point Coordinates B (x, y, z) | Relative Positioning Error (%)
---|---|---|---|---
Lidar | 72.477 | (0, 0, 0) | (1.064, 0.199, 0.085) | 1.498
After distortion correction | 72.563 | (0, 0, 0) | (0.957, 0.171, 0.076) | 1.344
Algorithm | Trajectory Length d (m) | Starting Point Coordinates A (x, y, z) | End Point Coordinates B (x, y, z) | Relative Positioning Error (%)
---|---|---|---|---
Lidar | 72.477 | (0, 0, 0) | (1.064, 0.199, 0.085) | 1.498
VO-Lidar | 73.891 | (0, 0, 0) | (0.098, 0.128, 0.023) | 0.220
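The error metric reported in these tables is the endpoint drift (distance of the final estimated position from the starting point, here the origin of a closed loop) expressed as a percentage of the total trajectory length. A minimal sketch of the computation:

```python
import numpy as np

def relative_positioning_error(end_point, trajectory_length):
    # Endpoint drift as a percentage of total trajectory length:
    # 100 * ||B - A|| / d, with A at the origin of a closed-loop run.
    drift = np.linalg.norm(end_point)
    return 100.0 * drift / trajectory_length

# Reproduces the table values:
# Lidar:    relative_positioning_error([1.064, 0.199, 0.085], 72.477) ≈ 1.498
# VO-Lidar: relative_positioning_error([0.098, 0.128, 0.023], 73.891) ≈ 0.220
```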
Dataset | Length (m) | Lidar SLAM Relative Positioning Error (%) | VO-Lidar SLAM Relative Positioning Error (%) | Error Reduction (%)
---|---|---|---|---
Data0 (urban area) | 3723 | 1.37 | 1.19 | 0.18
Data1 (highway) | 2451 | 2.29 | 1.87 | 0.42
Data2 (countryside) | 397 | 1.15 | 0.85 | 0.30
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, Y.; Fan, S. An Improved SLAM Algorithm for Substation Inspection Robots Based on 3D Lidar and Visual Information Fusion. Energies 2025, 18, 2797. https://doi.org/10.3390/en18112797