Research on Digital Terrain Construction Based on IMU and LiDAR Fusion Perception
Abstract
1. Introduction
- (1) A terrain construction system based on the fusion of IMU and LiDAR perception is established. The system comprises the state estimation and mapping architecture, IMU pre-integration factors, LiDAR odometry factors, and loop-closure detection factors.
- (2) Considering the large computational load of the original architecture, this paper improves the existing terrain construction system by adding a new functional module to the algorithm framework. The module saves each point cloud frame with complete road-surface detail and stitches the frames together into a single point cloud file containing the complete road-surface information (a minimal code sketch of this stitching step follows this list).
- (3) A moving average-like algorithm is proposed to address the challenges of extracting road elevation information. By comparing the extraction results of sliding windows with different widths, the optimal window width is determined, thereby improving the accuracy of road elevation extraction.
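As a hedged illustration of contribution (2), the following C++ sketch shows how saved per-frame point clouds could be stitched into one road-surface cloud with the Point Cloud Library, which the paper adopts in Section 4. This is a minimal sketch under stated assumptions, not the authors' module: the function names (saveFrame, buildRoadMap), the PointXYZ point type, and the availability of per-frame map-to-LiDAR poses from the Section 3 odometry are all illustrative.

```cpp
// Hypothetical sketch of the frame-saving / stitching module described in (2).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/common/transforms.h>
#include <Eigen/Dense>
#include <string>
#include <vector>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Save one de-skewed LiDAR frame to disk so that full road-surface detail is retained.
void saveFrame(const CloudT& frame, int index)
{
    pcl::io::savePCDFileBinary("frame_" + std::to_string(index) + ".pcd", frame);
}

// Stitch the saved frames into a single road-surface cloud using the
// estimated map<-LiDAR pose of each frame (assumed to come from the odometry).
CloudT::Ptr buildRoadMap(
    const std::vector<std::string>& files,
    const std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>>& poses)
{
    CloudT::Ptr road_map(new CloudT);
    for (std::size_t i = 0; i < files.size() && i < poses.size(); ++i)
    {
        CloudT::Ptr frame(new CloudT);
        CloudT aligned;
        if (pcl::io::loadPCDFile<pcl::PointXYZ>(files[i], *frame) < 0)
            continue;                              // skip unreadable frames
        pcl::transformPointCloud(*frame, aligned, poses[i]);
        *road_map += aligned;                      // concatenate into the global cloud
    }
    return road_map;                               // e.g. write out as road_surface.pcd
}
```

Concatenation with operator+= keeps every raw point, which preserves road-surface detail at the cost of file size; a voxel filter could be applied afterwards if a lighter map were needed.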
2. Sensor Coordinate Systems and Parameter Calibration
2.1. Sensor Coordinate System Configuration
- (1) LiDAR coordinate system
- (2) IMU coordinate system
2.2. Sensor Parameters Calibration
- (1) Coordinate System Alignment
- (2) Angle Compensation
- (3) Position Calibration
3. Ground Construction Model Based on Sensor Fusion
3.1. State Estimation and Mapping Architecture
3.2. IMU Pre-Integration Factor
3.3. LiDAR Odometry Factor
3.4. Loop-Closure Factor
3.5. Terrain Construction Experiment Based on an Improved Architecture
4. Road Information Extraction Model Based on Point Cloud Library and Moving Average-like Algorithm
4.1. Extraction of Point Cloud Information for Front Tire Trajectories
4.2. Moving Average-like Algorithm
- (1) Initial grid region creation: Starting from the longitudinal distance x = 0 m, first draw a grid area with a length of 0.04 m (in the x direction) and a width of 0.205 m (in the y direction), with no restriction in the z direction; for this grid, x ∈ (0, 0.04) m. Then move forward by 0.01 m and draw another grid area with the same dimensions; for this grid, x ∈ (0.01, 0.05) m. Repeat this process until the final grid area of the same dimensions is reached; for this final grid, x ∈ (19.96, 20.00) m. In the end, a total of 1997 grid areas, each 0.04 m long in the x direction, are obtained.
- (2) Point assignment to grid regions: Iterate through the points and place each point into the corresponding grid areas based on its x value. A single point may fall into multiple grid areas. For example, a point with an x value of 3 cm will be included in the grid areas spanning 0–4 cm, 1–5 cm, 2–6 cm, and 3–7 cm; therefore, this point is placed into all of these overlapping grid areas simultaneously.
- (3) Data storage: Create a vector container to store the x, y, and z data of the points for each grid region.
- (4) Calculate averages: In a single grid, calculate the average x value and the average z value of all points within the grid. These averages represent the x and z values for that grid.
- (5) Generate elevation information: Perform the same operation for the remaining grids, iterating through all of them. This yields 1997 points whose x values serve as horizontal coordinates and whose z values serve as vertical coordinates. Plotting these points and connecting them on the x–z coordinate plane gives the elevation profile of the road surface (a minimal code sketch of steps (1)–(5) follows this list).
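To make steps (1) through (5) concrete, here is a minimal C++ sketch of the moving average-like extraction, again assuming PCL point types. The 0.04 m window and 0.01 m step are taken from the text, and the 20 m track length is an assumption implied by the stated 1997 grids; the function name extractElevation, its default parameters, and the brute-force per-window search are illustrative choices, not the authors' code.

```cpp
// Illustrative sketch of the moving average-like elevation extraction, steps (1)-(5).
// Assumes the input cloud is already cropped to the front-tire track (Section 4.1).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// One (x, z) elevation sample per 0.04 m window, stepped every 0.01 m along x.
std::vector<std::pair<float, float>> extractElevation(const CloudT& track,
                                                      float x_min  = 0.0f,
                                                      float x_max  = 20.0f,  // assumed total length
                                                      float window = 0.04f,  // grid length in x
                                                      float step   = 0.01f)  // forward shift per grid
{
    std::vector<std::pair<float, float>> profile;
    // Number of overlapping windows; 1997 with the defaults above.
    const int n_windows =
        static_cast<int>(std::lround((x_max - x_min - window) / step)) + 1;
    for (int i = 0; i < n_windows; ++i)
    {
        const float x0 = x_min + static_cast<float>(i) * step;   // window start, step (1)
        double sum_x = 0.0, sum_z = 0.0;
        std::size_t count = 0;
        for (const auto& p : track.points)                       // step (2): assign by x value
        {
            if (p.x >= x0 && p.x < x0 + window)
            {
                sum_x += p.x;                                    // step (3): collect x and z
                sum_z += p.z;
                ++count;
            }
        }
        if (count > 0)                                           // step (4): window averages
            profile.emplace_back(static_cast<float>(sum_x / count),
                                 static_cast<float>(sum_z / count));
    }
    return profile;                                              // step (5): plot x vs z
}
```

With these defaults the loop produces exactly 1997 (x, z) samples; for large clouds the points could instead be binned once by x index rather than rescanning the whole cloud for every window.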
4.3. Results and Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Congruence of the extracted road elevation with the actual road surface (Section 4.3 results):

| Algorithmic Approach | Congruence with the Actual Road Surface |
|---|---|
| Sliding window width of 4 cm | 97.6% |
| Sliding window width of 6 cm | 90.3% |
| Sliding window width of 8 cm | 82.8% |
| Sliding window width of 10 cm | 74.5% |
| Gaussian filter | 71.4% |
Huang, C.; Wang, Y.; Sun, X.; Yang, S. Research on Digital Terrain Construction Based on IMU and LiDAR Fusion Perception. Sensors 2025, 25, 15. https://doi.org/10.3390/s25010015