A Localization and Mapping Algorithm Based on Improved LVI-SAM for Vehicles in Field Environments
Abstract
1. Introduction
2. Materials and Methods
2.1. Problem Formulation
2.2. Brief Review of the LVI-SAM Algorithm
2.3. Overview of the Proposed Lidar Odometry
2.4. Design of the Pose Estimation Node
Algorithm 1: Coarse-to-fine pose estimation algorithm.
Input: Point clouds P, Q; error limit ε.
Output: Pose estimation result R1, t1.
1. Select four coplanar, non-collinear points (p1, p2, p3, p4) in point cloud P as the point base;
2. Taking each qi (i = 1, 2, …, n) in point cloud Q as a sphere center and r as the radius, construct n spheres;
3. Store the points whose distance from the center falls in the interval [r − ε, r + ε] as the extracted point pairs;
4. Extract the points (q1, q2, q3, q4) in Q corresponding to (p1, p2, p3, p4) according to Equation (2);
5. Carry out coarse registration to obtain an initial pose estimate Ri, ti;
6. Divide the point cloud into grids of equal size and represent the points in each grid by their center of gravity according to Equation (3);
7. Transform the points of point cloud Q according to Equation (4);
8. Iteratively minimize the Euclidean distance between all matched points using Equation (5);
9. Return R1, t1.
2.5. Pose Correction Node Design
Algorithm 2: The pose correction algorithm.
Input: Point clouds P, Q; keyframe selection value h; threshold τ.
Output: Pose correction result R, t.
1. Select a keyframe whenever the frame counter h reaches five;
2. Divide the target point cloud Q into grids of a specified size and compute the mean vector and covariance matrix of the points in each grid according to Equation (6);
3. Map the source point cloud P into the grids and compute the probability density of each source point from the grid it falls in;
4. Construct the dynamic grid map and store the grids in a map data structure;
5. Set the activation state of each grid by checking whether the number of points it contains exceeds the threshold τ;
6. Update the mean vector and covariance matrix of each grid according to Equation (8);
7. Construct the likelihood function according to Equations (9) and (10);
8. Obtain the optimal transformation parameters with the Newton iteration method;
9. Correct the localization result of the pose estimation node;
10. Return R, t.
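Steps 2, 4, and 5 hinge on one data structure: a dynamic grid map whose voxels carry incremental point statistics. The sketch below shows one plausible layout, assuming the exact update rules of Equations (6) and (8) are not reproduced here: each voxel keeps running sums so its mean and covariance can be updated cheaply when new scans arrive, and a voxel becomes "active" for matching only once it holds enough points. `VoxelStats`, `insertScan`, and the threshold semantics are illustrative names, not the paper's code.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

using Point3 = std::array<double, 3>;
using GridKey = std::array<long, 3>;  // integer voxel index (x, y, z)

// Running sums per voxel; mean and covariance are derived on demand,
// so inserting a new scan only touches these accumulators.
struct VoxelStats {
    std::size_t n = 0;
    Point3 sum{{0, 0, 0}};
    double sumSq[3][3] = {};  // sum of outer products p * p^T
};

GridKey keyFor(const Point3& p, double cell) {
    return {static_cast<long>(std::floor(p[0] / cell)),
            static_cast<long>(std::floor(p[1] / cell)),
            static_cast<long>(std::floor(p[2] / cell))};
}

// Step 4: the dynamic grid map is literally a std::map keyed by voxel index;
// voxels are created lazily as points arrive.
void insertScan(std::map<GridKey, VoxelStats>& grid,
                const std::vector<Point3>& scan, double cell) {
    for (const auto& p : scan) {
        VoxelStats& v = grid[keyFor(p, cell)];
        ++v.n;
        for (int i = 0; i < 3; ++i) {
            v.sum[i] += p[i];
            for (int j = 0; j < 3; ++j) v.sumSq[i][j] += p[i] * p[j];
        }
    }
}

// Step 5: a voxel is usable for matching only once it holds enough points
// for a well-conditioned covariance estimate.
bool isActive(const VoxelStats& v, std::size_t threshold) {
    return v.n >= threshold;
}

Point3 voxelMean(const VoxelStats& v) {
    return {v.sum[0] / v.n, v.sum[1] / v.n, v.sum[2] / v.n};
}

// Sample covariance from the running sums: C = (S - n * m m^T) / (n - 1).
double voxelCovariance(const VoxelStats& v, int i, int j) {
    Point3 m = voxelMean(v);
    return (v.sumSq[i][j] - static_cast<double>(v.n) * m[i] * m[j]) /
           static_cast<double>(v.n - 1);
}
```

With these per-voxel Gaussians in place, steps 3 and 7 score each source point against the normal distribution of the voxel it lands in, and step 8 maximizes the resulting likelihood with Newton iterations, as in the standard normal distributions transform.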
2.6. The Overall Framework of the Proposed Localization and Mapping Algorithm
3. Results and Discussion
3.1. Platform and Experimental Environment
3.2. Analysis of Localization Results
3.3. Analysis of Mapping Results
3.4. Analysis of the Real-Time Performance
4. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Platform | Configuration
---|---
Vehicle platform | HUNTER 2.0
LiDAR | Velodyne VLP-16
Inertial navigation system | XW-GI5651
Camera | USB 3.1 HD camera
Operating system | Ubuntu 18.04
Software platform | Robot Operating System (ROS) Melodic
Development language | C++
Point cloud processing library | Point Cloud Library (PCL) 1.7
Evaluation tool | evo
Visualization | CloudCompare
APE statistics per algorithm (translation error in meters; rotation error is unit-less):

Algorithm | Trans. Max | Trans. Mean | Trans. Min | Trans. RMSE | Rot. Max | Rot. Mean | Rot. Min | Rot. RMSE
---|---|---|---|---|---|---|---|---
Our method | 1.04 | 0.58 | 0.06 | 0.61 | 2.82 | 2.03 | 0.54 | 2.16
LVI-SAM | 1.05 | 0.60 | 0.11 | 0.64 | 2.82 | 2.05 | 0.59 | 2.16
LIO-SAM | 7.19 | 3.50 | 0.85 | 3.82 | 2.82 | 2.82 | 2.77 | 2.82
Step | Time Cost (ms) |
---|---|
Point cloud coarse registration | 115 |
Point cloud fine registration | 71 |
Pose correction | 168 |
Han, L.; Shi, Z.; Wang, H. A Localization and Mapping Algorithm Based on Improved LVI-SAM for Vehicles in Field Environments. Sensors 2023, 23, 3744. https://doi.org/10.3390/s23073744