A Robust Framework for Simultaneous Localization and Mapping with Multiple Non-Repetitive Scanning Lidars
Abstract
1. Introduction
- An accurate and automatic calibration method for multiple non-repetitive scanning lidars with small or no overlapping regions.
- A novel feature selection method for multi-lidar fusion, which not only increases computational efficiency but also improves robustness against degeneracy.
- A self-adaptive feature extraction method for various lidars; both the near-rectangular and the circular scanning patterns of Livox lidars are supported.
- A scan-context-based [28] place description for lidars with irregular scan patterns. Experimental results show that this method is robust in challenging areas.
2. Materials and Methods
2.1. System Overview
2.2. Automatic Calibration of the System
2.3. Feature Extraction
- Tele-15: Feature points can be extracted by computing the local smoothness. Moreover, given the limited number of feature points in the tiny FoV, point reflectivity is also employed as an extra determinant: if the reflectivity of a point differs from that of its neighbor by more than a threshold, the point is also treated as an edge point.
- Horizon: We deployed a purely time-domain feature extraction method for Horizon. The raw point cloud of a single frame is divided into patches of 6 × 7 points, and an eigendecomposition is performed on the covariance of the 3D coordinates. All 42 points are extracted as surface features if the second-largest eigenvalue exceeds the smallest one by a factor of 0.4. For non-surface patches, the point with the largest curvature on each scan line is found and an eigendecomposition is performed again; if the largest eigenvalue exceeds the second-largest one by a factor of 0.3, the six points are extracted as edge features. Although highly accurate, this method can only handle low-speed scenarios due to the limited patch size. Therefore, the time-domain-based method is used on the UGV platform, while the traditional approach is adopted on the passenger vehicle platform.
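The eigenvalue tests above can be sketched as follows. This is a minimal illustration, assuming each patch arrives as an N × 3 NumPy array; the exact form of the "0.4 times larger" comparison is our interpretation, not the authors' implementation.

```python
import numpy as np

def classify_patch(points, surf_ratio=0.4, edge_ratio=0.3):
    """Classify a patch of 3D points (N x 3) via eigendecomposition of the
    coordinate covariance. The ratio thresholds are the values quoted in the
    text; the comparison rule (exceeds by a factor of r) is an assumption."""
    cov = np.cov(points.T)                    # 3x3 covariance of x, y, z
    w = np.sort(np.linalg.eigvalsh(cov))      # ascending: w[0] <= w[1] <= w[2]
    if w[1] > (1.0 + surf_ratio) * w[0]:      # mid eigenvalue dominates smallest -> planar
        return "surface"
    if w[2] > (1.0 + edge_ratio) * w[1]:      # largest dominates mid -> line-like
        return "edge"
    return "none"                             # isotropic patch: no stable feature
```

A flat 3 × 3 grid of points is classified as a surface, while a fully isotropic blob (e.g., cube corners) yields neither feature type.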
2.4. Feature Selection
- Down-sample the current fused point cloud with a voxel filter [43] and extract all the feature points into a candidate set. The good-feature set is initialized as empty at the beginning of each frame.
- For each lidar input, draw a random subset of the candidate set. For each feature point in the subset, search for its correspondence and compute the information matrix from the residuals calculated from (6) and (7). Add the point that yields the maximum enhancement of the objective to the good-feature set, and update the accumulated information matrix accordingly.
- For each lidar input, repeat step 2 until enough good features are found.
- Send all the good feature sets to scan registration after every thread finishes step 3.
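The steps above resemble a randomized ("lazier than lazy") greedy maximization of a log-determinant objective [40]. The sketch below illustrates that idea under stated assumptions: per-feature Jacobians stand in for the residuals of (6) and (7), the log-det objective and all names are illustrative, and the tiny prior term only keeps the determinant finite.

```python
import numpy as np

def select_good_features(jacobians, k, subset_size, rng=None):
    """Randomized greedy selection: pick k features whose summed information
    matrices H_i = J_i^T J_i maximize log det(sum H). Only a random subset of
    the remaining candidates is scored at each step, as in the paper's step 2."""
    rng = rng or np.random.default_rng(0)
    d = jacobians[0].shape[1]
    H = 1e-6 * np.eye(d)                     # small prior keeps log-det finite
    remaining = list(range(len(jacobians)))
    selected = []
    for _ in range(k):
        # score a random subset instead of every remaining candidate
        subset = rng.choice(remaining, size=min(subset_size, len(remaining)),
                            replace=False)
        def gain(i):
            Hi = jacobians[i].T @ jacobians[i]
            return np.linalg.slogdet(H + Hi)[1]
        best = max(subset, key=gain)          # maximum enhancement of objective
        H += jacobians[best].T @ jacobians[best]
        selected.append(int(best))
        remaining.remove(best)
    return selected, H
```

With a subset size covering all candidates, the first pick is simply the single most informative feature; shrinking the subset trades a little optimality for much lower per-frame cost.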
2.5. Scan Registration
2.6. Scan Context Integrated Global Optimization
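For context, the original scan context descriptor [28] bins a point cloud into a ring × sector polar grid and stores the maximum height per bin; this work adapts it to irregular scan patterns. A minimal sketch of the baseline descriptor, assuming the original paper's default grid size (not the authors' adapted version):

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Minimal scan context: bin points (N x 3) by range ring and azimuth
    sector, keeping the maximum z per bin. Empty bins remain zero."""
    desc = np.zeros((num_rings, num_sectors))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) + np.pi                      # [0, 2*pi)
    keep = r < max_range                                  # drop out-of-range points
    ring = np.minimum((r[keep] / max_range * num_rings).astype(int),
                      num_rings - 1)
    sector = np.minimum((theta[keep] / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    np.maximum.at(desc, (ring, sector), z[keep])          # max height per bin
    return desc
```

Two descriptors are then compared column-wise (with a column shift to handle rotation) to score place-recognition candidates.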
3. Results
3.1. Passenger Car Downtown Experiments
3.1.1. System Setup and Scenario Overview
3.1.2. Results of Experiment #1
- The advantage of integrating point clouds from multiple lidars is manifest: much more environmental information can be obtained and presented. While the front-lidar map merely depicts the road shape, multi-lidar fusion provides abundant supplementary geometric features. Thanks to the long-range Tele-15 lidar, the playground is completely preserved on the map shown in Figure 10e. As feature quality has a great impact on lidar odometry, multi-lidar fusion should be a better approach to degeneracy problems.
- The multi-lidar fusion system was more robust than the single-lidar system in dynamic scenes. The system encountered heavy congestion at the beginning, with vehicles, bicycles, and people moving irregularly, so the scan-to-scan correlation was difficult to determine within a severely limited FoV. We can see from Figure 10c and the upper right corner of Figure 10d that scan registration failed with the front lidar only, which caused the loss of the other three trajectories. This is an increasingly common situation in the suburbs; with limited observable features, single-lidar mapping is vulnerable to dynamic scenes. From our experience, it can easily be misled by a truck or a bus passing by, especially at crossroads while waiting for traffic lights. With the auxiliary features from the side lidars and feature filtering for the front lidar, our method had the highest accuracy in these scenarios. It can be noticed from Figure 10c that the incorrect matching of dynamic objects at the beginning ruins the entire odometry: for the other three single-lidar-based approaches, the initial translational error had already surpassed 10 m, while for our system the position errors stayed at a low level along the whole trajectory.
3.1.3. Results of Experiment #2
3.2. Small Platform Campus Experiments
3.2.1. System Setup and Scenario Overview
3.2.2. Effects of Lidar Number
- Five-lidar fusion eliminated the exaggerated height ramp of the single-lidar approach. There were six speed bumps on the path, causing large vertical vibrations to the vehicle. Some of these perpendicular motions are fatal to lidar odometry, as vertical displacement is enlarged by incorrect registrations. The front-lidar trajectory in Figure 13a shows a vertical jump in the left corner, the result of two consecutive speed bumps. With the limited FoV of only the front view, the single-lidar odometry was unable to correct these errors, and it was further affected by error propagation in Figure 13c. Drawing extra constraint features from the side lidars, our approach was able to correct these errors and maintain a flat terrain.
- The geometry of the multi-lidar placement made a significant difference to system performance. We can notice from Figure 13a that the combination of one front-view lidar and two back-view lidars had an accuracy closest to the five-lidar fusion. On the other hand, the positioning error of the two back lidars alone was worse than that of the single front-view lidar. Therefore, pentagon-like and triangle-like multi-lidar setups should deliver better mapping and positioning results in real cases.
3.2.3. Effects of Loop Closure Optimization
3.2.4. Comparison with Mechanical Lidar
- Multi-lidar fusion achieved commendable mapping accuracy while retaining abundant features. On the current market, the five-Livox-Horizon kit (5 Livox Horizons and a Livox Hub) costs almost the same as a VLP-16, and half as much as an OS1-64. Moreover, the horizontal odometry results in Figure 15b and Table 1 indicate that our approach had an accuracy comparable with Lio-sam.
- Multi-lidar fusion performed better in challenging scenarios. As shown in Figure 15c,d, we carried out a small-loop experiment at class break time, with students and vehicles blocking the paths. Our vehicle therefore had to avoid collisions with irregular motions, such as moving back and forth, sharp turns, and fast accelerations. Lio-sam had the most significant failure in such areas because of the loss of landmark constraints: since its IMU pre-integration process depends heavily on lidar odometry [48], the system is liable to fail once the landmark constraint information is insufficient. The maximum error of A-LOAM came from back-and-forth motions, causing a more than 90° deviation in the trajectory. The starting area was a crowded pathway with vehicles on both sides; as our platform was much lower than common sedans, most of the surface points were cast on vehicles. Once the front view was blocked by pedestrians, the loops could not be detected or were mismatched, hence LeGO-LOAM failed to close the loop. A crucial benefit of multi-lidar fusion is the capability of manipulating each lidar input freely. With a pre-set empirical threshold on the number of edge points, all the edge and surface features from an individual lidar were further down-sampled after selection to 10% of their original size once the threshold was reached. This is similar to shutting down certain view angles of a mechanical lidar, and thus alleviates bad impacts in certain areas. Moreover, most dynamic objects in the front view can be removed by the point-to-edge and point-to-plane residuals. With the help of these two improvements, our multi-lidar fusion solution had a strong capability in severe situations, and the end-to-end errors were small, as presented in Table 2.
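The per-lidar throttling described above might look as follows. The data layout and function name are illustrative, not the paper's; the 10% ratio is the value quoted in the text, and the edge-point threshold is the unspecified empirical parameter.

```python
def throttle_features(features_per_lidar, edge_threshold, keep_ratio=0.1):
    """Sketch of crowd-scene throttling: once a lidar's edge-feature count
    exceeds an empirical threshold, that lidar's selected edge and surface
    features are uniformly down-sampled to keep_ratio of their size."""
    out = {}
    for lidar, feats in features_per_lidar.items():
        edges, surfs = feats["edge"], feats["surface"]
        if len(edges) > edge_threshold:
            step = max(1, int(round(1.0 / keep_ratio)))  # stride keeps ~10%
            edges = edges[::step]
            surfs = surfs[::step]
        out[lidar] = {"edge": edges, "surface": surfs}
    return out
```

Lidars below the threshold pass their features through unchanged, so only the views swamped by dynamic objects are suppressed.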
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Liu, R.; Wang, J.; Zhang, B. High definition map for automated driving: Overview and analysis. J. Navig. 2020, 73, 324–341.
- Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An accurate open-source library for visual, visual-inertial and multi-map SLAM. arXiv 2020, arXiv:2007.11898.
- Qin, T.; Li, P.; Shen, S. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
- Cvišić, I.; Ćesić, J.; Marković, I.; Petrović, I. SOFT-SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles. J. Field Robot. 2018, 35, 578–595.
- Bénet, P.; Guinamard, A. Robust and Accurate Deterministic Visual Odometry. In Proceedings of the 33rd International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS+ 2020), Virtual Program, 21–25 September 2020; pp. 2260–2271.
- Taketomi, T.; Uchiyama, H.; Ikeda, S. Visual SLAM algorithms: A survey from 2010 to 2016. IPSJ Trans. Comput. Vis. Appl. 2017, 9, 1–11.
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014; Volume 2.
- Liu, Z.; Zhang, F.; Hong, X. Low-cost retina-like robotic lidars based on incommensurable scanning. IEEE ASME Trans. Mechatron. 2021.
- Glennie, C.L.; Hartzell, P.J. Accuracy Assessment and Calibration of Low-Cost Autonomous LIDAR Sensors. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 371–376.
- Lin, J.; Zhang, F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3126–3131.
- Lin, J.; Zhang, F. A fast, complete, point cloud based loop closure for lidar odometry and mapping. arXiv 2019, arXiv:1909.11811.
- Livox and Xpeng Partner to Bring Mass Produced, Built-In Lidar to the Market. Available online: https://www.livoxtech.com/news/14 (accessed on 7 April 2021).
- Joint Collaboration between Livox, Zhito and FAW Jiefang Propels Autonomous Heavy-Duty Truck into the Smart Driving Era. Available online: https://www.livoxtech.com/news/11 (accessed on 7 April 2021).
- Li, K.; Li, M.; Hanebeck, U.D. Towards high-performance solid-state-lidar-inertial odometry and mapping. IEEE Robot. Autom. Lett. 2021.
- Xu, W.; Zhang, F. Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3332.
- Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2446–2454.
- Liu, T.; Liao, Q.; Gan, L.; Ma, F.; Cheng, J.; Xie, X.; Wang, Z.; Chen, Y.; Zhu, Y.; Zhang, S. Hercules: An autonomous logistic vehicle for contact-less goods transportation during the covid-19 outbreak. arXiv 2020, arXiv:2004.07480.
- Geyer, J.; Kassahun, Y.; Mahmudi, M.; Ricou, X.; Durgesh, R.; Chung, A.S.; Hauswald, L.; Pham, V.H.; Mühlegg, M.; Dorn, S. A2d2: Audi autonomous driving dataset. arXiv 2020, arXiv:2004.06320.
- Jiao, J.; Yun, P.; Tai, L.; Liu, M. MLOD: Awareness of Extrinsic Perturbation in Multi-LiDAR 3D Object Detection for Autonomous Driving. arXiv 2020, arXiv:2010.11702.
- Lin, J.; Liu, X.; Zhang, F. A decentralized framework for simultaneous calibration, localization and mapping with multiple LiDARs. arXiv 2020, arXiv:2007.01483.
- Jiao, J.; Ye, H.; Zhu, Y.; Liu, M. Robust Odometry and Mapping for Multi-LiDAR Systems with Online Extrinsic Calibration. arXiv 2020, arXiv:2010.14294.
- Fenwick, J.W.; Newman, P.M.; Leonard, J.J. Cooperative concurrent mapping and localization. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 2, pp. 1810–1817.
- Tao, T.; Huang, Y.; Yuan, J.; Sun, F.; Wu, X. Cooperative simultaneous localization and mapping for multi-robot: Approach & experimental validation. In Proceedings of the 2010 8th World Congress on Intelligent Control and Automation, Jinan, China, 6–9 July 2010; pp. 2888–2893.
- Nettleton, E.; Thrun, S.; Durrant-Whyte, H.; Sukkarieh, S. Decentralised SLAM with low-bandwidth communication for teams of vehicles. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 2003; pp. 179–188.
- Nettleton, E.; Durrant-Whyte, H.; Sukkarieh, S. A robust architecture for decentralised data fusion. In Proceedings of the International Conference on Advanced Robotics (ICAR), Coimbra, Portugal, 30 June–3 July 2003.
- Durrant-Whyte, H. A Beginner's Guide to Decentralised Data Fusion; Technical Document of Australian Centre for Field Robotics; University of Sydney: Sydney, Australia, 2000; pp. 1–27.
- Kim, G.; Kim, A. Scan context: Egocentric spatial descriptor for place recognition within 3d point cloud map. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4802–4809.
- Cui, J.; Niu, J.; Ouyang, Z.; He, Y.; Liu, D. ACSC: Automatic Calibration for Non-repetitive Scanning Solid-State LiDAR and Camera Systems. arXiv 2020, arXiv:2011.08516.
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
- Jiao, J.; Yu, Y.; Liao, Q.; Ye, H.; Fan, R.; Liu, M. Automatic calibration of multiple 3d lidars in urban environments. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 15–20.
- Lv, J.; Xu, J.; Hu, K.; Liu, Y.; Zuo, X. Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation. arXiv 2020, arXiv:2007.14759.
- Eckenhoff, K.; Geneva, P.; Bloecker, J.; Huang, G. Multi-camera visual-inertial navigation with online intrinsic and extrinsic calibration. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3158–3164.
- Lepetit, V.; Moreno-Noguer, F.; Fua, P. Epnp: An accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 2009, 81, 155.
- Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. In Proceedings of Robotics: Science and Systems, Seattle, WA, USA, 28 June–1 July 2009; Volume 2, p. 435.
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
- Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12–17 May 2009; Volume 3, p. 5.
- Zhang, J.; Kaess, M.; Singh, S. On degeneracy of optimization-based state estimation problems. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 809–816.
- Zhao, Y.; Vela, P.A. Good Feature Matching: Toward Accurate, Robust VO/VSLAM with Low Latency. IEEE Trans. Robot. 2020, 36, 657–675.
- Mirzasoleiman, B.; Badanidiyuru, A.; Karbasi, A.; Vondrák, J.; Krause, A. Lazier than lazy greedy. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–29 January 2015; Volume 29.
- Summers, T.H.; Cortesi, F.L.; Lygeros, J. On submodularity and controllability in complex dynamical networks. IEEE Trans. Control Netw. Syst. 2015, 3, 91–101.
- Jiao, J.; Zhu, Y.; Ye, H.; Huang, H.; Yun, P.; Jiang, L.; Wang, L.; Liu, M. Greedy-Based Feature Selection for Efficient LiDAR SLAM. arXiv 2021, arXiv:2103.13090.
- Rusu, R.B.; Cousins, S. 3d is here: Point cloud library (pcl). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
- Wang, H.; Wang, C.; Xie, L. Intensity scan context: Coding intensity and geometry relations for loop closure detection. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2095–2101.
- Zhang, Z.; Scaramuzza, D. A tutorial on quantitative trajectory evaluation for visual (-inertial) odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 7244–7251.
- Shan, T.; Englot, B. Lego-loam: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765.
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. arXiv 2020, arXiv:2007.00258.
- Kaess, M.; Johannsson, H.; Roberts, R.; Ila, V.; Leonard, J.J.; Dellaert, F. iSAM2: Incremental smoothing and mapping using the Bayes tree. Int. J. Robot. Res. 2012, 31, 216–235.
| | Lio-sam | A-LOAM | LeGO-LOAM | Lili-om | Livox Horizon Loam | 5 Livox Horizons |
|---|---|---|---|---|---|---|
| Mean | 3.287 m | 6.369 m | 4.363 m | 6.268 m | 54.852 m | 1.812 m |
| RMSE | 2.354 m | 8.973 m | 6.251 m | 9.347 m | 60.623 m | 1.997 m |
| | Lio-sam | A-LOAM | LeGO-LOAM | 5 Livox Horizons |
|---|---|---|---|---|
| Position | 463.185 m | 88.043 m | 33.139 m | 0.302 m |
| Attitude | 2.711° | | | |
Share and Cite
Wang, Y.; Lou, Y.; Zhang, Y.; Song, W.; Huang, F.; Tu, Z. A Robust Framework for Simultaneous Localization and Mapping with Multiple Non-Repetitive Scanning Lidars. Remote Sens. 2021, 13, 2015. https://doi.org/10.3390/rs13102015