Point-Line Visual Stereo SLAM Using EDlines and PL-BoW
Abstract
1. Introduction
- (1) A stereo SLAM system based on the integration of point and line features is developed. The method employs the EDlines algorithm to accelerate line feature detection in the front-end of the system. In addition, a comprehensive representation and transformation of line features is derived.
- (2) A method using an entropy scale and geometric constraints is proposed to eliminate line feature outliers. Removing mismatched features in the front-end ensures the reliability of the extracted lines without adding algorithmic complexity, and it improves the accuracy of camera pose estimation and map construction. A minimal sketch of this line detection and filtering front-end is given after this list.
- (3) A novel Point and Line Bag of Words (PL-BoW) model combining point and line features is proposed to improve the accuracy and robustness of loop detection. Unlike popular methods that evaluate the BoW of point and line features independently, the proposed PL-BoW model takes into account the constraints between the extracted point and line features. Such a model improves the reliability of loop detection under weak texture and illumination changes, which are typical of structured engineered environments.
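To make the front-end concrete, the sketch below detects line segments on a stereo pair and rejects geometrically inconsistent matches. It uses OpenCV's FastLineDetector (opencv-contrib-python) as a readily available stand-in for EDlines, and simple orientation, length-ratio, and row-offset checks as a simplified stand-in for the entropy-scale and geometric constraints; the thresholds, file names, and helper functions are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical front-end sketch: fast line detection plus geometric match filtering.
# Requires opencv-contrib-python; "left.png" / "right.png" are placeholder file names.
import cv2
import numpy as np

def detect_segments(gray):
    """Detect line segments with OpenCV's FastLineDetector (used here in place of EDlines)."""
    fld = cv2.ximgproc.createFastLineDetector(length_threshold=30, do_merge=True)
    segs = fld.detect(gray)                        # (N, 1, 4) array of x1, y1, x2, y2
    return np.empty((0, 4)) if segs is None else segs.reshape(-1, 4)

def angle_deg(s):
    return np.degrees(np.arctan2(s[3] - s[1], s[2] - s[0]))

def length(s):
    return np.hypot(s[2] - s[0], s[3] - s[1])

def filter_matches(segs_l, segs_r, max_angle=5.0, min_len_ratio=0.7, max_dy=10.0):
    """Keep left/right segment pairs that agree in orientation, length, and image row.
    A simplified stand-in for the entropy-scale and geometric constraints."""
    matches = []
    for i, sl in enumerate(segs_l):
        best_j, best_cost = -1, np.inf
        for j, sr in enumerate(segs_r):
            da = abs(angle_deg(sl) - angle_deg(sr)) % 180.0
            da = min(da, 180.0 - da)                               # direction-agnostic angle gap
            ratio = min(length(sl), length(sr)) / max(length(sl), length(sr))
            dy = abs((sl[1] + sl[3]) - (sr[1] + sr[3])) / 2.0      # mid-point row offset
            if da < max_angle and ratio > min_len_ratio and dy < max_dy:
                cost = da + dy
                if cost < best_cost:
                    best_j, best_cost = j, cost
        if best_j >= 0:
            matches.append((i, best_j))
    return matches

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise FileNotFoundError("replace the placeholder image paths with a real stereo pair")
segs_l, segs_r = detect_segments(left), detect_segments(right)
print(len(segs_l), len(segs_r), len(filter_matches(segs_l, segs_r)))
```

In a full system the surviving segments would typically also be described with LBD descriptors (Zhang and Koch) before matching and pose estimation; the purely geometric gating above only illustrates how mismatches can be rejected without extra descriptor computation.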
2. Representation and Detection of Line Features
2.1. Geometric Representation of Lines
2.2. Extraction and Description of Line Features
2.3. Elimination of Line Feature Outliers
3. Bundle Adjustment and Loop Closure with Points and Lines
3.1. Graph Optimization with Point and Line Features
3.2. Loop Closure with Points and Lines
Algorithm 1: PL-BoW-Based Loop Detection
Input: the keyframe set, the KD-tree associated with it, and the current keyframe;
Output: a revisited matching keyframe;
1. Select candidate keyframes by retrieving the words of points and lines in the current image using Term Frequency-Inverse Document Frequency (TF-IDF);
2. Count the common-view words shared with the current keyframe (NumberOfCommonViewWords);
3. Count the common-view point-line word pairs shared with the current keyframe (NumberOfCommonViewPLpairs);
4. for each candidate keyframe do
5.    if the candidate shares sufficient common-view words and point-line word pairs then
6.       retain the candidate;
7.       calculate the similarity between the candidate and the current keyframe;
8.       record the similarity score;
9.    end
10. end
11. for each retained candidate do
12.    remove candidates whose similarity score falls below the threshold;
13. end
14. Perform spatial consistency detection on the remaining candidates to obtain the revisited matching keyframe.
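As a rough, self-contained illustration of the scoring idea in Algorithm 1, the sketch below represents each keyframe by TF-IDF weighted point and line word histograms, gates candidates by the number of common-view words and point-line word pairs, and ranks the survivors by cosine similarity. The word IDs, pairing rule, similarity measure, and thresholds are illustrative assumptions rather than the paper's exact formulation, and the spatial consistency check of step 14 is omitted.

```python
# Hypothetical sketch of PL-BoW candidate scoring (Algorithm 1, steps 1-13).
# Keyframes are dicts of visual word IDs, e.g. {"points": [...], "lines": [...]};
# word IDs are assumed globally unique across the point and line vocabularies.
from collections import Counter
import math

def tf_idf(word_ids, idf):
    """TF-IDF weighted bag-of-words vector for one keyframe."""
    if not word_ids:
        return {}
    tf = Counter(word_ids)
    total = sum(tf.values())
    return {w: (c / total) * idf.get(w, 1.0) for w, c in tf.items()}

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pl_pairs(kf):
    """Co-visible point-line word pairs observed in one keyframe
    (a crude stand-in for the paper's point-line constraint)."""
    return {(p, l) for p in set(kf["points"]) for l in set(kf["lines"])}

def detect_loop(current, keyframes, idf,
                min_common_words=20, min_common_pairs=10, min_similarity=0.3):
    cur_words = set(current["points"]) | set(current["lines"])
    cur_vec = tf_idf(current["points"] + current["lines"], idf)
    cur_pairs = pl_pairs(current)
    best_kf, best_sim = None, min_similarity
    for kf in keyframes:
        kf_words = set(kf["points"]) | set(kf["lines"])
        if len(cur_words & kf_words) < min_common_words:           # step 5: common-view words
            continue
        if len(cur_pairs & pl_pairs(kf)) < min_common_pairs:       # step 5: common PL pairs
            continue
        sim = cosine(cur_vec, tf_idf(kf["points"] + kf["lines"], idf))  # step 7
        if sim > best_sim:                                         # steps 11-13: threshold, keep best
            best_kf, best_sim = kf, sim
    return best_kf  # step 14 (spatial consistency detection) would follow in the full system
```

In practice the vocabulary, IDF weights, and the KD-tree retrieval of step 1 would typically come from an offline-trained bag-of-words dictionary over the point and line descriptors, rather than the toy scoring shown here.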
4. Experimental Verification
4.1. Stereo SLAM on KITTI Dataset
4.2. Stereo SLAM on EuRoC Dataset
4.3. Comparison of Processing Time
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Smith, R.C.; Cheeseman, P. On the representation and estimation of spatial uncertainty. Int. J. Robot. Res. 1986, 5, 56–68. [Google Scholar] [CrossRef]
- Bescos, B.; Fácil, J.M.; Civera, J.; Neira, J. DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes. IEEE Robot. Autom. Lett. 2018, 3, 4076–4083. [Google Scholar] [CrossRef] [Green Version]
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef] [Green Version]
- Poulose, A.; Han, D.S. Hybrid indoor localization using IMU sensors and smartphone camera. Sensors 2019, 19, 5084. [Google Scholar] [CrossRef] [Green Version]
- Zhang, Z.; Rebecq, H.; Forster, C.; Scaramuzza, D. Benefit of large field-of-view cameras for visual odometry. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 801–808. [Google Scholar]
- Li, H.; Yao, J.; Lu, X.; Wu, J. Combining points and lines for camera pose estimation and optimization in monocular visual odometry. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1289–1296. [Google Scholar]
- Palmarini, R.; Erkoyuncu, J.A.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality applications in maintenance. Robot. Comput.-Integr. Manuf. 2018, 49, 215–228. [Google Scholar] [CrossRef] [Green Version]
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef] [Green Version]
- Li, Y.; Brasch, N.; Wang, Y.; Navab, N.; Tombari, F. Structure-SLAM: Low-drift monocular SLAM in indoor environments. IEEE Robot. Autom. Lett. 2020, 5, 6583–6590. [Google Scholar] [CrossRef]
- Zhang, Y.; Hsiao, M.; Zhao, Y.; Dong, J.; Engel, J.J. Distributed Client-Server Optimization for SLAM with Limited On-Device Resources. arXiv 2021, arXiv:2103.14303. [Google Scholar]
- Lu, Y.; Song, D. Robust RGB-D odometry using point and line features. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3934–3942. [Google Scholar]
- Scaramuzza, D.; Pradalier, C.; Siegwart, R. Performance evaluation of a vertical line descriptor for omnidirectional images. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 3127–3132. [Google Scholar]
- Scaramuzza, D.; Criblez, N.; Martinelli, A.; Siegwart, R. A Robust Descriptor for Tracking Vertical Lines in Omnidirectional Images and its Use in Robot Self-Calibration. Int. J. Robot. Res. 2009, 28, 149–171. [Google Scholar] [CrossRef] [Green Version]
- Pumarola, A.; Vakhitov, A.; Agudo, A.; Sanfeliu, A.; Moreno-Noguer, F. PL-SLAM: Real-time monocular visual SLAM with points and lines. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 4503–4508. [Google Scholar]
- Ma, J.; Wang, X.; He, Y.; Mei, X.; Zhao, J. Line-based stereo SLAM by junction matching and vanishing point alignment. IEEE Access 2019, 7, 181800–181811. [Google Scholar] [CrossRef]
- Gomez-Ojeda, R.; Moreno, F.A.; Zuniga-Noël, D.; Scaramuzza, D.; Gonzalez-Jimenez, J. PL-SLAM: A stereo SLAM system through the combination of points and line segments. IEEE Trans. Robot. 2019, 35, 734–746. [Google Scholar] [CrossRef] [Green Version]
- Berenguer, Y.; Payá, L.; Valiente, D.; Peidró, A.; Reinoso, O. Relative altitude estimation using omnidirectional imaging and holistic descriptors. Remote Sens. 2019, 11, 323. [Google Scholar] [CrossRef] [Green Version]
- Li, K.; Yao, J. Line segment matching and reconstruction via exploiting coplanar cues. ISPRS J. Photogramm. Remote Sens. 2017, 125, 33–49. [Google Scholar] [CrossRef]
- Zuo, X.; Xie, X.; Liu, Y.; Huang, G. Robust visual SLAM with point and line features. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1775–1782. [Google Scholar]
- Zhang, G.; Lee, J.H.; Lim, J.; Suh, I.H. Building a 3-D line-based map using stereo SLAM. IEEE Trans. Robot. 2015, 31, 1364–1377. [Google Scholar] [CrossRef]
- Vakhitov, A.; Lempitsky, V. Learnable line segment descriptor for visual SLAM. IEEE Access 2019, 7, 39923–39934. [Google Scholar] [CrossRef]
- Gomez-Ojeda, R.; Gonzalez-Jimenez, J. Geometric-based line segment tracking for HDR stereo sequences. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 69–74. [Google Scholar]
- Luo, X.; Tan, Z.; Ding, Y. Accurate line reconstruction for point and line-based stereo visual odometry. IEEE Access 2019, 7, 185108–185120. [Google Scholar] [CrossRef]
- Shao, Z.; Chen, M.; Liu, C. Feature matching for illumination variation images. J. Electron. Imaging 2015, 24, 033011. [Google Scholar] [CrossRef]
- Ma, J.; Jiang, X.; Jiang, J.; Zhao, J.; Guo, X. LMR: Learning a two-class classifier for mismatch removal. IEEE Trans. Image Process. 2019, 28, 4045–4059. [Google Scholar] [CrossRef]
- Lim, H.; Kim, Y.; Jung, K.; Hu, S.; Myung, H. Avoiding Degeneracy for Monocular Visual SLAM with Point and Line Features. arXiv 2021, arXiv:2103.01501. [Google Scholar]
- Qian, K.; Zhao, W.; Li, K.; Ma, X.; Yu, H. Visual SLAM With BoPLW Pairs Using Egocentric Stereo Camera for Wearable-Assisted Substation Inspection. IEEE Sensors J. 2019, 20, 1630–1641. [Google Scholar] [CrossRef]
- Zhao, W.; Qian, K.; Ma, Z.; Ma, X.; Yu, H. Stereo visual SLAM using bag of point and line word pairs. In Proceedings of the International Conference on Intelligent Robotics and Applications, Shenyang, China, 8–11 August 2019; pp. 651–661. [Google Scholar]
- Akinlar, C.; Topal, C. EDLines: A real-time line segment detector with a false detection control. Pattern Recognit. Lett. 2011, 32, 1633–1642. [Google Scholar] [CrossRef]
- Von Gioi, R.G.; Jakubowicz, J.; Morel, J.M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 32, 722–732. [Google Scholar] [CrossRef] [PubMed]
- Ma, X.; Ning, S. Real-Time Visual-Inertial SLAM with Point-Line Feature using Improved EDLines Algorithm. In Proceedings of the 2020 IEEE 5th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 12–14 June 2020; pp. 1323–1327. [Google Scholar]
- Zhang, L.; Koch, R. An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J. Vis. Commun. Image Represent. 2013, 24, 794–805. [Google Scholar] [CrossRef]
- Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Grupp, M. Python Package for the Evaluation of Odometry and SLAM; GitHub: San Francisco, CA, USA, 2017. [Google Scholar]
Seq. | ORB-SLAM2 Trans. (m) | ORB-SLAM2 Rot. (deg) | PL-SLAM Trans. (m) | PL-SLAM Rot. (deg) | PEL-SLAM Trans. (m) | PEL-SLAM Rot. (deg)
---|---|---|---|---|---|---
0 | | | | | |
1 | | | | | |
2 | | | | | |
3 | | | | | |
4 | | | | | |
5 | | | | | |
6 | | | | | |
7 | | | | | |
8 | | | | | |
9 | | | | | |
Seq. | PL-SLAM Trans. (m) | PL-SLAM Rot. (deg) | ORB-SLAM2 Trans. (m) | ORB-SLAM2 Rot. (deg) | sPLVO Trans. (m) | sPLVO Rot. (deg) | PEL-SLAM Trans. (m) | PEL-SLAM Rot. (deg)
---|---|---|---|---|---|---|---|---
MH-01 | | | | | | | |
MH-02 | | | | | | | |
MH-03 | | | | | | | |
MH-04 | | | | | | | |
MH-05 | | | | | | | |
V1-01 | | | | | | | |
V1-02 | | | | | | | |
V1-03 | | | | | | | |
V2-01 | | | | | | | |
V2-02 | | | | | | | |
V2-03 | | | | | | | |
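The translational and rotational RMSE values reported in the two tables above are computed from aligned ground-truth and estimated trajectories; evaluation tools such as the evo package cited in the references (Grupp, 2017) provide these metrics directly. The numpy-only sketch below is an illustrative stand-in that assumes the estimated trajectory is already aligned to the ground truth and that poses are given as 4×4 homogeneous matrices.

```python
# Illustrative numpy-only computation of translational / rotational RMSE over
# pre-aligned trajectories given as lists of 4x4 homogeneous camera poses.
import numpy as np

def pose_rmse(gt_poses, est_poses):
    t_err, r_err = [], []
    for T_gt, T_est in zip(gt_poses, est_poses):
        E = np.linalg.inv(T_gt) @ T_est                        # relative error pose
        t_err.append(np.linalg.norm(E[:3, 3]))                 # translational error (m)
        cos_a = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        r_err.append(np.degrees(np.arccos(cos_a)))             # rotational error (deg)
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rmse(t_err), rmse(r_err)

# Example with identity ground truth and a constant 10 cm translation offset:
gt = [np.eye(4) for _ in range(5)]
est = [np.eye(4) for _ in range(5)]
for T in est:
    T[0, 3] = 0.1
print(pose_rmse(gt, est))                                      # -> approximately (0.1, 0.0)
```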
KITTI Seq. | ORB-SLAM2 Time (s) | PL-SLAM Time (s) | PEL-SLAM Time (s) | EuRoC Seq. | ORB-SLAM2 Time (s) | PL-SLAM Time (s) | sPLVO Time (s) | PEL-SLAM Time (s)
---|---|---|---|---|---|---|---|---
0 | | | | MH-01 | | | |
1 | | | | MH-02 | | | |
2 | | | | MH-03 | | | |
3 | | | | MH-04 | | | |
4 | | | | MH-05 | | | |
5 | | | | V1-01 | | | |
6 | | | | V1-02 | | | |
7 | | | | V1-03 | | | |
8 | | | | V2-01 | | | |
9 | | | | V2-02 | | | |