Article

Autonomous Vehicle Localization with Prior Visual Point Cloud Map Constraints in GNSS-Challenged Environments

1 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
2 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Academic Editors: Nereida Rodriguez-Alvarez and Kai-Wei Chiang
Remote Sens. 2021, 13(3), 506; https://doi.org/10.3390/rs13030506
Received: 27 November 2020 / Revised: 21 January 2021 / Accepted: 28 January 2021 / Published: 31 January 2021
Accurate ego-localization is key for autonomous vehicles to complete high-level navigation tasks. State-of-the-art localization methods adopt visual and light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) to estimate the vehicle's position, but both may suffer from error accumulation during long-term operation without loop-closure optimization or prior constraints. In practice, the vehicle cannot always return to a previously visited location, so errors accumulate in Global Navigation Satellite System (GNSS)-challenged environments. To solve this problem, we propose a novel localization method constrained by a prior dense visual point cloud map generated by a stereo camera. First, the semi-global block matching (SGBM) algorithm estimates the visual point cloud of each frame, and stereo visual odometry provides an initial position for the current point cloud. Second, multiple filtering steps and adaptive prior-map segmentation are performed on the prior dense visual point cloud map for fast matching and localization. The current visual point cloud is then matched against the candidate sub-map by the normal distributions transform (NDT). Finally, the matching result is used to update the pose prediction based on the last frame, yielding accurate localization. Comprehensive experiments validate the proposed method: the root mean square errors (RMSEs) of translation and rotation are less than 5.59 m and 0.08°, respectively.
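The pipeline in the abstract (predict the pose from the last frames, cut a candidate sub-map from the prior map, refine the pose by scan-to-map matching) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are hypothetical, poses are 4×4 homogeneous transforms, and the `refine_pose` step substitutes a single translation-only centroid alignment for the paper's NDT matching.

```python
import numpy as np

def predict_pose(prev_pose, prev_prev_pose):
    """Constant-velocity motion model: reapply the last relative motion.
    Poses are 4x4 homogeneous transforms (vehicle -> world)."""
    delta = np.linalg.inv(prev_prev_pose) @ prev_pose
    return prev_pose @ delta

def select_submap(map_pts, predicted_pose, radius=50.0):
    """Candidate sub-map: prior-map points within `radius` of the
    predicted vehicle position (stand-in for the paper's adaptive
    prior-map segmentation)."""
    center = predicted_pose[:3, 3]
    dist = np.linalg.norm(map_pts - center, axis=1)
    return map_pts[dist < radius]

def refine_pose(scan_pts, submap_pts, init_pose):
    """Stand-in for NDT matching: one centroid-alignment step that
    corrects only the translation. A real system would run NDT (or ICP)
    between the current visual point cloud and the candidate sub-map."""
    warped = (init_pose[:3, :3] @ scan_pts.T).T + init_pose[:3, 3]
    t_corr = submap_pts.mean(axis=0) - warped.mean(axis=0)
    refined = init_pose.copy()
    refined[:3, 3] += t_corr
    return refined
```

For example, with a toy scan whose map copy is shifted by a known translation, `refine_pose` recovers that translation from an identity initial guess; in the full method, the refined pose would then seed the next frame's prediction.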
Keywords: autonomous vehicle; stereo visual odometry; pose prediction; accurate vehicle localization
MDPI and ACS Style

Lin, X.; Wang, F.; Yang, B.; Zhang, W. Autonomous Vehicle Localization with Prior Visual Point Cloud Map Constraints in GNSS-Challenged Environments. Remote Sens. 2021, 13, 506. https://doi.org/10.3390/rs13030506

AMA Style

Lin X, Wang F, Yang B, Zhang W. Autonomous Vehicle Localization with Prior Visual Point Cloud Map Constraints in GNSS-Challenged Environments. Remote Sensing. 2021; 13(3):506. https://doi.org/10.3390/rs13030506

Chicago/Turabian Style

Lin, Xiaohu, Fuhong Wang, Bisheng Yang, and Wanwei Zhang. 2021. "Autonomous Vehicle Localization with Prior Visual Point Cloud Map Constraints in GNSS-Challenged Environments" Remote Sensing 13, no. 3: 506. https://doi.org/10.3390/rs13030506

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
