Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera
Abstract
LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the sensors' local coordinate systems, we must estimate the rigid body transformation between them. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and yields small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves calibration accuracy in two ways. First, we weight the distance between each point and line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to suppress the influence of outliers in the calibration datasets. Additionally, building on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing a single objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method outperforms the other approaches.
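The objective described above can be sketched as a weighted, robustified sum of point-to-line distances on the image plane. The sketch below is a minimal illustration, not the authors' exact formulation: the function names, the choice of a Cauchy penalizer, and the assumption that LiDAR features have already been projected into the image by the candidate extrinsics are all assumptions made for clarity.

```python
import numpy as np

def point_line_distance(p, line):
    # Distance from 2D point p = (x, y) to the line ax + by + c = 0,
    # with line given as (a, b, c).
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

def cauchy(r, k=1.0):
    # Cauchy robust penalizer: grows slower than r**2 for large residuals,
    # so outlier correspondences contribute less to the total cost.
    return (k ** 2 / 2.0) * np.log1p((r / k) ** 2)

def calibration_cost(points, lines, weights, k=1.0):
    # Weighted, robustified sum of point-to-line distances.
    # `points` are projected LiDAR features (projection by the candidate
    # extrinsics is assumed to have happened already); `lines` are the
    # corresponding image edge/centerline features; `weights` encode how
    # accurate each correspondence is believed to be.
    return sum(w * cauchy(point_line_distance(p, l), k)
               for p, l, w in zip(points, lines, weights))
```

In a full pipeline this cost would be minimized over the six extrinsic parameters (rotation and translation) with a nonlinear least-squares solver; the weighting and the penalizer correspond to the paper's two accuracy-improving ideas.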
Share & Cite This Article
Sim, S.; Sock, J.; Kwak, K. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera. Sensors 2016, 16, 933.