Article

A Novel and Simplified Extrinsic Calibration of 2D Laser Rangefinder and Depth Camera

Wei Zhou, Hailun Chen, Zhenlin Jin, Qiyang Zuo, Yaohui Xu and Kai He
1 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2 School of Mechanical Engineering, Yanshan University, Qinhuangdao 066004, China
3 Shenzhen Key Laboratory of Precision Engineering, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Machines 2022, 10(8), 646; https://doi.org/10.3390/machines10080646
Submission received: 14 June 2022 / Revised: 7 July 2022 / Accepted: 30 July 2022 / Published: 3 August 2022
(This article belongs to the Topic Advances in Mobile Robotics Navigation)

Abstract: It is difficult to directly obtain corresponding features between two-dimensional (2D) laser rangefinder (LRF) scan points and the depth point cloud of a camera, which leads to a cumbersome calibration process and low calibration accuracy. To address this problem, we propose a calibration method that constructs point-line constraint relations between the 2D LRF and depth camera observation features by using a specific calibration board. From two observations at different poses, we construct an overdetermined system of equations based on the point-line constraints and solve the coordinate transformation parameters of the 2D LRF and depth camera by the least-squares (LSQ) method. According to the calibration error and a threshold, the number of observations and the observation poses are adjusted adaptively. Experimental verification and comparison with existing methods show that the proposed method easily and efficiently solves the joint calibration of the 2D LRF and depth camera, and meets the application requirements of multi-sensor fusion for mobile robots.

1. Introduction

With the rapid development of sensor technology and computer vision, laser rangefinders (LRF) and cameras have become indispensable sensors for autonomous driving, mobile robots and other fields [1]. The two-dimensional (2D) LRF is commonly used to measure depth information in a single plane due to its high precision, light weight and low power consumption. The camera acquires rich information, such as color and texture, but it is sensitive to lighting and weather, which degrades its stability; it is also difficult for a camera to measure depth directly over long distances. Therefore, laser-vision fusion plays an important role in robot self-localization [2,3], environmental perception [4], target tracking [5] and path planning [6].
To integrate data information from 2D LRF and depth cameras, the relative positional relationship between the two sensors needs to be precisely known [7]. This is a classical extrinsic calibration problem, where the objective is to determine the conversion relationship between two coordinate systems. In contrast to 3D LRF, which identifies different features, 2D LRF only measures depth information in a single plane, and it is difficult for the camera to see the plane scanned by 2D LRF, which makes extrinsic calibration for 2D LRF and cameras more challenging. Therefore, additional constraints must be used to find the correspondence between the 2D LRF and the camera.
There has been a large amount of research on the extrinsic calibration of 2D LRF and cameras, which can be divided into two categories: target-based calibration and targetless calibration. References [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24] are target-based calibrations. Zhang and Pless [8] proposed a method using point-on-plane constraints, but only two degrees of freedom are constrained in a single observation, so a large number of different observations is required to ensure accuracy. Vasconcelos et al. [9] solved the problem in [8] by forming a perspective-three-point (P3P) problem. Zhou [10] further proposed an algebraic method for extrinsic calibration. Both methods in [9,10] require multiple observations and suffer from multi-solution problems. Kaiser et al. [11] proposed a calibration algorithm in which the rigid displacement estimation between the two sensors is reduced to the registration of planes and lines. Li et al. [12] and Kwak et al. [13] used isosceles triangles and foliate panels as calibration targets and used point-line constraints to calibrate the camera and LRF. Dong et al. [14] proposed a special V-shaped calibration target with a checkerboard, which is used for both camera and LRF calibration with a single observation, but it requires a cumbersome solution process. Itami et al. [15,16] proposed an improved method for checkerboard calibration of the camera and LRF, which directly obtains the point-to-point correspondence between the LRF and camera. Huang et al. [23] proposed a method to calibrate the 2D LRF and camera using a one-side-transparent hollow calibration board, but it requires the scanning plane of the LRF to form a certain angle with the hollow calibration board. Tu et al. [24] proposed an accuracy criterion based on directional synthesis to eliminate large-error data during observation, but their laser and visual observation points are weakly constrained and still require multiple observations. Although there are many extrinsic calibration methods for the 2D LRF and camera, problems remain: a complex calibration process, demanding calibration board fabrication, the need for many observations and a restricted calibration environment.
References [25,26,27,28,29,30,31] address calibration without a target, which is further divided into feature-based and motion-based extrinsic calibration. Levinson et al. [25] proposed a self-calibration method based on edge feature matching. Zhao et al. [26] proposed an extrinsic calibration framework based on a moving LRF and a visible light camera. Yang et al. [27] built on previous methods to match images and 3D LRF point clouds through keyframing and structure-from-motion techniques. However, since different sensors acquire data on different principles, the transformation of each sensor is determined based on the sensor that acquires data at the lowest frequency. Schneider et al. [28] proposed applying deep learning to the extrinsic calibration of an LRF and a visible light camera, constructing loss functions from photometric loss and point cloud distance loss and training with unsupervised learning. However, such methods rely on image feature points and radar point cloud data that are often difficult to obtain in natural scenes, so their usage conditions are harsh. To estimate the LiDAR-to-stereo-camera extrinsic parameters for driving platforms, a photometric error function was built by applying 3D-mesh-reconstruction-based point cloud registration [31]. In addition to directly obtaining the extrinsic calibration parameters, CFNet [32] predicts a calibration flow using convolutional neural networks. Recently, other authors proposed optimizing the extrinsic parameter calibration with additional sensors [33]. These targetless extrinsic calibration methods share a common requirement: the lidar must collect 3D point clouds with rich feature information, so they are difficult to apply to the calibration of a 2D LRF.
To solve the calibration problems mentioned above, in this paper we propose a method that constrains the correspondence between the depth camera and 2D LRF on a special calibration plate. The point-line feature constraints of the 2D LRF and the depth camera on the calibration plate are used to realize the joint calibration of the 2D LRF and depth camera. For the extrinsic calibration of the 2D laser rangefinder and depth camera, the main contributions of the article are as follows:
  • We provide a novel specific calibration board, which is simple to manufacture, for 2D LRF and camera calibration; it constructs three observed point-line feature constraints between the two sensors.
  • With the proposed method, the joint calibration of the 2D LRF and depth camera is completed with only two observations and a greatly simplified operation.
  • By setting a calibration error threshold, the joint calibration of the 2D LRF and depth camera mounted on a movable device adjusts the number of observations autonomously.
The layout of the article is as follows: Section 2 describes the calibration principles of the 2D LRF and depth cameras. Section 3 describes the calibration methods and algorithms for the 2D LRF and depth cameras. Section 4 verifies the proposed method by experiments, and we draw conclusions in Section 5.

2. The Calibration Basis of 2D LRF and Depth Camera

In the process of mobile robot localization and mapping, it is often necessary to unify the environmental perception data of each sensor into a world coordinate system, called the base frame. The joint calibration of the 2D LRF and depth camera determines the coordinate transformation relationships among the coordinate systems of the depth camera, the 2D LRF and the world. As shown in Figure 1, the calibration involves four coordinate systems: the world coordinate system $O_w x_w y_w z_w$, the LRF coordinate system $O_l x_l y_l z_l$ (laser frame), the depth camera coordinate system $O_c x_c y_c z_c$ (camera frame), and the pixel coordinate system $O_{uv}$. The pixel coordinate system is the reference coordinate system of the camera observation data, which usually needs to be transferred to the camera coordinate system. With the relative positions among $O_w x_w y_w z_w$, $O_l x_l y_l z_l$ and $O_c x_c y_c z_c$ determined, the observation point coordinates can be transformed among the coordinate systems. Depending on the application scenario and needs, the observation data from the depth camera can be expressed in the LRF coordinate system and then projected into the world coordinate system; the scanned data of the LRF can also be expressed in the camera coordinate system and then transformed from the camera coordinate system to the world coordinate system.
The 2D LRF is often mounted horizontally in mobile robot applications. However, due to installation error and uncertainty, the mounting will have some deflection angles. The camera also has some angles relative to the world coordinate system. According to the rigid body coordinate transformation relationship, the position data collected by the 2D LRF and depth camera in their respective coordinate systems correspond to the position of the observation point in the world coordinate system, as shown in Equation (1).
$$\begin{bmatrix} x_w \\ y_w \\ z_w \end{bmatrix} = R_i^w \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} + T_i^w \qquad (1)$$
In Equation (1), $(x_i, y_i, z_i)$ are the coordinates in coordinate system $i$, with $i = l, c$, where $l$ denotes the LRF (radar) coordinate system and $c$ the camera coordinate system. $R_i^w$ is the rotation matrix from coordinate system $i$ to the world coordinate system, and $T_i^w$ is the translation vector of coordinate system $i$ with respect to the world coordinate system. The camera used in this paper is the ZED depth camera, and its imaging principle is shown in Figure 2. The figure contains two coordinate systems: the camera coordinate system $O_c x_c y_c z_c$ and the pixel coordinate system $O_{uv}$. The two cameras of the depth camera lie in the same plane, the optical axes of the left and right cameras are parallel, and the focal length $f$ is the same for both. The coordinates of the observation point in the camera coordinate system are denoted $P(x_c, y_c, z_c)$.
According to the camera’s pinhole imaging principle and triangle similarity, we have
$$\frac{z_c}{f} = \frac{x_c}{u_l}, \qquad \frac{z_c}{f} = \frac{x_c - b}{u_r}, \qquad \frac{z_c}{f} = \frac{y_c}{v_l} = \frac{y_c}{v_r} \qquad (2)$$
where $x_c, y_c, z_c$ are the coordinates in $O_c x_c y_c z_c$; $u_l$ and $v_l$ are the coordinates in the left camera pixel coordinate system; $u_r$ and $v_r$ are the coordinates in the right camera pixel coordinate system; $f$ is the focal length of the camera; and $b$ is the distance between the binocular cameras, called the baseline. Taking the coordinate system of the left camera as the camera coordinate system of the depth camera, the relationship between the pixel coordinates and the camera coordinates of the depth camera is obtained as
$$x_c = \frac{u_l z_c}{f}, \qquad y_c = \frac{v_l z_c}{f}, \qquad z_c = \frac{fb}{u_l - u_r} \qquad (3)$$
By Equation (3),
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \begin{bmatrix} \dfrac{b}{d} & 0 & 0 \\ 0 & \dfrac{b}{d} & 0 \\ 0 & 0 & \dfrac{bf}{d} \end{bmatrix} \begin{bmatrix} u_l \\ v_l \\ 1 \end{bmatrix} \qquad (4)$$
where $d = u_l - u_r$ is the parallax (disparity) of the two cameras. According to the depth data acquisition principle of the depth camera, the point cloud data of the observed object are gathered. The 2D LRF directly obtains the distance and angle information of obstacles. In practical applications, each point of the actual object scanned by the LRF has a unique corresponding point in the depth camera. The polar coordinate data of the LRF are converted into Cartesian coordinates, which are then expressed in the coordinate system $O_c x_c y_c z_c$ by Equation (5).
$$\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R_l^c \begin{bmatrix} x_l \\ y_l \\ z_l \end{bmatrix} + T_l^c \qquad (5)$$
where $R_l^c = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}$ and $T_l^c = \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix}$. $R_l^c$ denotes the $3 \times 3$ rotation matrix from $O_l x_l y_l z_l$ to $O_c x_c y_c z_c$, and $T_l^c$ denotes the translation vector from $O_l x_l y_l z_l$ to $O_c x_c y_c z_c$.
Because geometric relationships do not vary with the coordinate system, the coordinate system transformation does not affect the geometric constraint relationships. The data points acquired by the 2D LRF are transformed into the depth camera coordinate system, and equations are constructed using the constraint that the LRF scan points lie on the lines observed by the depth camera. The set of equations constructed from multiple observations is solved by linear least squares for the rotation matrix $R_l^c$ and translation matrix $T_l^c$.
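As an illustration of Equations (4) and (5), the following Python sketch converts a left/right pixel pair into camera coordinates and transforms a single LRF range/bearing sample into the camera frame. The function names, and the assumption that pixel coordinates are already referenced to the principal point, are ours rather than the paper's.

```python
import numpy as np

def disparity_to_camera_xyz(u_l, v_l, u_r, f, b):
    """Equation (4): recover (x_c, y_c, z_c) from a left/right pixel pair.

    u_l, v_l, u_r are pixel coordinates relative to the principal point,
    f is the focal length in pixels and b is the stereo baseline.
    """
    d = u_l - u_r                      # disparity of the two cameras
    return np.array([u_l * b / d, v_l * b / d, f * b / d])

def lrf_polar_to_camera_xyz(r, theta, R_lc, T_lc):
    """Convert one LRF range/bearing sample to the camera frame (Equation (5)).

    The 2D LRF measures in its own scan plane, so z_l = 0.
    R_lc (3x3) and T_lc (3,) are the extrinsic parameters to be calibrated.
    """
    p_l = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    return R_lc @ p_l + T_lc
```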

3. Calibration Methods

3.1. Feature Extraction

Although the scan points of the 2D LRF are not visible, the LRF accurately captures the contour of an obstacle. Based on this characteristic of the 2D LRF, a specific calibration plate is used in this paper, as shown in Figure 3. The special feature of the calibration plate is its shape, which makes the laser scan points form three uncorrelated straight lines and makes the camera's 3D point cloud form three uncorrelated planes. The calibration plate consists of two rectangular planes and two triangular planes. There is no limit on the angles between the planes when manufacturing the calibration plate; the only requirements are that any three planes are uncorrelated and that as many observation points as possible fall on the calibration plate. In addition, the calibration plate removes the limitations of the environment and of the installation relationship during the calibration process; it is only necessary that the LRF and camera observe the calibration plate at the same time. With this specific calibration plate, the characteristic information of the 2D LRF scan is obtained, and the 3D point cloud of the depth camera observation on the calibration plate is obtained at the same time.
As shown in Figure 4, the calibration plate is placed at a position where the 2D LRF and depth camera observe it simultaneously, and the observation data of the 2D LRF and depth camera are collected. The 2D LRF data are expressed in the LRF coordinate system $O_l x_l y_l z_l$, and the observation data of the depth camera are expressed in the coordinate system of the left camera of the depth camera, $O_c x_c y_c z_c$.
On the calibration plate, the scan points of the LRF form a folded polyline $E$–$F$–$G$–$H$–$I$, as shown in Figure 4. Using the RANSAC (random sample consensus) method, the scan points on the rectangular plane $PBDQ$ are fitted to the straight line $\overline{EI}$, the points on the plane $ABC$ are fitted to the straight line $\overline{FG}$, and the points on the plane $ACD$ are fitted to the straight line $\overline{GH}$. Pairs of these three lines are then intersected to find the intersection points $F(x_1^l, y_1^l, 0)$, $G(x_2^l, y_2^l, 0)$ and $H(x_3^l, y_3^l, 0)$.
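A minimal sketch of this step is given below: it fits a total-least-squares line to each scan segment and intersects pairs of lines to obtain the feature points. The segmentation of the scan into the three segments (one per plate face) is assumed to have been done beforehand, for example by corner detection, and a full implementation would wrap the fit in a RANSAC loop as the paper does.

```python
import numpy as np

def fit_line_2d(points):
    """Fit a 2D line a*x + b*y + c = 0 to an N x 2 array by total least squares."""
    centroid = points.mean(axis=0)
    # principal axis of the centred points gives the line direction
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])   # (a, b)
    c = -normal @ centroid
    return normal[0], normal[1], c

def intersect_lines_2d(l1, l2):
    """Intersection of a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    A = np.array([[a1, b1], [a2, b2]])
    rhs = -np.array([c1, c2])
    x, y = np.linalg.solve(A, rhs)
    return x, y   # a feature point such as F, G or H in the LRF scan plane
```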
The depth data from the depth camera are converted into a 3D point cloud, and the point cloud data of plane $PBDQ$, plane $ABC$ and plane $ACD$ observed by the depth camera are extracted. Using the RANSAC method, the 3D point cloud on each plane is fitted to the plane equation of the corresponding plane. The spatial line equations of lines $\overline{AB}$, $\overline{AC}$ and $\overline{AD}$ are obtained by combining any two of the plane equations of plane $PBDQ$, plane $ABC$ and plane $ACD$. Suppose the plane equations of plane $PBDQ$, plane $ABC$ and plane $ACD$ are
$$\begin{cases} A_1 x_c + B_1 y_c + C_1 z_c + D_1 = 0 \\ A_2 x_c + B_2 y_c + C_2 z_c + D_2 = 0 \\ A_3 x_c + B_3 y_c + C_3 z_c + D_3 = 0 \end{cases} \qquad (6)$$
Then the spatial line equations of line $\overline{AB}$, line $\overline{AC}$ and line $\overline{AD}$ are

$$\overline{AB}:\ \begin{cases} A_1 x_c + B_1 y_c + C_1 z_c + D_1 = 0 \\ A_2 x_c + B_2 y_c + C_2 z_c + D_2 = 0 \end{cases}\qquad
\overline{AC}:\ \begin{cases} A_2 x_c + B_2 y_c + C_2 z_c + D_2 = 0 \\ A_3 x_c + B_3 y_c + C_3 z_c + D_3 = 0 \end{cases}\qquad
\overline{AD}:\ \begin{cases} A_3 x_c + B_3 y_c + C_3 z_c + D_3 = 0 \\ A_1 x_c + B_1 y_c + C_1 z_c + D_1 = 0 \end{cases} \qquad (7)$$
where $A_1, B_1, C_1, A_2, B_2, C_2, A_3, B_3, C_3$ are known. By extracting and fitting the 2D LRF data and the depth camera point cloud data, the three feature points in the LRF coordinate system and the equations of the three spatial straight lines in the depth camera coordinate system are obtained.
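The plane-fitting and plane-intersection steps can be sketched as follows; the plain SVD plane fit stands in for the RANSAC fit used in the paper, and all names are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane A*x + B*y + C*z + D = 0 through an N x 3 point cloud.

    The paper fits each calibration-plate face with RANSAC to reject outliers;
    the plain SVD fit below shows only the core fitting step.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # smallest singular vector = plane normal
    D = -normal @ centroid
    return normal[0], normal[1], normal[2], D

def plane_intersection_line(p1, p2):
    """Direction and one point of the line shared by two planes (Equation (7))."""
    n1, n2 = np.array(p1[:3]), np.array(p2[:3])
    direction = np.cross(n1, n2)
    # one point on the line: satisfy both plane equations plus a gauge constraint
    A = np.vstack([n1, n2, direction])
    rhs = np.array([-p1[3], -p2[3], 0.0])
    point = np.linalg.solve(A, rhs)
    return point, direction / np.linalg.norm(direction)
```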

3.2. Parameter Fitting

Projecting the three feature points of the 2D LRF into the depth camera coordinate system by Equation (5) gives

$$\begin{bmatrix} x_{li}^c \\ y_{li}^c \\ z_{li}^c \end{bmatrix} = \begin{bmatrix} r_{11} x_i^l + r_{12} y_i^l + t_1 \\ r_{21} x_i^l + r_{22} y_i^l + t_2 \\ r_{31} x_i^l + r_{32} y_i^l + t_3 \end{bmatrix} \qquad (8)$$
where $l$ denotes the 2D LRF coordinate system, $i$ denotes the $i$-th intersection point under one 2D LRF observation, and $i = 1, 2, 3$. Since the LRF points lie in the LRF scan plane, $z_i^l = 0$ and the third column of $R_l^c$ does not appear in Equation (8). From the projection result, it follows that only nine unknowns need to be solved to obtain the rotation and translation matrices of the 2D LRF to depth camera coordinate transformation, so only nine mutually independent equations need to be combined. Since transforming the coordinate system of a point does not change geometric relationships, the points from the 2D LRF coordinate system should lie on the straight lines $\overline{AB}$, $\overline{AC}$ and $\overline{AD}$, respectively, after they are transformed to the camera coordinate system, i.e.,
$$\begin{cases}
A_1(r_{11}x_1^l + r_{12}y_1^l + t_1) + B_1(r_{21}x_1^l + r_{22}y_1^l + t_2) + C_1(r_{31}x_1^l + r_{32}y_1^l + t_3) + D_1 = 0 \\
A_2(r_{11}x_1^l + r_{12}y_1^l + t_1) + B_2(r_{21}x_1^l + r_{22}y_1^l + t_2) + C_2(r_{31}x_1^l + r_{32}y_1^l + t_3) + D_2 = 0 \\
A_2(r_{11}x_2^l + r_{12}y_2^l + t_1) + B_2(r_{21}x_2^l + r_{22}y_2^l + t_2) + C_2(r_{31}x_2^l + r_{32}y_2^l + t_3) + D_2 = 0 \\
A_3(r_{11}x_2^l + r_{12}y_2^l + t_1) + B_3(r_{21}x_2^l + r_{22}y_2^l + t_2) + C_3(r_{31}x_2^l + r_{32}y_2^l + t_3) + D_3 = 0 \\
A_3(r_{11}x_3^l + r_{12}y_3^l + t_1) + B_3(r_{21}x_3^l + r_{22}y_3^l + t_2) + C_3(r_{31}x_3^l + r_{32}y_3^l + t_3) + D_3 = 0 \\
A_1(r_{11}x_3^l + r_{12}y_3^l + t_1) + B_1(r_{21}x_3^l + r_{22}y_3^l + t_2) + C_1(r_{31}x_3^l + r_{32}y_3^l + t_3) + D_1 = 0
\end{cases} \qquad (9)$$
Rearranged in matrix form, this gives
$$\begin{bmatrix}
A_1 x_1^l & A_1 y_1^l & A_1 & B_1 x_1^l & B_1 y_1^l & B_1 & C_1 x_1^l & C_1 y_1^l & C_1 \\
A_2 x_1^l & A_2 y_1^l & A_2 & B_2 x_1^l & B_2 y_1^l & B_2 & C_2 x_1^l & C_2 y_1^l & C_2 \\
A_2 x_2^l & A_2 y_2^l & A_2 & B_2 x_2^l & B_2 y_2^l & B_2 & C_2 x_2^l & C_2 y_2^l & C_2 \\
A_3 x_2^l & A_3 y_2^l & A_3 & B_3 x_2^l & B_3 y_2^l & B_3 & C_3 x_2^l & C_3 y_2^l & C_3 \\
A_3 x_3^l & A_3 y_3^l & A_3 & B_3 x_3^l & B_3 y_3^l & B_3 & C_3 x_3^l & C_3 y_3^l & C_3 \\
A_1 x_3^l & A_1 y_3^l & A_1 & B_1 x_3^l & B_1 y_3^l & B_1 & C_1 x_3^l & C_1 y_3^l & C_1
\end{bmatrix}
\begin{bmatrix} r_{11} \\ r_{12} \\ t_1 \\ r_{21} \\ r_{22} \\ t_2 \\ r_{31} \\ r_{32} \\ t_3 \end{bmatrix}
= -\begin{bmatrix} D_1 \\ D_2 \\ D_2 \\ D_3 \\ D_3 \\ D_1 \end{bmatrix} \qquad (10)$$
The six equations in Equation (9) are independent of each other. By changing the calibration model or the position and attitude of the calibration plate, six constraint equations are obtained again by the same procedure. Since there are only nine unknowns in the calibration parameters, the equation sets from successive observations are stacked to form the overdetermined system of equations in Equation (11), where $n$ is the number of observations. The parameters of the rotation and translation matrices are solved by linear least squares to determine the coordinate transformation between the 2D LRF and depth camera.
$$\begin{bmatrix}
A_{11}x_{11}^l & A_{11}y_{11}^l & A_{11} & B_{11}x_{11}^l & B_{11}y_{11}^l & B_{11} & C_{11}x_{11}^l & C_{11}y_{11}^l & C_{11}\\
A_{12}x_{11}^l & A_{12}y_{11}^l & A_{12} & B_{12}x_{11}^l & B_{12}y_{11}^l & B_{12} & C_{12}x_{11}^l & C_{12}y_{11}^l & C_{12}\\
A_{12}x_{12}^l & A_{12}y_{12}^l & A_{12} & B_{12}x_{12}^l & B_{12}y_{12}^l & B_{12} & C_{12}x_{12}^l & C_{12}y_{12}^l & C_{12}\\
A_{13}x_{12}^l & A_{13}y_{12}^l & A_{13} & B_{13}x_{12}^l & B_{13}y_{12}^l & B_{13} & C_{13}x_{12}^l & C_{13}y_{12}^l & C_{13}\\
A_{13}x_{13}^l & A_{13}y_{13}^l & A_{13} & B_{13}x_{13}^l & B_{13}y_{13}^l & B_{13} & C_{13}x_{13}^l & C_{13}y_{13}^l & C_{13}\\
A_{11}x_{13}^l & A_{11}y_{13}^l & A_{11} & B_{11}x_{13}^l & B_{11}y_{13}^l & B_{11} & C_{11}x_{13}^l & C_{11}y_{13}^l & C_{11}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
A_{n1}x_{n1}^l & A_{n1}y_{n1}^l & A_{n1} & B_{n1}x_{n1}^l & B_{n1}y_{n1}^l & B_{n1} & C_{n1}x_{n1}^l & C_{n1}y_{n1}^l & C_{n1}\\
A_{n2}x_{n1}^l & A_{n2}y_{n1}^l & A_{n2} & B_{n2}x_{n1}^l & B_{n2}y_{n1}^l & B_{n2} & C_{n2}x_{n1}^l & C_{n2}y_{n1}^l & C_{n2}\\
A_{n2}x_{n2}^l & A_{n2}y_{n2}^l & A_{n2} & B_{n2}x_{n2}^l & B_{n2}y_{n2}^l & B_{n2} & C_{n2}x_{n2}^l & C_{n2}y_{n2}^l & C_{n2}\\
A_{n3}x_{n2}^l & A_{n3}y_{n2}^l & A_{n3} & B_{n3}x_{n2}^l & B_{n3}y_{n2}^l & B_{n3} & C_{n3}x_{n2}^l & C_{n3}y_{n2}^l & C_{n3}\\
A_{n3}x_{n3}^l & A_{n3}y_{n3}^l & A_{n3} & B_{n3}x_{n3}^l & B_{n3}y_{n3}^l & B_{n3} & C_{n3}x_{n3}^l & C_{n3}y_{n3}^l & C_{n3}\\
A_{n1}x_{n3}^l & A_{n1}y_{n3}^l & A_{n1} & B_{n1}x_{n3}^l & B_{n1}y_{n3}^l & B_{n1} & C_{n1}x_{n3}^l & C_{n1}y_{n3}^l & C_{n1}
\end{bmatrix}_{6n\times 9}
\begin{bmatrix} r_{11}\\ r_{12}\\ t_1\\ r_{21}\\ r_{22}\\ t_2\\ r_{31}\\ r_{32}\\ t_3 \end{bmatrix}_{9\times 1}
= -\begin{bmatrix} D_{11}\\ D_{12}\\ D_{12}\\ D_{13}\\ D_{13}\\ D_{11}\\ \vdots\\ D_{n1}\\ D_{n2}\\ D_{n2}\\ D_{n3}\\ D_{n3}\\ D_{n1} \end{bmatrix}_{6n\times 1} \qquad (11)$$
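A sketch of how the per-observation blocks of Equation (10) can be assembled and the stacked system of Equation (11) solved with a linear least-squares routine is given below. The function and variable names are illustrative. Note that the least-squares solution only yields the first two columns of $R_l^c$ plus the translation; the third column would have to be recovered from the orthonormality of the rotation matrix (e.g., as the cross product of the normalized first two columns), a step the text does not spell out.

```python
import numpy as np

# Plane pairs that define lines AB, AC, AD and carry the feature points:
# point 1 lies on planes (1, 2), point 2 on (2, 3), point 3 on (3, 1).
PLANE_PAIRS = [(0, 1), (1, 2), (2, 0)]

def observation_block(planes, points_l):
    """Build the 6x9 block and right-hand side of Equation (10) for one observation.

    planes   : list of three (A, B, C, D) tuples fitted from the depth cloud
    points_l : list of three (x^l, y^l) LRF feature points F, G, H
    """
    rows, rhs = [], []
    for i, (x, y) in enumerate(points_l):
        for k in PLANE_PAIRS[i]:
            A, B, C, D = planes[k]
            rows.append([A * x, A * y, A, B * x, B * y, B, C * x, C * y, C])
            rhs.append(-D)
    return np.asarray(rows), np.asarray(rhs)

def solve_extrinsics(blocks):
    """Stack the per-observation blocks (Equation (11)) and solve by least squares."""
    M = np.vstack([b[0] for b in blocks])
    d = np.concatenate([b[1] for b in blocks])
    X, *_ = np.linalg.lstsq(M, d, rcond=None)
    r11, r12, t1, r21, r22, t2, r31, r32, t3 = X
    R_first_two_cols = np.array([[r11, r12], [r21, r22], [r31, r32]])
    T = np.array([t1, t2, t3])
    return R_first_two_cols, T
```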

3.3. Calibration Algorithm

For the joint calibration of the 2D LRF and depth camera, the method in this paper aims to determine the coordinate transformation relationships between the depth camera, the LRF and the world. Before the extrinsic calibration, the intrinsic parameters of the depth camera were calibrated. To evaluate the calibration accuracy of the method, the three feature points from the LRF observation are projected into the point cloud reference coordinate system of the depth camera using the solved calibration parameters, and the projection error of the laser points is calculated with the point-to-line distance formula. The position of the calibration plate or the calibration model is changed, the experiment is repeated several times, and the average calibration accuracy is calculated over multiple sets of data.
$$err = \frac{1}{3N} \sum_{i=1}^{N} \sum_{j=1}^{3} \frac{|A_j x_j + B_j y_j + C_j z_j + D_j|}{\sqrt{A_j^2 + B_j^2 + C_j^2}} \qquad (12)$$
where $N$ is the number of tests, $(x_j, y_j, z_j)$ are the coordinates of the point projected into the depth camera coordinate system, and $A_j, B_j, C_j, D_j$ are the coefficients of the equation of the line corresponding to the projected point. $err$ is the calibration accuracy: the smaller $err$ is, the higher the calibration accuracy.
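For one test, the inner sum of Equation (12) can be evaluated as in the sketch below; the outer average over $N$ tests is omitted, and the pairing of each projected point with a single plane coefficient set is an assumption of this illustration.

```python
import numpy as np

def reprojection_error(points_c, lines_as_planes):
    """Inner term of Equation (12): mean distance of the three projected LRF
    feature points from the planes defining their corresponding lines.

    points_c        : 3 x 3 array, feature points expressed in the camera frame
    lines_as_planes : three (A, B, C, D) tuples, one per feature point
    """
    total = 0.0
    for (x, y, z), (A, B, C, D) in zip(points_c, lines_as_planes):
        total += abs(A * x + B * y + C * z + D) / np.sqrt(A**2 + B**2 + C**2)
    return total / 3.0
```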
The algorithm framework of the calibration method in this paper is shown in Algorithm 1, where $R$, $T$ and $Observ$ are the final extrinsic calibration results. Datasets $I$ and $S$ are the depth camera data and the 2D LRF point cloud, respectively, and dataset $P$ is the pose sequence of the AGV. Additionally, $E$ is the error threshold, and $R_{gt}$ and $T_{gt}$ are the ground truth of the rotation and translation matrices. After the first two observations, the algorithm solves for the calibration parameters and then, at each subsequent observation, computes the calibration error and compares it with the threshold. When the calibration error is less than the threshold, the rotation matrix $R$, the translation matrix $T$ and the number of observations are output, completing the extrinsic calibration of the LRF and depth camera.
Algorithm 1 LRF and Depth Camera Extrinsic Parameter Calibration.
Input: Image set $I$, point cloud set $S$, calibration position set $P$ of the AGV, error threshold $E$, ground truth $R_{gt}$, ground truth $T_{gt}$
Output: Calibration result $R$, $T$ and $Observ$
 1: R ← 0, T ← 0, Observ ← 0
 2: A ← 0, b ← 0, err ← 100
 3: AngularError ← 0
 4: DistanceError ← 0
 5: X ← [r11 r12 t1 r21 r22 t2 r31 r32 t3]^T
 6: Set first calibration position of AGV from P
 7: Observ ← 1
 8: while err > E do
 9:     Get set I and S
10:     Extract 3D point cloud of calibration plate features
11:     Calculate M ← [A1 B1 C1 D1; A2 B2 C2 D2; A3 B3 C3 D3]
12:     Extract 2D point cloud of calibration plate shape features
13:     Calculate L ← [x1^l y1^l; x2^l y2^l; x3^l y3^l]
14:     Calculate C ← [x1^c y1^c; x2^c y2^c; x3^c y3^c]
15:     A1 ← 6 × 9 coefficient matrix of Equation (10) built from M and L
16:     b1 ← −[D1 D2 D2 D3 D3 D1]^T
17:     if Observ > 2 then
18:         err ← (1/3) Σ_{j=1}^{3} |Aj xj^c + Bj yj^c + Cj zj^c + Dj| / sqrt(Aj^2 + Bj^2 + Cj^2)
19:         if err < E then
20:             Calculate R ← [r11 r12 r13; r21 r22 r23; r31 r32 r33]
21:             Calculate T ← [t1 t2 t3]^T
22:             AngularError ← cos^{-1}((trace(R_gt^{-1} R) − 1) / 2)
23:             DistanceError ← ‖T − T_gt‖_2
24:         else
25:             Add A1 into A
26:             Add b1 into b
27:             Calculate X by AX = b
28:             Set another calibration position of AGV from P
29:             Observ ← Observ + 1
30:         end if
31:     else
32:         Add A1 into A
33:         Add b1 into b
34:         if Observ = 2 then
35:             Calculate X by AX = b
36:         end if
37:         Set another calibration board position from P
38:         Observ ← Observ + 1
39:     end if
40: end while
41: return R, T and Observ
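For readers who prefer code to pseudocode, the following Python sketch mirrors the adaptive loop of Algorithm 1 under simplified assumptions. The helpers `observation_block`, `solve_extrinsics`, `reprojection_error` and `PLANE_PAIRS` are the illustrative sketches given earlier, and `get_observation` and `move_agv` are hypothetical callbacks standing in for data acquisition and AGV motion; this is not the authors' implementation.

```python
import numpy as np

def calibrate_adaptively(get_observation, move_agv, err_threshold, max_obs=20):
    """Simplified sketch of Algorithm 1: accumulate point-line constraints
    until the reprojection error drops below the threshold."""
    blocks, observ = [], 1          # the first AGV pose is assumed already set
    R_cols = T = None
    while observ <= max_obs:
        planes, points_l = get_observation()      # fitted planes + LRF feature points
        if observ > 2:
            # check the current estimate against the new, not-yet-used observation
            pts_c = [R_cols @ np.array([x, y]) + T for (x, y) in points_l]
            err = reprojection_error(np.array(pts_c),
                                     [planes[k] for k, _ in PLANE_PAIRS])
            if err < err_threshold:
                return R_cols, T, observ
        blocks.append(observation_block(planes, points_l))
        if observ >= 2:
            R_cols, T = solve_extrinsics(blocks)  # re-solve the stacked system
        move_agv()                                # next calibration pose from P
        observ += 1
    return R_cols, T, observ
```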

4. Calibration Experiments and Analysis of Results

4.1. Experimental Equipment and Environment

The experiments in this paper are based on ROS (robot operating system) under Linux. As shown in Figure 5a, a Pepperl+Fuchs R2000 2D LRF and a Stereolabs ZED2i binocular stereo camera are used to collect the 2D point cloud data of the LRF and the 3D point cloud data of the depth camera, respectively. The detailed parameters of the depth camera and LRF are shown in Table 1 and Table 2. In the experiments, both the depth camera and the LRF use a 30 Hz sampling rate.
The calibration experiments are performed on a homemade automated guided vehicle (AGV). The installation relationship of the experimental equipment is shown in Figure 5. The 2D LRF is installed behind and above the depth camera and is mounted approximately horizontally. The camera is installed horizontally in front of the 2D LRF, and the relative position of the depth camera and the LRF is kept constant during the experiments. The calibration plate is placed at a location where the 2D LRF and the depth camera observe it simultaneously. Through the calibration algorithm proposed in this paper, experimental data at different observation positions, which are adjusted by controlling the AGV, are collected to complete the autonomous joint calibration of the 2D LRF and the depth camera.

4.2. Experimental Steps

The experiments in this paper are performed in a ROS environment. After running the driver packages of the 2D LRF and depth camera, a node is created with two subscribers that receive the scan topic of the 2D LRF and the depth image topic of the depth camera, respectively. The depth data of the depth camera are then converted into 3D point cloud data; a minimal data-acquisition sketch in Python follows the step list below. The calibration of the 2D LRF and depth camera is completed by executing the following steps in the calibration program.
  • Identify and extract the point cloud data gathered by 2D LRF on the calibration plate by line and corner feature detection algorithms; split the point cloud data into three parts; fit the point cloud of each part into a straight line; and solve the intersection point of any two straight lines. The feature extraction process is shown in Figure 6.
  • Project the intersection points found in the previous step into the depth camera coordinate system by Equation (8).
  • Identify and extract the point cloud collected by the depth camera on the calibration plate by edge and corner detection algorithms. Segment the three planes of the calibration plate; obtain the equation of the plane by fitting the point cloud on the plane; and find the equation of the intersection line between two planes in the three planes. The feature extraction and fitting process are shown in Figure 7.
  • Using the point on the line as a constraint, the coordinates of the projected point are substituted into the intersection equation to obtain six equations.
  • Move the AGV to adjust the observation position, and complete the data collection and extraction again.
  • Solve the rotation and translation matrices of the depth camera and 2D LRF coordinate transformation according to Equation (11).
  • Repeat the experiment multiple times and calculate the average of the calibration results.
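As referenced above, the data-acquisition step can be sketched with rospy as follows. The topic names are illustrative defaults and depend on the actual R2000 and ZED ROS drivers, and the ZED's registered point cloud topic is subscribed to directly here instead of converting the depth image, as the paper does.

```python
import rospy
import numpy as np
from sensor_msgs.msg import LaserScan, PointCloud2
import sensor_msgs.point_cloud2 as pc2

class CalibrationDataCollector:
    """Collect one pair of LRF scan (as 2D Cartesian points) and depth point cloud."""

    def __init__(self):
        self.scan_xy = None
        self.cloud_xyz = None
        rospy.Subscriber('/scan', LaserScan, self.on_scan)
        rospy.Subscriber('/zed2i/zed_node/point_cloud/cloud_registered',
                         PointCloud2, self.on_cloud)

    def on_scan(self, msg):
        # convert polar ranges to Cartesian points in the LRF scan plane
        angles = msg.angle_min + np.arange(len(msg.ranges)) * msg.angle_increment
        r = np.asarray(msg.ranges)
        valid = np.isfinite(r) & (r > msg.range_min) & (r < msg.range_max)
        self.scan_xy = np.column_stack((r[valid] * np.cos(angles[valid]),
                                        r[valid] * np.sin(angles[valid])))

    def on_cloud(self, msg):
        # gather the 3D points observed by the depth camera
        self.cloud_xyz = np.array(list(
            pc2.read_points(msg, field_names=('x', 'y', 'z'), skip_nans=True)))

if __name__ == '__main__':
    rospy.init_node('lrf_depth_calibration_collector')
    collector = CalibrationDataCollector()
    rospy.spin()
```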
Figure 6. Features of LRF extraction and fitting. (a) Laser observation position. (b) Laser observation data. (c) 2D point cloud of calibration plate shape features. (d) Feature data of calibration plate fitting.
Figure 7. Features of depth camera extraction and fitting. (a) Depth camera observation position. (b) Depth camera point cloud. (c) 3D point cloud of calibration plate features. (d) 3D point cloud plane fitting of calibration plate.

4.3. Experimental Results and Analysis

In this paper, the baseline of the ZED2i camera is used as the ground truth. The LRF is first calibrated against the left camera frame to obtain $R_L^{C_l}$ and $T_L^{C_l}$, and against the right camera frame to obtain $R_L^{C_r}$ and $T_L^{C_r}$. We then compute the relative pose (baseline) between the binocular cameras and compare it with the ground truth $R_{C_r}^{C_l} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ and $T_{C_r}^{C_l} = \begin{bmatrix} 0 & 120 & 0 \end{bmatrix}^T$ mm from the ZED2i camera parameters. To verify the calibration accuracy and efficiency of the proposed method, the number of observations is used as the independent variable. For each value of the independent variable, 10 repeated experiments were conducted to obtain the means and standard deviations of the rotation and translation errors. Based on the experiments, the relationship between these means and standard deviations and the number of observations is shown in Figure 8.
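The rotation and translation errors reported below correspond to the AngularError and DistanceError of Algorithm 1. A small sketch of how they can be computed from an estimated pose and the ground truth is given here; the variable names are ours.

```python
import numpy as np

def pose_errors(R_est, T_est, R_gt, T_gt):
    """Angular error (geodesic rotation angle between R_gt and R_est, in degrees)
    and translational error (Euclidean distance between translations, same units
    as the inputs, here mm)."""
    R_delta = R_gt.T @ R_est
    angle = np.degrees(np.arccos(np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)))
    distance = np.linalg.norm(T_est - T_gt)
    return angle, distance

# ground-truth baseline of the ZED2i used in the paper: identity rotation,
# 120 mm translation between the left and right camera centres
R_gt = np.eye(3)
T_gt = np.array([0.0, 120.0, 0.0])
```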
As shown in Figure 8, the mean values of the rotation and translation errors and the standard deviations of the calibration results gradually decrease as the number of observations increases. Once the number of observations is greater than 6, the mean rotation error is less than 1°. As the number of observations continues to increase, the mean error gradually levels off. With only two observations, a mean rotation error of 3.74° and a mean translation error of 28.31 mm are obtained. The experimental data show that the calibration method of this paper is feasible and achieves high calibration accuracy.
To verify the feasibility of the proposed method, its calibration accuracy is compared with representative methods from current 2D LRF and camera calibration work. For the methods of Refs. [13,23] and the method of this paper, the experiments are divided into three groups of 2, 10 and 20 observations, and each group of experiments is conducted 10 times. For each method, the LRF is calibrated against both the left and right cameras to obtain the corresponding rotation and translation matrices. Then, the coordinate transformation matrix between the two cameras is calculated and compared with the ground truths $R_{C_r}^{C_l}$ and $T_{C_r}^{C_l}$. The average of the 10 experimental results is used as the calibration accuracy. The comparison results are shown in Figure 9. From the experimental results, it can be seen that with only two observations, the method in this paper obtains smaller rotation and translation errors than the other two methods. With multiple observations, the method in this paper achieves a mean rotation error of 0.68° and a mean translation error of 6.67 mm, which is better than the other two methods. Finally, the calibration results are shown in Table 3, where ${}_{C_r}^{C_l}P_{gt}$ is the ground truth of the relative position of the left and right cameras, and ${}_{C_r}^{C_l}P_{calib}$ is the calibration result of the relative position of the left and right cameras. ${}_{L}^{C_l}P_{des}$ is the designed installation position of the LRF relative to the depth camera, and ${}_{L}^{C_l}P_{calib}$ is the calibration result of the relative position of the LRF and camera.
Based on the calibration results, the LRF data are projected into the colored depth point cloud map of the depth camera, and the effects before and after calibration are compared visually. The effect before calibration is shown in Figure 10a,b: the 2D lidar point cloud is obscured by the depth point cloud of the camera, and the position of the laser point cloud is lower than the actual observation position. The effect after calibration is shown in Figure 10c,d: the point cloud data of the depth camera and the 2D LRF point cloud essentially overlap in the shared dimension, which meets the application requirements. Because the vehicle body model occludes the view, the tf coordinate frames of the reference systems in ROS are additionally shown in Figure 11.
The calibration method in this paper is experimentally verified to be simple and easy to implement, and the joint calibration of the 2D LRF and depth camera is completed with high accuracy and efficiency without excessive position and angle observations. Compared with Ref. [23], there is no complicated feature point extraction and display operation in the calibration process, and no need to adjust the calibration plate position manually. The algorithm is designed to adaptively adjust the AGV observation poses. The calibration method has no restriction on the relative installation position of 2D LRF and depth camera, and the calibration plate is easy and convenient to make. It greatly simplifies the calibration process of the 2D LRF and depth camera and improves the calibration efficiency. Compared with Refs. [13,23], the calibration method in this paper has obvious advantages in calibration accuracy, efficiency and operability.

5. Conclusions

In this paper, we present a novel and simplified calibration method that constructs point-line constraints between 2D LRF and depth camera observation data by using a specific calibration plate, which effectively enables the fusion of 2D LRF point cloud data and depth camera 3D point cloud data. The specific calibration plate proposed in this paper eliminates the influence of the environment, calibration plate fabrication limitations and sensor installation limitations on the calibration. Compared with previous methods, we greatly simplify the calibration process of the depth camera and 2D LRF by automatically adjusting the number of observations against a defined error threshold. A series of experiments verifies that our method achieves higher accuracy than the compared methods with only two observations. Our method also extends to multiple observations to reduce noise. The calibration accuracy, efficiency and operability of the method meet the practical requirements of mobile robots.

Author Contributions

W.Z. and K.H. contributed to the main idea of this paper; H.C. and Z.J. prepared the experimental platform; W.Z. performed the experiments, analyzed the data and wrote the paper. K.H. is the project principal investigator; Q.Z. and Y.X. gave some data analysis and revision suggestions. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NSFC-Shenzhen Robot Basic Research Center project (U2013204) and SIAT-CUHK Joint Laboratory of Precision Engineering.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LRF   Laser Range Finder
2D    Two-Dimensional
LSQ   Least Square
P3P   Perspective Three-Point
AGV   Automated Guided Vehicles

References

  1. Khurana, A.; Nagla, K.S. Extrinsic calibration methods for laser range finder and camera: A systematic review. MAPAN 2021, 36, 669–690. [Google Scholar] [CrossRef]
  2. Zhu, Z.; Ma, Y.; Zhao, R.; Liu, E.; Zeng, S.; Yi, J.; Ding, J. Improve the Estimation of Monocular Vision 6-DOF Pose Based on the Fusion of Camera and Laser Rangefinder. Remote Sens. 2021, 13, 3709. [Google Scholar] [CrossRef]
  3. Shao, W.; Zhang, H.; Wu, Y.; Sheng, N. Application of Fusion 2D Lidar and Binocular Vision in Robot Locating Obstacles. J. Intell. Fuzzy Syst. 2021, 41, 4387–4394. [Google Scholar] [CrossRef]
  4. Lei, G.; Yao, R.; Zhao, Y.; Zheng, Y. Detection and Modeling of Unstructured Roads in Forest Areas Based on Visual-2D Lidar Data Fusion. Forests 2021, 12, 820. [Google Scholar] [CrossRef]
  5. Zou, Y.; Chen, T. Laser vision seam tracking system based on image processing and continuous convolution operator tracker. Opt. Lasers Eng. 2018, 105, 141–149. [Google Scholar] [CrossRef]
  6. Li, A.; Cao, J.; Li, S.; Huang, Z.; Wang, J.; Liu, G. Map Construction and Path Planning Method for a Mobile Robot Based on Multi-Sensor Information Fusion. Appl. Sci. 2022, 12, 2913. [Google Scholar] [CrossRef]
  7. Peng, M.; Wan, Q.; Chen, B.; Wu, S. A Calibration Method of 2D Lidar and a Camera Based on Effective Lower Bound Estimation of Observation Probability. Electron. Inf. J. 2022, 44, 1–10. [Google Scholar]
  8. Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2301–2306. [Google Scholar]
  9. Vasconcelos, F.; Barreto, J.P.; Nunes, U. A minimal solution for the extrinsic calibration of a camera and a laser-rangefinder. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2097–2107. [Google Scholar] [CrossRef]
  10. Zhou, L. A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences. IEEE Sens. J. 2014, 14, 442–454. [Google Scholar] [CrossRef]
  11. Kaiser, C.; Sjoberg, F.; Delcura, J.M.; Eilertsen, B. SMARTOLEV—An orbital life extension vehicle for servicing commercial spacecrafts in GEO. Acta Astronaut. 2008, 63, 400–410. [Google Scholar] [CrossRef]
  12. Li, G.H.; Liu, Y.H.; Dong, L. An algorithm for extrinsic parameters calibration of a camera and a laser range finder using line features. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 3854–3859. [Google Scholar]
  13. Kwak, K.; Huber, D.F.; Badino, H. Extrinsic calibration of a single line scanning lidar and a camera. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3283–3289. [Google Scholar]
  14. Dong, W.B.; Isler, V. A novel method for the extrinsic calibration of a 2D laser rangefinder and a camera. IEEE Sens. J. 2018, 18, 4200–4211. [Google Scholar] [CrossRef]
  15. Gomez-Ojeda, R.; Briales, J.; Fernandez-Moral, E.; Gonzalez-Jimenez, J. Extrinsic calibration of a 2d laser-rangefinder and a camera based on scene corners. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3611–3616. [Google Scholar]
  16. Itami, F.; Yamazaki, T. A simple calibration procedure for a 2D LiDAR with respect to a camera. IEEE Sens. J. 2019, 19, 7553–7564. [Google Scholar] [CrossRef]
  17. Itami, F.; Yamazaki, T. An improved method for the calibration of a 2-D LiDAR with respect to a camera by using a checkerboard target. IEEE Sens. J. 2020, 20, 7906–7917. [Google Scholar] [CrossRef]
  18. Liu, D.X.; Dai, B.; Li, Z.H.; He, H. A method for calibration of single line laser radar and camera. J. Huazhong Univ. Sci. Technol. (Natural Sci. Ed.) 2008, 36, 68–71. [Google Scholar]
  19. Le, Q.V.; Ng, A.Y. Joint calibration of multiple sensors. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 3651–3658. [Google Scholar]
  20. Li, Y.; Ruichek, Y.; Cappelle, C. 3D triangulation based extrinsic calibration between a stereo vision system and a LIDAR. In Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 797–802. [Google Scholar]
  21. Chai, Z.Q.; Sun, Y.X.; Xiong, Z.H. A novel method for lidar camera calibration by plane fitting. In Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics(AIM), Auckland, New Zealand, 9–12 July 2018; pp. 286–291. [Google Scholar]
  22. Tian, Z.; Huang, Y.; Zhu, F.; Ma, Y. The extrinsic calibration of area-scan camera and 2D laser rangefinder (LRF) using checkerboard trihedron. Access IEEE 2020, 8, 36166–36179. [Google Scholar] [CrossRef]
  23. Huang, Z.; Su, Y.; Wang, Q.; Zhang, C. Research on external parameter calibration method of two-dimensional lidar and visible light camera. J. Instrum. 2020, 41, 121–129. [Google Scholar]
  24. Tu, Y.; Song, Y.; Liu, F.; Zhou, Y.; Li, T.; Zhi, S.; Wang, Y. An Accurate and Stable Extrinsic Calibration for a Camera and a 1D Laser Range Finder. IEEE Sens. J. 2022, 22, 9832–9842. [Google Scholar] [CrossRef]
  25. Levinson, J.; Thrun, S. Automatic online calibration of cameras and lasers. Robot. Sci. Syst. 2013, 2, 7. [Google Scholar]
  26. Zhao, W.Y.; Nister, D.; Hus, S. Alignment of continuous video onto 3D point clouds. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1305–1318. [Google Scholar] [CrossRef]
  27. Yang, B.; Chen, C. Automatic registration of UAV-borne sequent images and LiDAR data. ISPRS J. Photogramm. Remote Sens. 2015, 101, 262–274. [Google Scholar] [CrossRef]
  28. Schneider, N.; Piewak, F.; Stiller, C.; Franke, U. RegNet: Multimodal sensor registration using deep neural networks. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1803–1810. [Google Scholar]
  29. Yu, G.; Chen, J.; Zhang, K.; Zhang, X. Camera External Self-Calibration for Intelligent Vehicles. In Proceedings of the 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), Vancouver, BC, Canada, 12–14 June 2019; pp. 1688–1693. [Google Scholar]
  30. Jiang, P.; Osteen, P.; Saripalli, S. Calibrating LiDAR and Camera using Semantic Mutual information. arXiv 2021, arXiv:2104.12023. [Google Scholar]
  31. Hu, H.; Han, F.; Bieder, F.; Pauls, J.H.; Stiller, C. TEScalib: Targetless Extrinsic Self-Calibration of LiDAR and Stereo Camera for Automated Driving Vehicles with Uncertainty Analysis. arXiv 2022, arXiv:2202.13847. [Google Scholar]
  32. Lv, X.; Wang, S.; Ye, D. CFNet: LiDAR-camera registration using calibration flow network. Sensors 2021, 21, 8112. [Google Scholar] [CrossRef]
  33. Zhang, X.; Zeinali, Y.; Story, B.A.; Rajan, D. Measurement of three-dimensional structural displacement using a hybrid inertial vision-based system. Sensors 2019, 19, 4083. [Google Scholar] [CrossRef]
Figure 1. Relationship of coordinate systems.
Figure 2. Imaging principle of binocular stereo camera.
Figure 3. Combined 2D LRF and depth camera calibration board.
Figure 4. Feature point extraction.
Figure 5. Experimental equipment and sensor installation location. (a) Experimental equipment. (b) Relationship of sensor mounting position.
Figure 8. Rotation and translation errors versus number of observations. (a) Mean and standard deviation of translational errors. (b) Mean and standard deviation of rotation errors.
Figure 9. Comparison with the methods of Kwak and Huang. (a) Mean of translational errors. (b) Mean of rotation errors.
Figure 10. Calibration effect of depth camera and 2D LRF. (a) Front view of before calibration. (b) Top view before calibration. (c) Front view after calibration. (d) Top view after calibration.
Figure 11. Tf coordinate relationship of reference system in ROS.
Table 1. 2D LRF basic parameters.
Range/m | Rate/Hz | Resolution/° | Accuracy/mm | Angle/°
0.1–30 | 10–50 | 0.042 | ±25 | 360
Table 2. Depth camera basic parameters.
Depth Range/m | Depth FPS/Hz | Resolution | Aperture | Field/°
0.2–20 | 15–100 | 3840 × 1080 | f/1.8 | 110 H × 70 V × 120 D
Table 3. Calibration results.
Pose | X/mm | Y/mm | Z/mm | Yaw/° | Pitch/° | Roll/°
${}_{C_r}^{C_l}P_{gt}$ | 0 | 120 | 0 | 0 | 0 | 0
${}_{C_r}^{C_l}P_{calib}$ | 1.27 | 125.75 | 3.13 | 0 | 0 | 0.11
${}_{L}^{C_l}P_{des}$ | 128 | 60 | 100 | 0 | 0 | 0
${}_{L}^{C_l}P_{calib}$ | 162.35 | 72.51 | 32.58 | 0 | 0 | 0.57
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

