Article

A Convenient Calibration Method for LRF-Camera Combination Systems Based on a Checkerboard

1 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
2 University of Chinese Academy of Sciences, Beijing 100149, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2019, 19(6), 1315; https://doi.org/10.3390/s19061315
Submission received: 6 February 2019 / Revised: 1 March 2019 / Accepted: 11 March 2019 / Published: 15 March 2019
(This article belongs to the Special Issue Visual Sensors)

Abstract

In this paper, a simple and easy-to-use high-precision calibration method is proposed for the LRF-camera combined measurement systems that are widely used at present. The method can be applied not only to mainstream 2D and 3D LRF-camera systems, but also to newly developed 1D LRF-camera combined systems, and it requires only a calibration board and at least three sets of recorded data. First, the camera parameters and distortion coefficients are decoupled through the distortion center. Then, the spatial coordinates of the laser spots are solved using line and plane constraints, and the LRF-camera extrinsic parameters are estimated. In addition, we establish a cost function for optimizing the system. Finally, the calibration accuracy and characteristics of the method are analyzed through simulation experiments, and the validity of the method is verified through the calibration of a real system.

1. Introduction

In the field of measurement, a single sensor is seldom able to perform high-precision measurements by itself. Combined multi-sensor measurement schemes can effectively exploit the characteristics of each sensor, leveraging their complementary advantages and improving the accuracy and robustness of the measurement system. As [1] shows, laser range finders (LRFs) provide high-precision distance information, while cameras provide rich image information. The combination of LRFs and cameras has therefore attracted wide attention, with interesting applications in navigation [2], human detection [3] and 3D texture reconstruction [4].
Compared with the current mainstream schemes combining scanning lasers and vision, the more challenging combination of 1D laser ranging and vision has attracted the attention of researchers due to its low cost and wide applicability. The Shuttle Radar Topography Mission (SRTM) [5] achieved high-precision Interferometric Synthetic Aperture Radar (IFSAR) measurements of long-range cooperative targets. Ordóñez [6] proposed a combination of a camera and a Laser Distance Meter (LDM) to estimate the length of a line segment in an unknown plane. Wu [7] applied this method to a visual odometry (VO) system and realized its application in quasi-planar scenes. In our previous work, we further extended this method and constructed a complete SLAM method based on laser-vision fusion [1].
Sensor calibration is the prerequisite of data fusion; it includes the calibration of each sensor's own parameters and of the relative relationship between the sensors [8]. However, although it is a necessary precondition for high-precision measurement, the calibration technology of 1D laser-camera systems has seen little development. Existing calibration algorithms based on scanning laser ranging cannot be applied, while traditional one-dimensional laser calibration algorithms require a high-precision manipulator, a laser interferometer, or other complex equipment.
In this paper, a simple and feasible high-precision laser-camera calibration algorithm is proposed, which can calibrate the parameters of the laser and camera sensors with only simple data processing. First, the camera parameters and distortion coefficients are determined using a non-iterative method. Then, the coordinates of the laser spot in the camera coordinate system are obtained by back-projecting the laser image points, and the initial values of the camera and laser ranging extrinsic parameters are estimated. Finally, the parameters are optimized through the parameterization of the rotation matrix [9] and the Gröbner basis method [10]. Compared with the existing methods, the main contributions of this paper are as follows:
(1)
The method proposed in this paper has wider applicability. It can be used for the joint calibration of vision sensors and LRFs from 1D to 3D.
(2)
Compared with existing 1D laser-vision calibration methods, the proposed method can be realized using a simple chessboard lattice, without complicated customized targets and high-precision mechanical structures.
(3)
The accuracy and usability of the proposed method are verified by simulation and observation experiments.
This paper is organized as follows: the existing methods related to our work are outlined in the following section. Section 3 and Section 4 describe the mathematical model and illustrate the proposed algorithm. In Section 5, we evaluate the solution through simulation and observation experiments. Finally, conclusions and future work are provided in Section 6.

2. Related Work

Regarding the extrinsic parameters between LRFs and vision sensors, it is helpful to combine the high-precision distance information of laser ranging with the high lateral resolution of vision to achieve high-precision pose estimation. However, most existing methods calibrate 2D or 3D LRFs with cameras.
Vasconcelos [11] calibrated the camera-laser extrinsic parameters by moving a checkerboard freely. This method assumes that the intrinsic parameters are known and accurate, and converts the extrinsic calibration problem into a plane-to-plane registration problem with an exact solution. Similar work includes Scaramuzza [12] and Ha [13]. Unnikrishnan [14] realized the association and calibration of 3D LiDAR data and image data through feature point retrieval. Zhang [15] used a moving LRF and camera to achieve self-calibration of their extrinsic parameters through motion constraints. Viejo [16] associated the two data sets by arranging control points, and calibrated the extrinsic parameters of a 3D LiDAR and a monocular camera.
However, the above algorithms are designed to calibrate the extrinsic parameters of 2D or 3D scanning lasers and vision systems, and they cannot be used for 1D laser ranging without a scanning mechanism due to the lack of constraints. For the calibration of 1D LRFs, traditional methods mostly establish the association between the two sensors by means of a complex manipulator or a specific calibration target. For example, Zhu's [17] calibration algorithm calibrates the direction and position parameters of a laser range finder based on spherical fitting. The calibration accuracy is high, but the solution is highly customized and not universal. Lu [18] designed a multi-directional calibration block to calibrate the laser beam direction of a point laser probe on the platform of a coordinate measuring machine. Zhou [19] proposed a new calibration algorithm for serial coordinate measuring machines (CMMs) with cylindrical and conical surfaces as calibration objects. A similar calibration method is used in the implementation of our LRF-camera SLAM method [1], where the relative rotation and translation of the two sensors' coordinate systems are estimated through a high-precision laser tracker.
Although such methods can achieve high accuracy, they require the installation of the sensors on precision measuring equipment, which entails high calibration cost and complex operation, and cannot meet the needs of low-cost, rapidly deployed scenarios such as existing robots. In 2010, Ordóñez [6] proposed a combined camera-LRF method to measure short distances in a plane. In another study [20], the authors introduced a preliminary calibration method for a digital camera and a laser rangefinder. The experiment involves the manual adjustment of the projection center of the laser pointer, and only two laser projections are used, so both the accuracy and the robustness of the calibration are problematic. After that, Wu et al. [7] proposed a two-part calibration method based on a RANSAC scheme, and solved the corresponding line equation in the image by creating an index table of laser spots. However, this method cannot be applied well when the laser beam is close to the optical axis of the camera, and no final accuracy evaluation criteria are given.
Zhang [21] proposed a simple calibration method for camera intrinsic parameters, where the parameters are determined using a non-linear method, achieving high accuracy. Afterwards, based on Zhang's framework, researchers improved the accuracy and extended the applicable scenes by designing different forms of targets [22,23,24] and improving the calibration algorithm [25,26,27]. Hartley [28] introduced the division distortion model to correct imaging distortion. On this basis, Hong [29] further explored the calibration of large-distortion cameras.
Currently, the calibration of omnidirectional cameras has attracted wide attention, driven by the need to improve the user's freedom and immersion in virtual reality and autonomous driving. Li et al. [30] proposed a multi-camera calibration scheme based on a random-pattern calibration board. Their method supports the calibration of a camera system comprising normal pinhole cameras. An [31] proposed new intrinsic and extrinsic calibration methods for omnidirectional cameras based on the ArUco marker and a ChArUco board. The calibration structure and method avoid the overly complicated procedures otherwise needed to accurately calibrate multiple cameras.
At the same time, the calibration board also plays an important role in other calibration processes. Liu [32] studied different applications of lasers and cameras and proposed a calibration method for multiple non-overlapping-view cameras using a scanning laser rangefinder. In that work, the association between laser distance information and camera images is established through a specific calibration plate, so as to realize the relative pose estimation between the cameras. Inspired by Liu's work [32], we establish a constraint between 1D laser ranging and monocular vision by combining line and plane constraints, so as to determine the related extrinsic parameters. Considering that the camera imaging model has a direct impact on the calibration accuracy, we improve Zhang's method [21] for camera calibration by replacing the traditional polynomial model with the division distortion model, and we obtain the linear solution for the subsequent iterative optimization using least squares, following Hartley [28] and Hong [29]. Thus, the problem of falling into locally optimal solutions is avoided, and the calibration speed is greatly improved. Combining the above innovations, a convenient method for calibrating the parameters of a camera-laser measurement system is realized, which can calibrate the complete measurement system, including the camera intrinsic parameters, the distortion coefficients and the camera-laser extrinsic parameters, in one operation.

3. Measurement Model

Previous researchers have established relatively mature camera imaging and laser measurement models. We integrate the two mathematical models and construct a complete mathematical description of the coordinate systems.
As shown in Figure 1, $O_C$ is the camera coordinate system, $O_C Z_C$ is the optical axis direction of the camera, $O_{uv}$ is the image plane of the camera, $O_T$ is the coordinate system of the target itself, point $P_l$ is the spatial position of the laser spot, point $P_w$ is the spatial coordinate of a target control point and $O_l$ represents the coordinate system of the 1D laser ranging module. We set the camera coordinate system $O_C$ as the measurement coordinate system $O_M$ of the system. In the following, we introduce the imaging model of the monocular camera and the 1D laser ranging model, and convert and fuse the data through the extrinsic parameters $[R_{l2C}\ T_{l2C}]$.

3.1. Camera Imaging Model

In order to describe the imaging process of a monocular camera more accurately, we combine the lens distortion model with the pinhole imaging model and introduce the shift of the distortion center $e$ relative to the image center $O_P$ [28]. In the camera coordinate system, $O_{pxy}$ is the physical coordinate system of the image plane and $O_{uv}$ represents the image coordinate system. The image center $O_P$ denotes the intersection of the optical axis and the image plane.
The ideal imaging process can be described as the projection of a point $P_i^T$ ($X_i^T = [X_i^T\ Y_i^T\ Z_i^T\ 1]^T$) in the world coordinate system to the imaging point $P_i^u$ ($x_i^u = [u_i^u\ v_i^u\ 1]^T$) in the image plane. The mathematical expression is as follows:

$$\rho_i x_i^u = A_C T_{T2C} X_i^T = \begin{bmatrix} f_u & s & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} X_i^T \\ Y_i^T \\ Z_i^T \\ 1 \end{bmatrix} \tag{1}$$
where $\rho_i$ is a depth scale factor and the intrinsic matrix $A_C$ is described by a five-parameter model: $f_u$, $f_v$ are the focal lengths, $[u_0\ v_0]^T$ is the coordinate of the image center $O_P$, and $s$ is the skew coefficient. $T_{T2C}$ is the transformation matrix relating $O_T$ to $O_C$, and it can be expressed as a rotation matrix $R_{T2C}$ combined with a translation vector $t_{T2C}$.
Due to lens design and manufacturing, the actual imaging process is distorted. We introduce the division distortion model to describe this effect. The mathematical expression is as follows:

$$x_i^u - e = \frac{x_i^d - e}{1 + \lambda_1 (r_i^d)^2 + \lambda_2 (r_i^d)^4 + \cdots} \tag{2}$$

where $x_i^d = [u_i^d\ v_i^d\ 1]^T$ represents the actual (distorted) position of the projection of point $P_i^T$; $\lambda_1$ and $\lambda_2$ are the distortion coefficients, and $r_i^d$ represents the distance from point $x_i^d$ to the distortion center $e = [du_0\ dv_0]^T$, expressed as $r_i^d = \sqrt{(u_i^d - du_0)^2 + (v_i^d - dv_0)^2}$.
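To make the model concrete, the following NumPy sketch (our illustration, not the authors' code) maps a distorted pixel to its ideal position under the two-parameter division model of Equation (2); the numeric values are taken from the simulation settings in Table 2, and the test pixel is hypothetical.

```python
import numpy as np

def undistort_division(x_d, e, lam1, lam2):
    """Division model, Eq. (2): x_u - e = (x_d - e) / (1 + l1*r^2 + l2*r^4)."""
    d = x_d - e                          # offset from the distortion center e
    r2 = d @ d                           # squared distance r_d^2 to e
    return e + d / (1.0 + lam1 * r2 + lam2 * r2**2)

e = np.array([509.0, 380.0])             # distortion center (Table 2)
x_d = np.array([700.0, 500.0])           # a hypothetical distorted pixel
x_u = undistort_division(x_d, e, 6.15e-7, 1.6e-13)
```

Note that, unlike the polynomial model, the division model maps distorted coordinates to ideal ones in closed form, which is what makes the non-iterative solution below possible.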
In order to illustrate the method more clearly, the most important parameters used in this paper and their meanings are shown in Table 1.

3.2. LRF Model

The mathematical model of the 1D laser ranging module is relatively simple. The module outputs single-point distance information by observing the reflected signal and calculating the optical path [33]. Determining the origin and beam direction of the laser ranging module then allows the laser spot to be expressed in the measurement coordinate system. In order to represent the measurement results in the system measurement coordinate system $O_M$, we set up a Euclidean 3D coordinate system $O_l$ for the LRF module: the laser emission direction is $O_l Z_l$, the $O_l X_l$ axis is perpendicular to $O_l Z_l$ and parallel to the $O_C xy$ plane, and the direction of $O_l Y_l$ is determined by the right-hand rule, as shown in Figure 1. The measured distance $d_i^l$ represents the distance from the origin $O_l$ to the laser spot $P_i^l$.
In the extrinsic calibration process, the coordinate origin $O_l$ of the laser ranging coordinate system and the laser emission direction $O_l Z_l$ need to be calculated. Finally, the conversion relation between the camera measurement system $O_M$ and the LRF coordinate system $O_l$ is estimated, i.e., the rotation matrix $R_{l2M}$ and the translation vector $t_{l2M}$ are determined.
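Once $R_{l2M}$ and $t_{l2M}$ are known, fusing a range reading into the measurement frame is a one-line transform. The following sketch (our illustration; the identity rotation and 50 mm offset mirror the simulation setup of Section 5.1) expresses a range measurement as a 3D point in $O_M$:

```python
import numpy as np

def laser_point_in_camera(d, R_l2M, t_l2M):
    """Express the laser spot, measured at range d along O_l Z_l,
    in the measurement (camera) frame via the extrinsics (R_l2M, t_l2M)."""
    p_l = np.array([0.0, 0.0, d])        # the LRF reports only a range on its z-axis
    return R_l2M @ p_l + t_l2M

R = np.eye(3)                            # simulated extrinsics (Table 2)
t = np.array([50.0, 0.0, 0.0])           # 50 mm offset along O_M X_M
p_cam = laser_point_in_camera(1000.0, R, t)   # a 1 m reading
```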

4. Methodology

Calibration of the measurement system is the process of determining the model parameters of the measurement system. For our system, through the measurement and imaging of a specific target, the model parameters are determined using the correspondence between the coordinates of the control points and their image coordinates. The main parameters are the intrinsic parameters of the camera and the extrinsic parameters between the LRF and the camera.
The specific calibration process is divided into four main steps: (1) estimating the camera distortion center $e$; (2) decoupling the intrinsic parameters $A_C$ and the distortion coefficients $\lambda_1, \lambda_2$ and determining them independently; (3) finding the extrinsic parameters $R_{l2M}, t_{l2M}$ that translate the laser coordinate system to the measurement coordinate system; (4) determining the optimal solution $A_C, \lambda_1, \lambda_2, R_{l2M}, t_{l2M}$ using the Gröbner basis method. In this section, we elaborate on these steps.

4.1. The Center of Distortion

In many studies, it is usually assumed that the distortion center coincides with the principal point, but Hartley [28] determined experimentally that there is a certain deviation between them. During the calibration process, we use a checkerboard as the calibration object, and extract the corners $P_i^T$ of the checkerboard as the control points for camera calibration. Since the corners are distributed on a plane, we set $Z_i^T = 0$ in the target coordinate system, in which case the imaging model can be expressed as:
$$\rho_i x_i^u = P X_i^T = A_C [r_1\ r_2\ r_3\ t] \begin{bmatrix} X_i^T \\ Y_i^T \\ 0 \\ 1 \end{bmatrix} \tag{3}$$

where $r_1, r_2, r_3$ are the column vectors of the rotation matrix $R_{T2C}$. The above equation can be simplified as:

$$\rho_i x_i^u = H X_i^T = A_C [r_1\ r_2\ t] \begin{bmatrix} X_i^T \\ Y_i^T \\ 1 \end{bmatrix} \tag{4}$$

The matrix $H$, called the homography matrix, expresses the mapping between the corners of the checkerboard and the image points. The coordinates of $P_i^T$ are abbreviated as $X_i^T = [X_i^T\ Y_i^T\ 1]^T$.
From the division model of Equation (2), we obtain:

$$x_i^d = e + k_i (x_i^u - e), \quad k_i = 1 + \lambda_1 (r_i^d)^2 + \lambda_2 (r_i^d)^4 + \cdots \tag{5}$$
We multiply the equation on the left by $[e]_\times$ and combine it with Equation (4). Since $[e]_\times e = 0$:

$$[e]_\times x_i^d = k_i [e]_\times H X_i^T, \quad [e]_\times = \begin{bmatrix} 0 & -1 & dv_0 \\ 1 & 0 & -du_0 \\ -dv_0 & du_0 & 0 \end{bmatrix} \tag{6}$$

We then multiply on the left by $(x_i^d)^T$; since $(x_i^d)^T [e]_\times x_i^d = 0$, we obtain:

$$(x_i^d)^T [e]_\times H X_i^T = 0 \tag{7}$$
Let $F_H = [e]_\times H$. $F_H$ is called the fundamental matrix of distortion and satisfies:

$$(x_i^d)^T F_H X_i^T = 0, \quad F_H = \begin{bmatrix} F_{11} & F_{12} & F_{13} \\ F_{21} & F_{22} & F_{23} \\ F_{31} & F_{32} & F_{33} \end{bmatrix} \tag{8}$$
We can solve for the entries of the fundamental matrix $F_H$ using eight pairs of corresponding corner points. The equation can be formulated as:

$$A f_H = 0 \tag{9}$$

where:

$$A = \begin{bmatrix} x_1^d X_1^T & x_1^d Y_1^T & x_1^d & y_1^d X_1^T & y_1^d Y_1^T & y_1^d & X_1^T & Y_1^T & 1 \\ \vdots & & & & & & & & \vdots \\ x_n^d X_n^T & x_n^d Y_n^T & x_n^d & y_n^d X_n^T & y_n^d Y_n^T & y_n^d & X_n^T & Y_n^T & 1 \end{bmatrix}, \quad f_H = [F_{11}\ F_{12}\ F_{13}\ F_{21}\ F_{22}\ F_{23}\ F_{31}\ F_{32}\ F_{33}]^T \tag{10}$$
The equations are solvable using least squares when more than eight point pairs are available. The corresponding distortion center $e$ is the left null vector of $F_H$:

$$e^T [e]_\times = 0 \;\Rightarrow\; e^T F_H = e^T [e]_\times H = 0 \tag{11}$$

So far, we have obtained the image coordinates of the distortion center $e$. The corresponding homography matrix $H$ can then be recovered from the fundamental matrix $F_H$.
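As an illustration of this step, the following sketch (our code, assuming corner correspondences have already been extracted) solves Equation (9) by SVD and reads off the distortion center from the left null space of $F_H$, as in Equation (11):

```python
import numpy as np

def distortion_center(x_d, X_T):
    """Estimate F_H (Eqs. 8-10) and the distortion center e (Eq. 11).
    x_d: (n, 2) distorted corner pixels; X_T: (n, 2) planar target corners; n >= 8."""
    n = x_d.shape[0]
    A = np.zeros((n, 9))
    for i, ((u, v), (X, Y)) in enumerate(zip(x_d, X_T)):
        A[i] = [u * X, u * Y, u, v * X, v * Y, v, X, Y, 1.0]
    _, _, Vt = np.linalg.svd(A)          # least-squares null vector of A f_H = 0
    F_H = Vt[-1].reshape(3, 3)
    U, _, _ = np.linalg.svd(F_H)         # left null vector: e^T F_H = 0
    e = U[:, -1]
    return F_H, e[:2] / e[2]             # e in inhomogeneous pixel coordinates
```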

4.2. Decoupling Camera Parameters

If the origin of the image coordinate system is moved to the distortion center $e$, the new distortion center after translation is $\hat{e} = [0\ 0\ 1]^T$. In the new coordinate system, Equation (8) is expressed as:

$$(\hat{x}_i^d)^T \hat{F}_H X_i^T = 0 \tag{12}$$

where $\hat{x}_i^d$ and $\hat{F}_H$ represent the transformed image coordinates $x_i^d$ and fundamental matrix $F_H$. The transformation relationship is as follows:

$$\hat{x}_i^d = T_{e2\hat{e}}\, x_i^d, \quad \hat{F}_H = T_{e2\hat{e}}^{-T} F_H, \quad \text{where } T_{e2\hat{e}} = \begin{bmatrix} 1 & 0 & -du_0 \\ 0 & 1 & -dv_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{13}$$
From the definition of $\hat{F}_H$:

$$\hat{F}_H = [\hat{e}]_\times \hat{H} = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \hat{H} \tag{14}$$

Let:

$$\hat{F}_H = \begin{bmatrix} \hat{F}_1 \\ \hat{F}_2 \\ \hat{F}_3 \end{bmatrix} = \begin{bmatrix} \hat{F}_{11} & \hat{F}_{12} & \hat{F}_{13} \\ \hat{F}_{21} & \hat{F}_{22} & \hat{F}_{23} \\ \hat{F}_{31} & \hat{F}_{32} & \hat{F}_{33} \end{bmatrix} \tag{15}$$

and:

$$\hat{H} = \begin{bmatrix} \hat{H}_1 \\ \hat{H}_2 \\ \hat{H}_3 \end{bmatrix} = \begin{bmatrix} \hat{H}_{11} & \hat{H}_{12} & \hat{H}_{13} \\ \hat{H}_{21} & \hat{H}_{22} & \hat{H}_{23} \\ \hat{H}_{31} & \hat{H}_{32} & \hat{H}_{33} \end{bmatrix} \tag{16}$$

Substituting Equations (15) and (16) into Equation (14) yields:

$$\hat{H}_1 = \hat{F}_2, \quad \hat{H}_2 = -\hat{F}_1 \tag{17}$$
So far, the first two rows $\hat{H}_1, \hat{H}_2$ of the homography matrix have been obtained. Referring to Equations (2) and (4), the imaging of distorted points after translation can be expressed as:

$$\frac{\rho_i\, \hat{x}_i^d}{1 + \lambda_1 (\hat{r}_i^d)^2 + \lambda_2 (\hat{r}_i^d)^4 + \cdots} = \hat{H} X_i^T \tag{18}$$

Rearranging yields the following equation set:

$$\begin{bmatrix} \hat{x}_i^d (X_i^T)^T & -(\hat{F}_2 X_i^T)(\hat{r}_i^d)^2 & -(\hat{F}_2 X_i^T)(\hat{r}_i^d)^4 \\ \hat{y}_i^d (X_i^T)^T & (\hat{F}_1 X_i^T)(\hat{r}_i^d)^2 & (\hat{F}_1 X_i^T)(\hat{r}_i^d)^4 \end{bmatrix} \begin{bmatrix} \hat{H}_3^T \\ \lambda_1 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} \hat{F}_2 X_i^T \\ -\hat{F}_1 X_i^T \end{bmatrix} \tag{19}$$
Combining Equation (19) with Equation (17), two equations are obtained for each pair of corner points. When the number of corresponding points satisfies $N \geq n + 3$ (where $n$ denotes the number of distortion parameters), an overdetermined system is obtained. Enough points can be collected by moving the target, as shown in Figure 2. The homography matrix $\hat{H}$ and the distortion coefficients $\lambda_1, \lambda_2$ can then be obtained using the least squares method.
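For concreteness, the sketch below (our code, dependent on our reconstruction of the signs in Equation (19) above, so treat it as illustrative) stacks the two rows of Equation (19) for every corner and solves for $\hat{H}_3$, $\lambda_1$, $\lambda_2$ by linear least squares:

```python
import numpy as np

def solve_h3_and_distortion(x_hat, X_T, F1, F2, r2):
    """Solve Eq. (19) in the least-squares sense for [H3, lambda1, lambda2].
    x_hat: (n, 2) corners translated so the distortion center is the origin;
    X_T: (n, 3) homogeneous target corners; F1, F2: first two rows of F_hat;
    r2: (n,) squared radii (r_hat_d)^2."""
    rows, rhs = [], []
    for (xh, yh), X, r in zip(x_hat, X_T, r2):
        f1X, f2X = F1 @ X, F2 @ X
        rows.append(np.concatenate([xh * X, [-f2X * r, -f2X * r**2]]))
        rhs.append(f2X)
        rows.append(np.concatenate([yh * X, [f1X * r, f1X * r**2]]))
        rhs.append(-f1X)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:3], sol[3], sol[4]       # H3 row, lambda1, lambda2
```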

4.3. Parameter Solution

From the perspective projection model, the imaging relationship of the translated sequence can be expressed as follows:

$$\rho_i x_i^u = \rho_i T_{e2\hat{e}}^{-1} \hat{x}_i^u = T_{e2\hat{e}}^{-1} \hat{H} X_i^T = H X_i^T \tag{20}$$

It follows that:

$$H = T_{e2\hat{e}}^{-1} \hat{H} = \begin{bmatrix} 1 & 0 & du_0 \\ 0 & 1 & dv_0 \\ 0 & 0 & 1 \end{bmatrix} \hat{H} \tag{21}$$
Equation (19) can be solved to obtain $\hat{H}$, and the initial homography matrix $H$ is then calculated by substituting into Equation (21). We write $H = [H_1\ H_2\ H_3] = A_C [r_1\ r_2\ t]$. Using the orthogonality and unit norm of the rotation columns $[r_1\ r_2\ r_3]$, we obtain:

$$\begin{aligned} r_1^T r_2 = 0 \ &\Rightarrow\ H_1^T A_C^{-T} A_C^{-1} H_2 = 0 \\ r_1^T r_1 = r_2^T r_2 \ &\Rightarrow\ H_1^T A_C^{-T} A_C^{-1} H_1 = H_2^T A_C^{-T} A_C^{-1} H_2 \end{aligned} \tag{22}$$
Therefore, at least three images are needed to find the five unknowns in the camera intrinsic matrix $A_C$. If the camera collects $n$ images from different directions for calibration, a set of linear equations containing $2n$ constraint equations can be established, which can be written in matrix form as:

$$V b = 0 \tag{23}$$

where $V$ is the coefficient matrix and $b$ is the variable to be solved, with:

$$b = [B_{11}\ B_{12}\ B_{13}\ B_{22}\ B_{23}\ B_{33}]^T \tag{24}$$

$$B = A_C^{-T} A_C^{-1} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33} \end{bmatrix} \tag{25}$$

The camera intrinsic parameters can then be solved as:

$$\begin{aligned} v_0 &= (B_{12}B_{13} - B_{11}B_{23}) / (B_{11}B_{22} - B_{12}^2) \\ \lambda &= B_{33} - \left[ B_{13}^2 + v_0 (B_{12}B_{13} - B_{11}B_{23}) \right] / B_{11} \\ f_u &= \sqrt{\lambda / B_{11}} \\ f_v &= \sqrt{\lambda B_{11} / (B_{11}B_{22} - B_{12}^2)} \\ s &= -B_{12} f_u^2 f_v / \lambda \\ u_0 &= s v_0 / f_v - B_{13} f_u^2 / \lambda \end{aligned} \tag{26}$$
Similarly, the extrinsic parameters of each target pose can be obtained:

$$r_1 = \frac{A_C^{-1} H_1}{\left\| A_C^{-1} H_1 \right\|}, \quad r_2 = \frac{A_C^{-1} H_2}{\left\| A_C^{-1} H_2 \right\|}, \quad r_3 = r_1 \times r_2, \quad t = \frac{A_C^{-1} H_3}{\left\| A_C^{-1} H_1 \right\|} \tag{27}$$
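The closed-form recovery of Equations (26) and (27) is the same decomposition used in Zhang's method [21]; the following sketch (our illustration, using a single scale fixed by $\|r_1\| = 1$) shows the homography-decomposition half of it:

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Recover [r1 r2 r3 | t] from H = A [r1 r2 t] (Eq. 27).
    A: 3x3 intrinsic matrix; H: 3x3 homography of one target pose."""
    Ainv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(Ainv @ h1)    # scale fixed by ||r1|| = 1
    r1 = lam * (Ainv @ h1)
    r2 = lam * (Ainv @ h2)
    r3 = np.cross(r1, r2)                    # right-handed completion
    t = lam * (Ainv @ h3)
    return np.column_stack([r1, r2, r3]), t
```

In practice the recovered $[r_1\ r_2\ r_3]$ is only approximately orthonormal under noise, so it is common to re-project it onto $SO(3)$ before the optimization of Section 4.4.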
With the target extrinsic parameters known, the spatial coordinate $X_l^C = [X_l^C\ Y_l^C\ Z_l^C\ 1]^T$ of the laser spot $P_l$ in the camera coordinate system can be found by intersecting the known target plane $\pi(X^C, Y^C, Z^C)$ with the ray from the camera optical center $O_C$ through the ideal image point of the spot, $x_l^u = [u_l^u\ v_l^u\ 1]^T$. Moving the calibration board along the laser direction, the spatial position of the laser spot is obtained at different distances over multiple acquisitions. From these data, the line of the laser beam in the camera coordinate system can be fitted.
Through this data processing, the line equation $\ell(X^C, Y^C, Z^C)$ of the laser beam in the camera coordinate system is obtained in the form of Equation (29). By combining the distance information $d^l$ obtained through laser ranging, the spatial coordinate of the laser origin in the camera coordinate system can be obtained, and the transformation $R_{l2M}, t_{l2M}$ between the laser system and the camera system can be estimated:

$$\pi(X^C, Y^C, Z^C): \quad [A_Q\ B_Q\ C_Q\ D_Q]\, [X^C\ Y^C\ Z^C\ 1]^T = 0 \tag{28}$$

$$\ell(X^C, Y^C, Z^C): \quad \frac{X_{li}^C - X_{l0}^C}{A_l} = \frac{Y_{li}^C - Y_{l0}^C}{B_l} = \frac{Z_{li}^C - Z_{l0}^C}{C_l} \tag{29}$$
where $X_{l0}^C = \frac{1}{n}\sum X_{li}^C$, $Y_{l0}^C = \frac{1}{n}\sum Y_{li}^C$, $Z_{l0}^C = \frac{1}{n}\sum Z_{li}^C$. Combining with Equation (27), the target plane follows from the pose $(R_{T2M}, t_{T2M})$:

$$\pi(R_{T2M}, t_{T2M}): \quad [A_Q\ B_Q\ C_Q\ D_Q] \begin{bmatrix} r_1 & r_2 & r_3 & t \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X^T \\ Y^T \\ Z^T \\ 1 \end{bmatrix} = 0 \tag{30}$$
By combining with the imaging model, the line through the laser spot and the camera optical center can be expressed in two-point form:

$$\frac{X^C}{u_l^u} = \frac{Y^C}{v_l^u} = \frac{2 Z^C}{f_u + f_v} \tag{31}$$
Equations (30) and (31) are then solved simultaneously; their unique solution $X_l^C = [X_l^C\ Y_l^C\ Z_l^C\ 1]^T$ is the coordinate of the laser spot on the target in the camera coordinate system.
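The sketch below (our code) illustrates this step in its generic back-projection form; the plane coefficients $(A_Q, B_Q, C_Q, D_Q)$ are assumed to come from the target pose of Equation (30), and the laser image point is assumed to be already undistorted:

```python
import numpy as np

def laser_spot_on_target(x_u, A, plane):
    """Intersect the back-projected ray through the ideal laser image point
    with the target plane (Eqs. 30-31); returns the spot in camera coordinates.
    x_u: ideal pixel (u, v); A: intrinsics; plane: (A_Q, B_Q, C_Q, D_Q)."""
    ray = np.linalg.inv(A) @ np.array([x_u[0], x_u[1], 1.0])  # ray from O_C
    n, D = np.asarray(plane[:3]), plane[3]
    s = -D / (n @ ray)                   # depth where the ray meets the plane
    return s * ray                       # [X_l^C, Y_l^C, Z_l^C]
```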
After multiple measurements, the spots form a set of spatial points $X_{li}^C = [X_{li}^C\ Y_{li}^C\ Z_{li}^C\ 1]^T$, $i = 1, 2, 3, \ldots$, as shown in Figure 2. Point-line constraints can then be applied to solve for the line $\ell(X^C, Y^C, Z^C)$ corresponding to the laser ray:

$$\begin{bmatrix} Y_{li}^C - Y_{l0}^C & -(X_{li}^C - X_{l0}^C) & 0 \\ 0 & Z_{li}^C - Z_{l0}^C & -(Y_{li}^C - Y_{l0}^C) \end{bmatrix} \begin{bmatrix} A_l \\ B_l \\ C_l \end{bmatrix} = 0 \tag{32}$$
Each space point provides two constraints, so at least two space points are needed to solve the system and estimate the line $\ell(X^C, Y^C, Z^C)$. Finally, the laser origin is located on this line using the distance information obtained by ranging, according to the coordinate system established before, which yields the relative laser-camera transformation $R_{l2M}, t_{l2M}$.
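As a sketch of this fitting step (our code; the direction is taken as the principal component of the spot set, and the origin-placement sign convention is an assumption about the setup):

```python
import numpy as np

def fit_laser_line(P):
    """Fit the laser line l(X^C, Y^C, Z^C) to spot positions P, shape (n, 3),
    n >= 2: centroid (X_l0, Y_l0, Z_l0) plus principal direction (A_l, B_l, C_l)."""
    c = P.mean(axis=0)                   # centroid of the spots
    _, _, Vt = np.linalg.svd(P - c)      # principal direction of the point set
    return c, Vt[0]

def laser_origin(c, d, spot, rng):
    """Place the LRF origin on the fitted line, a range 'rng' behind the
    spot observed at that reading (sign convention assumed)."""
    d = d if d @ (spot - c) > 0 else -d  # orient d from origin toward spots
    return spot - rng * d
```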

4.4. Optimization of Solution

The above process does not involve any iteration: the camera intrinsic parameters and the laser-camera extrinsic parameters are all found using least squares. The calculation is fast and local minima are effectively avoided. If higher accuracy is desired, the calculated values can be taken as initial values, and the calibration accuracy of the system can be further improved through non-linear optimization.
Given $n$ calibration images, each image has $m$ corners $x_i^d$ and one laser projection point $x_l^d$. The following objective function is then constructed:

$$E(A_C, \lambda_1, \lambda_2, R_{T2M}, t_{T2M}) = \sum_{i=1}^{n} \left( \sum_{j=1}^{m} \left\| x_{i,j}^d - Pro(X_j^T) \right\| + \gamma \left\| x_{l,i}^d - Pro(d_i^l) \right\| \right) \tag{33}$$
where $Pro(X_j^T)$ and $Pro(d_i^l)$ represent the projections of the corner points $X_j^T$ and of the laser spot $X_l^C$ under the division distortion model, and $\gamma$ is a weight coefficient that balances the contributions of corner and laser points to the error; we typically set $\gamma = 5$.
Using the Cayley-Gibbs-Rodriguez (CGR) parameterization [9], the rotation matrix $R$ can be expressed as a function of the CGR parameters $s = [s_1\ s_2\ s_3]$:

$$R = \frac{1}{1 + s_1^2 + s_2^2 + s_3^2} \begin{bmatrix} 1 + s_1^2 - s_2^2 - s_3^2 & 2 s_1 s_2 - 2 s_3 & 2 s_1 s_3 + 2 s_2 \\ 2 s_1 s_2 + 2 s_3 & 1 - s_1^2 + s_2^2 - s_3^2 & 2 s_2 s_3 - 2 s_1 \\ 2 s_1 s_3 - 2 s_2 & 2 s_2 s_3 + 2 s_1 & 1 - s_1^2 - s_2^2 + s_3^2 \end{bmatrix} \tag{34}$$
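A direct transcription of Equation (34) (our sketch; $s = 0$ gives the identity rotation):

```python
import numpy as np

def cgr_to_rotation(s):
    """Cayley-Gibbs-Rodriguez parameters (s1, s2, s3) -> rotation matrix, Eq. (34)."""
    s1, s2, s3 = s
    R = np.array([
        [1 + s1**2 - s2**2 - s3**2, 2*s1*s2 - 2*s3,            2*s1*s3 + 2*s2],
        [2*s1*s2 + 2*s3,            1 - s1**2 + s2**2 - s3**2, 2*s2*s3 - 2*s1],
        [2*s1*s3 - 2*s2,            2*s2*s3 + 2*s1,            1 - s1**2 - s2**2 + s3**2],
    ])
    return R / (1.0 + s1**2 + s2**2 + s3**2)
```

Because the three parameters are unconstrained, substituting Equation (34) into Equation (33) removes the orthonormality constraints on $R$.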
The problem is thus transformed into an unconstrained optimization problem. The automatic Gröbner basis method [10] is used to solve Equation (33), and the minimal solution $E_{\min}(\tilde{A}_C, \tilde{\lambda}_1, \tilde{\lambda}_2, \tilde{R}_{l2M}, \tilde{t}_{l2M})$ is obtained. Non-linear optimization is then used to further improve the accuracy and stability of the solution.
At this point, we have completed the estimation of the optimal solution of all parameters, including the camera intrinsic matrix $\tilde{A}_C$, the distortion coefficients $\tilde{\lambda}_1, \tilde{\lambda}_2$ and the laser-camera extrinsic parameters $\tilde{R}_{l2M}, \tilde{t}_{l2M}$.

5. Experiment and Analysis

In this section, we evaluate the calibration of the camera intrinsic parameters and the camera-laser extrinsic parameters. The effectiveness and influencing factors of the proposed calibration algorithm are analyzed through computer simulation experiments, and the real measurement system is calibrated through observation experiments. In order to better evaluate the calibration results, we follow the re-projection error [34] used in camera calibration, and unify the laser spots and target corners in the following error evaluation function:

$$E_{dr} = \frac{1}{mn} \sum_{i=1}^{n} \left( \sum_{j=1}^{m} \left\| x_{i,j}^d - Pro(X_j^T) \right\| + \gamma \left\| x_{l,i}^d - Pro(d_i^l) \right\| \right) \tag{35}$$
The re-projection error $E_{dr}$ is an important metric of the calibration results: the smaller $E_{dr}$ is, the better the calibration.
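Evaluating Equation (35) is straightforward once observed and projected image points are available; a minimal sketch (our code, with array shapes as stated in the comments):

```python
import numpy as np

def evaluation_error(c_obs, c_proj, l_obs, l_proj, gamma=5.0):
    """E_dr of Eq. (35). c_obs/c_proj: (n, m, 2) observed/projected corners;
    l_obs/l_proj: (n, 2) observed/projected laser spots."""
    n, m = c_obs.shape[:2]
    corner_term = np.linalg.norm(c_obs - c_proj, axis=-1).sum()
    laser_term = gamma * np.linalg.norm(l_obs - l_proj, axis=-1).sum()
    return (corner_term + laser_term) / (n * m)
```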

5.1. Simulation Result

For the simulation experiments, we used MATLAB R2016a on Windows 10. The relevant parameters of the simulated system are shown in Table 2. In the simulated measurement system, the laser direction is parallel to the optical axis of the camera, with a 50 mm offset in the $O_M X_M$ direction.
The target is shown in Figure 3, where the blue dots represent the corners of the checkerboard lattice, evenly distributed in the plane with adjacent corners 15 mm apart. The relative position between the target and the system is randomly generated within a given range.
Throughout the experiments, we compare the estimated values from each calculation with the true values set in the simulation, and evaluate the accuracy of the algorithm by calculating the deviation between the two. The errors are expressed as follows:

$$\begin{aligned} E_{f_u} &= \frac{|\tilde{f}_u - f_u|}{f_u}, \quad E_{f_v} = \frac{|\tilde{f}_v - f_v|}{f_v} \\ E_{u_0} &= \frac{|\tilde{u}_0 - u_0|}{u_0}, \quad E_{v_0} = \frac{|\tilde{v}_0 - v_0|}{v_0} \\ E_R &= \max_{i=1,2,3} \arccos\left( \tilde{r}_i^T r_i \right) \\ E_t &= \left\| \tilde{t}_{l2M} - t_{l2M} \right\| \end{aligned} \tag{36}$$
Kopparapu et al. [34] confirmed that noise has a significant impact on calibration accuracy. We therefore add Gaussian noise $\omega_{noise} \sim Gauss(0, \Sigma_{noise})$ to the simulated projection images, where $\Sigma_{noise}$ is the standard deviation of the Gaussian distribution. In the simulation, $\Sigma_{noise}$ increases gradually from 0.1 to 1.5 pixels. For each noise level, we performed 100 independent experiments and took the average calibration error as the statistical result.
The results are shown in Figure 4. It can be seen that as the noise increases, the deviation between the calibrated parameters and the true values increases linearly. When the corner extraction noise is 0.5 pixels, the re-projection error is about 0.2 pixels, the focal length deviation is 0.1%, and the principal point deviation is about 0.8%. For the extrinsic parameters, the translation error also follows a linear trend, but with more obvious fluctuations, indicating that the extrinsic estimate is sensitive to the intrinsic parameters. Under a corner extraction error of 0.5 pixels, the translation error is about 1 mm and the rotation error about 0.02 degrees.
In addition, we analyzed the impact of the amount of collected data on the calibration accuracy, increasing the calibration data gradually from the minimum of three groups of image-distance data to 15 groups. The results are shown in Figure 5. As the amount of calibration data increases, the re-projection error remains almost stable, but the accuracy of the estimated system variables improves significantly. When the number of data groups reaches 8, the decline of the error slows down. Therefore, sufficient calibration data collected within a certain range helps to improve the accuracy of system calibration, but beyond a certain amount the benefit gradually diminishes, so 8-10 groups of data are appropriate.
We also analyzed the influence of the measurement error of the laser ranging system on the calibration accuracy. Gaussian-distributed noise $\omega_d \sim Gauss(0, \Sigma_d)$ was added to the range measurements, with the standard deviation $\Sigma_d$ increasing gradually from 1 mm to 15 mm, and the calibration errors of the system parameters were calculated at each noise level. As shown in Figure 6, apart from the linear relationship between the translation vector error and the distance error, the other parameters hardly change as the error increases.

5.2. Real Experiment

In the real experiment, we built a measurement system with a 1D laser-camera combination and calibrated it with the method proposed in this paper. As shown in Figure 7, the system is composed of a MER-131-210U3C camera and a SKD-100 laser ranging system. The related parameters are shown in Table 3.
The system was calibrated using a calibration board composed of 11 × 8 square chessboard lattices with a distance of 15 mm between corners. The iterative Harris algorithm was used to accurately extract the checkerboard corner coordinates (red +) from the calibration images, and the centroid method was used to extract the image coordinates (green ×) of the laser spot; both reach sub-pixel accuracy. The results for 12 images collected at different distances from 150 mm to 1500 mm are shown in Figure 8.
In order to verify the accuracy of our calibration method, we compared the obtained intrinsic parameters with the classical calibration method of Zhang [21] and with Li's method [30]. In the calibration process of Li's method, we replaced the original random corner-matching process by directly inputting the coordinates of the checkerboard lattices into the program, but retained the complete algorithm for camera parameter determination. The results are shown in Table 4, where the accuracy of the intrinsic parameters obtained by the calibration methods is compared. The calibration accuracy is evaluated using the re-projection error [34], expressed as:
$$E_{rp} = \frac{1}{m} \sum_{j=1}^{m} \left\| x_j^d - Pro(X_j^T) \right\|_2 \tag{37}$$
From the calibration results in Figure 9 and Table 4, we see that our method and Zhang's method [21] yield similar camera intrinsic parameters. Because the two methods use different distortion models, the physical meanings of the distortion coefficients differ, so comparing them directly is meaningless. Judging from the re-projection error, our method is slightly better than Zhang's calibration algorithm, which supports the effectiveness of our calibration algorithm.
At the same time, the extrinsic parameters $\tilde{R}_{l2M}, \tilde{t}_{l2M}$ of the laser-camera combination were also calculated, and the calibration results were evaluated using the evaluation function $E_{dr}$ of Equation (35). The results are shown in Table 5 and Figure 10.
Ferrara et al. [35] noted that the position of the checkerboard has an effect on the accuracy of calibration. We therefore collected a supplementary set of calibration data with the checkerboard located at the edge of the image to verify the effect of checkerboard location on the accuracy of the proposed method. The data are shown in Figure 11 and the calibration results in Table 6. When the pattern lies at the edge of the image, the calibrated intrinsic and extrinsic parameters are basically consistent with those obtained in Table 4 and Table 5, and the re-projection errors increase only slightly. This shows that the method proposed in this paper is also applicable when the collected data lie at the edge of the image.

6. Conclusions

In this paper, we present a convenient and fast method for calibrating a combined 1D laser ranging and monocular camera measurement system, aiming to realize an accurate measurement system fusing laser and vision. The method is easy to implement and has high calibration accuracy. Fast, robust determination of the camera imaging model parameters is achieved by introducing the division distortion model. Then, a line-plane constraint is formulated to realize robust estimation of the initial values of the laser-vision parameters. Finally, an unconstrained optimization problem is formulated using the rotation matrix parameterization, realizing high-precision calibration of the whole measurement system. The factors affecting the calibration accuracy are analyzed through simulation experiments, and the effectiveness of the proposed method is verified through real scene experiments.

Author Contributions

Z.Z. and R.Z. conceived and implemented the methodology. Z.Z. designed the simulated experiments and analyzed the data. R.Z. wrote the paper. Z.Z. and R.Z. contributed equally to this work. E.L. designed and guided the experiments. K.Y. and Y.M. performed the experiments.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61501429) and the Youth Innovation Promotion Association CAS (No. 2016335).

Acknowledgments

Thanks to our colleagues in the Institute of Optics and Electronics, Chinese Academy of Sciences.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Z.; Zhao, R.; Liu, E.; Yan, K.; Ma, Y. Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data. Sensors 2018, 18, 1948. [Google Scholar] [CrossRef] [PubMed]
  2. Douillard, B.; Fox, D.; Ramos, F. Laser and Vision Based Outdoor Object Mapping. Robot. Sci. Syst. 2008, 9–16. [Google Scholar] [CrossRef]
  3. Premebida, C.; Ludwig, O.; Nunes, U. LIDAR and Vision-Based Pedestrian Detection System; John Wiley and Sons Ltd.: Hoboken, NJ, USA, 2009. [Google Scholar]
  4. Whelan, T.; Kaess, M.; Johannsson, H.; Fallon, M.; Leonard, J.J.; McDonald, J. Real-time large-scale dense RGB-D SLAM with volumetric fusion. Int. J. Robot. Res. 2015, 34, 598–626. [Google Scholar] [CrossRef]
  5. Duren, R.M.; Wong, E.; Breckenridge, B.; Shaffer, S.J.; Duncan, C.; Tubbs, E.F.; Salomon, P.M. Metrology, attitude, and orbit determination for spaceborne interferometric synthetic aperture radar. In Proceedings of the Acquisition, Tracking, & Pointing XII, Orlando, FL, USA, 30 July 1998. [Google Scholar]
  6. Ordóñez, C.; Arias, P.; Herráez, J.; Rodríguez, J.; Martín, M.T. A combined single range and single image device for low-cost measurement of building façade features. Photogramm. Rec. 2010, 23, 228–240. [Google Scholar] [CrossRef]
  7. Wu, K.; Di, K.; Sun, X.; Wan, W.; Liu, Z. Enhanced monocular visual odometry integrated with laser distance meter for astronaut navigation. Sensors 2014, 14, 4981–5003. [Google Scholar] [CrossRef] [PubMed]
  8. Chen, Z.; Yang, X.; Zhang, C.; Jiang, S. Extrinsic calibration of a laser range finder and a camera based on the automatic detection of line feature. In Proceedings of the International Congress on Image and Signal Processing, Biomedical Engineering and Informatics, Datong, China, 15–17 October 2016. [Google Scholar]
  9. Hesch, J.A.; Roumeliotis, S.I. A Direct Least-Squares (DLS) method for PnP. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011. [Google Scholar]
  10. Kukelova, Z.; Bujnak, M.; Pajdla, T. Automatic Generator of Minimal Problem Solvers. In Proceedings of the European Conference on Computer Vision, Marseille, France, 12–18 October 2008. [Google Scholar]
  11. Vasconcelos, F.; Barreto, J.P.; Nunes, U. A minimal solution for the extrinsic calibration of a camera and a laser-rangefinder. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2097–2107. [Google Scholar] [CrossRef] [PubMed]
  12. Scaramuzza, D.; Harati, A.; Siegwart, R. Extrinsic self calibration of a camera and a 3D laser range finder from natural scenes. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007. [Google Scholar]
  13. Ha, J.E. Improved algorithm for the extrinsic calibration of a camera and laser range finder using 3D-3D correspondences. Int. J. Control Autom. Syst. 2015, 13, 1272–1276. [Google Scholar] [CrossRef]
  14. Unnikrishnan, R.; Hebert, M. Fast Extrinsic Calibration of a Laser Rangefinder to a Camera; Carnegie Mellon University: Pittsburgh, PA, USA, 2005. [Google Scholar]
  15. Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, 28 September–2 October 2005. [Google Scholar]
  16. Viejo, D.; Navarrete-Sanchez, J.; Cazorla, M. Portable 3D laser-camera calibration system with color fusion for SLAM. Investigación 2013, 3, 29–45. [Google Scholar]
  17. Zhu, Z.; Tang, B.Q.; Li, J.; Gan, Z. Calibration of laser displacement sensor used by industrial robots. Opt. Eng. 2004, 43, 12–14. [Google Scholar]
  18. Ke-Qing, L.U.; Wang, W.; Chen, Z.C. Calibration of laser beam-direction for point laser sensors. Opt. Precis. Eng. 2010, 18, 880–886. [Google Scholar]
  19. Zhou, A.; Guo, J.; Shao, W.; Li, B. A segmental calibration method for a miniature serial-link coordinate measuring machine using a compound calibration artefact. Meas. Sci. Technol. 2013, 24, 065001. [Google Scholar] [CrossRef]
  20. Martínez, J.; Ordóñez, C.; Arias, P.; Armesto, J. Non-contact 3D Measurement of Buildings through Close Range Photogrammetry and a Laser Distance Meter. Photogramm. Eng. Remote Sens. 2011, 77, 805–811. [Google Scholar] [CrossRef]
  21. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  22. Liu, Z.; Wu, Q.; Wu, S.; Pan, X. Flexible and accurate camera calibration using grid spherical images. Opt. Express 2017, 25, 15269–15285. [Google Scholar] [CrossRef] [PubMed]
  23. Liu, Z.; Wu, Q.; Wu, S.; Pan, X. Camera Calibration from the Quasi-affine Invariance of Two Parallel Circles. Opt. Express 2004, 3021, 190–202. [Google Scholar]
  24. Wong, K.Y.; Mendonca, P.R.S.; Cipolla, R. Camera Calibration from Surfaces of Revolution. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 147–161. [Google Scholar] [CrossRef]
  25. Anchini, R.; Beraldin, J.A. Subpixel location of discrete target images in close-range camera calibration: A novel approach. Proc. SPIE 2007, 6491, 10–18. [Google Scholar]
  26. Wang, Q.S. A Model Based on DLT Improved Three-dimensional Camera Calibration Algorithm Research. Available online: http://www.en.cnki.com.cn/Article_en/CJFDTotal-DBCH201612065.htm (accessed on 14 March 2019).
  27. Kukelova, Z.; Pajdla, T. A Minimal Solution to Radial Distortion Autocalibration. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2410–2422. [Google Scholar] [CrossRef] [PubMed]
  28. Hartley, R.I.; Kang, S.B. Parameter-free radial distortion correction with centre of distortion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1309–1321. [Google Scholar] [CrossRef] [PubMed]
  29. Hong, Y.; Ren, G.; Liu, E. Non-iterative method for camera calibration. Opt. Express 2015, 23, 23992–24003. [Google Scholar] [CrossRef] [PubMed]
  30. Li, B.; Heng, L.; Koser, K.; Pollefeys, M. A multiple-camera system calibration toolbox using a feature descriptor-based calibration pattern. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots & Systems, Tokyo, Japan, 3–7 November 2014. [Google Scholar]
  31. An, G.; Lee, S.; Seo, M.W.; Yun, K.; Cheong, W.S.; Kang, S.J. Charuco Board-Based Omnidirectional Camera Calibration Method. Electronics 2018, 7, 421. [Google Scholar] [CrossRef]
  32. Liu, Z.; Li, F.; Zhang, G. An external parameter calibration method for multiple cameras based on laser rangefinder. Measurement 2014, 47, 954–962. [Google Scholar] [CrossRef]
  33. Lichti, D.D. Error modelling, calibration and analysis of an AM–CW terrestrial laser scanner system. ISPRS J. Photogramm. Remote Sens. 2007, 61, 307–324. [Google Scholar] [CrossRef]
  34. Kopparapu, S.; Corke, P. The Effect of Noise on Camera Calibration Parameters. Graph. Models 2001, 63, 277–303. [Google Scholar] [CrossRef]
  35. Ferrara, P.; Piva, A.; Argenti, F.; Kusuno, J.; Niccolini, M.; Ragaglia, M.; Uccheddu, F. Wide-angle and long-range real time pose estimation: A comparison between monocular and stereo vision systems. J. Vis. Commun. Image Represent. 2017, 48, 159–168. [Google Scholar] [CrossRef]
Figure 1. Measurement Model.
Figure 2. Changing the azimuth and angle of the calibration plate and performing multiple measurements.
Figure 3. The imaging illustration. (Left) Schematic diagram of the simulated scenario. (Right) The generated image. The blue dots represent the ideal image points, the green dots represent the added distortion points, the red × represents the ideal image point of the laser and the red circle is the distorted laser projection point.
Figure 4. Simulation results for different image noise levels. (a) The re-projection error $E_{dr}$; (b) the effects of noise on the intrinsic parameter errors $E_{f_u}$, $E_{f_v}$, $E_{u_0}$, $E_{v_0}$; (c) translation error $E_t$ for different noise levels; (d) effects of noise on the rotation error $E_R$.
Figure 5. The simulation results for different numbers of collected data. (a) Re-projection error $E_{dr}$; (b) effects of the amount of data on the intrinsic parameter errors $E_{f_u}$, $E_{f_v}$, $E_{u_0}$, $E_{v_0}$; (c) translation error $E_t$; (d) effect of the amount of data on the rotation error $E_R$.
Figure 6. Simulation results for different distance noise levels. (a) Re-projection error $E_{dr}$; (b) effects of noise on the intrinsic parameter errors $E_{f_u}$, $E_{f_v}$, $E_{u_0}$, $E_{v_0}$; (c) translation error $E_t$ for different noise levels; (d) effects of noise on the rotation error $E_R$.
Figure 7. Measurement system combining 1D LRF and camera.
Figure 8. Samples of images used for the real experiment.
Figure 9. Re-projection error distribution for different images marked as different colors: (a) Zhang's method [21]; (b) proposed method; (c) Li's method [30].
Figure 10. Visualization of the extrinsic parameters $\tilde{R}_{l2M}$, $\tilde{t}_{l2M}$.
Figure 11. Image sample when the checkerboard is close to the edge.
Table 1. The main parameters of the system and their meanings.

| Group | Parameter | Meaning |
| --- | --- | --- |
| Coordinates | $O_M / O_T / O_C / O_l$ | measurement / target / camera / laser-ranging coordinate system |
| | $O_{uv}$ | image-plane coordinate system of the camera |
| | $T_{T2C}$ | transformation matrix relating $O_T$ to $O_C$ |
| | $R_{l2M}, t_{l2M}$ | extrinsic parameters of $O_l$ with respect to $O_C$ |
| | $R_{T2C}, t_{T2C}$ | extrinsic parameters of $O_T$ with respect to $O_C$ |
| | $T_{e2\hat{e}}$ | transformation relating the distortion center $e$ to the new distortion center $\hat{e}$ |
| Imaging geometry | $A_C$ | intrinsic matrix |
| | $\lambda_1, \lambda_2$ | distortion coefficients |
| | $e$ | distortion center, expressed as $[du_0\ dv_0]^T$ |
| | $\rho_i$ | depth scale factor |
| | $H$ | homography matrix |
| | $\hat{H}$ | transformed homography matrix $H$ |
| | $F_H$ | fundamental matrix of distortion |
| | $\hat{F}_H$ | transformed fundamental matrix of distortion |
| Variables | $X_i^T$ | position of a corner in the target coordinate system |
| | $x_i^u$ | ideal position of a projection point in the image plane |
| | $x_i^d$ | actual position of a projection point in the image plane |
| | $\hat{x}_i^d$ | transformed image coordinates $x_i^d$ |
| | $\pi(X^C, Y^C, Z^C) / (R_{T2M}, t_{T2M})$ | plane equation of the target in the camera coordinate system |
| | $\ell(X^C, Y^C, Z^C)$ | line equation of the laser beam in the camera coordinate system |
| | $E(A_C, \lambda_1, \lambda_2, R_{T2M}, t_{T2M})$ | objective function to be optimized |
| | $E_{dr}$ | re-projection error |
Table 2. System parameters of the simulation.

| Parameter | $A_C$ | $e$ | $(\lambda_1, \lambda_2)$ | $R_{l2M}$ | $t_{l2M}$ |
| --- | --- | --- | --- | --- | --- |
| Set value | $\begin{bmatrix} 850 & s & 512 \\ 0 & 850 & 384 \\ 0 & 0 & 1 \end{bmatrix}$ | $(509, 380)$ | $(6.15 \times 10^{-7},\ 1.6 \times 10^{-13})$ | $\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ | $[50\ 0\ 0]^T$ |
| Unit | pixel | pixel | pixel$^{-2}$ / pixel$^{-4}$ | - | mm |
Table 3. The system parameters of the 1D LRF and camera.

| Sensor | Parameter | Value |
| --- | --- | --- |
| MER-131-210U3C (camera) | Sensor size | 1/2" |
| | Resolution | 1280 (H) × 1024 (V) |
| | Frame rate | 210 FPS |
| | Pixel size | 4.8 μm × 4.8 μm |
| | Focal length | 5 mm |
| | F (relative aperture) | 1.4-16 |
| SKD-100 (LRF) | Wavelength | 635 nm |
| | Range | 1-1000 mm |
| | Accuracy | 2 mm |
Table 4. Intrinsic parameters and re-projection errors of the compared calibration methods.

| Method | $A_C$ (pixel) | $e$ (pixel) | $(\lambda_1, \lambda_2)$ | Mean $E_{rp}$ |
| --- | --- | --- | --- | --- |
| Zhang [21] | $\begin{bmatrix} 1053.3 & 0 & 643.5 \\ 0 & 1048.0 & 539.7 \\ 0 & 0 & 1 \end{bmatrix}$ | - | $(0.1348\ \text{mm}^{-2},\ 0.01661\ \text{mm}^{-4})$ | 0.09337 pixel |
| Li [30] | $\begin{bmatrix} 1059.7 & 0.1604 & 649.5 \\ 0 & 1058.9 & 539.2 \\ 0 & 0 & 1 \end{bmatrix}$ | - | $(0.1372,\ 0.1993)$ | 0.07974 pixel |
| Proposed | $\begin{bmatrix} 1055.2 & 0 & 647.2 \\ 0 & 1054.9 & 538.1 \\ 0 & 0 & 1 \end{bmatrix}$ | $(509, 380)$ | $(6.15 \times 10^{-7}\ \text{pixel}^{-2},\ 1.6 \times 10^{-13}\ \text{pixel}^{-4})$ | 0.07725 pixel |
Table 5. System extrinsic parameters and evaluation error.

| Parameter | $R_{l2M}$ | $t_{l2M}$ | $E_{dr}$ |
| --- | --- | --- | --- |
| Value | $\begin{bmatrix} 0.9974 & 0.0052 & 0.0715 \\ 0.0052 & 0.9893 & 0.1456 \\ 0.0715 & 0.1456 & 1 \end{bmatrix}$ | $[0.5030\ 32.6479\ 1.859]^T$ | 0.10173 |
| Unit | - | mm | pixel |
Table 6. System parameters and evaluation error when the checkerboard is close to the edge.

| Parameter | $A_C$ (pixel) | $e$ (pixel) | $(\lambda_1, \lambda_2)$ | Mean $E_{rp}$ |
| --- | --- | --- | --- | --- |
| Value | $\begin{bmatrix} 1057.9 & 0 & 646.9 \\ 0 & 1057.2 & 536.8 \\ 0 & 0 & 1 \end{bmatrix}$ | $(507, 379)$ | $(6.21 \times 10^{-7}\ \text{pixel}^{-2},\ 1.53 \times 10^{-13}\ \text{pixel}^{-4})$ | 0.08371 pixel |

| Parameter | $R_{l2M}$ | $t_{l2M}$ | $E_{dr}$ |
| --- | --- | --- | --- |
| Value | $\begin{bmatrix} 0.9962 & 0.0101 & 0.0865 \\ 0.0101 & 0.9902 & 0.1381 \\ 0.0855 & 0.1388 & 0.9999 \end{bmatrix}$ | $[0.8167\ 31.5531\ 1.6742]^T$ mm | 0.1147 pixel |
