This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license.

Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.

Recently, multi-sensor systems have been frequently used in the field of robot vision. For instance, a ranging sensor such as a high-speed 3D LIDAR is used in conjunction with a color camera for various robot navigation tasks. The 3D LIDAR sensor is capable of providing 3D position and depth information about objects, whereas the color camera captures their 2D color features. Therefore, by augmenting the 2D image data with the 3D positional information, one can visualize the objects with a more realistic view. However, as a prerequisite, we need to know the relative positions and orientations of the two sensors by calibrating the LIDAR and the color camera.

A checkerboard plane has been used to calibrate between a camera and a LIDAR. The calibration method using a checkerboard usually involves a two-step process [

In this paper we are interested in finding a projection matrix between the camera and the LIDAR directly without needing to perform a separate two-step (

In this paper, we propose a new calibration method between a camera and a 3D LIDAR using a polygonal board such as a triangle or diamond plane. By estimating the 3D locations of vertices from the scanned laser data and their corresponding corners in the 2D image, our approach for the calibration is to find point-to-point correspondences between the 2D image and the 3D point clouds. The corresponding pairs are used to solve the equations to obtain the calibration matrix.

This paper is composed of the following sections: in Section 2, we survey previous works related to camera and range sensor calibration. The mathematical formulation of the calibration between 2D and 3D sensors is presented in Section 3. In Section 4, we address the proposed calibration method. Experiments conducted on real data are explained in Section 5 and the conclusions follow in Section 6.

Calibration between sensors can be done by finding geometric relationships from co-observable features in the data captured by both sensors. For a color camera and a range sensor, feature points in 2D images are readily detectable, but it is hard to identify the corresponding 3D points from the range data. Therefore, instead of pinpointing individual 3D feature points, the projected 3D points on the planar board (or on the line) were used to formulate constraints for solving the equations for a transformation matrix. For example, Zhang and Pless [

A planar board plays an important role in the calibration. Wasielewski and Strauss [

As 3D laser range sensors became popular, attention turned to the calibration between a 3D LIDAR and a camera [

Many types of special rigs for 3D range sensors besides the LIDAR have been used to estimate the extrinsic parameters between a camera and a range sensor [

In the previous studies, various types of calibration rigs or environmental structures were used to improve the calibration accuracy. However, the performance of those methods relies on the density and location of actual scanned points on the calibration board (or the environmental structure). This implies that the accuracy of the calibration may drop quickly for a low resolution 3D LIDAR with a relatively small number of sensors. In this work, we solve this problem by adopting the following novel approaches:

We propose a polygonal planar board with adjacent sides as a calibration rig. Then, our calibration matrix can be obtained by simply solving linear equations given by a set of corresponding points between the 2D-3D vertices of the polygonal board.

The 3D vertices of the polygonal board are estimated from the scanned 3D points on the board, rather than directly measured. That is, once the geometric structure of the calibration board is known, we can calculate specific 3D points, such as the vertices of the board, without actually scanning those points. This property enables us to estimate the projection matrix directly using the corresponding pairs between the 2D image and the 3D points, which is especially useful for a low-resolution 3D LIDAR with a relatively small number of sensors.

Using our approach, the combined projection matrix of the extrinsic and intrinsic matrices can be estimated without estimating them separately. Of course, our method can also be used to estimate only the extrinsic transformation matrix, as usual.

We set a triangle board in front of the rigidly mounted camera and 3D LIDAR (see the calibration configuration figure). The camera is modeled by its intrinsic parameters, namely the focal lengths f_u and f_v and the principal point (u_0, v_0), where (u_0, v_0) is the center point of the image plane. The intrinsic parameters (f_u, f_v, u_0, v_0) and the extrinsic parameters (r_pq) together form the combined 3 × 4 projection matrix.

For each corresponding pair we have two linear equations in the unknown projection parameters r_pq: one for the image coordinate u and one for v.
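As a sketch of how these paired equations are assembled and solved, the snippet below implements the standard direct linear transform (DLT): each 2D-3D pair contributes two rows to a homogeneous system whose least-squares solution is the smallest right singular vector. The function names are illustrative, not from the paper.

```python
import numpy as np

def estimate_projection_matrix(pts3d, pts2d):
    """Direct Linear Transform: estimate the 3x4 projection matrix P
    from 2D-3D correspondences. Each pair contributes two rows."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(A, dtype=float)
    # Least-squares solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def project(P, pt3d):
    """Project a 3D point into the image with homogeneous normalization."""
    h = P @ np.append(pt3d, 1.0)
    return h[:2] / h[2]
```

Since P has 12 entries but only 11 degrees of freedom (it is defined up to scale), at least six correspondences are needed; more pairs make the solution robust to noise.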

Our calibration method uses a polygonal planar board with adjacent sides (e.g., triangle and diamond boards) (see

Noting that our vertex-based calibration method can be applied to any polygonal board with adjacent sides, we explain our method with a simple triangle planar board; the extension to other polygonal boards, such as a diamond plane, is straightforward. The overall steps of our method can be summarized as follows.

(i) Data acquisition: Place one or more triangle planar boards in front of the camera and the 3D LIDAR. Take camera images and measure the 3D point clouds of the 3D LIDAR for various locations of the board. To reduce the measurement errors in the 3D LIDAR and to easily detect the vertices of the triangle planar board in the image, it is recommended to use a bright monochromatic color for the board. Also, the board color should be distinct from the background, and the board has to be large enough to include multiple laser scan lines of the 3D LIDAR on its surface.

(ii) Matching 2D-3D point correspondences: Detect the vertices of the triangle plane in the images and identify their corresponding 3D points from the laser scans by estimating the meeting points of two adjacent sides of the board.

(iii) Parameter estimation: Estimate the calibration parameters between the 3D LIDAR and the camera. Using the corresponding pairs, solve the linear equations for an initial estimate and refine the solution to obtain the final estimate.

Of the above three steps, we elaborate on steps (ii) and (iii) in the following subsections.

In order to solve the linear equations for the transformation matrix, we need to find point-to-point correspondences between the image and the 3D laser points at the vertices of the triangle planar board. For a 2D image, the vertices can be easily detected by a corner detection method such as Features from Accelerated Segment Test (FAST). We denote the detected center, left, and right vertices in the image as v_C, v_L, and v_R, and their corresponding 3D vertices in the LIDAR coordinate as P_C, P_L, and P_R.

To locate the vertices of the triangle board in the 3D LIDAR coordinate, we first need to measure the 3D point clouds on the board plane. Suppose that there are N laser scan lines l_1, l_2, …, l_N on the board, and that the n-th scan line l_n consists of m_n scanned points p_n1, p_n2, …, p_nm_n.

By using the RANSAC (Random Sample Consensus) algorithm, we can robustly estimate the board plane from the scanned 3D points while rejecting outlier measurements.

Once we estimate the board plane using the inlier 3D points of the RANSAC algorithm, we can orthogonally project all the scanned 3D points onto the estimated plane.
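A minimal sketch of this plane-fitting and projection step might look as follows; the threshold, iteration count, and function names are assumptions for illustration, not the paper's values.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a plane n.x + d = 0 to noisy LIDAR points with RANSAC.
    threshold is the inlier distance in the units of the points (m)."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        dist = np.abs(points @ n + d)       # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

def project_onto_plane(points, plane):
    """Orthogonally project 3D points onto the estimated plane."""
    n, d = plane
    return points - np.outer(points @ n + d, n)
```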

To estimate the three vertices of the triangle planar board in the LIDAR coordinate, we use the projected 3D points on the estimated board plane.

Let us denote the three sides of the triangle board as S_L, S_R, and S_B (the left, right, and bottom sides, respectively).

To calculate the side lines, we use the endpoints of each scan line together with extrapolated virtual points. For the n-th scan line, the first and last scanned points p_n1 and p_nm_n generally fall short of the true board boundary, so we add two virtual points p_n0 and p_n(m_n+1) just outside them. The locations of the virtual points are determined by the average distance between the scanned points for each scan line. So, by calculating the average Euclidean distance between consecutive points on a scan line, we place each virtual point at that distance beyond the corresponding endpoint.
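Assuming each virtual point is placed one average inter-point spacing beyond the scan line's endpoints (a sketch of the virtual-point idea; the paper's exact offsets may differ), the extrapolation can be written as:

```python
import numpy as np

def add_virtual_endpoints(scan_line):
    """Extrapolate one virtual point beyond each end of a scan line,
    offset by the average spacing of the scanned points."""
    pts = np.asarray(scan_line, dtype=float)
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    d = gaps.mean()  # average Euclidean distance between consecutive points
    dir_start = (pts[0] - pts[1]) / np.linalg.norm(pts[0] - pts[1])
    dir_end = (pts[-1] - pts[-2]) / np.linalg.norm(pts[-1] - pts[-2])
    p_first = pts[0] + d * dir_start   # virtual point p_n0 before the first sample
    p_last = pts[-1] + d * dir_end     # virtual point p_n(m_n+1) after the last sample
    return p_first, p_last
```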

Now, locating the boundary points of all the scan lines on the board, we can estimate each side line of the board by fitting a line to the boundary points belonging to that side.

The 3D coordinate of the center vertex P_C of the triangle is then obtained as the meeting point of the two fitted side lines S_L and S_R; the other vertices are estimated in the same way from their adjacent sides.
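Since two fitted 3D side lines rarely intersect exactly under measurement noise, one common realization of the "meeting point" (a sketch, not necessarily the paper's exact formulation) is the midpoint of the shortest segment between the two lines:

```python
import numpy as np

def line_intersection_3d(a0, da, b0, db):
    """Least-squares meeting point of two 3D lines a0 + t*da and b0 + s*db:
    the midpoint of the shortest segment between them."""
    da = da / np.linalg.norm(da)
    db = db / np.linalg.norm(db)
    r = b0 - a0
    # Normal equations for minimizing |(a0 + t*da) - (b0 + s*db)|^2 over (t, s)
    A = np.array([[da @ da, -da @ db],
                  [da @ db, -db @ db]])
    b = np.array([da @ r, db @ r])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((a0 + t * da) + (b0 + s * db))
```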

The suitability of the detected vertices can be tested by comparing the known real lengths of the sides S_L, S_R, and S_B with the lengths computed from the estimated vertices.

If a computed side length deviates from the known length beyond a tolerance, the estimated vertices P_C, P_L, and P_R are rejected and that board position is excluded from the calibration.
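This length-based validation can be sketched as follows; the tolerance value and names are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def sides_are_valid(P_C, P_L, P_R, known_lengths, tol=0.03):
    """Accept a board position only when every estimated side length is
    within tol (m) of the known side length of the physical board."""
    est = {
        "S_L": np.linalg.norm(P_C - P_L),  # left side: center to left vertex
        "S_R": np.linalg.norm(P_C - P_R),  # right side: center to right vertex
        "S_B": np.linalg.norm(P_L - P_R),  # bottom side: left to right vertex
    }
    return all(abs(est[k] - known_lengths[k]) <= tol for k in est)
```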

The vertices of the triangle board captured by the camera as a 2D image can be readily detected by a corner detection method such as FAST. As a result, we obtain a set of 3D vertices X_3 = {(x_1, y_1, z_1), (x_2, y_2, z_2), …, (x_n, y_n, z_n)} and a set of 2D vertices X_2 = {(u_1, v_1), (u_2, v_2), …, (u_n, v_n)}, where each image point (u_i, v_i) corresponds to the 3D point (x_i, y_i, z_i).

Note that, as we have more scan lines on the board, we can estimate the plane more accurately. Also, a polygonal structure with more intersections between edges improves the accuracy of the solution for the camera calibration. For example, a diamond board with four vertices provides one more correspondence per board position than a triangle board, including the bottom vertex P_B.

Experiments with the diamond planar board were conducted to evaluate the performance of our method. The lengths of the four sides of the diamond board used in our experiment are known and equal to 72 cm. For the sensors, we used a color camera with a resolution of 659 × 493 pixels and a Velodyne HDL-32E LIDAR (see

Our correspondence-based estimation of the bottom vertex P_B of the diamond board follows the same procedure as for the other vertices: it is estimated as the meeting point of its two adjacent side lines.

Now, we have four corresponding corners between the 2D image and the 3D data and are ready to solve the equations for the projection matrix. Note that each correspondence yields two linear equations, so at least six pairs are needed to estimate the 12 calibration parameters; for a reliable estimate we use 12 or more pairs, i.e., more than three different positions of the diamond board. Then, the calibration parameters are determined by solving the linear equations and applying the refinement process.

To evaluate the accuracy of the proposed method for different positions of the diamond board, we executed our calibration method for various positions of the diamond board and calculated the calibration pixel errors. Among the 12 positions, selecting three positions at a time gives C(12,3) = 220 possible combinations for the experiments. For each experiment we have 3 × 4 = 12 corresponding vertex pairs for the solution of the matrix equation. Once we have the final estimate of the calibration matrix, we can compute the reprojection errors for all 48 vertices in all 12 positions. The reprojection errors are calculated as the distances in pixels between a vertex detected in 2D and its corresponding 3D vertex projected by the estimated matrix. Then, we calculate the average root mean square of all 48 reprojection errors. The results are shown as box-plots in
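The reprojection error computation described here can be sketched as below; the function name is illustrative.

```python
import numpy as np

def reprojection_rmse(P, pts3d, pts2d):
    """Root-mean-square pixel distance between detected 2D vertices and
    their 3D counterparts projected by the calibration matrix P (3x4)."""
    pts3d_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous coords
    proj = (P @ pts3d_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]   # perspective division
    err = np.linalg.norm(proj - pts2d, axis=1)
    return np.sqrt((err ** 2).mean())
```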

After the calibration of the camera and the LIDAR (see

We conducted comparative experiments with the checkerboard method in [

In this paper, we have proposed a new approach for the calibration of a camera and a 3D LIDAR based on 2D-3D key point correspondences. The corresponding 3D points are the vertices of a planar board with adjacent sides and they are estimated from the projected 3D laser points on the planar board. Since our approach is based on 2D-3D point correspondences, the projection matrix can be estimated without separating the intrinsic and extrinsic parameters. Also, our monochromatic calibration board provides more reliable measurements of the 3D points on the board than the checkerboard. Experimental results confirm that our 2D-3D correspondence based calibration method yields accurate calibration, even for a low resolution 3D LIDAR.

This work was supported by the Agency for Defense Development, Korea, and by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (NIPA-2014-H0301-14-4007) supervised by the NIPA (National IT Industry Promotion Agency).

C.S. Won led the work. Y. Park and S. Yun contributed to the algorithm and the experiments. K. Cho, K. Um, and S. Sim contributed to the problem formulation and algorithm verification.

The authors declare no conflict of interest.

Velodyne HDL-32E scanning on checkerboard and monochromatic board: (

Calibration board with adjacent sides: the scanned points on the border of the plane are used for estimating the side lines of the board.

Calibration configuration of a camera and 3D LIDAR with a triangle board.

Polygonal planar boards: (

Scanned laser (dotted) lines on the triangle planar board.

The 3D points (red) and their orthogonal projections (green). The inlier 3D points of the RANSAC are selected by: (

Projection of 3D points (red)

Vertices, adjacent lines, and projected points on the triangle board.

Virtual points (empty circles) near the side line.

Vertex estimation process for the triangle board: (

Diamond board with four vertices. (

Diamond boards with 12 different positions: the distances from the camera to the board are 1.7 m, 2.2 m, 3 m and 5∼7 m.

Selection of four corners on the diamond board in 2D image: (

Lasers scans on the diamond board: (

Box-plots of reprojection (pixel) errors for different numbers and positions of the diamond board. The red line in each box represents the average error, and the extents of the boxes are at the 25th and 75th percentiles.

Composition of 3D laser data on the color image by the estimated calibration matrix. (

Comparative results: (