# Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard


## Abstract


## 1. Introduction

## 2. Related Works

#### 2.1. Multiple Views on a Planar Checkerboard

#### 2.2. Multiple Geometry Elements

#### 2.3. Correlation of Mutual Information

#### 2.4. Our Approach

## 3. Overview and Notations

#### 3.1. Overview

#### 3.2. Notations

- ${\mathit{p}}_{i}={({x}_{i},{y}_{i},{z}_{i})}^{T}$: coordinates of a 3D point.
- $\mathit{P}=\{{\mathit{p}}_{1},{\mathit{p}}_{2},\dots ,{\mathit{p}}_{n}\}$: set of n 3D points.
- $\mathbf{\theta}={({\theta}_{x},{\theta}_{y},{\theta}_{z})}^{T}$: rotation angle vector whose elements correspond to the rotation angles about the x-, y- and z-axes, respectively.
- $\mathit{t}={({t}_{x},{t}_{y},{t}_{z})}^{T}$: the translation vector.
- $\mathbf{R}\left(\mathbf{\theta}\right)={\mathbf{R}}_{z}\left({\theta}_{z}\right){\mathbf{R}}_{y}\left({\theta}_{y}\right){\mathbf{R}}_{x}\left({\theta}_{x}\right)$: rotation matrix.
- ${T}_{r}(\mathbf{\theta},\mathit{t},{\mathit{p}}_{\mathit{i}})=\mathbf{R}\left(\mathbf{\theta}\right){\mathit{p}}_{\mathit{i}}+\mathit{t}$: function that transforms the 3D point ${\mathit{p}}_{i}$ with the angle vector $\mathbf{\theta}$ and translation vector $\mathit{t}$.
- ${\widehat{\mathit{p}}}_{i}={T}_{r}(\mathbf{\theta},\mathit{t},{\mathit{p}}_{i})$: transformed point of ${\mathit{p}}_{i}$.
- ${\mathit{P}}^{c}=\{{\mathit{p}}_{1}^{c},{\mathit{p}}_{2}^{c},\dots ,{\mathit{p}}_{N}^{c}\}$: set of estimated 3D corner points of the chessboard from the point cloud. N is the number of the corners in the chessboard.
- ${\mathtt{x}}_{i}={({u}_{i},{v}_{i})}^{T}$: coordinates of 2D pixel.
- ${\mathtt{X}}^{c}=\{{\mathtt{x}}_{1}^{c},{\mathtt{x}}_{2}^{c},\dots ,{\mathtt{x}}_{N}^{c}\}$: set of detected 2D corner pixels of the chessboard from the image.
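The transformation defined above, ${T}_{r}(\mathbf{\theta},\mathit{t},{\mathit{p}}_{i})=\mathbf{R}(\mathbf{\theta}){\mathit{p}}_{i}+\mathit{t}$ with $\mathbf{R}(\mathbf{\theta})={\mathbf{R}}_{z}{\mathbf{R}}_{y}{\mathbf{R}}_{x}$, can be written out directly. The following is a minimal NumPy sketch of this notation, not the authors' implementation:

```python
import numpy as np

def rotation_matrix(theta):
    """R(theta) = Rz(theta_z) @ Ry(theta_y) @ Rx(theta_x), angles in radians."""
    tx, ty, tz = theta
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(tx), -np.sin(tx)],
                   [0, np.sin(tx),  np.cos(tx)]])
    Ry = np.array([[ np.cos(ty), 0, np.sin(ty)],
                   [0, 1, 0],
                   [-np.sin(ty), 0, np.cos(ty)]])
    Rz = np.array([[np.cos(tz), -np.sin(tz), 0],
                   [np.sin(tz),  np.cos(tz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def transform(theta, t, p):
    """T_r(theta, t, p) = R(theta) p + t for one 3D point p."""
    return rotation_matrix(theta) @ np.asarray(p) + np.asarray(t)
```

For example, rotating the point $(1,0,0)^T$ by $90^{\circ}$ about the z-axis and translating by $(1,0,0)^T$ yields $(1,1,0)^T$.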

## 4. Corner Estimation from the Point Cloud

#### 4.1. Automatic Detection of the Chessboard

#### 4.1.1. Segmentation of the Point Cloud

#### 4.1.2. Finding the Chessboard from the Segments

#### 4.2. Corner Estimation

#### 4.2.1. Model Formulation

- the directions of ${\mathbf{\mu}}_{1},{\mathbf{\mu}}_{2},{\mathbf{\mu}}_{3}$ are defined to obey the right-hand rule.
- the direction of ${\mathbf{\mu}}_{3}$ (the normal of the chessboard) is defined to point toward the origin of the LiDAR coordinate system.
- the angle between ${\mathbf{\mu}}_{1}$ and the x-axis of the LiDAR coordinate system is not more than ${90}^{\circ}$.
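These three sign conventions can be enforced on the PCA basis vectors with a few sign flips. The sketch below is illustrative only (the function name and interface are assumptions, not from the paper):

```python
import numpy as np

def orient_basis(mu1, mu2, mu3, centroid):
    """Orient unit principal directions (mu1, mu2, mu3) of the chessboard
    points to the conventions above. centroid is the chessboard centroid
    in the LiDAR frame, whose origin is at (0, 0, 0)."""
    mu1, mu2, mu3 = (np.asarray(v, dtype=float) for v in (mu1, mu2, mu3))
    # 1. Angle between mu1 and the LiDAR x-axis must not exceed 90 degrees.
    if mu1[0] < 0:
        mu1 = -mu1
    # 2. The normal mu3 must point toward the LiDAR origin, i.e. against
    #    the direction from the origin to the board centroid.
    if np.dot(mu3, centroid) > 0:
        mu3 = -mu3
    # 3. Flip mu2 if necessary so (mu1, mu2, mu3) is right-handed.
    if np.dot(np.cross(mu1, mu2), mu3) < 0:
        mu2 = -mu2
    return mu1, mu2, mu3
```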

#### 4.2.2. Correspondence of Intensity and Color

#### 4.2.3. Cost Function and Optimization

## 5. Extrinsic Calibration Estimation

#### 5.1. Corner Estimation from the Image

#### 5.2. Correspondence of the 3D-2D Corners

#### 5.3. Initial Value by PnP

#### 5.4. Refinement with Nonlinear Optimization

## 6. Experimental Results and Error Evaluation

#### 6.1. Setup

#### 6.2. Simulation for Corner Detection Error in the Point Cloud

#### 6.2.1. Simulation of the Point Cloud

#### 6.2.2. Error Results from the Simulation

#### 6.3. Detected Corners

#### 6.3.1. From the Image

#### 6.3.2. From the Point Cloud

#### 6.4. Estimated Extrinsic Parameters

#### 6.5. Re-Projection Error

#### 6.6. Re-Projection Results

## 7. Discussions

- Automatic segmentation. As the first step of the proposed method, automatic segmentation is performed. The current segmentation method is based only on distance information, which requires the chessboard to be spatially separated from the surrounding objects. Nevertheless, slight under-segmentation caused by the stand of the chessboard or over-segmentation caused by measurement noise may still occur. The degree of mis-segmentation produced by the segmentation method used in this work is experimentally shown to be negligible for corner estimation under the overall optimization of the proposed method.
- Simulation. To evaluate the performance of corner estimation with the proposed method, we approximately simulated the points by modeling the distance noise as a Gaussian distribution. However, the probability model for the noise of the reflectance intensity, which also affects corner estimation, is not considered. Under the influence of reflectance intensity noise, the real error of corner estimation is expected to be higher than the simulated results in this work. This is one of the reasons why the relative error for corner estimation is about 0.2%, as shown in Figure 13b, while the final re-projection error increases to 0.8% in Section 6.5. For a more precise simulation, a probability model of the reflectance value related to the incidence angle, the distance and the divergence of the laser beam needs to be formulated.
- Chessboard. As shown in Figure 11, both the horizontal and vertical intervals increase as the distance increases. To gather enough information for corner estimation, the side length of one grid square of the chessboard should be greater than 1.5 times the theoretical vertical interval at the farthest placement. In addition, the intersection angle between the diagonal line of the chessboard and the z-axis of the LiDAR should be less than ${15}^{\circ}$ so that as many patterns as possible are scanned. Since we use the panoramic image for calibration, it is better to place the chessboard in the center of the field of view of each camera so that the result remains unaffected by the stitching error.
- Correspondence of 3D and 2D corners. In this work, a chessboard with 6 × 8 patterns is used and the counting order is defined as starting from the lower left of the chessboard for automatic correspondence. To make the “lower left” identified correctly, the chessboard should be captured so that the “lower left” of the real chessboard coincides with that of the chessboard in the image during data acquisition. Also, the directions of the z-axes of the two sensors should be almost consistent, as shown in Figure 9b. However, these restrictions can be relaxed by introducing asymmetrical patterns in practical use.
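The grid-size guideline in the chessboard note above can be sketched numerically. This helper assumes the HDL-32e's nominal 1.33° vertical beam spacing and simple small-angle geometry for a board facing the sensor; the function names are illustrative, not from the paper:

```python
import math

def vertical_interval(distance_m, beam_spacing_deg=1.33):
    """Approximate vertical gap between adjacent scanlines on a board
    facing the LiDAR at the given distance."""
    return distance_m * math.tan(math.radians(beam_spacing_deg))

def min_grid_side(max_distance_m, factor=1.5):
    """Suggested minimum side length of one chessboard square: 1.5x the
    vertical scanline interval at the farthest placement distance."""
    return factor * vertical_interval(max_distance_m)
```

Under these assumptions, a board placed up to 3 m away would need grid squares of roughly 10 cm per side.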

## 8. Conclusions and Future Works

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LIDAR and optical images of urban scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009.
- Paparoditis, N.; Papelard, J.P.; Cannelle, B.; Devaux, A.; Soheilian, B.; David, N.; Houzay, E. Stereopolis II: A multi-purpose and multi-sensor 3d mobile mapping system for street visualisation and 3d metrology. Rev. Fr. Photogramm. Télédétec. **2012**, 200, 69–79.
- Szarvas, M.; Sakai, U.; Ogata, J. Real-time pedestrian detection using LIDAR and convolutional neural networks. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium (IV), Tokyo, Japan, 13–15 June 2006.
- Premebida, C.; Nunes, U. Fusing LIDAR, camera and semantic information: A context-based approach for pedestrian detection. Int. J. Robot. Res. (IJRR) **2013**, 32, 371–384.
- Schlosser, J.; Chow, C.K.; Kira, Z. Fusing LIDAR and images for pedestrian detection using convolutional neural networks. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016.
- Levinson, J.; Thrun, S. Automatic Online Calibration of Cameras and Lasers. Available online: http://roboticsproceedings.org/rss09/p29.pdf (accessed on 27 April 2017).
- Taylor, Z.; Nieto, J.; Johnson, D. Multi-modal sensor calibration using a gradient orientation measure. J. Field Robot. (JFR) **2014**, 32, 675–695.
- Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R.M. Automatic extrinsic calibration of vision and lidar by maximizing mutual information. J. Field Robot. (JFR) **2014**, 32, 696–722.
- Mirzaei, F.M.; Kottas, D.G.; Roumeliotis, S.I. 3D LIDAR–camera intrinsic and extrinsic calibration: Identifiability and analytical least-squares-based initialization. Int. J. Robot. Res. (IJRR) **2012**, 31, 452–467.
- Park, Y.; Yun, S.; Won, C.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors **2014**, 14, 5333–5353.
- García-Moreno, A.I.; Hernandez-García, D.E.; Gonzalez-Barbosa, J.J.; Ramírez-Pedraza, A.; Hurtado-Ramos, J.B.; Ornelas-Rodriguez, F.J. Error propagation and uncertainty analysis between 3D laser scanner and camera. Robot. Auton. Syst. **2014**, 62, 782–793.
- Powell, M.J.D. An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput. J. **1964**, 7, 155–162.
- Kneip, L.; Li, H.; Seo, Y. UPnP: An optimal O(n) solution to the absolute pose problem with universal applicability. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; Springer: Berlin, Germany, 2014; pp. 127–142.
- Levenberg, K. A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. **1944**, 2, 164–168.
- Marquardt, D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. **1963**, 11, 431–441.
- Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (improves camera calibration). In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004.
- Unnikrishnan, R.; Hebert, M. Fast Extrinsic Calibration of a Laser Rangefinder to a Camera. Available online: http://repository.cmu.edu/robotics/339/ (accessed on 27 April 2017).
- Pandey, G.; McBride, J.; Savarese, S.; Eustice, R. Extrinsic calibration of a 3D laser scanner and an omnidirectional camera. IFAC Proc. Vol. **2010**, 43, 336–341.
- Pandey, G.; McBride, J.R.; Eustice, R.M. Ford Campus vision and lidar data set. Int. J. Robot. Res. (IJRR) **2011**, 30, 1543–1552.
- Scaramuzza, D.; Harati, A.; Siegwart, R. Extrinsic self calibration of a camera and a 3D laser range finder from natural scenes. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, USA, 29 October–2 November 2007.
- Moghadam, P.; Bosse, M.; Zlot, R. Line-based extrinsic calibration of range and image sensors. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013.
- Gong, X.; Lin, Y.; Liu, J. 3D LIDAR-camera extrinsic calibration using an arbitrary trihedron. Sensors **2013**, 13, 1902–1918.
- Geiger, A.; Moosmann, F.; Car, O.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012.
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. (IJRR) **2013**, 32, 1231–1237.
- Atanacio-Jiménez, G.; González-Barbosa, J.J.; Hurtado-Ramos, J.B.; Ornelas-Rodríguez, F.J.; Jiménez-Hernández, H.; García-Ramirez, T.; González-Barbosa, R. LIDAR velodyne HDL-64E calibration using pattern planes. Int. J. Adv. Robot. Syst. **2011**, 8, 59.
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) **2000**, 22, 1330–1334.
- Scaramuzza, D.; Martinelli, A.; Siegwart, R. A toolbox for easily calibrating omnidirectional cameras. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, 9–15 October 2006.
- Rabbani, T.; Van Den Heuvel, F.; Vosselmann, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. (ISPRS) **2006**, 36, 248–253.
- Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM **1981**, 24, 381–395.
- Wang, W.; Sakurada, K.; Kawaguchi, N. Incremental and enhanced scanline-based segmentation method for surface reconstruction of sparse LiDAR data. Remote Sens. **2016**, 8, 967.
- Velodyne LiDAR, Inc. HDL-32E User’s Manual; Velodyne LiDAR, Inc.: San Jose, CA, USA, 2012.
- Jolliffe, I. Principal Component Analysis; Wiley Online Library: Hoboken, NJ, USA, 2002.
- Jones, E.; Oliphant, T.; Peterson, P. SciPy: Open Source Scientific Tools for Python. Available online: http://www.scipy.org (accessed on 27 April 2017).
- Rufli, M.; Scaramuzza, D.; Siegwart, R. Automatic detection of checkerboards on blurred and distorted images. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, 22–26 September 2008.
- Kneip, L.; Furgale, P. OpenGV: A unified and generalized approach to real-time calibrated geometric vision. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1–8.
- Point Grey Research, Inc. Technical Application Note (TAN2012009): Geometric Vision Using Ladybug^{®} Cameras; Point Grey Research, Inc.: Richmond, BC, Canada, 2012.

**Figure 1.** Data from an identical scene captured by the LiDAR sensor and the panoramic camera. (**a**) The points are colored by the reflectance intensity (blue indicates low intensity, red indicates high intensity); (**b**) The zoomed chessboard. We can see the changes in reflectance intensity of the point cloud between the white and black patterns; (**c**) The panoramic image of the same scene.

**Figure 3.** Angular resolution of the LiDAR used in this work. The left figure shows the top view and the right one the side view of the LiDAR and the chessboard.

**Figure 4.** Uniformity of the point distribution. (**a**) shows an example of a point cloud with better uniformity than (**b**).

**Figure 5.** The principle used to estimate corners in the points. (**a**) The chessboard model; (**b**) The scanned point cloud of the chessboard. Colors indicate the intensity (blue for low and red for high reflectance intensity); (**c**) Find a matrix that translates the most 3D points onto the corresponding patterns. Green points are estimated corners; (**d**) Consider the corners of the chessboard model as the corners of the point cloud.

**Figure 6.** Directions of the basis vectors relative to the LiDAR coordinate system. Blue arrow lines in the left figure represent the basis vectors decomposed by PCA. After transformation with the basis vectors, the chessboard’s points are mapped to ${X}^{P}O{Y}^{P}$ (the chessboard plane). Then we can apply the model described in Figure 5.

**Figure 7.** Estimated parameters for each frame. (**a**) Scatter diagram of the intensity for all points in the chessboard; (**b**) The histogram of the intensity. ${R}_{L},{R}_{H}$ can be found at the peaks of the two sides.

**Figure 8.** Cost definition for corner detection in the point cloud. (**a**) An example of a point falling into the wrong pattern. The square represents a white pattern of the chessboard; (**b**) An example of a point falling outside the chessboard model. (**a**) describes the first term and (**b**) describes the second term of the cost function (Equation (5)).

**Figure 10.** Distribution of the 20 chessboard positions. The chessboard is captured by the LiDAR from different heights and angles. The length of the coordinate axis is 1 m. Four groups of colors represent four positions of the chessboard for each horizontal camera. (**a**) Top view of the point clouds of the chessboard; (**b**) Side view of the point clouds of the chessboard.

**Figure 11.** Vertical field of view of Velodyne HDL-32e and the relationship between interval and distance. (**a**) Vertical angles of Velodyne HDL-32e; (**b**) Vertical field of view; (**c**) Relationship between the horizontal interval of two adjacent lasers and noise of the point cloud; (**d**) Relationship between the interval of two successive points of the scanline and the distance of the chessboard. Red lines in (**c**,**d**) show the range of chessboard distances used in this work.

**Figure 12.** Front view and side view of the chessboard’s point clouds from simulation results and real data. (**a**–**c**) Simulated point clouds with the multiplier 1 at different distances; (**d**–**f**) Simulated point clouds with the multiplier 2 at different distances; (**g**–**i**) Simulated point clouds with the multiplier 3 at different distances; (**j**–**l**) Real point clouds at different distances.

**Figure 13.** Corner detection error by simulation. The horizontal axes represent different simulation conditions and the vertical axes represent the relative error. Red points represent the mean value and the vertical lines represent the $3\sigma$ range of the results of 100 simulations at each condition. (**a**) Relationship between the error and noise of the point cloud at 1 m. The x axis represents the multiplier for the noise baseline; (**b**) Relationship between the error and the distance of the chessboard with the baseline noise.

**Figure 14.** Detected corners from the panoramic images. (**a**–**c**) Example results of detected corners from images with different poses and distances.

**Figure 15.** Estimated corners of the chessboard. (**a**) The fitted chessboard model of the point cloud in the real Velodyne LiDAR coordinate system; (**b**) The front view of the zoomed checkerboard; (**c**) The side view of the zoomed checkerboard.

**Figure 16.** Estimated parameters by the proposed method and Pandey’s Mutual Information (MI) method [8] with different initial values as the number of frames increases. (**a**–**c**) Rotation angle along each axis; (**d**–**f**) Translation along each axis.

**Figure 17.** Re-projection error calculation and results. (**a**,**b**) Shaded quadrilaterals show the regions of black and white patterns, respectively. Points mapped into these regions are counted for error calculation; (**c**) The errors for parameters estimated with different numbers of frames. The point and vertical line represent the mean and $3\sigma$ range of all errors calculated by applying the estimated parameters to the different numbers of frames.

**Figure 18.** Re-projected corners and points of the chessboard (best viewed when zoomed in). Big green circles and cyan lines indicate the detected corners. Small pink circles in big green circles and pink lines indicate re-projected corners estimated from the point cloud. Big blue and red circles represent the start and end for counting the corners. Blue points indicate low reflectance intensity and red points indicate high reflectance intensity.

**Figure 19.** Re-projection results. (**a**) All re-projected points on the color panoramic image, colored by intensity; (**b**) All re-projected points on the color panoramic image, colored by distance; (**c**) Re-projected result on the edge-extracted image with all points colored by intensity; (**d**) Re-projected result on the edge-extracted image with all points colored by distance.

**Figure 20.** Zoomed details of re-projected points. (**a**–**d**) Re-projected results on a chessboard, human, pillar and car, respectively. Each column shows re-projected points colored by intensity and distance on the original RGB images and edge-extracted images. Blue indicates low value and red indicates high value.

**Figure 21.** Projection of the RGB information from the image to the point cloud with the estimated extrinsic parameters. (**a**) An example of the colored point cloud; (**b**) The zoomed view of the chessboard in (**a**). The red points in (**a**,**b**) are the occluded region caused by the chessboard; (**c**) The zoomed view of the car in (**a**).

| Camera Index | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| Frame Index | 1, 6, 11, 16 | 2, 7, 12, 17 | 3, 8, 13, 18 | 4, 9, 14, 19 | 5, 10, 15, 20 |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wang, W.; Sakurada, K.; Kawaguchi, N.
Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard. *Remote Sens.* **2017**, *9*, 851.
https://doi.org/10.3390/rs9080851
