Article

Geometric Model and Calibration Method for a Solid-State LiDAR

1 Centre for Sensors, Instrumentation and Systems Development, Universitat Politècnica de Catalunya (CD6-UPC), Rambla de Sant Nebridi 10, 08222 Terrassa, Spain
2 Beamagine S.L., Carrer de Bellesguard 16, 08755 Castellbisbal, Spain
3 Image Processing Group, TSC Department, Universitat Politècnica de Catalunya (UPC), Carrer de Jordi Girona 1-3, 08034 Barcelona, Spain
* Author to whom correspondence should be addressed.
Sensors 2020, 20(10), 2898; https://doi.org/10.3390/s20102898
Submission received: 7 April 2020 / Revised: 15 May 2020 / Accepted: 18 May 2020 / Published: 20 May 2020
(This article belongs to the Section Optical Sensors)

Abstract

This paper presents a novel calibration method for solid-state LiDAR devices based on a geometrical description of their scanning system, whose angular resolution is not constant. Characterizing this distortion across the entire Field-of-View of the system yields accurate and precise measurements, which in turn enable the LiDAR to be combined with other sensors. On the one hand, the geometrical model is formulated using the well-known Snell's law and the intrinsic optical assembly of the system; on the other hand, the proposed method describes the scanned scenario with an intuitive camera-like approach that relates pixel locations to scanning directions. Simulations and experimental results show that the model fits real devices and that the calibration procedure accurately maps their varying resolution, so undistorted representations of the observed scenario can be provided. The calibration method proposed in this work is therefore applicable and valid for existing scanning systems, improving their precision and accuracy by an order of magnitude.

1. Introduction

Nowadays, Light Detection and Ranging (LiDAR) devices are targeted at a wide variety of applications, among which autonomous vehicles and computer vision for robotics stand out. Their first studies and uses, decades ago, were related to atmospheric observations [1,2,3] and airborne mapping [4,5]. Progressively, the 3D sensing capability of LiDARs pushed them towards more user-oriented devices, at the same time as audiovisual and computer vision applications such as object detection, RGB + depth fusion and augmented reality emerged [6,7].
Furthermore, the current disruption of autonomous driving and robotics has forced LiDAR technology to take a step forward in order to meet demanding specifications: long range and high spatial resolution, together with real-time performance and tolerance to background solar illumination. Initially, mechanical rotating LiDARs [8] appeared as the solution to this technical challenge, as they are based on sending light pulses instead of using amplitude-modulated illumination, which is prone to problems with background illumination outdoors. Mechanical LiDARs perform imaging by spinning a macroscopic element, either the whole sensor enclosure or an optical element such as a prism or a galvanometer mirror [9]. Nonetheless, moving parts generally mean large enclosures and poor mechanical tolerance to vibration, shock and impact. As a consequence, and although mechanical LiDARs have been widely used in research and some industrial applications [10,11,12], solid-state LiDARs, which avoid large moving parts, have attracted great interest because they also provide scalability, reliability and ease of embedding.
Currently, several scanning strategies that avoid moving parts are emerging. Because of their constant evolution, there is a vast body of literature, both scientific publications and patents, concerning how the scanning is performed. However, there is no detailed description of how the 3D data are actually calculated.
Since new computer vision techniques require accurate and precise 3D representations of the surroundings for critical applications, there is a real need for models and calibration algorithms able to correct or compensate any distortion in the LiDAR scanning. Given that research is still mainly based on mechanical techniques, there is a lack of calibration procedures for non-mechanical, solid-state devices, in contrast to rotating ones [13,14,15,16,17,18,19]. To date, most research on LiDAR imaging assumes that the LiDAR device directly provides (x, y, z) data that are precise enough [20,21,22,23,24,25,26].
Thus, the main aim of this work is to present a general-purpose, practical and understandable scanning model, together with a feasible calibration procedure, for non-mechanical, solid-state LiDAR devices. To do so, the geometry of the system and the sources of distortion are discussed in the second section of the paper. The third section then describes the proposed model and calibration algorithm, as well as the LiDAR devices used in this work. The fourth section shows the results of applying the suggested method to both devices, whereas the last section gathers the presented results and compares them with previous ones in order to draw conclusions. Two appendices have been added with the detailed derivation of some of the equations discussed.

2. Problem and Model Formulation

Imaging LiDARs are based on measuring the Time-of-Flight (TOF) for a number of points within their Field-of-View (FOV). The whole set of points, usually known as a point cloud, then becomes a 3D representation of the observed scenario. Without loss of generality, and regardless of the particular techniques used to obtain the TOF measurement and to perform the scanning, the $n$th measured point $p_n$ in the LiDAR reference system $\{L\}$ can be expressed as follows:
$$ {}^{\{L\}}p_n \equiv \left({}^{\{L\}}x_n,\ {}^{\{L\}}y_n,\ {}^{\{L\}}z_n\right)^T = \frac{c}{2}\, t_{TOF,n} \cdot {}^{\{L\}}\hat{s}_n, $$
where $c$ is the speed of light in air, $t_{TOF,n}$ is the TOF measurement and $\hat{s}_n$ is the unit vector representing the scanning direction of the LiDAR, described in its reference system $\{L\}$. From this equation, it can be appreciated that both the TOF measurement $t_{TOF,n}$ and the scanning direction $\hat{s}_n$ are crucial for obtaining accurate and precise point clouds.
The goal of this paper is to provide a method for characterizing the whole FOV of a solid-state LiDAR system in order to obtain more accurate and precise point clouds through mapping its angular resolution in detail. As a consequence, this section in particular is going to focus on the scanning direction of an imaging LiDAR system.
Firstly, the causes of imaging distortion are presented by focusing on a particular solid-state LiDAR system, which uses a two-axis micro-electro-mechanical system (MEMS) mirror [27] as a reflective surface to aim the light beam in both the horizontal and vertical directions by steering about its two axes. Therefore, the mechanics and dynamics of the scanning system are analyzed on the basis of the vectorial Snell's law which, eventually, leads to a non-linear expression relating the two tilt angles of the MEMS to the scanning direction $\hat{s}_n$. Comparable approaches for obtaining the non-linear relation between the scanning direction and the scanning principle may be taken for other scanning methods.
After presenting the causes of distortion, a general description of the LiDAR imaging system is introduced. It is based on a spherical description of the scanning direction and is suitable for a broad variety of scanning techniques. This description is used in the following sections to reach the goal of this paper. Let us start by addressing the problem of FOV distortion.

2.1. The Problem

Ideally, the FOV of the system, corresponding to the set of all $\hat{s}_n$, should have constant spacing, meaning constant spatial resolution. Nonetheless, such spacing is distorted as a result of varying resolution caused by optical and/or mechanical mismatch, no matter how the scanning is performed. Thus, these undesired artifacts must be characterized. Generally, this is done using ray tracing based on the well-known Snell's law in combination with some geometrical model of the scanner [14,16,18], as previously introduced.
Snell's law describes how light is reflected and transmitted at the interface between two media of different refractive indices, and it is used in optics to compute those angles and estimate light paths [28]. Commonly, it is expressed as a scalar function, so a 2D representation is obtained. However, there is an important assumption behind it: the incident ray and the reflected and/or transmitted rays lie in the same plane as the surface normal. Consequently, one of them is a linear combination of the other two vectors. Let us describe the reflected ray $\hat{r}$ as a linear combination of the incident ray $\hat{i}$ and the surface normal $\hat{n}$, all of them unit vectors:
$$ \hat{r} = \mu\,\hat{i} + \eta\,\hat{n}. $$
The vectorial Snell's law is obtained from the above equation by imposing the scalar law and using vector algebra in conjunction with the previously mentioned assumption (a detailed derivation of Equation (3) may be found in Appendix A.1):
$$ \hat{r} = \mu\,\hat{i} - \left[\mu\,(\hat{n}\cdot\hat{i}) - \sqrt{1 - \mu^2\left(1 - (\hat{n}\cdot\hat{i})^2\right)}\right]\hat{n}, $$
where $\eta$ in Equation (2) is the expression between brackets and $\mu = n_i/n_r$ is the ratio between the refractive indices of the incident and reflected media. Notice that in general $\mu = 1$, since the incident and reflected media are the same, usually air. Despite this simplification, the expression is non-linear because of the intrinsic $(\hat{n}\cdot\hat{i})$ dependence of $\eta$. Hence, this non-linearity in the surface normal is the origin of the optical distortion of the FOV, as the beam steering is performed by tilting the reflective surface. In addition, according to Figure 1b, which is explained below, an optical assembly called a field expander is commonly placed after the mirror in order to obtain a larger FOV. As a consequence, each lens surface interacts with the scanning direction following Equation (3), which introduces additional distortion as well.
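As an illustration, the following minimal Python sketch (assuming only NumPy; the function name `reflect` is ours, not the paper's) evaluates Equation (3) for a unit incident ray and surface normal, taking the positive root that corresponds to reflection:

```python
import numpy as np

def reflect(i_hat, n_hat, mu=1.0):
    """Vectorial Snell's law, Eq. (3): ray leaving a surface with unit normal
    n_hat for a unit incident ray i_hat. mu = n_i / n_r (1 for a mirror in air);
    the positive root selects the reflected solution (see Appendix A.1)."""
    c = np.dot(n_hat, i_hat)                                # n . i (negative when n faces the source)
    eta = -mu * c + np.sqrt(1.0 - mu**2 * (1.0 - c**2))     # bracketed term of Eq. (3)
    return mu * i_hat + eta * n_hat

# Sanity check: at normal incidence the ray is sent straight back.
print(reflect(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])))   # ~ [0, 0, -1]
```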
Let us now use geometric algebra to express how the tilt angles of the MEMS rotate the surface normal $\hat{n}$ and, consequently, the reflected ray $\hat{r}$ of Equation (3), which is the scanning direction. Figure 1 shows the set of reference systems and angles used in the geometrical description of the LiDAR: the reference system of the laser source $\{I\}$, that of the MEMS (or any reflective surface) at its resting or central position $\{M\}$, and that of the scanning optics $\{S\}$.
Appendix A.2 includes a detailed derivation of the relationships between them, resulting in Equation (A13). It leads to the following expression for the LiDAR scanning direction $\hat{s}$ in the laser source reference system $\{I\}$ as a function of the tilt angles of the mirror, $\alpha$ and $\beta$, for the horizontal and vertical directions respectively, and it may be verified to follow Equation (3):
$$ {}^{\{I\}}\hat{s} = {}^{\{I\}}\hat{i} - \left[\gamma(\alpha,\beta,\psi) - \sqrt{\gamma(\alpha,\beta,\psi)^2}\right]\,{}^{\{I\}}\hat{n} \quad \text{with} \quad {}^{\{I\}}\hat{n} = \begin{pmatrix} \sin\alpha\cos\beta \\ \cos\psi\sin\beta + \sin\psi\cos\alpha\cos\beta \\ \sin\psi\sin\beta - \cos\psi\cos\alpha\cos\beta \end{pmatrix} \quad \text{and} \quad \gamma(\alpha,\beta,\psi) = {}^{\{I\}}\hat{n}_3 = \sin\psi\sin\beta - \cos\psi\cos\alpha\cos\beta, $$
where, as expressed in the last equality, $\gamma$ is simply the third component of the normal vector $\hat{n}$ expressed in the laser source reference system $\{I\}$. It depends on the scanning tilt angles $\alpha$ and $\beta$ as well as on $\psi$, the angle between the incident direction of the laser beam and the mirror surface. Equation (4) makes even more explicit the non-linear relationship between the scanning direction $\hat{s}$ and the tilt angles $\alpha$ and $\beta$.
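The following short Python sketch (NumPy only; the helper names `mems_normal` and `scan_direction` are illustrative, not from the paper) evaluates Equation (4) and reproduces the specular deflection by $2\psi$ at the rest position derived in Appendix A.2:

```python
import numpy as np

def mems_normal(alpha, beta, psi):
    """Mirror normal in the laser-source frame {I} for tilt angles alpha
    (horizontal/fast) and beta (vertical/slow), mirror mounted at angle psi;
    see Eq. (4) and Appendix A.2. All angles in radians."""
    return np.array([
        np.sin(alpha) * np.cos(beta),
        np.cos(psi) * np.sin(beta) + np.sin(psi) * np.cos(alpha) * np.cos(beta),
        np.sin(psi) * np.sin(beta) - np.cos(psi) * np.cos(alpha) * np.cos(beta),
    ])

def scan_direction(alpha, beta, psi):
    """Scanning direction s in {I} from Eq. (4), i.e., Eq. (3) with mu = 1."""
    i_hat = np.array([0.0, 0.0, 1.0])          # laser emitted along the z axis of {I}
    n_hat = mems_normal(alpha, beta, psi)
    gamma = n_hat @ i_hat                       # third component of the normal
    return i_hat - (gamma - np.sqrt(gamma**2)) * n_hat

# At rest (alpha = beta = 0) the beam is specularly deflected by 2*psi (Eq. (A12)).
psi = np.deg2rad(25.0)
print(scan_direction(0.0, 0.0, psi))            # ~ [0, sin(2*psi), -cos(2*psi)]
```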
At this point, we can consider the effect of the scanning dynamics on the surface normal $\hat{n}$ due to the continuous tilt of the surface. Imaging at high frame rates is an essential requirement for the considered applications, and herein lies one of the key advantages of using MEMS mirrors. Being micro devices with large surface-area-to-volume ratios, they offer little resistance to motion and, more importantly, electromagnetic and fluidic forces become more relevant than inertial forces and torques. The overall effect is that they can be rapidly driven with fast electrical signals and, thanks to their low inertia, keep clear and well-defined dynamics at high frequencies [27].
In order to reach higher frame rates, one of the tilt angles is usually driven at a fast oscillation frequency near its resonance. As a consequence, its motion is no longer linear, nor even purely harmonic, and it becomes asymmetric, resulting in an additional distortion of the angular resolution, as further discussed in Section 3.1.
In summary, the distortion of the FOV is an unavoidable consequence of how the scanning is performed due to two key aspects. The first one resides in the optical nature of LiDAR devices, here demonstrated using Snell’s law. Additionally, the dynamics of the beam steering technique also introduces an important variation on the spatial resolution of the system. Thus, the following sections will provide the model and the method for characterizing the whole FOV of a solid-state LiDAR system through mapping its angular resolution in detail.

2.2. The Model

Let us start by defining the LiDAR reference system. Up to now, three different reference systems have been defined within the device as a result of applying rotation matrices. Taking into account that the scanning reference system $\{S\}$ results from a rotation of the laser source system $\{I\}$, which was defined to match the Cartesian coordinates, it can be stated that $\{S\}$ is an orthonormal basis of $\mathbb{R}^3$. Then, any point within this Euclidean space can be expressed as a linear combination of the three basis vectors of $\{S\}$. As a consequence, the LiDAR reference system $\{L\} = \{\hat{L}_1, \hat{L}_2, \hat{L}_3\}$ is defined as the scanning reference system $\{S\}$ and, for convenience, centered at the origin of the space.
Given a target point $Q$ with known $(X, Y, Z)$ coordinates, its expression in LiDAR coordinates is then ${}^{\{L\}}Q = X\hat{L}_1 + Y\hat{L}_2 + Z\hat{L}_3$, as Figure 2 represents. Since the TOF resolves the Euclidean distance to this point, the most useful coordinate system is the spherical one. Thus, the point can be expressed as in Equation (1), ${}^{\{L\}}Q = \frac{c}{2}t_{TOF}\left(\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta\right)^T$.
Let us now relate the polar angle $\theta$ and the azimuthal angle $\phi$ to the FOV of the system. Using trigonometry, both horizontal and vertical viewing angles can be related to $(X, Y, Z)$:
$$ \theta_H \equiv \theta_X = \arctan\left(\frac{X}{Z}\right) \quad \text{and} \quad \theta_V \equiv \theta_Y = \arctan\left(\frac{Y}{Z}\right). $$
From the previous definition of $\{L\}$, $\theta$ is the angle between the vector going from the center of the reference system to $Q$ and the LiDAR's optical axis $\hat{L}_3 = \hat{k}$. Thus:
$$ \theta = \arctan\left(\frac{\sqrt{X^2+Y^2}}{Z}\right) = \arctan\sqrt{\left(\frac{X}{Z}\right)^2 + \left(\frac{Y}{Z}\right)^2} = \arctan\sqrt{\tan^2(\theta_H) + \tan^2(\theta_V)}. $$
Both viewing angles range within $[-\mathrm{FOV}_i/2, \mathrm{FOV}_i/2]$, where $\mathrm{FOV}_i$ is the corresponding horizontal or vertical FOV of the device, $\mathrm{FOV}_H$ or $\mathrm{FOV}_V$. It can be shown that, for $|x| < \pi/2$, the expansions $\arctan(x) = x - x^3/3 + x^5/5 + O(x^7)$ and $\tan(x) = x + x^3/3 + 2x^5/15 + O(x^7)$ hold. Hence, taking the first order of each expansion, the polar angle range is $\theta \in \left[0, \sqrt{(\mathrm{FOV}_H/2)^2 + (\mathrm{FOV}_V/2)^2}\right]$. Consequently, the azimuthal angle is simply the arc-tangent of the ratio between the vertical and horizontal scanning angles, ranging within $\phi \in [0, 2\pi)$. To sum up, the final expression for any point $p \in L \subset \mathbb{R}^3$, where $L$ is the subset of points contained in the LiDAR's full FOV, as a function of the viewing angles $\theta_H$ and $\theta_V$, is as follows:
$$ {}^{\{L\}}p = \frac{c}{2}\,t_{TOF}\left(\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta\right)^T \quad \text{with} \quad t_{TOF} \ge 0, \quad \theta = \arctan\sqrt{\tan^2(\theta_H) + \tan^2(\theta_V)} \in \left[0, \sqrt{(\mathrm{FOV}_H/2)^2 + (\mathrm{FOV}_V/2)^2}\right] \quad \text{and} \quad \phi = \arctan\left(\frac{\theta_V}{\theta_H}\right) \in [0, 2\pi). $$
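As a minimal illustration of Equation (7), the sketch below converts a TOF measurement and the two viewing angles into a point in $\{L\}$; it assumes NumPy and uses `arctan2` so that the azimuth covers the full $[0, 2\pi)$ range, which is an implementation choice rather than the paper's notation:

```python
import numpy as np

def lidar_point(t_tof, theta_h, theta_v, c=299_792_458.0):
    """Point in the LiDAR frame {L} from a TOF measurement and the two viewing
    angles, following Eq. (7). Angles in radians, t_tof in seconds."""
    r = 0.5 * c * t_tof                                     # range from the time of flight
    theta = np.arctan(np.sqrt(np.tan(theta_h)**2 + np.tan(theta_v)**2))
    phi = np.arctan2(theta_v, theta_h)                      # azimuth, quadrant-aware
    return r * np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

# Example: a return after ~26.7 ns (about 4 m) at 10 deg horizontal, 5 deg vertical.
print(lidar_point(26.7e-9, np.deg2rad(10.0), np.deg2rad(5.0)))
```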

3. Materials and Methods

In this section, we present the method whose scope is to map both viewing angles $\theta_H$ and $\theta_V$ in order to characterize the varying angular resolution discussed in the previous sections and to provide reliable point clouds using Equation (7); the tested devices and the calibration target are also described.

3.1. Calibration Method

Since the intensity of the back-reflected pulse can also be measured, let us from now on consider a frame of the scanning optics as an image. With this approach, each scanned direction becomes a pixel of the image, as depicted in Figure 3. Ideally, the angular resolution from one pixel to the next, $\Delta\theta$, would be constant across the whole FOV, so the viewing angles $\theta_H$ and $\theta_V$ for a pixel at row and column $(i, j)$ would simply be linear, as follows:
$$ \theta_H(j) = \left(j - \frac{N_H}{2}\right)\frac{\mathrm{FOV}_H}{N_H} = \left(j - \frac{N_H}{2}\right)\Delta\theta_H \quad \text{and} \quad \theta_V(i) = \left(i - \frac{N_V}{2}\right)\Delta\theta_V, $$
where $N_H$ and $N_V$ represent the number of columns and rows (horizontal and vertical measurements), respectively.
Nonetheless, it has already been demonstrated that the FOV is distorted, so non-linear terms must be carefully added. Traditionally, optical distortion has been modelled as a combination of radial and tangential distortion [29,30,31]. In the case of LiDAR scanning, however, not only do these cross-correlated optical terms play a role but so do the dynamics of the MEMS scanning, as previously discussed. Generally, laser pulses are not sent when the oscillation is at its edges, so most of its linear regime is exploited, as can be appreciated in Figure 3.
Moreover, radial symmetry is lost, so a proper distortion model is needed. For convenience, the fastest scanning direction, the horizontal one, is split by its two directions of motion in order to isolate the strongest asymmetric behaviour of the MEMS. Thus, the odd and even lines of the image are treated independently, as two separate images, yielding two independent distortion mapping functions, $f_{\mathrm{odd}}$ and $f_{\mathrm{even}}$, which relate the pixel position $(i, j)$ in the acquisition frame to the resulting viewing angles $\theta_H$ and $\theta_V$. They are defined in Equation (9) and shown in Figure 3b.
Following the literature on distortion mapping [32,33,34], different non-linear equations were proposed and tested during the development of this work; they are presented in Section 3.2. Defining $\tilde{\imath} = (i - N_V/2)$ and $\tilde{\jmath} = (j - N_H/2)$, and $A$ and $B$ as the vectors containing the sets of parameters for the odd and even horizontal scanning directions respectively, the two viewing angles may be expressed, without loss of generality, as:
$$ f : (i, j) \rightarrow (\theta_H, \theta_V) \qquad \begin{pmatrix}\theta_H(i,j)\\ \theta_V(i,j)\end{pmatrix}_{\mathrm{odd}} = f(\tilde{\imath}, \tilde{\jmath}, A) \equiv f_{\mathrm{odd}} \qquad \begin{pmatrix}\theta_H(i,j)\\ \theta_V(i,j)\end{pmatrix}_{\mathrm{even}} = f(\tilde{\imath}, \tilde{\jmath}, B) \equiv f_{\mathrm{even}}. $$
Now, finding both mapping functions $f_{\mathrm{odd}}$ and $f_{\mathrm{even}}$ reduces to solving a non-linear least squares (NLSQ) problem for a set of $M$ control viewing angles $\theta_H^m$ and $\theta_V^m$ measured at different pixel positions $(i_m, j_m)$, so that both parameter vectors $A$ and $B$ minimize an objective function $g$. Our suggested objective function is based on the angular error between the control viewing angles and the ones estimated at the measured pixel positions on the image using the mapping functions $f_{\mathrm{odd}}$ and $f_{\mathrm{even}}$, as expressed below:
$$ \{A, B\} = \arg\min_X \left\| g_k(i_m, j_m, X) \right\|_2^2 = \arg\min_X \left\| \begin{pmatrix}\theta_H^m \\ \theta_V^m\end{pmatrix}_k - f_k(i_m, j_m, X) \right\|_2^2 = \arg\min_X \sum_{m=1}^{M} \left\| \begin{pmatrix}\theta_H^m - \theta_H \\ \theta_V^m - \theta_V\end{pmatrix}_k \right\|_2^2, \qquad m = 1, \ldots, M, \quad k = \mathrm{odd}, \mathrm{even}. $$
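A minimal sketch of this NLSQ fit is shown below using SciPy's `least_squares`; the callable `f`, the initial guess `x0` and the variable names are illustrative assumptions, and the actual implementation described later was done in Matlab©.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_mapping(f, i_m, j_m, theta_h_m, theta_v_m, x0):
    """Solve the NLSQ problem of Eq. (10) for one parity (odd or even lines).

    f(i, j, params) -> (theta_h, theta_v) is one of the mapping functions of
    Section 3.2; (i_m, j_m) are the pixel positions of the M control points and
    (theta_h_m, theta_v_m) their measured viewing angles; x0 is an initial guess."""
    def residuals(params):
        th, tv = f(i_m, j_m, params)
        return np.concatenate([th - theta_h_m, tv - theta_v_m])   # stacked angular errors
    return least_squares(residuals, x0).x

# A is obtained from the control points of the odd-line image and B from the
# even-line image by running this fit twice.
```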

3.2. Distortion Mapping Equations

Let us now introduce the three different sets of mapping functions $f_k$ (with $k = \mathrm{odd}, \mathrm{even}$) that have been proposed and tested during this work. As previously mentioned, the mapping functions aim to model the discussed angular variation that causes the FOV distortion. Thus, non-linear terms must be added to the ideal linear case presented in Equation (8), where the angular resolution is constant.
According to this, the linear term has been expanded up to a third-order polynomial in order to include the effect of the varying angular velocity and acceleration of the MEMS. This expansion is common to all the proposed mapping functions and relates exclusively the effect on the scanning direction, horizontal or vertical, caused by its corresponding pixel direction on the image.
On the one hand, $\theta_{H,0}$ and $\theta_{V,0}$ account for a global shift of the viewing angles in each direction. On the other hand, $\Delta\tilde{\theta}_H$ and $\Delta\tilde{\theta}_V$ can be considered the mean angular resolution, as if the FOV were homogeneous. The following non-linear terms are related to the MEMS scanning dynamics: $\omega_H$ and $\omega_V$ are related to the MEMS angular velocities due to their even symmetry, whereas $\Omega_H$ and $\Omega_V$ are related to its angular acceleration because of their odd symmetry.
Firstly, an equation similar to the widely used optical distortion model [35] is proposed. This model introduces a radial distortion, with terms proportional to the radial distance to the center of the image, plus a couple of cross-correlated terms with a distortion center in pixels, known as the tangential distortion. However, since the MEMS dynamics is asymmetric, we have introduced different cross-correlated terms following the literature on distortion mapping for optical displays [32,33,34], which give the cross-effect of the scanning directions more degrees of freedom (DOF) than the traditional imaging distortion model used in the first approach.
Then, the second and the third proposed mapping functions contain the same number of cross-correlated terms, three in particular, but they differ in the DOF given to the optical distortion centers in pixels. Whereas the second one presents a unique distortion center for all the terms, the third one has different individual distortion centers for each term becoming the mapping function with the highest DOF possible.
Having discussed the part common to the three proposed mapping functions, as well as the inclusion of the cross-correlated terms, let us now introduce them separately in detail. Recall that the pixel positions are centered as $\tilde{\imath} = (i - N_V/2)$ and $\tilde{\jmath} = (j - N_H/2)$.
Map 1: Optical-Like Mapping
Defining the radial term as $\tilde{r} = \tilde{\imath}^2 + \tilde{\jmath}^2$, the traditional radial ($R_n$) and tangential ($P_n$) optical coefficients are applied with a common distortion center on the image, $(i_c, j_c)$.
$$ f_k^{M1} : (i,j) \rightarrow (\theta_H, \theta_V) $$
$$ \theta_H(i,j) = \theta_{H,0} + \Delta\tilde{\theta}_H(\tilde{\jmath} + j_c) + \omega_H(\tilde{\jmath} + j_c)^2 + \Omega_H(\tilde{\jmath} + j_c)^3 + R_1\tilde{r} + R_2\tilde{r}^2 + R_3\tilde{r}^4 + P_1\left(\tilde{r} + 2(\tilde{\jmath} + j_c)^2\right) + 2P_2(\tilde{\jmath} + j_c)(\tilde{\imath} + i_c) $$
$$ \theta_V(i,j) = \theta_{V,0} + \Delta\tilde{\theta}_V(\tilde{\imath} + i_c) + \omega_V(\tilde{\imath} + i_c)^2 + \Omega_V(\tilde{\imath} + i_c)^3 + R_1\tilde{r} + R_2\tilde{r}^2 + R_3\tilde{r}^4 + 2P_1(\tilde{\jmath} + j_c)(\tilde{\imath} + i_c) + P_2\left(\tilde{r} + 2(\tilde{\imath} + i_c)^2\right). $$
Map 2: Cross Mapping
Instead of using the traditional distortion coefficients, the cross terms are introduced with more general functions, still using the same common distortion center. The $P_{H,n}$ coefficients apply to the cross terms affecting $\theta_H$, whereas the $P_{V,n}$ apply to those affecting $\theta_V$.
$$ f_k^{M2} : (i,j) \rightarrow (\theta_H, \theta_V) $$
$$ \theta_H(i,j) = \theta_{H,0} + \Delta\tilde{\theta}_H(\tilde{\jmath} + j_c) + \omega_H(\tilde{\jmath} + j_c)^2 + \Omega_H(\tilde{\jmath} + j_c)^3 + P_{H,1}(\tilde{\jmath} + j_c)(\tilde{\imath} + i_c) + P_{H,2}(\tilde{\jmath} + j_c)^2(\tilde{\imath} + i_c) + P_{H,3}(\tilde{\jmath} + j_c)(\tilde{\imath} + i_c)^2 $$
$$ \theta_V(i,j) = \theta_{V,0} + \Delta\tilde{\theta}_V(\tilde{\imath} + i_c) + \omega_V(\tilde{\imath} + i_c)^2 + \Omega_V(\tilde{\imath} + i_c)^3 + P_{V,1}(\tilde{\jmath} + j_c)(\tilde{\imath} + i_c) + P_{V,2}(\tilde{\jmath} + j_c)^2(\tilde{\imath} + i_c) + P_{V,3}(\tilde{\jmath} + j_c)(\tilde{\imath} + i_c)^2. $$
Map 3: Multi-Decentered Cross-Mapping
Finally, rather than using a single distortion center on the image for all terms, this equation gives every term the freedom to have its own distortion center.
$$ f_k^{M3} : (i,j) \rightarrow (\theta_H, \theta_V) $$
$$ \theta_H(i,j) = \theta_{H,0} + \Delta\tilde{\theta}_H(\tilde{\jmath} + j_0) + \omega_H(\tilde{\jmath} + j_{\omega 0})^2 + \Omega_H(\tilde{\jmath} + j_{\Omega 0})^3 + P_{H,1}(\tilde{\jmath} + j_{P1})(\tilde{\imath} + i_{P1}) + P_{H,2}(\tilde{\jmath} + j_{P2})^2(\tilde{\imath} + i_{P2}) + P_{H,3}(\tilde{\jmath} + j_{P3})(\tilde{\imath} + i_{P3})^2 $$
$$ \theta_V(i,j) = \theta_{V,0} + \Delta\tilde{\theta}_V(\tilde{\imath} + i_0) + \omega_V(\tilde{\imath} + i_{\omega 0})^2 + \Omega_V(\tilde{\imath} + i_{\Omega 0})^3 + P_{V,1}(\tilde{\jmath} + j_{P1})(\tilde{\imath} + i_{P1}) + P_{V,2}(\tilde{\jmath} + j_{P2})^2(\tilde{\imath} + i_{P2}) + P_{V,3}(\tilde{\jmath} + j_{P3})(\tilde{\imath} + i_{P3})^2. $$
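For illustration, the sketch below implements the cross mapping of Equation (12) (Map 2) as a plain Python/NumPy function; the parameter packing order and the helper name `map2` are our own choices, not the paper's.

```python
import numpy as np

def map2(i, j, params, n_rows, n_cols):
    """Map 2 of Eq. (12): third-order polynomial per scan direction plus three
    cross terms, all sharing one distortion centre (i_c, j_c). params packs
    [theta_H0, dtheta_H, w_H, W_H, P_H1, P_H2, P_H3,
     theta_V0, dtheta_V, w_V, W_V, P_V1, P_V2, P_V3, i_c, j_c]."""
    (th0, dth, wh, Wh, ph1, ph2, ph3,
     tv0, dtv, wv, Wv, pv1, pv2, pv3, ic, jc) = params
    jt = (j - n_cols / 2.0) + jc            # centred column, j~ + j_c
    it = (i - n_rows / 2.0) + ic            # centred row, i~ + i_c
    theta_h = (th0 + dth * jt + wh * jt**2 + Wh * jt**3
               + ph1 * jt * it + ph2 * jt**2 * it + ph3 * jt * it**2)
    theta_v = (tv0 + dtv * it + wv * it**2 + Wv * it**3
               + pv1 * jt * it + pv2 * jt**2 * it + pv3 * jt * it**2)
    return theta_h, theta_v
```

Bound to the frame size (for instance with `functools.partial`), such a callable can be passed directly to the NLSQ fit of Equation (10) sketched in Section 3.1.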

3.3. Calibration Pattern and Algorithm

In order to obtain the set of measurements, a regular 4 × 2 m grid with 200 mm wide squares was built on a planar wall using absorbent optical duct tape, as depicted in Figure 4. The LiDAR prototype is then placed at a known distance Z facing the pattern, so both viewing angles can be resolved using Equation (5) for all the scanned grid intersections.
The scheme in Figure 5 describes the step-by-step procedure presented in pseudocode in Algorithm 1, which was implemented in practice using Matlab© with standard functions from the Signal Processing, Computer Vision and Optimization libraries. Recall that, for the proposed calibration method, a frame of the scanning optics is considered as an image composed of even and odd scanning lines, which we split into two images as discussed above and shown in Figure 3. Pixel locations from both the odd and even images are obtained with sub-pixel resolution by means of line detection using image processing, which includes binarizing the image in order to work with the pixel locations corresponding to the black lines of the pattern, fitting them with two-dimensional lines on the image and calculating their intersections. Afterwards, these pixel locations, in conjunction with the known grid dimensions and the LiDAR position, are fed to an optimization solver according to the NLSQ problem of Equation (10) in order to estimate the parameter vectors A and B of the chosen mapping function. Finally, using the spherical description of the LiDAR scanning direction of Equation (7), the set of scanning directions is calculated from the mapped θ_H and θ_V. Thus, the generated point cloud is calibrated over the whole FOV of the system.
Algorithm 1: Image processing for obtaining the pixel locations of the lines’ intersections.
(Algorithm 1 is provided as a figure in the original article.)
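Since Algorithm 1 is only reproduced as a figure, the following Python sketch illustrates the core line-fitting and intersection step it describes, using a total-least-squares line fit; the helper names and the grouping strategy outlined in the comments are assumptions, not the authors' Matlab© implementation.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares fit of a 2D line a*x + b*y + c = 0 to an (N, 2)
    array of pixel coordinates belonging to one grid line."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    a, b = -vt[0, 1], vt[0, 0]              # normal to the principal direction
    return a, b, -(a * centroid[0] + b * centroid[1])

def intersection(l1, l2):
    """Sub-pixel intersection of two lines given as (a, b, c) coefficients."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    return np.linalg.solve([[a1, b1], [a2, b2]], [-c1, -c2])

# Outline for one parity image: (1) binarise the intensity image to isolate the
# dark pattern lines, (2) group the dark pixels into individual horizontal and
# vertical grid lines, (3) fit each group with fit_line() and intersect the
# horizontal with the vertical lines to obtain the control positions (i_m, j_m).
```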

3.4. Prototypes

In order to test the method with different FOVs and distortions, two long-range solid-state LiDAR prototypes from Beamagine S.L. with different FOVs have been used: the first with a 30 × 20° FOV and 300 × 150 pixels (45 k points), and the second with a 50 × 20° FOV and 500 × 150 pixels (75 k points).
Both scanning systems are based on MEMS technology, driving them with voltage control signals in both horizontal and vertical directions, the horizontal being the fastest one, resulting in a frame rate of 10 fps for both of them. As can be appreciated in Figure 6, which shows two images of the calibration pattern acquired with the two presented prototypes placed at the same distance, the second system presents a stronger distortion and asymmetry in MEMS dynamics due to its wider FOV.

4. Results

Once the model and the method for characterizing the scanning directions of a solid-state LiDAR system through mapping its angular resolution have been introduced and discussed, let us now present their results. Firstly, we are going to use the model for simulating the 30 × 20 ° FOV prototype and comparing it with experimental results. Later on, the calibration results for both prototypes are going to be presented and discussed.

4.1. Model Simulation

Firstly, the combined effect of Snell's law and the non-linear MEMS dynamics on the angular distortion was studied. Using Equation (4) and the set of reference systems presented in Figure 1, the whole set of scanning directions $\hat{s}_n$ may be calculated from the values of the tilt angles of the MEMS mirror, $\alpha$ and $\beta$. Afterwards, according to Equation (5) and Figure 2, both horizontal and vertical scanning angles are obtained by applying a constant optical magnification over the whole FOV of the system in each direction.
In the case of the 30 × 20° FOV prototype, the system creates a point cloud of 300 × 150 pixels and the MEMS is tilted ψ = 25° relative to the laser source, according to specifications. The angular resolution in both scan directions is simulated using two approaches: a perfectly linear variation of the angles α and β (thus including only the distortion due to Snell's law) and, in a more realistic approach, modelling the fast axis α of the mirror as a sinusoidal variation, as Figure 7a,c show respectively.
If the system were ideal and had a constant angular resolution, the probability of obtaining a given value of angular resolution would be a delta function at $\mathrm{FOV}_H/N_{\mathrm{columns}}$ and at $\mathrm{FOV}_V/N_{\mathrm{rows}}$, as depicted with an arrow in Figure 7b. The wider the histogram, the larger the deviation from the ideal homogeneous, linear case, as Figure 7d depicts.
Results show that even if the MEMS dynamics were completely linear, a slight broadening of the angular resolution would still appear due to the intrinsic non-linearity of Snell's law. This widening of the delta function increases significantly if the motion of the mirror is not linear, as shown in Figure 7d, where the non-linearity of the sinusoidal movement in α results in a widely distributed value of Δθ_H.
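The sketch below reproduces the spirit of this simulation for a single horizontal line of the 30 × 20° prototype (ψ = 25°, 300 columns, values taken from the text); the mechanical tilt amplitude is an illustrative assumption, not the real drive amplitude.

```python
import numpy as np

# One horizontal line (beta = 0) of the 30x20 deg, 300-column prototype with
# psi = 25 deg (values from the text); alpha_max is an illustrative amplitude.
psi, n_cols, alpha_max = np.deg2rad(25.0), 300, np.deg2rad(7.5)

def theta_h(alpha):
    """Horizontal viewing angle for mirror tilt alpha: Eq. (4) with beta = 0,
    projected onto the scanning basis {S} of Eq. (A13)."""
    n = np.array([np.sin(alpha), np.sin(psi) * np.cos(alpha),
                  -np.cos(psi) * np.cos(alpha)])
    s = np.array([0.0, 0.0, 1.0]) - (n[2] - np.abs(n[2])) * n
    s3 = np.array([0.0, np.sin(2 * psi), -np.cos(2 * psi)])     # optical axis in {I}
    return np.arctan2(s[0], s @ s3)

t = np.linspace(-1.0, 1.0, n_cols)
res_linear = np.diff([np.degrees(theta_h(a)) for a in alpha_max * t])
res_sine = np.diff([np.degrees(theta_h(a)) for a in alpha_max * np.sin(0.5 * np.pi * t)])

# The spread of each set of pixel-to-pixel spacings measures the deviation from
# the ideal delta-like angular resolution (cf. Figure 7).
print(res_linear.std(), res_sine.std())
```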
To further depict the effect, the image resulting from scanning the calibration pattern from a known distance of 3.8 m is simulated and qualitatively compared with an experimental capture; the comparison becomes quantitative later on, when the angular resolutions across the whole FOV are compared, as well as in Section 4.3. Both images are shown in Figure 8. Both present curves instead of straight lines as a result of the fluctuation in angular resolution caused by the different non-linearities in the system, as expected. Notice also how the vertical lines are especially modified at the bottom of the image, where they bend towards the center, suggesting that not only the fast axis of the MEMS mirror but also the slow one presents a non-linear behavior, as will be discussed later. Moreover, the asymmetric behaviour between odd and even rows of the image can be appreciated.

4.2. Calibration Results

After simulating the model, the two prototypes under consideration are placed at a known distance of 3.8 m from the grid pattern described in Section 3 in order to test the calibration algorithm. The grid lines on the image are tracked and fitted, resulting in 45 and 95 intersections respectively for each LiDAR prototype, as Figure 9 shows; these are the measured pixel positions $(i_m, j_m)$.
According to Equation (10), the minimization algorithm is fed with the set of measured scanning angles $\{\theta_H^m, \theta_V^m\}$ and their respective pixel positions $(i_m, j_m)$ in the image in order to find the parameters of the three mapping functions introduced in Section 3.2. In order to compare their performance, the error between the measured scanning angle and the one estimated with the mapping function is computed for all the control points. As figures of merit, we use the mean value of the angular error over all control points, its standard deviation, and the angular error value for which the Gamma distribution fitted to its probability density function (PDF) reaches a confidence level of 95%, as shown in Figure 10.
In addition, the rectangular region comprised inside the distorted FOV is considered the homogeneous FOV of the system, as discussed previously. Notice that this rectangular region will always be smaller than the designed FOV due to distortion, providing lower but indicative angular values. Table 1 presents the obtained results for the figures of merit of each mapping function, for both prototypes.
The table shows that Map 1, the optical-like mapping function with radial and tangential coefficients, results in a less accurate estimation of the viewing angles than the other non-linear mapping functions tested, for both LiDARs. The assumption that the optical distortion has radial symmetry is considered to be the principal cause of this behavior. Moreover, despite their comparable performance on the 30 × 20° device, the Map 3 function shows improved results when the FOV of the system is wider and, as a consequence, the distortion is stronger. This can be attributed to its additional degrees of freedom for the optical distortion centers of the non-linear terms with respect to Map 2. However, notice that the homogeneous FOV is quite similar for all mapping functions with respect to the prototypes' designed FOV. Consequently, from now on the presented results correspond to the third mapping function, Map 3.
Figure 10 shows the Probability Density Function (PDF) of the error committed in the mapping of the viewing angles for all the control points. On one hand, the PDF is narrower in the vertical direction for both LiDAR prototypes, as it is the slower direction of motion. On the other hand, the 30 × 20 ° FOV prototype presents smaller errors in both scanning directions using the same mapping function, which is coherent with the fact that the wider the FOV of the system, the higher the distortion it will present.
In order to relate the analytic model to the results obtained with the proposed calibration method, Figure 11 compares the variation of the angular resolution for the 30 × 20° FOV prototype obtained from the simulated sinusoidal MEMS dynamics of the fast axis (Figure 11a) against the variation obtained through the mapping function Map 3 (Figure 11b). Additionally, it also presents the results of the same Map 3 for the 50 × 20° device in Figure 11c.
On the one hand, the above statement regarding the effect of the wider FOV on distortion can be observed again comparing Figure 11b against Figure 11c. On the other hand, it is easily observed that the mapped variations of angular resolution in the horizontal direction presented in Figure 11b are closer to the ones described in the simulated case Figure 11a than the vertical ones. This indicates that even the motion of the slow scanning direction is not completely linear.
This is further shown in Figure 12, where the surface plots of the horizontal and vertical angular resolution variation maps of the 30 × 20° FOV prototype are presented over its whole FOV. Figure 12a,b correspond to the simulated variations, whereas Figure 12c,d correspond to the ones measured using the mapping function Map 3. Ideally, we would expect a flat surface for the vertical angular resolution variation, which is what Figure 12b shows; experimentally, however, a slight variation exists, as Figure 12d presents.
The angular error committed on the control points, previously presented in Figure 10, can be fitted with a biharmonic spline in order to interpolate such error across the whole FOV of the system, as Figure 13c,d show. Hence, we can compare the angular resolution variation in Figure 13a,b, for the horizontal and vertical directions respectively, with the corresponding committed errors of the mapping function in Figure 13c,d.
It can be appreciated that the committed angular error is below the order of magnitude of the angular resolution of the LiDAR, even one order of magnitude below in some regions of the FOV. Thus, it can be concluded that the tested mapping functions are good approximations of the variable angular resolution across the FOV of a LiDAR device.
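A possible implementation of this interpolation step is sketched below with SciPy's `RBFInterpolator`, using a thin-plate-spline kernel as a stand-in for the biharmonic spline mentioned above; the function and argument names are ours.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def error_map(ij_control, angular_error, n_rows, n_cols):
    """Interpolate the angular error measured at the control points over the
    full frame with a smooth radial-basis spline (thin-plate spline used here
    as a stand-in for a biharmonic spline)."""
    interp = RBFInterpolator(ij_control, angular_error, kernel='thin_plate_spline')
    ii, jj = np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing='ij')
    grid = np.column_stack([ii.ravel(), jj.ravel()])
    return interp(grid).reshape(n_rows, n_cols)
```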
Finally, notice that both Figure 12 and Figure 13 show that the center of the point cloud has a lower density of points than the edges of the FOV, because the points there are more separated, particularly in the horizontal direction. This result is consistent with the MEMS dynamics, since in the middle of the horizontal scanning line the mirror velocity is higher than at the edges, where the change in scanning direction takes place and damps the oscillation.

4.3. Impact on the Point Cloud

So far, the results of the presented method have been expressed in terms of the scanning angles but, obviously, such calibration errors have quantitative effects on the accuracy of the measured point cloud.
Using error propagation theory, the final point error $\delta(i,j) = (\delta_X, \delta_Y, \delta_Z)^T$ and its standard deviation $\langle\sigma_k\rangle$, which is interpreted as its uncertainty, can be derived from the angular errors $\epsilon_H$ and $\epsilon_V$ of the mapped scanning angles depicted in Figure 13. Hence, we can estimate the final lateral accuracy of the device in millimeters across its whole FOV, bearing in mind that this error is proportional to the measured range. In addition, comparing these values with the ones resulting from the ideal (but unrealistic) case of homogeneous angular resolution provides a quantitative measure of the improvement brought by the proposed method.
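As a first-order illustration of this propagation (not the full derivation used in the paper), the lateral displacement produced by a small angular error is approximately the range times the error:

```python
import numpy as np

def lateral_error(range_m, eps_h, eps_v):
    """First-order propagation: for small angular errors (radians) the lateral
    displacement on the target is approximately range * error in each direction."""
    return np.hypot(range_m * eps_h, range_m * eps_v)

# E.g. a 0.05 deg residual at 40 m corresponds to roughly 35 mm laterally.
print(1e3 * lateral_error(40.0, np.deg2rad(0.05), 0.0))
```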
With this aim, Figure 14 compares the initial non-calibrated and the final calibrated point cloud distortions for the two prototypes described here. The PDFs in Figure 14a,c show the improvement in the angular error for the 30 × 20° FOV prototype and the 50 × 20° one respectively, whilst Figure 14b,d present the improvement in the distance error, which comes from propagating the angular error as previously discussed. Notice that the final distance error $\delta(i,j)$ is a three-dimensional error but, in order to compare a single quantity, only its norm $\|\delta(i,j)\|$ has been considered.
From Figure 14a,c, it can be observed that the dashed lines, which correspond to the ideal non-calibrated case, represent a wider angular error in both directions than the solid lines, which result from the calibration. Propagating such angular errors and comparing the norms of the distance error for each scanning position across the whole FOV of the prototypes, as Figure 14b,d depict, shows a very significant improvement of the calibration method over the non-calibrated cases. A larger FOV also means a larger error, which highlights the relevance of a calibration process like the one presented here for LiDAR devices with larger FOVs.
As discussed earlier, the fastest scanning direction presents higher distortion and, consequently, larger error. Since this direction is the horizontal one and the non-linear dynamics of the MEMS are greater at the extremes, the magnitude of the final distance error increases in the same direction as well. Figure 14 shows that, for both LiDAR prototypes, the PDF of the angular error is narrower after applying the suggested calibration. As a result, the final distance errors are one order of magnitude smaller than the ones obtained for the homogeneous angular case. In particular, the mean value of the error and its uncertainty across the whole FOV are reduced by factors of ∼1/40 and ∼1/30, respectively.
Empirically, these improvements can be directly observed in the measured point clouds. Figure 15 compares a point cloud of the same scenario obtained with the presented calibration method against one assuming constant angular resolution. The scene was captured using the 30 × 20° FOV prototype which, as mentioned, is the one with the smaller distance error. Even in that case, the road is not completely straight in the uncalibrated point cloud, as Figure 15a shows.
This is better seen if the ground plane is rotated to the Y = 0 plane so that the orthogonal projections of the point cloud from above can be depicted, as Figure 15c,d present. The projection of the rods forming the sidewalk barrier should be a straight line. On the one hand, the rods visibly do not form a straight line in the non-calibrated point cloud, Figure 15c, which means that the lateral accuracy is poorer. On the other hand, they also appear wider, so the lateral precision is also inferior.
In order to provide quantitative indicators, the real distance between each road pin was physically measured with a measuring tape and taken as the ground truth shown in Figure 15f. Then, the computed centroid of each road pin on the point cloud is projected on the ground plane Y = 0 and it is compared to the ground truth, obtaining a distance in metric units which represents the absolute metric error on the point cloud due to the FOV distortion.
The mean value of the considered absolute error is of several hundred millimeters for the non-calibrated case whereas, for the calibrated one, it is one order of magnitude smaller. In particular, for these captures, which range between 20 and 40 m from the LiDAR, the mean lateral error and its uncertainty are ±20 mm (±30 mm) for the calibrated case and ±700 mm (±300 mm) for the non-calibrated one. These results closely match the expected lateral error and uncertainty shown in Figure 14, confirming the one-order-of-magnitude improvement provided by the calibration. It should be noted that, at a range of 100 m, the corresponding lateral accuracies of the calibrated point cloud would be ±48 mm (±32 mm) and ±77 mm (±42 mm) for the 30 × 20° and 50 × 20° FOV LiDAR prototypes, respectively.

5. Conclusions

We have introduced a geometrical model for the scanning system of a LiDAR device based only on Snell's law and its specific mechanics, which allows the same procedure to be used for other scanning techniques, either solid-state or mechanical, as long as the scanning principle is carefully described. For instance, solid-state LiDAR systems based on Optical Phased Array (OPA) devices can relate their electrical control signals, such as frequency, to the final scanning direction using the presented approach.
In particular, for the solid-state MEMS-mirror-based system analyzed in this work, we have related its scanning direction to the tilt angles of the MEMS device, and the model has been shown to fit the performance of a general solid-state LiDAR device including such elements. In addition, such a model may be used to characterize the MEMS dynamics, which may be a first step towards either closed-loop control of the mirror oscillation or, alternatively, the correction of the angular distortion of the optics and the MEMS dynamics by setting appropriate tilt-angle dynamics for each scanning direction of the system.
Furthermore, the suggested geometrical model enables a calibration method which provides an accurate estimation of the varying angular resolution of the scanning system. We would like to emphasise that this method and the spherical description provided are generic to any scanning technique. Consequently, any other LiDAR device can use them and take advantage of their benefits, which are summarized below.
The presented work has been used to improve the accuracy of the measured point clouds, which become more reliable as input for machine learning procedures. In particular, we have shown an improvement of one full order of magnitude in the accuracy of the lateral position measurements, preserving the shape of the objects in the final point cloud. Moreover, considering that the accuracy of the mapping functions is below the resolution of the system, other functions might be proposed but similar results would be obtained.
Overall, this work provides, to the best of our knowledge for the first time, a general model for LiDAR scanning systems based on MEMS mirrors, together with a simple calibration procedure based on the measurement of the angular errors of the scanner, which has been shown to be accurate, simple and useful for the characterization of such devices.

Author Contributions

Investigation, P.G.-G.; Project administration, S.R. and J.R.C.; Software, P.G.-G.; Supervision, S.R., N.R. and J.R.C.; Validation, N.R.; Writing—original draft, P.G.-G., S.R., N.R. and J.R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministerio de Ciencia, Innovación y Universidades (MICINN) of Spain grant number DI-17-09181 and the Agència de Gestió d’Ajuts Universitaris i de Recerca (AGAUR) of Catalonia grant number 2018-DI-0086.

Acknowledgments

We would like to acknowledge the technical support from Beamagine staff: Jordi Riu, Eduardo Bernal and Isidro Bas; as well as from the UPC: Albert Gil.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix the reader can find the detailed derivation of some of the key formulas used in previous sections.

Appendix A.1. Vectorial Snell’s Law

Starting from Equation (2), $\hat{r} = \mu\hat{i} + \eta\hat{n}$, let us first apply the cross product with the surface normal on both sides of the equation:
$$ \hat{n}\times\hat{r} = \mu(\hat{n}\times\hat{i}) + \eta(\hat{n}\times\hat{n}) \;\;\Longrightarrow\;\; |\hat{n}|\,|\hat{r}|\sin(\theta_r)\,\hat{A} = \mu\,|\hat{n}|\,|\hat{i}|\sin(\theta_i)\,\hat{B} $$
Since all ray vectors lie in the same plane, both $\hat{A}$ and $\hat{B}$ are parallel and, using the scalar version of Snell's law, $\sin\theta_r/\sin\theta_i = n_i/n_r$, the expression for $\mu$ is found:
$$ \mu = \frac{n_i}{n_r} $$
Secondly, let us take the dot product of the reflected ray with itself in order to find the expression for $\eta$, knowing that $\theta_r = \theta_i$ so their cosines are equal:
$$ (\mu\hat{i} + \eta\hat{n})\cdot(\mu\hat{i} + \eta\hat{n}) = \mu^2 + \eta^2 + 2\mu\eta\,(\hat{n}\cdot\hat{i}) = 1 \;\;\Longrightarrow\;\; \eta^2 + 2\mu(\hat{n}\cdot\hat{i})\,\eta + (\mu^2 - 1) = 0 $$
Solving this second-order equation, the final expression for $\eta$ can be found:
$$ \eta = \frac{-2\mu(\hat{n}\cdot\hat{i}) \pm \sqrt{4\mu^2(\hat{n}\cdot\hat{i})^2 - 4(\mu^2-1)}}{2} = -\mu(\hat{n}\cdot\hat{i}) \pm \sqrt{1 - \mu^2\left(1 - (\hat{n}\cdot\hat{i})^2\right)} $$
Thus, there are two solutions to the above equation: the positive one corresponds to the reflected case, while the negative one corresponds to the transmitted case. This can easily be seen for a ray impinging perpendicularly onto the surface, $\hat{n} = -\hat{i}$, so that $\eta = \mu \pm 1$ and $\hat{r} = \mu\hat{i} + (\mu \pm 1)\hat{n} = \mp\hat{i}$, the reflected ray being $-\hat{i}$. Finally, Equation (3) is the result of substituting these expressions for a reflected ray, i.e., the positive solution of Equation (A4), into Equation (2).

Appendix A.2. Geometrical Model of MEMS Scanning

As described in Figure 1 in the paper, let us define three reference systems: one for the laser source, $\{I\} = \{\hat{i}_1, \hat{i}_2, \hat{i}_3\}$; another for the MEMS at its resting or central position, $\{M\} = \{\hat{m}_1, \hat{m}_2, \hat{m}_3\}$, where $\hat{m}_3 = \hat{n}$ coincides with the surface normal at rest; and, finally, one for the scanning, $\{S\} = \{\hat{s}_1, \hat{s}_2, \hat{s}_3\}$, where $\hat{s}_3 = \hat{r}$ is the propagation vector of the beam when the MEMS is at its central position. As will be shown afterwards, this is the $\{M\}$ reference system rotated again, so that $\hat{s}_3$ matches the specular reflection of the incident ray. The aim is to express the MEMS normal and the incident ray in the same reference system for any scanning case, so that the set of scanning directions can be found by applying Equation (3).
Let us assume that $\{I\}$ is at the origin of the Cartesian coordinate system, as depicted in Figure 1a, and that the incident laser beam is emitted in the z direction, $\hat{i} = \hat{z} = \hat{i}_3$. Additionally, the MEMS device faces the laser source but is tilted by a certain angle $\psi$ about the x-axis. Consequently, using the right-hand rule whilst setting $\hat{m}_1 = \hat{i}_1 = \hat{x}$, this rotation can be expressed as:
$$ R_x(\psi) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\pi+\psi) & -\sin(\pi+\psi) \\ 0 & \sin(\pi+\psi) & \cos(\pi+\psi) \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -\cos\psi & \sin\psi \\ 0 & -\sin\psi & -\cos\psi \end{pmatrix} $$
Consequently, the linear transformation from the MEMS reference system $\{M\}$ to the source one $\{I\}$, without taking translations into account since the aim is only to obtain directions, is:
$$ {}^{\{I\}}u = {}_{\{M\}}^{\{I\}}T \cdot {}^{\{M\}}u = R_x(\psi)\cdot {}^{\{M\}}u $$
The driven tilts of the MEMS can be performed about both the $\hat{m}_1$ and $\hat{m}_2$ MEMS axes. Let us define $\alpha$ as the angular tilt producing a horizontal displacement in the FOV, whereas $\beta$ causes a vertical one. Hence, a counterclockwise rotation by $\alpha$ can be expressed as a rotation about $\hat{m}_2$, whilst one by $\beta$ is a rotation about $\hat{m}_1$:
$$ R_{m_2}(\alpha) \equiv R_\alpha = \begin{pmatrix} \cos\alpha & 0 & \sin\alpha \\ 0 & 1 & 0 \\ -\sin\alpha & 0 & \cos\alpha \end{pmatrix} \quad \text{and} \quad R_{m_1}(\beta) \equiv R_\beta = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & -\sin\beta \\ 0 & \sin\beta & \cos\beta \end{pmatrix} $$
With these rotation matrices, the MEMS surface normal in the laser reference system is defined as the result of applying the β–α–ψ rotations, which sets the first rotation to be the one in the slow direction and matches a proper Euler angle rotation definition x-y-x in the fixed frame $\{M\}$:
$$ {}^{\{I\}}n = R_\psi \cdot R_\alpha \cdot R_\beta \cdot {}^{\{M\}}n \equiv {}_{\{M\}}^{\{I\}}T(\alpha,\beta,\psi)\cdot {}^{\{M\}}n $$
As previously mentioned, the MEMS surface normal in its own reference system $\{M\}$ coincides with the third basis vector $\hat{m}_3$ at the central position. Thus, the final expression for the MEMS normal in the reference system of the laser source $\{I\}$ as a function of the tilt angles is:
$$ {}^{\{I\}}n(\alpha,\beta,\psi) = {}_{\{M\}}^{\{I\}}T(\alpha,\beta,\psi)\cdot\begin{pmatrix}0\\0\\1\end{pmatrix} = \begin{pmatrix} \sin\alpha\cos\beta \\ \cos\psi\sin\beta + \sin\psi\cos\alpha\cos\beta \\ \sin\psi\sin\beta - \cos\psi\cos\alpha\cos\beta \end{pmatrix} $$
Notice that if the small-angle approximation is assumed for all angles ($\cos x \approx 1$ and $\sin x \approx x$), the rotations commute, since any rotation order leads to ${}^{\{I\}}n \approx \left(\alpha,\ \beta+\psi,\ \beta\psi - 1\right)^T$.
Now that both vectors, the MEMS normal and the incident ray, are expressed in the same reference system, the expression for the scanning direction as a function of the tilt angles can be found. Firstly, let us define $\gamma(\alpha,\beta,\psi)$ as the dot product between the MEMS normal and the incident ray:
$$ \gamma(\alpha,\beta,\psi) \equiv {}^{\{I\}}\hat{n}^T\cdot{}^{\{I\}}\hat{i} = \begin{pmatrix} \sin\alpha\cos\beta \\ \cos\psi\sin\beta + \sin\psi\cos\alpha\cos\beta \\ \sin\psi\sin\beta - \cos\psi\cos\alpha\cos\beta \end{pmatrix}^T\cdot\begin{pmatrix}0\\0\\1\end{pmatrix} = \sin\psi\sin\beta - \cos\psi\cos\alpha\cos\beta $$
$$ {}^{\{I\}}\hat{s} = {}^{\{I\}}\hat{i} - \left[\gamma(\alpha,\beta,\psi) - \sqrt{\gamma(\alpha,\beta,\psi)^2}\right]\,{}^{\{I\}}\hat{n} $$
In particular, as previously commented, when the MEMS is at its rest position with α = β = 0 , the reflected ray results in:
$$ {}^{\{I\}}\hat{s}(0,0,\psi) = \begin{pmatrix}0\\0\\1\end{pmatrix} - \left[-\cos\psi - \sqrt{(-\cos\psi)^2}\right]\begin{pmatrix}0\\ \sin\psi\\ -\cos\psi\end{pmatrix} = \begin{pmatrix}0\\ 2\cos\psi\sin\psi\\ 1 - 2\cos^2\psi\end{pmatrix} = \begin{pmatrix}0\\ \sin(2\psi)\\ -\cos(2\psi)\end{pmatrix} $$
which is exactly the specular reflection of the incident ray by an angle $2\psi$. To sum up, let us write down the defined set of bases, bearing in mind that there is a translation with respect to the laser source and that they are defined for $\alpha = \beta = 0$:
$$ \{I\} = \{\hat{i}, \hat{j}, \hat{k}\}, \qquad \{M\} = \left\{\hat{i},\ \begin{pmatrix}0\\ -\cos\psi\\ -\sin\psi\end{pmatrix},\ \begin{pmatrix}0\\ \sin\psi\\ -\cos\psi\end{pmatrix}\right\} \qquad \text{and} \qquad \{S\} = \left\{\hat{i},\ \begin{pmatrix}0\\ -\cos(2\psi)\\ -\sin(2\psi)\end{pmatrix},\ \begin{pmatrix}0\\ \sin(2\psi)\\ -\cos(2\psi)\end{pmatrix}\right\} $$
Thus, this is the set of reference basis that characterize the scanning of a LiDAR system.
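As a quick numerical sanity check of Equation (A12), the short NumPy snippet below verifies that the rest-position scanning direction is the specular reflection by $2\psi$ (with $\psi = 25°$ as in the prototypes):

```python
import numpy as np

# Numerical check of Eq. (A12): at rest (alpha = beta = 0) the model reduces to
# a plain specular reflection of the incident beam by 2*psi.
psi = np.deg2rad(25.0)
i_hat = np.array([0.0, 0.0, 1.0])
n_hat = np.array([0.0, np.sin(psi), -np.cos(psi)])        # Eq. (A9) at rest
gamma = n_hat @ i_hat
s_hat = i_hat - (gamma - np.sqrt(gamma**2)) * n_hat       # Eq. (A11)
assert np.allclose(s_hat, [0.0, np.sin(2 * psi), -np.cos(2 * psi)])
print(s_hat)
```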

References

1. Fernald, F.G. Analysis of atmospheric lidar observations: Some comments. Appl. Opt. 1984, 23, 652.
2. Korb, C.L.; Gentry, B.M.; Weng, C.Y. Edge technique: Theory and application to the lidar measurement of atmospheric wind. Appl. Opt. 1992, 31, 4202–4213.
3. McGill, M.J. Lidar Remote Sensing. In Encyclopedia of Optical Engineering; Marcel Dekker: New York, NY, USA, 2003; pp. 1103–1113.
4. Liu, X. Airborne LiDAR for DEM generation: Some critical issues. Prog. Phys. Geogr. Earth Environ. 2008, 32, 31–49.
5. Mallet, C.; Bretar, F. Full-waveform topographic lidar: State-of-the-art. ISPRS J. Photogramm. Remote Sens. 2009, 64, 1–16.
6. Azuma, R.T.; Malibu, I. A Survey of Augmented Reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385.
7. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 2001, 21, 34–47.
8. Schwarz, B. LIDAR: Mapping the world in 3D. Nat. Photonics 2010, 4, 429–430.
9. Royo, S.; Ballesta-Garcia, M. An Overview of Lidar Imaging Systems for Autonomous Vehicles. Appl. Sci. 2019, 9, 4093.
10. Takagi, K.; Morikawa, K.; Ogawa, T.; Saburi, M. Road Environment Recognition Using On-vehicle LIDAR. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Tokyo, Japan, 13–15 June 2006; pp. 120–125.
11. Premebida, C.; Monteiro, G.; Nunes, U.; Peixoto, P. A Lidar and Vision-based Approach for Pedestrian and Vehicle Detection and Tracking. In Proceedings of the 2007 IEEE Intelligent Transportation Systems Conference, Seattle, WA, USA, 30 September–3 October 2007; pp. 1044–1049.
12. Gallant, M.J.; Marshall, J.A. The LiDAR compass: Extremely lightweight heading estimation with axis maps. Robot. Auton. Syst. 2016, 82, 35–45.
13. Huynh, D.Q.; Owens, R.A.; Hartmann, P.E. Calibrating a Structured Light Stripe System: A Novel Approach. Int. J. Comput. Vis. 1999, 33, 73–86.
14. Glennie, C.; Lichti, D.D. Static Calibration and Analysis of the Velodyne HDL-64E S2 for High Accuracy Mobile Scanning. Remote Sens. 2010, 2, 1610–1624.
15. Muhammad, N.; Lacroix, S. Calibration of a rotating multi-beam lidar. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 5648–5653.
16. Atanacio-Jiménez, G.; González-Barbosa, J.J.; Hurtado-Ramos, J.B.; Ornelas-Rodríguez, F.J.; Jiménez-Hernández, H.; García-Ramirez, T.; González-Barbosa, R. LIDAR Velodyne HDL-64E Calibration Using Pattern Planes. Int. J. Adv. Robot. Syst. 2011, 8, 59.
17. Mirzaei, F.M.; Kottas, D.G.; Roumeliotis, S.I. 3D LIDAR–camera intrinsic and extrinsic calibration: Identifiability and analytical least-squares-based initialization. Int. J. Robot. Res. 2012, 31, 452–467.
18. Yu, C.; Chen, X.; Xi, J. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner. Sensors 2017, 17, 164.
19. Cui, S.; Zhu, X.; Wang, W.; Xie, Y. Calibration of a laser galvanometric scanning system by adapting a camera model. Appl. Opt. 2009, 48, 2632–2637.
20. Rodriguez, F.S.A.; Fremont, V.; Bonnifait, P. Extrinsic calibration between a multi-layer lidar and a camera. In Proceedings of the 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Korea, 20–22 August 2008; pp. 214–219.
21. Kwak, K.; Huber, D.F.; Badino, H.; Kanade, T. Extrinsic calibration of a single line scanning lidar and a camera. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 3283–3289.
22. Zhou, L.; Deng, Z. Extrinsic calibration of a camera and a lidar based on decoupling the rotation from the translation. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium, Alcala de Henares, Spain, 3–7 June 2012; pp. 642–648.
23. García-Moreno, A.I.; Gonzalez-Barbosa, J.J.; Ornelas-Rodriguez, F.J.; Hurtado-Ramos, J.B.; Primo-Fuentes, M.N. LIDAR and Panoramic Camera Extrinsic Calibration Approach Using a Pattern Plane. In Pattern Recognition; Hutchison, D., Kanade, T., Kittler, J., Kleinberg, J.M., Mattern, F., Mitchell, J.C., Naor, M., Nierstrasz, O., Pandu Rangan, C., Steffen, B., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7914, pp. 104–113.
24. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board. Sensors 2014, 14, 5333–5353.
25. Dhall, A.; Chelani, K.; Radhakrishnan, V.; Krishna, K.M. LiDAR-Camera Calibration using 3D-3D Point correspondences. arXiv 2017, arXiv:1705.09785.
26. Guindel, C.; Beltrán, J.; Martín, D.; García, F. Automatic extrinsic calibration for lidar-stereo vehicle sensor setups. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6.
27. Urey, H.; Holmstrom, S.; Baran, U. MEMS laser scanners: A review. J. Microelectromech. Syst. 2014, 23, 259.
28. Ortiz, S.; Siedlecki, D.; Grulkowski, I.; Remon, L.; Pascual, D.; Wojtkowski, M.; Marcos, S. Optical distortion correction in Optical Coherence Tomography for quantitative ocular anterior segment by three-dimensional imaging. Opt. Express 2010, 18, 2782–2796.
29. Brown, D.C. Decentering Distortion of Lenses. Photogramm. Eng. 1966, 32, 444–462.
30. Swaninathan, R.; Grossberg, M.D.; Nayar, S.K. A perspective on distortions. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 2.
31. Pan, B.; Yu, L.; Wu, D.; Tang, L. Systematic errors in two-dimensional digital image correlation due to lens distortion. Opt. Lasers Eng. 2013, 51, 140–147.
32. Bauer, A.; Vo, S.; Parkins, K.; Rodriguez, F.; Cakmakci, O.; Rolland, J.P. Computational optical distortion correction using a radial basis function-based mapping method. Opt. Express 2012, 20, 14906–14920.
33. Li, A.; Wu, Y.; Xia, X.; Huang, Y.; Feng, C.; Zheng, Z. Computational method for correcting complex optical distortion based on FOV division. Appl. Opt. 2015, 54, 2441–2449.
34. Sun, C.; Guo, X.; Wang, P.; Zhang, B. Computational optical distortion correction based on local polynomial by inverse model. Optik 2017, 132, 388–400.
35. Heikkila, J.; Silven, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 1106–1112.
Figure 1. Scheme of (a) the Light Detection and Ranging (LiDAR) scanning system geometry, depicting (blue) the laser source, (red) the MEMS mirror and (green) the scanning reference systems, with $\hat{i}_1 = \hat{m}_1 = \hat{s}_1$ pointing into the plane of the page; and (b) the global geometry of the LiDAR scanning setup in the general case, with the tilt angles of the MEMS magnified by additional optics.
Figure 2. Scheme of the LiDAR Field-of-View (FOV) and an observed point Q, relating (red) the LiDAR reference system and (green) its scanning angles in spherical coordinates.
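For readers reconstructing point clouds from these quantities, the minimal sketch below converts a range measurement and the two scanning angles of Figure 2 into Cartesian coordinates. The axis convention assumed here (optical axis along z, azimuth in the horizontal plane, elevation above it) and the function name angles_to_point are illustrative assumptions; the article defines its own reference system and spherical convention, which may differ.

```python
import numpy as np

def angles_to_point(r, theta_h, theta_v):
    """Convert a range r (metres) and the horizontal/vertical scanning angles
    (radians) of an observed point Q into Cartesian coordinates expressed in
    the LiDAR reference system.

    Assumed convention (illustrative only): z along the optical axis,
    theta_h the azimuth in the horizontal plane, theta_v the elevation.
    """
    x = r * np.cos(theta_v) * np.sin(theta_h)
    y = r * np.sin(theta_v)
    z = r * np.cos(theta_v) * np.cos(theta_h)
    return np.array([x, y, z])

# Example: a return at 3.8 m, 10 degrees right and 5 degrees above the optical axis.
print(angles_to_point(3.8, np.deg2rad(10.0), np.deg2rad(5.0)))
```

Under this assumed convention a point on the optical axis maps to (0, 0, r), which is what makes the pixel-to-angle description of the next figure analogous to a camera model.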
Figure 3. (a) Scheme of the LiDAR’s acquisition frame showing the relation between (black) the pixel position (i, j) from the MEMS dynamics and (red) the scanning angles of the FOV. (b) Scheme of the distortion mapping functions for odd and even lines, $f_{\mathrm{odd}}$ and $f_{\mathrm{even}}$, relating the pixel position within the acquisition frame with the final scanning angles.
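As a rough illustration of how such a camera-like mapping can be applied in practice, the sketch below evaluates a generic second-order bivariate polynomial per line parity to turn pixel positions (i, j) into scanning angles. The polynomial form, the coefficient layout and the names eval_mapping / pixels_to_angles are hypothetical and are not the article's Map 1–3 definitions.

```python
import numpy as np

def eval_mapping(coeffs, i, j):
    """Evaluate one distortion mapping as a degree-2 bivariate polynomial.
    coeffs has shape (6,) for the terms [1, i, j, i*i, i*j, j*j]."""
    basis = np.stack([np.ones_like(i, dtype=float), i, j, i * i, i * j, j * j])
    return coeffs @ basis

def pixels_to_angles(i, j, map_odd, map_even):
    """Return (horizontal, vertical) scanning angles for pixels (i, j),
    selecting the odd or even mapping from the parity of the line index j."""
    i = np.asarray(i, dtype=float)
    j = np.asarray(j, dtype=float)
    is_odd = (j.astype(int) % 2) == 1
    theta_h = np.where(is_odd, eval_mapping(map_odd[0], i, j),
                       eval_mapping(map_even[0], i, j))
    theta_v = np.where(is_odd, eval_mapping(map_odd[1], i, j),
                       eval_mapping(map_even[1], i, j))
    return theta_h, theta_v

# Example with arbitrary coefficients on a small pixel grid.
rng = np.random.default_rng(0)
map_odd, map_even = rng.normal(scale=1e-3, size=(2, 2, 6))
jj, ii = np.meshgrid(np.arange(8), np.arange(8))
theta_h, theta_v = pixels_to_angles(ii.ravel(), jj.ravel(), map_odd, map_even)
```

In an actual calibration, the coefficients of both mappings would be fitted to control points such as the grid corners shown in the following figures, rather than chosen at random as in this example.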
Figure 4. Scheme of the calibration pattern and position of the LiDAR.
Figure 5. Step-by-step algorithm scheme for the estimation of the mapping function.
Figure 6. Acquired images of the calibration pattern with the (a) 30 × 20° FOV and (b) 50 × 20° FOV LiDAR prototypes.
Figure 7. Simulated probability histograms of the angular resolution, (b) and (d), for both scanning directions assuming, respectively, (a) linear motion of the MEMS and (c) harmonic motion of the MEMS in its fastest direction, where the red curve of α is the cropped linear region of the whole motion shown in blue. The homogeneous resolution is calculated with $FOV_H = 27.5^{\circ}$ and $FOV_V = 16.5^{\circ}$.
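The non-uniform resolution produced by the harmonic motion in (c,d) can be reproduced qualitatively with the short simulation below, which samples a sinusoidal mirror trajectory at equally spaced pulse times within the cropped quasi-linear region and histograms the angle swept between consecutive pulses. The amplitude, number of pulses and cropping fraction are illustrative values, not the prototype's parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

FOV_H = 27.5        # horizontal FOV of the simulation, in degrees
N_SAMPLES = 600     # pulses fired per scan line (illustrative)
CROP = 0.8          # fraction of the sinusoidal half-period kept (quasi-linear part)

# Uniform sampling in time of a harmonic mirror motion alpha(t) = A * sin(t),
# restricted to the cropped central region of the swing.
t = np.linspace(-0.5 * CROP * np.pi, 0.5 * CROP * np.pi, N_SAMPLES)
amplitude = 0.5 * FOV_H / np.sin(0.5 * CROP * np.pi)  # cropped swing spans FOV_H
alpha = amplitude * np.sin(t)

# Angular resolution = angle swept between consecutive, equally spaced pulses.
resolution = np.diff(alpha)

plt.hist(resolution, bins=40, density=True)
plt.xlabel("horizontal angular resolution [deg]")
plt.ylabel("probability density")
plt.title("Non-uniform resolution from harmonic MEMS motion (sketch)")
plt.show()
```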
Figure 8. (a) Simulated and (b) experimental image of the grid pattern at 3.8 m using the 30 × 20° FOV prototype.
Figure 9. Grid control points on the odd image for both the (a) 30 × 20° and (b) 50 × 20° FOV prototypes.
Figure 10. Probability Density Function (PDF) of both angular errors, (blue) horizontal and (red) vertical, of the odd lines for the (a) 30 × 20° FOV prototype and (b) 50 × 20° FOV prototype, using the mapping function Map 3 in Equation (13). The filled areas mark the angular errors outside the 95% confidence interval.
Figure 11. Comparison of both (blue) horizontal and (red) vertical resolution variation for the odd lines. (a) Simulation of the 30 × 20° FOV prototype with sinusoidal dynamics for the fast scanning axis. (b) Results of the mapping function Map 3 for the 30 × 20° FOV prototype. (c) Results of the mapping function Map 3 for the 50 × 20° FOV prototype.
Figure 12. Comparison of the horizontal and vertical angular resolution variations, respectively, across the whole FOV of the 30 × 20° FOV prototype. (a,b) Sinusoidal dynamics for the fast scanning axis. (c,d) Results of the mapping function Map 3.
Figure 13. Results of the mapping function Map 3 for the 30 × 20° FOV prototype across its FOV. (a) Horizontal and (b) vertical angular resolution. (c) Horizontal and (d) vertical committed angular error.
Figure 14. Improvement of the presented calibration for (a,b) the 30 × 20° and (c,d) the 50 × 20° FOV prototype, compared to their respective ideal cases of homogeneous angular resolution. (a,c) Probability density functions of the angular error, where red is the horizontal and blue the vertical scanning direction, with dashed lines for the non-calibrated and solid lines for the calibrated case. (b,d) Norm of the final point distance error.
Figure 15. Point clouds from the 30 × 20° FOV prototype of the scenario shown in (e). (a) Point cloud with the previous calibration. (b) Point cloud with the presented calibration. (c,d) Orthogonal projections on the ground plane of (a) and (b), respectively. (f) Measured distances between road pins, taken as the ground truth.
Table 1. Mapping results of the tested prototypes. Units are millidegrees (mdeg, i.e., °/1000) for all figures of merit except the homogeneous FOV, which is given in degrees and indicates how far the estimated distortion map deviates from the designed rectangular FOV. Values are reported as horizontal × vertical.
| Figure of Merit [mdeg] | Dir. | 30 × 20°, Map 1 | 30 × 20°, Map 2 | 30 × 20°, Map 3 | 50 × 20°, Map 1 | 50 × 20°, Map 2 | 50 × 20°, Map 3 |
|---|---|---|---|---|---|---|---|
| Homogeneous FOV [°] | Odd | 27.55 × 16.32 | 27.47 × 16.54 | 27.47 × 16.52 | 52.89 × 13.96 | 53.19 × 14.44 | 53.34 × 14.38 |
| Homogeneous FOV [°] | Even | 27.25 × 16.44 | 27.46 × 16.64 | 27.45 × 16.63 | 53.08 × 13.33 | 53.07 × 14.27 | 53.07 × 14.21 |
| Mean error | Odd | 25 × 28 | 21 × 9 | 20 × 8 | 101 × 89 | 45 × 40 | 37 × 31 |
| Mean error | Even | 23 × 24 | 22 × 9 | 22 × 9 | 108 × 118 | 47 × 64 | 46 × 37 |
| Standard deviation | Odd | 24 × 23 | 14 × 5 | 14 × 5 | 66 × 97 | 34 × 32 | 29 × 22 |
| Standard deviation | Even | 24 × 18 | 14 × 7 | 14 × 7 | 73 × 110 | 37 × 48 | 35 × 31 |
| Max. angular error (<95%) | Odd | 79 × 70 | 48 × 17 | 47 × 19 | 239 × 274 | 117 × 103 | 95 × 72 |
| Max. angular error (<95%) | Even | 77 × 60 | 48 × 25 | 47 × 26 | 265 × 324 | 117 × 165 | 113 × 98 |
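To make the figures of merit of Table 1 concrete, the sketch below computes a mean error, a standard deviation and a 95%-bounded maximum error (here taken as the 95th percentile, which is one possible reading of "Max. angular error (<95%)") from synthetic per-control-point angular errors. The input data and the exact statistics used by the authors are assumptions for illustration only.

```python
import numpy as np

def mapping_figures_of_merit(err_h_deg, err_v_deg):
    """Summarise per-control-point angular errors (degrees) in millidegrees,
    mirroring the 'horizontal x vertical' layout of Table 1."""
    abs_h = np.abs(np.asarray(err_h_deg)) * 1000.0  # deg -> mdeg
    abs_v = np.abs(np.asarray(err_v_deg)) * 1000.0
    return {
        "mean error": (abs_h.mean(), abs_v.mean()),
        "standard deviation": (abs_h.std(ddof=1), abs_v.std(ddof=1)),
        # Largest error once the 5% most extreme points are excluded
        # (assumed interpretation of 'Max. angular error (<95%)').
        "max error (<95%)": (np.percentile(abs_h, 95), np.percentile(abs_v, 95)),
    }

# Example with synthetic errors of a few hundredths of a degree.
rng = np.random.default_rng(1)
fom = mapping_figures_of_merit(rng.normal(0.02, 0.015, 200),
                               rng.normal(0.01, 0.005, 200))
print({k: tuple(round(x, 1) for x in v) for k, v in fom.items()})
```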
