Correction published on 5 December 2017, see Sensors 2017, 17(12), 2821.
Article

LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

School of Mechanical and Electrical Engineering, China University of Mining and Technology, Xuzhou 221116, China
Sensors 2017, 17(3), 539; https://doi.org/10.3390/s17030539
Submission received: 3 January 2017 / Revised: 24 February 2017 / Accepted: 5 March 2017 / Published: 8 March 2017
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)

Abstract

The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined use. However, the correspondences between LiDAR and IMU measurements are usually unknown, so the time delay cannot be computed directly. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both: the ICP algorithm precisely determines the unknown transformation between the LiDAR and the IMU, and the ISPKF optimally estimates the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement models and the time delay error model of the LiDAR and IMU are established. Third, the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results validate the proposed method and demonstrate that the time delay error can be accurately calibrated.

1. Introduction

In today's world, Light Detection and Ranging (LiDAR) devices and Inertial Measurement Units (IMUs), often found on vehicles, airplanes and robots, are increasingly being used for localization and navigation tasks. Together, the LiDAR and IMU sensors can supply accurate attitude estimation and are suitable for many applications. However, in GPS-denied environments an IMU alone is unreliable for position estimation over long periods of time because of drift, which causes large cumulative errors. A LiDAR is a device that uses laser beams to determine the distance and azimuth from the sensor to an object; it can provide 3D localization information with high accuracy and efficiency to reduce or bound the IMU drift. Therefore, an integrated LiDAR-IMU system can provide highly accurate position and pose information over long periods of time in GPS-denied environments.
Combining information from multiple sensors usually offers several potential advantages, including enhanced accuracy and improved robustness, but these come at a cost: a fundamental requirement in any multi-sensor system is time delay calibration. To ensure optimal performance, the LiDAR and IMU sensors must be properly calibrated in time; this requires estimating the relative timing of each sensor measurement, the coordinate transformation between the different sensors, and the time delay calibration parameters [1,2,3,4].
Several methods exist for LiDAR, IMU and camera time delay calibration. Kelly et al. [5] presented a time delay iterative closest point method for determining the time delay between inertial and visual sensor measurements. Aghili et al. [6] presented a robust 6-Degree of Freedom (DOF) relative navigation system that combines the iterative closest point (ICP) registration algorithm and a noise Adaptive Kalman Filter (AKF) in a closed-loop configuration together with measurements from a LiDAR and an IMU. Pothou et al. [7] investigated the determination of the misalignment between the IMU body frame and the LiDAR frame, using a Quality Assurance/Quality Control (QA/QC) technique to evaluate the LiDAR-IMU boresight misalignment. Deymier et al. [8] proposed the self-calibration of a vehicle's acquisition system with cameras, IMU and 3D LiDAR. Tang et al. [9] integrated two complementary technologies, an Inertial Navigation System (INS) and LiDAR Simultaneous Localization and Mapping (SLAM), into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to exploit the advantages and overcome the drawbacks of each system and establish a stable long-term navigation process. Mirzaei et al. [10] presented an EKF for precisely determining the unknown transformation between a camera and an IMU. Yun et al. [11] developed an IMU/Vision/LiDAR integrated navigation system that can provide accurate relative navigation information in GNSS-denied environments, and constructed the overall integrated navigation filter based on the EKF approach. Li et al. [12] proposed a method for integrating the measurements from a LiDAR and a Micro-Electro-Mechanical System (MEMS) IMU, using a Kalman Filter (KF) to estimate the errors of the IMU and LiDAR sensors.
As mentioned above, the AKF, EKF, KF and ICP algorithms are used with LiDAR, IMU and camera sensors to calibrate the time delay. During LiDAR-IMU calibration, however, neither the EKF algorithm nor the AKF algorithm can avoid time delay calibration bias and filtering divergence. The correspondences between LiDAR and IMU measurements are usually unknown, so the relative time delay between the LiDAR and IMU data streams cannot be computed directly [13,14,15,16]. The problem can be stated as follows: first, the LiDAR and IMU rate information cannot be compared directly. Second, in general, the rigid body transformation between the LiDAR and the IMU is unknown. Third, both signals are discrete and have different temporal resolutions: LiDAR scan information is typically available at 5–20 Hz or less, whereas IMU data are normally available at 100 Hz or more [17,18,19,20].
In order to solve the problem of LiDAR-IMU time delay calibration, we present a fusion method based on ICP and an iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF [21,22,23]. The ICP algorithm uses a total least squares cost function for registration, which allows us to incorporate the LiDAR orientation measurement uncertainty in a principled way and to account for the fact that the integrated IMU orientation becomes less accurate over longer time intervals due to accumulated noise. The ISPKF measurement update step is iterated until the change in the posterior state estimate drops below a small threshold, so the filter can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement models and time delay error model of the LiDAR and IMU sensors are established. Third, the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results validate the proposed method and demonstrate that the time delay can be accurately calibrated.
This paper is organized as follows: the coordinate transformation between the LiDAR and the IMU is described in Section 2. In Section 3, we establish the LiDAR and IMU measurement models and the LiDAR-IMU time delay error model. Section 4 gives the implementation procedure of ICP-ISPKF for LiDAR-IMU time delay calibration. Section 5 describes the experimental studies and analyzes the results obtained with the LiDAR, the IMU and an optical measuring system using our proposed method, followed by the conclusions in Section 6.

2. Coordinate Transformation between LiDAR and IMU

2.1. Coordinate Frame

There are four coordinate frames in the LiDAR-IMU system; the relationship between these frames is shown in Figure 1:
(1)
LiDAR frame, {L}, is the frame of reference in which the LiDAR measurements are represented; its axes are defined as right, forward and up.
(2)
IMU frame, {I}, is defined by the IMU, in which angular rotation rates and linear accelerations are measured, with its origin at a point on the IMU body.
(3)
Object frame, {O}, is the coordinate frame of the moving object; its axes are forward, right and down.
(4)
World frame, {W}, is considered to be the fundamental coordinate frame and serves as an absolute reference for both the {I} and the {L}.

2.2. Transformation from LiDAR Frame to IMU Frame

2.2.1. Transformation from LiDAR Frame to the World Frame

Suppose a point ${}^{W}P$ in the {W} frame is located at ${}^{L}P$ in the {L} frame; the transformation from the {W} coordinate system to the {L} coordinate system can be expressed as [24]:
$$ {}^{L}P = R_{W}^{L}\,{}^{W}P + {}^{L}T_{LW} $$
where $R_{W}^{L}$ is a 3 × 3 orthonormal matrix representing the rotation from the {W} frame to the {L} frame, and ${}^{L}T_{LW}$ is the translation vector, whose subscripts denote the origin of the {L} frame and the origin of the {W} (tangent) frame, respectively. Our goal is to calculate $R_{W}^{L}$ and ${}^{L}T_{LW}$.
The transformation is implemented through the observation of LiDAR scans of a calibration plane. A geometric constraint can be obtained between the LiDAR scanning points and the calibration plane. Since the LiDAR scanning points lie on the calibration plane, and ${}^{W}r$ is the normal vector to the plane, we have:
$$ {}^{W}r \cdot {}^{W}P = d $$
where ${}^{W}P$ is the coordinate of a LiDAR scanning point in the {W} frame and $d$ is a scalar representing the distance from the origin of the {W} frame to the calibration plane, which is calculated from the position and orientation of the calibration plane. From Equation (1) we know that:
$$ {}^{W}P = (R_{W}^{L})^{-1}\left({}^{L}P - {}^{L}T_{LW}\right) $$
By substituting Equation (3) into Equation (2), we have:
$$ {}^{W}r \cdot (R_{W}^{L})^{-1}\left({}^{L}P - {}^{L}T_{LW}\right) = d $$
For any given LiDAR scanning point and calibration plane position, Equation (4) gives a constraint on $R_{W}^{L}$ and ${}^{L}T_{LW}$. It is solved in two consecutive steps: a linear solution, followed by a nonlinear optimization [25,26]:
(1) Linear solution. The LiDAR scanning plane in the {L} frame is ${}^{L}Z = 0$. Each LiDAR scanning point can be represented as ${}^{L}P = [\,{}^{L}P_x\ \ {}^{L}P_y\ \ {}^{L}P_z\,]^{T}$. Then Equation (4) is rewritten as:
$$ {}^{W}r \cdot (R_{W}^{L})^{-1}\left(\begin{bmatrix} {}^{L}P_x \\ {}^{L}P_y \\ {}^{L}P_z \end{bmatrix} - {}^{L}T_{LW}\right) = {}^{W}r \cdot (R_{W}^{L})^{-1}\left(\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - {}^{L}T_{LW}\right)\begin{bmatrix} {}^{L}P_x \\ {}^{L}P_y \\ {}^{L}P_z \end{bmatrix} = d $$
Let us define $Z = (R_{W}^{L})^{-1}\left(\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - {}^{L}T_{LW}\right)$; Equation (5) is then rewritten as:
$$ {}^{W}r \cdot Z\,{}^{L}P = d $$
where $Z$ is the parameter to be solved, which combines the two unknown parameters $R_{W}^{L}$ and ${}^{L}T_{LW}$.
Equation (6) can be solved using the standard linear least squares algorithm. In order to obtain a better result, multiple calibration planes should be used in the transformation. Suppose the transformation uses a total of $N$ calibration planes, with $M_i$ ($i = 1,\dots,N$) LiDAR scanning points on the $i$-th plane. Let $Z = \begin{bmatrix} z_{11} & z_{12} & z_{13} \\ z_{21} & z_{22} & z_{23} \\ z_{31} & z_{32} & z_{33} \end{bmatrix}$, let the normal vector of the $i$-th plane be ${}^{W}r_i = [\,r_{i,1}\ \ r_{i,2}\ \ r_{i,3}\,]$, let the distance from the origin of the {W} frame to the $i$-th calibration plane be $d_i$, and let the $j$-th LiDAR scanning point on the $i$-th calibration plane be ${}^{L}P_{ij} = [\,{}^{L}P_{ij,x}\ \ {}^{L}P_{ij,y}\ \ {}^{L}P_{ij,z}\,]^{T}$. Equation (6) is rewritten as:
$$ \left(r_{i,1}z_{11} + r_{i,2}z_{21} + r_{i,3}z_{31}\right){}^{L}P_{ij,x} + \left(r_{i,1}z_{12} + r_{i,2}z_{22} + r_{i,3}z_{32}\right){}^{L}P_{ij,y} + \left(r_{i,1}z_{13} + r_{i,2}z_{23} + r_{i,3}z_{33}\right){}^{L}P_{ij,z} = d_i $$
where $i = 1, 2, \dots, N$ and $j = 1, 2, \dots, M_i$.
Therefore, each LiDAR scanning point contributes one linear equation, i.e., one row of the system defined by Equation (7), and $Z$ can be calculated using the standard linear least squares algorithm. $R_{W}^{L}$ and ${}^{L}T_{LW}$ are then obtained from $Z$.
(2) Nonlinear solution. The linear solution is obtained by the least squares method, which minimizes an algebraic distance. A nonlinear minimization method is then used to minimize the difference between the measured Euclidean distance and the calculated distance from the LiDAR scanning points to the calibration plane, which is physically meaningful.
Equation (4) involves two kinds of distance: $d$ is the distance from the calibration plane to the origin of the {W} frame, obtained by a field survey, and ${}^{W}r \cdot (R_{W}^{L})^{-1}\left({}^{L}P - {}^{L}T_{LW}\right)$ is the calculated distance. The difference between these two distances is defined as the distance error for one calibration plane pose. The nonlinear solution minimizes the sum of the distance errors over all plane poses. The sum function is defined as:
$$ \mathrm{sum} = \sum_{i=1}^{N}\sum_{j=1}^{M_i}\left({}^{W}r_i \cdot (R_{W}^{L})^{-1}\left({}^{L}P_{ij} - {}^{L}T_{LW}\right) - d_i\right)^{2} $$
where ${}^{W}r_i$ defines the $i$-th calibration plane, $d_i$ is the distance from the $i$-th plane to the origin of the {W} frame, and ${}^{L}P_{ij}$ is the $j$-th LiDAR scanning point on the $i$-th calibration plane. The pair $R_{W}^{L}$ and ${}^{L}T_{LW}$ that minimizes Equation (8) is taken as the rotation and translation to be calculated. Equation (8) is minimized as a nonlinear optimization problem; with the resulting rotation and translation, a LiDAR scanning point expressed in the {W} frame can be converted to a point in the {L} frame.
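To make the two-step solution concrete, the following Python sketch (not the authors' implementation; NumPy/SciPy and the input names normals, dists and points are assumptions) stacks the rows of Equation (7) to solve for Z with linear least squares, and then refines $R_{W}^{L}$ and ${}^{L}T_{LW}$ by minimizing the distance errors of Equation (8):

```python
# Illustrative sketch of the linear/nonlinear solution of Equations (5)-(8).
# Assumed inputs:
#   normals[i] : unit normal of the i-th calibration plane in {W}
#   dists[i]   : surveyed distance d_i from the {W} origin to the i-th plane
#   points[i]  : (M_i x 3) LiDAR points on the i-th plane, expressed in {L}
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_linear_Z(normals, dists, points):
    """Stack one row of Equation (7) per scanning point and solve for the
    3x3 matrix Z with ordinary linear least squares."""
    rows, rhs = [], []
    for r, d, pts in zip(normals, dists, points):
        for p in pts:
            rows.append(np.outer(r, p).ravel())  # coefficient of z_uv is r_u * p_v
            rhs.append(d)
    z, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return z.reshape(3, 3)

def refine_nonlinear(rotvec0, t0, normals, dists, points):
    """Minimize the point-to-plane distance errors of Equation (8),
    parameterizing R_W^L by a rotation vector."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()   # R_W^L
        t = x[3:]                                     # T_LW expressed in {L}
        res = []
        for r, d, pts in zip(normals, dists, points):
            # W_r . (R_W^L)^-1 (L_P - L_T_LW) - d  for every point on the plane
            res.extend(r @ (R.T @ (pts - t).T) - d)
        return np.asarray(res)
    sol = least_squares(residuals, np.hstack([rotvec0, t0]))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

Recovering the initial rotation and translation guess from Z follows the decomposition described above and is omitted here for brevity.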

2.2.2. Transformation from LiDAR Frame to IMU Frame

The transformation from the LiDAR frame to the IMU frame amounts to obtaining the geometric relationship between the {L} frame and the {O} frame. The IMU consists of three gyros and three accelerometers; the gyros provide the change of the Euler angles, while the accelerometers give the specific force. By integrating the outputs of the gyros and the accelerometers, we can obtain the rotation and translation between the {W} frame and the {O} frame. Let the rotation matrix be $R_{O}^{W}$ and the translation vector be ${}^{O}T_{WO}$; then a point in the {O} frame can be converted to a point in the {W} frame by:
$$ {}^{W}P = R_{O}^{W}\,{}^{O}P + {}^{O}T_{WO} $$
where ${}^{W}P$ is the point in the {W} frame and ${}^{O}P$ is the point in the {O} frame.
Finally, by substituting Equation (9) into Equation (1) we have:
$$ {}^{L}P = R_{W}^{L}\left(R_{O}^{W}\,{}^{O}P + {}^{O}T_{WO}\right) + {}^{L}T_{LW} $$
Equation (10) is the transformation from the {O} frame to the {L} frame.
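As a minimal illustration of Equation (10), the sketch below chains the two transformations; the rotation matrices and translation vectors are assumed to be available as NumPy arrays (hypothetical inputs, not a defined interface of the paper):

```python
# Minimal sketch of Equation (10), assuming numpy arrays for R_OW, T_WO
# (from the integrated IMU output) and R_WL, T_LW (from Section 2.2.1).
import numpy as np

def object_to_lidar(p_O, R_OW, T_WO, R_WL, T_LW):
    """Map a point from the object frame {O} to the LiDAR frame {L}:
    first {O}->{W} with Equation (9), then {W}->{L} with Equation (1)."""
    p_W = R_OW @ p_O + T_WO          # Equation (9)
    return R_WL @ p_W + T_LW         # Equation (1), i.e. Equation (10)
```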

3. LiDAR and IMU Measurement Model

3.1. LiDAR Measurement Model

The LiDAR sweeps its laser beam through 360°. Therefore, a post, which is modeled as a vertical line in the world (tangent) frame, appears as a point in the LiDAR frame. Similarly, a wall, which is modeled as a vertical plane in the world frame, appears as a line in the LiDAR frame of reference. The LiDAR position and point feature detection are shown in Figure 2.
Consider a mapped vertical post at $P_0$. The post will appear as a point in the LiDAR return. The vector from the {W} origin to $P_0$ in the world frame is ${}^{W}T_{WP_0} = [\,N_0\ \ E_0\ \ 0\,]^{T}$, which is known from the survey. The line is denoted as:
$$ {}^{W}T_{WP}(m) = {}^{W}T_{WP_0} + m\,e_3 $$
where ${}^{W}T_{WP}(m)$ is the vector in the world frame from the {W} origin to a point $P$ at height $m$ on the post, and $m$ is a scalar with $m \in (0, +\infty)$. The vector $e_3$ is parallel to the post. In our application, all the posts and planes are modeled as vertical; for a vertical post, $e_3 = [\,0\ \ 0\ \ 1\,]^{T}$ in the world frame.
For any feature point $P$, the vector from the LiDAR to the point is:
$$ T_{LP} = T_{WP} - \left(T_{WO} + T_{OL}\right) $$
as shown in Figure 3. Equation (12) is valid in any reference frame.
In Equation (12) and Figure 3, $T_{WP}$ is a constant in the world frame known from the feature survey, $T_{WO}$ will be calculated in the world frame, and $T_{OL}$ in the object frame is known and can be determined by pre-calibration.
$R_{O}^{L}$ is the rotation matrix from the object frame to the LiDAR frame, so the vector from the LiDAR to a feature point in LiDAR coordinates is denoted as:
$$ {}^{L}T_{LP} = R_{O}^{L}\left(R_{W}^{O}\left({}^{W}T_{WP} - {}^{W}T_{WO}\right) - {}^{O}T_{OL}\right) $$
By substituting Equation (11) into Equation (13) we have:
$$ {}^{L}T_{LP}(m) = R_{O}^{L}\left(R_{W}^{O}\left({}^{W}T_{WP_0} + m\,e_3 - {}^{W}T_{WO}\right) - {}^{O}T_{OL}\right) $$
which is the vector in LiDAR coordinates from the origin of the LiDAR sensor to a point at height $m$ on the post. By expanding Equation (14), the coordinate of ${}^{L}T_{LP}(m)$ in the ${}^{L}Z$ direction is:
$$ Z(m) = e_3^{T}\,{}^{L}T_{LP}(m) = m\,e_3^{T} R_{O}^{L} R_{W}^{O} e_3 + e_3^{T} R_{O}^{L}\left(R_{W}^{O}\left({}^{W}T_{WP_0} - {}^{W}T_{WO}\right) - {}^{O}T_{OL}\right) $$
Equation (15) is the theoretical expression. Note that for a single-planar LiDAR, the scan plane in LiDAR coordinates is ${}^{L}Z = 0$. Therefore, for all the LiDAR points we have ${}^{L}Z = 0$, and we can use this fact to calculate $m$ as:
$$ m = -\,\frac{e_3^{T} R_{O}^{L}\left(R_{W}^{O}\left({}^{W}T_{WP_0} - {}^{W}T_{WO}\right) - {}^{O}T_{OL}\right)}{e_3^{T} R_{O}^{L} R_{W}^{O} e_3} $$
By substituting Equation (16) into Equation (14), the detected point is:
$$ {}^{L}T_{LP}(m) = R_{O}^{L}\left(R_{W}^{O}\left({}^{W}T_{WP_0} - \frac{e_3^{T} R_{O}^{L}\left(R_{W}^{O}\left({}^{W}T_{WP_0} - {}^{W}T_{WO}\right) - {}^{O}T_{OL}\right)}{e_3^{T} R_{O}^{L} R_{W}^{O} e_3}\,e_3 - {}^{W}T_{WO}\right) - {}^{O}T_{OL}\right) $$
Equation (17) is the LiDAR measurement model.
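A sketch of how Equations (14)–(17) can be evaluated numerically is given below; it assumes NumPy, and the argument names mirror the symbols in the text rather than any actual interface of the paper:

```python
# Illustrative sketch of the single-planar LiDAR measurement model,
# Equations (14)-(17). R_OL, R_WO, T_WP0, T_WO and T_OL are assumed numpy arrays.
import numpy as np

E3 = np.array([0.0, 0.0, 1.0])  # vertical direction of the post in {W}

def predicted_post_return(R_OL, R_WO, T_WP0, T_WO, T_OL):
    """Solve Equation (16) for the height m at which the scan plane
    (L_Z = 0) intersects the post, then evaluate Equation (14)/(17)."""
    offset = R_OL @ (R_WO @ (T_WP0 - T_WO) - T_OL)   # constant part of Eq. (15)
    slope = E3 @ (R_OL @ R_WO @ E3)                  # coefficient of m in Eq. (15)
    m = -(E3 @ offset) / slope                       # Equation (16)
    # substitute m back into Equation (14) to obtain the detected point in {L}
    return R_OL @ (R_WO @ (T_WP0 + m * E3 - T_WO) - T_OL)
```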

3.2. IMU Measurement Model

The IMU consists of three gyros and three accelerometers; the gyros provide the change of the Euler angles, while the accelerometers give the specific force. The model is based on the inertial measurement error modeling method presented by Kelly et al. [5,27] and incorporates the modified Rodrigues parameter kinematic equation. We obtain the IMU measurement equations as follows:
$$ \dot{P}_I(t) = F(t)\,P_I(t) + P_I(t)\,F^{T}(t) + G(t)\,Q_g(t)\,G^{T}(t) $$
$$ F(t) = \frac{\partial \dot{\rho}(t)}{\partial \rho} = \frac{1}{2}\left(\left(\rho^{T}(t)\,\omega(t)\right)I_3 - \left[\omega(t)\right]_{\times} + \rho(t)\,\omega^{T}(t) - \omega(t)\,\rho^{T}(t)\right) $$
$$ G(t) = \frac{\partial \dot{\rho}(t)}{\partial n_g} = \frac{1}{4}\left(\left(1 - \left\|\rho(t)\right\|^{2}\right)I_3 + 2\left[\rho(t)\right]_{\times} + 2\,\rho(t)\,\rho^{T}(t)\right) $$
$$ \dot{\rho}(t) = \frac{1}{4}\left(\left(1 - \left\|\rho(t)\right\|^{2}\right)I_3 + 2\left[\rho(t)\right]_{\times} + 2\,\rho(t)\,\rho^{T}(t)\right)\omega_I(t) $$
where $P_I(t)$ is the IMU orientation covariance at time $t$, $Q_g(t)$ is the additive noise covariance, $\rho(t)$ is the modified Rodrigues parameter vector, $F(t)$ and $G(t)$ are the system matrices for $\rho(t)$, $\left[\rho(t)\right]_{\times}$ is the skew-symmetric cross-product matrix of $\rho(t)$, $I_3$ is the 3 × 3 identity matrix, $\omega_I(t) = \omega_m(t) - b_g - n_g(t)$ is the true angular velocity of the IMU, and $b_g$ and $n_g(t)$ are the gyroscope bias vector and noise vector, respectively.
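The following sketch (an illustration under stated assumptions, not the authors' code) integrates bias-compensated gyroscope rates into an orientation curve using the modified Rodrigues parameter kinematics of Equation (21) with a simple Euler step; the shadow-set switching needed as rotations approach 360° is ignored for brevity:

```python
# Minimal sketch of MRP orientation propagation per Equation (21).
# gyro_rates (K x 3) and timestamps (K,) are hypothetical inputs.
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def propagate_mrp(rho, omega, dt):
    """One Euler step of rho_dot = 1/4 [(1-|rho|^2) I + 2[rho]x + 2 rho rho^T] omega."""
    B = 0.25 * ((1.0 - rho @ rho) * np.eye(3)
                + 2.0 * skew(rho) + 2.0 * np.outer(rho, rho))
    return rho + dt * (B @ omega)

def integrate_gyro(gyro_rates, timestamps, bias=np.zeros(3)):
    """Integrate bias-compensated gyro rates into an MRP orientation curve."""
    rho = np.zeros(3)                      # start at the initial IMU pose {I0}
    curve = [rho.copy()]
    for k in range(1, len(timestamps)):
        dt = timestamps[k] - timestamps[k - 1]
        rho = propagate_mrp(rho, gyro_rates[k - 1] - bias, dt)
        curve.append(rho.copy())
    return np.asarray(curve)
```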

3.3. Time Delay Error Model

We will identify a specific instantaneous local LiDAR frame as $\{L_k\}$, with timestamp $t_{L_k}$ according to the receiver clock, for $k = 1,\dots,n$ LiDAR poses. Similarly, for $j = 1,\dots,m$ IMU poses, $\{I_j\}$ denotes a specific instantaneous local IMU frame with timestamp $t_{I_j}$. In general, the IMU data are available at a substantially higher rate than the LiDAR data, so $m > n$.
To compare LiDAR and IMU measurement data, a common representation is required; we use the orientation measurements to match IMU and LiDAR. The IMU orientation is measured with respect to the initial sensor pose. The LiDAR orientation is measured relative to a series of calibration planes. Our algorithm employs the modified Rodrigues parameters as a minimal and convenient orientation representation. The use of the modified Rodrigues parameters vector representation allows us to propagate the IMU orientation uncertainty forward continuously in time [28].
We have the following relationship at any time $t_{I_j}$:
$$ {}^{W}_{I}\rho(t) = {}^{W}_{L}\rho(t+\tau)\;{}^{L}_{I}\rho $$
where ${}^{L}_{I}\rho$ is the modified Rodrigues parameter vector describing the orientation from the {I} frame to the {L} frame, ${}^{W}_{L}\rho(t+\tau)$ is the modified Rodrigues parameter vector describing the orientation from the {L} frame to the {W} frame, $\tau$ is the time delay, and ${}^{W}_{I}\rho(t)$ is the modified Rodrigues parameter vector describing the orientation from the {I} frame to the {W} frame.
Although the IMU does not measure orientation directly, by integrating the IMU gyroscope data we can obtain the IMU orientation change over a period of time. Similarly, we can conveniently compute the LiDAR-measured orientation relative to the {W} frame. The measurement relationship between the {I}, {L} and {W} frames can then be written as:
$$ {}^{I_0}_{I}\rho(t) = {}^{I_0}_{W}\rho\;{}^{W}_{L}\rho(t+\tau)\;{}^{L}_{I}\rho $$
where ${}^{I_0}_{W}\rho$ is the modified Rodrigues parameter vector describing the orientation from the {W} frame to the initial IMU frame $\{I_0\}$.
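As an illustration of how Equation (23) is used in practice, the sketch below resamples the LiDAR orientation curve at the IMU timestamps shifted by a candidate delay τ; the componentwise interpolation of the MRP curves is a simplifying assumption that is only reasonable when the rotation between neighbouring samples is small:

```python
# Illustrative sketch: evaluate the LiDAR orientation curve at t + tau so it
# can be compared with the IMU curve, per Equation (23). Assumes numpy;
# lidar_t (K,), lidar_rho (K x 3) and imu_t (M,) are hypothetical inputs.
import numpy as np

def shift_and_sample(lidar_t, lidar_rho, imu_t, tau):
    """Resample the LiDAR MRP curve at the IMU timestamps shifted by tau."""
    shifted = imu_t + tau
    # keep only IMU timestamps whose shifted value falls inside the LiDAR record
    valid = (shifted >= lidar_t[0]) & (shifted <= lidar_t[-1])
    sampled = np.vstack([np.interp(shifted[valid], lidar_t, lidar_rho[:, i])
                         for i in range(3)]).T
    return valid, sampled
```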
(1) Position error analysis: the error in position can be calculated as follows:
$$ \delta\left({}^{W}\dot{T}_{WO}\right) = R_{O}^{W}\,{}^{O}V - \hat{R}_{O}^{W}\,{}^{O}\hat{V} = \hat{R}_{O}^{W}\,\delta\left({}^{O}V\right) - {}^{W}\hat{V}\,\rho $$
where $R_{O}^{W} = \hat{R}_{O}^{W}\left(I + \rho\right)$, $I = (R_{W}^{L})^{T} R_{W}^{L}$, $\rho$ is the calibration plane tilt error, and ${}^{O}V = {}^{O}\hat{V} + \delta\left({}^{O}V\right)$.
(2) Velocity error analysis: the error in velocity is modeled as follows:
$$ \delta\left({}^{O}\dot{V}\right) = {}^{O}\dot{V} - {}^{O}\hat{\dot{V}} = {}^{O}a - \left(\Omega_{IL}^{O} + \Omega_{IO}^{O}\right){}^{O}V - {}^{O}\hat{a} + \left(\hat{\Omega}_{IL}^{O} + \hat{\Omega}_{IO}^{O}\right){}^{O}\hat{V} = \hat{R}_{O}^{W}\left(\left(\frac{\partial g^{W}}{\partial\left({}^{W}T_{WO}\right)} + {}^{W}\hat{V}\,\frac{\partial\left({}^{W}\omega_{IL}\right)}{\partial\left({}^{W}T_{WO}\right)}\right)\delta\left({}^{W}T_{WO}\right) + \left(g^{W} + {}^{W}\omega_{IL}\left({}^{W}\hat{V}\right)^{T} - \left({}^{W}\omega_{IL}\right)^{T}\,{}^{W}\hat{V}\,I\right)\rho\right) - \left(\hat{\Omega}_{IL}^{O} + \hat{\Omega}_{IO}^{O}\right)\delta\left({}^{O}V\right) - \delta b_a - {}^{O}\hat{V}\,\delta b_g - \gamma_a - {}^{O}\hat{V}\,\gamma_g $$
where $g^{W}$ is the local gravity vector at the moving object location represented in the {W} frame, ${}^{W}\omega_{IL}$ is the angular rate between the {I} and {L} frames represented in the {W} frame, $\gamma_g$ is the white Gaussian measurement noise, and $b_g$ is the gyro bias, which is modeled as a random constant plus random noise.
(3) Attitude error analysis: the attitude error is given by:
$$ \delta\left(\dot{\Theta}\right) = \dot{\Theta} - \hat{\dot{\Theta}} = \Omega_{E}^{-1}\left({}^{W}\hat{\omega}_{IW}\right)\delta\left({}^{W}\omega_{IW}\right) - \hat{R}_{I}^{W}\left(\delta b_g + \gamma_g\right) $$
where $\delta\left({}^{W}\omega_{IW}\right)$ can be calculated as:
$$ \delta\left({}^{W}\omega_{IW}\right) = \omega_{IL}\left[\,-\sin\alpha\ \ 0\ \ \cos\alpha\,\right]\frac{\partial\Theta\left({}^{W}R_{IW}\right)}{\partial\left({}^{W}R_{IW}\right)}\,\delta\left({}^{W}R_{IW}\right) $$
(4) Time delay calibration parameter error model: the time delay error state vector, the process noise vector and the measurement noise vector of the LiDAR-IMU system can be expressed as:
$$ x = \left[\,\delta\left({}^{W}T_{WO}\right)\ \ \delta\left({}^{O}V\right)\ \ \delta\Theta\ \ \delta\omega_{ba}\ \ \delta\omega_{bg}\,\right]^{T},\qquad b = \left[\,b_a\ \ b_g\,\right]^{T},\qquad \gamma = \left[\,\gamma_a\ \ \gamma_g\,\right]^{T} $$
where $x$, $b$ and $\gamma$ are the time delay error state vector, the process noise vector and the measurement noise vector, respectively.
According to Equations (25)–(28), the time delay calibration parameter error model of the LiDAR-IMU system can be written as:
$$ \begin{bmatrix} \delta\left({}^{W}\dot{T}_{WO}\right) \\ \delta\left({}^{O}\dot{V}\right) \\ \delta\dot{\Theta} \\ \delta\dot{\omega}_{ba} \\ \delta\dot{\omega}_{bg} \end{bmatrix} = \begin{bmatrix} -\hat{R}_{O}^{W}\dfrac{\partial g^{W}}{\partial\left({}^{W}T_{WO}\right)} & \hat{R}_{O}^{W} & -{}^{W}\hat{V} & 0 & 0 \\ 0 & -\left(\hat{\Omega}_{IL}^{O} + \hat{\Omega}_{IO}^{O}\right) & -\hat{R}_{O}^{W} g^{W} & -I & -{}^{O}\hat{V} \\ 0 & 0 & \hat{\Omega}_{E}^{-1} & 0 & -\hat{R}_{O}^{W} \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} \delta\left({}^{W}T_{WO}\right) \\ \delta\left({}^{O}V\right) \\ \delta\Theta \\ \delta\omega_{ba} \\ \delta\omega_{bg} \end{bmatrix} + \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & -{}^{O}\hat{V} & 0 & 0 \\ 0 & -\hat{R}_{O}^{W} & 0 & 0 \\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \end{bmatrix}\begin{bmatrix} \gamma_a \\ \gamma_g \\ b_a \\ b_g \end{bmatrix} $$

4. Time Delay Calibration Using the ICP-ISPKF Integration Method

The ICP algorithm can be used to register the spatial data from the IMU and LiDAR measurements; it operates by aligning two curves in a 3D orientation space, one generated from the integrated IMU gyroscope data and one from the LiDAR scanner data. Each point on the respective curve has a corresponding timestamp identifying the time at which the measurement arrived at the receiver. By registering the orientation curves, we are able to use the timestamp values to estimate the relative delay between the IMU and LiDAR data streams. ICP incorporates in a principled way the uncertainty in the LiDAR orientation measurements and accounts for the fact that the integrated IMU orientation becomes less accurate over longer intervals due to the incorporation of noise. We combine ICP with the ISPKF because the ISPKF measurement update step is iterated until the change in the posterior state estimate drops below a small threshold, so it can optimally estimate the time delay calibration parameters. The ISPKF achieves superior performance in terms of efficiency and accuracy compared with the KF, EKF and UKF; Sibley et al. [29] compared the ISPKF with the KF, EKF, UKF and Gauss-Newton filter and demonstrated the ISPKF's ability to cope with IMU measurement noise. Therefore, in order to solve the problem of LiDAR-IMU time delay calibration, we use a fusion method based on the ICP and ISPKF algorithms.
As shown in Figure 4, the LiDAR-IMU time delay calibration integrates the ICP and ISPKF methods. First, once ICP succeeds in matching the LiDAR measurement points, the LiDAR-IMU state estimates are updated with the precise registration value. Second, at the time delay filter estimation step, the predicted pose is used to find the corresponding points between the LiDAR and IMU measurements. Third, at each time step, a recursive algorithm updates the related covariance estimates and the precise registration value for the LiDAR-IMU system. The ICP convergence information and the fault-detection logic are used to adaptively adjust the ISPKF's estimate of the motion state and a set of related parameters. In the closed-loop ICP-ISPKF architecture, the ICP initial guess and fault detection provide robust LiDAR-IMU pose tracking and automatic fault recovery. Finally, the pose prediction information is used to align the time delay error for ICP [6,30].
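A high-level sketch of this closed-loop architecture is given below; it is an assumed structure rather than the authors' implementation, and icp_register, ispkf_predict, ispkf_update, fault_detected and the filter_state object are hypothetical placeholders for the operations described in Sections 4.1 and 4.2:

```python
# Hypothetical closed-loop ICP-ISPKF sketch (Figure 4); the helper callables
# are passed in so the structure of the loop is explicit.
def calibrate_time_delay(lidar_curve, imu_curve, filter_state,
                         icp_register, ispkf_predict, ispkf_update,
                         fault_detected, max_passes=10):
    for _ in range(max_passes):
        # 1) ICP: register the LiDAR and IMU orientation curves and read the
        #    relative delay off the timestamps of the matched points
        matches, tau, rel_orientation = icp_register(
            lidar_curve, imu_curve, filter_state.predicted_pose())
        # 2) ISPKF: predict, then iterate the measurement update with the
        #    registration result until the posterior stops changing
        filter_state = ispkf_predict(filter_state)
        filter_state = ispkf_update(filter_state, matches, tau, rel_orientation)
        # 3) fault detection: if ICP diverged, fall back on the last reliable
        #    filter estimate and retry with re-initialized correspondences
        if fault_detected(matches):
            filter_state = filter_state.rollback()
    return filter_state
```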

4.1. The ICP Algorithm for Estimating the Time Delay and Relative Orientation of the LiDAR-IMU System

The ICP algorithm is utilized to estimate the LiDAR-IMU time delay and relative orientation. At the beginning, the transforms between the LiDAR and IMU orientation curves are computed by iteratively selecting n corresponding points using the ICP algorithm. We employ the TD-ICP registration rules proposed by Kelly et al. [5,31], adjusting the correspondence search time scale until the orientation curves converge. The algorithm can be described in two steps:
Step I: Registration Rules.
The ICP algorithm operates by iteratively selecting the closest point between the IMU orientation curve and each LiDAR measurement point; the notion of proximity in ICP requires a suitable distance measure. The distance function is computed as
$$ d_{ij} = d\left({}^{W}_{L_i}\rho,\ {}^{I_0}_{I_j}\rho\right) = 4\arctan\sqrt{\sigma_{ij}^{T}\sigma_{ij}} $$
$$ \sigma_{ij} = \left({}^{I_0}_{W}\hat{\rho}\;{}^{W}_{L_i}\rho\;{}^{L}_{I}\hat{\rho}\right)\ominus{}^{I_0}_{I_j}\rho $$
where $d_{ij}$ is the incremental rotation arc length, in radians, from ${}^{W}_{L_i}\rho$ to ${}^{I_0}_{I_j}\rho$, and $\sigma_{ij}$ is the spatial transform model on the unit sphere ($\ominus$ denotes the relative rotation between the predicted and measured orientations).
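The following sketch illustrates the closest-point selection of Step I, assuming NumPy; the MRP composition formula used by compose_mrp is the standard one (shadow-set handling near 360° is ignored), and the negation of an MRP vector is used as the inverse rotation:

```python
# Minimal sketch of the closest-point search using the MRP arc-length
# distance of Equation (30). All inputs are hypothetical numpy arrays.
import numpy as np

def compose_mrp(rho2, rho1):
    """MRP of the composed rotation: first rotate by rho1, then by rho2."""
    n1, n2 = rho1 @ rho1, rho2 @ rho2
    den = 1.0 + n1 * n2 - 2.0 * (rho1 @ rho2)
    return ((1.0 - n1) * rho2 + (1.0 - n2) * rho1 - 2.0 * np.cross(rho2, rho1)) / den

def mrp_distance(sigma):
    """Rotation arc length (radians) encoded by the relative MRP sigma."""
    return 4.0 * np.arctan(np.linalg.norm(sigma))

def closest_imu_index(lidar_rho, imu_curve, rho_WI0_hat, rho_IL_hat):
    """Index of the IMU sample closest (in rotation) to one LiDAR sample."""
    # map the LiDAR orientation into the {I0} frame, as in Equation (31)
    predicted = compose_mrp(compose_mrp(rho_WI0_hat, lidar_rho), rho_IL_hat)
    dists = [mrp_distance(compose_mrp(predicted, -rho_imu))  # -rho = inverse rotation
             for rho_imu in imu_curve]
    return int(np.argmin(dists))
```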
Step II: Nonlinear Iterative Registration Rules.
We obtain n correspondences between the IMU and LiDAR orientation measurement curves by selecting the closest IMU point for each LiDAR point. The cost function used to align the IMU and LiDAR orientation curves is
$$ U\left({}^{I_0}_{W}\rho,\ {}^{L}_{I}\rho\right) = \sum_{k=1}^{n} s_k\,P_{L_k}^{-1}\,s_k^{T} + \sum_{k=1}^{n} t_k\,P_{I_{f(k)}}^{-1}\,t_k^{T} $$
$$ s_k = {}^{W}_{L_k}\rho - {}^{W}_{L_k}\hat{\rho} $$
$$ t_k = {}^{I_0}_{I_{f(k)}}\rho - {}^{I_0}_{I_{f(k)}}\hat{\rho} $$
where $P_{L_k}$ and $P_{I_{f(k)}}$ are the associated covariance matrices, $s_k$ and $t_k$ are the stacked residual vectors, and ${}^{I_0}_{W}\rho$ and ${}^{L}_{I}\rho$ are the transform parameters.
By using Lagrange multipliers, incorporating the constraints ${}^{I_0}_{I_{f(k)}}\hat{\rho} = {}^{I_0}_{W}\hat{\rho}\;{}^{W}_{L_k}\hat{\rho}\;{}^{L}_{I}\hat{\rho}$, differentiating and rearranging, Equation (32) can be minimized as
$$ U\left({}^{I_0}_{W}\rho,\ {}^{L}_{I}\rho\right) = \left[\sum_{k=1}^{n} J_k\left(P_{I_{f(k)}} + H_k P_{L_k} H_k^{T}\right)^{-1} J_k^{T}\right]^{-1}\left[\sum_{k=1}^{n}\left({}^{I_0}_{I_{f(k)}}\hat{\rho} - {}^{I_0}_{W}\hat{\rho}\;{}^{W}_{L_k}\hat{\rho}\;{}^{L}_{I}\hat{\rho}\right)J_k\left(P_{I_{f(k)}} + H_k P_{L_k} H_k^{T}\right)^{-1} J_k^{T}\right] $$
$$ H\left({}^{I_0}_{W}\rho,\ {}^{W}_{L_k}\rho,\ {}^{L}_{I}\rho\right) = \left[\frac{\partial\left({}^{I_0}_{W}\rho\;{}^{W}_{L_k}\rho\;{}^{L}_{I}\rho\right)}{\partial\left({}^{I_0}_{W}\rho\right)},\ \ \frac{\partial\left({}^{I_0}_{W}\rho\;{}^{W}_{L_k}\rho\;{}^{L}_{I}\rho\right)}{\partial\left({}^{L}_{I}\rho\right)}\right] $$
where $H_k$ is the block-diagonal Jacobian matrix of the constraints with respect to ${}^{W}_{L_k}\rho$, and $J_k$ is the stacked Jacobian matrix of the constraints with respect to the transform parameters ${}^{I_0}_{W}\rho$ and ${}^{L}_{I}\rho$.
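A sketch of one registration pass built on the helpers above is shown below; reading the delay off the timestamps of the matched points is one simple way to realize the idea, and the refinement of the transform parameters via Equation (35) is omitted:

```python
# Illustrative sketch of one TD-ICP registration pass, assuming the
# closest_imu_index helper from the previous snippet and numpy.
import numpy as np

def register_once(lidar_t, lidar_curve, imu_t, imu_curve, rho_WI0, rho_IL):
    """Match every LiDAR orientation sample to its closest IMU sample and
    read the relative delay off the timestamps of the matched pairs."""
    matches = [(k, closest_imu_index(rho_Lk, imu_curve, rho_WI0, rho_IL))
               for k, rho_Lk in enumerate(lidar_curve)]
    # delay estimate: average timestamp offset over the matched pairs
    tau = float(np.mean([imu_t[j] - lidar_t[k] for k, j in matches]))
    return tau, matches
```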

4.2. Iterated Sigma Point Kalman Filter (ISPKF) Algorithm for Compensating the Calibration Parameters

After the relative orientation and the optimal time delay have been estimated for the LiDAR-IMU system, the ISPKF algorithm is used to compensate the time delay calibration parameters between the LiDAR and IMU measurements.
The ISPKF algorithm uses the process noise components to augment the state vector and the state covariance matrix, as expressed in:
$$ \hat{X}^{a}(t_k) = \begin{bmatrix} \hat{X}(t_k) \\ n(t_k) \end{bmatrix} $$
where $n(t_k)$ is the noise vector of the measurement process and $\hat{X}^{a}(t_k)$ is the augmented state vector. At time $t_{k-1}$, shortly after the LiDAR and IMU measurement update, the augmented state covariance matrix $P^{a+}(t_{k-1})$ and the augmented state mean $\hat{X}^{a+}(t_{k-1})$ can be formed as:
$$ P^{a+}(t_{k-1}) = \begin{bmatrix} P^{+}(t_{k-1}) & 0_{m\times 12} \\ 0_{12\times m} & Q_c(t_{k-1}) \end{bmatrix} $$
$$ \hat{X}^{a+}(t_{k-1}) = \begin{bmatrix} \hat{X}^{+}(t_{k-1}) \\ 0_{12\times 1} \end{bmatrix} $$
where the state vector $\hat{X}^{a+}(t_{k-1})$ has size $m = 26 + 3n$ and $Q_c(t_{k-1})$ is the continuous-time covariance matrix for the noise vector $n(t_{k-1})$, when n calibration points are included, for the Cartesian and inverse calibration point parameterizations, respectively. The scaled form of the unscented transform is employed, which requires a scaling term:
$$ \lambda = \alpha^{2}\left(N + \beta\right) - N $$
The α parameter controls the spread of the sigma points about the state mean; we usually set α to a small positive value. In the Taylor series expansion of the a posteriori state distribution, the β parameter is used to correct the higher-order terms. We set β = 2, which minimizes the fourth-order error for a jointly Gaussian distribution.
We use the augmented state vector $\hat{X}^{a+}(t_{k-1})$ to generate a set of sigma points according to the following equations:
$$ \begin{cases} \chi^{a(0)}(t_{k-1}) = \hat{X}^{a+}(t_{k-1}) \\ \chi^{a(l)}(t_{k-1}) = \hat{X}^{a+}(t_{k-1}) + \left(S(t_{k-1})\right)_j, & j = l = 1,\dots,N \\ \chi^{a(l)}(t_{k-1}) = \hat{X}^{a+}(t_{k-1}) - \left(S(t_{k-1})\right)_j, & j = 1,\dots,N,\ \ l = N+1,\dots,2N \\ S(t_{k-1}) = \sqrt{(\lambda+N)\,P^{a+}(t_{k-1})} \end{cases} $$
where $P^{a+}(t_{k-1})$ is the augmented state covariance matrix and $\left(S(\cdot)\right)_j$ denotes the $j$-th column of the matrix $S$. The weights associated with the sigma points are:
$$ \begin{cases} W_m^{(0)} = \lambda/(\lambda+N) \\ W_c^{(0)} = \lambda/(\lambda+N) + \left(1 - \alpha^{2} + \beta\right) \\ W_m^{(j)} = W_c^{(j)} = \dfrac{1}{2(\lambda+N)}, & j = 1,\dots,2N \end{cases} $$
Each sigma point is propagated through the augmented nonlinear process model function $f^{a}$. Over the time interval $t \in [t_{k-1}, t_k)$, the a priori state estimate and covariance at $t_k$ can be computed as:
$$ \begin{cases} \chi_k^{(p)} = f^{a}\left(\chi^{a(p)}(t_{k-1})\right), & p = 0,\dots,2N \\ \hat{X}_k^{-} = \sum_{p=0}^{2N} W_m^{(p)}\,\chi_k^{(p)} \\ P_k^{-} = \sum_{p=0}^{2N} W_c^{(p)}\left(\chi_k^{(p)} - \hat{X}_k^{-}\right)\left(\chi_k^{(p)} - \hat{X}_k^{-}\right)^{T} \end{cases} $$
By propagating each sigma point through the nonlinear measurement model function $h$, we can determine the predicted measurement vector as follows:
$$ \begin{cases} Z_k^{(p)} = h\left(\chi_k^{(p)}\right), & p = 0,\dots,2N \\ \hat{Z}_k = \sum_{p=0}^{2N} W_m^{(p)}\,Z_k^{(p)} \end{cases} $$
By computing the a posteriori state vector, the state covariance matrix and the Kalman gain matrix, we can perform the state update as follows:
$$ \begin{cases} P_{\hat{Z}_k\hat{Z}_k} = \sum_{p=0}^{2N} W_c^{(p)}\left(Z_k^{(p)} - \hat{Z}_k\right)\left(Z_k^{(p)} - \hat{Z}_k\right)^{T} \\ K_k = P_{\hat{X}_k\hat{Z}_k}\left(P_{\hat{Z}_k\hat{Z}_k} + R_k\right)^{-1} \\ \hat{X}_k^{+} = \hat{X}_k^{-} + K_k\left(Z_k - \hat{Z}_k\right) \\ P_k^{+} = P_k^{-} - K_k\,P_{\hat{Z}_k\hat{Z}_k}\,K_k^{T} \end{cases} $$
where $R_k$ is the measurement covariance matrix for $Z_k$, while $P_{\hat{Z}_k\hat{Z}_k}$ and $P_{\hat{X}_k\hat{Z}_k}$ are the predicted measurement covariance matrix and the state-measurement cross-covariance matrix, respectively.
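For illustration, the sketch below implements the sigma-point machinery of Equations (40)–(45) in NumPy, with the measurement update iterated by re-drawing sigma points about each new estimate (a sigma-point analogue of the iterated EKF, which is one common way to realize the iteration described above); f_a and h are passed in as callables and everything else is an assumption, not the authors' implementation:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0):
    """Scaled sigma points and weights, Equations (40)-(42)."""
    N = len(x)
    lam = alpha ** 2 * (N + beta) - N                 # Equation (40)
    S = np.linalg.cholesky((lam + N) * P)             # matrix square root
    chi = np.vstack([x, x + S.T, x - S.T])            # Equation (41)
    Wm = np.full(2 * N + 1, 0.5 / (lam + N))
    Wc = Wm.copy()
    Wm[0] = lam / (lam + N)                           # Equation (42)
    Wc[0] = lam / (lam + N) + (1.0 - alpha ** 2 + beta)
    return chi, Wm, Wc

def ispkf_step(x, P, f_a, h, z, R, max_iters=5, tol=1e-9):
    """One predict/update cycle with an iterated measurement update."""
    # prediction, Equation (43)
    chi, Wm, Wc = sigma_points(x, P)
    chi_pred = np.array([f_a(c) for c in chi])
    x_pred = Wm @ chi_pred
    dX = chi_pred - x_pred
    P_pred = (Wc[:, None] * dX).T @ dX
    # iterated measurement update, Equations (44)-(45)
    x_i = x_pred.copy()
    for _ in range(max_iters):
        chi_i, Wm, Wc = sigma_points(x_i, P_pred)     # re-linearize about x_i
        Z = np.array([h(c) for c in chi_i])
        z_pred = Wm @ Z
        dZ = Z - z_pred
        Pzz = (Wc[:, None] * dZ).T @ dZ + R
        Pxz = (Wc[:, None] * (chi_i - x_i)).T @ dZ
        K = Pxz @ np.linalg.inv(Pzz)
        H = np.linalg.solve(P_pred, Pxz).T            # statistical linearization
        x_new = x_pred + K @ (z - z_pred - H @ (x_pred - x_i))
        done = np.linalg.norm(x_new - x_i) < tol      # stop when the change is small
        x_i = x_new
        if done:
            break
    P_post = P_pred - K @ Pzz @ K.T
    return x_i, P_post
```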

5. Experiments and Discussion

To verify the effectiveness of the proposed algorithm, a field experiment was performed on the fourth floor of the School of Mechanical and Electrical Engineering building on the campus of China University of Mining and Technology. The experimental layout is shown in Figure 5. The experiments were conducted with a VLP-16 LiDAR (Velodyne, Morgan Hill, CA, USA) and Spatial FOG IMU (Advanced Navigation Company, Sydney, Australia). The specifications of the IMU and LiDAR are listed in Table 1.
The LiDAR consists of 16 laser scanners that collectively span a 27° vertical field of view. In each experiment, we first initialize the IMU biases by keeping the IMU stationary for 3 s. We then begin recording data from the LiDAR and the IMU while the moving object carrying the LiDAR-IMU platform automatically follows different configurations in front of the calibration plane. The IMU data are recorded at 100 Hz, while the LiDAR scanning data rate is 100,000 points/s. We chose a set of three different trajectories for the LiDAR-IMU platform, and for each trajectory we experimented with a different time shift and a different initial orientation estimate. In all trajectories, the initial true relative orientation of the IMU with respect to the LiDAR was the same. In the experiments, the moving object ran three different trials, with the LiDAR-IMU platform translating and rotating in front of the calibration plane for approximately 5 s; the platform then ran in three different orientation and time delay configurations. For configuration 1, we set the LiDAR-IMU relative orientation to a nominal value (a roll of 90°, a pitch of 0°, and a yaw of 90°) and fixed the estimated time delay at zero. For configuration 2, we used the mean LiDAR-IMU relative orientation computed by averaging the roll, pitch and yaw values, while again fixing the time delay at zero. Finally, for configuration 3, we used the averaged LiDAR-IMU relative orientation and the mean time delay value from configuration 2. We determined the performance of each configuration by computing the RMS error. Once the LiDAR and IMU measurements for each configuration of the calibration plane were available, we used the method described in Section 4 to accurately estimate the LiDAR-IMU time delay calibration parameters and the LiDAR-IMU transformation parameters.
The experimental results for the LiDAR-IMU time delay calibration are shown in Figure 6. Figure 6a,d,g shows the initial alignment between the IMU and LiDAR in the East, North and Up orientations, respectively. Figure 6b,e,h shows the alignment after one calibration pass using ICP-ISPKF between the IMU and LiDAR in the East, North and Up orientations, respectively. Figure 6c,f,i shows the alignment after ten calibration passes using ICP-ISPKF between the IMU and LiDAR in the East, North and Up orientations, respectively. We examined the error in the LiDAR-IMU time delay calibration over ten ICP-ISPKF calibration passes, as shown in Table 2.
According to Table 2 and Figure 6, the initial time delay alignment error is 9.58 ms, and the LiDAR-IMU alignment errors in the East, North and Up orientations are 0.093 m, 0.168 m and 0.089 m, respectively. After one calibration pass using ICP-ISPKF converged, the time delay alignment error is reduced to 4.67 ms, and the alignment errors in the East, North and Up orientations are reduced to 0.067 m, 0.097 m and 0.063 m, respectively. After ten calibration passes using ICP-ISPKF converged, the final time delay alignment error is reduced to 0.50 ms, and the final alignment errors in the East, North and Up orientations are reduced to 0.018 m, 0.019 m and 0.017 m, respectively.
In order to assess the efficiency and accuracy of the ISPKF method, we also used the KF and EKF methods to estimate the LiDAR-IMU time delay calibration errors. Since obtaining the true initial time delay value is very difficult, we assumed the initial time delay alignment errors between the LiDAR and IMU to be zero. When the LiDAR and IMU measurements for each configuration of the calibration plane were available, the ICP-KF, ICP-EKF and ICP-ISPKF methods were used to perform the time delay calibration for the LiDAR-IMU system.
The experimental results are shown in Figure 7, where the red line is the time delay alignment error obtained with the ICP-KF method, the blue line is that obtained with the ICP-EKF method, and the green line is that obtained with the ICP-ISPKF method. The mean alignment errors after one calibration pass using the ICP-KF, ICP-EKF and ICP-ISPKF methods are 0.233 m, 0.151 m and 0.067 m, respectively.

6. Conclusions

In order to solve the problem of LiDAR-IMU time delay calibration, we have presented a fusion method based on an iterative closest point algorithm and an iterated sigma point Kalman filter, which combines the advantages of ICP and ISPKF. The ICP algorithm precisely determines the unknown transformation between the LiDAR and the IMU, and the ISPKF algorithm optimally estimates the time delay calibration parameters. We realized the coordinate transformation from the LiDAR frame to the IMU frame, and established the measurement models and the time delay error model of the LiDAR and IMU. We also presented the ICP and ISPKF procedure for LiDAR-IMU time delay calibration. Intensive experimental studies were conducted to check the validity of the theoretical results. The results show that after ten calibration passes using ICP-ISPKF converged, the time delay alignment error was reduced from 9.58 ms to 0.50 ms, and the alignment errors in the East, North and Up orientations were reduced from 0.093 m, 0.168 m and 0.089 m to 0.018 m, 0.019 m and 0.017 m, respectively. In future work, the ICP-ISPKF algorithm will be used to calibrate the time delay error in other multi-sensor systems, and, within the batch execution framework, different methods will be examined for incorporating varying time delays. We are convinced that the proposed method will be useful for other types of time delay calibration between multiple sensors.

Acknowledgments

This project is supported by National Natural Science Foundation of China (51304190, U1510116), and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. The authors would like to express their sincere thanks to them.

Author Contributions

Wanli Liu and Shibo Wang conceived and designed the experiments; Wanli Liu wrote the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gong, X.; Lin, Y.; Liu, J. 3D LIDAR-camera extrinsic calibration using an arbitrary trihedron. Sensors 2013, 13, 1902–1918. [Google Scholar] [CrossRef] [PubMed]
  2. Sim, S.; Sock, J.; Kwak, K. Indirect correspondence-based robust extrinsic calibration of LiDAR and camera. Sensors 2016, 16, 933. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, X.; Shi, H.; Pan, J.; Zhang, C. Integrated navigation method based on inertial navigation system and LiDAR. Opt. Eng. 2016, 55, 044102. [Google Scholar] [CrossRef]
  4. Yun, S.; Lee, Y.J.; Kim, C.J.; Sung, S. Integrated navigation design using a gimbaled vision/LiDAR system with an approximate ground description model. Int. J. Aeronaut. Space Sci. 2014, 14, 369–378. [Google Scholar] [CrossRef]
  5. Kelly, J.; Roy, N.; Sukhatme, G.S. Determining the time delay between inertial and visual sensor measurements. IEEE Trans. Robot. 2014, 30, 1514–1523. [Google Scholar] [CrossRef]
  6. Aghili, F.; Su, C.-Y. Robust Relative Navigation by Integration of ICP and Adaptive Kalman Filter Using Laser Scanner and IMU. IEEE/ASME Trans. Mech. 2016, 21, 2015–2026. [Google Scholar] [CrossRef]
  7. Pothou, A.; Toth, C.; Karamitsos, S. On using QA/QC techniques for LiDAR/IMU boresight misalignment. In Proceedings of the 5th International Symposium on Mobile Mapping Technology, Padua, Italy, 19–23 May 2007.
  8. Deymier, C.; Teuliere, C.; Chateau, T. Self-calibration of a vehicle’s acquisition system with cameras, IMU and 3D LiDAR. Traitement Du Signal 2015, 32, 121–145. [Google Scholar] [CrossRef] [Green Version]
  9. Tang, J.; Chen, Y.; Niu, X.; Wang, L.; Chen, L.; Liu, J.; Shi, C.; Hyyppä, J. LiDAR scan matching aided inertial navigation system in GNSS-denied environments. Sensors 2015, 15, 16710–16728. [Google Scholar] [CrossRef] [PubMed]
  10. Mirzaei, F.M.; Kottas, D.G.; Roumeliotis, S.I. 3D LIDAR-camera intrinsic and extrinsic calibration: Identifiability and analytical least-squares-based initialization. Int. J. Robot. Res. 2012, 31, 452–467. [Google Scholar] [CrossRef]
  11. Yun, S.; Lee, Y.J.; Sung, S. IMU/Vision/LiDAR integrated navigation system in GNSS denied environments. In Proceedings of the 2013 IEEE Aerospace Conference Proceedings, Big Sky, MT, USA, 2–9 March 2013.
  12. Li, R.; Liu, J.; Zhang, L.; Hang, Y. LIDAR/MEMS IMU integrated navigation (SLAM) method for a small UAV in indoor environments. In Proceedings of the Inertial Sensors & Systems Symposium, Laguna Beach, CA, USA, 25–26 February 2014; pp. 1–15.
  13. Zhang, X.; Lin, Z. Online calibration method for IMU based on the usage of single-beam LiDAR. Infrared Laser Eng. 2013, 42, 466–471. [Google Scholar]
  14. Pham, D.D.; Suh, Y.S. Pedestrian navigation using foot-mounted inertial sensor and LIDAR. Sensors 2016, 16, 120. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, X.; Yin, J.; Lin, Z.; Zhang, C. A positioning and orientation method based on the usage of INS and single-beam LiDAR. Optik 2015, 126, 3376–3381. [Google Scholar] [CrossRef]
  16. Nguyen, H.-N.; Zhou, J.; Kang, H.-J. A calibration method for enhancing robot accuracy through integration of an extended Kalman filter algorithm and an artificial neural network. Neurocomputing 2015, 151, 996–1005. [Google Scholar] [CrossRef]
  17. Xian, Z.; Hu, X.; Lian, J. Fusing Stereo Camera and Low-Cost Inertial Measurement Unit for Autonomous Navigation in a Tightly-Coupled Approach. J. Navig. 2014, 68, 434–452. [Google Scholar] [CrossRef]
  18. Bing, A.; Sentis, L.; Paine, N.; Han, S.; Mok, A.; Fok, C.-L. Stability and Performance Analysis of Time-Delayed Actuator Control Systems. J. Dyn. Syst. Meas. Control 2016, 138, 1–20. [Google Scholar]
  19. Wu, Q.; Jia, Q.; Shan, J.; Meng, X. Angular velocity estimation based on adaptive simplified spherical simplex unscented Kalman filter in GFSINS. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2013, 228, 1375–1388. [Google Scholar] [CrossRef]
  20. Jwo, D.-J. Navigation Integration Using the Fuzzy Strong Tracking Unscented Kalman Filter. J. Navig. 2009, 62, 303–322. [Google Scholar] [CrossRef]
  21. Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R.M. Automatic Extrinsic Calibration of Vision and Lidar by Maximizing Mutual Information. J. Field Robot. 2014, 32, 696–722. [Google Scholar]
  22. García-Moreno, A.-I.; González-Barbosa, J.-J.; Ramírez Pedraza, A.; Hurtado Ramos, J.B.; Ornelas Rodriguez, F.J. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing. J. Appl. Remote Sens. 2016, 10, 024002. [Google Scholar] [CrossRef]
  23. Klimkovich, B.V.; Tolochko, A.M. Determination of time delays in measurement channels during SINS calibration in inertial mode. Gyroscopy Navig. 2016, 7, 137–144. [Google Scholar] [CrossRef]
  24. Huang, L.; Barth, M. A novel multi-planar LIDAR and computer vision calibration procedure using 2D patterns for automated navigation. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 117–122.
  25. Zhou, L. A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences. IEEE Sens. J. 2014, 14, 442–454. [Google Scholar] [CrossRef]
  26. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [Google Scholar] [CrossRef] [PubMed]
  27. Kelly, J.; Sukhatme, G.S. Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration. Int. J. Robot. Res. 2011, 30, 56–79. [Google Scholar] [CrossRef]
  28. Kelly, J.; Sukhatme, G.S. Visual-inertial simultaneous localization, mapping and sensor-to-sensor self-calibration. In Proceedings of the 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), Daejeon, Korea, 15–18 December 2009; Volume 30, pp. 360–368.
  29. Sibley, G.; Sukhatme, G.; Matthies, L. The Iterated Sigma Point Kalman Filter with Applications to Long Range Stereo. Robot. Sci. Syst. 2006, 8, 235–244. [Google Scholar]
  30. Aghili, F. 3D simultaneous localization and mapping using IMU and its observability analysis. Robotica 2011, 29, 805–814. [Google Scholar]
  31. Kelly, J.; Sukhatme, G.S. A General Framework for Temporal Calibration of Multiple Proprioceptive and Exteroceptive Sensors. In Experimental Robotics; Springer: Berlin, Germany, 2012; Volume 79, pp. 195–209. [Google Scholar]
Figure 1. Relationship between the {L}, {I}, {O} and {W} frames.
Figure 2. The LiDAR position and point feature detection.
Figure 3. The LiDAR, Object and World frames as well as the feature point P.
Figure 4. The architecture of the ICP and ISPKF integration method.
Figure 5. Experiment layout.
Figure 6. Time delay calibration using ICP-ISPKF for LiDAR-IMU. (a) Initial alignment between LiDAR (red dots) and IMU (black line) in the East orientation; (b) time delay calibration after one pass of ICP-ISPKF between LiDAR (magenta dots) and IMU (black line) in the East orientation; (c) time delay calibration after ten passes of ICP-ISPKF between LiDAR (blue dots) and IMU (black line) in the East orientation; (d) initial alignment between LiDAR (red dots) and IMU (black line) in the North orientation; (e) time delay calibration after one pass of ICP-ISPKF between LiDAR (magenta dots) and IMU (black line) in the North orientation; (f) time delay calibration after ten passes of ICP-ISPKF between LiDAR (blue dots) and IMU (black line) in the North orientation; (g) initial alignment between LiDAR (red dots) and IMU (black line) in the Up orientation; (h) time delay calibration after one pass of ICP-ISPKF between LiDAR (magenta dots) and IMU (black line) in the Up orientation; (i) time delay calibration after ten passes of ICP-ISPKF between LiDAR (blue dots) and IMU (black line) in the Up orientation.
Figure 7. Time delay calibration using ICP-KF, ICP-EKF and ICP-ISPKF for LiDAR-IMU.
Table 1. The specifications of the IMU and LiDAR.

IMU (navigation):
  Horizontal Position Accuracy: 0.5 m
  Vertical Position Accuracy: 0.8 m
  Velocity Accuracy: 0.007 m/s
  Roll & Pitch Accuracy: 0.01°
  Heading Accuracy: 0.05°
  Output Data Rate: up to 1000 Hz

IMU (sensors):         Accelerometers   Gyroscopes
  Range                10 g             490°/s
  Bias Instability     15 μg            0.05°/h
  Initial Bias         <1 mg            <1°/h
  Scaling Error        <0.03%           <0.01%
  Scale Stability      <0.04%           <0.02%
  Non-linearity        <0.03%           <0.005%

LiDAR:
  Channels: 16
  Range: 100 m
  Accuracy: ±3 cm
  Vertical FOV: 30°
  Horizontal FOV: 360°
  Output Data Rate: 300,000 pts/s
Table 2. Time delay calibration times using ICP-ISPKF for LiDAR-IMU.

Calibration Passes (ICP-ISPKF) | Time Delay Error (ms) | Alignment Error in East (m) | Alignment Error in North (m) | Alignment Error in Up (m)
 0 | 9.58 | 0.093 | 0.168 | 0.089
 1 | 4.67 | 0.067 | 0.097 | 0.063
 2 | 2.45 | 0.043 | 0.068 | 0.041
 3 | 1.66 | 0.037 | 0.047 | 0.035
 4 | 1.17 | 0.031 | 0.039 | 0.029
 5 | 0.87 | 0.026 | 0.032 | 0.025
 6 | 0.63 | 0.023 | 0.027 | 0.021
 7 | 0.57 | 0.020 | 0.024 | 0.019
 8 | 0.53 | 0.019 | 0.021 | 0.018
 9 | 0.51 | 0.018 | 0.020 | 0.018
10 | 0.50 | 0.018 | 0.019 | 0.017
