LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their joint applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, so the time delay cannot be computed directly. To solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both: ICP can precisely determine the unknown transformation between the LiDAR and the IMU, and the ISPKF can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is established. Second, the measurement models and the time delay error model of the LiDAR and IMU are derived. Third, the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results validate the proposed method and demonstrate that the time delay error can be accurately calibrated.


Introduction
Light Detection and Ranging (LiDAR) devices and Inertial Measurement Units (IMUs), often found on vehicles, airplanes and robots, are increasingly being used for localization and navigation tasks. Together, LiDAR and IMU sensors can supply accurate attitude estimation and are well suited to many applications. However, in GPS-denied environments IMU position estimates are unreliable over long periods due to drift, which can cause large cumulative errors. A LiDAR, in contrast, uses laser beams to determine the distance and azimuth from the sensor to an object, providing 3D localization information with high accuracy and efficiency that can reduce or bound IMU drift. Therefore, an integrated LiDAR-IMU system can provide highly accurate position and pose information over long periods of time in GPS-denied environments.
Combining information from multiple sensors offers several potential advantages, including enhanced accuracy and improved robustness, but these come at a cost: a fundamental requirement in any multi-sensor system is time delay calibration. To ensure optimal performance, the LiDAR and IMU must be properly calibrated, which requires estimating the relative timing of each sensor measurement, the coordinate transformation between the sensors, and the time delay calibration parameters [1][2][3][4].
Several methods exist for LiDAR, IMU and camera time delay calibration. The work of Kelly et al. [5] presented a time delay iterative closest point method for determining the time delay

Coordinate Frame
In general, there are four coordinate frames in the LiDAR-IMU system; the relationship between these frames is shown in Figure 1: (1) The LiDAR frame, {L}, in which LiDAR measurements are represented, and whose axes are defined as right, forward and up. (2) The IMU frame, {I}, defined by the IMU, in which angular rotation rates and linear accelerations are measured, with its origin at a point on the IMU body.


Transformation from LiDAR Frame to the World Frame
Suppose a point W P in the {W} frame is located at L P in the {L} frame. The transformation from the {W} coordinate system to the {L} coordinate system can be expressed as [24]:

L P = R L W · W P + L T LW (1)

where R L W is a 3 × 3 orthonormal matrix representing the rotation from the {W} frame to the {L} frame, and L T LW is the translation vector. Our goal is to calculate R L W and L T LW . The transformation is obtained through LiDAR scans of a calibration plane, which yield a geometric constraint between the LiDAR scanning points and the plane. Since the LiDAR scanning points lie on the calibration plane, and W r is the normal vector to the plane, we have:

W r · W P = d (2)

where W P is the coordinate of a LiDAR scanning point in the {W} frame, and d is a scalar representing the distance from the origin of the {W} frame to the calibration plane, calculated from the position and orientation of the plane. From Equation (1) we know that:

W P = R L W −1 ( L P − L T LW ) (3)

By substituting Equation (3) into Equation (2), we have:

W r · R L W −1 ( L P − L T LW ) = d (4)
For any given LiDAR scanning point and calibration plane position, Equation (4) gives a constraint on R L W and L T LW . It is solved in two consecutive steps: a linear solution, followed by a non-linear optimization [25,26]: (1) Linear solution. The LiDAR scanning plane in {L} is L Z = 0, so each LiDAR scanning point can be represented as L P = [ L P x L P y L P z ] T . Substituting into Equation (4) gives Equation (5), which is rewritten as Equation (6), where Z is the parameter matrix to be solved, combining the two unknowns R L W and L T LW . Equation (6) can be solved using the standard linear least squares algorithm. In order to obtain a better result, multiple calibration planes should be used. Suppose a total of N calibration planes are used, with M i (i = 1, . . . , N) LiDAR scanning points on the i-th plane. Let

Z = [ z 11 z 12 z 13 ; z 21 z 22 z 23 ; z 31 z 32 z 33 ],

let the normal vector of the i-th plane be W r i = [r i,1 r i,2 r i,3 ], let d i be the distance from the origin of the {W} frame to the i-th calibration plane, and let the j-th LiDAR scanning point on the i-th plane be L P ij = [ L P ij,x L P ij,y L P ij,z ] T . Equation (6) is then rewritten as:

(r i,1 z 11 + r i,2 z 21 + r i,3 z 31 ) · L P ij,x + (r i,1 z 12 + r i,2 z 22 + r i,3 z 32 ) · L P ij,y + (r i,1 z 13 + r i,2 z 23 + r i,3 z 33 ) · L P ij,z = d i (7)

where i = 1, 2, . . . , N and j = 1, 2, . . . , M i . Each LiDAR scanning point therefore contributes one linear equation, i.e., one row of the system in Equation (7), which can be solved using the standard linear least squares algorithm; R L W and L T LW are then recovered from Z.
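As an illustration, the per-point linear system of Equation (7) can be assembled and solved with ordinary least squares. The data below are synthetic placeholders, not values from the paper:

```python
import numpy as np

# Synthetic calibration data: each plane has a unit normal W_r, a surveyed
# distance d, and a list of LiDAR scan points (illustrative values only).
planes = [
    (np.array([1.0, 0.0, 0.0]), 2.0,
     [np.array([0.5, 0.3, 0.0]), np.array([0.4, -0.2, 0.1])]),
    (np.array([0.0, 1.0, 0.0]), 1.5,
     [np.array([0.1, 0.6, 0.0]), np.array([-0.3, 0.7, 0.2])]),
    (np.array([0.6, 0.8, 0.0]), 1.8,
     [np.array([0.2, 0.2, 0.0]), np.array([0.3, -0.1, 0.1])]),
]

# One row per scan point: in Equation (7) the coefficient of z_kl is
# r_{i,k} * P_{ij,l}, which is exactly the Kronecker product kron(r, P)
# for the row-major flattening [z11, z12, ..., z33] of Z.
A = np.array([np.kron(r, p) for r, d, pts in planes for p in pts])
b = np.array([d for r, d, pts in planes for p in pts])

# Standard linear least-squares solution for the 9 entries of Z.
z, *_ = np.linalg.lstsq(A, b, rcond=None)
Z = z.reshape(3, 3)
```

With more scan points than unknowns the same call returns the least-squares fit rather than an exact solution, which is the situation Equation (7) targets.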
(2) Nonlinear solution. The linear solution is obtained by the least squares method, which minimizes an algebraic distance. A nonlinear minimization method is then used to minimize the differences between the measured Euclidean distances and the calculated distances from the LiDAR scanning points to the calibration plane, which is physically meaningful.
Equation (4) gives two types of distances: d is the distance from the calibration plane to the origin of the {W} frame obtained by a field survey, and W r· R L W −1 ( L P − L T LW ) is the calculated distance.
The difference between these two distances is defined as the distance error for one calibration plane pose. The nonlinear solution aims to minimize the sum of the distance errors over all plane positions. The sum function is defined in Equation (8), where W r i defines the i-th calibration plane, d i is the distance from the i-th plane to the origin of the {W} frame, and L P ij is the j-th LiDAR scanning point on the i-th calibration plane. The pair R L W and L T LW that minimizes Equation (8) is taken as the rotation and translation to be calculated. Equation (8) is minimized as a nonlinear optimization problem; with the resulting rotation and translation, a point in the {W} frame can be converted to a point in the {L} frame.
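The refinement step can be sketched with a minimal Gauss-Newton loop. To keep the example short, the rotation R is held fixed (e.g. at the linear solution) and only the translation T is refined against point-to-plane distance errors; all data are invented:

```python
import numpy as np

# Fixed rotation {W}->{L} (assumed known), initial translation guess,
# one plane normal/distance pair per scan point. Illustrative values.
R = np.eye(3)
T = np.array([0.1, -0.2, 0.05])
planes = [(np.array([1.0, 0.0, 0.0]), 2.0),
          (np.array([0.0, 1.0, 0.0]), 1.5),
          (np.array([0.0, 0.0, 1.0]), 1.0)]
pts = [np.array([2.2, 0.3, 0.1]),
       np.array([0.4, 1.7, -0.2]),
       np.array([0.3, -0.1, 1.1])]

for _ in range(10):
    # Residual per point: r . R^-1 (P - T) - d, the signed distance error.
    res = np.array([r @ R.T @ (p - T) - d
                    for (r, d), p in zip(planes, pts)])
    # Jacobian of each residual with respect to T is -r . R^-1.
    J = np.array([-(r @ R.T) for (r, _) in planes])
    step, *_ = np.linalg.lstsq(J, -res, rcond=None)
    T = T + step
    if np.linalg.norm(step) < 1e-12:
        break
```

Because the residual is linear in T when R is fixed, this toy version converges in one step; refining R jointly (as in the paper) requires parameterizing the rotation and is omitted here.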

Transformation from LiDAR Frame to IMU Frame
The transformation from the LiDAR frame to the IMU frame establishes the geometric relationship between the {L} frame and the {O} frame. The IMU consists of three gyros and three accelerometers: the gyros provide the change of the Euler angles, while the accelerometers give the specific force. By integrating the outputs of the gyros and the accelerometers, we can obtain the translation and rotation between the {W} frame and the {O} frame. Let the rotation matrix be R W O and the translation vector be O T WO ; then a point in the {O} frame can be converted to a point in the {W} frame by:

W P = R W O · O P + O T WO (9)

where W P is the point in the {W} frame and O P is the point in the {O} frame. Finally, by substituting Equation (1) into Equation (9) we have:

L P = R L W ( R W O · O P + O T WO ) + L T LW (10)

Equation (10) is the transformation from the {O} frame to the {L} frame.
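A minimal numerical sketch of this chaining: a point in the IMU/object frame {O} is first mapped into the world frame {W}, then into the LiDAR frame {L}. All rotations and translations below are illustrative placeholders, not calibrated values:

```python
import numpy as np

def rot_z(a):
    """Rotation by angle a about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R_WO, T_WO = rot_z(0.3), np.array([1.0, 2.0, 0.0])    # {O} -> {W}
R_LW, T_LW = rot_z(-0.1), np.array([0.5, 0.0, 0.2])   # {W} -> {L}

P_O = np.array([0.4, -0.3, 1.2])      # a point expressed in {O}
P_W = R_WO @ P_O + T_WO               # into the world frame
P_L = R_LW @ P_W + T_LW               # into the LiDAR frame

# Inverting the chain recovers the original point exactly.
P_O_back = R_WO.T @ (R_LW.T @ (P_L - T_LW) - T_WO)
```

Since the rotation matrices are orthonormal, their transposes serve as inverses, which is what makes the round trip exact.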

LiDAR Measurement Model
The LiDAR sweeps its laser beam through 360°. Therefore, a post, modeled as a vertical line in the tangent frame, will appear as a point in the LiDAR frame. Similarly, a wall, modeled as a vertical plane in the tangent frame, will appear as a line in the LiDAR frame of reference. The LiDAR position and point feature detection are shown in Figure 2.

Consider a mapped vertical post at P 0 . The post will appear as a point in the LiDAR return. The vector from T to P 0 in the world frame is W T WP 0 = [N 0 E 0 0] T , which is known from the survey. The line is denoted as:

W T WP (m) = W T WP 0 + m · e 3 (11)

where W T WP (m) is the vector in the world frame from T to a point P at height m on the post, and m is a scalar with m ∈ (0, +∞). The vector e 3 is parallel to the post. In our application, all the posts and planes are modeled as vertical; for a vertical post, e 3 = [0 0 1] T in the world frame. For any feature point P, the vector from the LiDAR to the point is:

T LP = T WP − T WO − T OL (12)

as shown in Figure 3. Equation (12) is valid in all reference frames. In Equation (12) and Figure 3, T WP is a constant in the world frame known from the feature survey, T WO is calculated in the world frame, and T OL in the object frame is known and can be determined by pre-calibration.


R L O is the rotation matrix from the object frame to the LiDAR frame, so the vector from the LiDAR to a feature point in LiDAR coordinates is given by Equation (13). By substituting Equation (11) into Equation (13) we obtain Equation (14), the vector in LiDAR coordinates from the origin of the LiDAR sensor to a point at height m on the post. Expanding Equation (14), the coordinate of L T LP (m) in the L Z direction is given by Equation (15). Note that for a single-planar LiDAR the scan plane in LiDAR coordinates is L Z = 0. Therefore, for all LiDAR points we have L Z = 0, and we can use this fact to solve Equation (15) for m, yielding Equation (16). By substituting Equation (16) into Equation (14), the detected point is obtained as Equation (17), which is the LiDAR measurement model.
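The scan-plane constraint can be made concrete with a small numerical sketch: the post line is transformed into LiDAR coordinates, the height m is chosen so the z-component vanishes, and the detected point follows. The rotation and translations are invented for illustration, and the world-to-LiDAR map is collapsed into a single rotation R for simplicity:

```python
import numpy as np

# Illustrative setup: R maps world vectors into the LiDAR frame (here a
# rotation about the x-axis), T_WL is the LiDAR origin in the world frame,
# P0 the base of a vertical post, e3 its direction. Values are invented.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.96, -0.28],
              [0.0, 0.28, 0.96]])
T_WL = np.array([0.5, -1.0, 1.0])
P0 = np.array([3.0, 2.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])

# Point at height m on the post, seen from the LiDAR: R (P0 + m*e3 - T_WL).
base = R @ (P0 - T_WL)   # contribution independent of m
axis = R @ e3            # direction of the post in LiDAR coordinates

# Scan-plane constraint L_Z = 0 fixes m (the analogue of Equation (16))...
m = -base[2] / axis[2]
# ...and substituting back gives the detected point (analogue of (17)).
point = base + m * axis
```

The detected point lies exactly in the scan plane (its z-component is zero), which is what makes the post appear as a single point in the LiDAR return.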

IMU Measurement Model
In general, the IMU measures the moving object's rotational motion about three orthogonal axes. The output rates of the IMU are usually not zero even when the IMU is stationary, and the measurements are easily corrupted by noise. At time t, the measured angular velocity can be written as [5,27]:

ω m (t) = ω I (t) + b g + n g (t) (18)

where n g (t) is the noise vector, and b g and ω I (t) are the bias vector and the true angular velocity of the IMU, respectively. In fact, all IMUs are subject to drift, and their measurement biases change slowly with time. Over the very short calibration interval the bias drift may be negligible; however, in order to obtain an accurate estimate of the gyroscope biases, we average several seconds of IMU data at the start of calibration.
At a time t > t 0 , in order to obtain the IMU orientation estimate, we integrate each IMU measurement forward according to the modified Rodrigues parameters kinematic differential equation (Equation (19)). As integration continues, noise and drift cause inaccuracies, so the IMU's true orientation relative to its initial orientation becomes less certain. To propagate the IMU orientation covariance, the matrices F and G are defined (Equation (20)), and using the continuous-time equation the propagated orientation covariance can be written as Equation (21).
According to Equation (21), any feature point in the IMU measurement curve can be selected as a zero uncertainty reference. We choose one of the earliest IMU measurements as the uncertainty computation reference, mainly because our computed deviation estimate is relative to the earliest measurement.
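The bias initialization described above can be sketched as follows: while the IMU is stationary the true angular velocity is zero, so by Equation (18) averaging a few seconds of gyro output estimates the bias b g. Synthetic data stand in for real IMU samples:

```python
import numpy as np

# Synthetic stationary gyro data: a fixed (unknown) bias plus white noise,
# 3 s at 100 Hz. Values are illustrative only.
rng = np.random.default_rng(0)
b_g = np.array([0.002, -0.001, 0.0005])                  # true bias (rad/s)
omega_meas = b_g + 0.01 * rng.standard_normal((300, 3))  # measured rates

# While stationary, omega_I = 0, so the sample mean estimates b_g.
b_hat = omega_meas.mean(axis=0)
omega_corrected = omega_meas - b_hat   # bias-compensated rates
```

The standard error of the mean shrinks as 1/sqrt(N), which is why a few seconds of 100 Hz data already pin the bias down well below the per-sample noise level.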

Time Delay Error Model
We will identify a specific instantaneous local LiDAR frame as {L k } with timestamp t L k according to the receiver clock, for k = 1, . . . , n LiDAR poses. Similarly, for j = 1, . . . , m IMU poses, {I j } will be identified as a specific instantaneous local IMU frame with timestamp t I j . In general, the IMU data are available at a substantially higher rate than the LiDAR data, and m > n.
To compare LiDAR and IMU measurement data, a common representation is required; we use the orientation measurements to match the IMU and LiDAR. The IMU orientation is measured with respect to the initial sensor pose, while the LiDAR orientation is measured relative to a series of calibration planes. Our algorithm employs the modified Rodrigues parameters as a minimal and convenient orientation representation; this vector representation allows us to propagate the IMU orientation uncertainty forward continuously in time [28]. We have the following relationship at any time t I j : where L I ρ is the modified Rodrigues parameter vector of the orientation from the {I} frame to the {L} frame, W L ρ(t + τ) is the modified Rodrigues parameter vector of the orientation from the {L} frame to the {W} frame, τ is the time delay, and W I ρ(t) is the modified Rodrigues parameter vector of the orientation from the {I} frame to the {W} frame.
Although the IMU supplies rate measurements rather than orientation directly, by integrating the IMU gyroscope data we can obtain the IMU-measured orientation change over a period of time. Similarly, we can conveniently compute the LiDAR-measured orientation relative to the {W} frame. The measurement relationship between the {I}, {L} and {W} frames can be computed as: where I 0 W ρ is the modified Rodrigues parameter vector of the orientation from the {I 0 } frame to the {W} frame.
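For reference, a modified Rodrigues parameter vector encodes a rotation of angle θ about a unit axis e as ρ = e·tan(θ/4). A minimal sketch of this representation (the helper names are our own, not from the paper):

```python
import numpy as np

def mrp_from_axis_angle(axis, theta):
    """MRP vector rho = axis * tan(theta/4) for a rotation of theta."""
    axis = np.asarray(axis, dtype=float)
    return axis / np.linalg.norm(axis) * np.tan(theta / 4.0)

def angle_from_mrp(rho):
    """Recover the rotation angle from an MRP vector."""
    return 4.0 * np.arctan(np.linalg.norm(rho))

rho = mrp_from_axis_angle([0.0, 0.0, 1.0], np.pi / 3)  # 60 deg about z
theta = angle_from_mrp(rho)
```

The tan(θ/4) scaling is what makes the MRP a minimal three-parameter representation that stays small for moderate rotations, which suits the continuous covariance propagation used above.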
(1) Position error analysis: the position error can be calculated as follows: (2) Velocity error analysis: the velocity error is modeled as follows: where g W is the local gravity vector at the moving object's location represented in the {W} frame, W ω IL is the angular rate of the {W} frame origin relative to the {I} frame, represented in the {L} frame, γ g is the white Gaussian measurement noise, and b g is the gyro bias, which is modeled as a random constant plus random noise.
(3) Attitude error analysis: the attitude error is given by: where δ( W ω IW ) can be calculated as: (4) Time delay calibration parameter error model: the measurement noise vector, the time delay parameter vector and the process noise vector of the LiDAR-IMU system can be expressed as: where x, b and γ are the time delay error state vector, the process noise vector and the measurement noise vector, respectively.
According to Equations (25)-(28), the time delay calibration parameter error model of the LiDAR-IMU system can be computed as:

Time Delay Calibration Using the ICP-ISPKF Integration Method
The ICP algorithm can be used to register the spatial data from IMU and LiDAR measurements: it aligns two curves in a 3-D orientation space, one generated from integrated IMU gyroscope data and one from LiDAR scanner data. Each point on each curve has a corresponding timestamp identifying when the measurement arrived at the receiver. By registering the orientation curves, we can use the timestamp values to estimate the relative delay between the IMU and LiDAR data streams. ICP incorporates, in a principled way, the uncertainty in the LiDAR orientation measurements and accounts for the fact that the integrated IMU orientation becomes less accurate over longer intervals due to accumulated noise. The ISPKF complements this: its measurement update step is iterated until the change in the posterior state estimate drops below a small threshold, so it can optimally estimate the time delay calibration parameters. The ISPKF achieves superior efficiency and accuracy compared with the KF, EKF and UKF; Sibley et al. [29] compared the ISPKF with the KF, EKF, UKF and Gauss-Newton filter, and demonstrated the ISPKF's ability to reject IMU measurement noise. Therefore, to solve the problem of LiDAR-IMU time delay calibration, we use a fusion method based on the ICP and ISPKF algorithms.
As shown in Figure 4, the LiDAR-IMU time delay calibration integrates the ICP and ISPKF methods. First, once ICP succeeds in matching the LiDAR measurement points, the LiDAR-IMU state estimates are updated with the precise registration value. Second, at the time delay filter estimation step, the predicted pose is used to find the corresponding points between the LiDAR and IMU measurements. Third, at each time step, the recursive algorithm updates the related covariance estimates and the precise registration value for the LiDAR-IMU system. The ICP convergence information and the fault-detection logic are used to adaptively adjust the ISPKF's estimate of the motion state and a set of related parameters. In this closed-loop ICP-ISPKF architecture, the ICP initial guess and fault detection provide robust LiDAR-IMU pose tracking and automatic fault recovery. Finally, the pose prediction information is used to align the time delay error for ICP [6,30].
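The closed-loop cycle of Figure 4 can be summarized in a toy skeleton. Every function below is a deliberately simplified scalar placeholder standing in for the corresponding block (prediction, registration, fault detection, update); none of it is the paper's actual implementation:

```python
# Toy stand-ins for the blocks of the closed-loop ICP-ISPKF architecture.
def ispkf_predict(state, imu_sample):   # time update driven by IMU data
    return state + imu_sample

def icp_register(scan, predicted):      # ICP alignment residual
    return scan - predicted

def icp_converged(residual, tol=1.0):   # fault-detection logic
    return abs(residual) < tol

def ispkf_update(predicted, residual):  # iterated measurement update
    return predicted + 0.5 * residual

def calibrate(scans, imu, state=0.0):
    for scan, omega in zip(scans, imu):
        predicted = ispkf_predict(state, omega)
        residual = icp_register(scan, predicted)
        # ICP convergence gates the filter update; on failure the loop
        # keeps the prediction (automatic fault recovery).
        state = (ispkf_update(predicted, residual)
                 if icp_converged(residual) else predicted)
    return state

final = calibrate([1.0, 2.1, 3.0], [1.0, 1.0, 1.0])
```

The point of the structure, as in the paper, is that registration quality decides whether the filter trusts the new measurement or falls back on its prediction.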

The ICP Algorithm for Estimating the Time Delay and Relative Orientation of LiDAR-IMU
The ICP algorithm is utilized to estimate the LiDAR-IMU time delay and relative orientation. At the beginning, the transforms between the LiDAR-IMU orientation curves are computed by iteratively selecting n corresponding points with the ICP algorithm; the time scale of the correspondence search is tightened as the orientation curves converge. The ICP algorithm can be described in two steps as follows [5,31]: Step I: Registration Rules.
The ICP algorithm operates by iteratively selecting the closest point on the IMU orientation curve for each LiDAR measurement point; this notion of proximity requires a suitable distance measure.
For a LiDAR measurement point P W L i , the distance to a nearby IMU point P W I j is determined by applying the current spatial transformation model and calculating the incremental modified Rodrigues parameter vector from P W L i to P W I j , giving Equation (30). According to Equation (30), the closest point is the IMU measurement point that minimizes the distance function, which can be computed as Equation (31), where d ij is the incremental rotation arc length from ρ W L i to ρ I 0 I j in radians. The search range for each LiDAR scanning point within the IMU measurement curve is constrained by the maximum expected time delay, and can be expressed as [t L i − τ − δt, t L i − τ + δt], where ±δt bounds the search window. As the algorithm converges, δt can be reduced on each iteration.
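Step I can be sketched as a nearest-neighbour search restricted to the time window [t L − τ − δt, t L − τ + δt]. For simplicity, the arc-length distance of Equation (31) is replaced here by a plain Euclidean distance on toy MRP vectors, and all data are synthetic:

```python
import numpy as np

# Synthetic IMU stream: 100 Hz timestamps with a smooth toy MRP curve.
imu_t = np.linspace(0.0, 1.0, 101)
imu_rho = np.column_stack([np.sin(imu_t), np.cos(imu_t), imu_t])

def closest_imu_point(t_L, rho_L, tau, dt):
    """Index of the closest IMU sample within the delay-bounded window."""
    mask = np.abs(imu_t - (t_L - tau)) <= dt      # time-window constraint
    idx = np.flatnonzero(mask)
    # Stand-in for the arc-length distance of Equation (31).
    d = np.linalg.norm(imu_rho[idx] - rho_L, axis=1)
    return idx[np.argmin(d)]

j = closest_imu_point(t_L=0.50,
                      rho_L=np.array([np.sin(0.46), np.cos(0.46), 0.46]),
                      tau=0.03, dt=0.05)
```

Shrinking dt on each iteration, as described above, narrows the candidate set and speeds up the search once the delay estimate stabilizes.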
Step II: Nonlinear Iterative Registration.
By selecting the closest IMU point for each LiDAR point, we obtain n correspondences between the IMU and LiDAR orientation measurement curves. The generalized nonlinear total least squares method is used to align the two curves by minimizing the cost function of Equation (32), where P L k and P I f (k) are the associated covariance matrices, s and t are the stacked residual vectors, and the transform parameters I 0 W ρ and L I ρ are computed from Equation (33). We remark that the W L i ρ are actually estimated quantities, but we treat them as observations here. Accounting for the uncertainty in the LiDAR and IMU measurements, the residuals are subject to n model constraints, expressed as Equation (34). By using Lagrange multipliers to incorporate the constraints, then differentiating and rearranging, Equation (33) subject to the n constraints derived from Equation (34) yields Equation (35), where ∆ I 0 W ρ and ∆ L I ρ are the incremental updates to I 0 W ρ and L I ρ for the present iteration, ∆y is a stacked vector, and H k represents the Jacobian matrix with respect to the observation W L k ρ (Equation (36)). According to Equation (36), the ICP algorithm requires an initial estimate of the alignment between the LiDAR and IMU data; we use simple manual measurements to estimate the relative orientation between the LiDAR and IMU.

Iterated Sigma Point Kalman Filter (ISPKF) Algorithm for Compensating the Calibration Parameters
After estimating the relative orientation and optimal time delay of the LiDAR-IMU system, the ISPKF algorithm is used to compensate the time delay calibration parameters between the LiDAR and IMU measurements.
The ISPKF algorithm uses the process noise component to augment the state vector and the state covariance matrix, as expressed in: where n(t k ) is the measurement process noise vector and X a (t k ) is the augmented state vector. At time t k−1 , shortly after the LiDAR and IMU measurement update, the augmented state covariance matrix P + a (t k−1 ) and the augmented state mean X + a (t k−1 ) can be formed as: where the state vector X + a (t k−1 ) has size m = 26 + 3n when n calibration points are included, for the Cartesian and inverse calibration point parameterizations, respectively, and Q c (t k−1 ) is the continuous-time covariance matrix of the noise vector n(t k−1 ). The scaled form of the unscented transform is employed, which requires a scaling term: The α parameter controls the spread of the sigma points about the mean of the state; we usually set α to a small positive value. The β parameter corrects the higher-order terms in the Taylor series expansion of the a posteriori state distribution; we set β = 2, which minimizes the fourth-order error for a jointly Gaussian distribution.
We use the augmented state vector X + a (t k−1 ) to generate a set of sigma points according to the following equation: where P + a (t k−1 ) is the augmented state covariance matrix and (S(·)) j denotes the j-th column of the matrix S. The weights of the associated sigma points can be described as: Each sigma point is propagated through the augmented nonlinear process model function f a . Over the time interval t ∈ [t k−1 , t k ), the a priori state estimate and covariance at t k can be computed as: By propagating each sigma point through the nonlinear measurement model function h, we can determine the predicted measurement vector as follows: By computing the Kalman gain matrix, the a posteriori state vector and the state covariance matrix, we can perform the state update as follows: where R k is the measurement covariance matrix for Z k , while PẐ kẐk and PX kẐk are the predicted measurement covariance matrix and the state-measurement cross-covariance matrix, respectively.
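The sigma-point generation and weighting steps above follow the standard scaled unscented transform. A generic low-dimensional sketch (not the paper's 26 + 3n-dimensional augmented state) is:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """2n+1 scaled-UT sigma points with mean and covariance weights."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n          # scaling term lambda
    S = np.linalg.cholesky((n + lam) * P)     # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])    # mean +/- scaled columns
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)   # beta = 2 for Gaussians
    return pts, wm, wc

x = np.array([1.0, 2.0])
P = np.diag([0.1, 0.2])
pts, wm, wc = sigma_points(x, P)
```

By construction the weighted sigma points reproduce the mean and covariance exactly, which is the property the ISPKF relies on when pushing them through the nonlinear process and measurement models.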

Experiments and Discussion
To verify the effectiveness of the proposed algorithm, a field experiment was performed on the fourth floor of the School of Mechanical and Electrical Engineering building on the campus of China University of Mining and Technology. The experimental layout is shown in Figure 5. The experiments were conducted with a VLP-16 LiDAR (Velodyne, Morgan Hill, CA, USA) and a Spatial FOG IMU (Advanced Navigation, Sydney, Australia). The specifications of the IMU and LiDAR are listed in Table 1. The LiDAR consists of 16 laser scanners that collectively span a 27° vertical field of view. In each experiment, we first initialize the IMU biases by keeping the IMU stationary for 3 s. We then begin recording data from the LiDAR and the IMU while the moving object carrying the LiDAR-IMU platform automatically moves through different configurations in front of the calibration plane. The IMU data are recorded at 100 Hz, while the LiDAR scanning rate is 100,000 points/s. We chose a set of three different trajectories for the LiDAR-IMU platform, and for each trajectory we experimented with a different time shift and a different initial orientation estimate. In all trajectories, the true initial relative orientation of the IMU with respect to the LiDAR was the same. In the experiments, the moving object ran three different trials, with the LiDAR-IMU platform translating and rotating in front of the calibration plane for approximately 5 s; the platform then ran in three different orientation and time delay configurations. For configuration 1, we set the LiDAR-IMU relative orientation to a nominal value (a roll of 90°, a pitch of 0°, and a yaw of 90°) and fixed the estimated time delay at zero. For configuration 2, we used the mean LiDAR-IMU relative orientation computed by averaging the roll, pitch and yaw values, while again fixing the time delay at zero.
Finally, for configuration 3, we used the averaged LiDAR-IMU relative orientation and the mean time delay value from configuration 2, and we determined the performance of each configuration by computing the RMS error.
Once the LiDAR and IMU measurements for each configuration of the calibration plane were available, we used the method described in Section 4 to accurately estimate the LIDAR-IMU time delay calibration parameters and the LIDAR-IMU transformation parameters.
The experimental results for the LiDAR-IMU time delay calibration are shown in Figure 6. Figure 6a,d,g shows the initial alignment between the IMU and LiDAR in the East, North and Up orientations, respectively. Figure 6b,e,h shows the alignment after one ICP-ISPKF iteration between the IMU and LiDAR in the East, North and Up orientations, respectively. Figure 6c,f,i shows the alignment after ten ICP-ISPKF iterations between the IMU and LiDAR in the East, North and Up orientations, respectively. We examined the error in the LiDAR-IMU time delay calibration over ten ICP-ISPKF iterations, as shown in Table 2.

Figure 6. Time delay calibration using ICP-ISPKF for LiDAR-IMU.
(a) Initial alignment between LiDAR (red dots) and IMU(black line) in East orientation; (b) time delay calibration one time using ICP-ISPKF between LiDAR (magenta dots) and IMU(black line) in East orientation; (c) time delay calibration ten times using ICP-ISPKF between LiDAR (blue dots) and IMU(black line) in East orientation; (d) initial alignment between LiDAR (red dots) and IMU(black line) in North orientation; (e) time delay calibration one time using ICP-ISPKF between LiDAR (magenta dots) and IMU(black line) in North orientation; (f) time delay calibration ten times using ICP-ISPKF between LiDAR (blue dots) and IMU(black line) in North orientation; (g) initial alignment between LiDAR (red dots) and IMU(black line) in Up orientation; (h) time delay calibration one time using ICP-ISPKF between LiDAR (magenta dots) and IMU(black line) in Up orientation; (i) time delay calibration ten times using ICP-ISPKF between LiDAR (blue dots) and IMU(black line) in Up orientation.
According to Table 2 and Figure 6, the initial time delay alignment error is 9.58 ms, and the LiDAR-IMU alignment errors in the East, North and Up orientations are 0.093 m, 0.168 m and 0.089 m, respectively. After one calibration using ICP-ISPKF converged, the time delay alignment error is reduced to 4.67 ms, and the alignment errors in the East, North and Up orientations are reduced to 0.067 m, 0.097 m and 0.063 m, respectively. After ten calibrations using ICP-ISPKF converged, the final time delay alignment error is reduced to 0.50 ms, and the final alignment errors in the East, North and Up orientations are reduced to 0.018 m, 0.019 m and 0.017 m, respectively.
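The iterative reduction of the time delay alignment error described above can be illustrated with a deliberately simplified sketch. Instead of the full ICP-ISPKF pipeline, the snippet below aligns time-shifted LiDAR positions to an interpolated IMU trajectory with a one-dimensional grid search over candidate delays; the function name, the sine trajectory, and the grid resolution are illustrative assumptions, not the paper's method.

```python
import numpy as np

def estimate_time_delay(t_imu, p_imu, t_lidar, p_lidar, candidates):
    """Grid-search a time delay that best aligns delayed LiDAR positions
    with the IMU trajectory (1-D positions, linear interpolation)."""
    best_td, best_err = None, np.inf
    for td in candidates:
        # shift the LiDAR timestamps by the candidate delay and compare
        # against the IMU trajectory sampled at the shifted times
        p_interp = np.interp(t_lidar + td, t_imu, p_imu)
        err = np.mean((p_interp - p_lidar) ** 2)
        if err < best_err:
            best_td, best_err = td, err
    return best_td

# synthetic example: IMU samples a sine trajectory at 1 kHz;
# the LiDAR timestamps lag the true trajectory by 9.58 ms
t_imu = np.linspace(0.0, 2.0, 2001)
p_imu = np.sin(2 * np.pi * t_imu)
true_delay = 0.00958
t_lidar = np.linspace(0.1, 1.9, 181)
p_lidar = np.sin(2 * np.pi * (t_lidar + true_delay))
candidates = np.arange(0.0, 0.02, 0.0005)
td = estimate_time_delay(t_imu, p_imu, t_lidar, p_lidar, candidates)
```

The recovered delay is only as fine as the candidate grid; the filter-based estimator in the paper refines the estimate continuously rather than by exhaustive search.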
In order to demonstrate the efficiency and accuracy of the ISPKF method, we used the ISPKF, KF and EKF methods to estimate the LiDAR-IMU time delay calibration errors. Since obtaining the true initial time delay value is very difficult, we assumed the initial time delay alignment errors between the LiDAR and IMU to be zero. Once the LiDAR and IMU measurements for each configuration of the calibration plane were available, the ICP-KF, ICP-EKF and ICP-ISPKF methods were used to perform the time delay calibration for LiDAR-IMU.
The experimental results are shown in Figure 7, where the red line is the time delay alignment error using the ICP-KF method, the blue line is the result using the ICP-EKF method, and the green line is the result using the ICP-ISPKF method. After one calibration, the mean alignment errors using the ICP-KF, ICP-EKF and ICP-ISPKF methods are 0.233 m, 0.151 m and 0.067 m, respectively.
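The advantage of the iterated sigma-point update over a single EKF/UKF update comes from re-linearizing the measurement model around the current posterior iterate, Gauss-Newton style. The scalar sketch below illustrates that idea for a hypothetical quadratic measurement model; it is a minimal illustration of iterated statistical linearization, not the authors' full ISPKF, and all names and numbers are assumptions.

```python
import math

def ispkf_update_1d(m0, P0, z, h, R, n_iter=5, kappa=2.0):
    """Scalar iterated sigma-point measurement update (sketch).

    Each iteration re-draws sigma points around the current iterate m,
    statistically linearizes h() there, and applies a Gauss-Newton style
    correction relative to the prior mean m0 and prior variance P0."""
    m = m0
    for _ in range(n_iter):
        s = math.sqrt((1.0 + kappa) * P0)
        pts = [m, m + s, m - s]
        w = [kappa / (1.0 + kappa), 0.5 / (1.0 + kappa), 0.5 / (1.0 + kappa)]
        zs = [h(p) for p in pts]
        z_hat = sum(wi * zi for wi, zi in zip(w, zs))
        # cross-covariance gives the statistically linearized slope H
        C = sum(wi * (pi - m) * (zi - z_hat) for wi, pi, zi in zip(w, pts, zs))
        H = C / P0
        S = H * P0 * H + R
        K = P0 * H / S
        # relinearize at m (not m0): iterated, Gauss-Newton style update
        m = m0 + K * (z - z_hat - H * (m0 - m))
    P = (1.0 - K * H) * P0
    return m, P

# hypothetical quadratic measurement h(x) = x^2, noisy observation z = 4.1
m, P = ispkf_update_1d(m0=1.5, P0=0.1, z=4.1, h=lambda x: x * x, R=0.01)
```

With a single (non-iterated) update the estimate overshoots; repeating the linearization drives the iterate close to the value whose predicted measurement matches z, which mirrors why ICP-ISPKF outperforms ICP-KF and ICP-EKF on the strongly nonlinear delay model.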

Conclusions
In order to solve the problem of LiDAR-IMU time delay calibration, we have presented a fusion method based on an iterative closest point algorithm and an iterated sigma point Kalman filter, which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between the LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. We realized the coordinate transformation from the LiDAR frame to the IMU frame, and established the measurement model and time delay error model of the LiDAR and IMU. We also presented the methodology of the ICP and ISPKF procedure for LiDAR-IMU time delay calibration. Intensive experimental studies were conducted to verify the theoretical results. The results show that after ten calibrations using ICP-ISPKF converged, the time delay alignment error was reduced from 9.58 ms to 0.50 ms, and the alignment errors in the East, North and Up orientations were reduced from 0.093 m, 0.168 m and 0.089 m to 0.018 m, 0.019 m and 0.017 m, respectively. In future work, the ICP-ISPKF algorithm will be applied to calibrate the time delay error in other multi-sensor systems. Within the batch execution framework, different methods will be examined for incorporating varying time delays. We are convinced that the proposed method will be useful for other types of time delay calibration for multiple sensors.