Performance Index for Extrinsic Calibration of LiDAR and Motion Sensor for Mapping and Localization

Light Detection and Ranging (LiDAR) is a sensor that uses laser light to capture the surrounding environment as three-dimensional information. Thanks to the development of LiDAR, LiDAR-based applications are actively used in autonomous vehicles. In order to use the information coming from LiDAR effectively, extrinsic calibration, which finds the translation and rotation relationship between the LiDAR coordinate system and the vehicle coordinate system, is essential. Therefore, many studies on LiDAR extrinsic calibration are steadily in progress. The performance index (PI) of a calibration parameter is a value that quantitatively indicates whether the obtained calibration parameter is close to the true value. In order to use an obtained calibration parameter effectively, it is important to validate the parameter through the PI. Therefore, in this paper, we propose an algorithm to obtain the performance index for the calibration parameter between LiDAR and the motion sensor. This performance index is experimentally verified in various environments by Monte Carlo simulation and validated using CarMaker simulation data and real data. As a result of verification, the PI of the calibration parameter obtained through the proposed algorithm has its smallest value when the calibration parameter equals the true value and increases as error is added to the true value. In other words, it has been shown that the PI is convex with respect to the calibration parameter. In addition, it can be confirmed that the PI obtained using the proposed algorithm provides information on the effect of the calibration parameters on mapping and localization.


Introduction
Autonomous vehicles combine information from various sensors, such as light detection and ranging (LiDAR), cameras, Global Navigation Satellite System (GNSS), and Inertial Navigation System (INS), with vehicle data to recognize the surrounding environment and determine the current vehicle status. In this case, each sensor has its own unique coordinate system. Therefore, in order to effectively integrate and use the sensor values measured based on each sensor coordinate system, it is necessary to unify each sensor coordinate system into one reference coordinate system. In order to unify the reference coordinates, it is essential to find a transformation relationship between the respective sensor coordinates and the reference coordinates. The process of obtaining such a transformation, which includes rotation and translation relationship between coordinate systems, is called extrinsic parameter calibration.
Extrinsic calibration between LiDAR and the motion sensor involves finding the transformation relationship between the two sensors. Here, the motion sensor refers either to an integrated sensor obtained by applying a certain methodology [1] to in-vehicle sensors, such as yaw-rate or wheel speed sensors attached at several locations, or to an inertial measurement unit (IMU) or INS sensor itself, which includes all information that can indicate vehicle movement. Accurate calibration is essential, as its result has a significant influence on the performance of LiDAR-based autonomous driving applications, such as mapping [2], localization [3,4], and object detection [5]. Therefore, research on extrinsic calibration between LiDAR and the motion sensor is being actively conducted.
Conventional LiDAR and motion sensor calibration methods include direct calibration, which derives parameters directly using surrounding structures or a calibration room, and indirect calibration, which derives parameters indirectly through an algorithm. Deriving accurate parameters through direct calibration requires substantial time or human resources. Therefore, the algorithm-based indirect calibration method is widely used. However, as this value is derived by an algorithm, it is not always close to the actual value. Therefore, in order to verify the parameters derived from indirect calibration effectively, it is necessary to indicate quantitatively whether they are close to the true value, and this quantitative value is called the performance index (PI) of the extrinsic calibration parameter.
One widely used method for the PI is to analyze the root mean square error (RMSE) with respect to a ground-truth value. In this case, the value set when attaching the sensor, or a previously measured direct calibration value, is assumed to be the ground truth. RMSE represents the degree of error directly and intuitively. In real situations, however, as the true parameters are likely to change over time, it is very difficult to always obtain an accurate ground-truth value. Another method is to use the standard deviation, repeatability, and convergence as performance indicators after running indirect calibration several times. Although this method reveals the characteristics of the calibration parameters, it has the disadvantage of not detecting biased results. The last method evaluates the calibration parameters by comparing the smoothness of the generated map. However, this method only provides a relative PI for the calibration parameter, not an intuitive and absolute one. In addition, none of these three methods provides information on how much the derived calibration parameters affect mapping and localization.
Therefore, in order to overcome the problems of the existing performance indices, we propose an algorithm to obtain a performance index for the 6-degree-of-freedom (DoF) extrinsic calibration parameter between LiDAR and the motion sensor. The performance index obtained through this algorithm is an unbiased, absolute performance index that can be derived easily without ground truth, and it makes it possible to figure out how much influence the derived calibration parameter has on mapping and localization. The main contributions of this paper are as follows:
• It presents a novel algorithm to obtain a performance index for the 6-DoF extrinsic calibration parameter between LiDAR and the motion sensor based on localization and mapping. It can determine how much the parameter influences the localization performance as well as the goodness of fit of the extrinsic calibration results.
• The algorithm was experimentally verified through Monte Carlo simulation as to whether it is effective in various environments.
• In addition, more accurate calibration parameters can be derived by applying global optimization using the obtained PI as a cost function, which is briefly introduced in Appendix A.
The remainder of this paper is organized as follows: Section 2 briefly reviews previous studies. In Section 3, we introduce the proposed algorithm to obtain the PI of the calibration parameter. We verify the validity of the proposed algorithm in various environments through Monte Carlo simulation in Section 4. Then, we show the experimental results using simulation and real data in Section 5, and, finally, the conclusion is described in Section 6.

Approaches for Extrinsic Parameter Calibration
The traditional methods for extrinsic parameter calibration between sensors can be largely divided into direct calibration and indirect calibration. Direct calibration measures distances and angles manually, either by placing landmarks at control points or by using known spatial maps, such as a calibration room [6][7][8]. These methods, however, consume both time and expert labor. To address these issues, indirect calibration approaches based on the motion data of each sensor have been studied in recent decades. They are generic approaches because they exploit relative pose information regardless of the type of sensor. There are various ways to find a solution for the calibration parameter.
Hand-eye calibration finds the relationship between the sensor on a robot's end-effector and the base frame of the robot [9][10][11]. These methods solve a homogeneous matrix equation of the form "AX = XB", where "A" is related to the relative pose of a reference motion sensor and "B" is related to the relative pose of a target sensor. In this problem, it is important to obtain each sensor's trajectory from measurements, because the sensors cannot provide their relative pose information directly.
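The "AX = XB" relationship can be illustrated with a small numerical sketch (a toy NumPy example; the transform values are arbitrary and not from any dataset): for any relative motion B of the target sensor, the reference sensor's relative motion A satisfies A = X B X⁻¹, so the unknown extrinsic X links the two motion sequences.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous 4x4 transform: rotation about z by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

# Hypothetical ground-truth extrinsic X (target sensor in the reference frame).
X = rot_z(0.3)
X[:3, 3] = [1.2, -0.4, 0.1]

# For a relative motion B of the target sensor, the reference sensor's
# relative motion satisfies A = X B X^-1, i.e. A X = X B.
B = rot_z(0.15)
B[:3, 3] = [2.0, 0.5, 0.0]
A = X @ B @ np.linalg.inv(X)

assert np.allclose(A @ X, X @ B)
```

Hand-eye solvers invert this construction: given many (A, B) pairs from the two trajectories, they recover the X that satisfies all equations simultaneously.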
Reference [12] estimates calibration parameters for LiDARs without any specific and known environmental features based on a fully unsupervised approach. It does not require a specific calibration target. With contiguous surface assumption, it finds globally consistent parameters by optimizing an energy function that evaluates whether points are far away from surfaces.
The graph-optimization-based approach was also proposed to consider properties for sensors mounted in an autonomous vehicle [13]. By applying additional constraints, it improved robustness and reliability.

Approaches for Evaluation of Extrinsic Calibration Parameter
The extrinsic calibration parameters obtained through the various methods introduced above are evaluated, with some evaluation indices, in terms of how appropriate they are. There are two main ways to evaluate them: with and without ground-truth data. Ground truth can be found by applying various calibration methods and further optimization techniques. For example, the KITTI dataset provides ground-truth extrinsic calibration parameters between LiDAR and IMU that are calculated based on hand-eye calibration [7]. The Lyft Level 5 dataset conducted manual calibration with its own calibration room [8]. In addition, ref. [14] applied a method of measuring parameters using CAD drawings.
Several studies evaluate their calibration parameters by the errors between the estimated values and the ground truth given by either the KITTI dataset or their own dataset [7,13,15]. With ground truth, the mean and standard deviation of the errors can represent the reference best fitness of the system. However, these methods cannot be used for online calibration problems, in which the parameters can vary over time or with events, and they are inefficient because they require a lot of time and cost whenever the parameters change.
For real data, unlike simulation, it is difficult to obtain ground truth. In this case, the standard deviation of the parameters, their convergence, or their repeatability can be used to evaluate how robust and stable the proposed system is [13,[16][17][18]. The mean of the parameters is useful for figuring out the precision of the system but cannot evaluate accuracy in the presence of biased results. A performance index based on map smoothness, which is related to mapping performance, has also been applied to evaluate parameters [13,[19][20][21]. Map smoothness is a metric that indicates how blurry the map generated through the estimated calibration parameter will be. It has the advantage of showing how well the estimated parameters fit the mapping application. However, it is inappropriate as an absolute performance index because it is a relative indicator that cannot provide a standard value for goodness of fit.

Limitations of Previous Methods
The requirements to cover the limitations of the existing methods are summarized as follows:
• It should not require obtaining the ground-truth parameters, which require much time and cost.
• It should have an unbiased value.
• It should have an intuitive and absolute value, not a relative indicator.
The proposed index overcomes all of these limitations, and if it is used as a cost function to be minimized, it can also be used to find the globally optimal calibration parameter through global optimization techniques. In addition, unlike other indices, the proposed performance index can show how much influence the calibration parameter has on mapping and localization performance. The next section introduces how to obtain this evaluation index.

Calibration Performance Index of LiDAR and Motion Sensor
This section introduces the algorithm used to obtain the performance index of the calibration parameter between LiDAR and the motion sensor. It is based on two properties related to map smoothness, introduced in the previous studies. The first property is that the mapping result is accurate and has little blurring when point cloud mapping uses a good calibration parameter, whereas the mapping result using a bad calibration parameter is inaccurate and has large blurring. The second property is that the matching result is good when matching an accurate map with a point cloud, whereas matching is inaccurate when matching an inaccurate, blurred map with a point cloud. To evaluate a calibration parameter using these properties, the algorithm to obtain the proposed PI consists of two steps. First, the point cloud map (PCM) is generated using the motion data obtained from the motion sensor, the point cloud obtained from LiDAR, and the calibration parameter under evaluation. Second, the matching error between the created PCM and the input point clouds used for mapping is obtained. The obtained distance and rotation matching errors are used as the performance indices PI dist and PI rot. This algorithm is shown in Figure 1 and is described in detail in Sections 3.1 and 3.2.
Before the detailed algorithm, we define the notation used in this algorithm and describe the input data. Three coordinates are used in this algorithm: the world coordinate, the motion sensor coordinate, and the LiDAR coordinate. The relationship between the coordinate systems is as follows and is shown in Figure 2. T world lidar i is the transformation matrix consisting of LiDAR's 6-DoF pose at time i based on the world coordinate. R world lidar i ∈ SO(3) is a rotation matrix that consists of LiDAR's orientation at time i, and t world lidar i ∈ R 3 is a translation vector that consists of LiDAR's position at time i.
T world motion i is the transformation matrix consisting of the motion sensor's 6-DoF pose at time i based on the world coordinate. R world motion i ∈ SO(3) is a rotation matrix that consists of the motion sensor's orientation at time i, and t world motion i ∈ R 3 is a translation vector that consists of the motion sensor's position at time i. T motion lidar is the calibration transformation matrix consisting of the 6-DoF calibration parameter from the motion sensor to LiDAR. The calibration parameter is a (6 × 1) vector consisting of the 3-axis translation distances (x motion lidar, y motion lidar, z motion lidar) and the rotation angles, represented by ZYX Euler angles (roll motion lidar, pitch motion lidar, yaw motion lidar), of the LiDAR with respect to the motion sensor. R motion lidar ∈ SO(3) and t motion lidar ∈ R 3 can be created from the calibration parameter, and they form T motion lidar.
The data used for point cloud mapping and for evaluation using the matching error are as follows. p lidar i is the point cloud on the LiDAR coordinate obtained from the LiDAR at time i. If it consists of n points, it is a matrix of size (n × 3). gyro motion i and vel motion i are the angular velocity and linear velocity on the motion sensor coordinate obtained from the motion sensor at time i; they contain the 3-axis angular velocities and linear velocities. Finally, calib param is a (6 × 1) vector including the 3-axis translation distances and rotation angles between LiDAR and the motion sensor, and it is the variable whose performance is evaluated. A point cloud map is created using p lidar i, gyro motion i, vel motion i, and calib param given as inputs. Next, the matching error is evaluated using this generated map together with the p lidar i, gyro motion i, vel motion i, and calib param used to create the point cloud map. This process is described in detail below.

Mapping of Point Cloud Map (PCM)
This step receives p lidar i, gyro motion i, vel motion i, and calib param as inputs and outputs the PCM on the world coordinate. In general, the motion sensor runs at a higher frequency than the LiDAR. Therefore, when collecting data from the fast motion sensor, it is assumed that the slower-frequency LiDAR data are collected at the same time at certain points in time.

Generation of Motion Sensor's Pose Using Input Motion Data
By applying the input motion data, such as gyro motion i and vel motion i, to dead reckoning or another vehicle model, the 6-DoF motion sensor's poses are obtained. The homogeneous transformation matrix T world motion i is obtained using these poses.
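As an illustration, a minimal planar dead-reckoning sketch (assuming NumPy, a constant sampling period, and only the z-axis gyro and forward velocity; a full 6-DoF vehicle model would integrate all axes):

```python
import numpy as np

def dead_reckoning(gyro_z, vel_x, dt):
    """Integrate yaw rate [rad/s] and forward speed [m/s] into 2-D poses.
    Returns a list of 4x4 homogeneous transforms T world motion i.
    A planar model is assumed for illustration only."""
    x = y = yaw = 0.0
    poses = []
    for w, v in zip(gyro_z, vel_x):
        yaw += w * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        T = np.eye(4)
        c, s = np.cos(yaw), np.sin(yaw)
        T[:2, :2] = [[c, -s], [s, c]]
        T[:3, 3] = [x, y, 0.0]
        poses.append(T)
    return poses
```

For example, ten samples of 1 m/s forward speed at dt = 0.1 s with zero yaw rate produce a pose 1 m ahead of the origin.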

Point Cloud Conversion
To create a point cloud map, p lidar i, given as input on the LiDAR coordinate, is converted into a point cloud on the world coordinate. At time i, the point cloud p lidar i can be converted to p world i, the point cloud on the world coordinate, using Equations (1) and (2). To convert the point cloud using a transformation matrix, p lidar i is converted into p lidar_hom i = [p lidar i | 1], which is in the homogeneous coordinate.
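This conversion can be sketched as follows (a minimal NumPy example; the function name is illustrative):

```python
import numpy as np

def lidar_to_world(p_lidar, T_world_motion, T_motion_lidar):
    """Convert an (n x 3) LiDAR point cloud into world coordinates.
    T world lidar i = T world motion i @ T motion lidar, as in Eq. (1),
    then the homogeneous points [p | 1] are transformed, as in Eq. (2)."""
    T_world_lidar = T_world_motion @ T_motion_lidar
    n = p_lidar.shape[0]
    p_hom = np.hstack([p_lidar, np.ones((n, 1))])  # (n x 4) homogeneous
    return (T_world_lidar @ p_hom.T).T[:, :3]
```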

Generation of Point Cloud Map by Accumulation
By accumulating all the point clouds on the world coordinate, a point cloud map on the world coordinate can be obtained. If the map is created using the correct calibration parameter between the motion sensor and the LiDAR, it will be an accurate map with little blurring. On the other hand, if the map is created using a wrong calibration parameter, it will be blurred and inaccurate. The mapping results according to the calibration parameter are shown in Figure 3, and the algorithm is described in Algorithm 1.
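The accumulation loop can be sketched as follows (a NumPy sketch; per-scan clouds and motion poses are assumed to be given as lists):

```python
import numpy as np

def build_pcm(clouds, motion_poses, T_motion_lidar):
    """Accumulate per-scan point clouds into one point cloud map (PCM).
    clouds: list of (n_i x 3) arrays in LiDAR coordinates;
    motion_poses: list of 4x4 transforms T world motion i."""
    pcm = []
    for p_lidar, T_wm in zip(clouds, motion_poses):
        T_wl = T_wm @ T_motion_lidar                      # Eq. (1)
        p_hom = np.hstack([p_lidar, np.ones((len(p_lidar), 1))])
        p_world = (T_wl @ p_hom.T).T[:, :3]               # Eq. (2)
        pcm.append(p_world)                               # PCM <- Accum(PCM, p world i)
    return np.vstack(pcm)
```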

Evaluation of Matching Error
To evaluate the matching error, this step receives the created PCM and the p lidar, gyro motion, vel motion, and calib param used to make the PCM as inputs, and outputs the matching error. The assumptions about time synchronization are the same as in Section 3.1.

Generation of Ground-Truth Motion Sensor's Pose Using Input Motion Data
The 6-DoF motion sensor's poses and transformation matrix T world motion can be obtained using the same method and data as when generating PCM.

Generation of Predicted Motion Sensor's Pose Using Point Cloud-PCM Matching
The 6-DoF predicted LiDAR poses on the world coordinate are obtained through matching between the created PCM on the world coordinate and the point clouds on the LiDAR coordinate given as input. Matching is performed using a registration method such as point-to-point, point-to-line, point-to-plane, generalized ICP, or NDT [22][23][24][25][26], and the matching result becomes the pose of the LiDAR in the world coordinate. Therefore, a value near the LiDAR pose is required as the initial value of registration. However, if the LiDAR pose used for mapping is given as the initial value, the input point cloud transformed by this initial value will almost coincide with the map points, so registration will simply return the initial value. In this case, the property that the matching result is inaccurate when matching between a blurred map and a point cloud is not satisfied. Therefore, the property is satisfied by not using the LiDAR pose itself as the initial value of registration, but by using the LiDAR pose with a random value added as the initial value. The random value to be used for the initial value at time i consists of x rand i, y rand i, z rand i, roll rand i, pitch rand i, and yaw rand i, and a homogeneous transformation matrix T rand i ∈ SE(3) can be obtained from these parameters. These values are drawn randomly within a reasonable bound on the maximum error that a calibration parameter can have. After obtaining the random value, the initial value of registration at time i, T init i ∈ SE(3), can be obtained through Equation (4).
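Generating the perturbed initial value T init i of Equation (4) can be sketched as follows (a NumPy sketch; the error bounds are illustrative assumptions, and the ZYX Euler convention matches the paper's):

```python
import numpy as np

def perturbed_initial(T_world_lidar, max_trans=0.5,
                      max_rot=np.deg2rad(2.0), rng=None):
    """Build T init i = T world lidar i @ T rand i (Eq. (4)) by perturbing
    the mapping pose with random offsets drawn within an assumed maximum
    calibration error (the bounds here are illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    x, y, z = rng.uniform(-max_trans, max_trans, 3)
    roll, pitch, yaw = rng.uniform(-max_rot, max_rot, 3)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # ZYX Euler angles: R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                  [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                  [-sp,   cp*sr,            cp*cr]])
    T_rand = np.eye(4)
    T_rand[:3, :3] = R
    T_rand[:3, 3] = [x, y, z]
    return T_world_lidar @ T_rand
```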
The predicted LiDAR pose on the world coordinate obtained through matching between the PCM and the point cloud at time i is defined as T world pred_lidar i. Similarly to the mapping step, the predicted motion sensor's pose obtained using the correct calibration parameter will almost match the ground-truth motion sensor's pose, which is obtained by applying the motion data to dead reckoning or another vehicle model, whereas the predicted motion sensor's pose obtained using an incorrect calibration parameter will be far from the ground-truth pose.

Obtaining the Localization Error between Ground-Truth Motion Sensor's Pose and Predicted Pose
Localization errors between T world pred_motion i and T world motion i are calculated, and the distance root mean square error (RMSE) and rotation RMSE are derived from the distance and rotation errors obtained over the entire dataset. The distance RMSE and rotation RMSE are defined as PI dist and PI rot, respectively. The amount of computation can be reduced by obtaining the PI only at times selected through distance sampling instead of over the entire dataset. As a result, when the correct calibration parameter is used in this algorithm, PI dist and PI rot will be small, and the larger the error in the calibration parameter, the larger PI dist and PI rot will be. This is verified through Monte Carlo simulation in Section 4.
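Computing PI dist and PI rot from the pose pairs can be sketched as follows (a NumPy sketch; here the rotation error is measured as the angle of the relative rotation rather than per-axis roll/pitch/yaw differences, which is a simplifying assumption):

```python
import numpy as np

def performance_index(pred_poses, gt_poses):
    """PI dist / PI rot as distance and rotation RMSE between predicted and
    ground-truth motion sensor poses (both lists of 4x4 transforms)."""
    d_sq, r_sq = [], []
    for Tp, Tg in zip(pred_poses, gt_poses):
        d_sq.append(np.sum((Tp[:3, 3] - Tg[:3, 3]) ** 2))
        # Rotation error as the angle of the relative rotation Rp^T Rg.
        R_rel = Tp[:3, :3].T @ Tg[:3, :3]
        ang = np.arccos(np.clip((np.trace(R_rel) - 1) / 2, -1.0, 1.0))
        r_sq.append(ang ** 2)
    return np.sqrt(np.mean(d_sq)), np.sqrt(np.mean(r_sq))
```

With identical predicted and ground-truth poses both indices are zero, and they grow as the predicted poses drift from the ground truth.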

Inputs:
  Motion data in time i: gyro motion i, vel motion i
  Point cloud in time i: p lidar i
  Calibration transformation matrix: T motion lidar
  Accumulated point cloud map: PCM
Output:
  Performance index according to calibration parameter: PI dist, PI rot
for i = 1 to N do
  T world motion i ← VehicleModel(gyro motion i, vel motion i)
  T world lidar i ← T world motion i T motion lidar
  T rand i ← RandomGenerator
  T init i ← T world lidar i T rand i
  …
end for

Experimental Verification Based on Monte Carlo Simulation
In this section, it is experimentally verified, using the Monte Carlo simulation method, that the proposed PI is convex with respect to the error added to the calibration parameter. It is shown that if the error of the calibration parameter is 0, the proposed PI will be 0, and that the proposed PI increases as error is added to the calibration parameter. After introducing the simulation environment, variables, and equations for verification, the verification process and results are explained.

Definition of Variables
The variables used for validation are shown in Figure 5. T world motion, a transformation matrix composed of the 6-DoF pose of the motion sensor, and T motion lidar, composed of a calibration parameter, are obtained through Monte Carlo sampling. The 6-DoF pose of LiDAR can be obtained through Equation (1). In order to exclude registration error during PCM-point cloud matching, it is assumed that there is a directional landmark, and T landmark is the transformation matrix of this landmark. Through this landmark, the transformation matrix T detection, which represents the detection of the landmark from the LiDAR, is obtained from Equation (6). Figure 5 shows the relationship described above.

Generation of Landmark Using n Motion Sensor's Pose and One Calibration Parameter
New landmarks are generated using one calibration parameter and n motion sensor poses. The process of generating a new landmark using the ith motion sensor's pose and the detection transformation between LiDAR and landmark is expressed in Equation (7). If the calibration parameter does not contain an error, the newly generated landmarks match the actual landmark. However, the calibration parameter to be evaluated contains errors. Therefore, the newly generated landmarks obtained using the calibration parameter including the calibration error can be derived using Equation (8), and this is shown in Figure 7.
T new_landmark i = T world motion i T motion lidar T detection i (7)

T new_landmark i = T world motion i T motion lidar T error T detection i (8)

To obtain the PI, it is necessary to obtain the predicted LiDAR pose through matching in order to obtain the predicted motion sensor's pose described in Section 3.2. Unlike previously described, to exclude registration errors during matching, one of the newly generated landmarks is randomly selected instead of performing registration. If the index of the randomly selected landmark is defined as j, then the selected landmark is expressed as Equation (9). The ith predicted LiDAR pose using this landmark as the matching result can be obtained through Equation (10). The predicted motion sensor's pose can be obtained by multiplying the predicted LiDAR pose by the inverse of the calibration parameter containing the error. The ith predicted motion sensor's pose is expressed in Equation (11).
T world sel_landmark i = (T world motion j T motion lidar T error T detection j), j = rand(1, . . . , n) (9)

T world pred_lidar i = (T world motion j T motion lidar T error T detection j) j=rand(1,...,n) (T detection i) −1 (10)

The distance error can be obtained from the x, y, z components of T world pred_motion i, the transformation matrix of the predicted motion sensor's pose, and the x, y, z components of T world motion i at index i. In addition, the rotation error is obtained from the roll, pitch, and yaw components of T world pred_motion i and T world motion i. The distance and rotation errors are calculated for indexes 1 to n, and the distance RMSE and rotation RMSE are obtained from these. The distance and rotation RMSE are used as PI dist and PI rot according to the calibration parameters. To verify that PI dist and PI rot increase as the calibration error increases, calibration errors are added as shown in Table 1. The above procedure was performed for m calibration parameters. Therefore, the Monte Carlo simulation was performed by adding each error in Table 1 for a total of n × m cases.
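The chain of Equations (9)-(11) can be sketched as follows (a NumPy sketch; since Equation (11) is not written out above, the final inverse multiplication is an assumed form consistent with the surrounding text):

```python
import numpy as np

def predicted_motion_pose(motion_poses, T_ml, T_err, detections, i, j):
    """Eqs. (9)-(11): the jth regenerated landmark replaces registration,
    the ith predicted LiDAR pose follows from the ith detection, and the
    predicted motion pose is recovered with the error-containing
    calibration (assumed form of Eq. (11))."""
    T_sel = motion_poses[j] @ T_ml @ T_err @ detections[j]   # Eq. (9)
    T_pred_lidar = T_sel @ np.linalg.inv(detections[i])      # Eq. (10)
    return T_pred_lidar @ np.linalg.inv(T_ml @ T_err)        # Eq. (11)
```

With T error equal to the identity, every regenerated landmark coincides with the actual landmark, so the predicted motion pose equals the ground-truth pose regardless of which landmark index j is drawn.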

Result of Verification
The results for the n × m cases are shown in Figure 8 as box plots. The x-axis of the graph represents the calibration parameter error, and the y-axis represents PI dist or PI rot obtained through Monte Carlo simulation. The error is 0 at the center of the x-axis and increases toward both sides, as in Table 1. The results for the n × m cases for each error are shown as box plots. It can be confirmed that the red line, which is the average of the entire simulation, is 0 when the error is 0, and that the average value increases as the error increases. However, if there is no rotation error in the calibration parameter, R world pred_motion and R world motion, which are the rotation matrices of T world pred_motion and T world motion, are equal by Equation (11). Therefore, in this case, PI rot becomes 0. Through Monte Carlo simulation, it was shown that the proposed PI is convex with respect to the calibration error, except for PI rot when only x, y, and z errors are included. Therefore, the proposed PI is valid for various motion sensor movements and calibration parameters.

Experiment
In this section, the validity of the proposed algorithm was verified using simulation data and real data. Experiments using simulation data are focused on verifying whether the algorithm is valid in various environments. Experiments using real data are focused on verifying that the algorithm is valid in a noisy real environment.

Simulation Environment
The purpose of these experiments is to verify the validity of the proposed performance index, and it was evaluated using simulation data in various environments. The various environments consist of various sensor movements and calibration configurations between LiDAR and the motion sensor. Therefore, verification used IPG CarMaker, a virtual test drive simulator that can generate sensor configurations and sensor movements as desired by the user. Data were generated from one motion sensor at 10 Hz and six time-synchronized Velodyne VLP-16 LiDARs. The configuration of the various calibration parameters used in the experiment is shown in Table 2. To verify the algorithm according to the various movements of the sensor, the trajectory of the motion sensor and the surrounding environment were set as shown in Table 3. Table 2. Calibration parameter between one motion sensor and six LiDARs.


Result and Analysis of Experiments
The experiment was conducted using the point-to-point ICP registration method. To reduce the effect of the initial registration value, the experiment was performed 10 times for each error, and the total PI is defined as the average of the 10 PIs. The result of the experiment using simulation data with the calibration parameter error configuration described in Table 4 is shown in Table 5. In each scenario, PI dist and PI rot are shown when errors are added to the x, y, and z elements and to the roll, pitch, and yaw elements, respectively. In Section 4.3, it was shown that there is no change in PI rot when there is an error only in the x, y, and z components of the calibration parameter. However, it was confirmed that a rotation error occurred because registration was used during matching rather than random landmark selection. In addition, as there is no rotational movement of the motion sensor and LiDAR in the straight scenario, the point cloud map is not blurred, and only a bias is generated. Therefore, because the basic premise of the proposed algorithm is not satisfied, there is no change in the results of PI dist and PI rot. Except for this case, it was confirmed that PI dist and PI rot are convex with respect to the calibration error for all calibration parameter configurations in the remaining scenarios.

Experimental Environment
To verify that the algorithm is valid in a noisy real environment, data were obtained from an outdoor parking lot with the autonomous vehicle platform of Konkuk University shown in Figure 9. This vehicle was equipped with two RoboSense RS-LiDAR-16 LiDARs, one RoboSense RS-LiDAR-32 LiDAR, and one NovAtel CPT7 INS including an HG4930 MEMS IMU. In this experiment, the INS sensor itself was used as the motion sensor. The performance of the algorithm was verified with respect to the calibration parameters between the NovAtel CPT7 INS and the three LiDARs. The calibration parameter configuration of the INS and three LiDARs is described in Table 6. This value was acquired through hand-eye calibration, which is one of the calibration methods. In addition, in order to check the correlation between the PI and localization, the localization RMSE was derived by implementing extended Kalman filter-based map-matching localization [27][28][29] under the same conditions.

Results and Analysis of Experiment
The experiment was conducted using the point-to-point ICP registration method and performed 10 times for each error described in Table 4. The resulting PIs using real data are shown in Table 7 by solid lines, and the distance RMSE and rotation RMSE of localization are indicated by dotted lines. As a result of the experiment using real data, it was confirmed that the PIs generally had larger values because the noisy real data degrades the performance of registration. Nevertheless, it can be seen that the derived PIs are convex with respect to the calibration parameter error. Although the localization result changed according to the tuning values of the localization algorithm, it showed the same distribution as the PI regardless of the tuning values. In addition, in most cases, the localization result showed a smaller value than the PI. Through this, it was possible to confirm the validity of the proposed PI in the real environment, where it has the advantage of showing how much the calibration parameter affects map-matching localization.

Conclusions
This paper proposes an algorithm for obtaining the performance index for the extrinsic calibration parameters of LiDAR and the motion sensor. It was experimentally verified through Monte Carlo simulation, and its validity was verified through experiments using CarMaker simulation data and real data. As a result, it was verified that the algorithm is effective in all situations except the straight scenario without rotational movement of the motion sensor. Through these PIs, the validity of indirect calibration can be checked more easily and conveniently, and an unbiased, absolute performance index can be derived. In addition, these PIs indicate how much the calibration parameters affect localization performance, and more accurate calibration parameters can be obtained by applying the derived PIs to global optimization (described in Appendix A).
The algorithm for obtaining the performance index proposed in this paper can be applied to online calibration. Therefore, in future work, the study can be expanded to update the calibration parameter using this performance index, which can also serve as an index to detect faults in the calibration parameter in real time.

Conflicts of Interest:
The author declares no conflict of interest.

Appendix A
In this section, the process of deriving a more accurate calibration parameter through the derived PI is briefly introduced. The algorithm proposed in this paper receives calib_param, point clouds, and motion data and outputs PI dist and PI rot. The values of the PIs change according to calib_param. Therefore, by performing global optimization using PI dist (calib_param) and PI rot (calib_param) as cost functions, the calib_param * that minimizes the cost function can be derived.
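As a sketch of this idea (a toy random-search stand-in for a real global optimizer such as simulated annealing or CMA-ES; the quadratic cost below is a stand-in for PI dist (calib_param), and the bounds are illustrative):

```python
import numpy as np

def global_optimize(pi_cost, bounds, n_samples=2000, seed=0):
    """Toy random search: sample calib_param candidates within bounds and
    keep the one minimizing the PI cost. A practical system would use a
    proper global optimizer over the 6-DoF parameter space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_p, best_c = None, np.inf
    for _ in range(n_samples):
        p = rng.uniform(lo, hi)
        c = pi_cost(p)
        if c < best_c:
            best_p, best_c = p, c
    return best_p, best_c

# Hypothetical convex PI surrogate with its minimum at the true parameter.
true_param = np.array([0.1, -0.2])
cost = lambda p: float(np.sum((p - true_param) ** 2))
best_p, best_c = global_optimize(cost, [(-1.0, 1.0), (-1.0, 1.0)])
```

Because the PI is convex with respect to the calibration error, the sampled minimum converges toward the true parameter as the number of samples grows.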