OMC-SLIO: Online Multiple Calibrations Spinning LiDAR Inertial Odometry

Light detection and ranging (LiDAR) is often combined with an inertial measurement unit (IMU) to obtain LiDAR inertial odometry (LIO) for robot localization and mapping. To make LIO applicable without specialist calibration effort, self-calibrating LIO is a hot research topic in the related community. Spinning LiDAR (SLiDAR), which uses an additional rotating mechanism to spin a common LiDAR and scan the surrounding environment, achieves a large field of view (FoV) at low cost. Unlike common LiDAR, in addition to the calibration between the IMU and the LiDAR, a self-calibrating odometry for SLiDAR must also consider the mechanism calibration between the rotating mechanism and the LiDAR. However, existing self-calibration LIO methods require the LiDAR to be rigidly attached to the IMU and do not take the mechanism calibration into account, so they cannot be applied to SLiDAR. In this paper, we propose a novel self-calibrating odometry scheme for SLiDAR, named online multiple calibrations spinning LiDAR inertial odometry (OMC-SLIO), which allows online estimation of multiple extrinsic parameters among the LiDAR, rotating mechanism and IMU, as well as the odometer state. Specifically, considering that the rotating and static parts of the motor encoder inside the SLiDAR are rigidly connected to the LiDAR and IMU, respectively, we formulate the calibration within the SLiDAR as two separate sets of calibrations: the mechanism calibration between the LiDAR and the rotating part of the motor encoder, and the sensor calibration between the static part of the motor encoder and the IMU. Based on such a SLiDAR calibration formulation, we can construct a well-defined kinematic model from the LiDAR to the IMU with the angular information from the motor encoder. Based on the kinematic model, a two-stage motion compensation method is presented to eliminate the point cloud distortion resulting from LiDAR spinning and platform motion.
Furthermore, the mechanism and sensor calibration as well as the odometer state are wrapped in a measurement model and estimated via an error-state iterative extended Kalman filter (ESIEKF). Experimental results show that our OMC-SLIO is effective and attains excellent performance.


Introduction
Given the high reliability and accuracy of light detection and ranging (LiDAR) sensors, LiDAR odometry (LO) and LiDAR simultaneous localization and mapping (SLAM) methods have played an important role in areas such as robotics, autonomous driving and surveying [1][2][3]. However, the small field of view (FoV) and low vertical resolution of common LiDAR make it unsuitable for specific scenarios, such as unmanned aerial vehicles (UAVs) in indoor scenarios [4,5], and autonomous ground vehicles (AGVs) in confined spaces [6,7] or on uneven terrain [8,9]. Spinning LiDAR (SLiDAR), which uses an additional rotating mechanism to spin a common LiDAR and scan the surrounding environment, achieves a large FoV at low cost.

Related Works
There are plenty of works regarding diverse calibrations among IMUs, cameras and LiDAR; here we focus on the extrinsic calibration between the LiDAR and IMU and on the mechanism calibration of SLiDAR.

Extrinsic Calibration between LiDAR and IMU
There are some tailored offline calibration methods for the LiDAR-IMU system. Ref. [22] uses pre-integration over upsampled IMU measurements derived from a Gaussian process (GP) to remove motion distortion and then combines the factors of IMU pre-integration and LiDAR point-to-plane distances to calibrate the extrinsic parameters. A fusion method, in which ICP and an ISPKF are respectively used to determine the unknown transformation and estimate the time delay between the LiDAR and IMU, is presented in [23]. Without any artificial targets or specific facilities, Ref. [24] proposes a multi-feature-based field calibration method for LiDAR-IMU systems that combines point/sphere, line/cylinder and planar features to reduce labor intensity. To associate laser points with stable environmental objects, Ref. [25] adopts a continuous-time IMU trajectory, modeled using GP regression, and segments a point map into structured planes to calibrate the LiDAR and IMU with on-manifold batch optimization. Ref. [26] adopts a B-spline over IMU measurements to formulate a continuous-time trajectory for fusing high-rate scanning points as in [25], and uses point-to-surfel constraints to calibrate the parameters. Based on an EKF framework, Ref. [27] utilizes a discrete-time IMU state propagation model to compensate for motion distortion and proposes a motion-based constraint to refine the estimated states. Based on [26,27], Ref. [28] combines the Hausdorff distance between local trajectories and hand-eye calibration to solve the initial spatiotemporal relationship; the IMU pre-integration and the point, line and plane features of the point cloud are then wrapped into the objective function. Ref. [29] proposes an informative path planner to find an admissible path for ensuring accurate calibration. However, these offline calibration methods require the IMU to be rigidly attached to the LiDAR, a condition that SLiDAR cannot meet.
Regarding online extrinsic calibration in LIO, related works are built around a tightly coupled scheme based on a filter or nonlinear optimization. Both LINS [14] and FAST-LIO [15] design an error-state iterated Kalman filter (ESIKF) to fuse the IMU and LiDAR. LINS estimates the relative pose between two consecutive local frames for updating the global pose estimate. FAST-LIO nevertheless adopts a scan-to-submap registration with an efficient Kalman gain computation. Besides filter-based methods, nonlinear optimization has become prevalent recently due to its better accuracy. LIOM [16] tightly couples LiDAR and IMU by jointly minimizing their costs in a local fixed-size window and adds an extra rotation constraint to further refine the estimation. LIO-SAM [17] also formulates LIO as a factor graph, which is akin to LIOM, but adopts a keyframe strategy for real-time estimation. Although these methods can estimate the extrinsic parameters between the LiDAR and IMU, they also require the IMU to be rigidly attached to the LiDAR and do not take mechanism calibration into account.
Mechanism Calibration of SLiDAR

For calibration in a more general scene, a point-point constraint scheme is utilized [33][34][35]. Based on the random sample consensus (RANSAC) method, Refs. [18,32] extract planes from the whole-scene points to construct point-plane constraints for more accurate calibration. Some methods break the calibrated scene into small regions for extracting trivial planes: Refs. [36,37] search for neighboring points within a suitable radius in the down-sampled scene point cloud, and Refs. [20,21] divide the scene into small grids. In each small region, Ref. [20] adopts principal component analysis (PCA) to choose the planes for building point-plane constraints. With estimated approximate probabilities, Ref. [21] pre-selects the valuable grids from which to extract planes based on RANSAC. The above methods for SLiDAR mechanism calibration are offline, and their complex procedures cannot be directly applied to real-time self-calibrating SLIO. In addition, they do not resolve the sensor calibration.
Different from the works in Sections 2.1 and 2.2, our method aggregates both the mechanism and sensor extrinsic parameters as well as the odometry state into the ESIEKF framework to realize online multiple extrinsic calibrations and SLiDAR inertial odometry.

Problem Formulation
Considering that the rotating and static parts of the encoder are rigidly attached to the LiDAR and IMU, respectively, we can formulate the multiple calibrations inside the SLiDAR platform as two independent sets of calibrations: one is between the LiDAR and the rotating part of the encoder (it exactly corresponds to the rotating mechanism and is therefore named mechanism calibration), and the other is between the static part of the encoder and the IMU (named sensor calibration). To illustrate the formulation of multiple calibrations, the internal coordinate relation for SLiDAR is shown in Figure 1. The subscripts L, S, B and I respectively denote the LiDAR, the rotating part of the encoder, the static part of the encoder and the IMU. The corresponding coordinate systems are denoted as F L , F S , F B and F I . In addition, the global frame is denoted as F G . A scanning LiDAR point in F L , F S , F B , F I and F G is respectively denoted as L p, S p, B p, I p and G p. Obviously, F B and F I are rigidly attached to the static part of the SLiDAR platform, and F L and F S are rigidly attached to the rotating part of the SLiDAR platform. The X-axes of F S and F B are collinear, and F S and F B coincide exactly when the spinning angle ϕ of the encoder is 0 rad. The relative pose I T B between F B and F I is the sensor calibration, denoting the relative pose from the encoder to the IMU, and the relative pose S T L between F L and F S is the mechanism calibration, denoting the relative pose from the LiDAR to the mounting point of the rotating mechanism. As a result, I T B and S T L are the multiple extrinsic parameters of interest that we need to estimate online in LIO.
In addition, as our SLiDAR system is unidirectional, the mechanism calibration parameter along the spinning direction cannot be estimated due to unobservability [20]. Furthermore, the range bias of LiDAR might be several centimeters, which is much larger than the mechanism bias, so the translation parts of the mechanism extrinsic parameters cannot be estimated in a non-specific scene [9]. Given the above reasons, we simplify the mechanism calibration to only two rotational parameters S ω L = (ω y , ω z ) in practice, in which both the rotational parameter about the X-axis and all the translational parameters are set to zero. It should be noted that we still use S T L to denote the mechanism calibration later for clarity.


Overall Pipeline
For SLiDAR, the overlap between consecutive LiDAR scanning frames is minor due to the rapid LiDAR rotation; we therefore adopt a scan-to-submap matching approach to ensure enough associations. Meanwhile, built upon an efficient ESIEKF framework [38], we aggregate both mechanism and sensor calibrations as well as odometry poses into one measurement model to pursue real-time performance. The overall pipeline of the proposed OMC-SLIO is shown in Figure 2, which mainly contains four parts: preprocessing, global transformation, ESIEKF and global mapping. The two-stage motion compensation acts as part of the global transformation. In preprocessing, the planar points are extracted from the LiDAR scanning points, the spinning angles are obtained from the encoder, and the IMU states are propagated upon every IMU measurement. In global transformation, the extracted planar points are projected to global space via successive transformations consisting of the mechanism calibration, motion compensation, sensor calibration and IMU global poses. Specifically, motion compensation covers two kinds of distortion, from LiDAR spinning and platform moving. The global planar points are used to associate with the global planes in the submap of global mapping. Then, the residuals derived from point-to-plane constraints are passed into the ESIEKF to update the states. If converged, global mapping and the odometry output will be updated; otherwise, global transformation and feature association will be executed again with the updated states.
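The iterate-until-convergence control flow described above can be sketched as a small driver loop. The three callables below are hypothetical stand-ins for the global transformation, feature association and ESIEKF update stages in Figure 2, not the paper's actual interfaces.

```python
def esiekf_loop(project_to_global, associate, update_state, state, max_iters=4):
    """Iterate global transformation -> feature association -> state update
    until convergence, mirroring the pipeline in Figure 2 (structural
    sketch only; the three callables are hypothetical stand-ins)."""
    for _ in range(max_iters):
        points = project_to_global(state)   # global transformation stage
        matches = associate(points)         # point-to-plane association
        state, converged = update_state(state, matches)  # ESIEKF update
        if converged:
            break
    return state
```

Once the loop reports convergence, the updated state would be handed to global mapping and the odometry output.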
In the following, we will introduce more details on preprocessing, global transformation and ESIEKF; readers can refer to related works for details of global mapping [39].


Preprocessing
The frequencies of the data received from the LiDAR, encoder and IMU are different; for instance, the LiDAR runs at 10 Hz while both the encoder and IMU run at 200 Hz in our SLiDAR platform. Thus, before preprocessing, the raw encoder spinning angle, IMU angular rate and IMU acceleration are temporarily stored in their respective buffers until one full frame of LiDAR scanning points has been received.
For the raw LiDAR scanning points, the planar points will first be extracted as in LOAM [40], and the extracted planar points will then be projected to the global space. Both the raw spinning angle and IMU data are first utilized in the two-stage motion compensation introduced in Section 4.3.1. In addition, each angular rate and acceleration measurement in the IMU buffer is propagated to predict the ego-motion states; the details of state propagation are introduced in Section 4.4.2.
Note that the timestamp of the latest rotation angle may be earlier than the timestamp at the end of the current LiDAR scan, so some points at the tail of the scan cannot be assigned a rotation angle by interpolation. Therefore, we utilize the accumulated spinning angle measurements to estimate the current spinning rate in preprocessing. Denote ω ϕ N−1 and ω ϕ N as the last and current spinning rates, respectively, where ∆ϕ is the measurement period, S t N−1 is the sum of the last N − 1 measured spinning angles, and ϕ N is the latest spinning angle. Based on the estimated spinning rate and the time difference, each point in the tail of the LiDAR scan can be assigned an accurate spinning angle.
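As a rough illustration, the spinning rate can be estimated from the buffered encoder angles and then used to extrapolate angles for the tail points. The averaging formula below is an assumed simplification; the paper's exact accumulation expression is not reproduced here.

```python
import numpy as np

def estimate_spin_rate(angles, dt):
    """Estimate the current spinning rate from buffered encoder angles.

    Assumed simplification: the rate is the mean increment between
    consecutive angle samples divided by the sample period `dt`.
    """
    increments = np.diff(np.unwrap(angles))  # unwrap handles 2*pi roll-over
    return float(np.mean(increments) / dt)

def extrapolate_angle(last_angle, last_stamp, rate, t):
    """Predict the spinning angle for a point time `t` that falls after
    the latest encoder sample (the 'tail' of a LiDAR scan)."""
    return last_angle + rate * (t - last_stamp)
```

For example, encoder samples of 0.1 rad increments at a 5 ms period imply a 20 rad/s spinning rate, which can then assign angles to scan-tail points a few milliseconds past the last sample.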

Global Transformation
For associating planar features in the submap, we need to project the raw point L p in the LiDAR frame into the global point G p via the global transformation. In addition, due to the LiDAR's continuous spinning and the platform's motion, the points received within one scanning period are not expressed in the same coordinate frame, and thus we need to transform all the points to the end time of the scan via motion compensation. According to the coordinate relation of SLiDAR shown in Figure 1 and considering the motion compensation, the global transformation of SLiDAR contains five consecutive transformations: the extrinsic mechanism parameter, LiDAR spinning compensation, platform moving compensation, the extrinsic sensor parameter and the global IMU pose.
Denote L k−1 as the end of the last LiDAR scan and L k as the end of the current LiDAR scan. First, S i p i is the point in the frame F S projected from the original point L i p i in F L with the extrinsic mechanism parameter S T L . Next, denote S k p i as the point at the end time of L k in the frame of the rotating part of the encoder F S , obtained by the LiDAR spinning distortion compensation T X (ϕ s ) with the corresponding angle ϕ s (derived in the Removing LiDAR Spinning Distortion part of Section 4.3.1). Then, B i p i is the point in the frame of the static part of the encoder F B , obtained by the transformation T X (ϕ a ) that projects S k p i into F B with angle ϕ a (also derived in that part of Section 4.3.1). Furthermore, B k p i is the point at the end time of L k in the frame F B , obtained by the platform moving distortion compensation k T i . Finally, denoting G R I and G p I as the attitude and position of the IMU in the global frame, respectively, the corresponding global point G i p i is obtained by applying the sensor calibration I T B and the global IMU pose. In the following, we will derive the details of the two-stage motion compensation: LiDAR spinning compensation and platform moving compensation, respectively.
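Under the assumption that each of the five transformations is represented as a 4x4 homogeneous matrix, the chain above might be composed as follows; all function and argument names are illustrative, not the paper's interfaces.

```python
import numpy as np

def rot_x(phi):
    """Homogeneous rotation about the X-axis (the spinning axis)."""
    c, s = np.cos(phi), np.sin(phi)
    T = np.eye(4)
    T[1:3, 1:3] = [[c, -s], [s, c]]
    return T

def global_point(L_p, S_T_L, phi_s, phi_a, k_T_i, I_T_B, G_T_I):
    """Chain the five transformations from a raw LiDAR point to the global
    frame: mechanism extrinsic -> spinning compensation -> projection into
    the encoder's static frame -> platform-motion compensation -> sensor
    extrinsic -> global IMU pose."""
    p = np.append(L_p, 1.0)  # homogeneous coordinates
    T = G_T_I @ I_T_B @ k_T_i @ rot_x(phi_a) @ rot_x(phi_s) @ S_T_L
    return (T @ p)[:3]
```

With all extrinsics and poses set to identity and zero angles, a raw point maps to itself, which is a convenient sanity check on the composition order.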

The Two-Stage Motion Compensation
Different from common LiDAR odometry, which only resolves the moving distortion, SLiDAR incurs extra LiDAR spinning distortion. Based on the coordinate relation of SLiDAR in Figure 1, the bottom-up process of the two-stage motion compensation, which removes the LiDAR spinning distortion and the platform moving distortion in turn, is shown in Figure 3.

Removing LiDAR Spinning Distortion

First, we interpolate the measured spinning angles to get the corresponding spinning angle of each LiDAR point, and then rotate the point to the frame at the scanning end time with the relative angle difference.
Assume the sample period of the spinning angle is ∆γ, the end time of the current scan is ρ k , the sample time for S i p i is ρ i , and the spinning rate ω ϕ is constant during a sample period ∆γ. In Figure 3, ρ i ∈ [γ h−1 , γ h ], in which γ h−1 and γ h are the adjacent times of spinning angle measurement, and the measured spinning angles at the times γ h−1 and γ h are ϕ h−1 and ϕ h , respectively. With linear interpolation, the spinning angle ϕ i at the time ρ i is obtained. If ρ i is later than the latest time γ h , we can utilize the spinning rate ω ϕ estimated at time γ h to predict the spinning angle corresponding to ρ i . Since the LiDAR spins only about the X-axis, the compensation reduces to a rotation about X. Similarly, we can get the spinning angle ϕ k corresponding to ρ k . Denoting the measured spinning angle corresponding to the frame F B as ϕ 0 , we can derive the compensation angles ϕ s and ϕ a as in formulations (9) and (10). Notably, R X (ϕ) in both formulations (9) and (10) is defined as the standard rotation about the X-axis. From formulations (3), (4), (9) and (10), we can group T X (ϕ s ) and T X (ϕ a ) into an encapsulated LiDAR spinning distortion compensation.
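A minimal sketch of the angle interpolation and the X-axis rotation it feeds. The exact compensation angles ϕ s and ϕ a of formulations (9) and (10) are not reproduced; only the generic linear interpolation and the standard rotation about X are shown.

```python
import numpy as np

def interpolate_angle(t, t0, t1, phi0, phi1):
    """Linearly interpolate the spinning angle at time t in [t0, t1],
    given measured angles phi0 and phi1 at the adjacent encoder samples."""
    alpha = (t - t0) / (t1 - t0)
    return phi0 + alpha * (phi1 - phi0)

def R_X(phi):
    """Standard rotation about the X-axis, as used in the spinning
    distortion compensation."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])
```

Because rotations about a common axis commute, the two compensation rotations can be grouped into one, which is exactly why T X (ϕ s ) and T X (ϕ a ) can be encapsulated together.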

Removing Platform Moving Distortion
Regarding platform moving compensation, the back-propagation of IMU measurements is employed to get the related pose for each point, and the point is then transformed to the scanning end time with its corresponding related pose [15]. Different from the existing LIO methods, which regard the point projected by motion compensation as the raw point for the measurement model, since the moving distortion transformation k T i is an intermediate parameter in the global transformation for SLiDAR, we need to record the corresponding k T i for the raw point L i p i as part of the measurement model.
Performing the forward propagation on the IMU inputs (Section 4.4.2), we can get a predicted IMU pose chain up to the end time of the LiDAR scan L k , in which we denote G I x I k as the predicted state. Assume the time ρ i of point B i p i is located in an IMU time interval, that is, ρ i ∈ [t j−1 , t j ], as shown in Figure 3. By querying the predicted IMU pose chain, we can get the IMU pose G I x I j at time t j . Then, performing the back-propagation from the initial IMU state G I x I j , we can derive the corresponding IMU state G I x̌ I i for B i p i with the time interval t j − ρ i . For the implementation details of back-propagation, the interested reader can refer to [15].
We denote the back-propagated IMU pose at the time ρ i as (G Ř I i , G p̌ I i ) ∈ G I x̌ I i , the forward-propagated IMU pose at the end of L k as (G R I k , G p I k ) ∈ G I x I k , and the predicted sensor calibration as I T B = (I R B , I t B ), respectively. As a result, projecting B i p i to B k p i , we can derive the moving distortion compensation k T i = (k R i , k t i ). For general IMUs, the propagated values G p̌ I i and G p I k are not reliable, and the related term G p̌ I i − G p I k in formulation (15) is usually trivial, so we ignore this term in the initial motion compensation. In the end, the estimated odometry results are used to compensate for the distortion again with formulations (14) and (15) after finishing the optimization. To pursue real-time performance, we do not update formulations (14) and (15) with the corrected states during the iterative optimization.
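Assuming all poses are expressed as 4x4 homogeneous transforms, one plausible composition of the moving distortion compensation k T i, consistent with the transformation chain above but not a verbatim reproduction of formulation (15), is:

```python
import numpy as np

def relative_motion_compensation(G_T_Ii, G_T_Ik, I_T_B):
    """Sketch of the platform-motion compensation k_T_i that maps a point
    expressed in the static encoder frame at time rho_i to the scan-end
    frame.  Assumed composition (not the paper's exact formulation (15)):
    carry the point to the global frame through the pose at rho_i, then
    back into the encoder frame through the scan-end pose."""
    return np.linalg.inv(G_T_Ik @ I_T_B) @ (G_T_Ii @ I_T_B)
```

When the platform does not move between ρ i and the scan end, the two IMU poses coincide and k T i collapses to the identity, matching the intuition that no moving distortion remains.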

ESIEKF
With the ESIEKF optimization framework [38], we aggregate the mechanism calibration, sensor calibration and IMU states into one measurement model to build our OMC-SLIO. Specifically, we will introduce the main contents of the ESIEKF implementation in OMC-SLIO, which include the state transition model, forward propagation and the ESIEKF update.

State Transition Model
We take the IMU frame F I as the body frame and the first body frame of F I as the global frame F G . The true state is defined as x = [I R B , I t B , S ω L , G I x I ], in which I R B and I t B are respectively the rotation and translation parts of the sensor calibration, S ω L = [S L ω y , S L ω z ] T is the rotational parameter vector of interest for the mechanism calibration, as discussed in Section 3, and G I x I denotes the IMU state. G I x I is defined as G I x I = [G R I , G p I , G v I , b ω , b a , G g ], where G R I , G p I , G v I and G g denote the IMU attitude, position, velocity and gravity vector in the global frame F G , respectively, b ω is the IMU gyroscope bias, and b a is the IMU accelerometer bias. The continuous-time kinematic model for the state x follows.
where ω m and a m are the raw IMU measurements, n ω and n a are the measurement white noises of ω m and a m , and both IMU biases b ω and b a are modeled as random walks driven by the Gaussian noises n bω and n ba . Note that here we only consider the two rotation parameters S L ω y and S L ω z of the mechanism calibration S T L . Denote S R L and S t L as the rotation and translation parts of S T L , respectively. With the zero-order hold discretization, the discrete state transition model at the i-th IMU measurement can be expressed as [38]: x i+1 = x i ⊞ (∆t f(x i , u i , w i )), where ⊞ is the encapsulated "boxplus" operation defined in [41], ∆t is the IMU sample period, and the discrete kinematics function f, input u and process noise w are respectively defined with u = [ω m , a m ] T and w = [n ω , n a , n bω , n ba ] T .
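The IMU part of the zero-order-hold propagation can be sketched as below. The calibration states are constant and omitted, and the paper's exact form of f is not reproduced, so this is an assumed textbook-style discretization.

```python
import numpy as np

def so3_exp(w):
    """Matrix exponential of a rotation vector (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def propagate(R, p, v, bw, ba, g, wm, am, dt):
    """One zero-order-hold propagation step of the IMU part of the state
    (sketch of f with the process noise set to zero; the calibration
    states I_T_B and S_w_L have zero dynamics and are omitted)."""
    a_world = R @ (am - ba) + g           # bias-corrected specific force
    R_next = R @ so3_exp((wm - bw) * dt)  # attitude update via boxplus on SO(3)
    p_next = p + v * dt + 0.5 * a_world * dt ** 2
    v_next = v + a_world * dt
    return R_next, p_next, v_next
```

A stationary IMU reading exactly counteracts gravity, so propagating with wm = 0 and am = −g leaves the pose and velocity unchanged, which is a quick consistency check.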

Forward Propagation
Assume the optimal estimated state of the last LiDAR scan L k−1 at its end time is x̄ k−1 with covariance P̄ k−1 , which represents the covariance of the error state defined below, where δ G θ I = Log(G R̄ I T G R I ) is the IMU attitude error, δ I θ B = Log(I R̄ B T I R B ) is the error of the rotation part of the sensor calibration, and the rest are standard errors of the form x̃ = x − x̄. Here, each δθ vector represents the small deviation between the true and the estimated rotation in the tangent space; it is a minimal representation with three degrees of freedom.
Upon each IMU input u i , forward propagation is performed to predict the state x̂ i+1 by setting the process noise w i to zero, and the covariance P̂ i+1 of the error state x̃ i+1 = x i+1 ⊟ x̂ i+1 (⊟ is the encapsulated "boxminus" operation defined in [41]) is updated as P̂ i+1 = F x P̂ i F x T + F w Q F w T , where Q is the covariance of w, and the detailed derivations of F x and F w can be found in [38]. Note that the forward propagation runs from time t k−1 (the end time of the last LiDAR scan L k−1 ) to t k (the end time of the current LiDAR scan L k ). Denote the propagated state and covariance at t k as x̂ k and P̂ k ; the error state x̃ k = x k ⊟ x̂ k complies with the prior distribution N (0, P̂ k ).
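The covariance propagation step itself is a one-liner once the Jacobians are given; here F x, F w and Q are treated as opaque inputs supplied by the derivation in [38].

```python
import numpy as np

def propagate_covariance(P, F_x, F_w, Q):
    """Error-state covariance propagation for one IMU step:
    P <- F_x P F_x^T + F_w Q F_w^T.  Symmetrizing the result guards
    against numerical drift over many propagation steps."""
    P_next = F_x @ P @ F_x.T + F_w @ Q @ F_w.T
    return 0.5 * (P_next + P_next.T)
```

With identity Jacobians the update reduces to P + Q, i.e., the error simply accumulates the process noise, which is the expected degenerate behavior.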

ESIEKF Update
Denote the current iteration of the IEKF as υ, and the corresponding estimated and error states as x̂ υ k and x̃ υ k , respectively. Each scanning LiDAR point carries the measurement noise L n i . Since L p gt i is a planar point, its neighbors in the global map should lie on a local plane; by projecting L p gt i to the global frame with the true state x k , the measurement model can be derived. Ignoring the measurement noise L n i , we can project the extracted planar feature point L p i to the global frame with the estimated state x̂ υ k , obtaining G p υ i . Similarly, since L p i is a planar point, its neighbors in the global map should lie on a local plane, so we can define the point-to-plane constraint with G n i and G c i , the normal vector and center point of the local plane fitted by the neighboring points of G p υ i in the global map. As a result, we can define the residual, where the detailed derivation of H υ i and D υ i can be referred to in [38], and formulation (33) defines a posteriori distribution for x̃ k . After every iteration, the prior distribution of x̃ k from the forward propagation evolves, where the detailed derivation of J υ can be found in [38], and x̂ υ k = x̂ k with J υ = I while υ = 0. From formulation (36), we can obtain the evolved prior distribution for x̃ υ k . Combining the prior in formulation (38) and the posteriori distribution in formulation (35) leads to the maximum a posteriori (MAP) estimate for x̃ υ k , where ‖x‖ 2 Σ = x T Σ −1 x. Substituting the linearization of the prior into the above equation and optimizing the resultant quadratic cost leads to the standard iterated Kalman filter; the updated estimate x̂ υ+1 k is then used to compute the residual again, and the process is repeated until convergence. After convergence, the optimal state estimation and covariance are obtained.
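A simplified, Euclidean-state sketch of the iterated update (the on-manifold boxplus/boxminus operations and the prior Jacobian J υ of the full ESIEKF are deliberately omitted) might look like:

```python
import numpy as np

def iterated_update(x0, P, residual_fun, jacobian_fun, R, iters=10, tol=1e-8):
    """Plain iterated Kalman update on a vector state (simplified sketch).
    residual_fun(x) returns the stacked residuals h(x) - z (e.g. the
    point-to-plane distances); jacobian_fun(x) returns their Jacobian."""
    x = x0.copy()
    I = np.eye(len(x0))
    for _ in range(iters):
        r = residual_fun(x)
        H = jacobian_fun(x)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        # Iterated update: relinearize about x while anchoring the prior at x0.
        dx = -K @ r - (I - K @ H) @ (x - x0)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    P_post = (I - K @ H) @ P  # posterior covariance at convergence
    return x, P_post
```

In the linear case the loop converges after the second iteration to the standard Kalman update, which is a useful unit test for the relinearization term.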

Experiment Results
In order to verify the proposed OMC-SLIO, we designed a SLiDAR experimental platform, as shown in Figure 4, in which the hardware system can be divided into two parts: the rotating part and the static part. Specifically, the rotating part consists of a Velodyne VLP-16 LiDAR, the rotating mechanism and the rotating part of a DJI GM6020 motor (including the rotating part of the embedded encoder); the static part consists of the static part of a DJI GM6020 motor (including the static part of the embedded encoder), an XSENS MTi-630 IMU, a Piesia WK310CA computer (Intel Core i7-8565U CPU and 16 GB RAM) and a metal square base. Note that the Velodyne VLP-16 LiDAR was mounted on the rotating mechanism as accurately as possible to ensure that the mechanism extrinsic parameters such as S L ω y and S L ω z are close to zero.

We chose the state-of-the-art (SOTA) self-calibration methods, FAST-LIO [15] and LIO-SAM [17], and the loosely coupled method, LOAM [40], as comparison methods. Since the relative pose between the LiDAR and IMU is non-constant and severe point cloud distortion results from LiDAR spinning, these comparison methods would fail when directly applied to SLiDAR.
Therefore, based on our constructed kinematic model and the two-step point cloud distortion compensation method, we transformed the original LiDAR scanning points into the frame of the static part of the encoder (i.e., F_B); then, the newly formed LiDAR scanning points were fed to these comparison methods. To distinguish them from their original forms, we term the modified FAST-LIO, LIO-SAM and LOAM for SLiDAR as FAST-SLIO, SLIO-SAM and SLOAM, respectively.
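The mapping of raw LiDAR points into F_B can be sketched as a chain of two transforms: the mechanism extrinsics from the LiDAR to the rotating part of the encoder, then a rotation by the encoder angle into the static part. The sketch below assumes the motor spins about the z-axis and uses identity mechanism extrinsics as placeholders (OMC-SLIO estimates these online); all names are illustrative:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix for an angle theta (rad) about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def lidar_point_to_static(p_L, theta, R_EL=np.eye(3), t_EL=np.zeros(3)):
    """Map a 3D LiDAR point into the static encoder frame F_B.

    p_L        : point in the LiDAR frame
    theta      : encoder angle (rad) at the point's timestamp
    R_EL, t_EL : mechanism extrinsics, LiDAR -> rotating encoder part
                 (identity placeholders; estimated online in OMC-SLIO)
    """
    p_E = R_EL @ p_L + t_EL   # LiDAR -> rotating part of the encoder
    p_B = rot_z(theta) @ p_E  # rotating part -> static part via encoder angle
    return p_B
```

Applying this per point (with the encoder angle interpolated to each point's timestamp) is what removes the spin-induced distortion before the scans are handed to the comparison methods.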
Furthermore, to validate the correctness of our formulated SLiDAR calibration, we adopted two specific offline calibration methods to estimate the mechanism external parameters [27] and sensor external parameters [21], respectively. Then, the online calibration parameters in OMC-SLIO were replaced with the corresponding external parameters from the above offline calibration. Again, we refer to such pre-calibrated OMC-SLIO as the Calib-SLIO.
We conducted two sets of comparison experiments. One set of comparison experiments was conducted in a lab room with a Nokov Motion Capture System (NMCS) (shown in Figure 5), where the positioning results of the motion capture system were used as ground truth. Another set of comparison experiments was conducted in a larger underground parking to reveal the effect of extrinsic parameter self-calibration on localization and mapping.

Lab Room
In the lab room test, the positioning result from the NMCS is regarded as the ground truth. The absolute translation errors (ATE), presented as MAE and RMSE, and the average processing time (APT) for each LiDAR scan (10 Hz) are listed in Table 1. The mapping results of the lab room using the comparison methods are shown in Figure 6. Note that we only present the results of the comparison methods that worked successfully.
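The ATE metrics reported in Table 1 can be computed from time-aligned trajectories in a few lines. A minimal sketch (function name ours), taking estimated and ground-truth positions as (N, 3) arrays and returning per-axis MAE and RMSE:

```python
import numpy as np

def ate_mae_rmse(est, gt):
    """Per-axis MAE and RMSE of the absolute translation error.

    est, gt : (N, 3) arrays of time-aligned estimated and
              ground-truth positions (e.g. odometry vs. motion capture).
    """
    err = est - gt
    mae = np.mean(np.abs(err), axis=0)          # mean absolute error per axis
    rmse = np.sqrt(np.mean(err ** 2, axis=0))   # root-mean-square error per axis
    return mae, rmse
```

Reporting the axes separately is what exposes the weaker Z-direction accuracy discussed below.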

In Table 1, it can be found that our OMC-SLIO, Calib-SLIO and FAST-SLIO all achieve centimeter-level localization accuracy, while both SLOAM and SLIO-SAM fail to function successfully. Since fast LiDAR spinning leaves only minor overlap between successive frame scans, the failure of SLOAM and SLIO-SAM stems from their reliance on matching between successive frames. Further, our OMC-SLIO, the pre-calibrated Calib-SLIO and FAST-SLIO pre-processed using our proposed method all achieve reliable localization and mapping, indicating the effectiveness of our proposed SLiDAR calibration formulation and the two-step motion compensation.
Specifically, our OMC-SLIO shows a significant improvement in Z-direction localization, with a 40% improvement in accuracy compared to FAST-SLIO. Furthermore, the Z-direction localization of all methods exhibited the largest MAE and RMSE values, owing to the weaker laser beam reflection intensity from the smooth ground (as shown in Figure 6, the green and red points indicate strong and weak laser beam reflections, respectively). For the APT metric, the average runtime of our OMC-SLIO, Calib-SLIO and FAST-SLIO is around 35 ms. This APT result is consistent because our OMC-SLIO, Calib-SLIO and FAST-SLIO all achieve similar convergence in the lab room. Consistent with the quantitative results, our OMC-SLIO shows sharper mapping details than FAST-SLIO and presents comparable mapping performance to Calib-SLIO, as shown in Figure 6.

Underground Parking
In the large underground parking, we could not use the motion capture system or receive GPS positioning signals. Since Calib-SLIO achieved the best positioning accuracy in the lab room experiments, its positioning results were adopted as the ground truth in the underground parking test. The absolute translation errors (ATE), presented via MAE and RMSE, and the average processing time (APT) for each LiDAR scan (10 Hz) are presented in Table 2. The underground parking mapping results for the comparison methods are shown in Figure 7. Again, we only present the results for the methods that worked successfully. There is no doubt that SLOAM and SLIO-SAM still fail when applied to SLiDAR. Specifically, compared with FAST-SLIO, our OMC-SLIO has an average improvement of 50% in positioning accuracy in the X and Y directions and 35% in the Z direction, as shown in Table 2. Similarly, since the dark-colored floor and roof absorb most of the laser beam energy (as shown in Figure 7, green and red dots indicate strong and weak laser beam reflections, respectively), both MAE and RMSE values in the Z direction in Table 2 are worse than those in the X or Y directions. Meanwhile, OMC-SLIO also has better real-time performance than FAST-SLIO due to better estimation convergence.
Since the estimation bias of the rotation part of the extrinsic calibration will lead to more severe point cloud bias in larger scenes, the FAST-SLIO experiments in the underground parking exhibit more prominent localization and mapping errors than the experiments in the lab room. Consistent with quantitative results, OMC-SLIO exhibits clearer mapping details, while FAST-SLIO presents a distinctly blurred mapping, as shown on the right in Figure 7.

Discussion
In this section, we discuss why our OMC-SLIO shows better performance for SLiDAR than the competing self-calibration odometry methods, as demonstrated in the previous experimental section.
The best positioning accuracy, achieved by Calib-SLIO, verifies the practicability of the calibration formulation within the SLiDAR. To some extent, our OMC-SLIO can be considered an online version of Calib-SLIO: it estimates both the mechanism and sensor parameters within the SLiDAR online, thus achieving excellent positioning and mapping accuracy. In the underground parking experiments, we used the offline-estimated extrinsic parameters of Calib-SLIO as the ground truth. Then, all the extrinsic parameters estimated online by FAST-SLIO and OMC-SLIO were averaged and taken as the final extrinsic parameter results. Table 3 shows the errors in the online extrinsic parameter estimation of FAST-SLIO and OMC-SLIO. Note that FAST-SLIO estimates only the sensor extrinsic parameters, while our OMC-SLIO estimates both the mechanism and sensor extrinsic parameters. As shown in Table 3, compared with FAST-SLIO, OMC-SLIO reduces the online calibration errors by 43%, 44%, 26%, 72%, 27% and 91% on average for the six sensor extrinsic parameters, respectively. Meanwhile, the average errors of the two additional online mechanism calibration parameters of OMC-SLIO are only 0.057° and 0.218°. FAST-SLIO does not consider the correlation of the extrinsic parameters inside the SLiDAR; in contrast, our OMC-SLIO establishes the kinematic model relating the multiple sensors inside the SLiDAR and constructs a unified observation model over the multiple extrinsic parameters and the odometry state. Therefore, OMC-SLIO is more reliable for SLiDAR and achieves better localization and mapping performance.
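The evaluation protocol above (average the per-frame online estimates over the run, then compare against the offline reference) can be sketched as follows. This is a simplified per-parameter illustration with names of our choosing; note that averaging Euler angles component-wise is only a rough approximation of proper rotation averaging on SO(3):

```python
import numpy as np

def extrinsic_calib_error(online_estimates, offline_reference):
    """Compare averaged online extrinsic estimates against an offline reference.

    online_estimates  : (N, 6) per-frame estimates, e.g. [roll, pitch, yaw, x, y, z]
    offline_reference : (6,) offline-calibrated values, treated as ground truth
    Returns the per-parameter absolute error of the averaged estimate.
    """
    final = np.mean(online_estimates, axis=0)  # average over the whole run
    return np.abs(final - offline_reference)   # per-parameter calibration error
```

The per-parameter errors returned this way are what Table 3 tabulates for FAST-SLIO and OMC-SLIO.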

Conclusions
In this paper, we propose a novel self-calibration odometry scheme for SLiDAR, named the online multiple calibration spinning LiDAR inertial odometer (OMC-SLIO), which allows online estimation of multiple extrinsic parameters among the LiDAR, rotating mechanism and IMU, as well as the odometer state. To the best of our knowledge, this is the first multi-extrinsic self-calibration LIO method for SLiDAR. By analyzing the relationships of the internal sensors, we formulate the calibration inside the SLiDAR as a mechanism calibration and a sensor calibration and build the kinematic model with the encoder information. Further, based on the kinematic model, we present a two-stage motion compensation to eliminate the point cloud distortion caused by LiDAR rotation and platform motion. Finally, the two sets of extrinsic parameters and the odometer state are wrapped in a measurement model and estimated with an error-state iterative extended Kalman filter (ESIEKF). We designed a 3D SLiDAR platform and conducted experiments in a lab room and an underground parking. The experimental results show that our OMC-SLIO is effective and attains excellent performance. Further, the extrinsic parameter comparison test verifies that our OMC-SLIO can better estimate the multiple extrinsic parameters of SLiDAR and explains why our OMC-SLIO achieves higher accuracy in localization and mapping. To maintain the consistency of the localization and mapping of our proposed OMC-SLIO, we will introduce pose graph optimization based on multi-frame matching or loop closure in future work.