
Improved IMU Preintegration with Gravity Change and Earth Rotation for Optimization-Based GNSS/VINS

1 GNSS Research Center, Wuhan University, No. 129 Luoyu Road, Wuhan 430079, China
2 Collaborative Innovation Center of Geospatial Technology, Wuhan University, No. 129 Luoyu Road, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 3048; https://doi.org/10.3390/rs12183048
Submission received: 28 July 2020 / Revised: 13 September 2020 / Accepted: 13 September 2020 / Published: 18 September 2020

Abstract

IMU preintegration has been widely used in optimization-based sensor fusion frameworks, in order to avoid reintegrating the high-frequency IMU measurements at each iteration and to retain the ability to correct the preintegrated terms when the bias estimates change. Since IMU preintegration was first proposed, several improved versions have been designed by changing the attitude parameterization or the numerical integration method. However, all of these versions neglect the change of the gravity vector and the earth rotation. In this paper, we redesign the IMU preintegration algorithm so that the earth rotation and the gravity vector are calculated from the geodetic position. Compared with the covariance matrix form, the uncertainty of the preintegrated IMU measurements is propagated in the form of a square-root information matrix (SRIM), which provides better numerical stability and is directly usable in the optimization-based framework. We evaluate the improved IMU preintegration algorithm on a dataset collected by our sensor platform, which is equipped with two different-grade IMUs. The test results show that the improved algorithm copes well with the gravity change and the earth rotation. The earth rotation must be taken into consideration for a high-grade IMU that can effectively sense it. If the change of gravity is omitted, the root-mean-square error (RMSE) of the horizontal attitude is about 1.38 times the geodetic displacement, while the positioning RMSE does not increase appreciably within a limited range, namely tens of kilometers for the low-grade IMU and several hundred meters for the high-grade IMU used in the experiment.


1. Introduction

Autonomous robots have become a hot topic over the last few years due to the growing market demand for products such as autonomous cars, rescue robots, and military robots. In order to succeed in their missions, these robots usually require accurate motion information (position, velocity, and attitude) for control and navigation. Many kinds of sensors can provide useful information for motion estimation, locally or globally, such as the camera, the IMU, and the global navigation satellite system (GNSS). Typically, a modern robot is equipped with several kinds of sensors with complementary sensing capacities, in order to make motion estimation accurate and robust. Specifically, the inertial navigation system (INS) can provide position, velocity, and attitude information at a high frequency by integrating IMU measurements [1,2,3,4]. However, IMU measurements are typically corrupted by several kinds of errors, such as the scale factor error, misalignment, biases, and sensor-induced random output noise [5]. Such measurement errors make the long-term integration of IMU measurements unreliable. In contrast to the short-term accuracy of the INS, the GNSS can provide position information in the geodetic reference frame with a relatively consistent accuracy and no position drift. However, the provided position information is low-frequency and discontinuous. For these reasons, the INS/GNSS integrated navigation system has been well-studied over the past several decades for providing high-precision, high-frequency motion information without drift [6,7,8]. However, the performance of the integrated system greatly depends on the availability of GNSS signals, especially for low-grade IMUs [7,8]. Unfortunately, satellite signals are vulnerable to electromagnetic interference and are easily shielded by high buildings, which makes GNSS measurements unreliable. Another well-known technique with the capacity for local motion estimation is the visual-inertial navigation system (VINS), which fuses the measurements provided by the camera and the IMU [9,10,11,12,13,14,15,16,17,18,19,20,21]. The two sensors are complementary in many ways: the IMU makes the metric scale and the gravity vector observable and provides accurate inter-frame motion information, while the visual information aids the IMU in bias estimation. Nevertheless, the global position and yaw estimates of the VINS eventually drift [14], unless other sensors/systems that provide global measurements, such as the GNSS, are integrated with the VINS [15,16]. According to the above analyses, there is a clear trend toward fusing the complementary information from redundant sensors for motion estimation.
Information fusion for the INS/GNSS integrated navigation system can be implemented by Bayesian filters, such as the Kalman filter or the extended Kalman filter [6]. In comparison, the information fusion algorithms of the VINS are more complex, due to the high nonlinearity of the visual residual functions and the large number of visual measurements. Generally, the information fusion algorithms of the VINS can be divided into two categories, namely the optimization-based methods [9,10,11,12,13,15,16] and the Bayesian filter-based methods [14,17,18,19,20,21]. The optimization-based methods can typically achieve a higher accuracy than the filter-based methods by benefitting from the ability to re-linearize. The computational complexity of the filter-based methods is cubic in the number of visible mappoints. To achieve real-time performance, the number of mappoints in the state of the filter is strictly limited, which also decreases the accuracy of information fusion [22]. The multi-state constraint Kalman filter (MSCKF) is a special augmented Kalman filter that keeps a sliding window of past poses in its state vector [20,21]. The positions of the mappoints are first marginalized by using the left null space trick, and a probabilistic constraint between the poses is then constructed. Hence, its computational complexity is linear in the number of mappoints and cubic in the number of poses in the window. The optimization-based methods have also become highly efficient by exploiting the first- and second-order sparsity structure of the normal equation [23,24]. However, full optimization eventually becomes infeasible as the number of measurements increases. To limit the computational complexity, sliding window optimization methods have been designed and widely used for solving the VINS problem. These methods work in the spirit of optimization, but only keep a sliding window of poses [9,10,11,12,13]; the states outside the sliding window are either directly fixed [9,12] or marginalized out [10,11,13].
Typically, the sampling frequency of the IMU is high, approximately hundreds of Hertz. For the filter-based methods, the time update is required to operate at the IMU sampling frequency. For the optimization-based methods, it is impracticable to perform optimization at the IMU sampling frequency. One possible solution to this problem is to integrate the IMU measurements over the sampling interval of the other, low-frequency sensors. However, this raises another problem: the initial integration values are unknown or inaccurate. Once the initial integration values are updated at each iteration of the optimization, the IMU measurements need to be re-integrated, which increases the computational complexity. In order to fuse with other sensors, the uncertainty of the integrated IMU measurements is also necessary. To propagate the uncertainty, the initial uncertainty traditionally needs to be calculated by back-propagating the measurement uncertainties of the sensors after each iteration, and the uncertainty of the integrated IMU measurements is then re-propagated, which also causes additional computation.
To cope with these two challenges, the IMU preintegration technique was first developed by Lupton et al. [25,26] for the optimization-based VINS. The technique can also be used in other optimization-based multi-sensor fusion frameworks, such as the LiDAR/IMU/GNSS integrated navigation system [27]. In [25,26], the attitude is parametrized by the Euler angles, which causes a singularity in specific cases [28]. In [29,30], the attitude is parametrized on the special orthogonal group SO(3), in order to avoid the singularity and carefully cope with the uncertainty propagation of the preintegrated IMU measurements. In [10], the attitude is parametrized by the attitude quaternion, and the covariance propagation of the preintegrated IMU measurements is calculated by using the dynamic equations of the preintegration error states. In [31,32], closed-form solutions of IMU preintegration are derived by using linear system theory. However, all of the above IMU preintegration algorithms assume that the gravity vector is constant in the VINS reference frame. In practice, the gravity vector changes with the position relative to the VINS reference frame: the larger the relative displacement, the more obvious the change of gravity. The earth rotation is also not considered in the above IMU preintegration algorithms. In comparison, the gravity change and earth rotation are well-modeled in the strapdown inertial navigation algorithms [1,2,3,4] and in the INS/GNSS and INS/GNSS/Vision integrated navigation systems [6,7,8], in order to remove their influence from the IMU measurements. Motivated by these conventional INS algorithms, we redesigned the IMU preintegration algorithm by taking the change of gravity and the earth rotation into consideration. Both terms are functions of the geodetic position. There are several geodetic reference models for the earth, such as the Geodetic Reference System 1980 (GRS-80), the World Geodetic System 1984 (WGS-84), and the China Geodetic Coordinate System 2000 (CGCS-2000). The WGS-84 model has been widely used in INS-related algorithms and is also adopted in this paper.
The preintegrated IMU measurements can be used to construct a preintegration factor, which needs to be weighted by its square-root information matrix before being added to the optimization. In this paper, the uncertainty of the preintegrated IMU measurements is propagated in the form of a square-root information matrix [33]. Compared with the covariance propagation form [25,26,29,30,31,32], the square-root information matrix is directly available for weighting the preintegrated IMU measurements and theoretically has better numerical stability [33]. IMU measurements are typically corrupted by several kinds of errors. In this paper, we assume that the IMU has been well-calibrated offline, so the remaining IMU measurement errors are the sensor-induced random output noise and the slowly time-varying biases. The sensor-induced random output noise is modeled as a white noise process, and the slowly time-varying bias is modeled as a first-order Markov process.
In order to evaluate the performance of the improved IMU preintegration algorithm, we implement a tightly-coupled monocular visual-inertial system [34], and the GNSS positioning results are fused in a loosely-coupled way. We perform experiments by using the dataset collected by our sensor platform. The sensor platform is equipped with two different-grade IMUs, in order to analyze the influence of the earth rotation and the change of gravity on different-grade IMUs.
Our contributions can be summarized as follows:
  • We have redesigned the IMU preintegration algorithm by taking the earth rotation and the change of gravity into consideration;
  • The uncertainty of the preintegrated IMU measurements has been propagated in the form of a square-root information matrix;
  • We have evaluated the influence of the earth rotation and the change of gravity on the preintegrated IMU measurements by using two different-grade IMUs.
The remainder of this paper is organized as follows. In Section 2, the coordinate frames, notations, attitude parameterizations, and geodetic reference model of the earth are introduced. In Section 3, the details of the improved IMU preintegration algorithm are given, including the position, velocity, and attitude dynamic equations in the W frame; the IMU error model; the uncertainty propagation of the preintegrated IMU measurements; and the IMU preintegration factors. In Section 4, the joint optimization framework of the VINS is briefly described. In Section 5, the experimental setup is introduced. In Section 6, the experimental results are given. In Section 7, we discuss the influence of the gravity change and the earth rotation on the IMU preintegration, according to the experimental results. In Section 8, we draw the conclusions.

2. Preliminaries

In this section, the fundamental background used in designing the IMU preintegration algorithm is introduced, including the definitions of the coordinate frames and notations, the attitude parameterizations, and the earth geodetic reference model. This background has been well-developed in the INS and GNSS research communities over several decades.

2.1. Coordinate Frames

A coordinate frame is defined by three mutually perpendicular unit vectors forming a right-handed system. For a mathematical operation to be valid, all related free vectors and points should be projected into the same reference coordinate frame.
The E frame is the earth-centered, earth-fixed coordinate frame, with its Z axis parallel to the earth polar axis, its X axis passing through the Greenwich meridian, and its Y axis lying in the equatorial plane to form a right-handed coordinate frame.
The I frame is the non-rotating and non-accelerating inertial coordinate frame used as the reference coordinate frame for the IMU measurement. Theoretically, its orientation can be arbitrary. In this paper, the E frame at the initial time is selected as a historically fixed inertial coordinate frame.
The L frame is the local level coordinate frame, with its X axis parallel to the east, Y axis parallel to the north, and Z axis vertical to the local earth surface.
The B frame is the IMU coordinate frame with axes parallel to the nominal orthogonal input axes.
The C frame is the camera coordinate frame with the origin at the optical center, the X axis horizontal and pointing to the right, and the Y axis vertical and pointing downwards.
The W frame is the reference coordinate frame for the VINS. In this paper, the L frame at the initial position or another fixed position is selected as the W frame.

2.2. Nomenclature

$A_0, A_1, A_2$: arbitrary orthogonal right-handed coordinate frames;
$f$: specific force measured by the accelerometer, produced by the non-gravitational force;
$g$: local plumb-bob gravity vector, a function of the geodetic position;
$p_{A_1 A_2}^{A_0}$: position vector of frame $A_2$ relative to frame $A_1$, projected in frame $A_0$;
$v_{A_1 A_2}^{A_0}$: velocity vector of frame $A_2$ relative to frame $A_1$, projected in frame $A_0$;
$\omega_{A_1 A_2}^{A_0}$: angular velocity vector of frame $A_2$ relative to frame $A_1$, projected in frame $A_0$;
$C_{A_2}^{A_1}$: direction cosine matrix that transforms a vector from frame $A_2$ to frame $A_1$;
$q_{A_2}^{A_1}$: attitude quaternion corresponding to $C_{A_2}^{A_1}$;
$\theta_{A_2}^{A_1}$: rotation (angle-axis) vector defined by the axis of rotation and the magnitude of the rotation around that axis;
$I_n$: $n$-dimensional identity matrix;
$0_{m \times n}$: zero matrix with $m$ rows and $n$ columns.
For an arbitrary vector $V = [V_1, V_2, V_3]^T$:
$[V\times]$: skew-symmetric matrix of the vector, $[V\times] = \begin{bmatrix} 0 & -V_3 & V_2 \\ V_3 & 0 & -V_1 \\ -V_2 & V_1 & 0 \end{bmatrix}$;
$V_q$: quaternion form of the vector, $V_q = [0,\ V^T]^T$;
$\bar V$: a bar above a vector denotes a constant vector;
$\hat V$: a hat above a vector denotes an estimate;
$\tilde V$: a tilde above a vector denotes a measurement;
$\dot V$: a dot above a vector denotes its time derivative.

2.3. Attitude Parameterization

The direction cosine matrix, the rotation vector, and the attitude quaternion can be used to parameterize the attitude (rotation) between two coordinate frames with no singularity. Moreover, the three parameterization methods are mathematically equivalent.
The direction cosine matrix $C_{A_2}^{A_1}$ is an orthonormal matrix whose columns are the projections of the unit vectors along the axes of frame $A_2$ into frame $A_1$. For an arbitrary vector $V \in \mathbb{R}^3$ with projections $V^{A_1}$, $V^{A_2}$ in the coordinate frames $A_1$, $A_2$, respectively, the following relationship holds [1]:

$$V^{A_1} = C_{A_2}^{A_1} V^{A_2}, \qquad V^{A_2} = \left(C_{A_2}^{A_1}\right)^T V^{A_1}. \tag{1}$$
According to Equation (1), it is easy to derive the chain rule of the direction cosine matrix:
$$C_{A_2}^{A_0} = C_{A_1}^{A_0} C_{A_2}^{A_1}. \tag{2}$$
The rotation vector $\theta_{A_2}^{A_1} = \phi\, u_{A_2}^{A_1}$ is defined by a unit vector $u_{A_2}^{A_1}$ and the magnitude of rotation $\phi$ around it. According to the Rodrigues formula [1,35], the corresponding direction cosine matrix $C_{A_2}^{A_1}$ is a function of the rotation vector, as follows:

$$C(\theta_{A_2}^{A_1}) = \cos\phi\, I_3 + (1 - \cos\phi)\, u_{A_2}^{A_1} \left(u_{A_2}^{A_1}\right)^T + \sin\phi\, [u_{A_2}^{A_1}\times]. \tag{3}$$
The attitude quaternion is composed of four elements, as follows:
$$q_{A_2}^{A_1} = q_w + q_x i + q_y j + q_z k = \begin{bmatrix} q_w \\ q_v \end{bmatrix}, \tag{4}$$

where $q_w, q_x, q_y, q_z \in \mathbb{R}$, $q_v = [q_x, q_y, q_z]^T \in \mathbb{R}^3$, and $\|q_{A_2}^{A_1}\| = \sqrt{q_w^2 + q_x^2 + q_y^2 + q_z^2} = 1$. We refer readers to reference [35] for more details about the quaternion. The attitude quaternion is also a function of the rotation vector, as follows:

$$q(\theta_{A_2}^{A_1}) = \begin{bmatrix} \cos(0.5\phi) \\ \sin(0.5\phi)\, u_{A_2}^{A_1} \end{bmatrix}. \tag{5}$$
The chain rule of the attitude quaternion is defined on the basis of quaternion multiplication, as follows:
$$q_{A_2}^{A_0} = q_{A_1}^{A_0} \otimes q_{A_2}^{A_1} = [q_{A_1}^{A_0}]_L\, q_{A_2}^{A_1} = [q_{A_2}^{A_1}]_R\, q_{A_1}^{A_0}, \tag{6}$$

where $\otimes$ is the quaternion multiplication operator and $q_{A_1}^{A_0} = [q_w\ q_v^T]^T$. The left and right quaternion multiplication matrices are as follows:

$$[q_{A_1}^{A_0}]_L = \begin{bmatrix} q_w & -q_v^T \\ q_v & q_w I_3 + [q_v\times] \end{bmatrix}, \tag{7}$$

$$[q_{A_2}^{A_1}]_R = \begin{bmatrix} q_w & -q_v^T \\ q_v & q_w I_3 - [q_v\times] \end{bmatrix}. \tag{8}$$
The conversion relationships between the direction cosine matrix and the attitude quaternion are as follows:
$$q(C_{A_2}^{A_1}) = \begin{bmatrix} q_w \\ q_v \end{bmatrix}, \qquad \begin{cases} q_w = \dfrac{\sqrt{tr(C_{A_2}^{A_1}) + 1}}{2} \\[2mm] q_v = \left[\dfrac{C_{32} - C_{23}}{4 q_w},\ \dfrac{C_{13} - C_{31}}{4 q_w},\ \dfrac{C_{21} - C_{12}}{4 q_w}\right]^T \end{cases}, \tag{9}$$

$$C(q_{A_2}^{A_1}) = (2 q_w^2 - 1)\, I_3 + 2 q_w [q_v\times] + 2 q_v q_v^T, \tag{10}$$

where $C_{A_2}^{A_1} = \{C_{ij}\}$, $C_{ij} \in \mathbb{R}$, $i, j \in \{1, 2, 3\}$, and $tr(C_{A_2}^{A_1})$ is the trace of $C_{A_2}^{A_1}$, i.e., the sum of the diagonal elements of the direction cosine matrix.
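To make the above parameterizations concrete, the following is a minimal NumPy sketch of Equations (3)-(10); the function and variable names are our own illustrative choices, not part of the original implementation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v x] of a 3-vector (see Section 2.2)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def dcm_from_rotvec(theta):
    """Rodrigues formula, Equation (3)."""
    phi = np.linalg.norm(theta)
    if phi < 1e-12:
        return np.eye(3) + skew(theta)   # first-order approximation for tiny angles
    u = theta / phi
    return (np.cos(phi) * np.eye(3)
            + (1.0 - np.cos(phi)) * np.outer(u, u)
            + np.sin(phi) * skew(u))

def quat_from_rotvec(theta):
    """Equation (5); quaternion stored as [qw, qx, qy, qz]."""
    phi = np.linalg.norm(theta)
    if phi < 1e-12:
        return np.concatenate(([1.0], 0.5 * theta))
    u = theta / phi
    return np.concatenate(([np.cos(0.5 * phi)], np.sin(0.5 * phi) * u))

def quat_mul(q1, q2):
    """Quaternion chain rule, Equations (6)-(7), via the left-multiplication matrix."""
    qw, qv = q1[0], q1[1:]
    L = np.zeros((4, 4))
    L[0, 0], L[0, 1:] = qw, -qv
    L[1:, 0], L[1:, 1:] = qv, qw * np.eye(3) + skew(qv)
    return L @ q2

def dcm_from_quat(q):
    """Equation (10)."""
    qw, qv = q[0], q[1:]
    return (2.0 * qw**2 - 1.0) * np.eye(3) + 2.0 * qw * skew(qv) + 2.0 * np.outer(qv, qv)
```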

2.4. Time Derivative of the Attitude Parameter

The time derivative of the direction cosine matrix C A 2 A 1 is as follows:
$$\dot{C}_{A_2}^{A_1} = C_{A_2}^{A_1} [\omega_{A_1 A_2}^{A_2}\times]. \tag{11}$$
The time derivative of the attitude quaternion q A 2 A 1 is as follows:
$$\dot{q}_{A_2}^{A_1} = \frac{1}{2}\, q_{A_2}^{A_1} \otimes \left(\omega_{A_1 A_2}^{A_2}\right)_q. \tag{12}$$

2.5. The Earth Model

In this paper, the WGS-84 geodetic reference model is adopted for modeling the earth as shown in Figure 1. Set the geodetic position (latitude, longitude, and altitude) in the WGS-84 reference frame as φ , λ , and h . Then, the direction cosine matrix between the L frame and E frame is as follows:
$$C_L^E(\varphi, \lambda) = \begin{bmatrix} -\sin\lambda & -\cos\lambda\sin\varphi & \cos\lambda\cos\varphi \\ \cos\lambda & -\sin\lambda\sin\varphi & \sin\lambda\cos\varphi \\ 0 & \cos\varphi & \sin\varphi \end{bmatrix}. \tag{13}$$
The XYZ position in the E frame can be recovered from the geodetic position, as follows:
$$p_{EL}^E(\varphi, \lambda, h) = \begin{bmatrix} (R_N + h)\cos\varphi\cos\lambda \\ (R_N + h)\cos\varphi\sin\lambda \\ \left[R_N (1 - e^2) + h\right]\sin\varphi \end{bmatrix}, \tag{14}$$

where $e = 0.08181919104282$ is the eccentricity of the earth and $R_N$ is the radius of curvature in the prime vertical. It is a function of the latitude, i.e., $R_N(\varphi) = a / \sqrt{1 - e^2\sin^2\varphi}$, where $a = 6378137\ \mathrm{m}$ is the equatorial earth radius.
The geodetic position can also be recovered from the XYZ position in the E frame. The longitude can be recovered directly as follows:
$$\lambda = \arctan\left(\frac{p_{EL}^E(1)}{p_{EL}^E(0)}\right). \tag{15}$$
The latitude and altitude need to be recovered by the following iterative algorithm. The initial values are set as follows:
$$h = 0, \qquad R_N = a, \qquad r = \sqrt{p_{EL}^E(0)^2 + p_{EL}^E(1)^2}. \tag{16}$$
Then, the latitude and altitude are iteratively calculated until convergence, as follows:
$$\sin\varphi = \frac{p_{EL}^E(2)}{R_N (1 - e^2) + h}, \qquad \varphi = \arctan\left(\frac{p_{EL}^E(2) + e^2 R_N(\varphi)\sin\varphi}{r}\right), \qquad h = \frac{r}{\cos\varphi} - R_N. \tag{17}$$
For notational convenience, the computation process of Equations (15)-(17) is denoted as follows:

$$[\varphi, \lambda, h] = LLA(p_{EL}^E). \tag{18}$$
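The forward mapping of Equation (14) and the iterative inversion of Equations (15)-(17) can be sketched as follows; this is a NumPy illustration using the WGS-84 constants above, with a fixed iteration count standing in for a convergence test.

```python
import numpy as np

A_WGS84 = 6378137.0            # equatorial radius a [m]
E_WGS84 = 0.08181919104282     # first eccentricity e

def R_N(phi):
    """Radius of curvature in the prime vertical, R_N(phi)."""
    return A_WGS84 / np.sqrt(1.0 - (E_WGS84 * np.sin(phi))**2)

def lla_to_ecef(phi, lam, h):
    """Equation (14): geodetic position (radians, meters) to ECEF XYZ."""
    rn = R_N(phi)
    return np.array([(rn + h) * np.cos(phi) * np.cos(lam),
                     (rn + h) * np.cos(phi) * np.sin(lam),
                     (rn * (1.0 - E_WGS84**2) + h) * np.sin(phi)])

def ecef_to_lla(p, iterations=10):
    """Equations (15)-(17): ECEF XYZ to geodetic position, solved iteratively."""
    lam = np.arctan2(p[1], p[0])            # Equation (15)
    r = np.hypot(p[0], p[1])                # Equation (16) initial values
    h, rn, phi = 0.0, A_WGS84, 0.0
    for _ in range(iterations):             # Equation (17), repeated to convergence
        sin_phi = p[2] / (rn * (1.0 - E_WGS84**2) + h)
        phi = np.arctan((p[2] + E_WGS84**2 * rn * sin_phi) / r)
        rn = R_N(phi)
        h = r / np.cos(phi) - rn
    return phi, lam, h
```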
The normal gravity in the L frame is as follows:
$$g_\gamma^L(\varphi, h) = [0,\ 0,\ -g_\gamma(\varphi, h)]^T, \tag{19}$$

where $g_\gamma(\varphi, h)$ is calculated from the WGS-84 gravity model parameters, as follows:

$$\begin{aligned} g_\gamma(\varphi, h) &= g_\gamma(\varphi) - (3.0877 - 0.0044\sin^2\varphi)\times 10^{-6}\, h + 0.072\times 10^{-12}\, h^2, \\ g_\gamma(\varphi) &= 9.7803253\times\left(1 + 0.0053022\sin^2\varphi - 0.0000058\sin^2 2\varphi\right). \end{aligned} \tag{20}$$
The gravity anomaly is omitted in our algorithm. Therefore, the normal gravity is approximately considered as the local gravity in this paper.
The earth rotation vector in the E frame is as follows:
$$\bar\omega_{IE}^E = [0,\ 0,\ \Omega_{IE}]^T, \tag{21}$$

where $\Omega_{IE} = 7.292115\times 10^{-5}\ \mathrm{rad/s}$.
The measurement of the accelerometer is a specific force. The influence of gravity needs to be removed from the preintegrated IMU measurements, in order to recover the metric motion. Therefore, we need to know the gravity vector projected in the W frame. The measurement of a gyroscope (rate sensor) is the angular velocity relative to the I frame. The influence of the earth rotation needs to be removed from the gyroscope measurement, in order to recover the real angular velocity relative to the earth.
Set the geodetic position of the W frame as $\varphi_0$, $\lambda_0$, and $h_0$. The XYZ position $p_{EW}^E$ in the E frame can be calculated by Equation (14). Then, the direction cosine matrix $C_{L_0}^E = C_W^E$ between the W frame and the E frame can be calculated by Equation (13).
The earth rotation vector in the W frame is calculated as follows:
$$\bar\omega_{IE}^W = \left(C_{L_0}^E\right)^T \bar\omega_{IE}^E. \tag{22}$$
Set the position of the IMU at time $t$ in the W frame as $p_{WB_t}^W$. Then, its position in the E frame can be calculated as follows:
$$p_{EB_t}^E = p_{EW}^E + C_W^E\, p_{WB_t}^W. \tag{23}$$
Then, the geodetic position of the IMU at time $t$ can be calculated by Equation (18), i.e., $[\varphi_t, \lambda_t, h_t] = LLA(p_{EB_t}^E)$. The local normal gravity vector $g_\gamma^{L_t}(\varphi_t, h_t)$ in the L frame can be calculated by Equation (19), and the direction cosine matrix $C_{L_t}^E(\varphi_t, \lambda_t)$ by Equation (13). Therefore, the direction cosine matrix between the L frame and the W frame is $C_{L_t}^W = (C_W^E)^T C_{L_t}^E$. Finally, the gravity vector in the W frame is calculated as follows:

$$\hat g^W = C_{L_t}^W\, g_\gamma^{L_t}. \tag{24}$$
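The whole chain of Equations (13) and (19)-(24) can be sketched as follows. This sketch reuses lla_to_ecef and ecef_to_lla from the previous code block, and the minus sign in the gravity vector reflects the ENU (east-north-up) convention of the L frame.

```python
import numpy as np

OMEGA_IE = 7.292115e-5  # earth rotation rate [rad/s], Equation (21)

def C_L_to_E(phi, lam):
    """Equation (13): DCM from the local-level (ENU) frame to the E frame."""
    sp, cp, sl, cl = np.sin(phi), np.cos(phi), np.sin(lam), np.cos(lam)
    return np.array([[-sl, -cl * sp, cl * cp],
                     [ cl, -sl * sp, sl * cp],
                     [0.0,       cp,      sp]])

def normal_gravity(phi, h):
    """Equation (20): WGS-84 normal gravity magnitude [m/s^2]."""
    s2 = np.sin(phi)**2
    g0 = 9.7803253 * (1.0 + 0.0053022 * s2 - 0.0000058 * np.sin(2.0 * phi)**2)
    return g0 - (3.0877 - 0.0044 * s2) * 1e-6 * h + 0.072e-12 * h**2

def gravity_and_earth_rate_in_W(phi0, lam0, h0, p_WB_W):
    """Equations (22)-(24): earth rotation vector and gravity vector in W."""
    C_W_E = C_L_to_E(phi0, lam0)                         # W frame fixed at (phi0, lam0, h0)
    w_ie_W = C_W_E.T @ np.array([0.0, 0.0, OMEGA_IE])    # Equation (22)
    p_EB_E = lla_to_ecef(phi0, lam0, h0) + C_W_E @ p_WB_W  # Equation (23)
    phi_t, lam_t, h_t = ecef_to_lla(p_EB_E)              # Equation (18)
    g_L = np.array([0.0, 0.0, -normal_gravity(phi_t, h_t)])  # Equation (19)
    C_Lt_W = C_W_E.T @ C_L_to_E(phi_t, lam_t)
    return C_Lt_W @ g_L, w_ie_W                          # Equation (24)
```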

3. The Improved IMU Preintegration Technology

The position, velocity, and attitude are obtained by integrating the IMU measurements. For performing such integrations, the dynamic equations for the position, velocity, and attitude should be derived first. Generally, the position, velocity, and attitude of the VINS are parametrized in the W frame. Hence, the dynamic equations are also derived in the W frame.
First, the relationship of the positions parameterized in the W frame and I frame is as follows:
$$p_{IB}^I = C_E^I \left(\bar p_{IW}^E + \bar C_W^E\, p_{WB}^W\right), \tag{25}$$

where $\bar p_{IW}^E = \bar p_{EW}^E$ because the origins of the E frame and the I frame coincide, i.e., $\bar p_{IE}^E = 0_{3\times 1}$, and the W frame is fixed with respect to the earth.
According to Equation (11), the time derivative of Equation (25) is as follows:
$$v_{IB}^I = C_E^I [\bar\omega_{IE}^E\times]\left(\bar p_{IW}^E + \bar C_W^E p_{WB}^W\right) + C_E^I \bar C_W^E v_{WB}^W, \tag{26}$$

where $v_{IB}^I = \dot p_{IB}^I$, $v_{WB}^W = \dot p_{WB}^W$, and $\bar\omega_{IE}^E$ is the earth rotation vector in the E frame.
For obtaining the acceleration information, we need to further derive the time derivative of Equation (26) as follows:
$$a_{IB}^I = C_E^I [\bar\omega_{IE}^E\times][\bar\omega_{IE}^E\times]\left(\bar p_{IW}^E + \bar C_W^E p_{WB}^W\right) + 2 C_E^I [\bar\omega_{IE}^E\times] \bar C_W^E v_{WB}^W + C_E^I \bar C_W^E \dot v_{WB}^W, \tag{27}$$

where $a_{IB}^I = \dot v_{IB}^I$.
Substituting the identity $[\bar\omega_{IE}^E\times] = \bar C_W^E [\bar\omega_{IE}^W\times] \bar C_E^W$ into Equation (27), we have

$$a_{IB}^I = C_W^I [\bar\omega_{IE}^W\times][\bar\omega_{IE}^W\times]\left(\bar p_{IW}^W + p_{WB}^W\right) + 2 C_W^I [\bar\omega_{IE}^W\times] v_{WB}^W + C_W^I \dot v_{WB}^W. \tag{28}$$
Left-multiplying both sides of Equation (28) by $C_I^W$ and reorganizing, we have

$$C_B^W a_{IB}^B = [\bar\omega_{IE}^W\times][\bar\omega_{IE}^W\times]\left(\bar p_{IW}^W + p_{WB}^W\right) + 2 [\bar\omega_{IE}^W\times] v_{WB}^W + \dot v_{WB}^W. \tag{29}$$
The acceleration $a_{IB}^B$ relative to the inertial space is the sum of the specific force $f^B$ and the gravitational acceleration $G^B$, i.e., $a_{IB}^B = f^B + G^B$. Substituting this into Equation (29), we have

$$\dot v_{WB}^W = C_B^W f^B + G^W - [\bar\omega_{IE}^W\times][\bar\omega_{IE}^W\times]\left(\bar p_{IW}^W + p_{WB}^W\right) - 2[\bar\omega_{IE}^W\times] v_{WB}^W = C_B^W f^B + g^W - 2[\bar\omega_{IE}^W\times] v_{WB}^W, \tag{30}$$

where $g^W = G^W - [\bar\omega_{IE}^W\times][\bar\omega_{IE}^W\times]\left(\bar p_{IW}^W + p_{WB}^W\right)$ is the gravity in the W coordinate frame, and $2[\bar\omega_{IE}^W\times] v_{WB}^W$ is the well-known Coriolis acceleration.
Equation (30) is the velocity differential equation. The position differential equation can be easily expressed as follows:
$$\dot p_{WB}^W = v_{WB}^W. \tag{31}$$
The attitude is parameterized by the attitude quaternion, and its time derivative can be derived according to Equation (12), as follows:

$$\dot q_B^W = \frac{1}{2} q_B^W \otimes \left(\omega_{WB}^B\right)_q = \frac{1}{2} q_B^W \otimes \left(\omega_{IB}^B - \omega_{IE}^B - \bar\omega_{EW}^B\right)_q = \frac{1}{2} q_B^W \otimes \left(\omega_{IB}^B - C(q_W^B)\, \bar\omega_{IE}^W\right)_q, \tag{32}$$

where $\bar\omega_{EW}^B = 0_{3\times 1}$ because the W frame is fixed with respect to the earth.
Then, the position, velocity, and attitude can be reckoned by numerically integrating the three dynamic equations, as follows:
$$\begin{aligned} p_{WB_{k+1}}^W &= p_{WB_k}^W + v_{WB_k}^W \Delta t_{k,k+1} + \iint_{t_k}^{t_{k+1}} C_{B_t}^W f^{B_t}\, dt^2 + \iint_{t_k}^{t_{k+1}} g_t^W\, dt^2 - 2[\bar\omega_{IE}^W\times] \int_{t_k}^{t_{k+1}} \left(p_{WB_t}^W - p_{WB_k}^W\right) dt \\ &\approx p_{WB_k}^W + v_{WB_k}^W \Delta t_{k,k+1} + C_{B_k}^W \alpha_{k,k+1} + \frac{1}{2}\, g_k^W \Delta t_{k,k+1}^2 - 2[\bar\omega_{IE}^W\times] \sum_{i=1}^{N} \left(p_{WB_i}^W - p_{WB_k}^W\right)\Delta t_i, \end{aligned} \tag{33}$$

where $\Delta t_{k,k+1} = t_{k+1} - t_k$; $t_k$ is written as $k$ in many subscripts and superscripts for convenience; $N$ is the number of IMU samples during $[t_k, t_{k+1}]$, and $\Delta t_i$ is the IMU sampling interval. The change of gravity is negligible during $[t_k, t_{k+1}]$, so $g_k^W$ is used to represent the gravity during the time interval. Moreover, $\alpha_{k,k+1}$ is the so-called IMU position preintegration, as follows:
$$\alpha_{k,k+1} = \iint_{t_k}^{t_{k+1}} C_{B_t}^{B_k} f^{B_t}\, dt^2. \tag{34}$$
The velocity is reckoned as follows:
$$v_{WB_{k+1}}^W = v_{WB_k}^W + \int_{t_k}^{t_{k+1}} C_{B_t}^W f^{B_t}\, dt + \int_{t_k}^{t_{k+1}} g_t^W\, dt - 2[\bar\omega_{IE}^W\times]\int_{t_k}^{t_{k+1}} v_{WB_t}^W\, dt = v_{WB_k}^W + C_{B_k}^W \beta_{k,k+1} + g_k^W \Delta t_{k,k+1} - 2[\bar\omega_{IE}^W\times]\left(p_{WB_{k+1}}^W - p_{WB_k}^W\right), \tag{35}$$
where $\beta_{k,k+1}$ is the so-called IMU velocity preintegration, as follows:
$$\beta_{k,k+1} = \int_{t_k}^{t_{k+1}} C_{B_t}^{B_k} f^{B_t}\, dt. \tag{36}$$
Finally, the attitude increment is the so-called IMU attitude preintegration. It can be reckoned by integrating Equation (32), as follows:
$$\gamma_{B_{k+1}}^{B_k} = \int_{t_k}^{t_{k+1}} \frac{1}{2}\, q_{B_t}^{B_k} \otimes \left(\omega_{IB_t}^{B_t} - C(q_W^{B_t})\, \bar\omega_{IE}^W\right)_q dt. \tag{37}$$

3.1. IMU Error Model and Preintegrated IMU Measurements

Typically, the IMU measurements are corrupted by several kinds of errors, such as the scale factor error, misalignment, sensor-induced random output noise, and slowly time-varying biases. In this paper, we assume that the IMU has been well-calibrated offline, so the scale factor error and misalignment are negligible. The sensor-induced random output noise is modeled as a white noise process, and the slowly time-varying biases are modeled as first-order Markov processes. Then, the IMU measurements are modeled as follows:
$$\tilde f^{B_t} = f^{B_t} + b_{Acc,t} + \eta_{Acc,t}, \qquad \tilde\omega_{IB_t}^{B_t} = \omega_{IB_t}^{B_t} + b_{Gyro,t} + \eta_{Gyro,t}, \tag{38}$$

where $\eta_{Acc,t}$ and $\eta_{Gyro,t}$ are Gaussian white noise processes, with $\eta_{Acc,t} \sim N(0, \sigma_{Acc}^2)$ and $\eta_{Gyro,t} \sim N(0, \sigma_{Gyro}^2)$; $N(\cdot,\cdot)$ denotes a normal distribution. $b_{Acc,t}$ and $b_{Gyro,t}$ are the slowly time-varying IMU biases, modeled as follows:
$$\dot b_{Acc,t} = -\frac{1}{\tau_{Acc}}\, b_{Acc,t} + n_{Acc,t}, \qquad \dot b_{Gyro,t} = -\frac{1}{\tau_{Gyro}}\, b_{Gyro,t} + n_{Gyro,t}, \tag{39}$$

where $\tau_{Acc}$ and $\tau_{Gyro}$ are the correlation times of the Markov processes, and $n_{Acc,t}$, $n_{Gyro,t}$ are the driving Gaussian white noise processes, with $n_{Acc,t} \sim N(0, \sigma_{bAcc}^2)$ and $n_{Gyro,t} \sim N(0, \sigma_{bGyro}^2)$.
The IMU preintegration Equations (34), (36), and (37) can be solved by numerical integration. The initial body frame $B_k$ is selected as the reference frame for integration, so the initial values are known and independent of the position, velocity, and attitude in the W coordinate frame, i.e., $\alpha_{k,k} = 0_{3\times 1}$, $\beta_{k,k} = 0_{3\times 1}$, $C(q_{B_k}^{B_k}) = I_3$. This is the core idea of the IMU preintegration algorithm.
Set the IMU preintegrations at time $t_i$ as $\hat\alpha_{k,i}$, $\hat\beta_{k,i}$, and $\hat\gamma_{B_i}^{B_k}$, and the IMU bias estimates as $\hat b_{Gyro,k}$ and $\hat b_{Acc,k}$. When new IMU measurements $\tilde f^{B_j}$ and $\tilde\omega_{IB_j}^{B_j}$ are received, with $t_i < t_j \in [t_k, t_{k+1}]$, the IMU preintegrations are updated recursively, as follows:

$$\begin{aligned} \hat\gamma_{B_j}^{B_k} = \hat q_{B_j}^{B_k} &= \hat\gamma_{B_i}^{B_k} \otimes \begin{bmatrix} 1 \\ 0.5\left(\tilde\omega_{IB_j}^{B_j} - C(\hat q_W^{B_i})\,\bar\omega_{IE}^W - \hat b_{Gyro,k}\right)\Delta t_{ij} \end{bmatrix}, \\ \hat\alpha_{k,j} &= \hat\alpha_{k,i} + \hat\beta_{k,i}\,\Delta t_{ij} + 0.5\, C(\hat\gamma_{B_j}^{B_k})\left(\tilde f^{B_j} - \hat b_{Acc,k}\right)\Delta t_{ij}^2, \\ \hat\beta_{k,j} &= \hat\beta_{k,i} + C(\hat\gamma_{B_j}^{B_k})\left(\tilde f^{B_j} - \hat b_{Acc,k}\right)\Delta t_{ij}, \end{aligned} \tag{40}$$

where $\Delta t_{ij} = t_j - t_i$.
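A sketch of one recursive update step of Equation (40) is given below; it reuses quat_mul and dcm_from_quat from the Section 2.3 sketch, and assumes that the current attitude estimate $C_W^{B_i}$ and the earth rotation vector $\bar\omega_{IE}^W$ are supplied by the navigation state. The explicit quaternion renormalization is our own numerical safeguard, not part of Equation (40).

```python
import numpy as np

def preintegration_update(alpha, beta, gamma, f_meas, w_meas,
                          b_acc, b_gyro, C_W_Bi, w_ie_W, dt):
    """One IMU-sample update of (alpha, beta, gamma); gamma is a unit quaternion."""
    # Attitude: remove the earth rotation (projected into the body frame) and the gyro bias.
    w_corr = w_meas - C_W_Bi @ w_ie_W - b_gyro
    dq = np.concatenate(([1.0], 0.5 * w_corr * dt))
    gamma_new = quat_mul(gamma, dq)
    gamma_new /= np.linalg.norm(gamma_new)          # keep the quaternion normalized
    # Specific force rotated into the B_k frame with the updated attitude.
    f_corr = dcm_from_quat(gamma_new) @ (f_meas - b_acc)
    alpha_new = alpha + beta * dt + 0.5 * f_corr * dt**2
    beta_new = beta + f_corr * dt
    return alpha_new, beta_new, gamma_new
```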

3.2. Uncertainty Propagation and Jacobian Matrix of Bias Correction

The uncertainty propagation of the IMU preintegration is similar to the time update of the Bayesian filter used in the INS/GNSS integrated navigation system. Because the IMU preintegration errors are time-varying, we also need linear dynamic equations for tracking the errors, just like the Psi- and Phi-angle error equations in the INS/GNSS integrated navigation system [36]. The Jacobian matrices for the belated bias correction are contained in the transition matrix of the IMU preintegration errors, and for iteratively updating the transition matrix we also need the linear dynamic equations of the IMU preintegration errors. Hence, in this section, we derive the linear dynamic equations first.
(1) Attitude preintegration error dynamic equation: The attitude preintegration error is defined in right perturbation form. Set the attitude preintegration error at time $t$ as $\delta\theta_{B_t}^{B_k}$. Then we have

$$C_{B_t}^{B_k} = \hat C_{B_t}^{B_k}\left(I_3 + [\delta\theta_{B_t}^{B_k}\times]\right), \tag{41}$$

where $\hat C_{B_t}^{B_k} = C(\hat q_{B_t}^{B_k}) = C(\hat\gamma_{B_t}^{B_k})$.
The gyroscope bias error is defined as follows:
$$b_{Gyro,k} = \hat b_{Gyro,k} + \delta b_{Gyro,k}. \tag{42}$$
Substituting Equations (41) and (42) into Equation (2), we have

$$\hat C_{B_t}^{B_k}\left(I_3 + [\delta\theta_{B_t}^{B_k}\times]\right) = \hat C_{B_i}^{B_k}\left(I_3 + [\delta\theta_{B_i}^{B_k}\times]\right) C_{B_t}^{B_i}, \qquad C_{B_t}^{B_i} \approx I_3 + \Delta t\left[\left(\tilde\omega_{IB_t}^{B_t} - \hat C_W^{B_t}\left(I_3 + [\delta\theta_{WB_t}\times]\right)\bar\omega_{IE}^W - \hat b_{Gyro,k} - \delta b_{Gyro,k} - \eta_{Gyro,t}\right)\times\right], \tag{43}$$

where $t_k < t_i < t$, and $\Delta t = t - t_i$ is sufficiently small.
The expanded form of Equation (43) is as follows:

$$\begin{aligned} [\delta\theta_{B_t}^{B_k}\times] &= \hat C_{B_i}^{B_t}[\delta\theta_{B_i}^{B_k}\times]\hat C_{B_t}^{B_i} - \hat C_{B_i}^{B_t}\left(I_3 + [\delta\theta_{B_i}^{B_k}\times]\right)\left[(\delta b_{Gyro,k} + \eta_{Gyro,t})\Delta t\times\right] \\ &\approx [\delta\theta_{B_i}^{B_k}\times] - [\hat\omega_{WB_t}^{B_t}\times][\delta\theta_{B_i}^{B_k}\times]\Delta t + [\delta\theta_{B_i}^{B_k}\times][\hat\omega_{WB_t}^{B_t}\times]\Delta t - \left[(\delta b_{Gyro,k} + \eta_{Gyro,t})\Delta t\times\right] \\ &= [\delta\theta_{B_i}^{B_k}\times] - \left[\left([\hat\omega_{WB_t}^{B_t}\times]\,\delta\theta_{B_i}^{B_k}\right)\times\right]\Delta t - \left[(\delta b_{Gyro,k} + \eta_{Gyro,t})\times\right]\Delta t, \end{aligned} \tag{44}$$

where $\hat\omega_{WB_t}^{B_t} = \tilde\omega_{IB_t}^{B_t} - \hat C_W^{B_t}\bar\omega_{IE}^W - \hat b_{Gyro,k}$, and $\hat C_W^{B_t}\left(I_3 + [\delta\theta_{WB_t}\times]\right)\bar\omega_{IE}^W \approx \hat C_W^{B_t}\bar\omega_{IE}^W$ because the magnitudes of $\bar\omega_{IE}^W$ and $\delta\theta_{WB_t}$ are both quite small. Moreover, $\hat C_{B_i}^{B_t} = I_3 - [\hat\omega_{WB_t}^{B_t}\Delta t\times]$, and all second-order small terms are omitted.
Then, the attitude preintegration error dynamic equation can be derived as follows:
$$\delta\dot\theta_{B_i}^{B_k} = \lim_{\Delta t\to 0}\frac{\delta\theta_{B_t}^{B_k} - \delta\theta_{B_i}^{B_k}}{\Delta t} = -[\hat\omega_{WB_i}^{B_i}\times]\,\delta\theta_{B_i}^{B_k} - \delta b_{Gyro,k} - \eta_{Gyro,i}. \tag{45}$$
(2). Velocity preintegration error dynamic equation: The velocity preintegration error is defined as follows:
$$\beta_{k,t} = \hat\beta_{k,t} + \delta\beta_{k,t}, \tag{46}$$

where $\delta\beta_{k,t}$ is the velocity preintegration error at time $t$.
The accelerometer bias error is defined as follows:
$$b_{Acc,k} = \hat b_{Acc,k} + \delta b_{Acc,k}. \tag{47}$$
According to Equations (40), (41), (46), and (47), we have

$$\hat\beta_{k,t} + \delta\beta_{k,t} = \hat\beta_{k,i} + \delta\beta_{k,i} + \hat C_{B_t}^{B_k}\left(I_3 + [\delta\theta_{B_t}^{B_k}\times]\right)\left(\tilde f^{B_t} - \hat b_{Acc,k} - \delta b_{Acc,k} - \eta_{Acc,t}\right)\Delta t. \tag{48}$$
The expanded form of Equation (48) is as follows:
$$\delta\beta_{k,t} \approx \delta\beta_{k,i} + \hat C_{B_t}^{B_k}[\delta\theta_{B_t}^{B_k}\times]\hat f^{B_t}\Delta t - \hat C_{B_t}^{B_k}\left(\delta b_{Acc,k} + \eta_{Acc,t}\right)\Delta t, \tag{49}$$

where $\hat f^{B_t} = \tilde f^{B_t} - \hat b_{Acc,k}$.
Then, the velocity preintegration error dynamic equation can be derived as follows:
$$\delta\dot\beta_{k,i} = \lim_{\Delta t\to 0}\frac{\delta\beta_{k,t} - \delta\beta_{k,i}}{\Delta t} = -\hat C_{B_i}^{B_k}[\hat f^{B_i}\times]\,\delta\theta_{B_i}^{B_k} - \hat C_{B_i}^{B_k}\,\delta b_{Acc,k} - \hat C_{B_i}^{B_k}\,\eta_{Acc,i}. \tag{50}$$
(3). Position preintegration error dynamic equation: The position preintegration error is defined as follows:
$$\alpha_{k,t} = \hat\alpha_{k,t} + \delta\alpha_{k,t}, \tag{51}$$

where $\delta\alpha_{k,t}$ is the position preintegration error at time $t$.
The position preintegration is the integration of velocity preintegration, so
$$\delta\dot\alpha_{k,i} = \delta\beta_{k,i}. \tag{52}$$
Rewriting Equations (39), (45), (50), and (52) in matrix form, we have

$$\begin{bmatrix} \delta\dot\alpha_{k,i} \\ \delta\dot\beta_{k,i} \\ \delta\dot\theta_{B_i}^{B_k} \\ \delta\dot b_{Acc,i} \\ \delta\dot b_{Gyro,i} \end{bmatrix} = \begin{bmatrix} 0_{3\times3} & I_3 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & -\hat C_{B_i}^{B_k}[\hat f^{B_i}\times] & -\hat C_{B_i}^{B_k} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & -[\hat\omega_{WB_i}^{B_i}\times] & 0_{3\times3} & -I_3 \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -\frac{1}{\tau_{Acc}}I_3 & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & -\frac{1}{\tau_{Gyro}}I_3 \end{bmatrix} \begin{bmatrix} \delta\alpha_{k,i} \\ \delta\beta_{k,i} \\ \delta\theta_{B_i}^{B_k} \\ \delta b_{Acc,i} \\ \delta b_{Gyro,i} \end{bmatrix} + \begin{bmatrix} 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ -\hat C_{B_i}^{B_k} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & -I_3 & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & I_3 & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & I_3 \end{bmatrix} \begin{bmatrix} \eta_{Acc,i} \\ \eta_{Gyro,i} \\ n_{Acc,i} \\ n_{Gyro,i} \end{bmatrix}. \tag{53}$$
Rewriting Equation (53) in compact form,
$$\delta\dot z_i = F_i\,\delta z_i + B_i u_i, \tag{54}$$

where $\delta z_i = [\delta\alpha_{k,i}^T, \delta\beta_{k,i}^T, (\delta\theta_{B_i}^{B_k})^T, \delta b_{Acc,i}^T, \delta b_{Gyro,i}^T]^T$ and $u_i = [\eta_{Acc,i}^T, \eta_{Gyro,i}^T, n_{Acc,i}^T, n_{Gyro,i}^T]^T$.
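For illustration, the matrices $F_i$ and $B_i$ of Equations (53)-(54), together with the first-order discretization of Equation (55) below, can be assembled as in the following sketch. It reuses skew from the Section 2.3 code, and 0-based NumPy slices replace the 1-based block indices used in the text.

```python
import numpy as np

def error_state_matrices(C_Bi_Bk, f_hat, w_hat, tau_acc, tau_gyro):
    """Build F_i (15x15) and B_i (15x12) for the preintegration error dynamics, Eq. (53)."""
    F = np.zeros((15, 15))
    F[0:3, 3:6] = np.eye(3)                       # d(delta_alpha)/dt = delta_beta
    F[3:6, 6:9] = -C_Bi_Bk @ skew(f_hat)          # attitude error into velocity error
    F[3:6, 9:12] = -C_Bi_Bk                       # accel bias into velocity error
    F[6:9, 6:9] = -skew(w_hat)                    # attitude error self-coupling
    F[6:9, 12:15] = -np.eye(3)                    # gyro bias into attitude error
    F[9:12, 9:12] = -np.eye(3) / tau_acc          # first-order Markov accel bias
    F[12:15, 12:15] = -np.eye(3) / tau_gyro       # first-order Markov gyro bias

    B = np.zeros((15, 12))
    B[3:6, 0:3] = -C_Bi_Bk                        # accel white noise
    B[6:9, 3:6] = -np.eye(3)                      # gyro white noise
    B[9:12, 6:9] = np.eye(3)                      # accel bias driving noise
    B[12:15, 9:12] = np.eye(3)                    # gyro bias driving noise
    return F, B

def discretize(F, B, dt):
    """Equation (55): first-order discretization of the error dynamics."""
    Phi = np.eye(15) + F * dt
    G = dt * B
    return Phi, G
```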
According to the linear system theorem [37], the system can be discretized as follows:
$$\delta z_j \approx \left(I_{15} + F_i \Delta t_{ij}\right)\delta z_i + \Delta t_{ij}\, B_i u_{ij}^{equ} = \Phi_{ji}\,\delta z_i + G_i u_{ij}^{equ}, \tag{55}$$

where $u_{ij}^{equ}$ is the equivalent noise during $[t_i, t_j]$, as follows:

$$u_{ij}^{equ} = \frac{1}{\Delta t_{ij}}\int_{t_i}^{t_j} u_\tau\, d\tau. \tag{56}$$
The mean of $u_{ij}^{equ}$ is zero. The covariance is calculated as follows:

$$\mathrm{cov}(u_{ij}^{equ}) = \frac{1}{\Delta t_{ij}^2}\int_{t_i}^{t_j}\mathrm{cov}(u_\tau)\, d\tau = \frac{Q}{\Delta t_{ij}} = Q_{ij}^{equ}, \tag{57}$$

where $Q = \mathrm{diag}(\sigma_{Acc}^2, \sigma_{Gyro}^2, \sigma_{bAcc}^2, \sigma_{bGyro}^2)$. Additionally, the equivalent noise $u_{ij}^{equ}$ is a Gaussian white noise sequence.
Hence, the square root information matrix of the equivalent noise is as follows:
$$\Lambda_{ij}^{equ} = \mathrm{diag}\left(\frac{\sqrt{\Delta t_{ij}}}{\sigma_{Acc}}\, I_3,\ \frac{\sqrt{\Delta t_{ij}}}{\sigma_{Gyro}}\, I_3,\ \frac{\sqrt{\Delta t_{ij}}}{\sigma_{bAcc}}\, I_3,\ \frac{\sqrt{\Delta t_{ij}}}{\sigma_{bGyro}}\, I_3\right). \tag{58}$$
The transition matrix from $t_k$ to $t_{k+1}$ is propagated as follows:

$$\Phi_{k+1,k} = \prod_{t_i < t_j} \Phi_{ji}, \tag{59}$$

where $t_i$ and $t_j$ are adjacent IMU sampling epochs, and $t_i, t_j \in [t_k, t_{k+1}]$.
Set the square-root information matrix of the IMU preintegration errors at time $t_i$ as $\Lambda_i$. Then, the square-root information matrix $\Lambda_j$ can be calculated by performing QR decomposition on the following data matrix [33]:

$$U_{ij}\begin{bmatrix} \Lambda_{ij}^{equ} & 0_{12\times 15} \\ -\Lambda_i \Phi_{ji}^{-1} G_i & \Lambda_i \Phi_{ji}^{-1} \end{bmatrix} = \begin{bmatrix} \Lambda_a & \Lambda_b \\ 0_{15\times 12} & \Lambda_j \end{bmatrix}, \tag{60}$$

where $U_{ij}$ is a unitary matrix, and the right side of Equation (60) is an upper triangular matrix. The initial square-root information matrix at time $t_k$ should be large enough, because the initial values of the IMU preintegration are deterministic. In our implementation, it is set as $\Lambda_k = 10^8 \times I_{15}$.
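A sketch of the complete SRIM propagation step of Equations (58)-(60) using NumPy's QR factorization is given below; the noise parameters correspond to those in Equation (57) and are assumed to be available from the IMU specifications.

```python
import numpy as np

def srim_propagate(Lambda_i, Phi, G, dt, sig_acc, sig_gyro, sig_bacc, sig_bgyro):
    """Return the propagated square-root information matrix Lambda_j."""
    # Equation (58): SRIM of the equivalent noise over [t_i, t_j].
    sqrt_dt = np.sqrt(dt)
    Lambda_equ = np.diag(np.concatenate([
        np.full(3, sqrt_dt / sig_acc), np.full(3, sqrt_dt / sig_gyro),
        np.full(3, sqrt_dt / sig_bacc), np.full(3, sqrt_dt / sig_bgyro)]))
    Phi_inv = np.linalg.inv(Phi)
    # Stack the pre-array of Equation (60) and triangularize it with QR.
    pre = np.zeros((27, 27))
    pre[0:12, 0:12] = Lambda_equ
    pre[12:27, 0:12] = -Lambda_i @ Phi_inv @ G
    pre[12:27, 12:27] = Lambda_i @ Phi_inv
    _, R = np.linalg.qr(pre)
    return R[12:27, 12:27]                        # lower-right block is Lambda_j
```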

3.3. IMU Preintegration Factor

Given the preintegrated IMU measurements, all residual functions are listed as follows:
$$\begin{aligned} r_\alpha &= \left(C_{B_k}^W\right)^T\left\{p_{WB_{k+1}}^W - p_{WB_k}^W - v_{WB_k}^W \Delta t_{k,k+1} - 0.5\, \hat g_k^W \Delta t_{k,k+1}^2 + 2[\bar\omega_{IE}^W\times]\sum_{i=0}^{N}\left(\hat p_{WB_i}^W - \hat p_{WB_k}^W\right)\Delta t_i\right\} \\ &\quad - \hat\alpha_{k,k+1} - J_{k,k+1}^{\alpha/b_{Acc}}\left(b_{Acc,k} - \hat b_{Acc,k}\right) - J_{k,k+1}^{\alpha/b_{Gyro}}\left(b_{Gyro,k} - \hat b_{Gyro,k}\right), \\ r_\beta &= \left(C_{B_k}^W\right)^T\left(v_{WB_{k+1}}^W - v_{WB_k}^W - \hat g_k^W \Delta t_{k,k+1} + 2[\bar\omega_{IE}^W\times]\left(\hat p_{WB_{k+1}}^W - \hat p_{WB_k}^W\right)\right) \\ &\quad - \hat\beta_{k,k+1} - J_{k,k+1}^{\beta/b_{Acc}}\left(b_{Acc,k} - \hat b_{Acc,k}\right) - J_{k,k+1}^{\beta/b_{Gyro}}\left(b_{Gyro,k} - \hat b_{Gyro,k}\right), \\ r_\gamma &= 2\left[\left(q_{B_{k+1}}^W\right)^{-1} \otimes q_{B_k}^W \otimes \hat\gamma_{B_{k+1}}^{B_k} \otimes \delta\gamma_{B_{k+1}}^{B_k}\right]_v, \\ r_{b_{Acc}} &= J_{k+1,k}^{Acc}\, b_{Acc,k} - b_{Acc,k+1}, \\ r_{b_{Gyro}} &= J_{k+1,k}^{Gyro}\, b_{Gyro,k} - b_{Gyro,k+1}, \end{aligned} \tag{61}$$

where $[\cdot]_v$ denotes the vector part of the quaternion, $J_{k,k+1}^{\alpha/b_{Acc}} = \Phi_{k+1,k}[1{:}3, 10{:}12]$, $J_{k,k+1}^{\alpha/b_{Gyro}} = \Phi_{k+1,k}[1{:}3, 13{:}15]$, $J_{k,k+1}^{\beta/b_{Acc}} = \Phi_{k+1,k}[4{:}6, 10{:}12]$, $J_{k,k+1}^{\beta/b_{Gyro}} = \Phi_{k+1,k}[4{:}6, 13{:}15]$, $J_{k,k+1}^{\gamma/b_{Gyro}} = \Phi_{k+1,k}[7{:}9, 13{:}15]$, $J_{k+1,k}^{Acc} = \Phi_{k+1,k}[10{:}12, 10{:}12]$, and $J_{k+1,k}^{Gyro} = \Phi_{k+1,k}[13{:}15, 13{:}15]$. The notation $\Phi_{k+1,k}[a{:}b, c{:}d]$ denotes the sub-block of $\Phi_{k+1,k}$ from rows $a$ to $b$ and columns $c$ to $d$. Additionally,

$$\delta\gamma_{B_{k+1}}^{B_k} = \begin{bmatrix} 1 \\ 0.5\, J_{k,k+1}^{\gamma/b_{Gyro}}\left(b_{Gyro,k} - \hat b_{Gyro,k}\right) \end{bmatrix}. \tag{62}$$
All of the above residual functions should be weighted by the square-root information matrix $\Lambda_{k+1}$ before adding them to the optimization.

4. Nonlinear Joint Optimization

We use the hybrid sliding window nonlinear optimization for state estimation (more details can be found in our previous work [34]).
The full state vector is defined as follows:
$$\chi = [x_n, \ldots, x_{n+N},\ t_C^B,\ \psi_m, \ldots, \psi_{m+M}], \tag{63}$$

where

$$x_k = [p_{WB_k}^W,\ v_{WB_k}^W,\ q_{B_k}^W,\ b_{Acc,k},\ b_{Gyro,k}],\ k\in[n, n+N], \qquad t_C^B = [p_{BC}^B,\ q_C^B], \tag{64}$$

where $\psi_m$ is the inverse depth parameter of the m-th mappoint in its first observing keyframe, $N$ is the number of keyframes in the sliding window, $M$ is the number of visible mappoints, and $t_C^B$ is the camera-IMU calibration parameter.
The joint optimization problem can be formulated as a nonlinear least-squares problem. The cost function is the sum of the prior and the Mahalanobis norm of all measurement residuals within the sliding window, as follows:
$$\chi^* = \arg\min_\chi\ \left\|r_p - H_p\chi\right\|^2 + \sum_{k\in\mathfrak{B}}\left\|r_{k+1,k}^{IMU}(\chi)\right\|^2_{\Lambda_{k+1}} + \sum_{(l,j)\in\mathcal{C}}\rho\left(\left\|r_l^{C_j}(\chi)\right\|^2_{\Lambda_l^{C_j}}\right) + \sum_{k\in\mathcal{G}}\left\|r_k^G(\chi)\right\|^2_{\Lambda_k^G}, \tag{65}$$

where $r_p$ and $H_p$ represent the prior information; $\mathfrak{B}$, $\mathcal{C}$, and $\mathcal{G}$ are the measurement sets of the IMU, camera, and GNSS, respectively; $r_{k+1,k}^{IMU}(\chi)$ is the IMU preintegration residual function shown in Equation (61); $r_l^{C_j}(\chi)$ is the re-projection residual function of the l-th mappoint in the j-th image (more details can be found in [34]); $\rho(\cdot)$ is the Huber loss function for coping with potential outliers [38]; $\Lambda_l^{C_j}$ is the square-root information matrix of the visual measurement; and $r_k^G(\chi)$ is the GNSS residual at time $t_k$, as follows:
$$r_k^G = p_{WB_k}^W + C(q_{B_k}^W)\,\bar p_{B\,gnss}^B - \bar C_E^W\left(\tilde p_{E\,gnss}^E - \bar p_{EW}^E\right), \tag{66}$$

where $\bar C_E^W$ and $\bar p_{EW}^E$ are functions of the geodetic position of the selected world frame; $\bar p_{B\,gnss}^B$ is the lever-arm vector between the GNSS antenna and the IMU, measured manually; and $\tilde p_{E\,gnss}^E$ is the XYZ position of the GNSS antenna in the E frame, provided by the GNSS receiver.
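For illustration, Equation (66) can be evaluated as in the following sketch, where dcm_from_quat is taken from the Section 2.3 code and all frame quantities are assumed to be precomputed as described above.

```python
import numpy as np

def gnss_residual(p_WB_W, q_B_W, p_lever_B, p_gnss_E, C_E_W, p_EW_E):
    """Equation (66): predicted minus measured GNSS antenna position in the W frame."""
    antenna_pred_W = p_WB_W + dcm_from_quat(q_B_W) @ p_lever_B  # lever-arm corrected
    antenna_meas_W = C_E_W @ (p_gnss_E - p_EW_E)                # ECEF measurement into W
    return antenna_pred_W - antenna_meas_W
```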

5. Experiments

In order to evaluate the performance of the proposed IMU preintegration algorithm, we conducted experiments by using the dataset collected by our sensor platform. Our sensor platform consists of a stereo camera and two different grades of IMU, as shown in Figure 2. The stereo camera consists of two Prosilica GT 1910 monocular cameras by AVT. Both are global-shutter monochrome CCD cameras that deliver 1920 × 1080 images. The cameras are synchronously triggered by an Arduino Uno R3 microcontroller at 10 Hz, and the microcontroller parses GPS time from the NMEA messages provided by a Ublox NEO-M8N. The two cameras are equipped with LM8HC lenses by Kowa; the horizontal field of view of the lens is about 60 degrees. The two IMUs are the MTi-G-710 and the POS620. The MTi-G-710 is an industrial-grade MEMS-based IMU by Xsens. The POS620 is a quasi-navigation-grade IMU by the MPSTNAV company of China, and contains three-axis high-precision fiber optic gyroscopes (FOGs).
The technical parameters of the two different-grade IMUs are listed in Table 1. During data collection, the two IMUs were sampled at 200 Hz.
Our dataset contains two sequences collected at two different places in Wuhan city of China, namely the Wuhan Research and Innovation Center (WRIC) and the Guanggu financial harbor (GFH). There are many low-slung buildings and trees in the two places. These static objects provide a high-quality texture for visual measurements, as shown in Figure 3. Additionally, navigation satellite signals are not easily shielded or reflected in the two places. The GNSS raw data was recorded by the Trimble receiver. Furthermore, we used the GINS software developed by the MPSTNAV company of China to post-process the recorded GNSS raw data, in order to obtain high-precision real-time kinematic (RTK) positioning results.
Generally, the RTK positioning error was less than 0.10 m. The integrated navigation results of fusing the RTK positions and the POS620 measurements by Rauch-Tung-Striebel (RTS) smoothing served as the ground truth. The dataset was used to evaluate the proposed IMU preintegration algorithm; it does not contain GNSS-denied environments or visual challenges, such as overexposure, illumination changes, and poorly textured scenes. The motions performed by the vehicle during data collection were varied, including linear acceleration and deceleration, zero-speed motion, turning, and backward motion, i.e., almost all motions that can be performed by a ground vehicle. Additionally, changing slopes exist in both places. Therefore, the motions in the collected dataset were complicated enough to evaluate the proposed IMU preintegration algorithm.
Because it is quite difficult to obtain sufficient disparities from images recorded by a monocular camera when a vehicle runs along a straight road, we could not initialize the monocular visual-inertial system. To cope with this issue, stereo images were used to initialize the VINS. After initialization, only the images of the left camera were used in the optimization.
It is well-known that the earth is approximately a spheroid, and the direction of gravity approximately points to the center of the earth. Therefore, both the direction and the magnitude of gravity change with position on the earth, and for two positions that are far apart, the gravity vectors are quite different. It is therefore imprecise for the traditional IMU preintegration algorithms to assume that the gravity vector is constant in the W frame, especially after the vehicle has traveled a long distance. An example is a VINS (or any other system using IMU preintegration) that is initialized at city A, where the W coordinate frame is fixed, and then travels to city B. To simulate this situation, i.e., to amplify the influence of the change of gravity, we deliberately moved the W coordinate frame a certain distance away from the starting position of the testing dataset. Specifically, if the geodetic position of the starting point was $\varphi$, $\lambda$, and $h$, and the geodetic displacement was $\delta\varphi$, $\delta\lambda$, and $\delta h$, then the new position of the W coordinate frame was $\varphi + \delta\varphi$, $\lambda + \delta\lambda$, and $h + \delta h$. The new position of the W frame may not be located in a living area; it may be a mountain or a lake. However, this does not affect the mathematical evaluation of the proposed algorithm. In order to systematically evaluate the influence of the change of gravity, the geodetic displacement of the world frame was incrementally set to 0, 0.1, 0.25, 0.5, 1.0, and 2.0 degrees, where $\delta\varphi = \delta\lambda$ and $\delta h = 0$. The corresponding Euclidean displacements were 0, 14.7, 36.7, 73.3, 146.4, and 292.1 kilometers. The new positions of the W frame are drawn in Google Earth, as shown in Figure 4. The displacement values can be cross-checked numerically, as in the sketch below.
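The quoted Euclidean displacements can be reproduced with the lla_to_ecef sketch from Section 2.5; the starting coordinates below are assumed values near Wuhan, used only for illustration.

```python
import numpy as np

# Assumed starting point near Wuhan (~30.5 deg N, ~114.3 deg E), at zero altitude.
phi0, lam0, h0 = np.radians(30.5), np.radians(114.3), 0.0

for d_deg in [0.1, 0.25, 0.5, 1.0, 2.0]:
    d = np.radians(d_deg)
    # Shift latitude and longitude by the same geodetic displacement (delta_h = 0).
    shift = lla_to_ecef(phi0 + d, lam0 + d, h0) - lla_to_ecef(phi0, lam0, h0)
    print(f"{d_deg:4.2f} deg -> {np.linalg.norm(shift) / 1000.0:7.1f} km")
```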
In order to achieve good alignment with the geodetic frame, the RTK positions were added into the optimization during the initial period, i.e., we directly used the RTK positioning results. The RTK positions were then removed after initialization, to better reflect the impact of the IMU preintegration; at that point, the VINS had accurate prior information. For analyzing the influence of the earth rotation, the performance of the proposed IMU preintegration algorithm was also evaluated with and without considering the earth rotation. Finally, the performance of the proposed IMU preintegration algorithm was evaluated under one of the following four settings:
Setting A: Both the change of gravity and the earth rotation are considered, i.e., the proposed algorithm is used in a normal way;
Setting B: The change of gravity is considered, but the earth rotation is omitted;
Setting C: The change of gravity is omitted, but the earth rotation is considered;
Setting D: Both the change of gravity and the earth rotation are omitted, and the proposed IMU preintegration algorithm degenerates to a traditional IMU preintegration algorithm.
During the initial period, the proposed IMU preintegration algorithm was used under Setting A.

6. Results

The root-mean-square errors (RMSEs) of the estimated position, horizontal attitude, and yaw were calculated by comparing the estimated states with the ground truth. All results are listed in Table 2, Table 3 and Table 4. Because all states were estimated from noisy sensor measurements by joint optimization, the final results exhibit some randomness due to the keyframe selection method and changes in the initial condition. However, the variation tendency of the RMSEs under different settings is still obvious, especially for the POS620, due to its high-precision measurements. The estimation results diverged under Setting B and Setting D when the POS620 measurements were used in our implementation; therefore, those RMSEs are not listed in Table 4, and the diverged estimation errors of the POS620 are plotted in Figure 5.
The trajectories of the two IMUs for the sequences WRIC and GFH under different settings and two-degree geodetic displacements are plotted in Figure 6.
Due to the limited space of the paper, only some of the RMSEs obtained under different settings and different geodetic displacements are plotted in Figure 5, Figure 7, Figure 8 and Figure 9.

7. Discussion

In this section, we will discuss the influence of the gravity change and the earth rotation on the IMU preintegration, according to the results presented in the previous section.
According to the RMSEs of the horizontal attitude of the MTi-G-710 and POS620 under Setting A and Setting C presented in Table 2, Table 3 and Table 4, we can observe that ignoring the change of gravity makes the horizontal attitude error increase linearly with the geodetic displacement. The RMSEs of the horizontal attitude error are about 1.38 times the geodetic displacement. This tendency is also obvious in Figure 8b and Figure 9b, where the pitch and roll errors fluctuate sharply. If the vehicle moves in a small area, i.e., the geodetic displacement is almost 0 degrees, ignoring the change of gravity does not have much influence on the horizontal attitude errors, as shown in Table 2, Table 3 and Table 4 under Setting C. This phenomenon can also be directly observed in Figure 7.
According to the position RMSEs of the MTi-G-710 under Setting A and Setting C presented in Table 2, we can observe that the RMSEs of the position are lower than 5.6 m under Setting A, while they increase beyond 8.73 m under Setting C if the geodetic displacement is greater than 0.5 degrees. A similar tendency can be observed under Setting A and Setting C in Table 3; there, the increase of the position RMSE only becomes obvious when the geodetic displacement is greater than 1 degree, due to the short length of the sequence GFH.
According to the position RMSEs of the POS620 under Setting A and Setting C presented in Table 4, we can observe that the position RMSEs are lower than 1.9 m in sequence WRIC and lower than 0.85 m in sequence GFH under Setting A, while under Setting C they become greater than 2.36 m in sequence WRIC and 1.83 m in sequence GFH if the geodetic displacement is greater than 0.1 degrees. The motion estimation results diverge under Setting C if the geodetic displacement is greater than 0.25 degrees in the sequence WRIC and 0.1 degrees in the sequence GFH. Compared with the MTi-G-710, the POS620 is more sensitive to the change of gravity, due to its high-precision measurements, which carry a greater weight during information fusion. If the vehicle moves in a small area, i.e., the geodetic displacement is almost 0 degrees, ignoring the change of gravity has no obvious influence on the position errors for both low-grade and high-grade IMUs, as shown in Table 2, Table 3 and Table 4 under Setting C.
According to the RMSEs of the MTi-G-710 under Setting A and Setting B presented in Table 2 and Table 3, we can observe that ignoring the earth rotation has no obvious influence on the accuracy of the position and the horizontal attitude. From another perspective, the RMSEs of the MTi-G-710 under Setting B and Setting D in Table 2 and Table 3 show that, by taking the change of gravity into consideration while ignoring the earth rotation, the positioning accuracy is obviously improved under large geodetic displacements. Therefore, the earth rotation can be safely omitted for the industrial-grade MEMS-based IMU. In contrast, the motion estimation results diverge if the earth rotation is omitted from the measurements of the quasi-navigation-grade IMU (POS620), as shown in Figure 5. This is because the earth rotation rate is about 15 deg/h, while the gyro bias instability and the angle random walk of the MTi-G-710 are 100 deg/h and 0.6 deg/√h, respectively, making it impossible for this IMU to sense the earth rotation. In contrast, the gyro bias instability and the angle random walk of the POS620 are about 0.02 deg/h and 0.005 deg/√h, respectively, so the POS620 can sense the earth rotation effectively. At the same time, it should be noted that the higher noise density of the MTi-G-710 gives its preintegrated IMU measurements a lower weight, which makes the visual constraints more significant. Conversely, the high grade of the POS620 results in a greater weight of the preintegrated IMU measurements and reduces the effect of the visual constraints, making the system solution more sensitive to the modeling error of the earth rotation.
Since the yaw state is unobservable for the VINS and therefore cannot be estimated effectively [14], the drift of the yaw angle is completely random, making the yaw RMSEs stochastic under different settings and geodetic displacements, as shown in Table 2, Table 3 and Table 4.

8. Conclusions

In this paper, we have redesigned the IMU preintegration algorithm used in the optimization-based sensor fusion framework by taking the earth rotation and the change of gravity into consideration. Both terms are functions of the geodetic position. In order to evaluate the proposed algorithm, we collected a dataset by using our sensor platform equipped with two different-grade IMUs. The proposed algorithm was then systematically evaluated by using the collected field dataset under different settings and test conditions. The test results led to the following conclusions: (1) The earth rotation can be safely omitted for industrial-grade MEMS-based IMUs, which cannot sense the rotation effectively. However, for quasi-navigation-grade IMUs, which can effectively sense the earth rotation, the earth rotation must be considered in order to maintain the positioning accuracy of the VINS; (2) if the change of gravity is omitted, the horizontal attitude error increases linearly as the geodetic displacement increases, and the RMSEs of the horizontal attitude error are about 1.38 times the geodetic displacement. If the accuracy of the horizontal attitude matters, the change of gravity must be carefully considered; (3) if the change of gravity is omitted, the position error does not increase much within a limited working area, and the threshold size of this area is related to the IMU grade. According to the test results, for industrial-grade IMUs, the positioning error will not obviously increase within a 0.25-degree geodetic displacement (36.7 km) if the gravity change is omitted; for quasi-navigation-grade IMUs, the corresponding limit is about several hundred meters; (4) the performance of the proposed algorithm is consistent under different geodetic displacements and settings. Therefore, the proposed algorithm copes well with the change of gravity and the earth rotation, forming a more robust GNSS/VINS solution.

Author Contributions

Raw idea, J.J. and X.N.; methodology, J.J. and X.N.; software, J.J.; writing—original draft preparation, J.J.; writing—review and editing, X.N.; resources, X.N. and J.L.; supervision, X.N. and J.L.; funding acquisition, X.N. and J.L.; All authors have read and agreed to the published version of the manuscript.

Funding

The National Key Research and Development Program of China, 2018YFB1305001; Joint Fund of Ministry of Education, 6141A02011907.

Acknowledgments

Thanks to Ruonan Guo and Changxin Lai from the Navigation Group of the GNSS Research Center at Wuhan University for helping to collect the dataset. Thanks to Jing Fan from the same group for her suggestion about drawing figures.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Savage, P.G. Strapdown inertial navigation integration algorithm design part 1: Attitude algorithms. J. Guid. Control Dyn. 1998, 21, 19–28.
2. Savage, P.G. Strapdown inertial navigation integration algorithm design part 2: Velocity and position algorithms. J. Guid. Control Dyn. 1998, 21, 208–221.
3. Miller, R.B. A New Strapdown Attitude Algorithm. J. Guid. Control Dyn. 1983, 6, 287–291.
4. Ignagni, M.B. Efficient class of optimized coning compensation algorithms. J. Guid. Control Dyn. 1996, 19, 424–429.
5. Savage, P.G. Analytical modeling of sensor quantization in strapdown inertial navigation error equations. J. Guid. Control Dyn. 2002, 25, 833–842.
6. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems; Artech House: Washington, DC, USA, 2013.
7. Wang, J.; Garratt, M.; Lambert, A.; Wang, J.J.; Han, S.; Sinclair, D. Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles. Int. Arch. Photogramm. Remote Sens. Spat. Inform. Sci. 2008, 37, 963–969.
8. Ding, W.; Wang, J.; Han, S.; Almagbile, A.; Garratt, M.A.; Lambert, A.; Wang, J.J. Adding optical flow into the GPS/INS integration for UAV navigation. In Proceedings of the International Global Navigation Satellite Systems Society Symposium, Jeju, Korea, 4–6 November 2009; pp. 1–13.
9. Mur-Artal, R.; Tardós, J.D. Visual-inertial monocular SLAM with map reuse. IEEE Robot. Autom. Lett. 2017, 2, 796–803.
10. Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
11. Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual–inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334.
12. Usenko, V.; Engel, J.; Stückler, J.; Cremers, D. Direct visual-inertial odometry with stereo cameras. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–20 May 2016; pp. 1885–1892.
13. Von Stumberg, L.; Usenko, V.; Cremers, D. Direct sparse visual-inertial odometry using dynamic marginalization. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–26 May 2018; pp. 2510–2517.
14. Kelly, J.; Sukhatme, G.S. Visual-inertial sensor fusion: Localization, mapping and sensor-to-sensor self-calibration. Int. J. Robot. Res. 2011, 30, 56–79.
15. Mascaro, R.; Teixeira, L.; Hinzmann, T.; Siegwart, R.; Chli, M. GOMSF: Graph-Optimization based Multi-Sensor Fusion for robust UAV pose estimation. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–26 May 2018; pp. 1421–1428.
16. Qin, T.; Cao, S.; Pan, J.; Shen, S. A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors. arXiv 2019, arXiv:1901.03642.
17. Davison, A.J.; Reid, I.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067.
18. Ebcin, S.; Veth, M. Tightly-Coupled Image-Aided Inertial Navigation Using the Unscented Kalman Filter; Air Force Institute of Technology: Wright-Patterson AFB, OH, USA, 2007.
19. Bloesch, M.; Omari, S.; Hutter, M.; Siegwart, R. Robust visual inertial odometry using a direct EKF-based approach. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 298–304.
20. Mourikis, A.I.; Roumeliotis, S.I. A multi-state constraint Kalman filter for vision-aided inertial navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 3565–3572.
21. Li, M.; Mourikis, A.I. High-precision, consistent EKF-based visual-inertial odometry. Int. J. Robot. Res. 2013, 32, 690–711.
22. Strasdat, H.; Montiel, J.M.; Davison, A.J. Visual SLAM: Why filter? Image Vis. Comput. 2012, 30, 65–77.
23. Engels, C.; Stewénius, H.; Nistér, D. Bundle adjustment rules. Photogramm. Comput. Vis. 2006, 2.
24. Strasdat, H. Local Accuracy and Global Consistency for Efficient Visual SLAM; Department of Computing: London, UK, 2012.
25. Lupton, T.; Sukkarieh, S. Efficient integration of inertial observations into visual SLAM without initialization. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 1547–1552.
26. Lupton, T.; Sukkarieh, S. Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions. IEEE Trans. Robot. 2011, 28, 61–76.
27. Chang, L.; Niu, X.; Liu, T.; Tang, J.; Qian, C. GNSS/INS/LiDAR-SLAM Integrated Navigation System Based on Graph Optimization. Remote Sens. 2019, 11, 9.
28. Ma, Y.; Soatto, S.; Kosecka, J.; Sastry, S.S. An Invitation to 3-D Vision: From Images to Geometric Models; Springer: Berlin/Heidelberg, Germany, 2012.
29. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. IMU preintegration on manifold for efficient visual-inertial maximum-a-posteriori estimation. Ga. Inst. Technol. 2015.
30. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-Manifold Preintegration for Real-Time Visual-Inertial Odometry. IEEE Trans. Robot. 2016, 33, 1–21.
31. Eckenhoff, K.; Geneva, P.; Huang, G. Direct visual-inertial navigation with analytical preintegration. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1429–1435.
32. Eckenhoff, K.; Geneva, P.; Huang, G. Closed-form preintegration methods for graph-based visual–inertial navigation. Int. J. Robot. Res. 2019, 38, 563–586.
33. Bierman, G.J. Factorization Methods for Discrete Sequential Estimation; Courier Corporation: North Chelmsford, MA, USA, 2006.
34. Jiang, J.; Niu, X.; Guo, R.; Liu, J. A Hybrid Sliding Window Optimizer for Tightly-Coupled Vision-Aided Inertial Navigation System. Sensors 2019, 19, 3418.
35. Barfoot, T.D. State Estimation for Robotics; Cambridge University Press: Cambridge, UK, 2017.
36. Goshen-Meskin, D.; Bar-Itzhack, I.Y. Unified approach to inertial navigation system error modeling. J. Guid. Control Dyn. 1992, 15, 648–653.
37. Chen, C.T. Linear System Theory and Design; Oxford University Press: Oxford, UK, 1998.
38. Huber, P. Robust estimation of a location parameter. Ann. Math. Stat. 1964, 35, 73–101.
Figure 1. The earth model and related coordinate frames.
Figure 2. Our sensor platform.
Figure 3. Representative images from our dataset.
Figure 4. The new position of the world frame under different geodetic displacements.
Figure 5. The RMSEs of the POS620 under 0-degree geodetic displacement for the sequence WRIC: (a) the RMSEs for Setting A, and (b) the RMSEs for Setting B.
Figure 6. The trajectories with two-degree geodetic displacement of the MTi-G-710 under Setting A and Setting D, and the POS620 under Setting A: (a) the trajectories for the sequence WRIC, and (b) the trajectories for the sequence GFH.
Figure 7. The RMSEs of the MTi-G-710 under 0-degree geodetic displacement for the sequence WRIC: (a) the RMSEs for Setting A, and (b) the RMSEs for Setting C.
Figure 8. The RMSEs of the MTi-G-710 under 0.5-degree geodetic displacement for the sequence WRIC: (a) the RMSEs for Setting A, and (b) the RMSEs for Setting C.
Figure 9. The RMSEs of the MTi-G-710 under 1.0-degree geodetic displacement for the sequence WRIC: (a) the RMSEs for Setting A, and (b) the RMSEs for Setting C.
Table 1. The technical parameters of the MTi-G-710 and POS620.

IMU        | Gyro Angular Random Walk (deg/√h) | Gyro Bias Instability (deg/h) | Accel Velocity Random Walk (m/s/√h) | Accel Bias Instability (mGal)
MTi-G-710  | 0.6                               | 100                           | 0.048                               | 2000
POS620     | 0.005                             | 0.02                          | 0.01                                | 25
Table 2. The root-mean-square errors (RMSEs) of the position, horizontal attitude, and yaw of the MTi-G-710 under different settings for the sequence at the test area of the Wuhan Research and Innovation Center (WRIC).

Setting A: the proposed solution; Setting B: with gravity change, no earth rotation; Setting C: no gravity change, with earth rotation; Setting D: the traditional solution.

                         Setting A              Setting B              Setting C              Setting D
Shifting Distance (deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)
0                        5.46   0.077   0.52    5.53   0.064   0.37    4.23   0.073   0.25    5.00   0.065   0.47
0.1                      4.90   0.068   0.42    5.26   0.070   0.56    4.65   0.161   0.41    3.84   0.172   0.61
0.25                     5.54   0.066   0.51    5.34   0.074   0.44    5.51   0.350   0.49    4.75   0.355   0.66
0.5                      4.44   0.079   0.35    5.67   0.082   0.41    8.73   0.680   0.37    8.75   0.681   0.57
1.0                      5.11   0.076   0.49    4.28   0.085   0.50    9.55   1.338   0.64    9.13   1.339   0.67
2.0                      5.03   0.070   0.36    5.69   0.073   0.33    20.9   2.640   0.42    15.9   2.654   0.49
Table 3. The RMSEs of the position, horizontal attitude, and yaw of the MTi-G-710 under different settings for the sequence at the test area of the Guanggu financial harbor (GFH).

Setting A: the proposed solution; Setting B: with gravity change, no earth rotation; Setting C: no gravity change, with earth rotation; Setting D: the traditional solution.

                         Setting A              Setting B              Setting C              Setting D
Shifting Distance (deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)
0                        3.05   0.064   0.48    2.72   0.066   0.48    2.58   0.064   0.55    3.07   0.065   0.73
0.1                      2.67   0.070   0.42    2.45   0.059   0.68    2.64   0.154   0.51    2.83   0.162   0.69
0.25                     2.96   0.076   0.63    2.42   0.065   0.54    2.59   0.339   0.59    3.02   0.347   0.54
0.5                      2.40   0.065   0.52    3.18   0.064   0.36    3.76   0.673   0.57    3.83   0.674   0.36
1.0                      3.03   0.068   0.50    2.59   0.074   0.55    5.21   1.334   0.50    4.92   1.330   0.49
2.0                      2.74   0.065   0.68    3.17   0.065   0.48    9.04   2.643   0.75    9.16   2.644   0.50
Table 4. The RMSEs of the position, horizontal attitude, and yaw of the POS620 under Setting A and Setting C for the dataset.

Setting A: the proposed solution; Setting C: no gravity change, with earth rotation.

                         Setting A (WRIC)       Setting A (GFH)        Setting C (WRIC)       Setting C (GFH)
Shifting Distance (deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)  P(m)   H(deg)  Y(deg)
0                        1.50   0.017   0.45    0.55   0.011   0.09    1.27   0.017   0.38    0.55   0.011   0.10
0.1                      1.78   0.017   0.52    0.81   0.012   0.19    2.36   0.133   0.34    1.83   0.134   0.23
0.25                     1.52   0.017   0.49    0.70   0.012   0.11    8.28   0.323   1.90    ---    ---     ---
0.5                      1.86   0.017   0.60    0.62   0.012   0.09    ---    ---     ---     ---    ---     ---
1.0                      1.28   0.018   0.41    0.78   0.012   0.18    ---    ---     ---     ---    ---     ---
2.0                      1.40   0.019   0.44    0.63   0.012   0.09    ---    ---     ---     ---    ---     ---
Note: The notation --- means that the estimation diverged and crashed under this setting.
