Article

An Adaptive Combined Filtering Algorithm for Non-Holonomic Constraints with Time-Varying and Thick-Tailed Measurement Noise

1 GNSS Research Center, Wuhan University, Wuhan 430079, China
2 Electronic Information School, Wuhan University, Wuhan 430079, China
3 School of Electronics and Information Engineering, Hubei University of Science and Technology, Xianning 437100, China
4 Hubei Luojia Laboratory, Wuhan University, Wuhan 430079, China
5 School of Microelectronics, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(7), 1126; https://doi.org/10.3390/rs17071126
Submission received: 30 December 2024 / Revised: 8 March 2025 / Accepted: 19 March 2025 / Published: 21 March 2025

Abstract

Aiming at the problem that the pseudo-velocity measurement noise of non-holonomic constraints (NHCs) in vehicle-mounted global navigation satellite system/inertial navigation system (GNSS/INS) integrated navigation is time-varying and thick-tailed under complex road conditions (turning, sideslip, etc.) and cannot be accurately predicted, an adaptive estimation method for the initial value of NHC lateral velocity noise based on multiple linear regression is proposed. On this basis, through modeling and analysis of the pseudo-velocity measurement noise, a Gaussian–Student's T variational Bayesian adaptive Kalman filtering algorithm (Ga-St VBAKF) built on the NHC pseudo-velocity measurement noise model is proposed. Firstly, in order to adaptively adjust the initial value of the NHC lateral velocity noise, a vehicle turning detection algorithm is used to detect whether the vehicle is turning. Secondly, depending on the vehicle motion state, the lateral velocity noise is modeled either as Gaussian white noise or as Student's T distributed thick-tailed noise, and the variational Bayesian method is used to adaptively estimate the statistical characteristics of the measurement noise in real time. The test results show that, compared to the traditional Kalman filtering algorithm with fixed noise, the noise-adaptive Ga-St VBAKF algorithm reduces the maximum horizontal position error by 65.9% in the GNSS/NHC/OD/INS system (where OD stands for odometer) when the vehicle is turning, and by 42.3% in the NHC/OD/INS system. This indicates that the algorithm can effectively suppress the divergence of positioning errors during turning and improve the performance of integrated navigation.

Graphical Abstract

1. Introduction

Global navigation satellite system/inertial navigation system (GNSS/INS) integrated navigation is a common method of vehicle navigation. This method uses high-precision GNSS positioning information to assist the INS in achieving autonomous, continuous integration and recursive acquisition of positioning results [1], and it has been widely used to reduce the error accumulation of strapdown INS. However, when a vehicle passes through complex scenes such as avenues, tunnels, and viaducts, the GNSS signal is frequently occluded or even interrupted, so the observation information is insufficiently credible, which leads to divergence of the integrated navigation results [2,3].
Given the above problems, domestic and foreign scholars have carried out a series of studies. In general, most approaches use the kinematics of the carrier to construct an integrated navigation filtering equation with motion constraints. The motion constraint is a non-holonomic constraint (NHC): according to kinematic principles, when the carrier moves close to the ground with no lateral sliding and no vertical bouncing, its lateral and vertical velocities can be considered to be approximately zero [4]. Therefore, the filtering equation can be constructed by combining this principle with odometer information to form the observation. Refs. [5,6,7,8] showed that NHCs are effective in suppressing error divergence in integrated navigation, especially when the GNSS signal is interrupted. However, it is easy to overlook the fact that many road conditions do not satisfy the non-holonomic constraints. When the vehicle is turning, or on potholed, bumpy uphill/downhill, or muddy road sections, the lateral and vertical speeds are not zero, and the observation information constructed by the NHC is not credible. To address this problem, Ref. [9] proposed adaptively setting the initial value of the NHC lateral noise according to the vehicle motion state, while Ref. [10] proposed a constraint method based on centripetal acceleration. These methods are good solutions, but even if the NHC measurement noise matrix can be adaptively adjusted, some problems remain. First, Ref. [8] pointed out that the NHC lateral noise is inseparable from changes in forward speed and heading angle; since the speed, being subject to human factors, is not absolutely controllable, the question arises of how to handle these two variables so that the adaptive factor can be kept within a certain range. Second, Ref. [11] pointed out that the nonlinearity of an integrated navigation system increases under strong vehicle dynamics; the Kalman error filter based on a Gaussian distribution has defects in nonlinear estimation, and, as the number of iterations increases, filtering error divergence is inevitable.
In summary, in vehicle-mounted GNSS/INS integrated navigation systems, GNSS signals are prone to interruption in complex scenes, which degrades positioning accuracy, and NHCs with a fixed noise matrix cannot improve the positioning accuracy of the system in every scene. It is therefore necessary to adjust the constraint noise in real time in combination with the vehicle's motion state, and a series of studies is carried out in this paper in this regard. Firstly, to solve the problem of NHC noise setting, this paper determines the influence factors of the NHC lateral noise through analysis of a large amount of experimental data reported in [9], provides the corresponding weighting factors so that the adaptive factor becomes controllable, and analyzes the applicability of multiple linear regression models for noise estimation. Secondly, with respect to nonlinear error, this paper re-models the NHC lateral noise while adding an elevation constraint; constrained Kalman filtering and robust filtering have been studied extensively by many scholars [11,12,13]. On this basis, this paper proposes a Ga-St VBAKF algorithm based on a Gaussian–Student's T mixed distribution, using a variational Bayesian method in combination with the statistical properties of the noise. The experimental results show that this method can effectively improve the performance of NHCs in specific scenes and enhance the positioning accuracy of a GNSS/INS integrated navigation system in complex scenes. This paper first introduces the NHC/INS/OD model and constructs the NHC filtering equations, then establishes the NHC adaptive noise mechanism based on the multiple linear regression model and, finally, proposes a variational Bayesian filtering algorithm based on the mixed distribution of the NHC noise statistics. Finally, algorithmic performance verification experiments are performed.

2. NHC/OD/INS Integrated Model

2.1. State Model

The Earth-centered, Earth-fixed (ECEF) frame (e-frame) is selected as the reference frame in this paper, and the model adopts a loosely coupled framework. The system state vector has 15 dimensions, including the inertial navigation position error vector ($\delta r^n$), the inertial navigation velocity error vector ($\delta v^n$), the attitude angle error vector ($\phi$), the three-axis gyro bias ($b_g$), and the three-axis accelerometer bias ($b_a$), expressed as follows:
$\delta x(t) = \begin{bmatrix} (\delta r^n)^T & (\delta v^n)^T & \phi^T & b_g^T & b_a^T \end{bmatrix}^T$
The time-domain differential equation of the system state error is expressed as follows [14,15]:
$\delta \dot r^e = \delta v^e + \eta_r$
$\delta \dot v^e = f^e \times \phi + \delta \gamma^e + C_b^e\, \delta f^b - 2\omega_{ie}^e \times \delta v^e + \eta_v$
$\dot \phi = -\omega_{ie}^e \times \phi - C_b^e\, \delta \omega_{ib}^b$
$\dot b_a(t) = -\frac{1}{T_{ab}}\, b_a(t) + \eta_{ab}(t)$
$\dot b_g(t) = -\frac{1}{T_{gb}}\, b_g(t) + \eta_{gb}(t)$
In the formula, $f^e$ is the projection of the specific force in the e-frame, $\delta \gamma^e$ is the projection of the gravity deviation in the e-frame, $C_b^e$ is the direction cosine matrix from the body frame (b-frame) to the e-frame, $\omega_{ie}^e$ is the projection in the e-frame of the Earth's rotational angular velocity relative to the inertial frame, $T_x$ is the correlation time of the first-order Markov process, and $\eta_x$ is the driving noise of $x$.

2.2. Observation Model

Since the NHC stipulates that the lateral and vertical velocities of the vehicle are approximately 0, the constraint equation is expressed as follows [1]:
$v_x^b = 0 + \sigma_x, \qquad v_y^b = v_{od}^b + \sigma_{od}, \qquad v_z^b = 0 + \sigma_z$
where v x b and v z b are the lateral and vertical velocities of the vehicle in the b-frame, respectively; σ x , σ o d , and σ z are the lateral, forward, and vertical velocity noise of the vehicle in the b-frame, theoretically satisfying the relationships of E ( σ x ) = 0 , E ( σ o d ) = 0 , and E ( σ z ) = 0 , respectively. Therefore, the vehicle speed in the b-frame can be expressed as follows:
$v_{NHC}^b = \begin{bmatrix} 0 & v_{od} & 0 \end{bmatrix}^T$
In order to construct the observation equation of the NHC, we first need to convert the velocity error under the n-frame to the b-frame:
$v^b = C_n^b v^n$
The total differential of the above formula is processed to obtain the following formula:
$\delta v^b = M \phi^n + N \delta v^n$
Here, $M = C_n^b (v^n \times)$ and $N = C_n^b$; therefore, the NHC observation matrix is constructed as follows [16]:
$H_{NHC} = \begin{bmatrix} C_n^b (v^n \times) & C_n^b & 0_{3 \times 9} \end{bmatrix}$
According to [1,16], the NHC measurement update error equation is as follows:
$\delta Z_{NHC} = H_{NHC}\, \delta x + R_{NHC}$
where the observation vector is $Z_{NHC} = \begin{bmatrix} v_x^b & v_y^b - v_{od}^b & v_z^b \end{bmatrix}^T$ and the observation noise covariance matrix is $R_{NHC} = \mathrm{diag}(\sigma_x, \sigma_{od}, \sigma_z)$. In general, this covariance matrix is set to a fixed value based on experience, which is problematic when the vehicle motion state does not conform to the tightness of the speed constraint. Accordingly, $\sigma_x$ is the noise term that needs to be adaptively estimated and adjusted.

3. Adaptive Estimation of NHC Lateral Velocity Pseudo-Measurement Noise

Based on the findings in [9] and the vehicle turning detection algorithm proposed in [17], it is evident that the NHC lateral velocity noise is influenced by both the vehicle speed ($v_{od}$) and the vehicle state, that is, the change in the heading angle ($\delta_r$). To further determine the correlation between these two quantities and the NHC lateral velocity noise, this paper uses 3500 s of measured data collected on Lumo Road, Hongshan District, Wuhan City. The experimental scene includes five turning sections, uneven road sections, and multiple weak-signal scenarios (see the experimental analysis). The right velocity ($v_y$) and heading angle of the vehicle in the body frame (b-frame) are obtained by data post-processing. After binary linear regression fitting, the noise term is accurately and dynamically adjusted by error modeling, and the adaptive factor is controllable within a certain range. Since the NHC stipulates that the vehicle's right and vertical speeds are zero, the actual speeds in these two directions can be used as equivalent measurement noise.
According to the theory of multiple regression analysis and [9], the independent variables and dependent variables satisfy the following relationship:
➀ In theory, there is a causal relationship between noise, vehicle speed, and heading-angle error.
➁ Noise is a continuous variable, and there is no multicollinearity between vehicle speed and heading-angle error [18].
➂ The fitting residual should satisfy independence, normality, and homogeneity of variance.
Among the above conditions, ➀ and ➁ have been demonstrated; if condition ➂ holds, the regression model is applicable. Multivariate linear regression mainly focuses on the degree of correlation between the dependent variable and the independent variables (one or more numerical variables) and establishes a regression model for data prediction. However, high prediction accuracy is not the concern here; it is only necessary to verify the applicability of the binary linear regression model. The adjustment range of the adaptive factor is determined by the data fitting error. The analysis process [19] is shown in Table 1.
The binary linear regression model is established as follows:
$\sigma_x = \alpha * v_{od} + \beta * \delta_r + \varepsilon + \tau$
where $\sigma_x$ is the NHC lateral velocity noise; $\alpha$ and $\beta$ are the weighting factors of the forward velocity and heading-angle error, respectively; $\varepsilon$ is the constant coefficient; $\tau$ is the compensating adaptive factor; and $*$ represents multiplication. Taking the measured data of the above four sections (3500 s in total) as samples, a binary linear regression model is established using a mathematical tool, and the weighting factors and constant coefficients are obtained as shown in Table 2 below. The sample fit of the combined variables of noise, vehicle speed, and heading-angle error is shown in Figure 1 below, where the abscissa ($f(d_{head}, v)$) represents the binary function $\alpha * v_{od} + \beta * \delta_r$.
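As a sketch of how the weighting factors might be obtained, the least-squares fit of the binary regression model above takes only a few lines. The data and the recovered coefficients below are synthetic placeholders for illustration, not the measured values in Table 2:

```python
import numpy as np

# Hypothetical samples: forward speed v_od (m/s), heading-angle change d_r (rad),
# and the equivalent lateral-velocity noise sigma_x obtained in post-processing.
v_od = np.array([5.0, 8.0, 10.0, 12.0, 15.0, 18.0])
d_r = np.array([0.01, 0.02, 0.05, 0.03, 0.08, 0.10])
sigma_x = 0.004 * v_od + 0.9 * d_r + 0.02  # synthetic ground truth for the sketch

# Design matrix [v_od, d_r, 1] -> coefficients [alpha, beta, epsilon]
A = np.column_stack([v_od, d_r, np.ones_like(v_od)])
coef, residuals, rank, _ = np.linalg.lstsq(A, sigma_x, rcond=None)
alpha, beta, eps = coef
```

Since the synthetic samples are exactly linear, the fit recovers the generating coefficients; on real data the residuals feed the applicability tests discussed next.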
Based on the above fitting results, it is not difficult to see that a large number of data points are distributed near the fitting line. This is because the speed and heading-angle changes are artificially controlled; that is, the data have an unexplainable fixed variation, and the sample points cannot exhibit a purely functional change. By adjusting the adaptive factor, i.e., the uncertainty of prediction, the intercept of the fitting line can be changed to approximately coincide with the sample points, greatly reducing the fitting error. Because the noise prediction does not require high accuracy, it is not necessary to pay close attention to the goodness of fit ($R^2$) (R-Square). The relevant evaluation indicators are shown in Table 3 below.
Due to the unbounded nature of the residual sum of squares, it is impossible to find an accurate definition to measure its size. Therefore, the mean square error and the root mean square error are used to measure the difference between the predicted value and the true value [20]. The definition is expressed by Equations (14) and (15).
$MSE = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - f(x_i) \right)^2$
$RMSE = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( y_i - f(x_i) \right)^2}$
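Equations (14) and (15) translate directly into code; a minimal sketch:

```python
import numpy as np

def mse(y, y_pred):
    """Mean square error, Equation (14)."""
    return np.mean((np.asarray(y) - np.asarray(y_pred)) ** 2)

def rmse(y, y_pred):
    """Root mean square error, Equation (15): the square root of the MSE."""
    return np.sqrt(mse(y, y_pred))
```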
The MSE of the four groups of samples in the above table is less than 0.01, and the RMSE is less than 0.1, indicating that the sample regression fit is good and that the difference between the predicted value and the true value is small, which can be fully compensated for by the error adaptive factor. A residual normality test is conducted to verify the applicability of the binary regression model. It is generally believed that when the test probability is p > 0.05, the residuals conform to a normal distribution at the 95% significance level [21]. The four sample residual sets pass this test, so they are normal and the binary linear regression model is applicable. The normal fits of the four sample residual sets are shown in Figure 2.
To further validate the applicability of the model, it is also necessary to determine whether the residuals of the regression model are independent of each other. Residual independence is an important assumption of linear regression models, and the residuals not being independent may affect the validity of the model and its prediction accuracy [22]. In this paper, we use the Durbin–Watson test to test whether the residuals of the regression model have first-order autocorrelation. This statistic is determined by the following equation:
$DW = \frac{\sum_{t=2}^{n} (e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2}$
where e t is the residual of the tth observation and n is the sample size.
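The Durbin–Watson statistic above can be computed directly; a minimal sketch:

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
```

For independent residuals the statistic is close to 2; strongly positively correlated residuals push it toward 0, alternating residuals toward 4.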
The residuals were obtained by fitting the experiment to four sets of samples; then, the Durbin–Watson statistic was calculated, as summarized in Table 4.
The Durbin–Watson statistic takes values between 0 and 4, and its value is related to the autocorrelation of the residuals as follows:
➀ When DW ≈ 2, there is no first-order autocorrelation of the residuals; that is, the residuals are independent of each other, with no obvious dependence between them.
➁ When DW is close to 0, there is positive autocorrelation of the residuals. This means that two neighboring residuals are correlated in the same direction, indicating that the sample residuals are not independent of each other.
➂ When DW is close to 4, there is negative autocorrelation of the residuals. This indicates an inverse correlation between two neighboring residuals: one positive and one negative. From the table of sample residual statistics, it can be seen that the DW of all four groups of samples is close to 2, which indicates that the sample residuals are independent of each other, that the samples are well fit, and that this regression model is applicable.
The applicability of the model needs to be further judged in terms of homogeneity of variance (homoscedasticity), which requires that the error term has the same variance at different levels of the independent variables. In this paper, we check this property using the residual-versus-fitted-value plot method. The sample residuals and fitted values obtained by sample fitting are shown in Figure 3, which shows that the residuals are uniformly distributed around zero and have no obvious relationship with the fitted values, indicating that the variances are homogeneous.
Finally, based on the above results, we verified the applicability of the model in terms of sample residual normality, residual independence, and homogeneity of variance. The error between the predicted value and the real value is shown in Figure 4; the dynamic range of the error is (−0.3, 0.2). The error adaptive factor only needs to be adjusted within this range to compensate for the error of the established binary linear regression model, so that the NHC lateral velocity noise can be accurately set. Subsequent experiments show that adjusting the error adaptive factor within this range, especially at turning moments, reduces the positioning error to a certain extent. The model can achieve adaptive NHC noise adjustment, but it has some limitations. Firstly, the model is sensitive to extreme values, which may strongly affect the fitting results, making it necessary to preprocess the data in advance. Secondly, the multiple linear regression model is demanding on the data, and its applicability must be validated against multiple properties before use.
In summary, the binary linear regression model is suitable for the adaptive adjustment of NHC lateral velocity noise. This model not only establishes the equivalent relationship between noise, vehicle speed, and heading-angle error but also more accurately determines the setting method of NHC lateral velocity noise and determines the adjustment range of the error adaptive factor.
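Putting the pieces together, a hypothetical noise setter might look as follows. The coefficients `ALPHA`, `BETA`, `EPS` and the function name are illustrative placeholders (the real values come from the regression fit in Table 2), and the adaptive factor is clamped to the (−0.3, 0.2) range identified above:

```python
import numpy as np

# Placeholder coefficients standing in for the fitted values in Table 2.
ALPHA, BETA, EPS = 0.004, 0.9, 0.02

def nhc_lateral_noise(v_od, d_r, tau=0.0):
    """Predict the NHC lateral velocity noise from forward speed v_od and
    heading-angle change d_r, with the error adaptive factor tau clamped
    to the (-0.3, 0.2) range found from the prediction-error analysis."""
    tau = float(np.clip(tau, -0.3, 0.2))
    sigma = ALPHA * v_od + BETA * d_r + EPS + tau
    return max(sigma, 1e-4)  # keep the noise value strictly positive
```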

4. Adaptive Filtering Algorithm Based on Gaussian–Student’s T Mixed Distribution NHC Noise Model

4.1. NHC Lateral Velocity Noise Modeling

The experimental data used in this part are 2000 s long. The experimental equipment and scene are described in the fourth section of the experimental analysis. During this period, the vehicle passes through five large turning sections around 193 s, 415 s, 533 s, 790 s, and 996 s; the section at 533 s is a continuous turn. The vehicle also passes over some road potholes. The NHC lateral noise model can be analyzed from the following two perspectives:
➀ A Quantile–Quantile plot (Q-Q plot) directly reflects the degree of agreement between a random sequence and the Gaussian distribution [23,24]: the closer the sample curve is to the Gaussian reference line, the closer the sample sequence is to Gaussian. After obtaining the NHC lateral velocity noise, Q-Q plots for 100–250 s, 350–600 s, and 750–1000 s and for the overall experimental data are drawn as shown in Figure 5. The first three periods together contain the five turning sections. It can be seen that in most periods, while the vehicle drives smoothly, the NHC lateral velocity noise can be modeled by a Gaussian distribution. However, once the vehicle turns or slides, the Q-Q plot shows that the Gaussian model is unreasonable.
➁ The probability density of the NHC lateral velocity noise is shown in Figure 6, fitted with both a Gaussian distribution and a Student's T distribution. It can be seen from the figure that the noise has obvious thick-tail characteristics, and the Gaussian curve cannot correctly fit the abnormal noise, so using the Gaussian distribution alone is unreasonable. Although the T-distribution curve accounts for the tail noise, the noise is close to Gaussian when the vehicle is running smoothly; in that case the velocity noise matrix error is too large to make full use of the NHC observation information, which leads to a loss of accuracy. Based on these considerations, combined with Figure 4, it can be concluded that modeling the NHC lateral velocity noise as a Gaussian–Student's T mixed distribution better reflects the actual noise distribution characteristics and provides more accurate prior information.
Refs. [25,26] proposed a Gaussian–Student's T mixed noise model, taking the conjugate prior of the Bernoulli distribution to be a Beta distribution. To facilitate representing the mixed probability model in hierarchical Gaussian form, the mixing probability ($\tau_k$) is approximated as Beta-distributed in this paper. On this basis, considering both the case in which the NHC is well observed and the case in which the NHC does not hold, the NHC lateral velocity noise is modeled as a Gaussian–Student's T mixture with a mixing probability, and the mixing probability is adaptively adjusted according to the vehicle turning and singular-velocity detection results. The distribution is described by Equation (17):
$p(v_k) = \tau_k\, N(v_k; 0, R_k) + (1 - \tau_k)\, St(v_k; 0, R_k, \zeta)$
Here, $N(v_k; 0, R_k)$ denotes the Gaussian distribution with zero mean and covariance $R_k$; $St(v_k; 0, R_k, \zeta)$ denotes the zero-mean Student's T distribution, in which $R_k$ is the scale matrix and $\zeta$ is the degree of freedom describing the thick-tailed degree of the noise.
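The mixture density of Equation (17) is straightforward to evaluate numerically. The univariate sketch below reduces the scale matrix $R_k$ to a scalar variance for illustration:

```python
import math

def gauss_pdf(v, R):
    """Zero-mean Gaussian density with variance R."""
    return math.exp(-v * v / (2.0 * R)) / math.sqrt(2.0 * math.pi * R)

def student_t_pdf(v, R, dof):
    """Zero-mean Student's T density with scale R and dof degrees of freedom."""
    c = math.exp(math.lgamma((dof + 1) / 2) - math.lgamma(dof / 2))
    c /= math.sqrt(dof * math.pi * R)
    return c * (1.0 + v * v / (dof * R)) ** (-(dof + 1) / 2)

def mixed_pdf(v, R, dof, tau):
    """Equation (17): tau * N(v; 0, R) + (1 - tau) * St(v; 0, R, dof)."""
    return tau * gauss_pdf(v, R) + (1.0 - tau) * student_t_pdf(v, R, dof)
```

With $\tau_k = 1$ the mixture collapses to the Gaussian; with $\tau_k = 0$ the Student's T component dominates and assigns far more mass to tail samples.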

4.2. Mixed State Space Model with Time-Varying and Heavy-Tailed Measurement Noise

The linear discrete state space model can be described as (18) and (19):
$x_k = \phi_{k|k-1} x_{k-1} + \Gamma_{k-1} \omega_{k-1}$
$z_k = H_k x_k + v_k$
In the formula, subscripts k 1 and k denote times t k 1 and t k , respectively, and x k denotes the state vector (n × 1) at time t k . ϕ k | k 1 is the one-step state transition matrix (n × n) from t k 1 to t k . Γ k 1 is the system noise driving matrix (n × s), z k is the observation vector at time t k (m × 1), H k is the observation matrix (m × n), ω k 1 is the system excitation noise (s × 1), and v k is the measurement noise (m × 1).
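As a concrete instance of the state space model (18) and (19), a toy two-state system can be simulated as follows; the matrices are illustrative, not the paper's 15-state model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy constant-velocity system as an instance of Equations (18)-(19).
dt = 0.1
Phi = np.array([[1.0, dt], [0.0, 1.0]])   # one-step state transition matrix
Gamma = np.array([[0.5 * dt**2], [dt]])   # system noise driving matrix
H = np.array([[1.0, 0.0]])                # observation matrix
R = 0.01                                  # measurement noise variance

x = np.zeros(2)
states, obs = [], []
for _ in range(100):
    w = rng.standard_normal(1) * 0.1        # system excitation noise omega_{k-1}
    x = Phi @ x + Gamma @ w                 # state propagation, Eq. (18)
    v = rng.standard_normal() * np.sqrt(R)  # measurement noise v_k
    states.append(x.copy())
    obs.append(H @ x + v)                   # observation, Eq. (19)
```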
According to Equation (17), the measurement noise and the one-step prediction error are modeled as Gaussian–Student's T mixed distributions. Since the Student's T distribution can be regarded as an infinite superposition of Gaussian distributions, the likelihood PDF and the one-step prediction PDF of the state space model are expressed by (20) and (21), respectively:
$p(z_k \mid x_k, \tau_k, \mu_k) = \tau_k\, N(z_k; H_k x_k, R_k) + (1 - \tau_k)\, N(z_k; H_k x_k, R_k / \mu_k)$
$p(x_k \mid z_{1:k-1}, \gamma_k, \psi_k) = \gamma_k\, N(x_k; \hat x_{k|k-1}, \hat P_{k|k-1}) + (1 - \gamma_k)\, N(x_k; \hat x_{k|k-1}, \hat P_{k|k-1} / \psi_k)$
where $\mu_k$ and $\psi_k$ are auxiliary random variables that obey Gamma distributions ($G(\cdot)$) within the conjugate exponential family. Gamma distributions are also selected as the prior distributions of $\zeta$ and $\sigma$, expressed as (22)–(25), respectively:
$p(\mu_k) = G\left(\mu_k; \frac{\zeta}{2}, \frac{\zeta}{2}\right)$
$p(\psi_k) = G\left(\psi_k; \frac{\sigma}{2}, \frac{\sigma}{2}\right)$
$p(\zeta) = G(\zeta; \alpha_1, \beta_1)$
$p(\sigma) = G(\sigma; \alpha_2, \beta_2)$
In Equation (17), the Gaussian–Student's T mixed distribution is expressed in hierarchical Gaussian form. To further quantitatively distinguish a good NHC observation from NHC failure, the Bernoulli random variables ($\eta$, $\rho$) are introduced to represent the two cases, and the Bernoulli probability mass function is used to fuse the two-layer Gaussian expressions. The likelihood PDF (20) and the one-step prediction error PDF (21) are therefore rewritten as Equations (26) and (27), respectively:
$p(z_k \mid x_k) = \iint p(\tau_k)\, p(\eta \mid \tau_k) \left[ N(z_k; H_k x_k, R_k) \right]^{\eta} \left[ N\left(z_k; H_k x_k, R_k / \mu_k\right) G\left(\mu_k; \frac{\zeta}{2}, \frac{\zeta}{2}\right) \right]^{1-\eta} d\mu_k\, d\tau_k$
$p(x_k \mid z_{1:k-1}) = \iint p(\gamma_k)\, p(\rho \mid \gamma_k) \left[ N(x_k; \hat x_{k|k-1}, \hat P_{k|k-1}) \right]^{\rho} \left[ N\left(x_k; \hat x_{k|k-1}, \hat P_{k|k-1} / \psi_k\right) G\left(\psi_k; \frac{\sigma}{2}, \frac{\sigma}{2}\right) \right]^{1-\rho} d\psi_k\, d\gamma_k$
where $\eta \in \{1, 0\}$ and $\rho \in \{1, 0\}$, and the corresponding probability mass functions are expressed as (28) and (29):
$p(\eta \mid \tau_k) = \tau_k^{\eta}\, (1 - \tau_k)^{1-\eta}$
$p(\rho \mid \gamma_k) = \gamma_k^{\rho}\, (1 - \gamma_k)^{1-\rho}$
The mixing probability ($\tau_k$) is updated at each iteration based on the principle of maximizing the variational lower bound. The update formula is expressed as Equation (30):
$\tau_k = \frac{1}{N} \sum_{n=1}^{N} E_{q(z_n)}[z_{nk}]$
where $z_n$ is an indicator variable denoting which component the current datum belongs to, and $E_{q(z_n)}[z_{nk}]$ is its expectation under the variational distribution $q(z_n)$. The equation can be decomposed into two parts: first, the posterior probability of the kth component is calculated for each datum in the sliding window; then, the mixing probability is updated based on the principle of maximizing the variational lower bound. With successive iterations, the mixing probability converges to a stable value that is most consistent with the actual state of the vehicle's motion, achieving an accurate fit to the data distribution. To prevent overfitting, a prior distribution, such as a Dirichlet distribution, is sometimes placed on the mixing probability. Finally, the mixing probability is normalized based on the maximum value, enabling its adaptive adjustment.
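One concrete way to realize an update of the form of Equation (30) is to average the posterior responsibilities of the Gaussian component over a window of samples. The sketch below uses univariate densities and is illustrative only:

```python
import math
import numpy as np

def gauss_pdf(v, R):
    return np.exp(-v**2 / (2.0 * R)) / math.sqrt(2.0 * math.pi * R)

def t_pdf(v, R, dof):
    c = math.exp(math.lgamma((dof + 1) / 2) - math.lgamma(dof / 2))
    c /= math.sqrt(dof * math.pi * R)
    return c * (1.0 + v**2 / (dof * R)) ** (-(dof + 1) / 2)

def update_tau(v, R, dof, tau):
    """One fixed-point update in the spirit of Eq. (30): the new mixing
    probability is the mean posterior probability (responsibility) that
    each windowed sample came from the Gaussian component."""
    v = np.asarray(v, dtype=float)
    g = tau * gauss_pdf(v, R)
    t = (1.0 - tau) * t_pdf(v, R, dof)
    return float(np.mean(g / (g + t)))
```

When the window is dominated by well-behaved (near-Gaussian) noise, the responsibility average pushes $\tau_k$ upward; heavy-tailed windows pull it down.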
Based on the above theory, the mixed-state hierarchical Gaussian model can be described as follows. The NHC lateral velocity measurement noise and the one-step prediction error are modeled as Gaussian–Student's T mixed distributions, with the Student's T distribution approximated as a hierarchical Gaussian distribution with an adjustable scale matrix, and the modeling of both terms is adjusted through the mixing probability. When the NHC is well observed, that is, the vehicle is driving smoothly, they are modeled as Gaussian distributions, i.e., $\tau_k, \gamma_k = 1$; when the NHC fails to a certain extent, that is, the vehicle has a singular speed, especially at turning moments, they are modeled as Student's T distributions, i.e., $\tau_k, \gamma_k = 0$.

4.3. Parameter Update and Posterior Approximation Based on Variational Bayesian (VB)

The VB method is an approximate method for obtaining the joint posterior probability density of a state vector and unknown parameters [27]. On the basis of the established state space model, the VB method makes full use of the observation information and the prior information to find an approximate solution ($q(X, \theta)$) of the joint posterior probability density of the state vector to be estimated and the unknown parameters. In this way, it approximates the exact joint posterior probability density $p(X, \theta \mid Z)$, where $X$ represents the state vector to be estimated, $\theta$ represents the unknown parameters, and $Z$ represents the known observation vector [28].
The VB method models the approximate solution ($q(X, \theta)$) in the conjugate distribution form of the likelihood function; using the conjugacy, the calculation can be simplified to obtain an analytical solution for the posterior distribution. In addition, the VB method applies a mean-field factorization to the coupling relationship between the variables, which can be expressed as shown in Equation (31):
$q(X, \theta) = q(X) \prod_{i=1}^{n} q(\theta_i)$
The Kullback–Leibler divergence (KLD) is a measure of the difference between probability distributions. The VB method minimizes the KLD between the two distributions so that $q(X, \theta)$ approaches $p(X, \theta \mid Z)$ as closely as possible:
$q(\phi) = \arg\min \mathrm{KLD}\left( q(X) \prod_{i=1}^{n} q(\theta_i)\, \middle\|\, p(X, \theta_i \mid z_{1:k}) \right)$
Here, $q(\phi)$ is the approximate posterior probability density, and $z_{1:k}$ is the observation information from time 1 to $k$. In Ref. [29], Equation (32) was derived in detail using the VB method, and it was concluded that the posterior probability density of any element of the state vector to be estimated and the unknown parameters can be expressed by Equation (33):
$\ln q(\varepsilon) = E_{\Theta - \varepsilon}[\ln p(\Theta, z_{1:k})] + \mathrm{const}$
In the formula, $\Theta$ is the set of the state vector to be estimated and the unknown parameters, that is, $\Theta = \{X, \theta_i\}$; $\varepsilon$ is any element of the set; and $E_{\Theta - \varepsilon}[\cdot]$ represents the expectation over all random variables in the set except $\varepsilon$ [30].
The above formula is the core of the VB method. Given the prior information of the parameters to be estimated, the update formula of each parameter, that is, the current posterior information, can be obtained through Equation (33). The posterior information is then used as the prior information of the next epoch and fed back into the filter, and the iteration continues. Since the above formula has no analytical solution, fixed-point iteration is usually used to find a locally optimal solution for $q(\varepsilon)$ [30,31,32].
Since the joint posterior probability density function ($p(x_k, \mu_k, \psi_k \mid z_{1:k})$) usually has no closed-form analytical solution, following the idea of the VB method, a freely factorized approximation must be found [33,34], as shown in Equation (34):
$p(x_k, \mu_k, \psi_k \mid z_{1:k}) \approx q(x_k)\, q(\mu_k)\, q(\psi_k)$
The next goal is to find the optimal solution of Equation (34) using the KLD method, with $\Theta = \{x_k, \mu_k, \psi_k\}$, while the joint PDF can be written as Equations (35)–(38):
$p(\Theta, z_{1:k}) = p(z_{1:k-1})\, p(z_k \mid x_k)\, p(x_k \mid z_{1:k-1})$
$p(z_{1:k-1}) = N(z_{1:k-1}; H_k x_{1:k-1}, R_k)$
$p(z_k \mid x_k) = \left[ N(z_k; H_k x_k, R_k) \right]^{\eta} \left[ N\left(z_k; H_k x_k, R_k / \mu_k\right) G\left(\mu_k; \frac{\zeta}{2}, \frac{\zeta}{2}\right) \right]^{1-\eta} \tau_k^{\eta}\, (1 - \tau_k)^{1-\eta}$
$p(x_k \mid z_{1:k-1}) = \left[ N(x_k; \hat x_{k|k-1}, \hat P_{k|k-1}) \right]^{\rho} \left[ N\left(x_k; \hat x_{k|k-1}, \hat P_{k|k-1} / \psi_k\right) G\left(\psi_k; \frac{\sigma}{2}, \frac{\sigma}{2}\right) \right]^{1-\rho} \gamma_k^{\rho}\, (1 - \gamma_k)^{1-\rho}$
By substituting Equation (35) into Equation (33), the state vector update in Equation (39) can be obtained as
$q^{(i+1)}(x_k) = \mathrm{N}\!\left(x_k;\, \hat{x}_{k|k}^{(i+1)},\, \hat{P}_{k|k}^{(i+1)}\right)$  (39)
where $q^{(i+1)}(\cdot)$ represents the approximation at the $(i+1)$th iteration. The iterated mean of the state vector, $\hat{x}_{k|k}^{(i+1)}$, and the covariance matrix, $\hat{P}_{k|k}^{(i+1)}$, can be determined by Equations (40)–(42).
$K_k^{(i+1)} = \hat{P}_{k|k-1}^{(i+1)} H_k^{T} \left(H_k \hat{P}_{k|k-1}^{(i+1)} H_k^{T} + \hat{R}_k^{(i+1)}\right)^{-1}$  (40)
$\hat{x}_{k|k}^{(i+1)} = \hat{x}_{k|k-1} + K_k^{(i+1)}\left(z_k - H_k \hat{x}_{k|k-1}\right)$  (41)
$\hat{P}_{k|k}^{(i+1)} = \hat{P}_{k|k-1}^{(i+1)} - K_k^{(i+1)} H_k \hat{P}_{k|k-1}^{(i+1)}$  (42)
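As a concrete illustration, Equations (40)–(42) are a standard Kalman measurement update evaluated with the iteration-dependent covariance and noise matrices. A minimal Python sketch follows; the function and variable names are our own, chosen for illustration.

```python
import numpy as np

# Hedged sketch of the iterated measurement update (Equations (40)-(42)):
# a standard Kalman gain/mean/covariance step evaluated with the
# iteration-dependent prediction covariance P_i and noise matrix R_i
# produced by the variational loop.
def vb_measurement_update(x_pred, P_i, H, R_i, z):
    S = H @ P_i @ H.T + R_i                # innovation covariance
    K = P_i @ H.T @ np.linalg.inv(S)       # Eq. (40): gain
    x_upd = x_pred + K @ (z - H @ x_pred)  # Eq. (41): mean
    P_upd = P_i - K @ H @ P_i              # Eq. (42): covariance
    return x_upd, P_upd, K
```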
According to the definition of the Bernoulli distribution, the expectations of the Bernoulli random variables $\eta$ and $\rho$ are expressed in Equations (43) and (44):
$\mathrm{E}(\eta) = \sum_{x \in \mathrm{Val}(\eta)} x\, p(x) = \tau_k$  (43)
$\mathrm{E}(\rho) = \sum_{x \in \mathrm{Val}(\rho)} x\, p(x) = \gamma_k$  (44)
The mixing probabilities $\tau_k$ and $\gamma_k$ are set according to whether the NHC has failed to a certain degree. When the vehicle is running smoothly and there is no singular velocity in the lateral or vertical direction, $\tau_k = \gamma_k = 1$; otherwise, they are set to 0.
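A minimal sketch of how the mixing probabilities could be set from the detected motion state; the thresholds below are illustrative placeholders, not the values used in the paper.

```python
# Hedged sketch: setting the mixing probabilities tau_k, gamma_k from the
# vehicle motion state, as described in the text. Threshold values are
# assumptions for illustration only.
HEADING_RATE_THRESH = 5.0   # deg/s, assumed turn threshold
LAT_VEL_THRESH = 0.3        # m/s, assumed singular lateral velocity
VERT_VEL_THRESH = 0.3       # m/s, assumed singular vertical velocity

def mixing_probabilities(heading_rate, v_lateral, v_vertical):
    smooth = (abs(heading_rate) < HEADING_RATE_THRESH
              and abs(v_lateral) < LAT_VEL_THRESH
              and abs(v_vertical) < VERT_VEL_THRESH)
    # tau_k = gamma_k = 1 for smooth driving (Gaussian branch), else 0
    tau_k = gamma_k = 1.0 if smooth else 0.0
    return tau_k, gamma_k
```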
In Equations (40)–(42), the one-step prediction covariance matrix $\hat{P}_{k|k-1}^{(i+1)}$ and the measurement noise matrix $\hat{R}_k^{(i+1)}$ can be determined by Equations (45) and (46) [35,36].
$\hat{P}_{k|k-1}^{(i+1)} = \begin{cases} \hat{P}_{k|k-1}, & \mathrm{E}(\eta) = 1 \\ \hat{P}_{k|k-1}\big/\mathrm{E}^{(i)}[\mu_k], & \mathrm{E}(\eta) = 0 \end{cases}$  (45)
$\hat{R}_k^{(i+1)} = \begin{cases} \hat{R}_k, & \mathrm{E}(\rho) = 1 \\ \hat{R}_k\big/\mathrm{E}^{(i)}[\psi_k], & \mathrm{E}(\rho) = 0 \end{cases}$  (46)
According to Equation (35), q ( i + 1 ) ( μ k ) is updated by the Gamma distribution, and Equations (47)–(50) can be obtained as follows:
$q^{(i+1)}(\mu_k) = \mathrm{G}\!\left(\mu_k;\, \xi_k^{(i+1)},\, \sigma_k^{(i+1)}\right)$  (47)
$\xi_k^{(i+1)} = \begin{cases} \tfrac{\zeta}{2}, & \mathrm{E}(\eta) = 1 \\ \tfrac{\zeta}{2} + 0.5\, n, & \mathrm{E}(\eta) = 0 \end{cases}$  (48)
$\sigma_k^{(i+1)} = \begin{cases} \tfrac{\zeta}{2}, & \mathrm{E}(\eta) = 1 \\ \tfrac{\zeta}{2} + 0.5\, \mathrm{tr}\!\left(A_k^{(i+1)} \hat{P}_{k|k-1}^{-1}\right), & \mathrm{E}(\eta) = 0 \end{cases}$  (49)
$A_k^{(i+1)} = \hat{P}_{k|k}^{(i+1)} + \left(\hat{x}_{k|k}^{(i+1)} - \hat{x}_{k|k-1}\right)\left(\hat{x}_{k|k}^{(i+1)} - \hat{x}_{k|k-1}\right)^{T}$  (50)
where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix, and $\mathrm{E}^{(i+1)}[\mu_k]$ is updated by Equation (51).
$\mathrm{E}^{(i+1)}[\mu_k] = \xi_k^{(i+1)} \big/ \sigma_k^{(i+1)}$  (51)
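The $q(\mu_k)$ update of Equations (47)–(51) can be sketched as follows. This is a hedged reading: it assumes the prior $\mathrm{G}(\zeta/2, \zeta/2)$ is retained on the Gaussian branch ($\mathrm{E}(\eta) = 1$), while the shape and rate absorb the state dimension $n$ and the trace term on the heavy-tailed branch.

```python
import numpy as np

# Hedged sketch of the q(mu_k) update (Equations (47)-(51)). The branch on
# E(eta) reflects the Bernoulli mixing described in the text; when E(eta)=1
# (nominal Gaussian), the Gamma prior G(zeta/2, zeta/2) is assumed retained.
def update_q_mu(e_eta, zeta, n, A, P_pred):
    if e_eta == 1.0:                       # nominal Gaussian branch
        xi, sigma = 0.5 * zeta, 0.5 * zeta
    else:                                  # heavy-tailed branch
        xi = 0.5 * zeta + 0.5 * n
        sigma = 0.5 * zeta + 0.5 * np.trace(A @ np.linalg.inv(P_pred))
    e_mu = xi / sigma                      # Eq. (51): E[mu_k]
    return xi, sigma, e_mu
```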
Similarly, $q^{(i+1)}(\psi_k)$ is also updated as a Gamma distribution, and $\mathrm{E}^{(i+1)}[\psi_k]$ is updated as follows (Equations (52)–(56)):
$q^{(i+1)}(\psi_k) = \mathrm{G}\!\left(\psi_k;\, \omega_k^{(i+1)},\, \varphi_k^{(i+1)}\right)$  (52)
$\omega_k^{(i+1)} = \begin{cases} \tfrac{\sigma}{2}, & \mathrm{E}(\rho) = 1 \\ \tfrac{\sigma}{2} + 0.5\, m, & \mathrm{E}(\rho) = 0 \end{cases}$  (53)
$\varphi_k^{(i+1)} = \begin{cases} \tfrac{\sigma}{2}, & \mathrm{E}(\rho) = 1 \\ \tfrac{\sigma}{2} + 0.5\, \mathrm{tr}\!\left(B_k^{(i+1)} \big(\hat{R}_k^{(i)}\big)^{-1}\right), & \mathrm{E}(\rho) = 0 \end{cases}$  (54)
$B_k^{(i+1)} = H_k \hat{P}_{k|k}^{(i+1)} H_k^{T} + \left(z_k - H_k \hat{x}_{k|k}^{(i+1)}\right)\left(z_k - H_k \hat{x}_{k|k}^{(i+1)}\right)^{T}$  (55)
$\mathrm{E}^{(i+1)}[\psi_k] = \omega_k^{(i+1)} \big/ \varphi_k^{(i+1)}$  (56)
We analyze the computational complexity of the algorithm in terms of the number of model parameters in the initialization phase; the time complexity of computing the prior mean and the noise covariance from the prior distribution of the predicted state of the system model; and the time complexity of the variational-approximation phase and of the update operation. A comparison with conventional filtering algorithms, such as the Kalman filter and the particle filter, is summarized in Table 5. The computational complexity of the proposed algorithm is higher than that of conventional filtering algorithms; the cost of greater computing power is exchanged for better filtering performance [37].
The computational load of the algorithm can also be considered in terms of its memory demand, which consists of the posterior probability matrix used for updating the mixing probabilities within the sliding window and the variational parameters. Assuming the probability matrix has size $M \times N$, each mixing probability has size $K$, the covariance matrix is $A \times A$, and the remaining parameters are $B$-dimensional, the memory demand of the algorithm is $O(M \times N + K \times A^2 + K \times B)$. The time complexity of the algorithm is related to the number of iterations to convergence ($T$), the data size ($N$), the data dimension ($M$), and the number of variational parameters ($K$); the total time complexity is $O(T \times N \times K \times M^2)$. The variational Bayesian filtering algorithm is generally more complex than other algorithms, and its complexity depends mainly on the matrix operations of the mixing-probability update. Therefore, in practice, we generally reduce the data dimension to lower the actual complexity, or monitor the variational lower bound during the mixing-probability update and terminate the iteration early once it converges.
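The early-termination idea can be sketched generically: iterate the fixed-point map and stop once successive estimates change by less than a tolerance, a cheap proxy for monitoring the variational lower bound. The function below is illustrative, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of early termination for a fixed-point iteration: stop when
# the relative change of the estimate falls below a tolerance instead of
# always running the full iteration budget.
def iterate_with_early_stop(step, x0, max_iter=50, tol=1e-6):
    x = x0
    for i in range(max_iter):
        x_new = step(x)
        # relative-change stopping criterion (proxy for lower-bound change)
        if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x)):
            return x_new, i + 1          # converged early
        x = x_new
    return x, max_iter
```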
So far, all parameters of the adaptive filtering algorithm based on the VB method have been updated. The procedure of the Ga-St VBAKF algorithm proposed in this section is summarized in Algorithm 1 [38,39,40].
Algorithm 1: Ga-St VBAKF algorithm procedure
Input: x ^ k 1 | k 1 , P ^ k 1 | k 1 , ϕ k | k 1 , Q k 1 , R k , z k ,
            H k , n,m, σ , ζ , τ k , γ k , a _ i t e , b _ i t e , c _ i t e , d _ i t e ,N
Time Update: Updated according to standard Kalman filter algorithm
            x ^ k | k 1 , P ^ k | k 1
Measurement Update:
1. Initialization
E [ ψ k ] , E [ μ k ] , E ( η ) , E ( ρ ) , a _ i t e , b _ i t e , c _ i t e , d _ i t e
2. Iteration
for i = 1:N
➀ Measurement update according to Equations (40)–(42)
➁ $q^{(i+1)}(\mu_k)$ and $q^{(i+1)}(\psi_k)$ are updated according to the Gamma distribution.
➂ According to Equations (43) and (44), $\mathrm{E}(\eta)$ is defined as
    $\Pr_1(\Gamma=1)/(\Pr_1(\Gamma=1)+\Pr_1(\Gamma=0))$, and $\mathrm{E}(\eta)$ and $\mathrm{E}(\rho)$ are updated.
     Here, $\Pr_1(\Gamma=1,0)$ is updated by the Gamma distribution with
      $a\_ite$, $b\_ite$, $c\_ite$, $d\_ite$, which are then updated in turn.
      $A_k^{(i+1)}$ and $B_k^{(i+1)}$ are updated according to Equations (50) and (55).
      $\xi_k^{(i+1)}$, $\sigma_k^{(i+1)}$, $\omega_k^{(i+1)}$, $\varphi_k^{(i+1)}$ are updated according to Equations (48), (49),
    (53), and (54).
      $\mathrm{E}^{(i+1)}[\mu_k]$ and $\mathrm{E}^{(i+1)}[\psi_k]$ are updated according to Equations (51) and (56).
➃ Updated values will be used in the next iteration.
3. End of loop: $\hat{x}_{k|k} = \hat{x}_{k|k}^{(N)}$, $\hat{P}_{k|k} = \hat{P}_{k|k}^{(N)}$.
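Pulling the pieces together, one measurement epoch of a filter of this form might look like the following hedged Python sketch. The degrees-of-freedom parameters (`zeta`, `sigma0`) and the iteration count `N` are illustrative defaults, not the paper's settings, and the branch structure is one possible reading of Equations (45)–(56).

```python
import numpy as np

# Hedged, self-contained sketch of one measurement epoch of a Ga-St
# VBAKF-style loop (cf. Algorithm 1). Not the authors' implementation:
# zeta, sigma0, and N are illustrative, and the branches follow one
# reading of Equations (45)-(56).
def ga_st_vbakf_epoch(x_pred, P_pred, z, H, R, tau_k, gamma_k,
                      zeta=5.0, sigma0=5.0, N=10):
    n, m = x_pred.size, z.size
    e_eta, e_rho = tau_k, gamma_k          # Eqs. (43)-(44)
    e_mu, e_psi = 1.0, 1.0                 # initial scale expectations
    x_upd, P_upd = x_pred, P_pred
    for _ in range(N):
        # Eqs. (45)-(46): scale P and R on the heavy-tailed branch
        P_i = P_pred if e_eta == 1.0 else P_pred / e_mu
        R_i = R if e_rho == 1.0 else R / e_psi
        # Eqs. (40)-(42): Kalman measurement update
        K = P_i @ H.T @ np.linalg.inv(H @ P_i @ H.T + R_i)
        x_upd = x_pred + K @ (z - H @ x_pred)
        P_upd = P_i - K @ H @ P_i
        # Eqs. (50), (55): auxiliary matrices
        dx = (x_upd - x_pred).reshape(-1, 1)
        A = P_upd + dx @ dx.T
        dz = (z - H @ x_upd).reshape(-1, 1)
        B = H @ P_upd @ H.T + dz @ dz.T
        # Eqs. (47)-(51) and (52)-(56): Gamma updates and expectations
        if e_eta != 1.0:
            xi = 0.5 * zeta + 0.5 * n
            sg = 0.5 * zeta + 0.5 * np.trace(A @ np.linalg.inv(P_pred))
            e_mu = xi / sg
        if e_rho != 1.0:
            om = 0.5 * sigma0 + 0.5 * m
            ph = 0.5 * sigma0 + 0.5 * np.trace(B @ np.linalg.inv(R))
            e_psi = om / ph
    return x_upd, P_upd
```

With $\tau_k = \gamma_k = 1$ the loop reduces to a single standard Kalman update; with $\tau_k = \gamma_k = 0$ a large innovation inflates the effective $R$, shrinking the gain, which is the intended robustness to thick-tailed measurement noise.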

5. Algorithm Verification

The experiments reported in this paper are vehicle-mounted GNSS/INS integrated navigation and positioning experiments, and the test data were collected by a self-developed low-cost integrated GNSS/INS navigation module, in which the MEMS-grade IMU chip, the GNSS chip, the microsensor chip, and the microprocessor are integrated into a single module. Some parameters of the IMU chip are shown in Table 6. To verify the performance of the proposed algorithm, this paper uses the positioning results of a high-precision NovAtel SPAN-CPT navigation and positioning instrument as the reference benchmark. This is an integrated INS/GNSS navigation and positioning system that houses NovAtel's high-end GNSS board and an IMU consisting of a fiber-optic gyro and a MEMS accelerometer; its performance parameters are shown in Table 7, Table 8 and Table 9. Before verifying the performance of the algorithm, we first performed a series of preprocessing steps on the data. Firstly, the vehicle speed and heading angle in the data collected by the GNSS/INS module were used to obtain the heading-angle change and the vehicle speed in the carrier coordinate system for the fitting experiments, and the fitted noise was used as the initial noise value for adaptive adjustment in the positioning experiments. Secondly, the turning moments of the vehicle were obtained by the turn detection algorithm, and the experiments focused on the change in vehicle positioning accuracy at these moments. With this positioning experiment, we aim to verify that the algorithm maximizes the performance of the NHC in turning sections and improves the GNSS/INS positioning results in complex scenes.
The data used in this paper were collected by vehicle-mounted INS/GNSS integrated navigation equipment in a complex urban environment, with a total duration of 2000 s. The driving path is shown in Figure 7. It includes five turning sections, one of which is a continuous-turning section (red marks); three sections of pavement potholes (blue marks); and eight weak-signal scenarios such as boulevards, tunnels, and mountainside roads. The driving environment is complex, and the collected data can fully exercise the adaptive adjustment method of NHC lateral noise and the Ga-St VBAKF algorithm.

5.1. Experimental Design

In order to verify the effectiveness of the proposed algorithm, this paper sets up two experimental groups and five control groups, as shown in Table 10.
In the experiment in which the GNSS signal is effective, experimental group 1 and control group 1 are used to illustrate the effectiveness of the adaptive adjustment method of NHC lateral velocity noise. Experimental group 1 and control group 2 are used to illustrate the superiority of the Ga-St VBAKF algorithm. Control group 3 serves as the reference for experimental group 1 and control groups 1 and 2 to illustrate the effectiveness of the NHC lateral velocity noise adaptive adjustment method and the Ga-St VBAKF algorithm. According to the vehicle turning judgment algorithm and speed detection method presented in the second section, in this 2000 s of data, the vehicle turns at 193 s, 415 s, 533 s, 790 s, and 996 s, and there are 22 instances of singular lateral velocity and 28 instances of singular vertical velocity. The turning interval is extended by 15 s before and after the time point at which the heading-angle change and the vehicle speed reach their thresholds. In order to verify the applicability of the algorithm when the vehicle is driving in a complex scene, especially in a turning state or with a singular velocity, we designed experimental group 2, in which a 30 s GNSS signal interruption is artificially simulated before and after the five turning sections; this demonstrates the ability of the NHC to assist pure INS/OD navigation. During the GNSS outage, experimental group 2, control group 4, and control group 5 are used to illustrate the applicability and effectiveness of the adaptive combined filtering algorithm in complex environments. Experimental groups 1 and 2 are also compared to illustrate the influence of the GNSS signal on the combined filtering algorithm. In this paper, the maximum absolute value of the estimation error and the root mean square of the estimation error are used to quantitatively discuss the estimation accuracy of each group.
The definition is shown in Equation (15).
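These two statistics can be computed as follows; this is a trivial sketch of the metrics, with Equation (15) in the paper providing the formal definition.

```python
import numpy as np

# Hedged sketch of the accuracy metrics used in the evaluation
# (cf. Equation (15)): maximum absolute error and root mean square
# error of an estimate series against a reference series.
def error_metrics(estimate, reference):
    err = np.asarray(estimate, dtype=float) - np.asarray(reference, dtype=float)
    max_abs = np.max(np.abs(err))            # worst-case error
    rmse = np.sqrt(np.mean(err ** 2))        # root mean square error
    return max_abs, rmse
```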
It should be noted that, apart from the difference in the NHC pseudo-measurement noise covariance matrix, the remaining parameters that affect the accuracy (such as the system noise covariance matrix, GNSS position observations, odometer observations, and the transition matrix) are kept consistent. In addition, in the experiments in which the GNSS signal is effective, the filtering algorithm combined with GNSS observations has a certain anti-noise capability; however, this is not the focus of this article, so these settings only need to be kept consistent and are not discussed further.

5.2. Experimental Results and Analysis

The north position estimation error of the whole test is shown in Figure 8, and the maximum value of the north position estimation error and the root mean square statistics are shown in Table 11. From the comparison in Figure 8, it can be seen that experimental group 1, control group 1, and control group 2 exhibit a certain improvement in north position estimation accuracy compared with control group 3. In general, the estimation accuracy of experimental group 1 and control group 1 is better than that of the other two groups. Near the turning time points listed above, gross errors of a certain size can be seen in each group. At these times, the lateral or vertical pseudo-measurement noise of the NHC/OD begins to increase, and the Kalman filter based on the Gaussian distribution is more sensitive to gross measurement errors, resulting in greater errors. The experimental results shown in Figure 8 also include the case in which the GNSS signal is interrupted when the vehicle turns or when singular velocity occurs; it can be seen that the gross error near the turning time points increases further. The Ga-St VBAKF algorithm clearly remains superior to the other groups in terms of error divergence and divergence speed in the case of GNSS signal interruption.
The east position estimation error of the whole test is shown in Figure 9, and the maximum value and root mean square of the east position error are shown in Table 12. From Figure 9, combined with Table 12, it should be noted that in the vicinity of 500 s the vehicle continuously passes through complex scenes such as tunnels and mountain roads, with continuous turns, poor road conditions, and large gross errors in the GNSS measurements; therefore, the adaptive filter based on the Gaussian distribution for the NHC has little effect on error improvement, which further verifies that the lateral and even vertical pseudo-measurement noise of the NHC under complex road conditions does not simply obey a Gaussian distribution. In the vicinity of the turning time points, the Ga-St VBAKF algorithm with adaptive noise achieves the best performance, while the Ga-St VBAKF algorithm with fixed measurement noise achieves similar performance; however, on the fourth turn, at 790 s, the former clearly performs better. As shown in Figure 9, at the second turning point at 415 s, the errors of the three schemes are large because the GNSS signal interruption lasts a long time, which leads to large error divergence in this section; this is also demonstrated by the north position error in Figure 8. At the third turning point, however, it can be clearly seen that when the GNSS signal is interrupted, using the NHC with the Ga-St VBAKF algorithm and the adaptive noise method to assist OD/INS integrated navigation yields the greatest improvement and the smallest degree of divergence.
The ground-position estimation error of the whole test is shown in Figure 10, and the maximum value and root mean square of the ground-position error are shown in Table 13. This part mainly verifies the effectiveness of the elevation constraint and the effect of the adaptive filter based on the Gaussian–Student's T mixed distribution within the elevation constraint. Here, the ground-position error is simultaneously affected by the vertical velocity correction of the velocity constraint and by the correction of the elevation constraint. It should be noted that the elevation constraint is not used in control group 3. When the vehicle is bumping or moving up and down and the road conditions are poor, the vertical velocity of the vehicle becomes singular; at such times, a fixed vertical velocity noise in the non-holonomic constraint is not applicable. In the data preprocessing, around 500 s, the vehicle repeatedly exhibits singular vertical velocity. During the period of 1000–1500 s, the vehicle traveled through pothole-ridden boulevards, underground tunnels, and continuously turning mountain bypasses. It can be seen from the experimental results that the Ga-St VBAKF algorithm applied to the elevation constraint achieves better performance than the adaptive filter based on the Gaussian distribution, and the method of adaptively adjusting the vertical velocity noise of the vehicle also has a certain effect. In Figure 10, under good conditions, whether the vertical velocity noise is adaptive and whether the GNSS signal is interrupted make little difference to the ground-position error. However, when the vehicle has a singular vertical velocity or the GNSS signal is interrupted, the Ga-St VBAKF algorithm, relying on the elevation constraint and the dynamic adjustment of the R matrix, can still minimize the ground-position estimation error.
The estimation errors of the north and east velocities over the whole test are shown in Figure 11 and Figure 12, and the maximum errors and root mean square values are shown in Table 14 and Table 15. Near the five turning points described above and during the periods when variable velocity occurs, Figures 11 and 12 show that the errors diverge very quickly, reaching about 0.7 m/s in the north direction and 0.6 m/s in the east direction. They also show that if the lateral pseudo-measurement noise of the NHC is modeled as a Gaussian–Student's T mixed distribution and the R matrix is adjusted adaptively in complex sections, the error is greatly reduced. When the GNSS signal is also interrupted in the complex road sections, the degree of divergence increases further, but the convergence is faster. However, this part differs from the previous parts: after the GNSS signal is interrupted, the performance of the Ga-St VBAKF algorithm with fixed noise in some complex road sections is not better than that of the filter based on the Gaussian distribution. Considering the actual road conditions and data errors, the relative effectiveness of control group 1 and control group 2 cannot be concluded, but the combination of the two can still be optimal.
The heading-angle estimation error of the whole test is shown in Figure 13, and the maximum error and root mean square error are shown in Table 16. As shown in the upper panel of Figure 13, before the first turning time point at 193 s, the heading angles of each group slowly diverge, and the divergence of control groups 2 and 3 is relatively large, reaching about 2.3°. At the other turning points, the heading-angle error of each group is affected by the road conditions and GNSS signals, and the gross errors are obvious. However, to a certain extent, the adaptive Kalman filter based on a Gaussian–Student's T mixed distribution exhibits the greatest improvement in heading-angle error compared with the other groups in special road sections; when the road conditions are good, there is not much difference between the groups. As also shown in Figure 13, before 193 s, due to the influence of the GNSS signals, the performance of experimental group 2 is weakened, with no significant improvement compared with the other groups. However, in the subsequent special sections, its performance improvement is better than that of the control groups.
Based on the above analysis, the NHC noise adaptive adjustment method based on the vehicle motion state and the Ga-St VBAKF algorithm based on modeling of the NHC lateral pseudo-velocity measurement noise can make full and effective use of the NHC observation information, adjust the tightness of the constraints in time in special sections, and improve the redundancy of the adaptive Kalman filter. When a GNSS signal exists, there are large differences in the positioning results of the groups in the turning sections, indicating that the NHC noise of vehicles in turning sections is time-varying and does not obey a Gaussian distribution, which is incompatible with a traditional fixed noise matrix. After the GNSS interruption, the results of each group show the same pattern, and the error divergence of the experimental group is significantly lower than that of the control groups. However, since the integrated navigation relies completely on INS recursion and the NHC at this time, the positioning error increases, indicating that GNSS signal interruption reduces the performance of the algorithm to a certain extent. According to the processing results of the measured data, the GNSS/NHC/OD/INS integrated algorithm based on the Gaussian–Student's T mixed distribution adaptive Kalman filter can effectively improve the positioning accuracy of the vehicle navigation system in special road sections involving turning, sideslip, and bumps. In five 300 s turning sections and a variety of complex scenes, the maximum horizontal position error is reduced by 65.9% near the time points of NHC failure. After the GNSS signal is interrupted in the special road sections, the horizontal position error is reduced by at most 42.3%.

6. Conclusions

During the effective period of the GNSS signal, the NHC and GNSS observations fail, to a certain extent, when the vehicle is turning, sideslipping, or moving up and down. The NHC lateral noise adaptive setting is used to adjust the tightness of NHCs, and the Ga-St VBAKF algorithm based on noise modeling is used to assist GNSS/INS integrated navigation to reduce the positioning error caused by the failure of NHC or GNSS observations. During GNSS signal interruption, the NHC/OD/INS system can effectively suppress the divergence of the integrated navigation and positioning results. On this basis, taking full advantage of the lateral noise characteristics of NHCs, the adaptive adjustment method of NHC noise and the Ga-St VBAKF algorithm can effectively reduce the positioning error and improve the performance of NHCs. The experimental results show that the maximum horizontal position error of the noise-adaptive Ga-St VBAKF algorithm in the GNSS/NHC/OD/INS system is reduced by 65.9% (compared with the control groups) and the maximum horizontal position error in the NHC/OD/INS system is reduced by 42.3%. At the same time, the Ga-St VBAKF algorithm has the smallest divergence of integrated navigation and positioning errors during GNSS outages, indicating that the algorithm can improve the performance of NHCs.

Author Contributions

Conceptualization, Z.W. and J.J.; methodology, Z.W., J.J. and J.L. (Jianghua Liu); software, Z.W. and J.W.; validation, Z.W. and J.L. (Jianghua Liu); formal analysis, Z.W. and J.J.; investigation, Z.W., J.J. and J.L. (Jianghua Liu); writing—original draft preparation, Z.W.; writing—review and editing, Z.W., J.J., J.L. (Jianghua Liu), Q.W. and J.L. (Jingnan Liu); visualization, Z.W.; supervision, Z.W.; project administration, J.J. and J.L. (Jianghua Liu); funding acquisition, Z.W. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was in part funded by Major Program (JD) of Hubei Province (2023BAA025) and in part by the Ph.D. Research Startup Foundation of Hubei University of Science and Technology (BK201801).

Data Availability Statement

The data collected and analyzed supporting the current research are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank all group members of Jinguang Jiang’s group for the good working environment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Li, Y.; Yan, Y.; He, H. Effects analysis of constraints on GNSS/INS integrated navigation. Geomat. Inf. Sci. Wuhan Univ. 2017, 42, 1249–1255.
2. Wu, X.; Su, Z.; Li, L.; Bai, Z. Improved adaptive federated Kalman filtering for INS/GNSS/VNS integrated navigation algorithm. Appl. Sci. 2023, 13, 5790.
3. Yuan, Y.; Li, F.; Chen, J.; Wang, Y.; Liu, K. An improved Kalman filter algorithm for tightly GNSS/INS integrated navigation system. Math. Biosci. Eng. 2023, 21, 963–983.
4. Ahmed, H.; Tahir, M. Accurate attitude estimation of a moving land vehicle using low-cost MEMS IMU sensors. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1723–1739.
5. Cui, X.; Zhou, Q.; Wu, D.; Zhong, X. Design and experimental analysis of INS/GNSS integrated navigation system with odometer. Aeronaut. Sci. Technol. 2022, 33, 84–89.
6. Xu, Y.; Wang, K.; Yang, C.; Li, Z.; Zhou, F.; Liu, D. GNSS/INS/OD/NHC adaptive integrated navigation method considering the vehicle motion state. IEEE Sens. J. 2023, 23, 13511–13523.
7. Gao, J.; Tang, X.; Zhang, H.; Wu, M. Vehicle INS/GNSS/OD integrated navigation algorithm based on factor graph. Syst. Eng. Electron. 2018, 40, 2547–2553.
8. Yao, Z.; Zhang, H. Performance analysis on vehicle GNSS/INS integrated navigation system aided by odometer. J. Geod. Geodyn. 2018, 38, 206–210.
9. Liu, W.; Qi, N.; Tao, X.; Zhu, F.; Hu, J. OD/SINS adaptive integrated navigation method with non-holonomic constraints. Acta Geod. Cartogr. Sin. 2022, 51, 9.
10. Zhang, S.; Chen, X. Motion constraint aided integrated navigation method based on SVD-CKF. J. Electron. Meas. Instrum. 2022, 36, 82–89.
11. Mahboub, V.; Ebrahimzadeh, S.; Saadatseresht, M.; Faramarzi, M. On robust constrained Kalman filter for dynamic errors-in-variables model. Surv. Rev. 2020, 52, 253–260.
12. Lv, X.; Duan, P.; Duan, Z.; Song, J. Distributed maximum correntropy unscented Kalman filtering with state equality constraints. Int. J. Robust Nonlinear Control 2021, 31, 7053–7071.
13. Ruiz, E.; Poncela, P. Factor extraction in dynamic factor models: Kalman filter versus principal components. Found. Trends Econom. 2022, 12, 121–231.
14. Zhang, X.; Zhu, F.; Tao, X.; Duan, R. New optimal smoothing scheme for improving relative and absolute accuracy of tightly coupled GNSS/SINS integration. GPS Solut. 2017, 21, 861–872.
15. Groves, P.D. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 2nd Edition [Book Review]. IEEE Aerosp. Electron. Syst. Mag. 2015, 30, 26–27.
16. Li, S.; Nie, J.; Xie, X.; Mao, W. A robust Kalman filter via Gamma Student's t-mixture distribution under heavy-tailed measurement noise. IEEE Sens. J. 2023, 23, 26215–26225.
17. Hu, H.; Zhu, F.; Zhang, X. An adaptive method to detect vehicle motion state using MEMS-IMU. J. Navig. Position. 2020, 8, 11–18.
18. Kalnins, A. Multicollinearity: How common factors cause Type 1 errors in multivariate regression. Strateg. Manag. J. 2018, 39, 2362–2385.
19. Reio, T.; Chambers, S.; Gavrilova-Aguilar, M.; Nimon, K. Commonality analysis: A reference librarian's tool for decomposing regression effects. Ref. Libr. 2015, 56, 315–326.
20. Kvalheim, O.M.; Arneberg, R.; Grung, B.; Rajalahti, T. Determination of optimum number of components in partial least squares regression from distributions of the root-mean-squared error obtained by Monte Carlo resampling. J. Chemom. 2018, 32, e2993.
21. Nathans, L.L.; Oswald, F.L.; Nimon, K. Interpreting multiple linear regression: A guidebook of variable importance. Pract. Assess. Res. Eval. 2012, 17, n9.
22. Tao, Y.; Liu, C.; Liu, C.; Zhao, X.; Hu, H. Empirical wavelet transform method for GNSS coordinate series denoising. J. Geovis. Spat. Anal. 2021, 5, 1–7.
23. Vujnovic, S.; Marjanovic, A.; Djurovic, Z. Acoustic contamination detection using Q-Q-plot-based decision scheme. Mech. Syst. Signal Process. 2019, 116, 1–11.
24. Jammalamadaka, S.R.; Taufer, E.; Terdik, G.H. On multivariate skewness and kurtosis. Sankhya A 2021, 83, 607–644.
25. Bai, M.; Huang, Y.; Chen, B.; Yang, L.; Zhang, Y. A novel mixture distributions-based robust Kalman filter for cooperative localization. IEEE Sens. J. 2020, 20, 14994–15006.
26. Li, X.; Li, H.; Huang, G.; Zhang, Q.; Meng, S. Non-holonomic constraint (NHC)-assisted GNSS/SINS positioning using a vehicle motion state classification (VMSC)-based convolution neural network. GPS Solut. 2023, 27, 1–15.
27. Galdo, M.; Bahg, G.; Turner, B.M. Variational Bayesian methods for cognitive science. Psychol. Methods 2020, 25, 535–559.
28. Lopez, R.; Boyeau, P.; Yosef, N.; Jordan, M.; Regier, J. Decision-making with auto-encoding variational Bayes. Adv. Neural Inf. Process. Syst. 2020, 33, 5081–5092.
29. Huang, Y. Researches on High-Accuracy State Estimation Methods and Their Applications to Target Tracking and Cooperative Localization. Ph.D. Dissertation, Harbin Engineering University, Harbin, China, 2021.
30. Zhu, H.; Zhang, G.; Li, Y.; Leung, H. An adaptive Kalman filter with inaccurate noise covariances in the presence of outliers. IEEE Trans. Autom. Control 2022, 67, 374–381.
31. Karl, M.; Soelch, M.; Bayer, J.; van der Smagt, P. Deep variational Bayes filters: Unsupervised learning of state space models from raw data. Stat 2017, 1050, 3.
32. Xueli, L.; Jinye, C.; Qi, Z.; Yemao, X. Variational Bayesian inference for two-part latent variable models. J. Syst. Sci. Math. Sci. 2023, 43, 1039–1068.
33. Simkus, V.; Rhodes, B.; Gutmann, M.U. Variational Gibbs inference for statistical model estimation from incomplete data. J. Mach. Learn. Res. 2023, 24, 1–72.
34. Asadi, F.; Sadati, H. Adaptive Kalman filter for noise estimation and identification with Bayesian approach. World Acad. Sci. Eng. Technol. Int. J. Math. Comput. Sci. 2021, 12, 103–108.
35. Zhang, Y.; Jia, G.; Li, N.; Bai, M. A novel adaptive Kalman filter with colored measurement noise. IEEE Access 2018, 6, 74569–74578.
36. Bai, M.; Huang, Y.; Zhang, Y.; Jia, G. A novel progressive Gaussian approximate filter for tightly coupled GNSS/INS integration. IEEE Trans. Instrum. Meas. 2020, 69, 3493–3505.
37. Wang, J.; Ma, X.; Zhang, Y.; Huang, D. Constrained two-stage Kalman filter for real-time state estimation of systems with time-varying measurement noise covariance. ISA Trans. 2022, 129, 336–344.
38. Zhang, T.; Wang, J.; Zhang, L.; Guo, L. A Student's t-based measurement uncertainty filter for SINS/USBL tightly integration navigation system. IEEE Trans. Veh. Technol. 2021, 70, 8627–8638.
39. Wang, J.; Zhang, T.; Jin, B.; Zhu, Y.; Tong, J. Student's t-based robust Kalman filter for a SINS/USBL integration navigation strategy. IEEE Sens. J. 2020, 20, 5540–5553.
40. Liu, X.; Guo, Y. An improved multi-state constraint Kalman filter based on maximum correntropy criterion. Phys. Scr. 2023, 98, 105218.
Figure 1. Sample fitting results. The abscissa represents $f(d_{head}, v)$, and the ordinate represents the fitted NHC lateral noise. The figure depicts the distribution of the fitted straight line and the actual sample points.
Figure 2. Residual normality test.
Figure 3. Residual-fitted value plots.
Figure 4. Prediction error curve.
Figure 5. NHC lateral noise Q-Q plot.
Figure 6. NHC lateral noise probability density fitting.
Figure 7. Driving trajectory diagram.
Figure 8. Northward position estimation error.
Figure 9. Eastward position estimation error.
Figure 10. Ground-position estimation error.
Figure 11. Northward velocity estimation error.
Figure 12. Eastward velocity estimation error.
Figure 13. Heading-angle estimation error.
Table 1. Analysis process of multivariate linear regression.

Analysis process
Step 1: Confirm the model of the problem
Step 2: Variable adaptability identification, which is performed as follows:
            ➀ Causality identification
            ➁ Continuous variable
            ➂ Multicollinearity identification
Step 3: Collect samples
Step 4: Parameter estimation, which is performed as follows:
            ➀ Parameter evaluation
            ➁ Interpretation ability of the whole model
Step 5: Model hypothesis judgment, which is performed as follows:
            ➀ Sample fitting and error analysis
            ➁ Residual error inspection
Step 6: Explanation and analysis
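The parameter-estimation and error-analysis steps above can be sketched with ordinary least squares. This is an assumption-laden illustration, not the paper's implementation: it assumes the two regressors are the heading-change term and the velocity, with coefficients α, β and constant ε as reported in Table 2; the regressor ranges and the synthetic noise level are hypothetical.

```python
import numpy as np

# Hypothetical data generation: ranges and the 0.01 noise level are
# illustrative assumptions, not values from the paper.
rng = np.random.default_rng(42)
n = 500                                      # roughly matches Dof ≈ 497 in Table 3
d_head = rng.uniform(0.0, 0.5, n)            # heading-change regressor
v = rng.uniform(0.0, 15.0, n)                # velocity regressor (m/s)
alpha, beta, eps = -0.174, -0.016, 0.33532   # sample 1 coefficients from Table 2
y = alpha * d_head + beta * v + eps + 0.01 * rng.standard_normal(n)

# Step 4 (parameter estimation): ordinary least squares on [d_head, v, 1]
X = np.column_stack([d_head, v, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 5 (sample fitting and error analysis): MSE/RMSE as in Table 3
residuals = y - X @ coef
mse = float(np.mean(residuals ** 2))
rmse = float(np.sqrt(mse))
```

With a well-conditioned design matrix the recovered coefficients land close to the generating (α, β, ε), mirroring how the per-sample coefficients in Table 2 are obtained.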
Table 2. Regression model weighting factors with constant coefficient.

Number     α        β        ε
sample 1   −0.174   −0.016   0.33532
sample 2   −0.184   −0.013   0.32636
sample 3   −0.071    0.076   0.18366
sample 4   −0.274   −0.037   0.36558
Table 3. Regression model evaluation metrics. Here, Dof represents the degrees of freedom, corresponding to the number of sample points; MSE represents the mean square error; RMSE represents the root mean square error; and p is the p-value of the residual normality test.

           Dof    MSE      RMSE     p
sample 1   497    0.0048   0.0695   0.3644
sample 2   497    0.0022   0.0468   0.0953
sample 3   497    0.0025   0.0500   0.2334
sample 4   1997   0.0041   0.0644   0.0512
Table 4. Durbin–Watson statistics.

      Sample 1   Sample 2   Sample 3   Sample 4
DW    1.896      1.922      2.112      1.995
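The Durbin–Watson values near 2 in Table 4 indicate little lag-1 autocorrelation in the regression residuals. A minimal sketch of the statistic follows; the residual series below are synthetic and purely illustrative.

```python
import numpy as np

def durbin_watson(residuals):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2).

    Values near 2 indicate no lag-1 autocorrelation; values well below 2
    indicate positive autocorrelation in the residuals.
    """
    e = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

rng = np.random.default_rng(0)
white = rng.standard_normal(2000)     # uncorrelated residuals: DW near 2
dw_white = durbin_watson(white)

ar = np.empty(2000)                   # AR(1) residuals: DW near 2*(1 - 0.8)
ar[0] = white[0]
for t in range(1, 2000):
    ar[t] = 0.8 * ar[t - 1] + white[t]
dw_ar = durbin_watson(ar)
```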
Table 5. Statistical table of computational complexity. n represents the dimension of the covariance matrix, p is the number of variational parameters, q is the complexity of the update operation, m is the dimension of the observation matrix, T is the number of iterations, and N is the number of particles.

                           Ga-St VBAKF           EKF       Particle Filter
Computational complexity   O(T(n^3 + nm + pq))   O(Tn^3)   O(N(n + n^2))
Table 6. Performance parameters of the IMU chip on the GNSS/INS module.

IMU Chip Index                       Parameter
Gyro bias stability                  10°/h
Gyroscope angle random walk          0.27°/sqrt(h)
Accelerometer bias stability         1800 mGal
Accelerometer velocity random walk   0.042 m/s/sqrt(h)
Table 7. Positioning accuracy of SPAN-CPT.

Index of Precision              Parameter
Dual-frequency RTK              1 cm + 1 ppm
Heading-angle RMS               0.06°
Pitch RMS and roll RMS          0.02°
X, Y, and Z velocity accuracy   0.03 m/s
Table 8. Performance index of the SPAN-CPT gyroscope.

Performance Index           Parameter
Measuring range             ±375°/s
Deviation                   ±20°/h
Deviation stability         ±1°/h
Precision of scale factor   1500 ppm
Random walk                 0.0667°/sqrt(h)
Table 9. Performance index of the SPAN-CPT accelerometer.

Performance Index           Parameter
Measuring range             ±10 g
Deviation                   50 mg
Deviation stability         ±0.75 mg
Precision of scale factor   4000 ppm
Table 10. Complete experimental design. In the following text, these seven groups are referred to as ➀➁➂➃➄➅➆, in row order.

Group                  GNSS   σx                Gaussian Distribution   Mixed Distribution
Experimental group 1   ✓      self-adaptation
Control group 1        ✓      fixation
Control group 2        ✓      self-adaptation
Control group 3        ✓      fixation
Experimental group 2   ×      self-adaptation
Control group 4        ×      fixation
Control group 5        ×      self-adaptation
Table 11. North position estimation error statistics. The first "Maximum Improvement" column refers to the case with the GNSS signal available (groups ➀–➃); the second refers to the case with the GNSS signal interrupted (groups ➄–➆).

North Position/m   ➀      ➁      ➂      ➃      Maximum Improvement   ➄      ➅      ➆      Maximum Improvement
RMS                0.28   0.32   0.75   0.82   65.9%                 0.71   1.23   1.19   42.3%
MAX                1.32   1.53   3.72   2.92   64.5%                 4.55   6.44   6.48   29.8%
Table 12. East position estimation error statistics.

East Position/m   ➀      ➁      ➂      ➃      Maximum Improvement   ➄      ➅      ➆      Maximum Improvement
RMS               0.43   0.52   1.15   1.19   63.9%                 1.15   1.41   1.63   29.4%
MAX               2.07   2.41   5.59   8.48   75.6%                 6.97   7.20   8.00   12.9%
Table 13. Ground-position estimation error statistics.

Ground Position/m   ➀      ➁      ➂      ➃      Maximum Improvement   ➄      ➅      ➆      Maximum Improvement
RMS                 0.19   0.29   0.28   1.24   84.7%                 0.53   0.54   1.34   60.4%
MAX                 1.15   1.76   1.75   9.82   88.3%                 2.58   2.58   6.71   61.5%
Table 14. North velocity estimation error statistics.

North Velocity/m/s   ➀       ➁       ➂       ➃       Maximum Improvement   ➄       ➅       ➆       Maximum Improvement
RMS                  0.027   0.033   0.077   0.095   71.6%                 0.065   0.136   0.027   52.2%
MAX                  0.24    0.30    0.43    0.66    63.6%                 0.27    0.86    0.63    68.6%
Table 15. East velocity estimation error statistics.

East Velocity/m/s   ➀       ➁       ➂       ➃       Maximum Improvement   ➄       ➅       ➆       Maximum Improvement
RMS                 0.027   0.034   0.090   0.098   72.4%                 0.067   0.127   0.115   47.2%
MAX                 0.20    0.25    0.51    0.59    66.1%                 0.32    0.78    0.55    59%
Table 16. Heading-angle estimation error statistics.

Heading Angle/°   ➀      ➁      ➂      ➃      Maximum Improvement   ➄      ➅      ➆      Maximum Improvement
RMS               0.23   0.38   0.42   0.43   46.5%                 0.49   0.50   0.57   14.1%
MAX               0.85   1.75   2.14   2.14   60.3%                 2.14   2.13   2.14   -
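For reference, the statistics in Tables 11 to 16 are consistent with the usual RMS/MAX definitions, and the "maximum improvement" figures match a relative error reduction of the adaptive filter against the corresponding fixed-noise filter. The sketch below states those assumed definitions; the pairing of columns is inferred from the table values, not taken from the paper's text.

```python
import numpy as np

def rms(err):
    """Root-mean-square of an error series."""
    return float(np.sqrt(np.mean(np.square(err))))

def improvement(fixed, adaptive):
    """Relative error reduction (%) of the adaptive filter vs. the fixed one."""
    return (fixed - adaptive) / fixed * 100.0

# These reproduce the headline RMS numbers of Table 11 under this definition:
print(round(improvement(0.82, 0.28), 1))   # 65.9 (GNSS available)
print(round(improvement(1.23, 0.71), 1))   # 42.3 (GNSS interrupted)
```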
Wang, Z.; Liu, J.; Jiang, J.; Wu, J.; Wang, Q.; Liu, J. An Adaptive Combined Filtering Algorithm for Non-Holonomic Constraints with Time-Varying and Thick-Tailed Measurement Noise. Remote Sens. 2025, 17, 1126. https://doi.org/10.3390/rs17071126