Proceeding Paper

Robust GPS Navigation via Centered Error Entropy Variational Bayesian Extended Kalman Filter †

1 Department of Communications, Navigation and Control Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Electrical Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
* Author to whom correspondence should be addressed.
Presented at 8th International Conference on Knowledge Innovation and Invention 2025 (ICKII 2025), Fukuoka, Japan, 22–24 August 2025.
Eng. Proc. 2025, 120(1), 35; https://doi.org/10.3390/engproc2025120035
Published: 2 February 2026
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)

Abstract

Managing unknown, time-varying noise and outliers presents a critical challenge in GPS applications. Variational Bayesian (VB) inference effectively estimates unknown noise statistics but lacks robustness to outliers, while robust filters such as the centered error entropy (CEE) suppress outliers but rely on fixed noise assumptions. To address both limitations, we propose the centered error entropy-based variational Bayesian extended Kalman filter (CEEVB-EKF), which integrates VB inference with the CEE criterion in a unified framework. The method estimates time-varying noise covariance via recursive VB updates and applies the CEE cost function for robustness to heavy-tailed disturbances and outliers. This dual-stage design improves adaptability and reliability, with simulations showing superior, stable state estimation, making CEEVB-EKF suitable for positioning, tracking, and autonomous navigation.

1. Introduction

Global navigation satellite systems (GNSS) [1] are essential for modern applications such as autonomous vehicles and precision agriculture, but complex environments introduce multipath, non-line-of-sight (NLOS) reception, and signal degradation, severely reducing the accuracy and reliability of traditional positioning methods.
Under these conditions, real-time state estimation becomes challenging, particularly when the system exhibits nonlinear dynamics with unknown or time-varying noise. The classical Kalman filter [2], which relies on linear models and Gaussian noise assumptions, struggles in such environments, often diverging or producing biased estimates under non-Gaussian disturbances, outliers, or sudden shocks. To overcome the limitations of the standard Kalman framework, several nonlinear filtering techniques, including the extended Kalman filter (EKF), unscented Kalman filter (UKF), and cubature Kalman filter (CKF), have been proposed. While these methods can accommodate system nonlinearity, they still rely heavily on Gaussian noise assumptions and thus remain vulnerable to degradation in non-Gaussian scenarios.
In recent years, information-theoretic learning (ITL) methods [3] have provided alternative objective functions for robust filtering. These methods depart from the conventional minimum mean square error (MMSE) and instead focus on entropy-based cost functions such as the minimum error entropy (MEE) [4], maximum correntropy criterion (MCC) [5,6], and centered error entropy (CEE) [7]. Among them, CEE has shown strong potential in suppressing the influence of outliers due to its symmetry and sensitivity to the statistical concentration of the estimation error. However, most entropy-based filters still assume fixed noise statistics, which limits their adaptability in time-varying environments and practical GNSS applications.
To address these limitations, we developed the centered error entropy-based variational Bayesian extended Kalman filter (CEEVB-EKF), a hybrid framework that integrates the adaptive capabilities of variational Bayesian (VB) inference [8] with the robustness of the centered error entropy (CEE) criterion. The CEEVB-EKF operates in two stages. First, VB inference recursively estimates the time-varying process and measurement noise covariance matrices by minimizing the Kullback–Leibler divergence between the true and approximated posterior distributions. Then, the CEE cost function is applied to enhance robustness against heavy-tailed noise and outliers by emphasizing the statistical concentration of the prediction error [9,10,11]. This dual-stage design enables the filter to adapt to dynamic noise conditions while maintaining resilience to non-Gaussian disturbances.

2. Centered Error Entropy-Based Variational Bayesian Extended Kalman Filter

A nonlinear discrete-time system is described by the following equations.
$$x_k = f(x_{k-1}) + w_{k-1}$$
$$z_k = h(x_k) + v_k$$
Here, $x_k \in \mathbb{R}^n$ and $z_k \in \mathbb{R}^m$ are the state and measurement vectors, respectively. The functions $f(\cdot)$ and $h(\cdot)$ represent the nonlinear state transition and observation models, respectively. The process noise $w_{k-1}$ and measurement noise $v_k$ are assumed to be mutually independent. The covariance matrix of the process noise is $Q_{k-1} = E[w_{k-1} w_{k-1}^T]$ and the measurement noise covariance is $R_k = E[v_k v_k^T]$.
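As a concrete illustration, the model above can be simulated with toy stand-in dynamics. The functions `f` and `h` below are hypothetical placeholders, not the GPS state-transition and pseudorange models used in the experiments.

```python
import numpy as np

# Illustrative placeholders for f(.) and h(.); the paper's actual GPS
# dynamics and pseudorange models are not reproduced here.
def f(x):
    return np.array([x[0] + 0.1 * x[1], 0.95 * x[1]])

def h(x):
    # range-like nonlinear observation
    return np.array([np.sqrt(x[0] ** 2 + x[1] ** 2 + 1.0)])

rng = np.random.default_rng(0)
Q = 0.01 * np.eye(2)   # process noise covariance Q_{k-1}
R = 0.04 * np.eye(1)   # measurement noise covariance R_k

x = np.array([1.0, 0.5])
states, measurements = [], []
for k in range(50):
    x = f(x) + rng.multivariate_normal(np.zeros(2), Q)   # x_k = f(x_{k-1}) + w_{k-1}
    z = h(x) + rng.multivariate_normal(np.zeros(1), R)   # z_k = h(x_k) + v_k
    states.append(x)
    measurements.append(z)
```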

2.1. Linearization of Nonlinear Function

To apply the extended Kalman filter framework to nonlinear systems, the nonlinear functions must be linearized around the predicted state. The linearized approximations of these functions using a first-order Taylor series expansion are given as follows.
$$f(x_{k-1}) \approx f(\hat{x}_{k-1}) + \Phi_{k-1}(x_{k-1} - \hat{x}_{k-1})$$
$$h(x_k) \approx h(\hat{x}_{k|k-1}) + H_k(x_k - \hat{x}_{k|k-1})$$
Here, $\Phi_{k-1}$ and $H_k$ are the Jacobian matrices evaluated at the estimates.
According to the state model, the predicted state vector $\hat{x}_{k|k-1}$ and the predicted error covariance matrix $P_{k|k-1}$ are expressed as follows.
$$\hat{x}_{k|k-1} = f(\hat{x}_{k-1})$$
$$P_{k|k-1} = \Phi_{k-1} P_{k-1} \Phi_{k-1}^T + Q_{k-1}$$
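The prediction step can be sketched as follows. The Jacobian is approximated by finite differences purely for illustration; in practice an analytic $\Phi_{k-1}$ would normally be used, and the toy model below is an assumption.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    # forward-difference Jacobian, standing in for an analytic Phi_{k-1}
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (func(x + dx) - fx) / eps
    return J

def ekf_predict(f, x_hat, P, Q):
    # x^_{k|k-1} = f(x^_{k-1});  P_{k|k-1} = Phi P Phi^T + Q
    Phi = numerical_jacobian(f, x_hat)
    return f(x_hat), Phi @ P @ Phi.T + Q

# toy model: near-constant-velocity state [position, velocity]
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])
x_pred, P_pred = ekf_predict(f, np.array([1.0, 0.5]), np.eye(2), 0.01 * np.eye(2))
```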
To establish a recursive model for posterior estimation, we first define the prediction error $\varepsilon_{k|k-1} = x_k - \hat{x}_{k|k-1}$. Combining the system model and the linearized measurement model into a joint concatenated form, we obtain:
$$\begin{bmatrix} \hat{x}_{k|k-1} \\ z_k - h(\hat{x}_{k|k-1}) + H_k \hat{x}_{k|k-1} \end{bmatrix} = \begin{bmatrix} I \\ H_k \end{bmatrix} x_k + \mu_k$$
The residual error vector $\mu_k$ is expressed as follows.
$$\mu_k = \begin{bmatrix} -\varepsilon_{k|k-1} \\ v_k \end{bmatrix}$$
The factorized form of the joint covariance matrix provides a convenient structure for subsequent Bayesian inference.
$$E[\mu_k \mu_k^T] = \begin{bmatrix} P_{k|k-1} & 0 \\ 0 & R_k \end{bmatrix} = B_k B_k^T = \begin{bmatrix} B_{p,k|k-1} B_{p,k|k-1}^T & 0 \\ 0 & B_{r,k} B_{r,k}^T \end{bmatrix}$$
It facilitates the incorporation of prior distributions over the unknown covariance components during the variational optimization process.

2.2. Variational Bayesian Inference

A variational Bayesian (VB) framework is used to adaptively estimate the prediction and measurement noise covariances $P_{k|k-1}$ and $R_k$, employing inverse Wishart (IW) priors, the conjugate choice for symmetric positive-definite matrices, within the Kalman filtering process. The prior distributions of $P_{k|k-1}$ and $R_k$ are defined as follows:
$$p(P_{k|k-1} \mid z_{1:k-1}) = \mathrm{IW}(P_{k|k-1}; \hat{t}_{k|k-1}, \hat{T}_{k|k-1})$$
$$p(R_k \mid z_{1:k-1}) = \mathrm{IW}(R_k; \hat{u}_{k|k-1}, \hat{U}_{k|k-1})$$
Here, $\mathrm{IW}(\cdot)$ denotes the probability density function of the IW distribution. The parameters $\hat{t}_{k|k-1}$ and $\hat{u}_{k|k-1}$ denote the degrees of freedom, while $\hat{T}_{k|k-1}$ and $\hat{U}_{k|k-1}$ are the corresponding inverse scale matrices for the prior estimates of $P_{k|k-1}$ and $R_k$, respectively. To derive the posterior update of the state prediction covariance $P_{k|k-1}$, its expected value is defined as $\bar{P}_{k|k-1}$, which relates to the inverse scale matrix by:
$$\frac{\hat{T}_{k|k-1}}{\hat{t}_{k|k-1} - n - 1} = \bar{P}_{k|k-1} = \Phi_{k-1} P_{k-1} \Phi_{k-1}^T + \bar{Q}_{k-1}$$
Here, $n$ indicates the state dimension, and $\bar{Q}_{k-1}$ denotes the nominal process noise covariance. The degrees of freedom are determined by:
$$\hat{t}_{k|k-1} = n + \tau + 1$$
To improve flexibility in prior modeling, a tunable parameter $\tau \geq 0$ is introduced to scale the predicted error covariance $\bar{P}_{k|k-1}$, resulting in the following.
$$\hat{T}_{k|k-1} = \tau \bar{P}_{k|k-1}$$
According to the principles of Bayesian inference, the prior distribution of the measurement noise covariance matrix $R_k$ can be formulated as follows.
$$p(R_k \mid z_{1:k-1}) = \int p(R_k \mid R_{k-1})\, p(R_{k-1} \mid z_{1:k-1})\, dR_{k-1}$$
Here, $p(R_{k-1} \mid z_{1:k-1})$ denotes the posterior distribution of the measurement noise covariance matrix from the previous time step. Since an IW distribution has been assigned as its prior, the posterior retains the same distributional form due to conjugacy. Therefore, the updated posterior distribution of $R_{k-1}$ is given as follows.
$$p(R_{k-1} \mid z_{1:k-1}) = \mathrm{IW}(R_{k-1}; \hat{u}_{k-1}, \hat{U}_{k-1})$$
To preserve the IW structure of the conditional prior $p(R_k \mid R_{k-1})$, a forgetting factor $\rho \in (0, 1]$ is introduced. Given that $R_k$ often evolves slowly, the posterior from the previous step can be reused to form the current prior. Consequently, the prior parameters for $R_k$ are updated as follows.
$$\hat{u}_{k|k-1} = \rho(\hat{u}_{k-1} - m - 1) + m + 1, \quad \hat{U}_{k|k-1} = \rho \hat{U}_{k-1}$$
The variable $m$ denotes the dimension of the measurement vector. At the initial time step $k = 1$, the parameters are initialized as:
$$\hat{u}_0 = \tau + m + 1, \quad \hat{U}_0 = \tau R_0$$
where $R_0$ is the initial nominal measurement noise covariance.
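A minimal sketch of this initialization and forgetting-factor time update; the nominal covariance `R0` and the parameter values are assumptions for illustration.

```python
import numpy as np

def iw_prior_update(u_prev, U_prev, rho, m):
    # Forgetting-factor time update of the IW prior parameters for R_k:
    # u_{k|k-1} = rho (u_{k-1} - m - 1) + m + 1,  U_{k|k-1} = rho U_{k-1}
    u_pred = rho * (u_prev - m - 1) + m + 1
    U_pred = rho * U_prev
    return u_pred, U_pred

m, tau, rho = 2, 3.0, 0.95
R0 = 0.04 * np.eye(m)            # assumed initial nominal measurement covariance
u, U = tau + m + 1, tau * R0     # initialization at k = 1
u, U = iw_prior_update(u, U, rho, m)
```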
Given the observation at time step $k$, the joint posterior is approximated by the factorization:
$$p(x_k, P_{k|k-1}, R_k \mid z_{1:k}) \approx q(x_k)\, q(P_{k|k-1})\, q(R_k)$$
In VB inference, the approximate posterior distributions $q(\cdot)$ are obtained by minimizing the Kullback–Leibler (KL) divergence. This leads to the following optimization.
$$\{q(x_k), q(P_{k|k-1}), q(R_k)\} = \arg\min \mathrm{KL}\big(q(x_k)\, q(P_{k|k-1})\, q(R_k) \,\|\, p(x_k, P_{k|k-1}, R_k \mid z_{1:k})\big)$$
To derive the optimal posterior form for $q(P_{k|k-1})$, we solve the following variational objective (up to an additive constant).
$$\ln q^{(i+1)}(P_{k|k-1}) = -\tfrac{1}{2}(n + \hat{t}_{k|k-1} + 2) \ln |P_{k|k-1}| - \tfrac{1}{2}\, \mathrm{tr}\big((A_k^{(i)} + \hat{T}_{k|k-1})\, P_{k|k-1}^{-1}\big)$$
Here $q^{(i+1)}(\cdot)$ denotes the approximate distribution after the $(i+1)$-th update step, and the matrix $A_k^{(i)}$ is defined as follows.
$$A_k^{(i)} = P_k^{(i)} + (\hat{x}_k^{(i)} - \hat{x}_{k|k-1})(\hat{x}_k^{(i)} - \hat{x}_{k|k-1})^T$$
From this expression, the posterior distribution of $P_{k|k-1}$ at iteration $i+1$ is again an IW distribution, with updated degrees of freedom $\hat{t}_k^{(i+1)}$ and scale matrix $\hat{T}_k^{(i+1)}$.
$$q^{(i+1)}(P_{k|k-1}) = \mathrm{IW}(P_{k|k-1}; \hat{t}_k^{(i+1)}, \hat{T}_k^{(i+1)})$$
The recursive update rules for the degrees of freedom and the scale matrix are given as follows.
$$\hat{t}_k^{(i+1)} = \hat{t}_{k|k-1} + 1, \quad \hat{T}_k^{(i+1)} = \hat{T}_{k|k-1} + A_k^{(i)}$$
The variational distribution $q(R_k)$ is optimized analogously by maximizing the following expression (up to an additive constant).
$$\ln q^{(i+1)}(R_k) = -\tfrac{1}{2}(m + \hat{u}_{k|k-1} + 2) \ln |R_k| - \tfrac{1}{2}\, \mathrm{tr}\big((B_k^{(i)} + \hat{U}_{k|k-1})\, R_k^{-1}\big)$$
The matrix $B_k^{(i)}$ represents the expected outer product of the measurement residual.
$$B_k^{(i)} = (z_k - h(\hat{x}_k^{(i)}))(z_k - h(\hat{x}_k^{(i)}))^T + H_k P_k^{(i)} H_k^T$$
As a result, the posterior distribution of $R_k$ at iteration $i+1$ remains in IW form, parameterized by $\hat{u}_k^{(i+1)}$ and $\hat{U}_k^{(i+1)}$.
$$q^{(i+1)}(R_k) = \mathrm{IW}(R_k; \hat{u}_k^{(i+1)}, \hat{U}_k^{(i+1)})$$
The degrees of freedom and the corresponding scale matrix are iteratively updated using the following relations.
$$\hat{u}_k^{(i+1)} = \hat{u}_{k|k-1} + 1$$
$$\hat{U}_k^{(i+1)} = \hat{U}_{k|k-1} + B_k^{(i)}$$
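One VB iteration of these parameter updates can be sketched as follows; the toy measurement model and all numerical values are illustrative assumptions.

```python
import numpy as np

def vb_iw_updates(z, h, H, x_post, P_post, x_pred, t_pred, T_pred, u_pred, U_pred):
    # A_k^{(i)}: posterior covariance plus outer product of the state correction
    d = x_post - x_pred
    A = P_post + np.outer(d, d)
    # B_k^{(i)}: outer product of the measurement residual plus projected covariance
    r = z - h(x_post)
    B = np.outer(r, r) + H @ P_post @ H.T
    # degrees of freedom grow by one; scale matrices absorb A and B
    return t_pred + 1.0, T_pred + A, u_pred + 1.0, U_pred + B

# toy usage: linear measurement of a 2-state system
H = np.array([[1.0, 0.0]])
h = lambda x: H @ x
t, T, u, U = vb_iw_updates(np.array([1.2]), h, H,
                           np.array([1.0, 0.5]), 0.1 * np.eye(2),
                           np.array([0.9, 0.5]), 6.0, np.eye(2), 4.0, np.eye(1))
```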
Based on the VB inference, the distribution $q(x_k)$ is obtained by maximizing
$$\ln q^{(i+1)}(x_k) = -\tfrac{1}{2}(z_k - h(x_k))^T E^{(i+1)}[R_k^{-1}] (z_k - h(x_k)) - \tfrac{1}{2}(x_k - \hat{x}_{k|k-1})^T E^{(i+1)}[P_{k|k-1}^{-1}] (x_k - \hat{x}_{k|k-1})$$
The expectations of the inverse covariance matrices involved above are computed as follows.
$$E^{(i+1)}[R_k^{-1}] = (\hat{u}_k^{(i+1)} - m - 1)(\hat{U}_k^{(i+1)})^{-1}$$
$$E^{(i+1)}[P_{k|k-1}^{-1}] = (\hat{t}_k^{(i+1)} - n - 1)(\hat{T}_k^{(i+1)})^{-1}$$
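Because the IW expectations reduce to simple parameter ratios, the updated covariances can be computed directly; a sketch with illustrative parameter values:

```python
import numpy as np

def expected_inverses(t_new, T_new, u_new, U_new, n, m):
    # E[P^{-1}] = (t - n - 1) T^{-1}  and  E[R^{-1}] = (u - m - 1) U^{-1}
    E_P_inv = (t_new - n - 1.0) * np.linalg.inv(T_new)
    E_R_inv = (u_new - m - 1.0) * np.linalg.inv(U_new)
    # the covariances used by the filter are the inverses of these expectations
    return np.linalg.inv(E_P_inv), np.linalg.inv(E_R_inv)

P_hat, R_hat = expected_inverses(5.0, 2.0 * np.eye(2), 4.0, 3.0 * np.eye(1), n=2, m=1)
```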
Upon completion of the $(i+1)$-th iteration, the conditional distributions $p^{(i+1)}(x_k \mid z_{1:k-1})$ and $p^{(i+1)}(z_k \mid x_k)$ are defined as follows:
$$p^{(i+1)}(x_k \mid z_{1:k-1}) = \mathcal{N}(x_k; \hat{x}_{k|k-1}, \hat{P}_{k|k-1}^{(i+1)})$$
$$p^{(i+1)}(z_k \mid x_k) = \mathcal{N}(z_k; h(x_k), \hat{R}_k^{(i+1)})$$
The expectations of the updated covariance matrices are given by
$$\hat{P}_{k|k-1}^{(i+1)} = \big\{E^{(i+1)}[P_{k|k-1}^{-1}]\big\}^{-1}$$
$$\hat{R}_k^{(i+1)} = \big\{E^{(i+1)}[R_k^{-1}]\big\}^{-1}$$
These updated expectations are subsequently utilized in the Kalman gain computation and the state update step. The fixed-point procedure continues until either the state estimate converges or the entropy-based cost reaches a predefined threshold.

2.3. CEEVB-EKF

The CEE cost function measures the information-theoretic dispersion of the estimation error and provides a more resilient objective in scenarios where traditional second-order statistics fail to capture heavy-tailed or outlier-prone behavior. We introduce the mathematical formulation of the CEE-based cost and describe how it is integrated into the variational Bayesian update process.
The estimation error is defined as $e = x - y$, and the corresponding CEE-based cost function is formulated as follows.
$$J_{CEE}(x_k) = \lambda_1 \sum_{i=1}^{L} G_{\sigma_1}(e_i) + \lambda_2 \sum_{i=1}^{L} \sum_{j=1}^{L} G_{\sigma_2}(e_i - e_j)$$
where the weight coefficients are set to:
$$\lambda_1 = \frac{\lambda}{L}, \quad \lambda_2 = \frac{1 - \lambda}{L^2}$$
and $\lambda \in [0, 1]$ is a weighting factor and $G_\sigma(\cdot)$ is usually a Gaussian kernel function:
$$G_\sigma(\cdot) = \exp\left(-\frac{\|\cdot\|^2}{2\sigma^2}\right)$$
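A direct transcription of this cost function, with illustrative error values and kernel bandwidths:

```python
import numpy as np

def gaussian_kernel(e, sigma):
    # G_sigma(e) = exp(-||e||^2 / (2 sigma^2))
    return np.exp(-np.linalg.norm(e) ** 2 / (2.0 * sigma ** 2))

def cee_cost(errors, lam, sigma1, sigma2):
    # J_CEE = lambda1 * sum_i G_s1(e_i) + lambda2 * sum_{i,j} G_s2(e_i - e_j)
    L = len(errors)
    lam1, lam2 = lam / L, (1.0 - lam) / L ** 2
    mcc = sum(gaussian_kernel(e, sigma1) for e in errors)
    mee = sum(gaussian_kernel(ei - ej, sigma2) for ei in errors for ej in errors)
    return lam1 * mcc + lam2 * mee

J = cee_cost([np.array([0.0]), np.array([0.1])], lam=0.5, sigma1=1.0, sigma2=1.0)
```

When all errors are zero the cost attains its maximum value of 1, since $\lambda_1 L + \lambda_2 L^2 = \lambda + (1 - \lambda) = 1$; outliers shrink their own kernel contributions toward zero, which is the source of the robustness.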
where $\sigma$ denotes the kernel bandwidth; $\sigma_1$ corresponds to the MCC term, and $\sigma_2$ corresponds to the MEE term. Using the VB-updated covariances, the joint error covariance is therefore factorized as
$$E[\tilde{\mu}_k \tilde{\mu}_k^T] = \begin{bmatrix} \tilde{B}_{\hat{p},k|k-1}^{(i+1)} (\tilde{B}_{\hat{p},k|k-1}^{(i+1)})^T & 0 \\ 0 & \tilde{B}_{\hat{r},k}^{(i+1)} (\tilde{B}_{\hat{r},k}^{(i+1)})^T \end{bmatrix} = \tilde{B}_k \tilde{B}_k^T$$
The matrix $\tilde{B}_k$ is obtained via Cholesky decomposition. Multiplying both sides of the joint concatenated model by $\tilde{B}_k^{-1}$ yields a new regression model.
$$\tilde{y}_k = \tilde{W}_k x_k + \tilde{e}_k$$
with
$$\tilde{y}_k = \tilde{B}_k^{-1} \begin{bmatrix} \hat{x}_{k|k-1} \\ z_k - h(\hat{x}_{k|k-1}) + H_k \hat{x}_{k|k-1} \end{bmatrix}, \quad \tilde{W}_k = \tilde{B}_k^{-1} \begin{bmatrix} I \\ H_k \end{bmatrix}, \quad \tilde{e}_k = \tilde{B}_k^{-1} \begin{bmatrix} -\varepsilon_{k|k-1} \\ v_k \end{bmatrix}$$
The optimal state estimate is obtained by optimizing $J_{CEE}$ through fixed-point iteration. The stationarity condition is defined as follows.
$$\frac{\partial J_{CEE}(x_k)}{\partial x_k} = \tilde{W}_k^T \left( \frac{\lambda_1}{\sigma_1^2} C_k + \frac{2\lambda_2}{\sigma_2^2} (\Xi_k - \Psi_k) \right) \tilde{e}_k = \tilde{W}_k^T M_k \tilde{e}_k = 0$$
The matrices involved in this condition are defined as follows.
$$C_k = \mathrm{diag}\big(G_{\sigma_1}(\tilde{e}_{1,k}), \ldots, G_{\sigma_1}(\tilde{e}_{L,k})\big)$$
$$\Xi_k = \mathrm{diag}\left(\sum_{i=1}^{L} G_{\sigma_2}(\tilde{e}_{1,k} - \tilde{e}_{i,k}), \ldots, \sum_{i=1}^{L} G_{\sigma_2}(\tilde{e}_{L,k} - \tilde{e}_{i,k})\right)$$
$$[\Psi_k]_{i,j} = G_{\sigma_2}(\tilde{e}_{i,k} - \tilde{e}_{j,k})$$
Solving the above yields the updated estimate at iteration $t$.
$$(\hat{x}_k^{(t)})_{CEE} = f(\hat{x}_k^{(t-1)}) = (\tilde{W}_k^T M_k^{(t-1)} \tilde{W}_k)^{-1} (\tilde{W}_k^T M_k^{(t-1)} \tilde{y}_k)$$
Here $t$ denotes the fixed-point iteration index. The matrix $M_k^{(t-1)}$ is constructed and partitioned into blocks as follows.
$$M_k^{(t-1)} = \frac{\lambda_1}{\sigma_1^2} C_k^{(t-1)} + \frac{2\lambda_2}{\sigma_2^2}\big(\Xi_k^{(t-1)} - \Psi_k^{(t-1)}\big) = \begin{bmatrix} M_{x,k}^{(t-1)} & M_{yx,k}^{(t-1)} \\ M_{xy,k}^{(t-1)} & M_{y,k}^{(t-1)} \end{bmatrix}$$
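The fixed-point recursion above can be sketched for a generic linear regression model with scalar residual components; the regression problem and parameter values below are illustrative assumptions, not the filter's actual $\tilde{W}_k$ and $\tilde{y}_k$.

```python
import numpy as np

def build_M(e_tilde, lam, sigma1, sigma2):
    # Diagonal C_k and Xi_k, and full Psi_k, built from the components of e~_k
    L = e_tilde.size
    lam1, lam2 = lam / L, (1.0 - lam) / L ** 2
    G = lambda v, s: np.exp(-v ** 2 / (2.0 * s ** 2))
    C = np.diag(G(e_tilde, sigma1))
    Psi = G(e_tilde[:, None] - e_tilde[None, :], sigma2)
    Xi = np.diag(Psi.sum(axis=1))
    return (lam1 / sigma1 ** 2) * C + (2.0 * lam2 / sigma2 ** 2) * (Xi - Psi)

def cee_fixed_point(W, y, x0, lam, s1, s2, iters=20, tol=1e-8):
    # x^{(t)} = (W^T M^{(t-1)} W)^{-1} (W^T M^{(t-1)} y), iterated to convergence
    x = x0.copy()
    for _ in range(iters):
        M = build_M(y - W @ x, lam, s1, s2)
        x_new = np.linalg.solve(W.T @ M @ W, W.T @ M @ y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy robust location estimate: one gross outlier in three observations
W = np.ones((3, 1))
y = np.array([1.0, 1.0, 10.0])
x = cee_fixed_point(W, y, np.array([y.mean()]), lam=1.0, s1=1.0, s2=1.0)
```

The kernel weights in `M` down-weight the outlier, so the estimate settles near 1 rather than the contaminated mean of 4.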
To facilitate computation, the matrix product $(\tilde{B}_k^{-1})^T M_k \tilde{B}_k^{-1}$ is expanded as a product of block matrices.
$$(\tilde{B}_k^{-1})^T M_k \tilde{B}_k^{-1} = \begin{bmatrix} (\tilde{B}_{\hat{p},k|k-1}^{-1})^T M_{x,k}^{(t-1)} \tilde{B}_{\hat{p},k|k-1}^{-1} & (\tilde{B}_{\hat{p},k|k-1}^{-1})^T M_{yx,k}^{(t-1)} \tilde{B}_{\hat{r},k}^{-1} \\ (\tilde{B}_{\hat{r},k}^{-1})^T M_{xy,k}^{(t-1)} \tilde{B}_{\hat{p},k|k-1}^{-1} & (\tilde{B}_{\hat{r},k}^{-1})^T M_{y,k}^{(t-1)} \tilde{B}_{\hat{r},k}^{-1} \end{bmatrix}$$
where $\tilde{B}_{\hat{p},k|k-1}$ and $\tilde{B}_{\hat{r},k}$ denote the factors $\tilde{B}_{\hat{p},k|k-1}^{(i+1)}$ and $\tilde{B}_{\hat{r},k}^{(i+1)}$ with the VB iteration superscript suppressed for brevity. This leads to the following simplified notations.
$$P_{x,k|k-1}^{(t-1)} = (\tilde{B}_{\hat{p},k|k-1}^{-1})^T M_{x,k}^{(t-1)} \tilde{B}_{\hat{p},k|k-1}^{-1}; \quad P_{xy,k|k-1}^{(t-1)} = (\tilde{B}_{\hat{r},k}^{-1})^T M_{xy,k}^{(t-1)} \tilde{B}_{\hat{p},k|k-1}^{-1}$$
$$P_{yx,k|k-1}^{(t-1)} = (\tilde{B}_{\hat{p},k|k-1}^{-1})^T M_{yx,k}^{(t-1)} \tilde{B}_{\hat{r},k}^{-1}; \quad R_k^{(t-1)} = (\tilde{B}_{\hat{r},k}^{-1})^T M_{y,k}^{(t-1)} \tilde{B}_{\hat{r},k}^{-1}$$
The final state estimate and Kalman gain are derived after completing the fixed-point iteration procedure. The updated state at iteration $t$ is calculated as
$$(\hat{x}_k^{(t)})_{CEE} = \hat{x}_{k|k-1} + K_k^{(t-1)} (z_k - h(\hat{x}_{k|k-1}))$$
The Kalman gain $K_k^{(t-1)}$ is obtained using the following expression.
$$K_k^{(t-1)} = \big[P_{x,k|k-1}^{(t-1)} + H_k^T P_{xy,k|k-1}^{(t-1)} + \big(P_{yx,k|k-1}^{(t-1)} + H_k^T R_k^{(t-1)}\big) H_k\big]^{-1} \big(P_{yx,k|k-1}^{(t-1)} + H_k^T R_k^{(t-1)}\big)$$
The posterior covariance matrix is then refined as follows.
$$\tilde{P}_k = \big(I - K_k^{(t-1)} H_k\big) \hat{P}_{k|k-1}^{(i+1)} \big(I - K_k^{(t-1)} H_k\big)^T + K_k^{(t-1)} \hat{R}_k^{(i+1)} \big(K_k^{(t-1)}\big)^T$$
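This is the Joseph-form covariance update, which preserves symmetry and positive semidefiniteness even with a suboptimal gain; a sketch with hypothetical gain and covariance values:

```python
import numpy as np

def joseph_update(P_pred, R, K, H):
    # P~_k = (I - K H) P_{k|k-1} (I - K H)^T + K R K^T
    IKH = np.eye(P_pred.shape[0]) - K @ H
    return IKH @ P_pred @ IKH.T + K @ R @ K.T

# illustrative values: 2-state system, scalar measurement
H = np.array([[1.0, 0.0]])
K = np.array([[0.5], [0.1]])
P_new = joseph_update(np.eye(2), 0.04 * np.eye(1), K, H)
```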

3. Results and Discussion

To verify the performance of the proposed algorithm, a simulation framework was established under degraded GPS signal conditions. A 3D motion trajectory is used as the reference path to assess the tracking capability of each algorithm in complex motion scenarios, as shown in Figure 1. The testing environment used MATLAB R2022a, with the Satellite Navigation (SatNav) Toolbox 3.0 [12] to generate the necessary satellite data for GPS positioning, including satellite positions, velocities, and pseudoranges.
To evaluate the performance of the filtering algorithms under non-ideal conditions, the simulation scenario includes both a time-varying noise sequence and outlier interference (Figure 2). In this setup, measurement noise is primarily modeled using Gaussian distributions with varying parameters to emulate fluctuations in thermal noise intensity. In addition, biased outlier observations are intentionally introduced at specific time steps to reflect realistic situations such as sudden sensor faults or external disturbances.
Although both adaptive filters estimate the time-varying measurement noise covariance $R_k$, the CEEVB-EKF more accurately captures abrupt changes and maintains stable tracking performance, while the VB-EKF exhibits larger deviations, particularly during high-noise intervals. Compared with VB-EKF, the CEEVB-EKF shows improved resilience to abrupt disturbances and maintains tighter error bounds, particularly during outlier-rich periods in the North and Altitude directions.
Figure 3 presents a comparative analysis of the four filters: EKF, CEE-EKF, VB-EKF, and CEEVB-EKF. EKF and VB-EKF are more susceptible to error spikes in the presence of outliers, while CEE-EKF, though robust, lacks adaptability to rapidly changing noise conditions. In contrast, CEEVB-EKF combines the robustness of entropy-based filtering with the adaptability of Bayesian inference, effectively mitigating the degradation in positioning accuracy caused by outlier contamination.
The filters exhibit significant performance differences under time-varying Gaussian noise and impulsive outlier interference. EKF and CEE-EKF lack adaptability to noise variation and therefore fail to reflect changes in noise characteristics in real time, resulting in generally biased estimates. Although VB-EKF possesses adaptive tuning capability, it is overly sensitive to outliers, leading to severe fluctuations in its estimates. Overall, CEEVB-EKF, which integrates the adaptability of variational Bayesian inference with the robustness of centered error entropy, demonstrates strong resilience: it maintains stable and accurate tracking of $R_k$ under high-noise and extreme outlier conditions. CEEVB-EKF thus exhibits the most stable and accurate performance in both covariance tracking and state estimation, closely following the true noise variance while effectively mitigating the impact of outliers on positioning accuracy.

4. Conclusions

The proposed CEEVB-EKF method enhances robustness against non-Gaussian and outlier-contaminated measurements, as well as adaptability to unknown and time-varying noise statistics. The performance of four filtering approaches (EKF, VB-EKF, CEE-EKF, and CEEVB-EKF) was evaluated under simulation scenarios involving time-varying thermal noise and outlier contamination.
Compared with conventional EKF and VB-EKF, the proposed CEEVB-EKF effectively suppresses outlier-induced deviations and adapts to time-varying noise characteristics. While CEE-EKF exhibits strong robustness against impulsive noise, its performance is limited by fixed kernel parameters and a lack of adaptability. By integrating entropy-based cost functions with variational Bayesian inference, CEEVB-EKF achieves a balance between accuracy and robustness, as reflected in its lower estimation errors, reduced peak RMSE values, and narrower error bounds across all directions. These results confirm the effectiveness of the CEEVB-EKF framework in handling non-ideal GNSS measurement environments and highlight its potential for application in real-world navigation systems subject to outliers and time-varying noise.

Author Contributions

Conceptualization, D.-J.J.; methodology, D.-J.J.; software, H.-L.C. and Y.C.; validation, D.-J.J. and Y.C.; formal analysis, D.-J.J. and Y.C.; investigation, H.-L.C., Y.C. and D.-J.J.; data curation, H.-L.C., Y.C. and D.-J.J.; writing—original draft preparation, H.-L.C. and Y.C.; writing—review and editing, D.-J.J.; supervision, D.-J.J.; project administration, D.-J.J.; funding acquisition, D.-J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Science and Technology Council, Taiwan, under grant NSTC 113-2221-E-019-059.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available upon reasonable request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Teunissen, P.J.; Montenbruck, O. Springer Handbook of Global Navigation Satellite Systems; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10.
  2. Brown, R.G.; Hwang, P.Y. Introduction to Random Signals and Applied Kalman Filtering: With MATLAB Exercises and Solutions; Wiley: Hoboken, NJ, USA, 1997.
  3. Principe, J.C. Information Theoretic Learning: Renyi's Entropy and Kernel Perspectives; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010.
  4. Chen, B.; Dang, L.; Gu, Y.; Zheng, N.; Príncipe, J.C. Minimum error entropy Kalman filter. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 5819–5829.
  5. Chen, B.; Liu, X.; Zhao, H.; Principe, J.C. Maximum correntropy Kalman filter. Automatica 2017, 76, 70–77.
  6. Liu, X.; Ren, Z.; Lyu, H.; Jiang, Z.; Ren, P.; Chen, B. Linear and nonlinear regression-based maximum correntropy extended Kalman filtering. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3093–3102.
  7. Yang, B.; Huang, H.; Cao, L. Centered error entropy-based sigma-point Kalman filter for spacecraft state estimation with non-Gaussian noise. Space Sci. Technol. 2022, 2022, 9854601.
  8. Fu, Q.; Wang, L.; Xie, Q.; Zhou, Y. An improved adaptive iterative extended Kalman filter based on variational Bayesian. Appl. Sci. 2024, 14, 1393.
  9. Yang, B.; Du, B.; Li, N.; Li, S.; Shi, Z. Centered error entropy-based variational Bayesian adaptive and robust Kalman filter. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 5179–5183.
  10. Wang, S.; Dai, P.; Xu, T.; Nie, W.; Cong, Y.; Xing, J.; Gao, F. Maximum mixture correntropy criterion-based variational Bayesian adaptive Kalman filter for INS/UWB/GNSS-RTK integrated positioning. Remote Sens. 2025, 17, 207.
  11. Qiao, S.; Fan, Y.; Wang, G.; Mu, D.; He, Z. Maximum correntropy criterion variational Bayesian adaptive Kalman filter based on strong tracking with unknown noise covariances. J. Frankl. Inst. 2023, 360, 6515–6536.
  12. GPSoft LLC. Satellite Navigation Toolbox 3.0 User's Guide; GPSoft LLC: Athens, OH, USA, 2003.
Figure 1. Trajectory of the simulated vehicle.
Figure 2. Time series of the time-varying noise sequence with outliers.
Figure 3. Estimation errors in the East, North, and Altitude components under time-varying noise for the four filters: (a) East; (b) North; (c) Altitude.