Article

Engineering Applications of Adaptive Kalman Filtering Based on Singular Value Decomposition (SVD)

by Juan Carlos Bermúdez Ordoñez *, Rosa María Arnaldo Valdés and Victor Fernando Gómez Comendador *
School of Aeronautical and Space Engineering, Universidad Politécnica de Madrid, Plaza Cardenal Cisneros 3, 28040 Madrid, Spain
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(15), 5168; https://doi.org/10.3390/app10155168
Submission received: 11 May 2020 / Revised: 3 July 2020 / Accepted: 21 July 2020 / Published: 27 July 2020

Abstract:
This paper presents the results of applying a new mechanization of the Kalman filter (KF) algorithm using singular value decomposition (SVD). The proposed algorithm is useful in applications where the influence of round-off errors reduces the accuracy of the numerical solution of the associated Riccati equation. When the Riccati equation does not remain symmetric and positive definite, the fidelity of the solution can degrade to the point where it corrupts the Kalman gain, and it can corrupt the estimate. In this research, we design an adaptive KF implementation based on SVD, provide its derivation, and discuss the stability issues numerically. The filter is derived by substituting the SVD of the covariance matrix into the conventional discrete KF equations after its initial propagation, and an adaptive estimation of the measurement covariance matrix R_k is introduced. The results show that the algorithm is equivalent to current methods in terms of robustness, that it outperforms the estimation accuracy of the conventional Kalman filter, square root, and unit triangular matrix diagonal (UD) factorization methods in ill-conditioned and dynamic applications, and that it is applicable to most nonlinear systems. Four sample problems from different areas are presented for comparative study: an ill-conditioned sensitivity matrix, navigation with a dual-frequency Global Positioning System (GPS) receiver, host vehicle dynamic models, and distance measuring equipment (DME) using simultaneous slant range measurements, each performed with a conventional KF and an SVD-based (K-SVD) filter.

1. Introduction

We begin by defining what robustness means in the context of this research so that we can focus on the engineering applications of the proposed method. We use robustness to mean the “relative insensitivity of the solution to errors of some sort” [1], and numerical stability to denote robustness against round-off errors [2].
The design of the Kalman filter (KF) usually aims to “ensure that the covariance matrix of estimation uncertainty (the dependent variable of the matrix Riccati equation) remains symmetric and positive definite” [1,3]. The main problem to be solved in this research is the numerical robustness of KF. The proposed algorithm is useful in applications where the influence of round-off errors reduces the accuracy of the numerical solution of the associated Riccati equation. The fidelity of the solution may degrade to the point that it corrupts the Kalman gain when the Riccati equation does not stay symmetric and positive definite, and it can corrupt the estimate [1]. Verhaegen and Van Dooren have shown that the asymmetry of P is one of the factors contributing to the numerical instability of the Riccati equation [4].
The problem was circumvented by Bierman [5] using unit triangular matrix diagonal (UD) factorization, which was one of several alternatives available to those in need of computationally efficient and stable algorithms. This decomposition of the covariance matrix is of the form:
P = U D U^T,
where D is a diagonal matrix and U is an upper triangular matrix with ones along its main diagonal. One distinguishing characteristic of UD factorization and the Cholesky method is that there is no need to solve simultaneous equations [6]. Additionally, UD factorization does not use a square root numerical operation [6], although more recently square-root filtering using standard matrix decompositions has found greater acceptance [7]. In this formulation,
S = P^{1/2}  such that  P = S S^T.
S is obtained as a triangular matrix from the well-known Cholesky decomposition [8,9], a drawback of which is that it cannot be used if the covariance matrix is singular [6]. Both the square-root and Bierman methods can be used only with single-dimensional measurements having uncorrelated measurement noise, which one generally does not have in practice [8,9]. To overcome this limitation, another method, which is one of the topics of this research, can be used to solve this type of problem: the computation can be performed in parallel because it is formulated as matrix and matrix-vector operations without the need for additional transformations [10], and the solution is also able to deal with ill-conditioned matrices. This method is singular value decomposition (SVD).
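To illustrate this limitation concretely (a minimal numpy sketch with made-up numbers, not code from the paper): the Cholesky factorization fails on a singular covariance matrix, whereas its SVD always exists and reconstructs it exactly.

```python
import numpy as np

# A singular (rank-1) covariance matrix: Cholesky fails, SVD does not.
P = np.array([[4.0, 2.0],
              [2.0, 1.0]])  # det = 0

try:
    np.linalg.cholesky(P)
    cholesky_ok = True
except np.linalg.LinAlgError:
    cholesky_ok = False      # raised: P is not positive definite

# The SVD always exists; for a symmetric positive semidefinite matrix the
# singular values coincide with the eigenvalues.
U, s, Vt = np.linalg.svd(P)
P_rebuilt = U @ np.diag(s) @ Vt
```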
The singular values (which are positive by definition), or principal standard deviations σ_i, are the square roots of the diagonal elements. For the discrete-time linear problem evaluated in this research, the first KF based on SVD was developed by Wang, Libert, and Manneback [8,9]. In this paper, the algorithm is further improved by making the filter adaptive to changes in sensor quality.
The Riccati equation of the Kalman filter is an important tool for designing navigation systems, particularly those that incorporate sensors with diverse error characteristics, such as the Global Navigation Satellite System (GNSS) and the inertial navigation system (INS). The Riccati equation permits us to predict the performance of a given system design as a function of the subsystem and component error characteristics [2]. Designers can then search for the mixture of components that will reach a predefined set of performance endpoints at the lowest framework cost. The design of the Kalman filter is aimed at making sure the covariance matrix of estimation uncertainty (the dependent variable of the matrix Riccati equation) remains symmetric and positive definite [1]. In this research, we design an adaptive Kalman filter implementation based on SVD, provide its derivation, and discuss the stability issues numerically. The filter is derived by substituting the SVD of the covariance matrix into the conventional discrete Kalman filter equations after its initial propagation, together with an adaptive estimation of the measurement covariance matrix R_k.
One application of SVD in navigation is in calculating position errors by decomposing the geometric matrix G, where error analysis with SVD is used to construct the pseudoinverse of G, making the solution numerically stable. While linear least squares (LLS) is somewhat susceptible to round-off errors, as the normal equations are sometimes close to singular, SVD fixes the round-off problem and produces a solution that is the best approximation in the least-squares sense [11].
Different applications in the area of ionospheric tomography encounter the same ill-conditioning issue for the design matrix, which relates the unknown parameters containing the ionospheric electron density (IED) to the slant total electron content (STEC) measurements and is often non-square, ill-posed, and ill-conditioned. Many methods have been employed to circumvent the ill-posedness of the GNSS tomographic equation [12]. Those algorithms are iterative and include the Kalman filter, singular value decomposition, and algebraic reconstruction [7,9].
In the field of industrial electronic applications where numerical stability is necessary, an example is sensorless motor control, where the advantages of estimating the speed/position of the rotor include reduced hardware complexity, lower cost, reduced machine drive size, the elimination of the sensor cable, noise immunity, and increased reliability [13]; for example, a motor without a position sensor is suitable for harsh environments (such as submerged pumps). Other applications include diagnostics, the fault-tolerant control of AC drives, distributed generation and storage systems, robotics, vision and sensor fusion technologies, signal processing and instrumentation applications, and the real-time implementation of the Kalman filter for industrial control systems [13]. We briefly discuss the numerical stability of the Kalman filter related to industrial control systems and sensor fusion for cardiorespiratory signal extraction, but focus on numerical examples in aerospace applications like GNSS navigation.
The safe and efficient operation of control systems requires computer-based failure detection algorithms [13], where diagnosis may be used to provide fault-tolerant control. Several failures can occur in electrical systems, and redundant design is one of the solutions to assure the continuity and, therefore, the reliability of operations. Kalman filters can be used in fault-tolerant controls, for example, in aircraft control, home and municipal appliances (gas turbines, cooling systems, and heat pumps), and electric vehicles.
A challenge in the real-time implementation of Kalman filters is the high computational complexity; among the main difficulties are numerical stability when computing the covariance matrices, the computational load, and the execution time. There are three possible ways to circumvent these issues: modifying the algorithm, which is the scope of this paper; designing an efficient digital architecture to implement the estimator [13]; and using parallel computing. The observability problem physically means that one or more state variables (or linear combinations thereof) are hidden from the view of the observer (i.e., the measurements) [6]. This is a challenge in the sensorless control of AC drives, and it is accentuated when the stator frequency is close to zero and the rotor voltage carries small values; under these conditions, even with continuous excitation, an estimation of the speed of an induction motor (IM) is not likely to be obtainable. In practice, compared with extended Luenberger observers and sliding modes, the Kalman filter appears to function well in these conditions with no divergence [13]. The trade-off is that the complexity of the Kalman filter is higher.
Other important applications of the KF are in the signal extraction and fusion of sensors, self-driving cars, GNSS/INS integration, and biomedical sensors; for example, when extracting cardiorespiratory signals from noninvasive and noncontacting sensor arrangements, such as magnetic induction sensors. In those applications, the signals are separated for monitoring purposes; by using the KF, it is possible to measure the heart and breathing rates in real time [14], improving the estimation of results, whereas other filtering techniques fail to extract relevant information due to the mixing of respiratory and cardiac signals and disturbance by measurement noise [14].
The second objective of this paper is to promote the use of continuous adaptation of the measurement error covariance matrix R_k. The Kalman gain K_k weights the innovation according to the measurement error covariance matrix R_k (while the estimation error covariance matrix P_{k+1} is influenced by Q_k): higher values of R_k make the Kalman gain K_k very small, so the measurement update step differs little from the predicted value [15]. On the contrary, high values of Q_k engender more trust in the measurements than in the state estimate; Zhang et al. [16] showed that inappropriate tuning parameters may lead to low convergence, and introduced adaptive technology consisting of adjustments to the noise covariance matrices to improve the estimation accuracy. Muñoz-Bañón et al. [17] showed that the Kalman filter can be used to integrate high-level localization subsystems, where the covariance information plays an essential role as new observations become available. This approach will be used in this research.
The proper selection of R_k and Q_k is vital for optimal filter performance: the system model should describe as closely as possible the time evolution of the measured data, and the choices should reflect the system model error and the measurement noise [15]. This research only explores changes in the measurement error covariance matrix R_k. Zhang et al. [18] showed that when the system being considered has a stronger nonlinear action, the estimation accuracy will be poor. In order to address the shortcomings of the KF-based algorithm (K-SVD) and enhance the estimation precision, an adaptive filtering algorithm (K-SVD adaptive) was investigated, consisting of the adaptive estimation of R_k as a function of the measured standard deviation.

2. Conventional Kalman Filter

The filter estimates, at each time step, the a priori state of the system based on the a posteriori estimate of the previous step and the system's dynamic equation. From this a priori state estimate, the corresponding predicted measurement (H_k x̂_k^-) is obtained. It is then compared with the actual measurement z_k made at the same moment, and the a priori estimate is corrected as a function of the discrepancy, weighted by the filter gain K, which balances trust between the a priori estimate and the measurements. The process performed by the filter can be separated into two phases: prediction and correction [19].
Consider a discrete time model for a linear stochastic system:
x_{k+1} = Φ_{k+1,k} x_k + G_k w_k,
z_k = H_k x_k + v_k,
where x_k ∈ R^{n×1} is the state vector; z_k ∈ R^{m×1} is the measurement vector; w_k ∈ R^{n×1} is the disturbance input vector; v_k ∈ R^{m×1} is the measurement noise vector; and Φ_{k+1,k} ∈ R^{n×n}, G_k ∈ R^{n×n}, and H_k ∈ R^{m×n} are the system matrices. The covariance of w_k is Q_k, known as the process covariance matrix, and that of v_k is R_k, known as the measurement covariance matrix; both are presumed to be zero-mean Gaussian white noise with symmetric positive definite covariance matrices, so that w ~ N(0, Q) and v ~ N(0, R) at time t_k. A superscript − indicates a priori values of the variables, and a superscript + denotes values after the measurement update. The Kalman filter is described by the following recursive equations, under the assumption that the system matrices are well known:
Time extrapolation:
x̂_{k+1}^- = Φ_{k+1,k} x̂_k^+,
P_{k+1}^- = Φ_{k+1,k} P_k^+ Φ_{k+1,k}^T + Ψ_k Q_k Ψ_k^T.
Measurement update:
x̂_k^+ = x̂_k^- + K_k (z_k - H_k x̂_k^-),
K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1},
P_k^+ = (I - K_k H_k) P_k^-,
where P_k^- is the covariance of estimation uncertainty, K_k is the Kalman gain matrix, and z_k - H_k x̂_k^- is the innovation of the discrete Kalman filter.
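Equations (5)-(9) can be sketched in a few lines of numpy. The following toy constant-velocity example (with made-up noise values, not one of the paper's test cases) runs one predict/update cycle per measurement:

```python
import numpy as np

def kf_step(x, P, z, Phi, Q, H, R):
    """One conventional KF cycle: time extrapolation, then measurement update."""
    # Time extrapolation (a priori estimate and covariance)
    x_prior = Phi @ x
    P_prior = Phi @ P @ Phi.T + Q
    # Measurement update
    S = H @ P_prior @ H.T + R                 # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)  # correct with the innovation
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post

# Illustrative 1-D constant-velocity model: state = [position, velocity]
dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # only position is measured
Q = 1e-3 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.1]:  # noisy positions of a unit-velocity target
    x, P = kf_step(x, P, np.array([z]), Phi, Q, H, R)
```

After the four updates, the estimate has converged toward position ≈ 4 and velocity ≈ 1.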

3. Singular Value Decomposition

Here, we consider singular value decomposition, which is a way of decomposing a matrix A into the product of three other matrices: A = U Σ V^T.
Remark 1. Let A be an n × d matrix with right singular vectors v_1, v_2, …, v_k; left singular vectors u_1, u_2, …, u_k; and corresponding singular values σ_1, σ_2, …, σ_k. Then, A can be decomposed into a sum of rank-one matrices as follows:
A = Σ_{i=1}^{k} σ_i u_i v_i^T = σ_1 u_1 v_1^T + σ_2 u_2 v_2^T + … + σ_k u_k v_k^T = U Σ V^T.
Here, U ∈ R^{m×m} and V ∈ R^{n×n}, such that:
A = U Σ V^T,
where:
Σ = [ S  0 ; 0  0 ].
Here, Σ ∈ R^{m×n} and S = diag(σ_1, …, σ_r), with:
σ_1 ≥ … ≥ σ_r > 0,
σ_{r+1} = … = σ_n = 0.
The numbers σ_1 ≥ … ≥ σ_r, together with σ_{r+1} = … = σ_n = 0, are known as the singular values of A, and they are the positive square roots of the eigenvalues of A^T A.
The columns of U are the orthonormal eigenvectors of A A^T, whereas the columns of V are the orthonormal eigenvectors of A^T A. It is understood that the singular values and singular vectors of a matrix are relatively insensitive to perturbations in the entries of the matrix and to finite-precision errors [8,20].
Given a non-square matrix A = U Σ V^T, two matrices and their factorizations are of special interest:
A^T A = V Σ^2 V^T,
A A^T = U Σ^2 U^T.
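These two factorizations, together with the eigenvalue relation stated above, can be checked numerically (an illustrative numpy sketch with a random non-square matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # a generic non-square matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma2 = np.diag(s**2)

# A^T A = V Sigma^2 V^T  and  A A^T = U Sigma^2 U^T
lhs1, rhs1 = A.T @ A, Vt.T @ Sigma2 @ Vt
lhs2, rhs2 = A @ A.T, U @ Sigma2 @ U.T

# Singular values are the positive square roots of the eigenvalues of A^T A
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # descending order
```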
The algorithm is developed by comparing both sides of the obtained matrix equations.

4. SVD of Kalman Filter

In practical terms, if A^T A is positive definite, then Equation (13) can be compacted to:
A = U [ S ; 0 ] V^T,
where S is an n × n diagonal matrix. Specifically, if A is itself positive definite, then it will have a symmetric singular value decomposition (which is also an eigendecomposition):
A = U Σ U^T = U D^2 U^T,
where the principal standard deviations σ_i are the square roots of the diagonal elements. Assuming that the covariance of the Kalman filter has been propagated and updated through the filter algorithm, the singular value decomposition of P is:
P_k^+ = U_k^+ (D_k^+)^2 (U_k^+)^T.
Equation (6) can then be written as:
P_{k+1}^- = Φ_{k+1,k} U_k^+ (D_k^+)^2 (U_k^+)^T Φ_{k+1,k}^T + Ψ_k Q_k Ψ_k^T.
The idea is to find the factors U_{k+1}^- and D_{k+1}^- from Equation (18), such that:
P_{k+1}^- = U_{k+1}^- (D_{k+1}^-)^2 (U_{k+1}^-)^T.
We define the following (s + n) × n matrix:
[ D_k^+ (U_k^+)^T Φ_{k+1,k}^T ; (Q_k^{1/2})^T Ψ_k^T ]   (pre-array).
Then, we compute its singular value decomposition:
[ D_k^+ (U_k^+)^T Φ_{k+1,k}^T ; (Q_k^{1/2})^T Ψ_k^T ] (pre-array) = U_k [ D_k ; 0 ] V_k^T   (post-array SVD factors).
Multiplying each term on the left by its transpose, we get:
Φ_{k+1,k} U_k^+ (D_k^+)^T D_k^+ (U_k^+)^T Φ_{k+1,k}^T + Ψ_k Q_k^{1/2} (Q_k^{1/2})^T Ψ_k^T = V_k [ D_k^T | 0 ] U_k^T U_k [ D_k ; 0 ] V_k^T.
We obtain:
Φ_{k+1,k} U_k^+ (D_k^+)^2 (U_k^+)^T Φ_{k+1,k}^T + Ψ_k Q_k Ψ_k^T = V_k D_k^2 V_k^T.
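As a numerical check of this propagation step (a hedged numpy sketch with random matrices; the singular values and right singular vectors of the pre-array are taken as the propagated factors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# A posteriori factors U+, D+ from an SVD of a symmetric PD covariance
M = rng.standard_normal((n, n))
P_post = M @ M.T + n * np.eye(n)
U_post, d2, _ = np.linalg.svd(P_post)
D_post = np.diag(np.sqrt(d2))

Phi = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # state transition
Q = 0.5 * np.eye(n)                                    # process noise
Q_sqrt = np.sqrt(0.5) * np.eye(n)
Psi = np.eye(n)

# Pre-array: stack D+ U+^T Phi^T over (Q^(1/2))^T Psi^T, then take its SVD.
pre = np.vstack([D_post @ U_post.T @ Phi.T,
                 Q_sqrt.T @ Psi.T])
_, d_prior, Vt = np.linalg.svd(pre, full_matrices=False)
U_prior = Vt.T                      # right singular vectors -> U_{k+1}^-
D_prior = np.diag(d_prior)          # singular values        -> D_{k+1}^-

# Must equal the conventional Riccati propagation
P_prior_ref = Phi @ P_post @ Phi.T + Psi @ Q @ Psi.T
P_prior_svd = U_prior @ D_prior**2 @ U_prior.T
```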
Comparing the result of Equation (23) with Equations (18) and (19), it is easy to associate V k and D k with U k + 1 and D k + 1 . In the orthodox Kalman measurement update, substituting Equation (8) into (9) yields:
P_k^+ = P_k^- - P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} H_k P_k^-.
Lemma 1. Using the matrix inversion lemma in the case where a rank-one matrix b b^T is added to a positive definite matrix A yields:
(A + b b^T)^{-1} = A^{-1} - A^{-1} b (1 + b^T A^{-1} b)^{-1} b^T A^{-1},
where the superscript T denotes the complex conjugate transpose (or Hermitian) operation. It is well known in the literature that this formula is handy for developing a recursive least-squares algorithm for recursive identification or the design of an adaptive filter [21].
Applying the lemma to Equations (24) and (25) yields:
(P_k^+)^{-1} = (P_k^-)^{-1} + H_k^T R_k^{-1} H_k.
Introducing the singular value decomposition of a symmetric positive definite matrix for P_k^+ and P_k^- yields:
(U_k^+ (D_k^+)^2 (U_k^+)^T)^{-1} = (U_k^- (D_k^-)^2 (U_k^-)^T)^{-1} + H_k^T R_k^{-1} H_k,
(U_k^+ (D_k^+)^2 (U_k^+)^T)^{-1} = (U_k^-)^{-T} ((D_k^-)^{-2} + (U_k^-)^T H_k^T R_k^{-1} H_k U_k^-) (U_k^-)^{-1}.
To make the system adaptable to changes in sensor quality, the measurement noise is also estimated, which leads to a time-variant matrix. When the signal comes from different sensors, at each discrete time step k the noise covariance matrix is updated by appraising the standard deviation (std) of an arithmetically differentiated (diff) small-signal segment [14]:
R_k = diag(σ_{noise,1,k}^2, …, σ_{noise,m,k}^2),
σ_{noise,i,k} = std(diff(S_i)) / √2.
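A sketch of this adaptive estimate in numpy, reading Equation (30) as σ = std(diff(S_i))/√2 (the 1/√2 compensates for the variance doubling that differencing applies to white noise); the signal shapes and noise levels here are invented for illustration:

```python
import numpy as np

def adaptive_measurement_noise(segments):
    """Estimate R_k from short signal segments: sigma_i = std(diff(S_i)) / sqrt(2).

    Differencing suppresses the slowly varying signal component; for white
    noise of variance sigma^2, the differenced sequence has variance
    2 * sigma^2, hence the 1/sqrt(2) correction.
    """
    sigmas = np.array([np.std(np.diff(s)) / np.sqrt(2.0) for s in segments])
    return np.diag(sigmas**2)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 2000)
# Two sensor channels: slow signals plus white noise of known std 0.1 and 0.3
s1 = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)
s2 = np.cos(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)

R_k = adaptive_measurement_noise([s1, s2])
```

The recovered standard deviations come out close to the injected 0.1 and 0.3.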
The adaptive computation of the KF and SVD-based (K-SVD) algorithm can be added independent of the SVD portion, as for some applications the measurement covariance matrix is known and can be pre-computed at the algorithm initiation.
After computing Equations (29) and (30), let
R_k^{-1} = L_k L_k^T
be the Cholesky decomposition of the inverse of the covariance matrix. If the covariance matrix R is known, but not its inverse, then the reverse Cholesky decomposition R^{1/2} (R^{1/2})^T = R, with R^{1/2} upper triangular, can be identified [8]. It then follows that L_k = ((R_k^{1/2})^T)^{-1} is the necessary Cholesky factor [20] in Equation (31).
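Both routes to L_k can be compared numerically (an illustrative numpy sketch; numpy's `cholesky` returns a lower-triangular factor, so the two factors need not be identical entry by entry, but each satisfies R^{-1} = L L^T):

```python
import numpy as np

R = np.array([[0.04, 0.01],
              [0.01, 0.25]])              # an illustrative measurement covariance

# Route 1: Cholesky of the inverse directly, R^{-1} = L L^T
L = np.linalg.cholesky(np.linalg.inv(R))

# Route 2: factor R = R^(1/2) (R^(1/2))^T first, then L = ((R^(1/2))^T)^{-1}
R_half = np.linalg.cholesky(R)
L_alt = np.linalg.inv(R_half.T)
```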
Now, we consider the (m + n) × n matrix:
[ L_k^T H_k U_k^- ; (D_k^-)^{-1} ],
and compute its singular value decomposition:
[ L_k^T H_k U_k^- ; (D_k^-)^{-1} ] (pre-array) = U_k* [ D_k* ; 0 ] (V_k*)^T   (post-array SVD factors).
Multiplying each term on the left by its transpose produces:
(D_k^-)^{-2} + (U_k^-)^T H_k^T L_k L_k^T H_k U_k^- = V_k* (D_k*)^2 (V_k*)^T.
Then, Equation (28) can be written as follows:
(U_k^+)^{-T} (D_k^+)^{-2} (U_k^+)^{-1} = (U_k^-)^{-T} V_k* (D_k*)^2 (V_k*)^T (U_k^-)^{-1},
(U_k^+)^{-T} (D_k^+)^{-2} (U_k^+)^{-1} = (U_k^- V_k*)^{-T} (D_k*)^2 (U_k^- V_k*)^{-1}.
Comparing both sides of Equation (36):
U_k^+ = U_k^- V_k*,
D_k^+ = (D_k*)^{-1}.
The measurement update formulation is thus obtained; it requires calculating the singular value decomposition of a nonsymmetric matrix without explicitly forming its left orthogonal factor, which has the larger dimension. The Kalman gain term can be derived as:
K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}.
Inserting the products P_k^+ (P_k^+)^{-1} and R_k^{-1} R_k will not alter the gain. Thus, the Kalman filter gain can be written as:
K_k = P_k^+ (P_k^+)^{-1} P_k^- H_k^T R_k^{-1} R_k (H_k P_k^- H_k^T + R_k)^{-1},
K_k = P_k^+ (P_k^+)^{-1} P_k^- H_k^T R_k^{-1} (H_k P_k^- H_k^T R_k^{-1} + I)^{-1},
where I denotes the identity matrix. Next, substituting Equation (26) for (P_k^+)^{-1}, we get:
K_k = P_k^+ H_k^T R_k^{-1},
K_k = U_k^+ (D_k^+)^2 (U_k^+)^T H_k^T L_k L_k^T.
The Kalman gain K_k is fully defined, and the state vector measurement update is given by:
x̂_k^+ = x̂_k^- + K_k (z_k - H_k x̂_k^-).
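The complete measurement update, Equations (32)-(44), can be verified against the conventional update (a hedged numpy sketch with random matrices; only the singular values D* and the right factor V* of the pre-array are used, as in the derivation above):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 2

# A priori covariance factors: P- = U- (D-)^2 (U-)^T
M = rng.standard_normal((n, n))
P_prior = M @ M.T + n * np.eye(n)
U_pr, d2, _ = np.linalg.svd(P_prior)
D_pr = np.diag(np.sqrt(d2))

H = rng.standard_normal((m, n))
R = np.diag([0.04, 0.09])
L = np.linalg.cholesky(np.linalg.inv(R))   # R^{-1} = L L^T

# Pre-array [L^T H U- ; (D-)^{-1}] and its SVD
pre = np.vstack([L.T @ H @ U_pr, np.linalg.inv(D_pr)])
_, d_star, Vt_star = np.linalg.svd(pre, full_matrices=False)
V_star = Vt_star.T

U_post = U_pr @ V_star                     # U+ = U- V*
D_post = np.diag(1.0 / d_star)             # D+ = (D*)^{-1}
P_post_svd = U_post @ D_post**2 @ U_post.T
K_svd = P_post_svd @ H.T @ L @ L.T         # K = P+ H^T R^{-1}

# Conventional update for reference
S = H @ P_prior @ H.T + R
K_ref = P_prior @ H.T @ np.linalg.inv(S)
P_post_ref = (np.eye(n) - K_ref @ H) @ P_prior
```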
Algorithm 1. Singular value decomposition-based Kalman filtering:
Input:
Initialization (k = 0). Perform SVD on P_0 to determine the initial U_0 and D_0. The initial P_0 ∈ R^{n×n}; U_0 and D_0 are orthogonal and diagonal matrices, respectively. Matrix D_0 contains the singular values of P_0.
Steps (k = 1 to n):
  • Update U_k^+ and D_k^+ by constructing the (m + n) × n matrix using pre-array (32). Use only the right SVD factor V.
  • Compute the Kalman gain K_k from Equation (43).
  • Update the estimate x̂_k^+ from Equation (44). Notice that L_k^T is triangular, and H_k^T L_k can be obtained as (L_k^T H_k)^T from step 2.
  • Compute the extrapolation x̂_{k+1}^-; from Equation (3), U_{k+1}^- and D_{k+1}^- can be obtained by the SVD of the Equation (21) pre-array matrix without forming the left orthogonal factor (the one associated with Equation (13) of the SVD).
  • Compute the measured noise time-variant matrix using Equations (29) and (30).
Return: Compute L_k using Equation (31) as L_k = ((R_k^{1/2})^T)^{-1}.
Algorithm 1 requires the Cholesky decomposition of the noise covariance matrix R; for time-invariant matrices, the Cholesky decomposition is performed once during the initial step, i.e., before the KF recursion. Note that the algorithm also requires the m × m matrix inversion for the terms R^{1/2} and (R^{1/2})^T in Equation (31). Again, if the noise covariance is constant over time, this matrix calculation is pre-computed while the filter is initialized.

5. Numerical Experiments: Applications in Navigation

The computer round-off for floating-point arithmetic is characterized by a single parameter ε_roundoff, called the unit round-off error, defined as the largest number for which 1 + ε_roundoff = 1 in machine precision.
In MATLAB, the parameter “eps” satisfies the definition above [2].
Round-off errors are a side effect of computer arithmetic using fixed or floating-point data words with a fixed number of bits [2]. The next example demonstrates that round-off can still be a problem when implementing KF in MATLAB environments and how a problem that is well-conditioned can be made ill-conditioned by the filter implementation [2]. The problem is taken from Grewal and Andrews, and it is significant, as it shows the filter’s observational behavior. The example does not make use of the Algorithm 1 adaptive computation from Equations (29) and (30), and the resulting algorithm is called K-SVD.
Example 1. Let I_n denote the n × n identity matrix. Consider the filtering problem with the measurement sensitivity matrix:
H = [ 1  1  1 ; 1  1  1 + δ ],
and covariance matrices P_0 = I_3 and R = δ^2 I_2. To simulate round-off, it is assumed that:
δ^2 < ε_roundoff  but  δ > ε_roundoff.
In this case, although H has rank 2, in machine precision the product H_k P_0 H_k^T computed with round-off will equal:
[ 3  3 + δ ; 3 + δ  3 + 2δ ],
which is singular, and the result remains unchanged when R is added to H_k P_0 H_k^T.
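This loss of rank is easy to reproduce (a numpy sketch; δ = 10^{-9} satisfies δ² < ε_roundoff < δ for IEEE double precision):

```python
import numpy as np

eps = np.finfo(float).eps                 # MATLAB's "eps" for double precision
delta = 1e-9                              # delta^2 < eps, but delta > eps

H = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0 + delta]])
P0 = np.eye(3)
R = delta**2 * np.eye(2)

# In exact arithmetic, H P0 H^T = [[3, 3+d], [3+d, 3+2d+d^2]], but the d^2
# term is lost to round-off, leaving an effectively singular 2x2 matrix;
# adding R (of magnitude d^2) does not restore invertibility.
HPHt = H @ P0 @ H.T
det_innovation = np.linalg.det(HPHt + R)
cond_innovation = np.linalg.cond(HPHt + R)
```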
All the KF tests were carried out with the same precision (64-bit floating point) in MATLAB running on a standard personal computer with an Intel(R) Core(TM) i5-4200U CPU @ 1.60 GHz 2.30 GHz and 8.0 GB RAM, where the unit round-off error is 2^{-53} ≈ 1.11 × 10^{-16}. The MATLAB eps function is twice the unit round-off error [2,22]. Hereafter, the set of numerical experiments is repeated for various values of the ill-conditioning parameter δ, such that delta tends toward the machine precision limit, δ → ε_roundoff. This example does not prove the robustness of the solutions of the UD, square root Kalman (SQRK), and K-SVD methods; full implementation would require temporal updates. However, observational updates are what have created difficulties with traditional implementations [2].
Figure 1 illustrates how the conventional KF and a few other choice implementation methods work on various ill-conditioned experiments as the conditioning parameter δ → 0. For this particular example, the square-root-based (SQRK-based, Carlson), Cholesky-based (UD-based, Bierman), and K-SVD-based methods appear to degrade more gracefully than the others. The SQRK, UD decomposition, and K-SVD solutions also retain nine digits (≈30 bits) of accuracy as δ approaches ε_roundoff, while the other methods have no bits of accuracy in the calculated solution [1]. The other methods worth mentioning in this research are the conventional KF, the Potter square root filter, and Joseph stabilization.
The problem analyzed corresponds to an ill-conditioned intermediate result, R* = H P H^T + R, whose inversion appears in the Kalman gain formula in Equation (8). In this case, although H has rank 2, in machine precision the product H P_0 H^T + R is not invertible, and the observational filter updates fail. The results are provided graphically in Figure 1, showing that the K-SVD-based method performs as well as the UD-based and SQRK-based Kalman filter methods.
Examples 2 and 3 as outlined were each run once, i.e., the research team performed a single simulation. In each run, the true dynamic state trajectory was x_k^true, k = 1, …, K; the estimated dynamic state path was x̂_{k/k}, k = 1, …, K; and the root mean square error (RMSE) of the state vector for each component is given as follows [22]:
RMSE_{x_i} = sqrt( (1 / (M K)) Σ_{j=1}^{M} Σ_{k=1}^{K} (x_k^true - x̂_{k/k})^2 ).
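In code, this per-component RMSE is an average over the M runs and K time steps (an illustrative numpy sketch; the array shapes and noise level are invented):

```python
import numpy as np

def rmse_per_component(x_true, x_est):
    """RMSE over M Monte Carlo runs and K time steps, per state component.

    x_true, x_est: arrays of shape (M, K, n_states).
    """
    err2 = (x_true - x_est) ** 2            # squared error per run/step/state
    return np.sqrt(err2.mean(axis=(0, 1)))  # average over runs and steps

# Sanity check: an estimation error that is pure white noise of std 0.5
rng = np.random.default_rng(4)
x_true = rng.standard_normal((20, 500, 2))
x_est = x_true + 0.5 * rng.standard_normal((20, 500, 2))
rmse = rmse_per_component(x_true, x_est)
```

The computed RMSE comes out near the injected noise standard deviation of 0.5 for both components.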
Examples 2 and 3 use a GPS configuration loaded as Yuma ephemeris data from a GPS satellite from March 2014, downloaded from the US Coast Guard site YUMA Almanac [2].
Example 2. The following example shows a navigation simulation with a dual-frequency GPS receiver on a 100 km-long figure-8 track for 1 h with signal outages of 1 min each. The problem is taken from Grewal and Andrews, and it is significant as it shows the filter’s observational behavior [2]. The state vector corresponds to nine host vehicle model dynamic variables and two receiver clock error variables. The nine host vehicle variables are: east error (m), north error (m), altitude error (m), east velocity error (m/s), north velocity error (m/s), altitude rate error (m/s), east acceleration error (m/s²), north acceleration error (m/s²), and upward acceleration error (m/s²). The MATLAB m-file solves the Riccati equation with a dynamic coefficient matrix of block-diagonal structure [2], where the 9 × 9 block corresponds to the host vehicle dynamics and the 2 × 2 block to the clock error:
[ 9 × 9   0 ; 0   2 × 2 ].
Table 1 summarizes the outcome for a simulation using real ephemeris GPS data and an ill-conditioned intermediate result, R* = H P H^T + R, whose inversion appears in the Kalman gain formula in Equation (8). The example does not make use of the Algorithm 1 adaptive computation from Equations (29) and (30), and the resulting algorithm is called K-SVD.
The results in Table 1 show that the conventional KF is not suitable to gracefully handle round-off errors, and other filter methods can be used to better handle the round-off issue. A large δ corresponds to a well-conditioned state. Once degradation happens, some of the filters decay faster than the K-SVD. The filter robustness can then be obtained by comparing the outcomes shown in Table 1 against those with the worst performance. The threshold where the filter can no longer handle the error is reached when the algorithm produces “NaN,” or not a number, and this happens for the conventional KF at δ = 10^{-10}. A comparison of the K-SVD with SQR-KF and UD-KF shows that the K-SVD algorithm is superior in handling round-off errors. UD-KF is slightly better than SQR-KF, and the conventional KF does not perform as well as the other two. Regarding the execution time, K-SVD and the conventional KF are slower than SQRK and UD, because SVD is computationally more costly than Cholesky decomposition [22]. However, SVD is suitable for parallel computing and has better numerical stability.
Example 3. To compare and evaluate the performance of the new algorithm against conventional KF implementations, we include example 3, which is taken from [2]. The example is a comparison of the GNSS navigation performance with four vehicle tracking models, using a simulated racetrack with a 1.5 km figure-8 track at 90, 140, 240, and 360 kph average speed and a nominal low-cost clock, with a relative stability of 1 × 10^{-9} part/part over 1 s. The number of states in the KF model varies as satellites come into and fall from view. The vehicle states (position, velocity, and acceleration) are as follows: position north, position east, position up (vertical), velocity north, velocity east, velocity vertical, acceleration north, acceleration east, and acceleration vertical. Models 3 and 5 do not include the three acceleration states (north, east, vertical).
Table 2 lists the number of state variables and the minimum and maximum number of visible satellites. Table 3 lists the parametric models used in this research to model a single component of the motion of a host vehicle. The model parameters mentioned in Table 3 include the standard deviation σ and the correlation time constant τ of position, velocity, acceleration, and jerk (the derivative of acceleration) [2]. Example 3 makes use of the Algorithm 1 adaptive computation from Equations (29) and (30), and the resulting algorithm is called K-SVD adaptive.
Regarding the host vehicle dynamic models in Table 3, the third model is a navigation model that is often used to track the GNSS receiver phase and frequency error, and to track the vehicle position and velocity. The fifth model has restricted root mean square (RMS) speed and acceleration, and is a further refinement of a vehicle with bounded velocity and acceleration. These models also perform better when the signals are lost, because they take the true vehicle limitations into consideration. Model 6 has bounded RMS location, and may be suitable for ships and land vehicles with restricted altitude excursions, including vehicles operating within tightly confined areas [2]. Each model was tuned to the simulated vehicle dynamics by setting its model parameters to fit the values described in [2].
The results from the experiment in Example 3 are shown in Table 4 and Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13. The experiment was designed to have a side-by-side comparison between the conventional KF and its novel factored form, K-SVD and K-SVD adaptive.
Comparing the values for each model in Table 4 at different velocities, it is clear that when the velocity increases (e.g., from 90 to 360 kph), the RMS error of the conventional KF grows well above that of the K-SVD, by about 172%. At the model level, the best performer in all cases is model 5. From the figures above, it can be inferred that, all things being equal (the only difference in the MATLAB source code is the algorithm used to compute the Kalman filter), K-SVD adaptive has better prediction accuracy than the conventional KF and the traditional factored KFs. This means that K-SVD adaptive is more robust than the currently known factorization-based algorithms, while providing better filtering capability for the same operations. The algorithm captures the information that matters, namely the subspace of R^N spanned by the leading columns of V, the right orthogonal factor [8,9]. By a matrix multiplication, the algorithm realizes a low-rank dimensionality reduction that captures much of the variability; this corresponds to selecting the eigenvectors with the largest eigenvalues, since the SVD orders the singular values from largest to smallest. Additionally, by adapting to variations in the measurement error covariance, the algorithm achieves higher accuracy.
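The point about singular value ordering and the dominant subspace can be seen in a minimal numpy sketch; the data matrix below is synthetic and purely illustrative:

```python
import numpy as np

# Build a 200 x 6 data matrix of exact rank 3, so the "information that
# matters" lives in a 3-dimensional subspace of R^6.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 6))

# numpy returns the singular values in decreasing order, so the leading
# right singular vectors (rows of Vt) span the directions of greatest
# variability.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert s[0] >= s[1] >= s[2] >= s[3]

k = 3                                 # number of dominant directions kept
A_k = U[:, :k] * s[:k] @ Vt[:k, :]    # rank-k reconstruction
# the discarded singular values are numerically zero, so A_k recovers A
assert np.allclose(A, A_k, atol=1e-8)
```

With noisy data the trailing singular values are small but nonzero, and the same truncation discards them, which is the noise-filtering effect described above.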
Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19 visualize the results of Table 4. From Figure 14, it is easy to see that K-SVD adaptive model 6 performs best for the north position. K-SVD model 5 also performs relatively well compared with the conventional KF. Similar results can be inferred from Figure 15; K-SVD adaptive model 6 is the best performer, followed by K-SVD adaptive models 3 and 5. Conclusive results can also be seen for the downward position RMS error in Figure 16. Model 5 performs better, independent of the filter type. Model 6 is also a good performer for the downward position, and the worst is model 3.
Figure 17 illustrates the results for the north velocity RMS. From this figure, K-SVD adaptive model 6 is the best performer, followed closely by K-SVD model 5. From Figure 18, the best performer is K-SVD adaptive model 5. Finally, regarding the downward velocity RMS in Figure 19, K-SVD models 3 and 6 perform relatively well.
Example 4. Distance measuring equipment (DME) operations will continue to expand as an alternative navigation source alongside space-based navigation systems such as GPS and Galileo [23]. In the 21st century, airborne navigation has become increasingly reliant on satellite guidance; nonetheless, satellite signals are extremely weak, can be spoofed, and are not always available. Mandates from the US and the European Union require maintaining and upgrading ground-based navigation aids.
The following example simulates a simplified model of an electronic DME-to-DME position and velocity navigation system, where the noisy measurement of a slant range (distance) is the underlying observable. The airborne distance measuring equipment transmits a pulse that is received and returned by the ground station, and the aircraft equipment converts the round-trip transit time into distance based on the propagation delay. The aircraft flies along a fixed path at a cruising speed of 583 kph [6]. The duration of the flight is 3000 s; the initial coordinates at t = 0 are defined as normal random variables, $x_0 \sim N(60, 10\ \mathrm{km}^2)$ and $y_0 \sim N(210, 10\ \mathrm{km}^2)$; the sample time is T = 2 s; g = 10 m/s²; and the maximum deviation in the aircraft acceleration is $\sigma_{a\,max} = 0.001$ m/s².
Suppose that the two DME stations lie on the x-axis, as shown in Figure 20, with known positions. At the end of the trajectory, the aircraft follows a circular path at constant speed, equal to that of the linear motion, turning through 270° with a radius of 20 km. The trajectory is depicted in Figure 21 and is assumed to be level, with constant velocity plus small perturbations caused by small random accelerations. This model is only practical for short intervals.
For navigation based on estimation, the position (initial state) of the airplane, assumed known, is extrapolated to a later time step using the dynamic equation of the process and the known excitation that disturbs the system.
To obtain the distances $z_1$ and $z_2$, the following geometric relations hold for both DMEs; the measurements in terms of x and y are:
$$z_1 = \sqrt{(x - x_1)^2 + (y - y_1)^2},$$
$$z_2 = \sqrt{(x - x_2)^2 + (y - y_2)^2},$$
$$z_1 = f_1(x, y), \qquad z_2 = f_2(x, y).$$
To estimate the aircraft position from the above expressions, it is necessary to solve for x and y in terms of trigonometric functions, and the equations are linearized by Taylor expansion:
$$dz_1 = \frac{\partial f_1(x,y)}{\partial x}\,dx + \frac{\partial f_1(x,y)}{\partial y}\,dy,$$
$$dz_2 = \frac{\partial f_2(x,y)}{\partial x}\,dx + \frac{\partial f_2(x,y)}{\partial y}\,dy.$$
Writing Equations (50) and (51) in matrix form, we have:
$$\begin{bmatrix} dz_1 \\ dz_2 \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1(x,y)}{\partial x} & \dfrac{\partial f_1(x,y)}{\partial y} \\[4pt] \dfrac{\partial f_2(x,y)}{\partial x} & \dfrac{\partial f_2(x,y)}{\partial y} \end{bmatrix} \begin{bmatrix} dx \\ dy \end{bmatrix}.$$
Thus, the evaluation of the partial derivatives leads to:
$$\frac{\partial f_1(x,y)}{\partial x} = \frac{x - x_1}{z_1} = \sin\theta_1, \qquad \frac{\partial f_1(x,y)}{\partial y} = \frac{y - y_1}{z_1} = \cos\theta_1,$$
$$\frac{\partial f_2(x,y)}{\partial x} = \frac{x - x_2}{z_2} = \sin\theta_2, \qquad \frac{\partial f_2(x,y)}{\partial y} = \frac{y - y_2}{z_2} = \cos\theta_2.$$
Substituting in (53), we have:
$$\begin{bmatrix} dz_1 \\ dz_2 \end{bmatrix} = \begin{bmatrix} \sin\theta_1 & \cos\theta_1 \\ \sin\theta_2 & \cos\theta_2 \end{bmatrix} \begin{bmatrix} dx \\ dy \end{bmatrix}.$$
Then,
$$\begin{bmatrix} \Delta z_1 \\ \Delta z_2 \end{bmatrix} = \begin{bmatrix} \sin\theta_1 & \cos\theta_1 \\ \sin\theta_2 & \cos\theta_2 \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}.$$
Finally, we note that the equation above can be generalized further, because the sine and cosine terms represent the lines of sight to the two DME channels, namely the direction cosines with respect to the x- and y-axes [6]. Note that the connection between the observables, $(z_1, z_2)$, and the estimated quantities, $(x, y)$, is not linear. Therefore,
$$\mathbf{H} = \begin{bmatrix} \sin\theta_1 & \cos\theta_1 \\ \sin\theta_2 & \cos\theta_2 \end{bmatrix}.$$
Matrix H is sometimes called the linearized measurement matrix. Applying the inverse matrix method, we have:
$$\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \mathbf{H}^{-1} \begin{bmatrix} \Delta z_1 \\ \Delta z_2 \end{bmatrix}.$$
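As a numerical check of the linearized fix above, the following sketch builds H from the line-of-sight geometry and inverts the 2 × 2 system. The station coordinates and the displacement are made-up values for illustration only (Python rather than the paper's MATLAB):

```python
import math

x1, y1 = 0.0, 0.0        # DME station 1 on the x-axis (km), assumed values
x2, y2 = 100.0, 0.0      # DME station 2 on the x-axis (km), assumed values
x, y = 60.0, 210.0       # nominal aircraft position (km), assumed value

def ranges(px, py):
    """Slant ranges z1, z2 to both stations (Equations (48)-(49))."""
    return math.hypot(px - x1, py - y1), math.hypot(px - x2, py - y2)

z1, z2 = ranges(x, y)
# H holds the line-of-sight direction cosines sin/cos(theta_i)
H = [[(x - x1) / z1, (y - y1) / z1],
     [(x - x2) / z2, (y - y2) / z2]]

# a small true displacement of the aircraft and the resulting range changes
dx, dy = 0.3, -0.2
dz1 = ranges(x + dx, y + dy)[0] - z1
dz2 = ranges(x + dx, y + dy)[1] - z2

# invert the 2x2 system (dx, dy) = H^-1 (dz1, dz2) by Cramer's rule
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
dx_est = ( H[1][1] * dz1 - H[0][1] * dz2) / det
dy_est = (-H[1][0] * dz1 + H[0][0] * dz2) / det
```

For this geometry the linearized solution recovers (dx, dy) to first order; as the aircraft approaches the baseline between the stations, det tends to zero, which is exactly the DOP singularity discussed below.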
Figure 21 depicts the results after running Equations (47)–(61) in MATLAB:
As seen in Figure 21, the error does not go to infinity when the nominal trajectory passes through the origin, because the assumed aircraft dynamics (i.e., a random walk) give the filter some memory, so not all the weight is placed on the current measurement. From the figure, the aircraft reaches the origin at about 1063 s. What is illustrated here is the classical dilution of precision (DOP) error that arises when the lines of sight are 180° apart relative to the aircraft. The detected peak is greatest on the y-axis, and the error is due to the geometric effect of the DME station layout on the position measurement. After that, the position is calculated using the conventional KF and K-SVD, with the general formulation:
$$\underline{x}_{k+1} = \mathbf{A}\,\underline{x}_k + \mathbf{B}\,\underline{u}_k + \underline{w}_k,$$
$$\underline{z}_k = \mathbf{C}_k\,\underline{x}_k + \underline{v}_k.$$
The covariance of $\underline{w}_k$ is $Q_k$ and that of $\underline{v}_k$ is $R_k$; thus $w \sim N(0, Q)$ and $v \sim N(0, R)$.
The state equations are then:
$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ v_{x,k+1} \\ v_{y,k+1} \end{bmatrix} = \begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{bmatrix} + \underline{w}_k, \qquad \underline{w}_k = \begin{bmatrix} \frac{T^2}{2} & 0 \\ 0 & \frac{T^2}{2} \\ T & 0 \\ 0 & T \end{bmatrix} \underline{a}_k,$$
where $x_{k+1}$ and $y_{k+1}$ denote the position in the horizontal plane and $\underline{a}_k$ is the random acceleration perturbation. The measurement equation is:
$$\underline{z}_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{bmatrix} + \underline{v}_k.$$
Let Q be the process noise covariance, which propagates into $P_k$ via Equation (6):
$$Q = E\!\left[(\mathbf{G}\,\underline{a}_k)(\mathbf{G}\,\underline{a}_k)^T\right] = \mathbf{G}\mathbf{G}^T \sigma_{a\,max}^2, \qquad \mathbf{G} = \begin{bmatrix} \frac{T^2}{2} & 0 \\ 0 & \frac{T^2}{2} \\ T & 0 \\ 0 & T \end{bmatrix},$$
$$Q = \begin{bmatrix} \frac{T^4}{4} & 0 & \frac{T^3}{2} & 0 \\ 0 & \frac{T^4}{4} & 0 & \frac{T^3}{2} \\ \frac{T^3}{2} & 0 & T^2 & 0 \\ 0 & \frac{T^3}{2} & 0 & T^2 \end{bmatrix} \sigma_{a\,max}^2.$$
The measurement noise $\underline{v}_k$ has covariance:
$$R = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_y^2 \end{bmatrix}.$$
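The state and process noise matrices above translate directly into code. A quick numerical check (in numpy rather than the paper's MATLAB; T and σ_a are the sample time and acceleration deviation from the text) confirms that Q = G Gᵀ σ²ₐ matches the expanded Q given above:

```python
import numpy as np

T = 2.0          # sample time (s), from the text
sigma_a = 0.001  # max acceleration deviation (m/s^2), from the text

# constant-velocity state transition matrix
F = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

# noise input matrix mapping random accelerations into the state
G = np.array([[T**2 / 2, 0],
              [0, T**2 / 2],
              [T, 0],
              [0, T]])

# process noise covariance Q = E[(G a)(G a)^T] = G G^T * sigma_a^2
Q = G @ G.T * sigma_a**2

# the expanded closed form quoted in the text
Q_expected = np.array([[T**4 / 4, 0, T**3 / 2, 0],
                       [0, T**4 / 4, 0, T**3 / 2],
                       [T**3 / 2, 0, T**2, 0],
                       [0, T**3 / 2, 0, T**2]]) * sigma_a**2
assert np.allclose(Q, Q_expected)
```

Since Q is built as G Gᵀ σ², it is symmetric positive semidefinite by construction, which is the property the covariance factorizations in this paper aim to preserve.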
The results from calculating the position using the conventional KF and Algorithm 1 without the adaptive computation of Equations (29) and (30) (i.e., K-SVD) are shown in Figure 22.
From Figure 22, it is easy to see that at about 333 m from the origin the DME geometry produces a singularity in the calculation due to DOP; at that critical point both the KF and K-SVD perform adequately, but the KF does not perform as well as K-SVD at the cusp of the circular return. Additionally, the results show how far the RMS error of the KF rises above that of the K-SVD in this DME example (see Figure 23).
The results in Figure 23 indicate that the RMS error is lower for the K-SVD filter than for the conventional KF. When solving the observation equation to obtain the calculated position, the error in the observables manifests itself as erratic oscillations in the position. This makes the computed trajectory uneven, so the motion of the airplane appears very unstable. By using a prediction system that minimizes this noise, it is possible to produce estimates of the variables that tend to be more precise than those based on the measurements alone (see Equation (61)). This demonstrates the advantage of combining the observed and predicted positions of the airplane. Given that DME has been kept in service to mitigate the reliability problems of GPS, the best filter to deal with the singularities at the origin is K-SVD.
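One reason an SVD-based form copes better near the DOP singularity is that the covariance can be re-factored so it stays symmetric positive semidefinite by construction. The sketch below is a generic illustration of that idea (a conventional update followed by an SVD re-factorization); it is not the paper's Algorithm 1, whose factored update differs:

```python
import numpy as np

def measurement_update(P, H, R):
    """Conventional covariance update P+ = (I - K H) P-, which can lose
    symmetry under round-off, followed by an SVD re-factorization that
    restores a symmetric PSD covariance. Generic sketch, not Algorithm 1."""
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    P_upd = (np.eye(P.shape[0]) - K @ H) @ P   # conventional form
    # re-factor: for a symmetric PSD matrix the SVD gives P = U diag(s) U^T,
    # so rebuilding from the factors enforces symmetry exactly
    U, s, _ = np.linalg.svd(0.5 * (P_upd + P_upd.T))
    return U @ np.diag(s) @ U.T, K
```

For example, with P = diag(4, 1), H = [1 0], and R = [1], the update returns diag(0.8, 1) with exact symmetry; in ill-conditioned cases the rebuilt factors prevent the asymmetry drift that corrupts the conventional Riccati recursion.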

6. Computational Complexity

Computational complexity is evaluated as time complexity as a function of input size, where m and n denote the input dimensions and big O denotes the order of the number of required operations. Matrix inversion is O(n³), and an economy-size SVD rank-reduction step requires O(m²n) operations. With matrix multiplication at O(n³), computing the Kalman gain requires O(sn² + n³). The Cholesky decomposition used by common algorithms is O(n³), and computing the covariance matrix is O(N·n²). In the worst case, the algorithm complexity is therefore O(n³), which is still within an acceptable range compared with the conventional KF, for which the number of arithmetic operations per sampling period is also established as O(n³) [24].

7. Conclusions and Future Scope

The computation of the novel Kalman-SVD algorithm is based on the singular value decomposition of an unsymmetric matrix without explicitly forming its left orthogonal factor, which has a high dimension [19]. K-SVD performs a truncation that keeps only the top n right singular vectors, setting Vᵀ (where the superscript T denotes the transpose) equal to the first n rows of Vᵀ (an n × n matrix). This is equivalent to zeroing out the singular values of the original (m + n) × n matrix beyond the first n, thereby reducing the dimension to far less than the original and thus filtering out the noise.
The other conclusion of this research is related to the robustness of the adaptive algorithm. Compared with other decomposition-based KF algorithms, K-SVD adaptive can handle round-off errors, which makes the algorithm useful in critical applications and makes it possible to deal with ill-conditioned, singular systems.
From the analysis of example 3, we conclude that position and speed accuracy can be improved by choosing the model that performs best in the worst-case scenario, which here is the highly dynamic condition of a maneuvering vehicle. The tracking capability can also be increased by adding to the K-SVD algorithm the adaptive computation of the measurement noise matrix as a time-varying matrix. From the analysis of example 4, we conclude that K-SVD is suitable for real dynamic conditions in ground navigation systems. From the literature, one of the drawbacks of K-SVD, and of the KF in general, is the high computational load. In that sense, K-SVD can be used in applications where the KF is currently used, with the added advantages of robustness and accuracy.
The algorithm developed in this research also has superior accuracy compared with regular decomposition-based KF algorithms, by adapting the measurement noise matrix at each step while using the simplest form of the update of the error covariance matrix P, which is the most common equation in the algorithm. Additionally, the proposed Algorithm 1 does not require matrix inversion, and the system matrix of interest need not be square.
Our effort in this paper is guided by the increasing use of Kalman filtering in operations that demand high reliability, where the robustness of the algorithm is a necessary condition for critical operations. Future work on K-SVD adaptive applications could include the Joseph stabilized form instead of the conventional form of the Riccati equation, to avoid the propagation of symmetry errors, as well as fine-tuning of the process noise covariance matrix Q_k. We believe that the key points and principles derived here will find applications in high-speed signal processing by combining K-SVD, K-SVD adaptive, and SVD compressive sensing [25]. For future research, we intend to explore particular technology uses.

Author Contributions

J.C.B.O. designed the mathematical model of the proposed algorithm and wrote the paper, R.M.A.V. performed the experiments, and V.F.G.C. analyzed the data. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grewal, M.S.; Andrews, A.P.; Bartone, C.G. Global Navigation Satellite Systems, Inertial Navigation, and Integration; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  2. Grewal, M.S.; Andrews, A.P. Kalman Filtering: Theory and Practice Using MATLAB; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  3. Grewal, M.S.; Weill, L.R.; Andrews, A.P. Global Positioning Systems, Inertial Navigation, and Integration; John Wiley & Sons: Hoboken, NJ, USA, 2001. [Google Scholar]
  4. Verhaegen, M.; Van Dooren, P. Numerical aspects of different Kalman filter implementations. IEEE Trans. Autom. Control 1986, 31, 907–917. [Google Scholar] [CrossRef]
  5. Bierman, G.J. Factorization Methods for Discrete Sequential Estimation; Academic Press: New York, NY, USA, 1977. [Google Scholar]
  6. Brown, R.G.; Hwang, P.G. Introduction to Random Signals and Applied Kalman Filtering with MATLAB Exercises, 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  7. Prvan, T.; Osborne, M.R. A square-root fixed-interval discrete-time smoother. J. Aust. Math. Soc. Ser. B. Appl. Math. 1988, 30, 57–68. [Google Scholar] [CrossRef] [Green Version]
  8. Wang, L.; Libert, G.; Manneback, P. Kalman Filter Algorithm Based on Singular Value Decomposition. In Proceedings of the 31st IEEE Conference on Decision and Control, Tucson, AZ, USA, 16–18 December 1992. [Google Scholar]
  9. Wang, L.; Libert, G.; Manneback, P. A Singular Value Decomposition Based Kalman Filter Algorithm. In Proceedings of the 1992 International Conference on Industrial Electronics, Control, Instrumentation, and Automation, San Diego, CA, USA, 13 November 1992. [Google Scholar]
  10. Zhang, Y.; Dai, G.; Zhang, H.; Li, Q. A SVD-based extended Kalman filter and applications to aircraft flight state and parameter estimation. In Proceedings of the 1994 American Control Conference—ACC ’94, Baltimore, MD, USA, 29 June–1 July 1994. [Google Scholar]
  11. Fan, J.; Ma, G. Characteristics of GPS positioning error with non-uniform pseudorange error. GPS Solut. 2014, 18, 615–623. [Google Scholar] [CrossRef] [Green Version]
  12. Chen, B.; Wu, L.; Dai, W.; Luo, X.; Xu, Y. A new parameterized approach for ionospheric tomography. GPS Solut. 2019, 23. [Google Scholar] [CrossRef] [Green Version]
  13. Auger, F.; Hilairet, M.; Guerrero, J.M.; Monmasson, E.; Orlowska-Kowalska, T.; Katsura, S. Industrial Applications of the Kalman Filter: A Review. IEEE Trans. Ind. Electron. 2013, 60, 5458–5471. [Google Scholar] [CrossRef] [Green Version]
  14. Foussier, J.; Teichmann, D.; Jia, J.; Misgeld, B.J.; Leonhardt, S. An adaptive Kalman filter approach for cardiorespiratory signal extraction and fusion of non-contacting sensors. BMC Med. Inform. Decis. Making 2014, 14, 37. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Spincemaille, P.; Nguyen, T.D.; Prince, M.R.; Wang, Y. Kalman filtering for real-time navigator processing. Magn. Reson. Med. 2008, 60, 158–168. [Google Scholar] [CrossRef] [PubMed]
  16. Zhang, C.; Yan, F.; Du, C.; Rizzoni, G. An Improved Model-Based Self-Adaptive Filter for Online State-of-Charge Estimation of Li-Ion Batteries. Appl. Sci. 2018, 8, 2084. [Google Scholar] [CrossRef] [Green Version]
  17. Muñoz-Bañón, M.Á.; Del Pino, I.; Candelas-Herías, F.A.; Torres, F. Framework for Fast Experimental Testing of Autonomous Navigation Algorithms. Appl. Sci. 2019, 9, 1997. [Google Scholar]
  18. Zhang, C.; Guo, C.; Zhang, D. Data Fusion Based on Adaptive Interacting Multiple Model for GPS/INS Integrated Navigation System. Appl. Sci. 2018, 8, 1682. [Google Scholar] [CrossRef] [Green Version]
  19. Nieto, F.J.S. Navegacion Aerea; Garceta Grupo Editorial: Madrid, Spain, 2012. [Google Scholar]
  20. Durdu, O.F. Estimation of state variables for controlled irrigation canals via a singular value based Kalman filter. Fresenius Environ. Bull. 2004, 13, 1139–1150. [Google Scholar]
  21. Kohno, K.; Inouye, Y.; Kawamoto, M. A Matrix Pseudo-Inversion Lemma for Positive Semidefinite Hermitian Matrices and Its Application to Adaptive Blind Deconvolution of MIMO Systems. IEEE Trans. Circuits Syst. I: Regul. Pap. 2008, 55, 424–435. [Google Scholar] [CrossRef] [Green Version]
  22. Kulikova, M.V.; Tsyganova, J.V. Improved Discrete-Time Kalman Filtering within Singular Value Decomposition. Available online: https://www.groundai.com/project/improved-discrete-time-kalman-filtering-within-singular-value-decomposition/3 (accessed on 25 March 2020).
  23. Department of Defense; U.S. Department of Transportation. Available online: https://www.transportation.gov/pnt/radionavigation-systems-planning#:~:text=The%20Federal%20Radionavigation%20Plan%20 (accessed on 2 July 2020).
  24. Mendel, J. Computational requirements for a discrete Kalman filter. IEEE Trans. Autom. Control. 1971, 16, 748–758. [Google Scholar] [CrossRef]
  25. Ordoñez, J.C.B.; Valdés, R.M.A.; Comendador, F.G. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD). Sensors 2018, 18, 1586. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Degradation of observational updates with the problem conditioning of the Riccati equation.
Figure 2. Plan view of the estimated and true locations at 90 kph using the conventional Kalman filter algorithm.
Figure 3. Plan view of the estimated and true locations at 90 kph using the Kalman filter (KF) and singular value decomposition (SVD)-based (K-SVD) filter algorithm.
Figure 4. Plan view of the estimated and true locations at 90 kph using the conventional Kalman filter and singular value decomposition based (K-SVD) adaptive algorithm.
Figure 5. Plan view of the estimated and true locations at 140 kph using the conventional Kalman filter algorithm.
Figure 6. Plan view of the estimated and true locations at 140 kph using the K-SVD filter algorithm.
Figure 7. Plan view of the estimated and true locations at 140 kph using the K-SVD adaptive algorithm.
Figure 8. Plan view of the estimated and true locations at 240 kph using the conventional Kalman filter algorithm.
Figure 9. Plan view of the estimated and true locations at 240 kph using the K-SVD filter algorithm.
Figure 10. Plan view of the estimated and true locations at 240 kph using the K-SVD adaptive algorithm.
Figure 11. Plan view of the estimated and true locations at 360 kph using the conventional Kalman filter algorithm.
Figure 12. Plan view of the estimated and true locations at 360 kph using the K-SVD filter algorithm.
Figure 13. Plan view of the estimated and true locations at 360 kph using the K-SVD adaptive algorithm.
Figure 14. Vector norm of the absolute errors at the north position of the dynamic state at different speeds for models 3, 5, and 6.
Figure 15. Vector norm of the absolute errors at the east position of the dynamic state at different speeds for models 3, 5, and 6.
Figure 16. Vector norm of the absolute errors at the downward position of the dynamic state at different speeds for models 3, 5, and 6.
Figure 17. Vector norm of the absolute errors at the north velocity of the dynamic state at different speeds for models 3, 5, and 6.
Figure 18. Vector norm of the absolute errors at the east velocity of the dynamic state at different speeds for models 3, 5, and 6.
Figure 19. Vector norm of the absolute errors at the downward velocity of the dynamic state at different speeds for models 3, 5, and 6.
Figure 20. Distance measuring equipment (DME)-to-DME problem geometry.
Figure 21. Estimated parameters.
Figure 22. Position with the conventional Kalman filter and K-SVD vs. the estimated position.
Figure 23. RMS for K-SVD vs. the conventional Kalman filter.
Table 1. Vector norm of absolute errors for each component of the dynamic state—i.e., $\|\mathrm{RMSE}_{x_j}\|_2$ (example 2). RMSE, root mean square error; KF, Kalman filter; SVD, singular value decomposition; SQRK, square root; UD, unit triangular matrix diagonal; NaN, not a number.
$\|\mathrm{RMSE}_{x_j}\|_2$, $i = 1, \dots, 19$, while growing the ill-conditioning round-off parameter $\delta\varepsilon$:

| Algorithm | 10⁻¹ | 10⁻² | 10⁻³ | 10⁻⁴ | 10⁻⁵ | 10⁻⁶ | 10⁻⁷ | 10⁻⁸ | 10⁻⁹ | 10⁻¹⁰ | 10⁻¹¹ | 10⁻¹² | 10⁻¹³ | 10⁻¹⁴ | 10⁻¹⁵ | 10⁻¹⁶ | Execution Time (ms) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Kalman filter (KF) | 0.318 | 0.32 | 3.2 | 3.185 | 3.186 | 3.186 | 3.186 | 3.186 | 3.846 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 1.75 |
| K-SVD | 0.318 | 0.32 | 3.2 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 3.186 | 1.2095 | 1.87 |
| SQR-KF | 0.317 | 0.32 | 3.2 | 3.185 | 3.186 | 3.186 | 3.186 | 3.198 | 3.380 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.123 |
| UD-KF | 0.318 | 0.32 | 3.2 | 3.185 | 3.186 | 3.186 | 3.186 | 3.201 | 2.607 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 0.06 |
Table 2. State variables of the host vehicle dynamic models and the min/max number of visible satellites.
| Name | State Variables | Min Satellites | Max Satellites |
|---|---|---|---|
| MODL3 | 6 | 17 | 19 |
| MODL5 | 6 | 17 | 19 |
| MODL6 | 9 | 20 | 22 |
Table 3. Host vehicle dynamic models for example 3.
Model No.Model Parameters (Each Axis)Independent ParametersDependent Parameters
FQ
3 0 1 0 0 0 0 0 σ a c c 2 Δ t 2 σ a c c 2 σ p o s 2
5 0 1 0 0 1 τ v e l τ a c c 0 0 0 0 0 0 0 0 σ j e r k 2 Δ t 2 σ v e l 2
σ a c c 2
σ a c c 2
τ a c c
σ p o s 2
τ v e l
ρ v e l , a c c
σ j e r k 2
τ p o s
6 1 τ p o s 1 τ v e l 1 τ a c c 0 0 0 0 0 0 0 0 σ j e r k 2 Δ t 2 σ p o s 2
σ p o s 2
σ a c c 2
τ a c c
τ v e l
ρ p o s , v e l
ρ p o s , a c c
ρ v e l , a c c
σ j e r k 2
Table 4. Vector norm of absolute errors at each component of the dynamic state—i.e., $\|\mathrm{RMSE}_{x_j}\|_2$ (example 3).
Each cell lists the RMS errors for models 3 / 5 / 6, in that order; positions in m, velocities in m/s.

| Velocity (kph) | Filter Algorithm | RMS N Pos. Err (3/5/6) | RMS E Pos. Err (3/5/6) | RMS D Pos. Err (3/5/6) | RMS N Vel. Err (3/5/6) | RMS E Vel. Err (3/5/6) | RMS D Vel. Err (3/5/6) |
|---|---|---|---|---|---|---|---|
| 90 | K-SVD | 18.9795 / 13.1434 / 8.4985 | 15.2594 / 9.5004 / 7.696 | 4.9555 / 3.5034 / 3.1645 | 13.4455 / 18.1012 / 21.3442 | 12.1186 / 13.3198 / 15.7312 | 0.3772 / 0.37048 / 0.46792 |
| 90 | KF | 49.7396 / 27.8908 / 8.8257 | 35.1804 / 17.7478 / 8.7422 | 5.2831 / 3.5222 / 3.3424 | 18.2056 / 20.1147 / 22.2134 | 16.1441 / 15.0062 / 16.8097 | 0.37895 / 0.37154 / 0.43506 |
| 90 | K-SVD Adaptive | 10.2778 / 12.207 / 9.2506 | 9.5038 / 9.3347 / 8.8791 | 5.3669 / 3.4983 / 3.253 | 10.8538 / 17.3657 / 21.4174 | 9.2136 / 12.8466 / 15.7406 | 0.37345 / 0.36972 / 0.52434 |
| 140 | K-SVD | 34.381 / 16.4213 / 8.6789 | 24.1658 / 11.124 / 7.9977 | 5.7576 / 3.5104 / 3.5967 | 28.1594 / 24.968 / 23.545 | 23.4031 / 19.233 / 19.3927 | 0.58314 / 0.57733 / 0.68302 |
| 140 | KF | 91.5971 / 45.4864 / 10.0976 | 49.6878 / 27.8549 / 11.2752 | 5.8059 / 3.5304 / 3.6835 | 36.5676 / 31.2012 / 27.0703 | 27.1445 / 24.0638 / 23.9997 | 0.58328 / 0.57843 / 0.65682 |
| 140 | K-SVD Adaptive | 13.081 / 14.5208 / 10.1475 | 9.1574 / 9.8283 / 8.5083 | 6.3418 / 3.5069 / 3.6179 | 22.6147 / 22.186 / 22.6644 | 18.0133 / 17.2274 / 18.2008 | 0.57957 / 0.57607 / 0.6972 |
| 240 | K-SVD | 72.415 / 33.2644 / 11.4813 | 38.9716 / 19.4082 / 11.4123 | 6.2148 / 3.5281 / 3.8291 | 66.7895 / 50.9121 / 37.6559 | 47.4745 / 39.9117 / 42.427 | 1.0749 / 1.074 / 1.1929 |
| 240 | KF | 147.4348 / 101.5482 / 33.01 | 58.1507 / 49.7875 / 32.2579 | 5.9236 / 3.5418 / 3.8778 | 74.0361 / 68.8543 / 57.5329 | 47.0228 / 49.0727 / 57.7828 | 1.0717 / 1.0736 / 1.1298 |
| 240 | K-SVD Adaptive | 15.9439 / 18.8714 / 9.8239 | 10.7245 / 11.9533 / 8.9979 | 5.6269 / 3.5188 / 3.6314 | 49.4055 / 37.0557 / 28.8207 | 36.4643 / 29.9429 / 30.1337 | 0.99112 / 0.98908 / 1.0801 |
| 360 | K-SVD | 95.5756 / 50.6414 / 19.8398 | 43.9963 / 26.4262 / 18.2616 | 7.0764 / 3.5338 / 4.0798 | 96.503 / 79.6562 / 65.8811 | 64.3833 / 59.8545 / 72.5721 | 1.4835 / 1.4876 / 1.6069 |
| 360 | KF | 163.5172 / 135.8009 / 69.2511 | 59.0267 / 57.0835 / 44.0271 | 6.0699 / 3.5439 / 3.9288 | 99.1573 / 101.0778 / 105.9726 | 62.6476 / 66.8818 / 81.1056 | 1.4832 / 1.4856 / 1.535 |
| 360 | K-SVD Adaptive | 16.8871 / 24.0959 / 9.9347 | 10.5866 / 13.636 / 8.7987 | 6.0749 / 3.525 / 4.0164 | 81.9968 / 61.5704 / 47.4594 | 58.0575 / 49.3587 / 58.1295 | 1.4863 / 1.4855 / 1.5748 |
