Implementation and Performance Analysis of Kalman Filters with Consistency Validation

Abstract: This paper provides a supplementary note for implementing Kalman filters. The material points out several significant highlights, with emphasis on performance evaluation and consistency validation between the discrete Kalman filter (DKF) and the continuous Kalman filter (CKF). Several important issues are delivered through comprehensive exposition accompanied by supporting examples, both qualitative and quantitative, for implementing the Kalman filter algorithms. The lessons learned assist readers in capturing the basic principles of the topic and enable them to better interpret the theory, understand the algorithms, and correctly implement the computer codes for further study of the theory and applications of the topic. A wide spectrum of content is covered, from theoretical to implementation aspects, involving the DKF and CKF along with theoretical error covariance checks based on the Riccati and Lyapunov equations. The consistency check of performance between the discrete and continuous Kalman filters enables readers to verify the correctness of their implementation and coding of the algorithm. The tutorial-based exposition presented in this article approaches the material from a practical usage perspective that can provide profound insights into the topic, together with an appropriate understanding of stochastic processes and system theory.


Introduction
The Kalman filter (KF) [1][2][3][4][5][6][7] provides a recursive solution to the linear filtering problem and has been one of the most widely used estimation techniques to date. It is a standard method used in control engineering for minimizing the mean-square error between the output signal of a linear plant subject to a stochastic disturbance and the estimated output signal. The Kalman filter is a set of mathematical equations that provides an efficient computational means to estimate the state of a process in a way that minimizes the mean squared error. As an optimal recursive data processing algorithm, the Kalman filter combines all available measurement data plus prior knowledge about the system and measuring devices to produce an estimate of the desired variables such that the error is minimized statistically. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest. In addition, it does not require all previous data to be stored and reprocessed every time a new measurement is taken.
The Kalman filter algorithm is one of the most common estimation techniques currently used. Due to advances in digital computing, the Kalman filter has been a useful tool for a variety of applications [8,9]. Although the Kalman filter was originally developed for the case of discrete observations that enter into the estimation of the state variables at discrete times, the observations could be continuous, as with analog measuring devices [10][11][12][13][14]. They might on some occasions be considered nearly continuous if the data rate is very high. The continuous Kalman filter (CKF), sometimes termed the Kalman-Bucy filter, provides the optimal solution to the state estimation problem for systems modeled by a linear stochastic differential equation. As the continuous-time counterpart of the discrete-time Kalman filter, it does not distinguish between the prediction and update steps of the discrete Kalman filter (DKF). Further, although the majority of Kalman filter applications are implemented on digital computers, a thorough study of optimal estimation should include the CKF, from which some intuition can be gained for designing the DKF. It is still valuable to investigate the CKF as a baseline system design and as an evaluation tool for the DKF, even though its implementation is not as practical as that of the DKF. Furthermore, in some cases the statistical behavior of the system can be determined in a closed, analytical form if formulated as a continuous process.
Several existing works in the literature serve as tutorials [15][16][17][18][19], and the purpose of this paper is to provide a practical introduction to the topic with implementation practice. While there are valuable references detailing the derivation and theory behind the Kalman filter, both discrete and continuous, the KF technique is sometimes not easily accessible to readers from the existing publications, and implementation of the algorithms sometimes confuses them. Generally, engineers do not encounter the Kalman filter until they have begun their graduate or professional careers. It is reasonable to expect working engineers to be capable of making use of this computational tool for different applications. However, it may not be practical to expect them to obtain a deep and thorough understanding of the stochastic theory behind Kalman filter techniques.
The steady-state Kalman filter is a type of suboptimal filter that uses a constant gain matrix during the estimation process. It is applicable in some applications, with some limitations, and can be realized through an analog circuit, which is particularly attractive in real-time applications at the cost of some performance degradation. However, in a time-varying environment, where the process and measurement models change with time, the adaptive Kalman filter (AKF) is popular, tuning the covariance parameters Q_k and R_k. In such a case, the steady-state Kalman filter may not be able to provide the desired flexibility. Furthermore, when compared to the other filters, the suboptimal Kalman filter (SKF) has comparable tracking accuracy and is highly scalable. The SKFs are designed in a feedback-controlled system to obtain an estimate of the root-mean-square error. It requires only the filtering calculation and foregoes both the costly high-dimensional computation and the challenging smoothing computation, resulting in a lower computational load [20][21][22][23].
This article takes a tutorial-based approach to presenting the topic, providing profound insights with an appropriate understanding of the stochastic process and system theory involved, from a practical usage perspective. Several important issues are delivered through introductory exposition accompanied by supporting examples, both qualitative and quantitative, for better clarification of the Kalman filter estimation algorithm.
The remainder of this paper is organized as follows. A brief review of the discrete and continuous Kalman filters is given in Section 2. In Section 3, discretization of the continuous Kalman filter to the discrete-time formulation is revisited. In Section 4, illustrative examples and discussion are presented. Conclusions are given in Section 5.

The Kalman Filters and Suboptimal Filters
In this section, preliminary background on the discrete and continuous Kalman filters is reviewed. The optimal Kalman gain and the general (arbitrary) gain are introduced, and the covariance matrices that describe error propagation of the dynamical system with and without measurements are presented.

Discrete Kalman Filter
Consider a dynamical system whose state is described by a linear, vector difference equation. The process model and measurement model are represented as follows:

Process model: x_{k+1} = Φ_k x_k + w_k, w_k ~ N(0, Q_k)

Measurement model: z_k = H_k x_k + v_k, v_k ~ N(0, R_k)

where w_k and v_k are zero-mean, mutually independent white noise sequences. The discrete Kalman filter equations are summarized in Table 1.
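As a minimal sketch of the estimation cycle summarized in Table 1 (scalar case; the parameters phi, q, h, and r are illustrative assumptions, not values taken from the table):

```python
# Minimal sketch of one DKF estimation cycle (scalar case).
# phi, q, h, r are illustrative model parameters, NOT values from Table 1.

def dkf_cycle(x_hat, P, z, phi=0.9, q=0.1, h=1.0, r=0.5):
    # Time update (a priori estimate): two equations
    x_pred = phi * x_hat                   # state propagation
    P_pred = phi * P * phi + q             # covariance propagation
    # Measurement update (a posteriori estimate): three equations
    K = P_pred * h / (h * P_pred * h + r)  # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)  # state correction
    P_new = (1.0 - K * h) * P_pred         # covariance correction
    return x_new, P_new, K
```

One call performs the time update followed by the measurement update, mirroring the two-stage, five-equation cycle discussed later in this section.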

Continuous Kalman Filter
Consider a dynamical system whose state is described by a linear, vector differential equation. The process model and measurement model are represented as follows:

Process model: ẋ(t) = Fx(t) + Gu(t)

Measurement model: z(t) = Hx(t) + v(t)

where the vectors u(t) and v(t) are both white noise processes with zero means and mutually independent: E[u(t)u^T(τ)] = Qδ(t − τ), E[v(t)v^T(τ)] = Rδ(t − τ), and E[u(t)v^T(τ)] = 0, where δ(t − τ) is the Dirac delta function, E[·] represents expectation, and the superscript "T" denotes matrix transpose. The CKF equations are summarized in Table 2. (1) Solve the error covariance propagation by the matrix Riccati equation for P, which is symmetric positive-definite.
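Since the CKF has no separate prediction and update steps, its covariance obeys a single matrix Riccati differential equation. A minimal sketch (scalar case, Euler integration; the values f = −1, g = h = 1, q = 2, r = 1 are illustrative assumptions, not values from Table 2):

```python
# Sketch: Euler integration of the scalar continuous Riccati equation
#   Pdot = 2*f*P + g*q*g - (h*P)**2 / r
# f, g, h, q, r are illustrative values, not taken from the paper's tables.
f, g, h = -1.0, 1.0, 1.0
q, r = 2.0, 1.0
P, dt = 0.0, 1e-4
for _ in range(200_000):                 # integrate to t = 20 s (steady state)
    Pdot = 2.0 * f * P + g * q * g - (h * P) ** 2 / r
    P += Pdot * dt

K = P * h / r                            # continuous Kalman gain K = P H^T R^-1
```

At steady state the integration should settle at the positive root of the corresponding algebraic Riccati equation.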
The discrete filter gain and the continuous filter gain are related by K_k ≈ K∆t, where ∆t = t_{k+1} − t_k represents the sampling period.

Suboptimal Filters: Estimators with a General Gain
The error covariance P_k for a discrete filter with the same structure as the Kalman filter, but with a general (namely, an arbitrary) gain matrix, is given by the Joseph form: P_k = (I − K_k H_k)P_k^−(I − K_k H_k)^T + K_k R_k K_k^T. In continuous time, the error covariance is described by the differential equation Ṗ = (F − KH)P + P(F − KH)^T + GQG^T + KRK^T, which defines the error covariance for the filter with a general filter gain matrix K and can be solved for the covariance of a general-gain model. Taking the partial derivative of P∞ with respect to K and setting ∂P∞/∂K = 0 for a minimum leads to the same result as the matrix Riccati equation in continuous form: Ṗ = FP + PF^T + GQG^T − PH^T R^{−1}HP, which becomes an algebraic Riccati equation (ARE) that can be solved for the steady-state minimum covariance matrix when the system reaches steady state, Ṗ = 0. For the case that no measurement is available, the Riccati equation reduces to the Lyapunov equation: Ṗ = FP + PF^T + GQG^T. This can be regarded as either of two special cases of the general-gain equation Ṗ = (F − KH)P + P(F − KH)^T + GQG^T + KRK^T, with K = 0 or H = 0. If the general gain matrix K has been designed for particular values of Q and R, the steady-state error covariance will vary linearly with the actual spectral densities of either the process or measurement noise. Any deviation of the design variances, and consequently K, from the correct values will cause an increase in the filter error variance. Further information on sensitivity analysis can be found in Gelb [3] and Jwo [10].
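The minimization property above can be illustrated numerically. The sketch below evaluates the steady-state covariance of a scalar general-gain filter over a range of gains; the model values (f = −1, h = 1, q = 2, r = 1) are illustrative assumptions. The gain minimizing P∞(K) should agree with the optimal Kalman gain obtained from the ARE.

```python
# Sketch: for a scalar filter with a general (suboptimal) gain K, the
# steady-state covariance solves 0 = 2*(f - K*h)*P + q + K*K*r, i.e.
#   P_inf(K) = (q + K*K*r) / (2*(K*h - f))
# Illustrative values (assumptions, not from the paper): f = -1, h = 1.
f, h, q, r = -1.0, 1.0, 2.0, 1.0

def p_inf(K):
    return (q + K * K * r) / (2.0 * (K * h - f))

# Scan gains; the minimum should occur at the optimal (Kalman) gain,
# matching the positive root of the corresponding ARE.
Ks = [0.01 * i for i in range(1, 301)]
K_best = min(Ks, key=p_inf)
```

Any gain other than the optimal one yields a larger steady-state error variance, which is the sensitivity behavior described in the text.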
For the discrete Kalman filter, there are two stages in which five equations are involved to complete an estimation cycle: two at the time update for the a priori estimate and three at the measurement update for the a posteriori estimate. For the continuous Kalman filter, only three equations are involved, since there is no distinction between the a priori and a posteriori versions of the covariance and estimate; more specifically, P^-_{k+1} → P_k and x̂^-_{k+1} → x̂_k. For the steady-state Kalman filter, as a suboptimal filter, the gain matrix is fixed as a constant. Since the constant gain matrix can be calculated offline, the algorithm then involves four equations: two for the state estimates (a priori and a posteriori, respectively) and two for the calculation of the covariance matrices (a priori and a posteriori, respectively). For the case where the theoretical covariance matrix is not required, only the two state-estimate equations, i.e., those for x̂^-_{k+1} and x̂_k, are involved.

Discrete Kalman Filter from Discretization of Continuous Kalman Filter
Expressing Equation (1a) in its discrete-time equivalent form via discretization of the continuous-time system leads to the following: In the subsequent discussion, the derivation of the key parameters from the continuous form for implementing the DKF is revisited. Two types of process models for the DKF are involved.
(1) Realization based on Equation (1a): Using the Taylor series expansion, the state transition matrix can be represented as Φ_k = e^{F∆t} = I + F∆t + (F∆t)²/2! + ⋯. For the process model given by Equation (1a), the noise input is given by w_k = ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ) G u(τ) dτ, where t_k ≡ k∆t and t_{k+1} ≡ (k + 1)∆t, and the process noise covariance can be calculated via Q_k = E[w_k w_k^T] = ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ) G Q G^T Φ^T(t_{k+1}, τ) dτ. Using the Taylor series expansion of this integral, the first-order approximation, obtained by setting Φ_k ≈ I (equivalent to F = 0), is Q_k ≈ GQG^T∆t. It should be mentioned that even if Q is diagonal, Q_k need not be, due to discretization of the system; sampling can destroy independence among the components of the process noise.
(2) Realization based on Equation (1b): On the other hand, for the process model given by Equation (1b), the total noise input is now represented as the following: and, consequently, the process noise covariance is now the following: which, by the Taylor series, gives the following expression: The corresponding first-order approximation is given by the following: Equation (14) can be regarded as a special case of Equation (18) with the noise gain set as an identity matrix: Γ_k = I.
An alternative approach is based on the piecewise white noise, or discrete white noise, approximation. Assuming that the forcing function w(τ) remains constant, w(t) = w_k, over the integration interval t ∈ [t_k, t_{k+1}] for all k = 0, 1, 2, ..., then the noise gain is the following: Equation (19) can be written as the following series expansion: For the first-order approximation, when Φ_k ≈ I, we have Γ_k ≈ G∆t, and the following: Equating Equations (18) and (21) gives the following: It should be noticed that the Q_k's in Equations (14) and (22) are different. The Q_k in Equation (14) represents the total amount of noise covariance, whereas in Equation (22) it is Γ_k Q_k Γ_k^T that represents the total amount of noise covariance, owing to the two different representations.
Furthermore, a continuous system model involving a deterministic control input is described by the following: It can be discretized in either of the following two forms, depending on the representation of the process model: The gain matrix of the deterministic control input is given by the following:
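The discretization steps above can be sketched as follows: Φ_k from a truncated matrix-exponential series and the first-order Q_k ≈ GQG^T∆t. The double-integrator-like F, G, Q, and ∆t below are hypothetical values chosen only for illustration, not parameters from the paper's examples.

```python
import numpy as np

# Sketch: truncated-series discretization of a continuous model
#   xdot = F x + G w,  E[w(t) w(tau)'] = Q * delta(t - tau).
# F, G, Q, dt are illustrative (a double-integrator-like model), not
# parameters taken from the paper.
F = np.array([[0.0, 1.0], [0.0, -1.0]])
G = np.array([[0.0], [1.0]])
Q = np.array([[2.0]])
dt = 0.01

# State transition matrix: Phi = exp(F dt) ~ I + F dt + (F dt)^2 / 2!
Fd = F * dt
Phi = np.eye(2) + Fd + Fd @ Fd / 2.0

# First-order process noise covariance: Qk ~ G Q G' dt
Qk = G @ Q @ G.T * dt

# Note: even when Q is diagonal (scalar here), the exact Qk generally has
# off-diagonal terms -- sampling couples the process noise components.
```

Higher-order terms of the series can be retained when F∆t is not small; the first-order Q_k corresponds to setting Φ_k ≈ I inside the covariance integral.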

Illustrative Examples and Discussion
In this section, various important issues are delivered, along with supporting examples. Four supporting examples are discussed: the scalar Gauss-Markov process, two extensions of that process, and the integrated Gauss-Markov process. Table 3 summarizes the objectives and highlights the important issues to be delivered by the supporting examples. The reader can use the illustrative examples in this paper as step-by-step exercises, beginning with a standard scalar Gauss-Markov process, extending to the cases where a deterministic control input is introduced and where a larger random input is applied, and then moving to the integrated Gauss-Markov process. Both scalar and vector Kalman filters are involved. With the scalar Kalman filter, it is easier for a beginner to understand the mathematical equations and implement the computer coding. The vector Kalman filter is more practical in engineering applications, where matrix calculations, such as inversion and decomposition of matrices, make the realization more challenging. The numerical data accompanying the illustrative examples can be carefully checked against the analytical results to ensure correct implementation of the algorithms and to provide an efficient means of troubleshooting. The examples also provide a connection to probability, stochastic processes, and system theory.

Example 1: The Scalar Gauss-Markov Process
The Gauss-Markov process is a stochastic process that satisfies the requirements of both Gaussian processes and Markov processes. The scalar Gauss-Markov process is described by the stochastic differential equation: ẋ(t) = −βx(t) + w(t). It can be represented by the transfer function based on the Laplace transform: H(s) = 1/(s + β). It can also be based on the Fourier transform: H(jω) = 1/(jω + β), which has the impulse response h(t) = e^{−βt}u(t). The process can be represented using the system block diagram shown in Figure 1.
Firstly, the theoretical result is presented. The mean-square value of the output x(t) can be calculated through E[x²(t)] = (q/2β)(1 − e^{−2βt}). As t → ∞, the value approaches q/2β. Furthermore, the transfer function in this given model is H(jω) = 1/(jω + β), and the spectral amplitude of the input is S_f(jω) = q. Based on the relation for a wide-sense stationary (WSS) random process applied to a linear time-invariant (LTI) system, the spectral function of the output can be calculated through S_x(jω) = |H(jω)|² S_f(jω), where the spectral function of the input in this example gives S_x(jω) = q/(ω² + β²). Taking the inverse Fourier transform of S_x(jω) yields the autocorrelation function R_x(τ) = (q/2β)e^{−β|τ|}, which provides another means of computing the mean-square value of a stationary process given its spectral function: E[x²] = R_x(0). Since the autocorrelation function in this example is R_x(τ) = (q/2β)e^{−β|τ|}, the mean-square value obtained based on Equation (29) has the same result as that based on Equation (27): E[x²] = q/2β. Alternatively, propagation of the error covariance based on the Lyapunov equation for this Gauss-Markov process leads to Ṗ = −2βP + q, from which the same steady-state result, P∞ = q/2β, can also be obtained. When the linear measurement is available in the continuous form z(t) = x(t) + v(t), the differential equation for the error covariance of the CKF (Riccati equation) yields Ṗ = −2βP + q − P²/r.
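The variance propagation above can be checked numerically. A sketch assuming β = 1 and q = 2 (illustrative values), comparing Euler integration of the Lyapunov equation against the closed-form solution:

```python
import math

# Sketch: error-variance propagation for the scalar Gauss-Markov process.
# Euler integration of the Lyapunov equation  Pdot = -2*beta*P + q
# is checked against the closed form P(t) = q/(2*beta)*(1 - exp(-2*beta*t)).
# beta = 1, q = 2 are illustrative values (so the steady state q/(2*beta) = 1).
beta, q = 1.0, 2.0
dt, steps = 1e-4, 50_000            # integrate to T = steps * dt = 5 s
P = 0.0                             # initial condition P(0) = 0
for _ in range(steps):
    P += (-2.0 * beta * P + q) * dt

T = steps * dt
P_exact = q / (2.0 * beta) * (1.0 - math.exp(-2.0 * beta * T))
```

After several time constants the variance has essentially reached the steady-state value q/2β, matching the t → ∞ limit derived above.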
Note that for this Gauss-Markov process, F = −β, G = 1, and H = 1. When the system reaches steady state, we have the ARE: −2βP∞ + q − P∞²/r = 0, which can be solved to obtain the steady-state covariance: P∞ = −βr + √(β²r² + qr), and, consequently, the associated steady-state Kalman gain can be calculated as K∞ = P∞/r. Alternatively, the error covariance differential equation for a filter with the structure of a CKF but a general gain, shown in Equation (7), is given by Ṗ = 2(−β − K)P + q + K²r. When the system reaches steady state, we have 0 = 2(−β − K)P∞ + q + K²r, and thus P∞ = (q + K²r)/(2(β + K)). The same result can be obtained by taking the partial derivative of P∞ with respect to K and setting it to zero to find the optimal gain. Figure 2 illustrates performance deterioration due to an increase in r, where r = 1 and r = 0.01 are shown. The result based on the Riccati equation with r → ∞ coincides with that based on the Lyapunov equation. It can be seen from the Kalman gain equation K = PH^T R^{−1} that when the measurement noise r increases to a very large value, K becomes very small and P approaches 1 in this example. Figure 3 shows the variations of the covariance and the Kalman gain as r increases. For two selected values of r, the corresponding covariance and Kalman gain are: (1) r = 0.01: P∞ = 0.1318 and K∞ = 13.1774; (2) r = 1: P∞ = 0.7321 and K∞ = 0.7321, as indicated by the circle symbols in the figure.
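A sketch of the steady-state solution, using q = 2 and r as in the text (β = 1 is an assumed value, consistent with the quoted numbers); it should reproduce the two (P∞, K∞) pairs indicated in Figure 3:

```python
import math

# Sketch: steady-state covariance and gain for the scalar Gauss-Markov
# filter from the ARE  -2*beta*P + q - P**2 / r = 0, whose positive root is
#   P_inf = -beta*r + sqrt(beta**2 * r**2 + q*r),  K_inf = P_inf / r.
# beta = 1 is an assumption; q = 2, r = 1 and r = 0.01 follow the text.
beta, q = 1.0, 2.0

def steady_state(r):
    P = -beta * r + math.sqrt(beta**2 * r**2 + q * r)
    return P, P / r                # (P_inf, K_inf)

P1, K1 = steady_state(1.0)         # expect P ~ 0.7321, K ~ 0.7321
P2, K2 = steady_state(0.01)        # expect P ~ 0.1318, K ~ 13.1774
```

Smaller measurement noise r yields a smaller steady-state covariance but a much larger gain, as shown in Figure 3.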
The discrete Kalman filter is performed for performance comparison and consistency check between the DKF and CKF. The continuous-time equation can be discretized as x_{k+1} = e^{−β∆t}x_k + w_k, where the covariance is Q_k = (q/2β)(1 − e^{−2β∆t}). Figures 4-6 provide the state estimation results for the first-order Gauss-Markov process with various values of r. The state estimation in the case of a very large measurement noise (r → ∞) is shown in Figure 4. In this case, the Kalman filter gain approaches 0, and the correction capability on the state vector is no longer available, meaning that only the time update is implemented. Figures 5 and 6 present the estimation results for larger (r = 1) and smaller (r = 0.01) measurement noises, respectively. The plot on the right provides a closer look at the time interval 50-60 s for better observation. To collect the data for calculating the error variance from the estimation results, a recursive loop for evaluating estimation errors is employed based on Figure 7.

Performance degradation due to deviation of the Kalman gain K, and of the three other parameters β, q, and r, is examined in Figure 8. In each of the four plots, two sets of results are shown to observe the effect of deviating the parameters from the optimal point and to check the consistency of the DKF and CKF results. The solid lines represent the theoretical values, while the circles are based on the DKF. Figure 9 provides a three-dimensional surface and contour of the covariance due to deviation of q and r from the optimal point, in this case q = 2 and r = 1, as indicated by a circle symbol on the figure.
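The DKF/CKF consistency check can itself be sketched numerically: sampling the continuous problem with a small ∆t and iterating the discrete Riccati recursion should reproduce the CKF steady-state covariance. The discretization R_k = r/∆t for the sampled continuous measurement noise is a standard assumption; β = 1, q = 2, r = 1 are the same illustrative values as above.

```python
import math

# Sketch of the DKF/CKF consistency check for the scalar Gauss-Markov
# problem (beta = 1, q = 2, r = 1, illustrative values). The continuous
# measurement noise discretizes as R_k = r / dt.
beta, q, r = 1.0, 2.0, 1.0
dt = 1e-3
phi = math.exp(-beta * dt)                                   # exact transition
Qk = q / (2.0 * beta) * (1.0 - math.exp(-2.0 * beta * dt))   # exact Q_k
Rk = r / dt

P = 1.0
for _ in range(20_000):                 # iterate 20 s of filter time
    P_pred = phi * P * phi + Qk         # time update
    K = P_pred / (P_pred + Rk)          # Kalman gain
    P = (1.0 - K) * P_pred              # measurement update

P_ckf = -beta * r + math.sqrt((beta * r) ** 2 + q * r)       # continuous ARE root
```

The discrete steady-state covariance differs from the continuous one only by terms of order ∆t, which is the consistency property exploited throughout this section.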

Example 2: An Additional Deterministic Control Input Is Introduced
An additional deterministic control input is introduced to the system, shown as in Figure 10. Two extensions of the scalar Gauss-Markov system are presented. Propagation of the mean value estimate in continuous-time systems is involved in the discussion. An additional deterministic control input introduced into the Gauss-Markov process leads to the system described by the following stochastic differential equation: with w(t) ~ N(0, q) and initial condition y(0) = 0, where u(t) is the unit step function and w(t) is the unity Gaussian white noise. Since the impulse response is h(t) = 6e^{−βt}u(t), the transfer function is given by H(s) = 6/(s + β). The discrete model discretized from the continuous model can be represented as the following: where the covariance Q_k remains the same as in Example 1.
The mean values of the output can be evaluated based on the relation µ_x(t) = µ_f(t)H(0), and the error covariance is thus given by the following: Figure 11 provides the estimation result in the case of an additional deterministic control input. The plot on the right provides a closer look at the time interval 0-10 s. The curve in black indicates the response due to the deterministic control input. The results are consistent with the theoretical result shown in Figure 2 in Example 1.
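A sketch of the mean propagation, assuming the model ẏ = −βy + 6u(t) + w(t) reconstructed from the stated impulse response h(t) = 6e^{−βt}u(t), with β = 1 as an illustrative value. Since the noise has zero mean, the mean obeys the deterministic equation µ̇ = −βµ + 6u(t), and the steady-state mean is H(0)·1 = 6/β:

```python
import math

# Sketch: output-mean propagation for Example 2, assuming the model
#   ydot = -beta*y + 6*u(t) + w(t)
# reconstructed from the impulse response h(t) = 6*exp(-beta*t)*u(t);
# beta = 1 is an illustrative value. The noise is zero-mean, so the mean
# obeys  mudot = -beta*mu + 6*u(t),  giving mu(inf) = 6/beta = H(0)*1.
beta = 1.0
dt, steps = 1e-4, 100_000           # integrate to T = 10 s
mu = 0.0
for _ in range(steps):
    mu += (-beta * mu + 6.0) * dt   # unit step input: u(t) = 1 for t >= 0

T = steps * dt
mu_exact = 6.0 / beta * (1.0 - math.exp(-beta * T))
```

This is the black curve behavior described for Figure 11: the mean rises toward H(0) while the error covariance is unaffected by the deterministic input.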

Example 3: A Larger Gain Is Applied to the System
A larger gain is applied to the system, shown as in Figure 12, leading to the system described by the following stochastic differential equation: with initial condition y(0) = 0, where u(t) is the unit step function and w(t) is the unity Gaussian white noise. The discrete model obtained from the continuous model can be represented as the following: where the covariance is two times larger than in the previous two examples.

The transfer function is as follows: Accordingly, the output mean values based on the relation µ_x(t) = µ_f(t)H(0) and the error covariance, respectively, are given by the following: Figure 13 provides the estimation result in the case of a larger gain. As a consistency check, the result based on the DKF, P_k = 1.2353, matches the result based on the CKF very well.
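The quoted DKF value can be cross-checked against the continuous-time ARE. This sketch assumes the same illustrative parameters as Example 1 (β = 1, q = 2, r = 1) with the process noise covariance doubled, as stated in the text; these assignments are an assumption, not given explicitly in this excerpt.

```python
import math

# Sketch for Example 3: with the process noise covariance doubled, the
# scalar ARE becomes  -2*beta*P + 2*q - P**2 / r = 0.
# beta = 1, q = 2, r = 1 are assumed (same illustrative values as Example 1);
# the positive root, sqrt(5) - 1 ~ 1.2361, is close to the quoted DKF value
# P_k = 1.2353 (the small gap comes from discretization).
beta, q, r = 1.0, 2.0, 1.0
P = -beta * r + math.sqrt((beta * r) ** 2 + 2.0 * q * r)
```

Doubling the process noise covariance thus raises the steady-state covariance from about 0.7321 to about 1.2361, in line with the consistency check reported above.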

Example 4: The Integrated Gauss-Markov Process
The integrated Gauss-Markov process shown in Figure 14 is frequently encountered in engineering applications. By defining two state variables, x_1 = x and x_2 = ẋ, the corresponding continuous model is as follows: ẋ_1 = x_2, ẋ_2 = −βx_2 + w(t), where w(t) is white noise with a spectral amplitude q. The mean-square values for this integrated Gauss-Markov process can be shown to be the following: and as t → ∞, the error covariance matrix approaches the following: The mean-square value of x_1, namely the error covariance P_11 = E[x_1²], grows unbounded. The time history of the covariances can be obtained using numerical integration to solve the Riccati equation. Figure 15 shows the propagation of the error covariance when no measurement is available for the integrated Gauss-Markov process, using the Lyapunov equation, which can be regarded as the Riccati equation with r → ∞. When the measurement is available, Figure 16 presents the propagation of the error covariance and the Kalman gains for the integrated Gauss-Markov process using the Riccati equation of the KF. The steady-state error covariance and Kalman gain matrices for the integrated Gauss-Markov process by the CKF are the following: To implement the DKF, the parameters Φ_k and Q_k are found to be the following: Figure 17 shows the time histories of the trajectories for the two states based on the KF, compared with the actual process. The steady-state result from the DKF, P_k = P∞, is very close to that based on the CKF; the results from the DKF are thus shown to be consistent with those from the CKF.
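The bounded/unbounded behavior can be sketched by integrating both covariance equations for the two-state model ẋ₁ = x₂, ẋ₂ = −βx₂ + w, z = x₁ + v. The values β = 1, q = 2, r = 1 are illustrative assumptions, not parameters stated for this example.

```python
import numpy as np

# Sketch: covariance propagation for the integrated Gauss-Markov process
#   x1dot = x2,  x2dot = -beta*x2 + w,  z = x1 + v,
# with illustrative values beta = 1, q = 2, r = 1. Without measurement
# (Lyapunov equation) P11 grows without bound; with measurement (Riccati
# equation) the covariance stays bounded.
beta, q, r = 1.0, 2.0, 1.0
F = np.array([[0.0, 1.0], [0.0, -beta]])
GQG = np.array([[0.0, 0.0], [0.0, q]])      # G Q G' for G = [0, 1]'
H = np.array([[1.0, 0.0]])
dt, steps = 1e-3, 30_000                    # integrate to T = 30 s

P_lyap = np.zeros((2, 2))                   # no measurement: Lyapunov
P_ricc = np.zeros((2, 2))                   # with measurement: Riccati
for _ in range(steps):
    P_lyap += (F @ P_lyap + P_lyap @ F.T + GQG) * dt
    corr = P_ricc @ H.T @ H @ P_ricc / r
    P_ricc += (F @ P_ricc + P_ricc @ F.T + GQG - corr) * dt

K = P_ricc @ H.T / r                        # steady-state continuous gain
```

The contrast between the two trajectories of P₁₁ mirrors Figures 15 and 16: the Lyapunov solution grows linearly at large t, while the Riccati solution settles to a bounded steady state.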

Conclusions
This paper serves readers as a supplementary note on the Kalman filter, offering a better understanding of the topic without requiring deep theoretical knowledge of probability, stochastic processes, or system theory. The illustrative examples provide further insights into the analysis and design of the Kalman filter, both qualitatively and quantitatively, enabling readers to correctly interpret the theory, practice the algorithms, and design the computer codes. This article explains the Kalman filter with illustrative examples so that the reader can grasp the basic principles. A detailed description accompanied by several examples is offered for clear illustration and a better understanding of the topic.
The supporting examples employed in this work include the scalar Gauss-Markov process, followed by two extensions of the process, one with an additional deterministic control input introduced and one with a larger gain applied, and finally the integrated Gauss-Markov process. The main issues covered are the connection between the two types of Kalman filters, the DKF and CKF, and the verification of results by theoretical and numerical approaches. A consistency check of the DKF and CKF results, including the mean value, mean-square value, Kalman gain, and theoretical covariance, is provided. Performance degradation caused by deviation from the optimal point due to parameter uncertainties was presented. Also examined were the unbounded errors caused by unavailable measurements and the bounded errors obtained when measurement updates are available. In addition, the influence on the estimation results when an additional control input is introduced, as well as when a larger gain is applied to the dynamical system, was illustrated.
The presented material is especially helpful for those with less experience or background in optimal estimation theory, helping them build a solid foundation for further study of the theory and applications of the topic.

Figure 2. Performance deterioration due to an increase in r, where r = 1 versus r = 0.01 are provided. The result based on the Riccati equation with r → ∞ coincides with that based on the Lyapunov equation.


Figure 3. Variations of (a) covariance and (b) Kalman gain as r increases.

Figure 4. State estimation for the first-order Gauss-Markov process in the case of very large measurement noise (r → ∞).


Figure 5. State estimation for the first-order Gauss-Markov process in the case of larger measurement noise (r = 1): (a) state estimation; (b) a closer look.

Figure 6. State estimation for the first-order Gauss-Markov process in the case of smaller measurement noise (r = 0.01): (a) state estimation; (b) a closer look.


Figure 9. Three-dimensional surface and contour of the covariance due to deviation of q and r from the optimal point (indicated by a circle symbol in the figure): (a) surface plot; (b) contour plot.
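The degradation surface of Figure 9 can be sketched by designing the filter gain for mismatched noise parameters and evaluating the resulting true error covariance. All parameter values below are illustrative assumptions, not the paper's example values.

```python
# Covariance degradation when the gain is designed for assumed (q_d, r_d)
# while the true system keeps (q_true, r_true); scalar model, assumed values.
phi, h = 0.9, 1.0
q_true, r_true = 0.5, 1.0

def designed_gain(q_d, r_d, iters=500):
    """Steady-state Kalman gain of a filter designed for (q_d, r_d)."""
    P = 1.0
    for _ in range(iters):
        Pm = phi * P * phi + q_d
        K = Pm * h / (h * Pm * h + r_d)
        P = (1.0 - K * h) * Pm
    return K

def actual_covariance(K, iters=500):
    """True steady-state error covariance when a fixed gain K runs on the true system."""
    P = 1.0
    for _ in range(iters):
        Pm = phi * P * phi + q_true
        P = (1.0 - K * h) ** 2 * Pm + K * K * r_true
    return P

P_opt = actual_covariance(designed_gain(q_true, r_true))
for q_d, r_d in ((0.1, 1.0), (0.5, 5.0), (2.0, 0.2)):
    P_mis = actual_covariance(designed_gain(q_d, r_d))
    print(q_d, r_d, P_mis - P_opt)   # nonnegative: mismatch degrades performance
```

Evaluating `actual_covariance` over a grid of (q_d, r_d) yields the surface and contour of Figure 9, with the minimum at the optimal point.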

Figure 11 provides the estimation result in the case of an additional deterministic control input. The plot on the right provides a closer look at the time interval 0-10 s. The curve in black indicates the response due to the deterministic control input. The results are consistent with the theoretical result shown in Figure 2 in Example 1.

Figure 10. Block diagram of Example 2: the scalar Gauss-Markov process with a deterministic control input.
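A minimal sketch of how a known deterministic control input enters the DKF cycle: it shifts the state prediction but, being deterministic, leaves the covariance propagation unchanged. The model coefficients and noise levels below are assumed for illustration.

```python
import random

# Scalar Gauss-Markov process with a deterministic control input u_k
# (assumed coefficients, not the paper's Example 2 values)
phi, b, h = 0.95, 1.0, 1.0   # transition, control, measurement coefficients
q, r = 0.1, 1.0              # process and measurement noise variances

def dkf_step(x_hat, P, z, u):
    # Time update: the known control term b*u enters the prediction only.
    x_pred = phi * x_hat + b * u
    P_pred = phi * P * phi + q
    # Measurement update
    K = P_pred * h / (h * P_pred * h + r)
    x_hat = x_pred + K * (z - h * x_pred)
    P = (1.0 - K * h) * P_pred
    return x_hat, P

random.seed(0)
x, x_hat, P = 0.0, 0.0, 1.0
for k in range(200):
    u = 1.0 if k < 100 else 0.0                      # step control input
    x = phi * x + b * u + random.gauss(0.0, q ** 0.5)
    z = h * x + random.gauss(0.0, r ** 0.5)
    x_hat, P = dkf_step(x_hat, P, z, u)
```

Because the filter feeds the same u into its prediction, the estimate tracks the control-driven response while the covariance behaves exactly as in the uncontrolled case.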



Figure 11. Estimation results for Example 2: (a) estimation results; (b) a closer look at the time interval 0-10 s.

Figure 12. Block diagram of Example 3: the scalar Gauss-Markov process with a larger gain.

Figure 13 provides the estimation result in the case of a larger gain. As a check of consistency, the result based on the DKF, P_k = 1.2353, matches very well with the result based on the CKF.

Figure 13. Estimation results for Example 3: (a) estimation results; (b) a closer look at the time interval 0-10 s.
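The DKF-CKF consistency check mentioned above can be sketched numerically: as the sampling interval shrinks, the DKF steady-state covariance approaches the CKF value from the continuous algebraic Riccati equation. The continuous-time parameters below are illustrative assumptions, not the paper's Example 3 values, so they will not reproduce P_k = 1.2353.

```python
import math

# Assumed continuous-time scalar model: dx/dt = -beta*x + w, z = x + v
beta, q_c, r_c = 1.0, 2.0, 1.0

# CKF steady state from the continuous ARE: 0 = -2*beta*P + q_c - P**2 / r_c
P_ckf = r_c * (-beta + math.sqrt(beta ** 2 + q_c / r_c))

def dkf_steady(dt, iters=20000):
    """Steady-state DKF covariance for the discretized model at step dt."""
    phi = math.exp(-beta * dt)     # discretized transition
    q_d = q_c * dt                 # discretized process noise
    r_d = r_c / dt                 # discretized measurement noise
    P = 1.0
    for _ in range(iters):
        Pm = phi * P * phi + q_d
        P = Pm * r_d / (Pm + r_d)
    return P

for dt in (0.1, 0.01, 0.001):
    print(dt, dkf_steady(dt), P_ckf)   # DKF value approaches P_ckf as dt -> 0
```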


Figure 15. Propagation of the error covariance for the integrated Gauss-Markov process when no measurement is available, using the Lyapunov equation.


Figure 16. Propagation of the error covariance and Kalman gains for the integrated Gauss-Markov process using the Riccati equation of the KF: (a) error covariance; (b) Kalman gains.
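The contrast between Figures 15 and 16 can be sketched with a two-state integrated Gauss-Markov model: the Lyapunov recursion (no measurements) grows without bound because of the integrator, while the Riccati recursion stays bounded once measurements of the first state are processed. The discretization and parameters below are assumed for illustration.

```python
# Assumed model: x1' = x2 (integrator), x2' = -beta*x2 + w; measurement z = x1 + v
dt, beta, q, r = 0.1, 1.0, 1.0, 1.0
F = [[1.0, dt], [0.0, 1.0 - beta * dt]]   # Euler-discretized transition
Q = [[0.0, 0.0], [0.0, q * dt]]           # process noise drives the second state

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def time_update(P):
    FPFt = mat_mul(mat_mul(F, P), transpose(F))
    return [[FPFt[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

def measurement_update(P):
    # Measurement matrix H = [1, 0]: only the integrated state is observed.
    S = P[0][0] + r                        # innovation variance
    K = [P[0][0] / S, P[1][0] / S]         # Kalman gain
    return [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]

P_lyap = [[1.0, 0.0], [0.0, 1.0]]
P_ricc = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(1000):
    P_lyap = time_update(P_lyap)                        # grows without bound
    P_ricc = measurement_update(time_update(P_ricc))    # stays bounded

print(P_lyap[0][0], P_ricc[0][0])
```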


Figure 17. Propagation of the two states using the KF for the integrated Gauss-Markov process: (a) first state; (b) second state.

Table 1. Implementation algorithm for the discrete Kalman filter (DKF) equations.
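The DKF cycle of Table 1 (time update, gain computation, measurement update) can be sketched for a scalar system, with the sample mean-square error compared against the theoretical steady-state covariance as a consistency check. The model parameters are assumed for illustration.

```python
import random
import statistics

# Assumed scalar Gauss-Markov model: x_{k+1} = phi*x_k + w_k, z_k = h*x_k + v_k
phi, h, q, r = 0.9, 1.0, 0.5, 1.0

random.seed(1)
x, x_hat, P = 0.0, 0.0, 1.0
sq_errors = []
for k in range(5000):
    # Simulate the true process and its measurement
    x = phi * x + random.gauss(0.0, q ** 0.5)
    z = h * x + random.gauss(0.0, r ** 0.5)
    # Time update (a priori estimate and covariance)
    x_pred = phi * x_hat
    P_pred = phi * P * phi + q
    # Kalman gain and measurement update (a posteriori)
    K = P_pred * h / (h * P_pred * h + r)
    x_hat = x_pred + K * (z - h * x_pred)
    P = (1.0 - K * h) * P_pred
    sq_errors.append((x_hat - x) ** 2)

# Consistency check: the sample mean-square error should match the
# theoretical steady-state covariance P.
print(statistics.mean(sq_errors[100:]), P)
```

This is the same agreement, between the Monte Carlo mean-square error and the Riccati covariance, that the paper uses to validate a correct implementation.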

Table 3. Objectives and highlights of important issues to be delivered from the examples.

3. Larger random input: a larger gain is applied to the scalar Gauss-Markov process.
- Influence on the estimation result due to a larger gain applied to the system.
4. Integrated Gauss-Markov process.
- Unbounded errors when no measurement is available.
- Bounded errors when measurement updates are available.
- Consistency check of results for the DKF and CKF, including mean-square value, Kalman gain, and theoretical covariance.