
SVD-Based Identification of Parameters of the Discrete-Time Stochastic Systems Models with Multiplicative and Additive Noises Using Metaheuristic Optimization

by Andrey Tsyganov 1,† and Yulia Tsyganova 2,*,†
1 Department of Mathematics, Physics and Technology Education, Ulyanovsk State University of Education, 432071 Ulyanovsk, Russia
2 Department of Mathematics, Information and Aviation Technology, Ulyanovsk State University, 432017 Ulyanovsk, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(20), 4292; https://doi.org/10.3390/math11204292
Submission received: 26 September 2023 / Revised: 12 October 2023 / Accepted: 13 October 2023 / Published: 15 October 2023
(This article belongs to the Special Issue Advanced Research in Fuzzy Systems and Artificial Intelligence)

Abstract: The paper addresses a parameter identification problem for discrete-time stochastic system models with multiplicative and additive noises. Such systems arise in many practical problems related to the processing of measurement information. The purpose of this work is to develop a numerically stable gradient-free instrumental method for solving parameter identification problems for a class of mathematical models described by discrete-time linear stochastic systems with multiplicative and additive noises, on the basis of metaheuristic optimization and singular value decomposition. We construct an identification criterion in the form of the negative log-likelihood function based on the values calculated by the newly proposed SVD-based Kalman-type filtering algorithm, which takes into account the multiplicative noises in the state and measurement equations. Metaheuristic optimization algorithms such as the GA (genetic algorithm) and SA (simulated annealing) are used to minimize the identification criterion. Numerical experiments confirm the validity of the proposed method and its numerical stability compared with the usage of the conventional Kalman-type filtering algorithm.

1. Introduction

Discrete-time stochastic systems with additive and multiplicative noises arise in many practical problems related to the processing of measurement information (for example, image and signal processing, financial mathematics, tracking problems, etc.). The sources of multiplicative noises in a system differ in nature depending on the problem being solved and on the object or process being modeled. Examples include linearization and modeling errors, quantization effects, physical phenomena such as fading in communication channels, and random disturbances in the system dynamics or in the sensors.
It is well known that, for systems with additive noises, a conventional Kalman filter may suffer from numerical instability caused by machine roundoff errors [1]. This is also true for systems with multiplicative noises [2].
The purpose of this work is to develop a numerically stable instrumental method for solving the parameter identification problem for a class of mathematical models described by discrete-time linear stochastic systems with multiplicative and additive noises, based on metaheuristic optimization of the quadratic identification criterion [3] and a newly constructed SVD-based Kalman-type filtering algorithm.
Metaheuristics are one of the powerful classes of gradient-free optimization methods used in parameter identification algorithms. They are high-level solution search strategies that can be used to solve a wide range of optimization problems in which the objective function may be nonsmooth, discontinuous, or highly nonlinear. A feature of metaheuristics is that almost all of them are nondeterministic [4].
The singular value decomposition method (or SVD factorization) is known as the most accurate method of matrix factorization, especially for matrices close to singular ones. Moreover, the singular value decomposition exists for any matrix, which cannot be said, for example, about the Cholesky or modified Cholesky decompositions [5]. Therefore, SVD factorization-based modifications of the Kalman filter (KF) have the same improved numerical robustness to machine roundoff errors as all known square-root modifications [1,6]. In addition, as mentioned in [7], SVD filters have additional advantages such as:
(1)
All eigenvalues of the error covariance matrices are automatically computed at each step of the filtering algorithm and can be used for automatic analysis and/or reduction of the original model;
(2)
The information matrices (inverse of the covariance matrices) are easily computed by inverting the diagonal factors in the SVD decomposition, which creates an elegant way to construct information-type algorithms and mixed-type filters with automatic switching from the covariance filtering mode to the information filtering mode.
As far as the authors of this paper know, the idea of constructing a numerically stable modification of the Kalman filter using singular value decomposition was first proposed by Oshman and Bar-Itzhack [8]. The authors named their variant of the SVD filter the V-Λ filter. To implement the algorithm, both the SVD decomposition and the Cholesky decomposition must be performed, as well as at least three matrix inversions. Oshman then proposed an information form of the V-Λ filter [9]. In [10], the V-Λ filter was applied to solve the parameter identification problem of a linear discrete-time stochastic system.
Later, other authors proposed their own variants of the SVD filter corresponding to the standard Kalman [11] and extended Kalman [12] filters. These modifications are similar to the V-Λ filter in many respects. The limitation of their application is the requirement of positive definiteness of the noise covariance matrices in the state and sensor model equations at each iteration of the algorithm, since the Cholesky decomposition must be applied to compute the square root of the covariance matrix. It is also necessary to perform a minimum of three matrix inversions.
In order to eliminate these drawbacks of the previous versions of the SVD filter, a new, improved version of the SVD Kalman filter was proposed in [13]. Its difference from other known variants is that this modification of the Kalman filter is free from the conditions Q k > 0 and R k > 0 required both in the standard Kalman filter and in all its square-root modifications [6]. Another significant numerical advantage of this modification is the presence of only one diagonal matrix inversion in the filter equations. As shown in [13], according to the results of a comparative analysis, this variant of the SVD filter showed the best results in terms of numerical stability against machine roundoff errors.
It should also be noted that the SVD filter confirmed its efficiency in solving the problems of parameter identification [14], the problem of estimating the state and the flight parameters of an aircraft [12], the problem of Kalman filtering of inertial measurement unit readings [15], and others.
The main idea of this paper is to develop a new numerically stable parameter identification method that combines the benefits of metaheuristic optimization and SVD factorization. The paper has the following structure. Section 2 provides basic definitions associated with the conventional Kalman-type filtering algorithm for discrete-time stochastic systems with multiplicative and additive noises. Also, the problem of parameter identification is described. Section 3 contains the main result of the paper—the newly constructed SVD-based Kalman-type filtering algorithm for systems with multiplicative and additive noises. Two lemmas that state algebraic equivalence between the SVD-based and conventional Kalman-type filtering algorithms are proved. Section 4 demonstrates how the proposed method can be applied for solving the parameter identification problems of the considered stochastic system models and its numerical superiority in dealing with machine roundoff errors compared with the usage of conventional Kalman-type algorithms. Section 5 concludes the paper.

2. Methodology

2.1. Conventional Kalman-Type Filtering Algorithm for Discrete-Time Stochastic Systems with Multiplicative and Additive Noises

Consider a discrete-time linear stochastic system with multiplicative and additive noises
x k = ( F k 1 + F ˜ k 1 ξ k 1 ) x k 1 + G k 1 w k 1 , z k = ( H k + H ˜ k ζ k ) x k + v k , k = 1 , 2 , , M
where x k ∈ R n is the system state vector; z k ∈ R m is the measurement vector; matrices F k , F ˜ k ∈ R n × n ; H k , H ˜ k ∈ R m × n ; G k ∈ R n × q ; M is the number of measurements; x 0 ∼ N ( x ¯ 0 , Π 0 ) is the initial state; ξ k ∈ R , ξ k ∼ N ( 0 , σ ξ 2 ) and w k ∈ R q , w k ∼ N ( 0 , Q k ) are multiplicative and additive noises in the system state, respectively; ζ k ∈ R , ζ k ∼ N ( 0 , σ ζ 2 ) and v k ∈ R m , v k ∼ N ( 0 , R k ) are multiplicative and additive noises in the measurement scheme, respectively; covariance matrices Q k and R k of noises w k and v k , respectively, are positive semidefinite; and all noises and the initial state are mutually independent.
The discrete-time Kalman-type filtering algorithm for the systems under consideration is known (see, for example, [2,16]). It allows for computing linear optimal estimates, x ^ k , of the state vector, x k , from the available measurements z k , k = 1 , , M .
Let us introduce the following notation:
w ˜ k 1 = F ˜ k 1 ξ k 1 x k 1 + G k 1 w k 1 , v ˜ k = H ˜ k ζ k x k + v k
where
E x k x k T = X k , X k = F k 1 X k 1 F k 1 T + Q ˜ k 1 ; E w ˜ k = 0 , E w ˜ k w ˜ k T = Q ˜ k = σ ξ 2 F ˜ k X k F ˜ k T + G k Q k G k T ; E v ˜ k = 0 , E v ˜ k v ˜ k T = R ˜ k = σ ζ 2 H ˜ k X k H ˜ k T + R k .
Firstly, consider a conventional Kalman-type filtering algorithm for the system (1), Algorithm 1. The derivation of this algorithm can be found, for example, in [16].
Algorithm 1: Conventional Kalman-type filtering algorithm (KF).
Initialization. Calculate X 0 = Π 0 + x ¯ 0 x ¯ 0 T . Set initial values P 0 = Π 0 , x ^ 0 = x ¯ 0 .
For  k = 1 , 2 , , M   do
I. Time Update step. Find a priori covariance estimation error matrix P k | k 1 and a priori estimate of the state vector x ^ k | k 1 as follows:
Q ˜ k 1 = σ ξ 2 F ˜ k 1 X k 1 F ˜ k 1 T + G k 1 Q k 1 G k 1 T ,
X k = F k 1 X k 1 F k 1 T + Q ˜ k 1 ,
P k | k 1 = F k 1 P k 1 F k 1 T + Q ˜ k 1 ,
x ^ k | k 1 = F k 1 x ^ k 1 .
II. Measurement Update step. Using the a priori estimates P k | k 1 and x ^ k | k 1 , find their a posteriori values P k and x ^ k as follows:
R ˜ k = σ ζ 2 H ˜ k X k H ˜ k T + R k ,
Σ k = H k P k | k 1 H k T + R ˜ k ,
K k = P k | k 1 H k T Σ k − 1 ,
P k = ( I − K k H k ) P k | k 1 ,
x ^ k = x ^ k | k 1 + K k ν k ,
ν k = z k − H k x ^ k | k 1 .
End.
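For concreteness, one iteration of the recursion above can be sketched in NumPy (an illustrative helper of our own, not the authors' MATLAB implementation; all names are ours):

```python
import numpy as np

def kf_step(x_hat, P, X, z, F, Ft, G, H, Ht, Q, R, sig_xi2, sig_zeta2):
    """One iteration of the conventional Kalman-type filter (Algorithm 1).

    Ft, Ht stand for the multiplicative-noise matrices F~, H~;
    sig_xi2, sig_zeta2 are the variances of xi_k and zeta_k.
    """
    # Time Update, Equations (2)-(5)
    Q_t = sig_xi2 * Ft @ X @ Ft.T + G @ Q @ G.T   # (2) effective process noise
    X = F @ X @ F.T + Q_t                         # (3) second moment of the state
    P_pr = F @ P @ F.T + Q_t                      # (4) a priori error covariance
    x_pr = F @ x_hat                              # (5) a priori state estimate

    # Measurement Update, Equations (6)-(11)
    R_t = sig_zeta2 * Ht @ X @ Ht.T + R           # (6) effective measurement noise
    S = H @ P_pr @ H.T + R_t                      # (7) residual covariance Sigma_k
    K = P_pr @ H.T @ np.linalg.inv(S)             # (8) gain
    P = (np.eye(len(x_hat)) - K @ H) @ P_pr       # (9) a posteriori covariance
    nu = z - H @ x_pr                             # (11) measurement residual
    x_hat = x_pr + K @ nu                         # (10) a posteriori estimate
    return x_hat, P, X, S, nu
```

Note that the second moment X k of the state must be propagated alongside P, because the effective covariances Q~ and R~ depend on it.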

2.2. The Problem of Parameter Identification

Suppose that the matrices defining the equations of the system model (1) depend on the unknown parameters. Let us set the problem of their identification by available measurements Z 1 M = { z 1 , , z k , , z M } .
Let the vector of the unknown parameters be θ ∈ R p . Then the value of the estimation error e k = x k − x ^ k will depend on the value of the parameter θ , which enters the equations of the discrete-time filtering algorithm. The minimum value of the error e k can be obtained by minimizing with respect to θ the quadratic functional
J k o ( θ ) = E e k T ( θ ) e k ( θ ) .
The problem is that the functional (12) is not instrumental, i.e., it is not practically feasible, because the errors e k are not available for direct observation. The most popular approach to solving this problem is the class of MPE (minimum prediction error) methods [17], based on minimizing an identification criterion that depends on the observed measurement residuals. Such criteria include the well-known least squares and maximum likelihood criteria. An alternative approach is the auxiliary performance index method [18]. Thus, the algorithm of numerical minimization of the original functional (12) with respect to the parameter θ is replaced by the algorithm of numerical minimization of the selected instrumental criterion, which is practically feasible.
To solve the problem of identifying the parameters of the system (1), we construct an instrumental criterion in the form of a negative logarithmic likelihood function:
J KF ( θ ; Z 1 M ) = M m 2 ln 2 π + 1 2 ∑ k = 1 M ln | Σ k ( θ ) | + ‖ ν k ( θ ) ‖ Σ k − 1 ( θ ) 2 ,
the values of which, for a given θ , we will calculate using Equations (2)–(11) of Algorithm 1.
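Given the residual covariances Σ k and residuals ν k produced by the filter, criterion (13) can be accumulated as follows (an illustrative sketch; the function name is our own):

```python
import numpy as np

def neg_log_likelihood(Sigmas, nus):
    """Identification criterion (13): the negative log-likelihood accumulated
    from the residual covariances Sigma_k and residuals nu_k of Algorithm 1."""
    M = len(nus)
    m = len(nus[0])
    J = 0.5 * M * m * np.log(2.0 * np.pi)
    for S, nu in zip(Sigmas, nus):
        # add 0.5 * ( log|Sigma_k| + nu_k^T Sigma_k^{-1} nu_k )
        _, logdet = np.linalg.slogdet(S)
        J += 0.5 * (logdet + nu @ np.linalg.solve(S, nu))
    return J
```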
The identification criterion (13), calculated on the basis of the values obtained by the Kalman filter or its modification, can serve as an objective function for minimization algorithms of various types. If the gradient of the identification criterion is unknown or finding it is computationally expensive, then gradient-free methods such as metaheuristic algorithms can be used for minimization. Also, these methods are extremely useful if the objective function loses its smoothness or continuity properties, for example, due to machine roundoff errors.
Depending on the method of obtaining a solution, most metaheuristic optimization algorithms can be divided into two large groups: trajectory and population algorithms. In trajectory algorithms, the solution search process can be viewed as a movement between individual solutions in the solution space of the problem, while, in population algorithms, the search for an optimal solution involves changing the group of solutions.
One of the most popular trajectory algorithms used in solving global optimization problems is the simulated annealing method (SA). The key feature of the method is the use of a control parameter—temperature, which allows for controlling the nondeterministic process of solution search. As a rule, the temperature decreases during the algorithm operation according to a certain law, starting from some initial value. At each iteration of the algorithm, a randomly generated new solution from the neighborhood of the current solution is accepted, with a probability of 1 if it is better and a probability of less than 1 if it is worse than the current one, and the probability of accepting the worst solution decreases with decreasing temperature. The quality of the solutions is evaluated using a cost function with integer or real values.
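The acceptance rule described above can be sketched as a generic SA minimizer for a scalar parameter (a sketch with an assumed geometric cooling law and a temperature-scaled neighborhood; these are our illustrative choices, not the settings of MATLAB's simulannealbnd used in the paper):

```python
import numpy as np

def simulated_annealing(cost, x0, bounds, T0=1.0, cooling=0.95, n_iter=500, rng=None):
    """Minimize `cost` on the segment `bounds` by simulated annealing:
    a better neighbor is accepted with probability 1, a worse one with
    probability exp(-delta/T), which decreases as the temperature T drops."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    x, fx, T = x0, cost(x0), T0
    best_x, best_f = x, fx
    for _ in range(n_iter):
        cand = np.clip(x + T * rng.normal(), lo, hi)  # neighbor of the current solution
        fc = cost(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(T, 1e-300)):
            x, fx = cand, fc                          # accept the candidate
        if fx < best_f:
            best_x, best_f = x, fx                    # keep the best solution seen
        T *= cooling                                  # geometric cooling schedule
    return best_x, best_f
```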
A genetic algorithm (GA) is a popular version of evolutionary optimization algorithms based on the modeling of natural selection processes. In evolutionary algorithms, the quality of solutions is evaluated using a fitness function, and the basic idea is that solutions with the best values of this function “survive” during evolution. In a genetic algorithm, at each iteration of the evolutionary process, a new population is obtained from the current population by successive use of one or more genetic operators, the most common of which are crossover, which allows descendant solutions from parental solutions to be obtained, and mutation, which randomly modifies solutions.
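A minimal real-coded GA illustrating these operators for a scalar parameter (the particular operator choices, tournament selection, arithmetic crossover, Gaussian mutation and elitism, are our own illustration, not those of MATLAB's ga):

```python
import numpy as np

def genetic_algorithm(fitness, bounds, pop_size=30, n_gen=60, mut_rate=0.2, rng=None):
    """Minimize `fitness` on the segment `bounds` with a simple real-coded GA."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    pop = rng.uniform(lo, hi, pop_size)               # initial population
    for _ in range(n_gen):
        f = np.array([fitness(x) for x in pop])
        new = [pop[np.argmin(f)]]                     # elitism: keep the best solution
        while len(new) < pop_size:
            i = rng.integers(pop_size, size=2)        # tournament selection of parents
            j = rng.integers(pop_size, size=2)
            p1 = pop[i[np.argmin(f[i])]]
            p2 = pop[j[np.argmin(f[j])]]
            a = rng.random()                          # arithmetic crossover
            child = a * p1 + (1 - a) * p2
            if rng.random() < mut_rate:               # Gaussian mutation
                child += 0.1 * (hi - lo) * rng.normal()
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    f = np.array([fitness(x) for x in pop])
    return pop[np.argmin(f)], f.min()
```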

3. Main Results

The New SVD-Based Kalman-Type Filtering Algorithm for Discrete-Time Stochastic Systems with Multiplicative and Additive Noises

Now, we are ready to present our new result, the SVD-based Kalman-type filtering Algorithm 2. Consider the SVD factorization [19]. Any matrix A ∈ C m × n of rank r can be represented as
A = W Σ V * , Σ = S 0 0 0 C m × n , S = diag { σ 1 , , σ r }
where W ∈ C m × m , V ∈ C n × n are unitary matrices, V * denotes the conjugate transpose of V , and S ∈ R r × r is a real non-negative diagonal matrix. The values σ 1 ≥ σ 2 ≥ ⋯ ≥ σ r > 0 are the singular values of the matrix A . Note that if r = n and/or r = m , some of the zero submatrices in Σ are absent.
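The decomposition is easy to check numerically, e.g. with NumPy; note that it exists even for a rank-deficient matrix, for which a Cholesky factorization would fail:

```python
import numpy as np

# SVD of a rank-deficient matrix: A = W @ Sigma @ V^T with nonincreasing
# singular values; the factorization exists for any matrix.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])              # rank r = 1
W, s, Vt = np.linalg.svd(A)             # s holds the singular values
assert np.all(s[:-1] >= s[1:])          # sigma_1 >= sigma_2 >= ...
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)    # embed diag(s) into the m-by-n Sigma
assert np.allclose(W @ Sigma @ Vt, A)   # reconstruction A = W Sigma V^T
```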
Algorithm 2: SVD-based Kalman-type filtering algorithm (SVD-KF).
Initialization. Apply SVD factorization for the initial matrices X 0 = Θ X 0 D X 0 Θ X 0 T and Π 0 = Θ Π 0 D Π 0 Θ Π 0 T . Set the initial values: Θ P 0 = Θ Π 0 , D P 0 = D Π 0 and x ^ 0 = x ¯ 0 .
For  k = 1 , 2 , , M   do
I. Time Update step.
I.1. Apply SVD factorization for the process noise covariance matrix
Q k 1 = Θ Q k 1 D Q k 1 Θ Q k 1 T .
I.2. Build the pre-arrays and apply the SVD factorization in order to obtain the SVD factors { Θ Q ˜ k 1 , D Q ˜ k 1 } , { Θ X k , D X k } and { Θ P k | k 1 , D P k | k 1 } as follows
σ ξ D X k 1 1 2 Θ X k 1 T F ˜ k 1 T D Q k 1 1 2 Θ Q k 1 T G k 1 T = W TU ( 1 ) D Q ˜ k 1 1 2 0 Θ Q ˜ k 1 T ;
D X k 1 1 2 Θ X k 1 T F k 1 T D Q ˜ k 1 1 2 Θ Q ˜ k 1 T = W TU ( 2 ) D X k 1 2 0 Θ X k T ;
D P k 1 1 2 Θ P k 1 T F k 1 T D Q ˜ k 1 1 2 Θ Q ˜ k 1 T = W TU ( 3 ) D P k | k 1 1 2 0 Θ P k | k 1 T .
I.3. Given x ^ k 1 , compute a priori estimate x ^ k | k 1 by (5).
II. Measurement Update step.
II.1. Apply SVD factorization for the measurements noise covariance matrix
R k = Θ R k D R k Θ R k T .
II.2. In order to obtain the SVD factors { Θ R ˜ k , D R ˜ k } and { Θ Σ k , D Σ k } , apply the SVD factorization to the next left hand side pre-arrays:
σ ζ D X k 1 2 Θ X k T H ˜ k T D R k 1 2 Θ R k T = W MU ( 1 ) D R ˜ k 1 2 0 Θ R ˜ k T ;
D P k | k 1 1 2 Θ P k | k 1 T H k T D R ˜ k 1 2 Θ R ˜ k T = W MU ( 2 ) D Σ k 1 2 0 Θ Σ k T .
II.3. Find the feedback gain K k as follows:
K k = K ¯ k D Σ k − 1 Θ Σ k T , where K ¯ k = P k | k 1 H k T Θ Σ k .
II.4. In order to obtain the SVD factors { Θ P k , D P k } , apply the SVD factorization to the next left hand side pre-array:
D P k | k 1 1 2 Θ P k | k 1 T ( I − K k H k ) T D R ˜ k 1 2 Θ R ˜ k T K k T = W MU ( 3 ) D P k 1 2 0 Θ P k T .
II.5. Find a posteriori estimate x ^ k as follows:
x ^ k = x ^ k | k 1 + K ¯ k D Σ k − 1 ν ¯ k , where ν ¯ k = Θ Σ k T ( z k − H k x ^ k | k 1 ) .
End.
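As an illustration of how the pre-arrays are processed, the covariance part of pre-array (16) can be handled as follows (a sketch assuming the SVD factors of P k−1 and Q~ k−1 are already available; the function and variable names are ours):

```python
import numpy as np

def svd_time_update_cov(Theta_P, D_P, Theta_Qt, D_Qt, F):
    """Covariance time update in SVD form, pre-array (16): the SVD of the
    stacked pre-array yields the factors {Theta, D} of
    P_{k|k-1} = F P_{k-1} F^T + Q~_{k-1} without forming that sum explicitly."""
    pre = np.vstack([
        np.sqrt(D_P) @ Theta_P.T @ F.T,    # D_P^{1/2} Theta_P^T F^T
        np.sqrt(D_Qt) @ Theta_Qt.T,        # D_{Q~}^{1/2} Theta_{Q~}^T
    ])
    _, s, Vt = np.linalg.svd(pre)          # pre = W [D^{1/2}; 0] Theta^T
    n = F.shape[0]
    Theta_new = Vt.T                       # right singular vectors -> Theta_{P_{k|k-1}}
    D_new = np.diag(s[:n] ** 2)            # squared singular values -> D_{P_{k|k-1}}
    return Theta_new, D_new
```

The correctness follows from pre^T pre = F P k−1 F^T + Q~ k−1, exactly the identity used in the proof of Lemma 1.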
Lemma 1.
Time update steps of the KF and SVD-KF algorithms for system model (1) are algebraically equivalent.
Proof. 
Let us prove that (2) and (14) are equivalent.
From a general form A = W Σ V T we obtain
A T A = W Σ V T T W Σ V T = V Σ 2 V T .
So, from (14) we have
A T A = σ ξ F ˜ k 1 Θ X k 1 D X k 1 1 2 G k 1 Θ Q k 1 D Q k 1 1 2 σ ξ D X k 1 1 2 Θ X k 1 T F ˜ k 1 T D Q k 1 1 2 Θ Q k 1 T G k 1 T = σ ξ 2 F ˜ k 1 Θ X k 1 D X k 1 Θ X k 1 T F ˜ k 1 T + G k 1 Θ Q k 1 D Q k 1 Θ Q k 1 T G k 1 T = σ ξ 2 F ˜ k 1 X k 1 F ˜ k 1 T + G k 1 Q k 1 G k 1 T . On the other hand, V Σ 2 V T = Θ Q ˜ k 1 D Q ˜ k 1 Θ Q ˜ k 1 T = Q ˜ k 1 .
Hence, we have (2) from (22). The equivalence of (3) and (15), and (4) and (16) can be proved in the same manner.    □
Lemma 2.
Measurement update steps of the KF and SVD-KF algorithms for system model (1) are algebraically equivalent.
Proof. 
The equivalence of (6) and (17), and (7) and (18) can be proved in the same way as in Lemma 1.
Next, expression (19) for calculating K k is derived from (8), where the matrix Σ k is the SVD factorized. Indeed,
K k = P k | k 1 H k T Σ k − 1 = P k | k 1 H k T ( Θ Σ k D Σ k Θ Σ k T ) − 1 = K ¯ k D Σ k − 1 Θ Σ k T .
Let us prove the equivalence of (9) and (20). Taking into account (20), the left hand side of (22) may be written as follows:
A T A = ( I − K k H k ) Θ P k | k 1 D P k | k 1 1 2 K k Θ R ˜ k D R ˜ k 1 2 D P k | k 1 1 2 Θ P k | k 1 T ( I − K k H k ) T D R ˜ k 1 2 Θ R ˜ k T K k T = ( I − K k H k ) Θ P k | k 1 D P k | k 1 1 2 D P k | k 1 1 2 Θ P k | k 1 T ( I − K k H k ) T + K k Θ R ˜ k D R ˜ k 1 2 D R ˜ k 1 2 Θ R ˜ k T K k T = ( I − K k H k ) P k | k 1 ( I − K k H k ) T + K k R ˜ k K k T .
On the other hand,
V Σ 2 V T = Θ P k D P k 1 2 D P k 1 2 Θ P k T = P k .
Thus,
P k = ( I − K k H k ) P k | k 1 ( I − K k H k ) T + K k R ˜ k K k T .
Let us rewrite (23) using (7) and (8):
P k = ( I − K k H k ) P k | k 1 − P k | k 1 H k T K k T + K k H k P k | k 1 H k T K k T + K k R ˜ k K k T = ( I − K k H k ) P k | k 1 − P k | k 1 H k T K k T + K k ( H k P k | k 1 H k T + R ˜ k ) K k T = ( I − K k H k ) P k | k 1 − P k | k 1 H k T K k T + P k | k 1 H k T Σ k − 1 Σ k K k T = ( I − K k H k ) P k | k 1 − P k | k 1 H k T K k T + P k | k 1 H k T K k T = ( I − K k H k ) P k | k 1 , which coincides with (9).
Finally, expression (21) follows directly from (10), taking into account K k = K ¯ k D Σ k − 1 Θ Σ k T .    □
Now, to construct the parameter identification procedure, we need to rewrite the expression for computing the negative logarithmic likelihood function (13) in terms of the SVD-KF. Given that
det ( Σ k ) = det ( D Σ k )   and   ν k T Σ k − 1 ν k = ν ¯ k T D Σ k − 1 ν ¯ k ,
we can write
J SVD ( θ ; Z 1 M ) = M m 2 ln ( 2 π ) + 1 2 ∑ k = 1 M { ln [ det ( D Σ k ) ] + ν ¯ k T D Σ k − 1 ν ¯ k }
where the diagonal matrix D Σ k and vector ν ¯ k are available at each step of Algorithm 2.
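Both identities behind this rewriting can be verified numerically (an illustrative check with a randomly generated SPD residual covariance):

```python
import numpy as np

# With Sigma = Theta D Theta^T and Theta orthogonal, det(Sigma) = det(D) and
# the quadratic form of the residual reduces to a diagonally weighted sum.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
Sigma = A @ A.T + np.eye(3)                 # an arbitrary SPD residual covariance
Theta, d, _ = np.linalg.svd(Sigma)          # Sigma = Theta diag(d) Theta^T
nu = rng.normal(size=3)
nu_bar = Theta.T @ nu                       # transformed residual, as in (21)
assert np.isclose(np.linalg.det(Sigma), np.prod(d))
assert np.isclose(nu @ np.linalg.solve(Sigma, nu), nu_bar @ (nu_bar / d))
```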
To find the optimal value, θ * , of unknown parameter θ with the objective function (24), we use the metaheuristic optimization methods GA and SA.
The identification of unknown parameter θ and the estimation of state vector x k of system (1) can be performed simultaneously according to the criterion
θ * = argmin D ( θ ) J SVD ( θ ; Z 1 M ) .

4. Discussion

In this section, we wish to show the validity and numerical superiority of the proposed method in dealing with machine roundoff errors. In order to conduct numerical experiments, we have implemented Algorithms 1 and 2 in MATLAB, together with functions for calculating the identification criteria J KF ( θ ; Z 1 M ) and J SVD ( θ ; Z 1 M ) according to (13) and (24), as well as functions for modeling the system dynamics and measurements. The functions ga and simulannealbnd from the MATLAB Global Optimization Toolbox were used for numerical minimization of both identification criteria by the GA and SA methods, respectively. All experiments were conducted on the following platform: Windows 11, Intel Core i3-1115G4 CPU @ 3.00 GHz, 8 GB of RAM.
Table 1 presents the main GA and SA settings used in the numerical experiments. The remaining settings are taken by default.
Example 1. First, let us demonstrate the validity of the proposed method. Consider a nearly constant velocity model for the uniform motion [20] augmented with multiplicative noises ξ k and ζ k :
x k = 1 θ 0 1 + 0 0 0 1 ξ k 1 x k 1 + θ 2 2 θ w k 1 , z k = 1 0 0 1 + 0 0 0 1 ζ k x k + v k , k = 1 , , 100
where x k = [ x 1 , x 2 ] k T , x 1 = x is the coordinate of the object, x 2 = v x is its velocity, x 0 ∼ N ( [ 0 , 1 ] T , 10 I 2 ) , w k ∼ N ( 0 , 10 2 ) , v k ∼ N ( 0 , σ 2 I 2 ) ( σ = 0.1 , 0.5 , 1.0 ), ξ k ∼ N ( 0 , 10 4 ) , ζ k ∼ N ( 0 , 10 4 ) , and θ is the model parameter to be identified. Let us put the “true” value of the parameter equal to θ * = 0.1 .
Figure 1 shows estimation results of the system state vector x k obtained with Algorithms 1 and 2 for model (26) with σ = 0.5 . As one can see, the newly proposed SVD-KF algorithm shows the ability to solve the discrete-time filtering problem successfully and yields the same state estimates as the conventional KF-type algorithm.
Further, we wish to demonstrate how the new Algorithm 2 can be applied to solve the parameter identification problem compared with the conventional Algorithm 1.
A series of 100 numerical experiments was conducted in MATLAB for each value of the noise level, σ . In each experiment, numerical identification of parameter θ using both identification criteria was performed based on the results of simulated measurements. The solution, θ * , was searched on the segment [ 0 ; 1 ] .
The average running times of the GA and SA minimizations based on the KF algorithm were 0.886 sec and 0.357 sec, respectively. The average running times of the GA and SA minimizations based on the SVD-KF algorithm were 3.695 sec and 1.376 sec, respectively (see Table 2).
We can conclude that for both identification criteria the GA works on average about 2.5–2.7 times slower compared with the SA. This is because the GA works with a group of solutions and performs several genetic operators (selection, crossover, mutation, etc.) at each iteration compared with SA, which works with a single solution and uses much simpler calculations. At the same time, optimization methods based on the SVD-KF algorithm work approximately four times slower than those based on the conventional KF because of the usage of several SVD procedures at each iteration of the algorithm. This may be overcome by using parallel implementations of the SVD procedure.
The results of the numerical identification of parameter θ are summarized in Table 3 and Table 4. They show that, with the selected settings, for both identification criteria the GA shows better identification accuracy compared with SA, although it works slower. Also, the accuracy of parameter identification for both identification criteria is practically the same. RMSE and MAPE values decrease with decreasing noise level σ .
Thus, the results of the numerical experiments confirm the validity of using the proposed SVD-KF algorithm to solve identification problems.
Despite the fact that the SVD-KF algorithm is slower compared with the conventional KF, in the following example we will show that it has an undoubted superiority in terms of numerical robustness to machine roundoff errors.
Example 2. To demonstrate the numerical efficiency of the proposed SVD-based identification method, a state-space model with multiplicative and additive noises is explored. The system dynamics considered in [21] is given by the equation
x k = θ 0.15 0 0.15 + 0.01 0 0 0.01 ξ k 1 x k 1 + θ 2.5 w k 1 , k = 1 , , 100 ,
x 0 ∼ N ( x ¯ 0 , 10 I 2 ) , x ¯ 0 = [ 0 , 1 ] T , w k ∼ N ( 0 , 0.1 ) , ξ k ∼ N ( 0 , 0.01 ) , and θ is the system parameter that needs to be identified. Let us put the “true” value of the parameter equal to θ * = 0.2 .
Consider the ill-conditioned measurement scheme as in [2]
z k = 1 1 1 1 + δ + 0 0 0 1 ζ k x k + v k ,
v k ∼ N ( 0 , δ 2 I 2 ) , ζ k ∼ N ( 0 , δ 2 ) , where δ 2 < ϵ roundoff but δ > ϵ roundoff , and ϵ roundoff denotes the unit roundoff error (computer roundoff for floating-point arithmetic is often characterized by a single parameter ϵ roundoff , defined as the largest number such that either 1 + ϵ roundoff = 1 or 1 + ϵ roundoff / 2 = 1 in machine precision).
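The effect of the condition δ 2 < ϵ roundoff is easy to reproduce in double precision (our own illustration of why the conventional filter degrades on such a measurement scheme):

```python
import numpy as np

# In IEEE double precision, 1 + delta**2 rounds to exactly 1 when
# delta**2 < eps, so the contribution of the measurement noise covariance
# delta^2 * I is lost and the residual covariance becomes numerically singular.
eps = np.finfo(float).eps          # unit roundoff, about 2.22e-16
delta = 1e-9                       # delta > eps, but delta**2 < eps
assert delta > eps and delta**2 < eps
assert 1.0 + delta > 1.0
assert 1.0 + delta**2 == 1.0       # delta^2 vanishes in the addition

H = np.array([[1.0, 1.0],
              [1.0, 1.0 + delta]])           # ill-conditioned measurement matrix
S = H @ H.T + delta**2 * np.eye(2)           # residual-covariance-like matrix
assert np.linalg.cond(S) > 1e12              # numerically singular
```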
A series of 100 numerical experiments was conducted in MATLAB for different values of δ . In each experiment, numerical identification of parameter θ using both identification criteria was performed based on the results of simulated measurements. The solution, θ * , was searched on the segment [ 0 ; 1 ] .
The average running time of the GA and SA minimizations based on the KF and SVD-KF algorithms is presented in Table 5. It can be seen that starting from δ = 10 8 the SA algorithm for identification criterion (13) fails to solve the problem within the required time and the GA starts to slow down dramatically. At the same time, the GA and SA, which use identification criterion (24), perform adequately.
The results of the numerical identification of parameter θ are summarized in Table 6 and Table 7. They show that the GA and SA methods based on the SVD-KF Algorithm 2 remain stable for all values of δ and yield adequate results. At the same time, metaheuristic optimization methods based on Algorithm 1 start to diverge at δ = 10 8 . This happens because the identification criterion (13) loses smoothness and continuity due to machine roundoff errors.

5. Conclusions

The paper proposes an instrumental method for identifying parameters of discrete-time stochastic system models with multiplicative and additive noises. We have constructed a new SVD-based Kalman-type filtering algorithm that allows the calculation of all filter quantities using numerically stable singular value decomposition. It is similar to the existing SVD-based algorithms for systems with additive noises only. In contrast, our newly constructed algorithm takes into account the presence of multiplicative noises in the state and measurement equations of the system model.
Lemmas 1 and 2 contain the main theoretical results of the paper. We have proved the algebraic equivalence of the time update and measurement update steps in Algorithms 1 and 2.
With the aim to solve the problem of parameter identification for the considered class of system models, we have constructed an SVD-based identification criterion in the form of a negative logarithmic likelihood function. In order to find the optimal value of the unknown system model parameter, we have used SA and GA methods for its optimization.
Having carried out a series of numerical experiments in MATLAB, we have demonstrated how the new Algorithm 2 can be applied for solving the parameter identification problems. The results of numerical experiments confirm the superiority of the newly developed method in managing machine roundoff errors compared with the one based on conventional Kalman-type filtering. It is worth noting that the GA shows better accuracy in minimizing the identification criteria than SA, although it works slower. Taking into account the speed/accuracy ratio, the usage of the SVD-KF filtering algorithm for computing the identification criterion in conjunction with the SA algorithm for its minimization seems to be the best choice.

Author Contributions

Conceptualization, A.T. and Y.T.; methodology, A.T. and Y.T.; software, A.T.; validation, Y.T.; formal analysis, A.T. and Y.T.; investigation, A.T. and Y.T.; resources, A.T. and Y.T.; data curation, A.T. and Y.T.; writing—original draft preparation, Y.T.; writing—review and editing, A.T. and Y.T.; visualization, A.T. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Russian Science Foundation, grant no. 22–21–00387, https://rscf.ru/en/project/22-21-00387/ (accessed on 25 September 2023).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SVD  Singular value decomposition
KF  Kalman filter
TU  Time update
MU  Measurement update
MPE  Minimum prediction error
GA  Genetic algorithm
SA  Simulated annealing
RMSE  Root mean square error
MAPE  Mean absolute percentage error

Figure 1. Estimation results. (a) Coordinate x, its measurements and estimates. (b) Velocity v_x, its measurements and estimates.
Table 1. Settings of the algorithms.

GA                                        SA
Parameter        Value                    Parameter         Value
TimeLimit        60                       TimeLimit         60
Generations      Inf                      MaxIter           Inf
StallGenLimit    20                       StallIterLimit    100
PopulationSize   10                       ReannealInterval  100
PopInitRange     [0; 1]                   MaxFunEvals       Inf
MutationFcn      @mutationadaptfeasible
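The option names in Table 1 (Generations, StallIterLimit, @mutationadaptfeasible) are MATLAB Global Optimization Toolbox settings for ga and simulannealbnd. As a rough illustration of how such stopping controls interact with a criterion minimizer, here is a minimal simulated-annealing loop over a toy one-parameter criterion; the criterion J, the bounds, and all tuning constants below are stand-ins for illustration, not the paper's likelihood or toolbox internals.

```python
import math
import random
import time

def minimize_sa(J, lo, hi, time_limit=60.0, stall_limit=100, seed=1):
    """Toy simulated annealing with TimeLimit/StallIterLimit-style stopping."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best_x, best_J = x, J(x)
    temp, stall, start = 1.0, 0, time.monotonic()
    while time.monotonic() - start < time_limit and stall < stall_limit:
        # Propose a bounded random step; step size shrinks as the loop cools.
        cand = min(hi, max(lo, x + rng.gauss(0.0, temp * (hi - lo))))
        dJ = J(cand) - J(x)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if dJ < 0 or rng.random() < math.exp(-dJ / max(temp, 1e-12)):
            x = cand
        if J(x) < best_J:
            best_x, best_J, stall = x, J(x), 0
        else:
            stall += 1          # StallIterLimit-style counter
        temp *= 0.99            # geometric cooling
    return best_x

# Stand-in identification criterion with its minimum at theta = 0.1
theta_hat = minimize_sa(lambda th: (th - 0.1) ** 2, 0.0, 1.0, time_limit=1.0)
```

In the paper's setting, J would be the negative log-likelihood evaluated by a filtering pass over the measurement data, so each candidate point costs one full filter run; this is why the average times in Tables 2 and 5 track the cost of the underlying filter.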
Table 2. Average time, sec.

            KF                  SVD-KF
            GA        SA        GA        SA
σ = 1.0     0.820     0.373     3.442     1.434
σ = 0.5     0.866     0.354     3.526     1.362
σ = 0.1     0.972     0.343     4.118     1.333
Average     0.886     0.357     3.695     1.376
Table 3. Identification results (KF).

            GA                                    SA
            Mean       RMSE       MAPE            Mean       RMSE       MAPE
σ = 1.0     0.099164   0.011193   8.177905        0.098430   0.012116   8.874215
σ = 0.5     0.099261   0.005376   4.403328        0.097789   0.007293   5.773781
σ = 0.1     0.099885   0.001185   0.959292        0.097491   0.005608   3.443892
Table 4. Identification results (SVD-KF).

            GA                                    SA
            Mean       RMSE       MAPE            Mean       RMSE       MAPE
σ = 1.0     0.099131   0.011085   8.164202        0.098504   0.012049   8.949169
σ = 0.5     0.099259   0.005375   4.404718        0.098127   0.006944   5.551390
σ = 0.1     0.099882   0.001186   0.962216        0.096972   0.006581   4.470887
Table 5. Average time, sec.

             KF                  SVD-KF
             GA        SA        GA        SA
δ = 10⁻⁶     0.684     0.421     2.933     1.595
δ = 10⁻⁷     0.765     0.454     2.814     1.653
δ = 10⁻⁸     5.752     —         6.613     3.607
δ = 10⁻⁹     5.560     —         6.928     3.873
Table 6. Identification results (KF).

             GA                                     SA
             Mean       RMSE       MAPE             Mean       RMSE       MAPE
δ = 10⁻⁶     0.201826   0.030241   12.175545        0.202339   0.030573   12.246241
δ = 10⁻⁷     0.196880   0.028717   11.372277        0.197731   0.028353   11.220115
δ = 10⁻⁸     0.126608   0.220648   92.177307        —          —          —
δ = 10⁻⁹     0.319231   0.407681   162.665399       —          —          —
Table 7. Identification results (SVD-KF).

             GA                                     SA
             Mean       RMSE       MAPE             Mean       RMSE       MAPE
δ = 10⁻⁶     0.201787   0.030282   12.221007        0.201656   0.030390   12.089709
δ = 10⁻⁷     0.196823   0.028029   11.152689        0.197950   0.028576   11.303757
δ = 10⁻⁸     0.203242   0.031032   12.662414        0.203419   0.031660   13.103576
δ = 10⁻⁹     0.201195   0.027771   11.557529        0.200827   0.028267   11.554388
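The Mean, RMSE, and MAPE columns in Tables 3, 4, 6, and 7 are standard sample statistics of the identified parameter over repeated runs against its true value. A short numpy sketch of these metrics (the estimates array below is made-up data for illustration, not the experiment's output):

```python
import numpy as np

def summarize(estimates, true_value):
    """Mean, RMSE, and MAPE (%) of parameter estimates vs. the true value."""
    est = np.asarray(estimates, dtype=float)
    mean = est.mean()
    rmse = np.sqrt(np.mean((est - true_value) ** 2))
    mape = 100.0 * np.mean(np.abs((est - true_value) / true_value))
    return mean, rmse, mape

# Made-up estimates scattered around a true parameter value of 0.1
mean, rmse, mape = summarize([0.098, 0.101, 0.103, 0.097], 0.1)
# mean ≈ 0.09975, MAPE ≈ 2.25 %
```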