Article

A Successive over Relaxation Implicit Iterative Algorithm for Solving Stochastic Linear Systems with Markov Jumps

School of Science, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 1080; https://doi.org/10.3390/math12071080
Submission received: 10 March 2024 / Revised: 29 March 2024 / Accepted: 2 April 2024 / Published: 3 April 2024

Abstract

In order to solve continuous stochastic Lyapunov equations, a novel implicit iterative algorithm based on successive over relaxation (SOR) iteration is presented in this article. In this method, three tuning parameters are introduced to improve the convergence rate. It is shown that the sequence generated by the algorithm is monotonic and bounded, and the convergence condition is given and then extended. By applying the latest updated estimates, this algorithm can attain better convergence performance than other existing iterative algorithms when appropriate tuning parameters are chosen. Finally, a numerical example is provided to illustrate the feasibility and superiority of this approach.

1. Introduction

In the design of control systems, various matrix equations are widely used [1,2,3,4,5,6]. In particular, some coupled matrix equations are important tools for studying the stability of Markov jump linear systems [7]. It can be seen in [8] that the mean square stability of an Itô system can be characterized by the corresponding Lyapunov-type matrix equation. Markov jump systems have been instrumental in modeling real-world systems that are susceptible to random factors. This concept, highlighted in reference [9], has garnered significant interest among researchers in this discipline. By employing Markov jumps, scientists and engineers can gain insight into how components of these complex systems behave under uncertainty or change, providing valuable tools for predicting outcomes and making informed decisions. In [10], the notion of exact detectability is presented for linear systems with state-multiplicative noise. Based on critical stability and exact detectability, some useful properties are derived in [10] for Itô Lyapunov matrix equations. Unified definitions of observability and detectability for continuous-time stochastic systems are given in [11], where the so-called stochastic Lyapunov matrix equations are reviewed. In [12], some stochastic stability properties are presented for the Markov jump linear system in terms of the existence of the solution of coupled Lyapunov matrix equations (CLMEs). In the study referenced in [13], an approach is developed to design a suboptimal H2/H∞ controller for continuous-time linear Itô stochastic systems with Markov jumps (MJIS systems). The extended concept of exact detectability in [14] has been incorporated into linear MJIS systems, making it more convenient to analyze the properties of control systems. Research has suggested that under exact detectability conditions, the mean square stability of a linear MJIS system depends on whether the corresponding Lyapunov matrix equations have positive semi-definite solutions. Two iterative algorithms are established for both continuous-time and discrete-time MJIS systems in [15], and their convergence is studied using linear positive operators. Based on the characteristics of the solution of the corresponding Lyapunov equations, a necessary and sufficient condition for the mean square stability of detectable systems of this type is developed in [16]. In fact, the CLMEs, characterized by their intricate coupling and their appearance in MJIS systems, can be regarded as a particular case of generalized Lyapunov equations.
For the above case, ref. [17] proposes a novel implicit iterative approach specifically tailored to the solution of the CLMEs in the discrete-time setting. Ref. [18] tackles the challenge of solving the coupled equations by minimizing a quadratic function embedded within the iterative process. In addition, ref. [19] provides a direct method for the coupled matrix equations, which is less effective for large-scale systems. In recent years, various kinds of iterative methods have been put forward by different researchers. Ref. [20] presents a least-squares iterative method for equations of this kind, in which a hierarchical identification principle is employed. In [21], a full-rank gradient-based algorithm, as well as a reduced-rank one, is constructed. In fact, due to the similarity between coupled and general Lyapunov matrix equations, both can be solved by applying these iterative algorithms. Moreover, all of the above algorithms use only the estimates generated in the previous iteration step.
In 2016, an implicit iterative algorithm [22] was put forward to solve continuous CLMEs by using some adjustable parameters. The successive over relaxation (SOR) technique is a classic way to accelerate the convergence of Jacobi iterations [23]. Owing to this advantage, it was used to solve the discrete periodic Lyapunov matrix equations [24]. In [25], a novel iterative algorithm with two adjustment parameters is proposed for the coupled Lyapunov equations arising in continuous-time Itô stochastic systems with Markov jumps; it combines both the latest and the previous step information to estimate the unknown matrices. Ref. [26] establishes a new iterative method based on the SOR technique for solving the CLMEs associated with continuous-time Markov jump linear systems, in which a relaxation parameter is included that can be selected appropriately to improve the convergence performance.
Inspired by the aforementioned facts, an SOR implicit iterative algorithm is established in this paper for solving continuous stochastic Lyapunov equations. Three tunable parameters are included to improve the convergence speed of the algorithm. Subsequently, the monotonicity and boundedness of the proposed algorithm are analyzed, and the convergence condition is further investigated. By making use of the latest updated iteration values and choosing the tuning parameters appropriately, this approach exhibits better convergence than existing ones.
In this paper, R^{m×n} denotes the set of all m×n real matrices. E(·) denotes the mathematical expectation. A^T and A^{-1} denote the transpose and the inverse of a matrix A, respectively. ρ(A) denotes the spectral radius of A. The notation I[a,b] denotes the discrete finite set {a, a+1, …, b}, in which a and b are integers with a < b. The vectorization of a matrix A = [a_1, a_2, …, a_n] ∈ R^{n×n} is defined as vec(A) = [a_1^T, a_2^T, …, a_n^T]^T. M ⊗ W denotes the Kronecker product of the matrices M and W. If P is a positive definite matrix, we write P > 0. Similarly, (P_1(m), P_2(m), …, P_N(m)) > 0 indicates that P_i(m) > 0 for every i ∈ I[1,N]. ‖·‖_F denotes the Frobenius norm.

2. Problem Formulation and Preliminaries

Consider the following continuous-time Itô stochastic system with Markov jumps:

dx(t) = A_0(\theta(t))\,x(t)\,dt + \sum_{s=1}^{r} A_s(\theta(t))\,x(t)\,d\omega_s(t),

where x(t) ∈ R^n is the system state; A_0(θ(t)) and A_s(θ(t)) are coefficient matrices; ω_s(t) ∈ R, s ∈ I[1,r], are real random processes defined on a complete probability space {Ω, F, P} with E(ω_i(t)) = 0 and E(ω_i(t)ω_j(s)) = δ_ij, i, j ∈ I[1,r]; and {θ(t), t ≥ 0} is a continuous-time, discrete-state Markov process. The transition rate matrix is Π = [π_ij]_{N×N}, where π_ij ≥ 0 for j ≠ i and \sum_{j=1}^{N}\pi_{ij} = 0 for every i ∈ I[1,N]. The initial values are x(0) = x_0 and θ(0) = θ_0. Next, the definition of asymptotic mean square stability (AMSS) is introduced for system (1).
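For intuition about the dynamics in (1), the Python sketch below simulates a single trajectory by combining an Euler–Maruyama discretization of the diffusion with a one-step approximation of the Markov chain transitions. The function name, the data layout (lists of matrices indexed by mode), and the step size are illustrative assumptions rather than anything specified in the paper.

import numpy as np

def simulate_mjls(A0, As, Pi, x0, theta0, T=1.0, dt=1e-3, rng=None):
    # Simulate dx = A0(theta) x dt + sum_s As(theta) x dw_s, where theta(t) is a
    # continuous-time Markov chain whose transition rate matrix is Pi.
    rng = np.random.default_rng() if rng is None else rng
    N = Pi.shape[0]
    x, theta = np.array(x0, dtype=float), theta0
    for _ in range(int(T / dt)):
        # Mode transition over one short step: P(theta -> j) is approximately pi_{theta j} * dt
        probs = dt * np.where(np.arange(N) == theta, 0.0, Pi[theta])
        probs[theta] = 1.0 - probs.sum()
        theta = rng.choice(N, p=probs)
        # Euler-Maruyama step for the Ito diffusion in the current mode
        dw = rng.normal(scale=np.sqrt(dt), size=len(As[theta]))
        x = x + A0[theta] @ x * dt + sum(w * (A @ x) for A, w in zip(As[theta], dw))
    return x, theta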
Definition 1 ([15]).
System (1) has AMSS if, for any x_0 ∈ R^n and θ_0 ∈ I[1,N],

\lim_{t \to \infty} E\bigl(\|x(t, x_0, \theta_0)\|^2 \,\big|\, x_0, \theta_0\bigr) = 0.
The corresponding continuous stochastic Lyapunov equation of system (1) can be written as follows:
A_{0i}^T P_i + P_i A_{0i} + \sum_{s=1}^{r} A_{si}^T P_i A_{si} + \sum_{j=1}^{N}\pi_{ij} P_j = -Q_i, \quad i \in I[1,N].
In this equation, P_i, i ∈ I[1,N], are the unknown solutions of Lyapunov Equation (2), and Q_i > 0, i ∈ I[1,N], are known matrices. This equation is usually used to analyze the stability of system (1). Regarding the asymptotic mean square stability of (1), the following result is obtained.
Lemma 1 ([15]).
System (1) has AMSS if and only if, for any given Q_i > 0, the corresponding Lyapunov Equation (2) has a unique solution (P_1, P_2, …, P_N) > 0.
Thus, the stability of system (1) is equivalent to the existence of a unique solution of Equation (2). Lyapunov Equation (2) can be rewritten in the following form:

A_i^T P_i + P_i A_i = -\sum_{s=1}^{r} A_{si}^T P_i A_{si} - \sum_{j=1,\, j \neq i}^{N}\pi_{ij} P_j - \beta_i P_i - Q_i,

with

A_i = A_{0i} + 0.5\,\pi_{ii} I_n - 0.5\,\beta_i I_n, \quad \beta_i \geq 0, \quad i \in I[1,N].
Through Lemma 1 and the associated theories of stability, we can obtain the following conclusions:
Lemma 2 ([25]).
Any matrix A_i, i ∈ I[1,N], is Hurwitz-stable if system (1) has AMSS.
An iterative algorithm in the implicit form is proposed in [25] to deal with the coupled Lyapunov Equation (2). It is summarized as follows:
Theorem 1 ([25]).
Assume that the Markov jump system (1) has AMSS. If Q_i > 0, β_i ≥ 0, and the tuning parameters satisfy α_j ∈ [0,1], then the sequence (P_1(m), P_2(m), …, P_N(m)) generated by algorithm (5) below with zero initial conditions P_i(0) = 0, i ∈ I[1,N], converges to the unique positive definite solution of the continuous coupled Lyapunov matrix Equation (2). The designed iteration takes the form

A_i^T P_i(m+1) + P_i(m+1) A_i = -\sum_{s=1}^{r} A_{si}^T P_i(m) A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j P_j(m+1) + (1-\alpha_j) P_j(m)\bigr] - \sum_{j=i+1}^{N}\pi_{ij} P_j(m) - \beta_i P_i(m) - Q_i, \quad i \in I[1,N].

3. Main Result

Inspired by the idea of the SOR technique, in what follows we construct a new algorithm to obtain the solution of Equation (2). We start from the identity

A_i^T P_i + P_i A_i = (1-\gamma)\bigl(A_i^T P_i + P_i A_i\bigr) + \gamma\bigl(A_i^T P_i + P_i A_i\bigr), \quad i \in I[1,N],

where γ is a tunable parameter. From the preceding section, Equation (2) can be rewritten as (3). Substituting the right-hand side of (3) for the first term on the right-hand side of (6), we obtain the following relation:

A_i^T P_i + P_i A_i = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T P_i A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j P_j + (1-\alpha_j) P_j\bigr] - \sum_{j=i+1}^{N}\pi_{ij} P_j - \beta_i P_i - Q_i\Bigr) + \gamma\bigl(A_i^T P_i + P_i A_i\bigr), \quad i \in I[1,N].
Utilizing the latest updated estimates, we propose the following implicit iterative algorithm for solving Equation (2):

A_i^T P_i(m+1) + P_i(m+1) A_i = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T P_i(m) A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j P_j(m+1) + (1-\alpha_j) P_j(m)\bigr] - \sum_{j=i+1}^{N}\pi_{ij} P_j(m) - \beta_i P_i(m) - Q_i\Bigr) + \gamma\bigl(A_i^T P_i(m) + P_i(m) A_i\bigr), \quad i \in I[1,N].
Remark 1.
If the parameter γ = 0 , then the proposed algorithm (8) is reduced to algorithm (5).
Remark 2.
For every iteration step of algorithm (8), N standard continuous Lyapunov matrix equations need to be solved; hence, the algorithm proposed in this paper is of implicit form.
Remark 3.
It can be seen that algorithm (8) proposed in this article is very similar to the SOR iteration method for ordinary linear equations. Therefore, in the following text, algorithm (8) is abbreviated as the SOR implicit iterative algorithm.
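To make the iteration concrete, the following Python sketch shows one possible implementation of algorithm (8); it is only illustrative, since the authors report their computations in Matlab. The data layout (lists of matrices indexed by mode), the helper scipy.linalg.solve_continuous_lyapunov, and the fixed iteration count are assumptions of this sketch. Each sweep solves N standard continuous Lyapunov equations, and setting gamma = 0 recovers algorithm (5).

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def sor_implicit_iteration(A0, As, Pi, Q, alpha, beta, gamma, n_iter=100):
    # A0[i]: A_{0i};  As[i][s]: A_{si};  Pi: transition rate matrix;  Q[i]: Q_i > 0
    # alpha[j] in [0, 1], beta[i] >= 0, 0 <= gamma < 1 (gamma = 0 gives algorithm (5))
    N, n = len(A0), A0[0].shape[0]
    I = np.eye(n)
    # A_i = A_{0i} + 0.5*pi_{ii}*I_n - 0.5*beta_i*I_n, as in (4)
    A = [A0[i] + 0.5 * Pi[i, i] * I - 0.5 * beta[i] * I for i in range(N)]
    P = [np.zeros((n, n)) for _ in range(N)]          # zero initial values
    for _ in range(n_iter):
        P_new = [None] * N
        for i in range(N):
            # Right-hand side of (8): the latest estimates P_new[j] are used for j < i
            rhs = -sum(Asi.T @ P[i] @ Asi for Asi in As[i]) - beta[i] * P[i] - Q[i]
            for j in range(i):
                rhs -= Pi[i, j] * (alpha[j] * P_new[j] + (1 - alpha[j]) * P[j])
            for j in range(i + 1, N):
                rhs -= Pi[i, j] * P[j]
            rhs = (1 - gamma) * rhs + gamma * (A[i].T @ P[i] + P[i] @ A[i])
            # Solve the standard Lyapunov equation A_i^T X + X A_i = rhs for X = P_i(m+1)
            P_new[i] = solve_continuous_lyapunov(A[i].T, rhs)
        P = P_new
    return P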
First, the following lemma illustrates that the sequence generated by (8) is bounded.
Lemma 3.
Assume that system (1) has AMSS and Q_i > 0, i ∈ I[1,N]. If β_i ≥ 0, the tuning parameters satisfy α_j ∈ [0,1], and 0 ≤ γ < 1, then under zero initial values the sequence (P_1(m), P_2(m), …, P_N(m)) generated by algorithm (8) is bounded from above by the solution of (2). In other words, for any integer m ≥ 0, we have

P_i(m) < P_i, \quad i \in I[1,N].
Proof. 
According to Lemmas 1 and 2, the matrices A_i, i ∈ I[1,N], are Hurwitz-stable and Equation (2) has a unique solution (P_1, P_2, …, P_N). Subtracting (8) from (7), we obtain

A_i^T\bigl[P_i - P_i(m+1) - \gamma\bigl(P_i - P_i(m)\bigr)\bigr] + \bigl[P_i - P_i(m+1) - \gamma\bigl(P_i - P_i(m)\bigr)\bigr]A_i = V_i(m),

where

V_i(m) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T\bigl(P_i - P_i(m)\bigr)A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j\bigl(P_j - P_j(m+1)\bigr) + (1-\alpha_j)\bigl(P_j - P_j(m)\bigr)\bigr] - \sum_{j=i+1}^{N}\pi_{ij}\bigl(P_j - P_j(m)\bigr) - \beta_i\bigl(P_i - P_i(m)\bigr)\Bigr).

In what follows, the proof of this lemma proceeds by mathematical induction. Because of the zero initial values of the algorithm, (9) clearly holds for m = 0. Now suppose that (9) holds for m = k with k ≥ 0, that is,

P_i(k) < P_i, \quad i \in I[1,N].

Setting m = k and i = 1 in relation (11), we obtain

V_1(k) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{s1}^T\bigl(P_1 - P_1(k)\bigr)A_{s1} - \sum_{j=2}^{N}\pi_{1j}\bigl(P_j - P_j(k)\bigr) - \beta_1\bigl(P_1 - P_1(k)\bigr)\Bigr).

By the induction hypothesis above and 0 ≤ γ < 1, we obtain V_1(k) < 0. Therefore, from (10), we know that

A_1^T\bigl[P_1 - P_1(k+1) - \gamma\bigl(P_1 - P_1(k)\bigr)\bigr] + \bigl[P_1 - P_1(k+1) - \gamma\bigl(P_1 - P_1(k)\bigr)\bigr]A_1 < 0.
A 1 is Hurwitz-stable, so we can obtain the following inequality from Theorem 3.1 in [27]:
P_1 - P_1(k+1) - \gamma\bigl(P_1 - P_1(k)\bigr) > 0.

That is,

P_1 - P_1(k+1) > \gamma\bigl(P_1 - P_1(k)\bigr).

Since P_1 > P_1(k) by assumption (12) and γ ≥ 0, we have

P_1 > P_1(k+1).
Using mathematical induction again, we now assume that
P_j > P_j(k+1), \quad j \in I[1, l-1], \quad l \geq 2.

Setting m = k and i = l in Equations (10) and (11), we can obtain

A_l^T\bigl[P_l - P_l(k+1) - \gamma\bigl(P_l - P_l(k)\bigr)\bigr] + \bigl[P_l - P_l(k+1) - \gamma\bigl(P_l - P_l(k)\bigr)\bigr]A_l = V_l(k),

where

V_l(k) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{sl}^T\bigl(P_l - P_l(k)\bigr)A_{sl} - \sum_{j=1}^{l-1}\pi_{lj}\bigl[\alpha_j\bigl(P_j - P_j(k+1)\bigr) + (1-\alpha_j)\bigl(P_j - P_j(k)\bigr)\bigr] - \sum_{j=l+1}^{N}\pi_{lj}\bigl(P_j - P_j(k)\bigr) - \beta_l\bigl(P_l - P_l(k)\bigr)\Bigr).

Since 0 ≤ γ < 1, from assumptions (12) and (16) we obtain V_l(k) < 0. That is,

A_l^T\bigl[P_l - P_l(k+1) - \gamma\bigl(P_l - P_l(k)\bigr)\bigr] + \bigl[P_l - P_l(k+1) - \gamma\bigl(P_l - P_l(k)\bigr)\bigr]A_l < 0.

Due to the Hurwitz stability of A_l, the following inequality holds:

P_l - P_l(k+1) - \gamma\bigl(P_l - P_l(k)\bigr) > 0.

It means that

P_l - P_l(k+1) > \gamma\bigl(P_l - P_l(k)\bigr).

According to assumption (12), we conclude that P_l > P_l(k+1). Combining this with relation (16) and using mathematical induction on l, we know that

P_i > P_i(k+1), \quad i \in I[1,N].
Finally, by (17) and assumption (12), we conclude that (9) holds for any integer m ≥ 0. Thus, the proof is complete. □
Second, it is found that the sequence generated by (8) is monotonically increasing.
Lemma 4.
Assume that system (1) has AMSS and Q_i > 0, i ∈ I[1,N]. If β_i ≥ 0, 0 ≤ γ < 1, and the tuning parameters satisfy α_j ∈ [0,1], then under zero initial values the sequence (P_1(m), P_2(m), …, P_N(m)) produced by algorithm (8) is strictly monotonically increasing. In other words, for any integer m ≥ 0, we have

P_i(m) < P_i(m+1), \quad i \in I[1,N].
Proof. 
According to Lemmas 1 and 2, the matrices A_i, i ∈ I[1,N], are Hurwitz-stable and Equation (2) has a unique solution (P_1, P_2, …, P_N). Writing (8) for m = k and for m = k + 1 and subtracting the former from the latter, we obtain

A_i^T\bigl[P_i(k+2) - P_i(k+1) - \gamma\bigl(P_i(k+1) - P_i(k)\bigr)\bigr] + \bigl[P_i(k+2) - P_i(k+1) - \gamma\bigl(P_i(k+1) - P_i(k)\bigr)\bigr]A_i = M_i(k),

where

M_i(k) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T\bigl(P_i(k+1) - P_i(k)\bigr)A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j\bigl(P_j(k+2) - P_j(k+1)\bigr) + (1-\alpha_j)\bigl(P_j(k+1) - P_j(k)\bigr)\bigr] - \sum_{j=i+1}^{N}\pi_{ij}\bigl(P_j(k+1) - P_j(k)\bigr) - \beta_i\bigl(P_i(k+1) - P_i(k)\bigr)\Bigr), \quad i \in I[1,N].

Subsequently, mathematical induction is used to prove this lemma. Since P_i(0) = 0, i ∈ I[1,N], and P_i(1) > 0 (because A_i^T P_i(1) + P_i(1)A_i = -(1-\gamma)Q_i < 0 with A_i Hurwitz-stable), (18) holds for m = 0. Now suppose that (18) holds for m = k with k ≥ 0, which means

P_i(k) < P_i(k+1), \quad i \in I[1,N].

Setting i = 1 in (20), we can obtain

M_1(k) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{s1}^T\bigl(P_1(k+1) - P_1(k)\bigr)A_{s1} - \sum_{j=2}^{N}\pi_{1j}\bigl(P_j(k+1) - P_j(k)\bigr) - \beta_1\bigl(P_1(k+1) - P_1(k)\bigr)\Bigr).
Since 0 ≤ γ < 1, according to assumption (21) it can easily be found that M_1(k) < 0. Therefore, from (19), we can obtain the following inequality:

A_1^T\bigl[P_1(k+2) - P_1(k+1) - \gamma\bigl(P_1(k+1) - P_1(k)\bigr)\bigr] + \bigl[P_1(k+2) - P_1(k+1) - \gamma\bigl(P_1(k+1) - P_1(k)\bigr)\bigr]A_1 < 0.

By the Hurwitz stability of A_1, we have the following inequality from Theorem 3.1 in [27]:

P_1(k+2) - P_1(k+1) - \gamma\bigl(P_1(k+1) - P_1(k)\bigr) > 0.

Equivalently,

P_1(k+2) - P_1(k+1) > \gamma\bigl(P_1(k+1) - P_1(k)\bigr).

Since P_1(k+1) > P_1(k) by assumption (21) and γ ≥ 0, we have

P_1(k+2) > P_1(k+1).
Using mathematical induction again, we now assume that

P_j(k+2) > P_j(k+1), \quad j \in I[1, l-1], \quad l \geq 2.

Setting i = l in relation (20), we can obtain

M_l(k) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{sl}^T\bigl(P_l(k+1) - P_l(k)\bigr)A_{sl} - \sum_{j=1}^{l-1}\pi_{lj}\bigl[\alpha_j\bigl(P_j(k+2) - P_j(k+1)\bigr) + (1-\alpha_j)\bigl(P_j(k+1) - P_j(k)\bigr)\bigr] - \sum_{j=l+1}^{N}\pi_{lj}\bigl(P_j(k+1) - P_j(k)\bigr) - \beta_l\bigl(P_l(k+1) - P_l(k)\bigr)\Bigr).

Since 0 ≤ γ < 1, the induction assumptions (21) and (25) give M_l(k) < 0. For relation (19) with i = l, we then have

A_l^T\bigl[P_l(k+2) - P_l(k+1) - \gamma\bigl(P_l(k+1) - P_l(k)\bigr)\bigr] + \bigl[P_l(k+2) - P_l(k+1) - \gamma\bigl(P_l(k+1) - P_l(k)\bigr)\bigr]A_l < 0.

A_l is Hurwitz-stable; hence,

P_l(k+2) - P_l(k+1) - \gamma\bigl(P_l(k+1) - P_l(k)\bigr) > 0.

This is equivalent to

P_l(k+2) - P_l(k+1) > \gamma\bigl(P_l(k+1) - P_l(k)\bigr).

By assumption (21), we have P_l(k+2) > P_l(k+1). Combining this with relation (25), the following is evident from mathematical induction:

P_i(k+2) > P_i(k+1), \quad i \in I[1,N].
Consequently, by (27) and assumption (21), it is proven that (18) holds for any integer m ≥ 0. Thus, the proof is finished. □
Combining Lemmas 3 and 4, the following convergence property of algorithm (8) can be deduced.
Theorem 2.
Assume that system (1) has AMSS and Q_i > 0, i ∈ I[1,N]. If β_i ≥ 0, 0 ≤ γ < 1, and the tuning parameters satisfy α_j ∈ [0,1], then under the zero initial values P_i(0) = 0, i ∈ I[1,N], the sequence (P_1(m), P_2(m), …, P_N(m)) produced by algorithm (8) monotonically converges to the unique solution (P_1, P_2, …, P_N) of Equation (2).
Proof. 
From Lemmas 3 and 4 and the conditions of Theorem 2, the sequence (P_1(m), P_2(m), …, P_N(m)) produced by the SOR implicit algorithm (8) increases monotonically and is bounded from above by the solution of (2). That is,

0 = P_i(0) < P_i(1) < P_i(2) < \cdots < P_i(m) < \cdots < P_i, \quad i \in I[1,N].

Thus, the sequence (P_1(m), P_2(m), …, P_N(m)) is convergent. Taking the limit of P_i(k), we denote

\lim_{k \to \infty} P_i(k) = P_i^{*}, \quad i \in I[1,N].

Taking limits on both sides of algorithm (8), we easily obtain

A_i^T P_i^{*} + P_i^{*} A_i = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T P_i^{*} A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j P_j^{*} + (1-\alpha_j) P_j^{*}\bigr] - \sum_{j=i+1}^{N}\pi_{ij} P_j^{*} - \beta_i P_i^{*} - Q_i\Bigr) + \gamma\bigl(A_i^T P_i^{*} + P_i^{*} A_i\bigr).

This is equivalent to

(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n)^T P_i^{*} + P_i^{*}(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T P_i^{*} A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j P_j^{*} + (1-\alpha_j) P_j^{*}\bigr] - \sum_{j=i+1}^{N}\pi_{ij} P_j^{*} - \beta_i P_i^{*} - Q_i\Bigr) + \gamma\bigl[(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n)^T P_i^{*} + P_i^{*}(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n)\bigr],

so

(1-\gamma)\bigl[(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n)^T P_i^{*} + P_i^{*}(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n)\bigr] = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T P_i^{*} A_{si} - \sum_{j=1}^{i-1}\pi_{ij}\bigl[\alpha_j P_j^{*} + (1-\alpha_j) P_j^{*}\bigr] - \sum_{j=i+1}^{N}\pi_{ij} P_j^{*} - \beta_i P_i^{*} - Q_i\Bigr) = (1-\gamma)\Bigl(-\sum_{s=1}^{r} A_{si}^T P_i^{*} A_{si} - \sum_{j=1}^{N}\pi_{ij} P_j^{*} + \pi_{ii} P_i^{*} - \beta_i P_i^{*} - Q_i\Bigr).

Dividing both sides by 1 − γ, we obtain

(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n)^T P_i^{*} + P_i^{*}(A_{0i} + 0.5\pi_{ii} I_n - 0.5\beta_i I_n) = -\sum_{s=1}^{r} A_{si}^T P_i^{*} A_{si} - \sum_{j=1}^{N}\pi_{ij} P_j^{*} + \pi_{ii} P_i^{*} - \beta_i P_i^{*} - Q_i,

which can be equivalently written as follows:

A_{0i}^T P_i^{*} + P_i^{*} A_{0i} + \sum_{s=1}^{r} A_{si}^T P_i^{*} A_{si} + \sum_{j=1}^{N}\pi_{ij} P_j^{*} = -Q_i.

Comparing with matrix Equation (2) and using the uniqueness of its solution, we find that the limit (P_1^{*}, P_2^{*}, …, P_N^{*}) is exactly the solution (P_1, P_2, …, P_N) of the coupled Lyapunov matrix Equation (2). □
It has been demonstrated that algorithm (8) converges under the zero initial condition. However, this condition is rather restrictive. Consequently, a sufficient condition is now provided for the convergence of algorithm (8) under an arbitrary initial value.
Theorem 3.
Assume that system (1) has AMSS and Q_i > 0, i ∈ I[1,N]. Let the matrices M and W be defined from the vectorized form (29) of algorithm (8) as follows:

M = \mathrm{diag}\bigl(I_n \otimes A_1^T + A_1^T \otimes I_n,\; I_n \otimes A_2^T + A_2^T \otimes I_n,\; \ldots,\; I_n \otimes A_N^T + A_N^T \otimes I_n\bigr) + (1-\gamma)\underline{M},

where

\underline{M} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 0 \\ \alpha_1\pi_{21} I_{n^2} & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \alpha_1\pi_{N-1,1} I_{n^2} & \alpha_2\pi_{N-1,2} I_{n^2} & \cdots & 0 & 0 \\ \alpha_1\pi_{N1} I_{n^2} & \alpha_2\pi_{N2} I_{n^2} & \cdots & \alpha_{N-1}\pi_{N,N-1} I_{n^2} & 0 \end{bmatrix}.

And

W = \gamma\,\mathrm{diag}\bigl(I_n \otimes A_1^T + A_1^T \otimes I_n,\; \ldots,\; I_n \otimes A_N^T + A_N^T \otimes I_n\bigr) - (1-\gamma)\,\mathrm{diag}\Bigl(\sum_{s=1}^{r} A_{s1}^T \otimes A_{s1}^T + \beta_1 I_{n^2},\; \sum_{s=1}^{r} A_{s2}^T \otimes A_{s2}^T + \beta_2 I_{n^2},\; \ldots,\; \sum_{s=1}^{r} A_{sN}^T \otimes A_{sN}^T + \beta_N I_{n^2}\Bigr) - (1-\gamma)\underline{W},

where

\underline{W} = \begin{bmatrix} 0 & \pi_{12} I_{n^2} & \cdots & \pi_{1,N-1} I_{n^2} & \pi_{1N} I_{n^2} \\ (1-\alpha_1)\pi_{21} I_{n^2} & 0 & \cdots & \pi_{2,N-1} I_{n^2} & \pi_{2N} I_{n^2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ (1-\alpha_1)\pi_{N-1,1} I_{n^2} & (1-\alpha_2)\pi_{N-1,2} I_{n^2} & \cdots & 0 & \pi_{N-1,N} I_{n^2} \\ (1-\alpha_1)\pi_{N1} I_{n^2} & (1-\alpha_2)\pi_{N2} I_{n^2} & \cdots & (1-\alpha_{N-1})\pi_{N,N-1} I_{n^2} & 0 \end{bmatrix}.

If the tuning parameters α_j and β_i and the relaxation parameter γ satisfy ρ(M^{-1}W) < 1, then the sequence (P_1(m), P_2(m), …, P_N(m)) produced by the SOR implicit algorithm (8) with any initial condition (P_1(0), P_2(0), …, P_N(0)) converges to the unique solution of Equation (2).
Proof. 
We define the vectors ξ, ξ(m), and q as

\xi = \bigl[[\mathrm{vec}(P_1)]^T, [\mathrm{vec}(P_2)]^T, \ldots, [\mathrm{vec}(P_N)]^T\bigr]^T, \quad \xi(m) = \bigl[[\mathrm{vec}(P_1(m))]^T, [\mathrm{vec}(P_2(m))]^T, \ldots, [\mathrm{vec}(P_N(m))]^T\bigr]^T, \quad q = \bigl[[-(1-\gamma)\mathrm{vec}(Q_1)]^T, [-(1-\gamma)\mathrm{vec}(Q_2)]^T, \ldots, [-(1-\gamma)\mathrm{vec}(Q_N)]^T\bigr]^T.

Next, algorithm (8) can be rewritten in Kronecker product form as

(I_n \otimes A_i^T + A_i^T \otimes I_n)\,\mathrm{vec}(P_i(m+1)) + (1-\gamma)\sum_{j=1}^{i-1}\alpha_j\pi_{ij}\,\mathrm{vec}(P_j(m+1)) = -(1-\gamma)\Bigl(\sum_{s=1}^{r} A_{si}^T \otimes A_{si}^T + \beta_i I_{n^2}\Bigr)\mathrm{vec}(P_i(m)) + \gamma(I_n \otimes A_i^T + A_i^T \otimes I_n)\,\mathrm{vec}(P_i(m)) - (1-\gamma)\sum_{j=1}^{i-1}\pi_{ij}(1-\alpha_j)\,\mathrm{vec}(P_j(m)) - (1-\gamma)\sum_{j=i+1}^{N}\pi_{ij}\,\mathrm{vec}(P_j(m)) - (1-\gamma)\,\mathrm{vec}(Q_i).
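The vectorized form above follows from the standard identity \mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X), applied with the column-stacking vec defined earlier; in particular, \mathrm{vec}(A_i^T P + P A_i) = (I_n \otimes A_i^T + A_i^T \otimes I_n)\,\mathrm{vec}(P) and \mathrm{vec}(A_{si}^T P A_{si}) = (A_{si}^T \otimes A_{si}^T)\,\mathrm{vec}(P).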
The matrices M and W in the statement of the theorem are obtained by collecting the coefficients in this vectorized recursion. Combining the vectors ξ(m) and q with the matrices M and W defined earlier, we can further obtain

M\xi(m+1) = W\xi(m) + q.

Multiplying by the inverse of M from the left, we have

\xi(m+1) = M^{-1}W\xi(m) + M^{-1}q.

Iterating this relation between consecutive vectors ξ(m), we obtain

\xi(m+1) = (M^{-1}W)^{m+1}\xi(0) + \sum_{j=0}^{m}(M^{-1}W)^{j}M^{-1}q.

If ρ(M^{-1}W) < 1, then (M^{-1}W)^{m+1} approaches the zero matrix as m → ∞; hence,

\lim_{m \to \infty}\xi(m+1) = \lim_{m \to \infty}\sum_{j=0}^{m}(M^{-1}W)^{j}M^{-1}q = (M - W)^{-1}q.

Since the matrix Equation (2) can be rewritten as ξ = (M − W)^{-1}q, the sequence {ξ(m)} generated by the SOR algorithm (8) converges to the unique solution of Equation (2). □
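The condition ρ(M^{-1}W) < 1 can be checked numerically. The Python sketch below assembles M and W through Kronecker products and returns the spectral radius; it assumes the same data layout as the earlier sketch and the sign conventions adopted in the reconstruction of M and W above, so it should be read as an illustration rather than a definitive implementation.

import numpy as np

def spectral_radius_MinvW(A0, As, Pi, alpha, beta, gamma):
    # Returns rho(M^{-1} W) for the SOR implicit algorithm (8)
    N, n = len(A0), A0[0].shape[0]
    I, I2 = np.eye(n), np.eye(n * n)
    A = [A0[i] + 0.5 * Pi[i, i] * I - 0.5 * beta[i] * I for i in range(N)]
    L = [np.kron(I, A[i].T) + np.kron(A[i].T, I) for i in range(N)]   # I (x) A_i^T + A_i^T (x) I
    S = [sum(np.kron(Asi.T, Asi.T) for Asi in As[i]) + beta[i] * I2 for i in range(N)]
    M = np.zeros((N * n * n, N * n * n))
    W = np.zeros_like(M)
    for i in range(N):
        bi = slice(i * n * n, (i + 1) * n * n)
        M[bi, bi] = L[i]
        W[bi, bi] = gamma * L[i] - (1 - gamma) * S[i]
        for j in range(N):
            if j == i:
                continue
            bj = slice(j * n * n, (j + 1) * n * n)
            if j < i:
                # latest estimates appear on the left-hand side, weighted by alpha_j
                M[bi, bj] = (1 - gamma) * alpha[j] * Pi[i, j] * I2
                W[bi, bj] = -(1 - gamma) * (1 - alpha[j]) * Pi[i, j] * I2
            else:
                W[bi, bj] = -(1 - gamma) * Pi[i, j] * I2
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, W))))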

4. Illustrative Example

Consider Equation (2) with N = 2, r = 1, where Q_1 and Q_2 are both fourth-order identity matrices. The system matrices and the transition rate matrix Π are given as follows:
A_{01} = \begin{bmatrix} 1.0000 & 1.0000 & 2.0000 & 1.0000 \\ 0.6667 & 3.5000 & 1.0000 & 1.1670 \\ 1.0000 & 0.5000 & 3.0000 & 0.5000 \\ 2.0000 & 2.5000 & 1.0000 & 2.5000 \end{bmatrix}, \quad A_{02} = \begin{bmatrix} 1.3333 & 1.0000 & 2.0000 & 0.3333 \\ 0.0000 & 3.5000 & 1.0000 & 1.5000 \\ 1.0000 & 1.5000 & 4.0000 & 0.5000 \\ 1.3333 & 3.5000 & 1.0000 & 1.1667 \end{bmatrix},

A_{11} = \begin{bmatrix} 0.9003 & 0.7826 & 0.6428 & 0.8436 \\ 0.5377 & 0.5242 & 0.1106 & 0.4764 \\ 0.2137 & 0.0871 & 0.2309 & 0.6475 \\ 0.0280 & 0.9630 & 0.5839 & 0.1886 \end{bmatrix}, \quad A_{12} = \begin{bmatrix} 0.8709 & 0.8842 & 0.7222 & 0.4556 \\ 0.8338 & 0.2943 & 0.5945 & 0.6024 \\ 0.1796 & 0.6263 & 0.6026 & 0.9695 \\ 0.7873 & 0.9803 & 0.2076 & 0.4936 \end{bmatrix},

\Pi = \begin{bmatrix} -0.6 & 0.6 \\ 1 & -1 \end{bmatrix}.
This example is used in [25]. The iterative error at step k is defined as
\epsilon(k) = \sum_{i=1}^{N}\Bigl\| A_{0i}^T P_i(k) + P_i(k) A_{0i} + \sum_{s=1}^{r} A_{si}^T P_i(k) A_{si} + \sum_{j=1}^{N}\pi_{ij} P_j(k) + Q_i \Bigr\|_F^{2}.
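For completeness, the residual ε(k) defined above can be evaluated as follows (Python, with the same assumed data layout as the earlier sketches):

import numpy as np

def residual(P, A0, As, Pi, Q):
    # Squared Frobenius-norm residual of the coupled Lyapunov Equation (2)
    # at the iterate P = [P_1, ..., P_N]
    eps = 0.0
    for i in range(len(P)):
        R = (A0[i].T @ P[i] + P[i] @ A0[i]
             + sum(Asi.T @ P[i] @ Asi for Asi in As[i])
             + sum(Pi[i, j] * P[j] for j in range(len(P)))
             + Q[i])
        eps += np.linalg.norm(R, 'fro') ** 2
    return eps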
First, we investigated the convergence of algorithm (8) when α = α_1 = 1 and β = β_1 = β_2. The value of β was chosen to be the same as that in [25]. To keep the figure concise, in Figure 1 we selected only three sets of tuning parameter values from [25] and compared the convergence performance after selecting a proper parameter γ with the help of Matlab. We found that, by adjusting the value of the parameter γ, a smaller spectral radius ρ(M^{-1}W) and fewer iteration steps k could be obtained. The convergence curves are shown in Figure 1. It should be mentioned that when γ takes the value 0, algorithm (8) degenerates to algorithm (5) proposed in [25], so the two algorithms show the same convergence curves. The red convergence curves represent algorithm (8), while the black curves represent algorithm (5). It is thus evident that, with the values of α and β kept constant, selecting an appropriate parameter γ achieves better convergence results. In [25], the minimum spectral radius ρ(M^{-1}W) = 0.3128 was obtained with α = 1 and β = 0.4240; under this choice, we found with Matlab that fast convergence was obtained only at γ = 0. However, for α = 1 and β = 1, we obtained a smaller ρ(M^{-1}W) = 0.2638 by choosing γ = 0.147. It is clear that when the values of α and β are chosen to be the same as in algorithm (5) of [25], a faster convergence rate can be achieved by selecting a suitable value of the parameter γ with the help of Matlab (version R2017a).
Figure 1 shows the convergence of the algorithm after selecting appropriate relaxation parameters. The left panel displays the case α = 1, and the right panel displays the case β = 0. The value of α was chosen to be the same as in the algorithm of [25]. By adjusting the parameter γ appropriately, we again obtained a smaller spectral radius ρ(M^{-1}W) and fewer iterations. Obviously, when the values of α and β are the same as in algorithm (5) proposed in [25], a better convergence performance is obtained by choosing an appropriate parameter γ with the help of Matlab. Figure 1 indicates that all three parameters α, β, and γ simultaneously influence the convergence rate of the algorithm, and through extensive calculations we found that the fastest convergence was achieved when the spectral radius was around 0.3.
In algorithm (8), using Matlab, the spectral radius ρ(M^{-1}W) was calculated for different β and γ with the value of α fixed. Through calculation and verification, we found that for any α ∈ [0,1], it was always possible to obtain a spectral radius smaller than that of algorithm (5) under the same values of α and β by selecting an appropriate γ. For an iterative algorithm, a smaller spectral radius means a faster convergence speed and a better convergence performance. Since this study involves three tuning parameters, it was necessary to first fix the value of one tuning parameter before analyzing the convergence behavior of the algorithm. In Figure 2, due to space limitations, we selected only three representative values, α = 0, α = 0.5, and α = 1, since α_j ∈ [0,1].
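As an illustration of such a parameter scan, a coarse grid search over β and γ with α fixed could be organized as below; spectral_radius_MinvW is the helper sketched after Theorem 3, A0, As, and Pi stand for the example data, and the grid ranges are arbitrary illustrative choices.

import numpy as np

# Hypothetical scan: alpha fixed, beta and gamma varied on a coarse grid.
alpha_fixed = [1.0] * len(A0)
best = (np.inf, None, None)
for beta_val in np.linspace(0.0, 2.0, 41):
    for gamma_val in np.linspace(0.0, 0.9, 46):
        rho = spectral_radius_MinvW(A0, As, Pi, alpha_fixed,
                                    [beta_val] * len(A0), gamma_val)
        if rho < best[0]:
            best = (rho, beta_val, gamma_val)
print("smallest spectral radius %.4f at beta = %.3f, gamma = %.3f" % best)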
Figure 2 displays the spectral radius ρ(M^{-1}W) when β and γ take different values and α = 0, 0.5, and 1, respectively. The red line represents γ = 0, which corresponds to the spectral radius of algorithm (5). In Figure 2, the lowest valley of each surface, which gives the smallest spectral radius, lies significantly below the red curve. This means that when the values of α and β are the same as those of algorithm (5), algorithm (8) always yields a smaller spectral radius through an appropriate choice of the relaxation parameter γ.
We determined the optimal parameters β and γ in Figure 2 for α = 0, 0.5, and 1, respectively, and present the convergence curves for these three sets of parameters in Figure 3. It is worth mentioning that when the value of α was fixed at 1, by adjusting the parameter γ we obtained many spectral radii around 0.26, whereas algorithm (5) proposed in [25] attains a minimum spectral radius of approximately 0.31. Obviously, as the parameters change, algorithm (8) can achieve a better convergence performance.

5. Conclusions

In this article, a novel implicit iterative algorithm for solving continuous stochastic Lyapunov equations is presented via SOR iteration. The most recently updated information is used as the latest estimate of the unknown matrices. The method involves three parameters, and when proper parameters are chosen, the convergence performance can be greatly improved. Furthermore, the convergence properties are analyzed in detail. Compared with the method proposed in [25], an additional relaxation parameter is introduced to improve the convergence of the algorithm, and a faster convergence rate was always obtained in our experiments by selecting a proper parameter γ with the help of Matlab. Finally, a numerical example is employed to illustrate that algorithm (8) achieves a better convergence performance than algorithm (5) proposed in [25].

Author Contributions

Conceptualization, P.H. and T.W.; methodology, P.H.; software, T.W.; validation, P.H., T.W. and H.C.; formal analysis, H.C.; resources, P.H.; writing—original draft preparation, T.W.; writing—review and editing, P.H. and H.C.; visualization, T.W.; supervision, P.H.; project administration, P.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of Jiangsu Province grant number BK20191386.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, Y.; Karimi, H.R.; Zhang, Q.; Zhao, D.; Li, Y. Fault detection for linear discrete time-varying systems subject to random sensor delay: A Riccati equation approach. IEEE Trans. Circuits Syst. I Regul. Pap. 2017, 65, 1707–1716.
2. Li, Y.; Zhao, W. Haar wavelet operational matrix of fractional order integration and its applications in solving the fractional order differential equations. Appl. Math. Comput. 2010, 216, 2276–2285.
3. Zhang, L.; Zhang, G.; Liu, W. Optimal control of second-order and high-order descriptor systems. Optim. Control Appl. Methods 2019, 40, 791–806.
4. Zhao, X.; Deng, F. Solution of the HJI equations for nonlinear H∞ control design by state-dependent Riccati equations approach. J. Syst. Eng. Electron. 2011, 22, 654–660.
5. Hazell, A.; Limebeer, D.J. An efficient algorithm for discrete-time H∞ preview control. Automatica 2008, 44, 2441–2448.
6. Hashemi, B.; Dehghan, M. The interval Lyapunov matrix equation: Analytical results and an efficient numerical technique for outer estimation of the united solution set. Math. Comput. Model. 2012, 55, 622–633.
7. Sun, H.J.; Zhang, J.; Wu, Y.Y. A Newton iterative method for coupled Lyapunov matrix equations. J. Ind. Manag. Optim. 2023, 19, 8791–8806.
8. Rami, M.A.; Zhou, X.Y. Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls. IEEE Trans. Autom. Control 2000, 45, 1131–1143.
9. Dragan, V.; Morozan, T. Stability and robust stabilization to linear stochastic systems described by differential equations with Markovian jumping and multiplicative white noise. Stoch. Anal. Appl. 2002, 20, 33–92.
10. Zhang, W.; Zhang, H.; Chen, B.S. Generalized Lyapunov equation approach to state-dependent stochastic stabilization/detectability criterion. IEEE Trans. Autom. Control 2008, 53, 1630–1642.
11. Li, Z.Y.; Wang, Y.; Zhou, B.; Duan, G.R. On unified concepts of detectability and observability for continuous-time stochastic systems. Appl. Math. Comput. 2010, 217, 521–536.
12. Ji, Y.; Chizeck, H.J. Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control. IEEE Trans. Autom. Control 1990, 35, 777–788.
13. Huang, Y.; Zhang, W.; Feng, G. Infinite horizon H2/H∞ control for stochastic systems with Markovian jumps. Automatica 2008, 44, 857–863.
14. Ni, Y.; Zhang, W.; Fang, H. On the observability and detectability of linear stochastic systems with Markov jumps and multiplicative noise. J. Syst. Sci. Complex. 2010, 23, 102–115.
15. Li, Z.Y.; Zhou, B.; Lam, J.; Wang, Y. Positive operator based iterative algorithms for solving Lyapunov equations for Itô stochastic systems with Markovian jumps. Appl. Math. Comput. 2011, 217, 8179–8195.
16. Shen, L. On the detectability and observability of continuous stochastic Markov jump linear systems. J. Math. Anal. Appl. 2015, 424, 878–891.
17. Wang, Q.; Lam, J.; Wei, Y.; Chen, T. Iterative solutions of coupled discrete Markovian jump Lyapunov equations. Comput. Math. Appl. 2008, 55, 843–850.
18. Zhou, B.; Lam, J.; Duan, G.R. Convergence of gradient-based iterative solution of coupled Markovian jump Lyapunov equations. Comput. Math. Appl. 2008, 56, 3070–3078.
19. Jodar, L.; Mariton, M. Explicit solutions for a system of coupled Lyapunov differential matrix equations. Proc. Edinb. Math. Soc. 1987, 30, 427–434.
20. Ding, F.; Chen, T. Iterative least-squares solutions of coupled Sylvester matrix equations. Syst. Control Lett. 2005, 54, 95–107.
21. Zhang, H. Reduced-rank gradient-based algorithms for generalized coupled Sylvester matrix equations and its applications. Comput. Math. Appl. 2015, 70, 2049–2062.
22. Wu, A.G.; Duan, G.R.; Liu, W. Implicit iterative algorithms for continuous Markovian jump Lyapunov equations. IEEE Trans. Autom. Control 2016, 61, 3183–3189.
23. Young, D.M. Iterative Solution of Large Linear Systems; Elsevier: Amsterdam, The Netherlands, 2014.
24. Wu, A.G.; Zhang, W.X.; Zhang, Y. An iterative algorithm for discrete periodic Lyapunov matrix equations. Automatica 2018, 87, 395–403.
25. Wu, A.G.; Wang, X.; Sreeram, V. Iterative algorithms for solving continuous stochastic Lyapunov equations. IET Control Theory Appl. 2017, 11, 73–80.
26. Wu, A.G.; Sun, H.J.; Zhang, Y. An SOR implicit iterative algorithm for coupled Lyapunov equations. Automatica 2018, 97, 38–47.
27. Feitzinger, F.; Hylla, T.; Sachs, E.W. Inexact Kleinman–Newton method for Riccati equations. SIAM J. Matrix Anal. Appl. 2009, 31, 272–288.
Figure 1. Convergence performance comparisons between algorithms (5) and (8) after selecting appropriate relaxation parameters when α = 1 (left) and β = 0 (right).
Figure 2. Spectral radii of M^{-1}W for different parameters β and γ in algorithm (8) when α = 0 (left), α = 0.5 (middle), and α = 1 (right).
Figure 3. Convergence performance of different parameters in the preceding cases.