Article

Output Feedback Optimal Control for Discrete-Time Singular Systems Driven by Stochastic Disturbances and Markov Chains

School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266000, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(4), 634; https://doi.org/10.3390/math13040634
Submission received: 20 January 2025 / Revised: 9 February 2025 / Accepted: 12 February 2025 / Published: 14 February 2025
(This article belongs to the Special Issue Stochastic System Analysis and Control)

Abstract

This paper investigates the indefinite linear quadratic optimal control (LQOC) problem for discrete-time stochastic singular systems driven by discrete-time Markov chains. First, the indefinite LQOC problem for stochastic singular systems is converted, through a sequence of transformations, into an equivalent problem for normal stochastic systems. Next, necessary and sufficient conditions are given for the solvability of the transformed LQOC problem with indefinite weighting matrices, together with optimal control strategies that ensure the regularity and causality of the system, thereby establishing the solvability of the optimal controller. Conditions are also derived to verify the definiteness of the transformed LQOC problem and the uniqueness of the solution of the generalized Markov jumping algebraic Riccati equation (GMJARE). The study obtains optimal controls and nonnegative cost values, guaranteeing system admissibility. The finite-horizon results are then extended to the infinite horizon. Furthermore, an output feedback controller is designed using the LMI method. Finally, a numerical example demonstrates the validity of the main findings.

1. Introduction

Singular systems (also referred to as implicit, descriptor, or combined differential and algebraic systems) are often more appropriate for modeling real-world systems in fields such as mechanics, biology, economics, and chemistry [1,2,3,4]. Recent studies on singular systems have focused on stability analysis, fuzzy control, quantized regulation, adaptive control, and related topics [5,6,7]. Stochastic singular systems are widely used in power systems, financial markets, biomedicine, and robot control. For example, in power systems they can model the dynamic behavior of the grid while accounting for random fluctuations such as load variations and renewable energy intermittency, thus improving the stability and reliability of the grid. In financial markets, stochastic singular systems can model asset price fluctuations and support risk management. In biomedicine, they can describe the dynamic behavior of biological systems and assist in disease diagnosis and treatment. In robot control, they can model a robot's dynamics and improve its adaptability in complex environments. These applications show the importance of stochastic singular systems and provide a broad prospect for subsequent research. Furthermore, researchers have integrated Markov chains into discrete-time singular systems, adding complexity to the models while yielding significant findings [8,9,10]. With further research, stochastic theory has made considerable advances in the modeling of mechanical engineering, circuit dynamics, and economic systems, and has been extended to singular systems. Notably, stochastic singular systems offer superior expressiveness for the structural features inherent in dynamic systems when compared with conventional singular systems [11,12,13].
Incorporating stochastic disturbances within a system provides a means to emulate the uncertainties and fluctuations characteristic of real-world contexts. These uncertainties may emanate from a myriad of factors, including environmental volatility, external interferences, or inherent stochasticity within the system dynamics. By integrating stochastic disturbances, a more accurate evaluation of the system’s resilience, equilibrium, and capacity to accommodate diverse environmental stimuli can be achieved. Hence, the examination of stochastic singular systems driven by discrete-time Markov chains bears substantial pragmatic implications.
Reference [14] pioneered the LQOC problem, significantly influencing modern control theory. On the one hand, the LQOC problem provides a framework for optimizing controller design by minimizing a quadratic cost function, thereby balancing system stability, performance, and robustness. On the other hand, the LQOC problem studied in this paper adjusts the weights of system states and inputs in the cost function according to practical needs, allowing flexible optimization of performance metrics such as response speed, stability, and energy consumption. Hence, the LQOC problem is applied in various dynamic systems, including mechanical, electrical, and control systems, with notable applications in engineering, robotics, aerospace, and other fields [15,16]. The Riccati equation stands as a pivotal tool for addressing the LQOC problem in the discrete-time case: by solving the Riccati equation, the linear feedback optimal controller is obtained and optimal control of the studied system is realized [17,18]. Recently, the LQOC problem has attracted the attention of many scholars. Reference [19] centered on the LQOC problem for discrete-time Markovian jumping systems (DTMJSs), with particular emphasis on constraining control actions within a suitable subspace. Reference [20] concerned singular DTMJSs over infinite and finite horizons to address the LQOC problem with indefinite parameters. References [21,22] analyzed the LQOC problem in fractional-order discrete systems. Recent works in this area have shown that even when the weight matrices Q and R carry certain uncertainties in the stochastic system, the associated stochastic LQOC problem may still be well-posed.
In discrete-time LQOC problems, positive definiteness of the control weighting matrix is not a mandatory condition; the matrix may even be negative in the presence of uncertainty within the system [23,24]. The works above fail to take into account the influence of stochastic disturbances on the system in practical applications. Thus, this paper investigates the LQOC problem for discrete-time stochastic singular systems affected by stochastic disturbances and discrete-time Markov chains.
Based on the above discussion, we investigate discrete-time stochastic singular systems with stochastic disturbances and discrete-time Markov chains. Compared with existing studies in this field, our findings highlight the following motivations and advantages.
First, both the state and control weighting matrices in the quadratic cost function are allowed to be indefinite; that is, the weighting matrices may have negative eigenvalues. Secondly, an equivalent transformation from the indefinite LQOC problem of stochastic singular systems to that of normal stochastic systems is obtained, with the condition for the equivalent transformation expressed in terms of the original coefficient matrices of the singular stochastic system, which is straightforward to verify and imposes relatively few constraints.
This paper introduces stochastic disturbances to model the actual impact of a stochastic environment on the operation of real systems, and also considers output feedback control to enhance the robustness of the closed-loop system. Additionally, to ensure the stochastic admissibility of the singular stochastic system under study, this paper constructs a Lyapunov function with relatively low conservatism and utilizes MATLAB's LMI toolbox to design the controller gains.
The structure of the paper is outlined as follows. In Section 2, the indefinite LQOC problem of stochastic singular systems driven by discrete-time Markov chains is equivalently converted into an indefinite LQOC problem for regular systems through a sequence of transformations. Section 3 establishes necessary and sufficient conditions for the solvability of the indefinite LQOC problem. Specifically, it proves that the corresponding optimal control strategy and optimal cost can be obtained, ensuring the regularity and causality of the resulting closed-loop system. Additionally, criteria are provided to guarantee the definiteness of the transformed LQOC problem for stochastic singular systems, and it is shown that the semi-positive definite solution to the GMJARE is unique. Furthermore, the acquisition of the optimal control gain matrix and optimal cost values is detailed, demonstrating that the resulting closed-loop system is stochastically admissible with the output feedback control. At the end of Section 3, the specific form of the optimal control gain matrix is derived using the LMI method. Finally, in Section 4, a numerical example with four Markovian switching modes is presented to validate the main results.

2. Foundation and Problem Formulation

Consider a class of discrete-time singular systems subjected to stochastic disturbances and driven by discrete-time Markov chains:
$$Ex_{k+1}=A_{\theta(k)}x_k+B_{\theta(k)}u_k+\left(C_{\theta(k)}x_k+D_{\theta(k)}u_k\right)\omega_k,\qquad y_k=G_{\theta(k)}x_k+H_{\theta(k)}\omega_k,$$
where $x_k\in\mathbb{R}^n$, $u_k\in\mathbb{R}^m$ and $y_k\in\mathbb{R}^{\tilde n}$. The matrix $E\in\mathbb{R}^{n\times n}$ is singular with $\operatorname{rank}(E)=r<n$. $\theta(k)$ is a discrete-time Markov chain taking values in the positive-integer set $L=\{1,\dots,l\}$ with transition probabilities $p_{ij}=\Pr\{\theta(k+1)=j\,|\,\theta(k)=i\}$, where $0\le p_{ij}\le 1$ and $\sum_{j=1}^{l}p_{ij}=1$ ($i,j\in L$). For $\theta(k)=i$, the system parameter matrices are abbreviated as $A_i\in\mathbb{R}^{n\times n}$, $B_i\in\mathbb{R}^{n\times m}$, $C_i\in\mathbb{R}^{n\times n}$, $D_i\in\mathbb{R}^{n\times m}$, $G_i\in\mathbb{R}^{\tilde n\times n}$, $H_i\in\mathbb{R}^{\tilde n}$. Assume that $\theta(k)$ is independent of the initial state $x_0$. $\{\omega_k:k=1,2,\dots\}$ is a one-dimensional white noise on a given complete probability space $(\Omega,\mathcal F,\Pr)$, and $\omega_k$ and $\theta(k)$ are mutually independent.
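For intuition, the mode process $\theta(k)$ described above can be simulated directly from its transition matrix. The following sketch uses a hypothetical two-mode transition matrix (not taken from the paper) to sample a mode path:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix for l = 2 modes:
# P_trans[i, j] = probability that theta(k+1) = j given theta(k) = i; rows sum to 1.
P_trans = np.array([[0.9, 0.1],
                    [0.3, 0.7]])

def sample_modes(P, theta0, N, rng):
    """Sample a mode path theta(0), ..., theta(N) of the discrete-time Markov chain."""
    path = [theta0]
    for _ in range(N):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

path = sample_modes(P_trans, 0, 50, rng)
```

Each step draws the next mode from the row of the transition matrix indexed by the current mode, exactly as the probabilities $p_{ij}$ prescribe.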
Problem 1.
For the LQOC problem of system (1), it defines the quadratic cost function as
$$J(x_0,\theta(0),u_k,N+1)=\frac{1}{2}\,\mathbb{E}\left\{\sum_{k=0}^{N}\begin{bmatrix}x_k\\u_k\end{bmatrix}^{T}\begin{bmatrix}Q_{\theta(k)}&S_{\theta(k)}\\S_{\theta(k)}^{T}&R_{\theta(k)}\end{bmatrix}\begin{bmatrix}x_k\\u_k\end{bmatrix}+x_{N+1}^{T}E^{T}P_{\theta(N+1)}E\,x_{N+1}\right\},$$
where $\mathbb{E}$ stands for the mathematical expectation. It is worth mentioning that the weight matrix $\begin{bmatrix}Q_i&S_i\\S_i^T&R_i\end{bmatrix}$ for the system states and control inputs is only symmetric, with $Q_i\in\mathbb{R}^{n\times n}$, $S_i\in\mathbb{R}^{n\times m}$, $R_i\in\mathbb{R}^{m\times m}$ for $\theta(k)=i\in L$, while $P_{\theta(N+1)}\in\mathbb{R}^{n\times n}$ is symmetric.
Note that the weight matrix $\begin{bmatrix}Q_i&S_i\\S_i^T&R_i\end{bmatrix}$ in LQOC Problem 1 is merely symmetric and thus may appear inappropriate at this stage. To solve LQOC Problem 1, it is necessary to obtain the controller $u_k^*$ in an $\mathcal F_k$-measurable space that minimizes (2) over the set of admissible controls of (1). The optimal value function is defined as
$$J^*=\min_{u_k,\;k\in[0,N]}J(x_0,\theta(0),u_k,N+1).$$
Now, we will introduce the following definitions and lemma needed for our analysis in this paper.
Definition 1
([2,25]). Consider a class of systems of the form $Ex_{k+1}=A_ix_k$.
(i) 
If $\det(zE-A_i)\not\equiv 0$ for any $i\in L$, regularity is satisfied.
(ii) 
If the system is regular and $\deg(\det(zE-A_i))=\operatorname{rank}(E)$ for any $i\in L$, causality is satisfied.
(iii) 
If the considered system is regular, causal, and stochastically stable, stochastic admissibility is satisfied.
Definition 2
([26]). For the compatible initial ( x 0 , θ ( 0 ) ) , if
$$\inf_{u_k,\;k\in[0,N]}J(x_0,\theta(0),u_k,N+1)>-\infty,$$
the LQOC Problem 1 is said to be well-posed.
Definition 3
([26]). The LQOC Problem 1 is called attainable if there exists an admissible control $u_k^*$ that minimizes (2) for any initial state $(x_0,\theta(0))$.
Assumption 1.
For system (1), suppose that
$$\operatorname{rank}\begin{bmatrix}0&E&0\\E&A_i&B_i\end{bmatrix}=n+\operatorname{rank}(E)=n+r,\quad i\in L.$$
Moreover, it holds that $\operatorname{rank}(E,\,C_i)=\operatorname{rank}(E)=r$, $i\in L$.
Remark 1.
The rank condition in Assumption 1 ensures that the singular system preserves regularity and causality in each mode, which is a fundamental requirement in control system design. Second, this rank condition also implies the controllability and observability of the system in each mode; for singular systems, these properties are defined through matrix ranks. Assumption 1 ensures that the system has enough degrees of freedom in each mode to achieve the control objective.
In the following, the form of LQOC Problem 1 after a series of transformations will be given. First, assume that the singular matrix E satisfies the following singular value decomposition (SVD):
$$UEV=\begin{bmatrix}\Sigma_r&0\\0&0\end{bmatrix},$$
where $\Sigma_r=\operatorname{diag}\{\delta_1,\delta_2,\dots,\delta_r\}$ with $\delta_l>0$ ($l=1,2,\dots,r$), and $U\in\mathbb{R}^{n\times n}$, $V\in\mathbb{R}^{n\times n}$ are the corresponding orthogonal matrices. Consequently, denote
$$V^{T}x_k=\begin{bmatrix}x_k^1\\x_k^2\end{bmatrix},\quad UA_iV=\begin{bmatrix}A_i^1&A_i^2\\A_i^3&A_i^4\end{bmatrix},\quad UB_i=\begin{bmatrix}B_i^1\\B_i^2\end{bmatrix},\quad UC_iV=\begin{bmatrix}C_i^1&C_i^2\\0&0\end{bmatrix},\quad UD_i=\begin{bmatrix}D_i^1\\0\end{bmatrix},\quad G_iV=\begin{bmatrix}G_i^1&G_i^2\end{bmatrix},$$
where $x_k^1\in\mathbb{R}^r$, $x_k^2\in\mathbb{R}^{n-r}$, $A_i^1\in\mathbb{R}^{r\times r}$, $A_i^4\in\mathbb{R}^{(n-r)\times(n-r)}$, $B_i^1\in\mathbb{R}^{r\times m}$, $B_i^2\in\mathbb{R}^{(n-r)\times m}$, $C_i^1\in\mathbb{R}^{r\times r}$, $C_i^2\in\mathbb{R}^{r\times(n-r)}$, $D_i^1\in\mathbb{R}^{r\times m}$, $G_i^1\in\mathbb{R}^{\tilde n\times r}$ and $G_i^2\in\mathbb{R}^{\tilde n\times(n-r)}$. Thus system (1) can be viewed as restrictively equivalent to the system
$$\begin{aligned}\begin{bmatrix}\Sigma_r&0\\0&0\end{bmatrix}\begin{bmatrix}x_{k+1}^1\\x_{k+1}^2\end{bmatrix}&=\begin{bmatrix}A_i^1&A_i^2\\A_i^3&A_i^4\end{bmatrix}\begin{bmatrix}x_k^1\\x_k^2\end{bmatrix}+\begin{bmatrix}B_i^1\\B_i^2\end{bmatrix}u_k+\left(\begin{bmatrix}C_i^1&C_i^2\\0&0\end{bmatrix}\begin{bmatrix}x_k^1\\x_k^2\end{bmatrix}+\begin{bmatrix}D_i^1\\0\end{bmatrix}u_k\right)\omega_k,\\ y_k&=G_i^1x_k^1+G_i^2x_k^2+H_i\omega_k.\end{aligned}$$
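The SVD-based restricted equivalence above is easy to reproduce numerically. The sketch below (with a hypothetical $2\times 2$ singular $E$) builds orthogonal $U$ and $V$ with NumPy and verifies the block-diagonal form $UEV=\operatorname{diag}(\Sigma_r,0)$:

```python
import numpy as np

# Toy singular matrix E with rank(E) = r = 1 < n = 2 (hypothetical data).
E = np.array([[2.0, 0.0],
              [0.0, 0.0]])

# numpy returns E = Uh @ diag(s) @ Vt; the paper's decomposition U E V = diag(Sigma_r, 0)
# corresponds to U := Uh.T and V := Vt.T below (both orthogonal).
Uh, s, Vt = np.linalg.svd(E)
r = int(np.sum(s > 1e-10))          # numerical rank of E
U, V = Uh.T, Vt.T

D = U @ E @ V                       # block-diagonal form [[Sigma_r, 0], [0, 0]]
assert np.allclose(D[r:, :], 0) and np.allclose(D[:, r:], 0)
assert np.allclose(D[:r, :r], np.diag(s[:r]))
```

The positive singular values $s[:r]$ play the role of $\delta_1,\dots,\delta_r$, and the zero blocks confirm the restricted-equivalence structure used in (7).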
Meanwhile, the corresponding equivalent transformation of (2) can also be given as
$$J(x_0,\theta(0),u_k,N+1)=\frac12\,\mathbb{E}\left\{\sum_{k=0}^{N}\begin{bmatrix}x_k^1\\x_k^2\\u_k\end{bmatrix}^T\check Q_i\begin{bmatrix}x_k^1\\x_k^2\\u_k\end{bmatrix}+(x_{N+1}^1)^T\Sigma_rP^1_{\theta(N+1)}\Sigma_rx_{N+1}^1\right\},$$
where
$$\check Q_i=\begin{bmatrix}V&0\\0&I\end{bmatrix}^{T}\begin{bmatrix}Q_i&S_i\\S_i^{T}&R_i\end{bmatrix}\begin{bmatrix}V&0\\0&I\end{bmatrix}=\begin{bmatrix}Q_i^1&Q_i^2&S_i^1\\(Q_i^2)^T&Q_i^4&S_i^2\\(S_i^1)^T&(S_i^2)^T&R_i\end{bmatrix},\quad V^TQ_iV=\begin{bmatrix}Q_i^1&Q_i^2\\(Q_i^2)^T&Q_i^4\end{bmatrix},\quad UP_{\theta(N+1)}U^T=\begin{bmatrix}P_{\theta(N+1)}^1&P_{\theta(N+1)}^2\\(P_{\theta(N+1)}^2)^T&P_{\theta(N+1)}^4\end{bmatrix},\quad V^TS_i=\begin{bmatrix}S_i^1\\S_i^2\end{bmatrix},$$
and $Q_i^1\in\mathbb{R}^{r\times r}$, $Q_i^4\in\mathbb{R}^{(n-r)\times(n-r)}$, $S_i^1\in\mathbb{R}^{r\times m}$, $P^1_{\theta(N+1)}\in\mathbb{R}^{r\times r}$. According to [20], Assumption 1 is equivalent to the matrix $[A_i^4\;\;B_i^2]$ having full row rank. Suppose that there is a nonsingular matrix $M_i$ satisfying
$$\begin{bmatrix}A_i^4&B_i^2\end{bmatrix}M_i=\begin{bmatrix}I_{n-r}&0\end{bmatrix}$$
with appropriate dimensions for any $i\in L$. It is important to note at this point that $M_i$ and $M_j$ depend on the Markovian switching modes $i$ and $j$; thus, in the follow-up study we will introduce a nonsingular matrix $M$ that is uncorrelated with the Markovian switching mode to replace $M_i$ ($i\in L$).
Remark 2.
The purpose of Equation (10) is to transform the singular system into an ordinary system by a matrix transformation. This transformation is mathematically sound because it provides a way to simplify the solution of the problem while preserving the dynamics of the system, which is crucial for designing effective controllers. By choosing an appropriate matrix $M_i$, a complex singular system can be transformed into a more tractable ordinary system. Moreover, the matrix $M_i$ in (10) is nonsingular, which means that the transformation is invertible. This ensures that the transformed system is mathematically equivalent to the original system and no information is lost. It not only improves computational efficiency but also reduces the difficulty of implementing complex control algorithms.
Proposition 1.
Under Assumption 1, there exists a nonsingular matrix $M$ such that
$$\begin{bmatrix}A_i^4&B_i^2\end{bmatrix}M=\begin{bmatrix}\bar A_i^4&0\end{bmatrix}$$
if and only if
$$\operatorname{rank}\begin{bmatrix}E&A_i&0&B_i\\0&A_j&E&B_j\\0&E&0&0\end{bmatrix}=n+2r$$
for $i,j\in L$, where $\bar A_i^4\in\mathbb{R}^{(n-r)\times(n-r)}$ is nonsingular.
Proof. 
From (4) and (5), it follows that
$$\operatorname{rank}\begin{bmatrix}0&E&0&0\\E&A_i&0&B_i\\0&A_j&E&B_j\end{bmatrix}=\operatorname{rank}\begin{bmatrix}0&0&\Sigma_r&0&0&0&0\\0&0&0&0&0&0&0\\\Sigma_r&0&A_i^1&A_i^2&0&0&B_i^1\\0&0&A_i^3&A_i^4&0&0&B_i^2\\0&0&A_j^1&A_j^2&\Sigma_r&0&B_j^1\\0&0&A_j^3&A_j^4&0&0&B_j^2\end{bmatrix}=3r+\operatorname{rank}\begin{bmatrix}A_i^4&B_i^2\\A_j^4&B_j^2\end{bmatrix}=n+2r\;\Longleftrightarrow\;\operatorname{rank}\begin{bmatrix}A_i^4&B_i^2\\A_j^4&B_j^2\end{bmatrix}=n-r,\quad i,j\in L.$$
Taking into account $\operatorname{rank}\begin{bmatrix}A_i^4&B_i^2\\A_j^4&B_j^2\end{bmatrix}=n-r$, it can further be obtained that the rows of $[A_i^4\;\;B_i^2]$ and $[A_j^4\;\;B_j^2]$ are linearly dependent. Therefore, there is a nonsingular matrix $J_{ij}\in\mathbb{R}^{(n-r)\times(n-r)}$ such that $[A_j^4\;\;B_j^2]=J_{ij}[A_i^4\;\;B_i^2]$. From (9), one has $[A_j^4\;\;B_j^2]M_i=J_{ij}[A_i^4\;\;B_i^2]M_i=[J_{ij}\;\;0]$. Without loss of generality, we can choose $i=1$, $M=M_1$ and $\bar A_j^4=J_{j1}$. The proof is completed. □
To proceed, denote
$$\bar M=\begin{bmatrix}I_r&0&0\\0&M_{11}&M_{12}\\0&M_{21}&M_{22}\end{bmatrix},\qquad M=\begin{bmatrix}M_{11}&M_{12}\\M_{21}&M_{22}\end{bmatrix}$$
with
$$\begin{bmatrix}A_i^2&B_i^1\end{bmatrix}M=\begin{bmatrix}\bar A_i^2&\bar B_i^1\end{bmatrix},\qquad \begin{bmatrix}C_i^2&D_i^1\end{bmatrix}M=\begin{bmatrix}\bar C_i^2&\bar D_i^1\end{bmatrix},$$
where $M_{11}\in\mathbb{R}^{(n-r)\times(n-r)}$, $M_{12}\in\mathbb{R}^{(n-r)\times m}$, $M_{21}\in\mathbb{R}^{m\times(n-r)}$, $M_{22}\in\mathbb{R}^{m\times m}$, $\bar A_i^2\in\mathbb{R}^{r\times(n-r)}$, $\bar B_i^1\in\mathbb{R}^{r\times m}$, $\bar C_i^2\in\mathbb{R}^{r\times(n-r)}$, $\bar D_i^1\in\mathbb{R}^{r\times m}$. Then, using the following nonsingular linear transformation
$$\begin{bmatrix}x_k^1\\x_k^2\\u_k\end{bmatrix}=\bar M\begin{bmatrix}x_k^1\\\bar x_k^2\\\bar u_k\end{bmatrix}=\begin{bmatrix}I_r&0&0\\0&M_{11}&M_{12}\\0&M_{21}&M_{22}\end{bmatrix}\begin{bmatrix}x_k^1\\\bar x_k^2\\\bar u_k\end{bmatrix},$$
system (7) can be transformed into
$$\begin{aligned}\Sigma_rx_{k+1}^1&=\big(A_i^1-\bar A_i^2(\bar A_i^4)^{-1}A_i^3\big)x_k^1+\bar B_i^1\bar u_k+\Big(\big(C_i^1-\bar C_i^2(\bar A_i^4)^{-1}A_i^3\big)x_k^1+\bar D_i^1\bar u_k\Big)\omega_k,\\ 0&=A_i^3x_k^1+\bar A_i^4\bar x_k^2,\\ y_k&=\big(G_i^1-G_i^2(\bar A_i^4)^{-1}A_i^3\big)x_k^1+H_i\omega_k.\end{aligned}$$
Let $\bar A_i^1=A_i^1-\bar A_i^2(\bar A_i^4)^{-1}A_i^3$, $\bar C_i^1=C_i^1-\bar C_i^2(\bar A_i^4)^{-1}A_i^3$, and $\bar G_i^1=G_i^1-G_i^2(\bar A_i^4)^{-1}A_i^3$; then (15) can be converted to
$$\begin{aligned}\Sigma_rx_{k+1}^1&=\bar A_i^1x_k^1+\bar B_i^1\bar u_k+(\bar C_i^1x_k^1+\bar D_i^1\bar u_k)\omega_k,\\ 0&=A_i^3x_k^1+\bar A_i^4\bar x_k^2,\\ y_k&=\bar G_i^1x_k^1+H_i\omega_k.\end{aligned}$$
From the second formula of (16) and Proposition 1, we can obtain $\bar x_k^2=-(\bar A_i^4)^{-1}A_i^3x_k^1$. Substituting this formula into (14), one has
$$\begin{bmatrix}x_k^1\\x_k^2\\u_k\end{bmatrix}=\begin{bmatrix}I_r&0\\-M_{11}(\bar A_i^4)^{-1}A_i^3&M_{12}\\-M_{21}(\bar A_i^4)^{-1}A_i^3&M_{22}\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}.$$
Under (17), the function (8) can be expressed as
$$J(x_0^1,\theta(0),\bar u_k,N+1)=\frac12\,\mathbb{E}\left\{\sum_{k=0}^{N}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}+(x_{N+1}^1)^T\Sigma_rP^1_{\theta(N+1)}\Sigma_rx_{N+1}^1\right\},$$
where
$$\tilde Q_i=\begin{bmatrix}I_r\\-M_{11}(\bar A_i^4)^{-1}A_i^3\\-M_{21}(\bar A_i^4)^{-1}A_i^3\end{bmatrix}^T\begin{bmatrix}Q_i^1&Q_i^2&S_i^1\\(Q_i^2)^T&Q_i^4&S_i^2\\(S_i^1)^T&(S_i^2)^T&R_i\end{bmatrix}\begin{bmatrix}I_r\\-M_{11}(\bar A_i^4)^{-1}A_i^3\\-M_{21}(\bar A_i^4)^{-1}A_i^3\end{bmatrix},\qquad \tilde S_i=\begin{bmatrix}I_r\\-M_{11}(\bar A_i^4)^{-1}A_i^3\\-M_{21}(\bar A_i^4)^{-1}A_i^3\end{bmatrix}^T\begin{bmatrix}Q_i^2&S_i^1\\Q_i^4&S_i^2\\(S_i^2)^T&R_i\end{bmatrix}\begin{bmatrix}M_{12}\\M_{22}\end{bmatrix},\qquad \tilde R_i=\begin{bmatrix}M_{12}\\M_{22}\end{bmatrix}^T\begin{bmatrix}Q_i^4&S_i^2\\(S_i^2)^T&R_i\end{bmatrix}\begin{bmatrix}M_{12}\\M_{22}\end{bmatrix}.$$
Problem 2.
For the LQOC problem of system (16), the corresponding quadratic cost function is defined by (18).
According to the above analysis and transformation, LQOC Problem 1 and LQOC Problem 2 are equivalent, as shown in (17). Therefore, in Section 3, LQOC Problem 1 will be solved via LQOC Problem 2. Subsequently, some lemmas needed to prove the main results of Section 3 are presented.
Lemma 1
([27]). Let $F=F^T$, $H$ and $G=G^T$ be matrices with suitable dimensions, and consider the quadratic form $q(x,u)=\mathbb{E}\{x^TFx+2x^THu+u^TGu\}$, where $x$ and $u$ are random variables with appropriate dimensions defined on a probability space $(\Omega,\mathcal F,\Pr)$. The following conditions are equivalent.
(i) 
There holds $\inf_u q(x,u)>-\infty$ for any random variable $x$.
(ii) 
There holds $\inf_u q(x,u)=\mathbb{E}\{x^TSx\}$ for any $x$ and some matrix $S$, where $S$ is symmetric.
(iii) 
$H(I-GG^+)=0$ and $G\succeq 0$.
(iv) 
$\operatorname{Ker}(G)\subseteq\operatorname{Ker}(H)$ and $G\succeq 0$.
(v) 
There exists a symmetric matrix $T$ such that $\begin{bmatrix}F-T&H\\H^T&G\end{bmatrix}\succeq 0$.
In addition, if any of the above conditions holds, there exists $S=F-HG^+H^T$ such that (ii) is satisfied. Furthermore, $S\succeq T$ holds for any $T$ in (v). Eventually, for any random variable $x$, the optimal control $u^*=-G^+H^Tx$ and the optimal value $q(x,u^*)=\mathbb{E}\{x^T(F-HG^+H^T)x\}$ can be obtained.
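Lemma 1's minimizer can be checked numerically. The sketch below (with hypothetical $F$, $H$, $G$ data) compares $q(x,u^*)$ with the claimed optimal value $x^T(F-HG^+H^T)x$ using the Moore–Penrose pseudoinverse:

```python
import numpy as np

# Hypothetical data: G may be only positive semidefinite, so the Moore-Penrose
# pseudoinverse G^+ replaces the inverse in the minimizer u* = -G^+ H^T x.
F = np.array([[3.0, 0.0], [0.0, 2.0]])
H = np.array([[1.0], [0.0]])
G = np.array([[2.0]])
x = np.array([1.0, -1.0])

Gp = np.linalg.pinv(G)
u_star = -Gp @ H.T @ x                       # optimal u from Lemma 1
q = lambda u: x @ F @ x + 2 * x @ H @ u + u @ G @ u

# The optimal value equals x^T (F - H G^+ H^T) x.
val = x @ (F - H @ Gp @ H.T) @ x
assert np.isclose(q(u_star), val)
assert q(u_star) <= q(u_star + 0.1)          # a perturbed control cannot do better
```

For positive definite $G$ the pseudoinverse coincides with the ordinary inverse, so this reduces to the familiar completion-of-squares argument.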

3. Main Result

From the above, we know that LQOC Problem 1 and LQOC Problem 2 can be studied equivalently. Therefore, in this section, we directly investigate the solvability of LQOC Problem 2, the system's indefinite LQOC problem. By Lemma 1, LQOC Problem 2 is clearly well-posed when all the matrices $P_i^1$ ($i\in L$) are symmetric and satisfy the following LMI condition:
$$\begin{bmatrix}\hat Q_i(k)&S_i^T(k)\\S_i(k)&R_i(k)\end{bmatrix}\succeq 0,$$
where $E_i(k+1)=\sum_{j=1}^{l}p_{ij}P_j^1$, $\hat Q_i(k)=(\bar A_i^1)^TE_i(k+1)\bar A_i^1+(\bar C_i^1)^TE_i(k+1)\bar C_i^1+\tilde Q_i-\Sigma_rP_i^1\Sigma_r$, $R_i(k)=\tilde R_i+(\bar B_i^1)^TE_i(k+1)\bar B_i^1+(\bar D_i^1)^TE_i(k+1)\bar D_i^1$ and $S_i^T(k)=(\bar A_i^1)^TE_i(k+1)\bar B_i^1+(\bar C_i^1)^TE_i(k+1)\bar D_i^1+\tilde S_i$.
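For given data, the LMI condition above can be tested by assembling the block matrix and inspecting its eigenvalues. A minimal sketch with hypothetical scalar blocks standing in for $\hat Q_i(k)$, $S_i(k)$, $R_i(k)$:

```python
import numpy as np

# Hypothetical scalar blocks (not from the paper) for one mode and one time step.
Qhat = np.array([[2.0]])
S    = np.array([[0.5]])
R    = np.array([[1.0]])

# Assemble [[Qhat, S^T], [S, R]] and check positive semidefiniteness.
M = np.block([[Qhat, S.T],
              [S,    R ]])
eigs = np.linalg.eigvalsh(M)          # symmetric matrix, so eigvalsh applies
assert np.all(eigs >= -1e-12)         # PSD => the LQOC problem is well-posed
```

In practice one would run this check for every mode $i\in L$ and every $k$; an eigenvalue below zero flags a violation of the condition.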
Next, we obtain the optimal control and cost value for LQOC Problem 2 through the following results.
Theorem 1.
When there exists $\bar u_k$ for $k\in[0,N]$, LQOC Problem 2 is solvable if and only if the following GMJARE can be solved:
$$\Sigma_r\check P_i^1(k)\Sigma_r=(\bar A_i^1)^TE_i(k+1)\bar A_i^1+(\bar C_i^1)^TE_i(k+1)\bar C_i^1+\tilde Q_i-S_i^T(k)R_i^+(k)S_i(k),\qquad S_i^T(k)=S_i^T(k)R_i^+(k)R_i(k),\qquad R_i(k)\succeq 0.$$
Furthermore, the optimal control is defined by
$$\bar u_k^*=-R_i^+(k)S_i(k)x_k^1.$$
Finally, the corresponding optimal cost value is
$$J(x_0^1,\theta(0),\bar u_k^*,N+1)=\frac12\,\mathbb{E}\big\{(x_0^1)^T\Sigma_r\check P^1_{\theta(0)}(0)\Sigma_rx_0^1\big\}.$$
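Theorem 1's GMJARE is solved by a backward recursion from the terminal weights. The following sketch implements that recursion for a hypothetical two-mode scalar example (taking $\Sigma_r=I$ for simplicity; all data are invented for illustration):

```python
import numpy as np

l, N = 2, 20
p = np.array([[0.9, 0.1], [0.3, 0.7]])            # transition probabilities
A = [np.array([[1.1]]), np.array([[0.8]])]
B = [np.array([[1.0]]), np.array([[0.5]])]
C = [np.array([[0.2]]), np.array([[0.1]])]        # multiplicative-noise matrices
D = [np.array([[0.1]]), np.array([[0.0]])]
Q = [np.eye(1), np.eye(1)]
S = [np.zeros((1, 1)), np.zeros((1, 1))]
R = [np.eye(1), np.eye(1)]

P = [np.eye(1), np.eye(1)]                        # terminal weights P_i^1(N+1)
gains = []
for k in range(N, -1, -1):
    Pnew, K = [], []
    for i in range(l):
        Ei = sum(p[i, j] * P[j] for j in range(l))              # E_i(k+1)
        Ri = R[i] + B[i].T @ Ei @ B[i] + D[i].T @ Ei @ D[i]     # R_i(k)
        Mi = A[i].T @ Ei @ B[i] + C[i].T @ Ei @ D[i] + S[i]     # S_i^T(k)
        Pnew.append(A[i].T @ Ei @ A[i] + C[i].T @ Ei @ C[i] + Q[i]
                    - Mi @ np.linalg.pinv(Ri) @ Mi.T)           # Riccati update
        K.append(-np.linalg.pinv(Ri) @ Mi.T)                    # u_k* = K_i x_k^1
    P = Pnew
    gains.append(K)
```

The pseudoinverse handles the case where $R_i(k)$ is only positive semidefinite, mirroring the statement of the theorem; with definite weights it reduces to the ordinary inverse.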
Proof. 
We first prove necessity. Assuming that LQOC Problem 2 is solvable, we demonstrate that there exists a symmetric matrix $\check P_i^1(k)$ ($i\in L$) satisfying (20) such that (21) and (22) hold. By induction, we first show that (20)–(22) hold for $k=N$. To this end, the following intermediate problem is introduced:
$$J(x_\tau^1,\theta(\tau),\bar u_k^*,N+1)=\inf_{\bar u_k,\;k\in[\tau,N]}J(x_\tau^1,\theta(\tau),\bar u_k,N+1),$$
and
$$J(x_\tau^1,\theta(\tau),\bar u_k,N+1)=\frac12\,\mathbb{E}\left\{\sum_{k=\tau}^{N}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}+(x_{N+1}^1)^T\Sigma_rP^1_{\theta(N+1)}\Sigma_rx_{N+1}^1\,\Big|\,\mathcal F_\tau\right\}.$$
Notice that the discrete-time Markov chain $\theta(k)$ and the disturbance signal $\omega_k$ are mutually independent. Combining this with system (16) for $\tau=N$, (24) can be rewritten as
$$J(x_N^1,\theta(N),\bar u_N,N+1)=\frac12\,\mathbb{E}\big\{(x_N^1)^T\hat Q_{\theta(N)}(N)x_N^1+\bar u_N^TR_{\theta(N)}(N)\bar u_N+2(x_N^1)^TS_{\theta(N)}^T(N)\bar u_N\big\},$$
where
$$\begin{aligned}\hat Q_{\theta(N)}(N)&=\tilde Q_{\theta(N)}+(\bar A^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar A^1_{\theta(N)}+(\bar C^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar C^1_{\theta(N)},\\ R_{\theta(N)}(N)&=\tilde R_{\theta(N)}+(\bar B^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar B^1_{\theta(N)}+(\bar D^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar D^1_{\theta(N)},\\ S_{\theta(N)}^T(N)&=\tilde S_{\theta(N)}+(\bar A^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar B^1_{\theta(N)}+(\bar C^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar D^1_{\theta(N)},\\ E_{\theta(N)}(N+1)&=\sum_{\theta(N+1)=1}^{l}p_{\theta(N)\theta(N+1)}P^1_{\theta(N+1)}.\end{aligned}$$
Then, it follows from the conclusion of Lemma 1 that there exists a suitable $\check P^1_{\theta(N)}(N)$ such that
$$\inf_{\bar u_N}J(x_N^1,\theta(N),\bar u_N,N+1)=\frac12\,\mathbb{E}\big\{(x_N^1)^T\Sigma_r\check P^1_{\theta(N)}(N)\Sigma_rx_N^1\big\},$$
where
$$\Sigma_r\check P^1_{\theta(N)}(N)\Sigma_r=\tilde Q_{\theta(N)}+(\bar A^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar A^1_{\theta(N)}+(\bar C^1_{\theta(N)})^TE_{\theta(N)}(N+1)\bar C^1_{\theta(N)}-S^T_{\theta(N)}(N)R^+_{\theta(N)}(N)S_{\theta(N)}(N),\qquad S^T_{\theta(N)}(N)=S^T_{\theta(N)}(N)R^+_{\theta(N)}(N)R_{\theta(N)}(N),\qquad R_{\theta(N)}(N)\succeq 0.$$
Furthermore, the corresponding optimal control expression can be obtained by
$$\bar u_N^*=-R^+_{\theta(N)}(N)S_{\theta(N)}(N)x_N^1.$$
In the case of $k=N$, when (26)–(28) hold, it follows directly that (20)–(22) hold. Subsequently, suppose that (20) has a solution $\check P_i^1(l+1)$ with $\theta(l)=i$ such that (21) and (22) also hold for $l\in[0,N-1]$. Then, from system (16) and the Bellman equation, we obtain
$$\inf_{\bar u_k,\;k\in[l,N]}J(x_l^1,\theta(l)=i,\bar u_k,N+1)=\inf_{\bar u_l}J(x_l^1,\theta(l)=i,\bar u_l,l+1)=\inf_{\bar u_l}\frac12\,\mathbb{E}\left\{\begin{bmatrix}x_l^1\\\bar u_l\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_l^1\\\bar u_l\end{bmatrix}+(x_{l+1}^1)^T\Sigma_rP^1_{\theta(l+1)}\Sigma_rx_{l+1}^1\,\Big|\,\mathcal F_l\right\}.$$
Similarly, there exists P ˇ i 1 ( l ) satisfying
$$\Sigma_r\check P_i^1(l)\Sigma_r=(\bar A_i^1)^TE_i(l+1)\bar A_i^1+(\bar C_i^1)^TE_i(l+1)\bar C_i^1+\tilde Q_i-S_i^T(l)R_i^+(l)S_i(l),\qquad S_i^T(l)=S_i^T(l)R_i^+(l)R_i(l),\qquad R_i(l)\succeq 0,$$
such that
$$\inf_{\bar u_l}J(x_l^1,\theta(l)=i,\bar u_l,l+1)=\frac12\,\mathbb{E}\big\{(x_l^1)^T\Sigma_r\check P_i^1(l)\Sigma_rx_l^1\big\},$$
and its optimal control is
$$\bar u_l^*=-R^+_{\theta(l)}(l)S_{\theta(l)}(l)x_l^1.$$
From (29), (31) and (32), it can be concluded that
$$\inf_{\bar u_k,\;k\in[l,N]}J(x_l^1,\theta(l),\bar u_k,N+1)=J(x_l^1,\theta(l),\bar u_k^*,N+1)=\frac12\,\mathbb{E}\big\{(x_l^1)^T\Sigma_r\check P^1_{\theta(l)}(l)\Sigma_rx_l^1\big\},$$
with
$$\bar u_k^*=-R^+_{\theta(k)}(k)S_{\theta(k)}(k)x_k^1,\qquad k\in[l,N];$$
then, (20)–(22) hold when $k=l\in[0,N-1]$. To sum up, we conclude that (20)–(22) are satisfied for $k\in[0,N]$.
In the second part, we prove sufficiency. We need to show that when GMJARE (20) can be solved, (21) and (22) are obtainable. Define
$$V(k,x_k^1)=\mathbb{E}\{(x_k^1)^T\Sigma_rP^1_{\theta(k)}\Sigma_rx_k^1\}.$$
It is evident that $\hat Q_i(k)-S_i^T(k)R_i^+(k)S_i(k)\succeq 0$ from (19) and the Schur complement lemma. Then, applying (16), GMJARE (20) and the definition of the Moore–Penrose pseudoinverse with $\theta(k)=i$, we can obtain
$$\begin{aligned}
&V(k+1,x_{k+1}^1)-V(k,x_k^1)\\
&\quad=\mathbb{E}\Big\{(x_{k+1}^1)^T\Sigma_r\Big(\sum_{j=1}^{l}p_{ij}P_j^1\Big)\Sigma_rx_{k+1}^1-(x_k^1)^T\Sigma_rP_i^1\Sigma_rx_k^1\Big\}\\
&\quad=\mathbb{E}\Big\{\big(\bar A_i^1x_k^1+\bar B_i^1\bar u_k+(\bar C_i^1x_k^1+\bar D_i^1\bar u_k)\omega_k\big)^TE_i(k+1)\big(\bar A_i^1x_k^1+\bar B_i^1\bar u_k+(\bar C_i^1x_k^1+\bar D_i^1\bar u_k)\omega_k\big)\\
&\qquad\quad-(x_k^1)^T\Sigma_rP_i^1\Sigma_rx_k^1+\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}-\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}\Big\}\\
&\quad=\mathbb{E}\Big\{-\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}+(x_k^1)^TS_i^T(k)R_i^+(k)S_i(k)x_k^1+2(x_k^1)^TS_i^T(k)R_i^+(k)R_i(k)\bar u_k+\bar u_k^TR_i(k)\bar u_k\Big\}\\
&\quad=\mathbb{E}\Big\{\big(\bar u_k+R_i^+(k)S_i(k)x_k^1\big)^TR_i(k)\big(\bar u_k+R_i^+(k)S_i(k)x_k^1\big)-\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}\Big\}.
\end{aligned}$$
When k = 0 to N, the two sides of the above formula are processed by the corresponding summation
$$V(N+1,x_{N+1}^1)-V(0,x_0^1)=\sum_{k=0}^{N}\mathbb{E}\Big\{\big(\bar u_k+R_i^+(k)S_i(k)x_k^1\big)^TR_i(k)\big(\bar u_k+R_i^+(k)S_i(k)x_k^1\big)-\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}\Big\},$$
and this indicates that
$$J(x_0^1,\theta(0),\bar u_k,N+1)=\frac12\,\mathbb{E}\Big\{\sum_{k=0}^{N}\big(\bar u_k+R_i^+(k)S_i(k)x_k^1\big)^TR_i(k)\big(\bar u_k+R_i^+(k)S_i(k)x_k^1\big)+(x_0^1)^T\Sigma_rP^1_{\theta(0)}\Sigma_rx_0^1\Big\}.$$
Since $R_i(k)\succeq 0$, it holds that
$$J(x_0^1,\theta(0),\bar u_k,N+1)\ge\frac12\,\mathbb{E}\big\{(x_0^1)^T\Sigma_rP^1_{\theta(0)}\Sigma_rx_0^1\big\},$$
with equality attained by $\bar u_k^*=-R^+_{\theta(k)}(k)S_{\theta(k)}(k)x_k^1$, $k\in[0,N]$. Subsequently, the optimal cost value is obtained as
$$J(x_0^1,\theta(0),\bar u_k^*,N+1)=\frac12\,\mathbb{E}\big\{(x_0^1)^T\Sigma_r\check P^1_{\theta(0)}(0)\Sigma_rx_0^1\big\}.$$
Therefore, $V(k+1,x_{k+1}^1)-V(k,x_k^1)$ is always negative. This concludes the proof of Theorem 1. □
Theorem 1 proves that the optimal controller is solvable and that the cost function reaches its optimal value under the determined optimal controller. Theorem 2 below further derives the specific form of the optimal controller via LMIs.
With Assumption 1 and (16), LQOC Problem 2 is solvable if there is a matrix $\check P_i^1(k)$ satisfying GMJARE (20), and the unique optimal control is described by
$$\bar u_k^*=-R_i^+(k)S_i(k)x_k^1\triangleq-K_ix_k^1.$$
Secondly, the corresponding cost value of LQOC Problem 2 is given by
$$J(x_0^1,\theta(0),\bar u_k^*,N+1)=\frac12\,\mathbb{E}\big\{(x_0^1)^T\Sigma_r\check P^1_{\theta(0)}(0)\Sigma_rx_0^1\big\}.$$
Eventually, the resulting closed-loop system is
$$\begin{aligned}\Sigma_rx_{k+1}^1&=(\bar A_i^1-\bar B_i^1K_i)x_k^1+(\bar C_i^1-\bar D_i^1K_i)x_k^1\omega_k,\\ 0&=A_i^3x_k^1+\bar A_i^4\bar x_k^2,\\ y_k&=\bar G_i^1x_k^1+H_i\omega_k.\end{aligned}$$
Building on the previous work, we now translate the indefinite LQOC Problem 1 into an infinite-horizon definite LQOC Problem 3.
Problem 3.
Under system (16), the equivalent transformation is considered by
$$x_{k+1}^1=\tilde A_i^1x_k^1+\tilde B_i^1\bar u_k+(\tilde C_i^1x_k^1+\tilde D_i^1\bar u_k)\omega_k,\qquad y_k=\bar G_i^1x_k^1+H_i\omega_k,$$
where $\tilde A_i^1=\Sigma_r^{-1}\bar A_i^1$, $\tilde B_i^1=\Sigma_r^{-1}\bar B_i^1$, $\tilde C_i^1=\Sigma_r^{-1}\bar C_i^1$, $\tilde D_i^1=\Sigma_r^{-1}\bar D_i^1$, and the associated quadratic cost function is given by
$$J(x_0^1,\theta(0),\bar u_k)=\frac12\,\mathbb{E}\left\{\sum_{k=0}^{\infty}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}^T\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\begin{bmatrix}x_k^1\\\bar u_k\end{bmatrix}\right\}.$$
LQOC Problem 3 is to find an $\mathcal F_k$-measurable controller $\bar u_k^*$ that minimizes (44) subject to (43) and guarantees $J(x_0^1,\theta(0),\bar u_k)\ge 0$. It can be seen that LQOC Problem 1 is equivalent to LQOC Problem 3: if LQOC Problem 3 has a solution, then LQOC Problem 1 is solvable as well.
Notice that $\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\succeq 0$, $i\in L$, and $\tilde R_i>0$ are easily derived from the original coefficient matrices of system (1) and the weight matrices of cost function (2). Since $\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\succeq 0$, $i\in L$, it follows that $J(x_0^1,\theta(0),\bar u_k)\ge 0$, which establishes the definiteness of LQOC Problem 3 over the infinite horizon based on [28].
Next, we consider the LQOC Problem 3, then define the following GMJARE
$$\check P_i^1(k)=(\tilde A_i^1)^TE_i(k+1)\tilde A_i^1+(\tilde C_i^1)^TE_i(k+1)\tilde C_i^1+\tilde Q_i-\tilde S_i^T(k)\tilde R_i^+(k)\tilde S_i(k),\qquad \tilde S_i^T(k)=\tilde S_i^T(k)\tilde R_i^+(k)\tilde R_i(k),\qquad \tilde R_i(k)\succeq 0,$$
where $\tilde R_i(k)=\tilde R_i+(\tilde B_i^1)^TE_i(k+1)\tilde B_i^1+(\tilde D_i^1)^TE_i(k+1)\tilde D_i^1$ and $\tilde S_i^T(k)=(\tilde A_i^1)^TE_i(k+1)\tilde B_i^1+(\tilde C_i^1)^TE_i(k+1)\tilde D_i^1+\tilde S_i$.
On the basis of $\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\succeq 0$, $i\in L$, and $\tilde R_i>0$, the existence of a solution $\check P_i^1(k)>0$ to GMJARE (45) directly determines the solvability of LQOC Problem 3. We thus arrive at the following conclusion.
Proposition 2.
With Assumption 1, (12), $\begin{bmatrix}\tilde Q_i&\tilde S_i\\\tilde S_i^T&\tilde R_i\end{bmatrix}\succeq 0$, $i\in L$, and $\tilde R_i>0$, if GMJARE (45) has a solution $\check P_i^1(k)>0$, then LQOC Problem 3 is solvable. Moreover, the unique optimal control is
$$\bar u_k^*=-\tilde R_i^+(k)\tilde S_i(k)x_k^1\triangleq-\bar K_ix_k^1,$$
with its optimal cost value function as
$$J(x_0^1,\theta(0),\bar u_k^*)=\frac12\,\mathbb{E}\big\{(x_0^1)^T\check P^1_{\theta(0)}(0)x_0^1\big\}.$$
Hence, stochastic stability is achieved for the closed-loop system
$$x_{k+1}^1=\tilde A_i^1x_k^1+\tilde B_i^1\bar u_k+(\tilde C_i^1x_k^1+\tilde D_i^1\bar u_k)\omega_k.$$
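For the infinite-horizon case, a stationary solution of GMJARE (45) can be approximated by fixed-point iteration. The sketch below (hypothetical scalar two-mode data; convergence assumed rather than proved) iterates the coupled equations and recovers the gains $\bar K_i$:

```python
import numpy as np

p = np.array([[0.9, 0.1], [0.3, 0.7]])           # transition probabilities
A = [0.6, 0.5]; B = [1.0, 0.8]; C = [0.1, 0.2]; D = [0.0, 0.1]
Q = [1.0, 1.0]; S = [0.0, 0.0]; R = [1.0, 1.0]
l = 2

P = [1.0, 1.0]                                   # initial guess for P_i^1
for _ in range(500):
    Pn = []
    for i in range(l):
        Ei = sum(p[i, j] * P[j] for j in range(l))
        Ri = R[i] + Ei * (B[i] ** 2 + D[i] ** 2)
        Si = Ei * (A[i] * B[i] + C[i] * D[i]) + S[i]
        Pn.append(Ei * (A[i] ** 2 + C[i] ** 2) + Q[i] - Si ** 2 / Ri)
    done = max(abs(a - b) for a, b in zip(P, Pn)) < 1e-12
    P = Pn
    if done:
        break

# Stationary gains: u_k* = -K_i x_k^1 in the notation of (46).
K = []
for i in range(l):
    Ei = sum(p[i, j] * P[j] for j in range(l))
    K.append((Ei * (A[i] * B[i] + C[i] * D[i]) + S[i])
             / (R[i] + Ei * (B[i] ** 2 + D[i] ** 2)))
```

A positive fixed point $P_i^1>0$ corresponds to the solvability condition of Proposition 2; in higher dimensions the scalar arithmetic would be replaced by the matrix operations of the finite-horizon recursion.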
In system (1), it is assumed that the state information is unmeasurable, while the input and output data remain measurable. The output feedback control law u ¯ k is given by
$$\bar u_k=L_iy_k=L_i(\bar G_i^1x_k^1+H_i\omega_k).$$
As in (46), the objective is to obtain a numerical approximate solution for $L_i$. Interestingly, a relationship can be observed between the optimal state feedback control (46) and the static output feedback control (49). Thus, to achieve the minimum of the cost function (44), the minimal cost (47) can be attained via the following optimal output feedback control law
$$\bar u_k^*=L_i^*y_k,$$
where the matrix $L_i^*$ is a suitable optimal control gain for system (1).
Theorem 2.
Under Theorem 1, the optimal closed-loop system (43) is stochastically admissible with the control law (49) if there exist matrices $R_i$, $S_i$ and $L_i$, symmetric matrices $\tilde Q_i=\tilde Q_i^T$, $P_i^1=(P_i^1)^T$ and $\check P_i^1=(\check P_i^1)^T$, and positive-definite symmetric matrices $E_i>0$, $U_1>0$ and $U_2>0$ such that
$$\begin{bmatrix}
\hat{\Lambda}_{i11} & \Lambda_{i12} & \Lambda_{i13} & \Lambda_{i14} & ((\tilde{A}_i^1)^T + L_i^T (\tilde{B}_i^1)^T - I) U_1 & \Lambda_{i16} & 0 & 0 & 0 \\
* & \hat{\Lambda}_{i22} & \Lambda_{i23} & \Lambda_{i24} & H_i^T L_i^T (\tilde{B}_i^1)^T U_1 + U_2 & 0 & \Lambda_{i27} & 0 & 0 \\
* & * & \hat{\Lambda}_{i33} & \Lambda_{i34} & \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T U_1 + U_2 & 0 & 0 & \Lambda_{i38} & 0 \\
* & * & * & \hat{\Lambda}_{i44} & H_i^T L_i^T (\tilde{B}_i^1)^T U_1 + U_2 & 0 & 0 & 0 & \Lambda_{i49} \\
* & * & * & * & -2U_1 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & \tfrac{1}{2}\Lambda_{i66} & 0 & 0 & 0 \\
* & * & * & * & * & * & \Lambda_{i77} & 0 & 0 \\
* & * & * & * & * & * & * & \tfrac{1}{2}\Lambda_{i77} & 0 \\
* & * & * & * & * & * & * & * & \Lambda_{i77}
\end{bmatrix} < 0,$$
where $\hat{\Lambda}_{i11} = 2(\tilde{A}_i^1)^T E_i \tilde{A}_i^1 + 2(\tilde{C}_i^1)^T E_i \tilde{C}_i^1 - P_i^1$, $\hat{\Lambda}_{i22} = (\tilde{A}_i^1)^T E_i \tilde{A}_i^1 + (\tilde{C}_i^1)^T E_i \tilde{C}_i^1 - 2U_2 \tilde{B}_i^1 L_i$, $\hat{\Lambda}_{i33} = 2(\tilde{A}_i^1)^T E_i \tilde{A}_i^1 - 2U_2(\tilde{C}_i^1 + \tilde{D}_i^1 L_i \bar{G}_i)$, $\hat{\Lambda}_{i44} = (\tilde{A}_i^1)^T E_i \tilde{A}_i^1 + (\tilde{C}_i^1)^T E_i \tilde{C}_i^1 - 2U_2 \tilde{D}_i^1 L_i H_i$, and
$$\begin{aligned}
\Lambda_{i16} &= \begin{bmatrix} \bar{G}_i^T L_i^T (\tilde{B}_i^1)^T & H_i^T L_i^T (\tilde{B}_i^1)^T & \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T & \bar{H}_i^T L_i^T (\tilde{D}_i^1)^T \end{bmatrix}, \\
\Lambda_{i66} &= \operatorname{diag}\{E_i - 2I,\ E_i - 2I,\ E_i - 2I,\ E_i - 2I\}, \\
\Lambda_{i27} &= \begin{bmatrix} \bar{G}_i^T L_i^T (\tilde{B}_i^1)^T & \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T & H_i^T L_i^T (\tilde{D}_i^1)^T \end{bmatrix}, \\
\Lambda_{i77} &= \operatorname{diag}\{E_i - 2I,\ E_i - 2I,\ E_i - 2I\}, \\
\Lambda_{i38} &= \begin{bmatrix} \bar{G}_i^T L_i^T (\tilde{B}_i^1)^T & H_i^T L_i^T (\tilde{B}_i^1)^T & H_i^T L_i^T (\tilde{D}_i^1)^T \end{bmatrix}, \\
\Lambda_{i49} &= \begin{bmatrix} H_i^T L_i^T (\tilde{B}_i^1)^T & \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T & \bar{G}_i^T L_i^T (\tilde{B}_i^1)^T \end{bmatrix}.
\end{aligned}$$
Proof. 
Choose a stochastic Lyapunov function
$$\tilde{V}(k, x_k^1) = \mathbb{E}\{(x_k^1)^T P_i^1 x_k^1\}$$
with $\theta(k+1) = j$. Then, the difference of $\tilde{V}(k, x_k^1)$ is obtained as follows:
$$\begin{aligned}
\Delta \tilde{V}(k, x_k^1) &= \tilde{V}(k+1, x_{k+1}^1) - \tilde{V}(k, x_k^1) \\
&= \mathbb{E}\Big\{ (x_{k+1}^1)^T \sum_{j=1}^{l} p_{ij} P_j^1 x_{k+1}^1 - (x_k^1)^T P_i^1 x_k^1 \Big\} \\
&= \mathbb{E}\big\{ (x_{k+1}^1)^T E_i x_{k+1}^1 - (x_k^1)^T P_i^1 x_k^1 \big\}.
\end{aligned}$$
In what follows, system (43) is examined further. It follows from (53) that
$$\Delta \tilde{V}(k, x_k^1) = \tilde{V}(k+1, x_{k+1}^1) - \tilde{V}(k, x_k^1) = \begin{bmatrix} (x_k^1)^T & \omega_k^T & (x_k^1 \omega_k)^T & (\omega_k^2)^T \end{bmatrix} \Theta_i \begin{bmatrix} x_k^1 \\ \omega_k \\ x_k^1 \omega_k \\ \omega_k^2 \end{bmatrix},$$
where
$$\Theta_i = \begin{bmatrix}
(\Theta_i^1)^T E_i \Theta_i^1 & H_i^T L_i^T (\tilde{B}_i^1)^T E_i \Theta_i^1 & (\Theta_i^2)^T E_i \Theta_i^1 & H_i^T L_i^T (\tilde{D}_i^1)^T E_i \Theta_i^1 \\
* & H_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i H_i & (\Theta_i^2)^T E_i \tilde{B}_i^1 L_i H_i & H_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{B}_i^1 L_i H_i \\
* & * & (\Theta_i^2)^T E_i \Theta_i^2 & H_i^T L_i^T (\tilde{D}_i^1)^T E_i \Theta_i^2 \\
* & * & * & H_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{D}_i^1 L_i H_i
\end{bmatrix},$$
with $\Theta_i^1 = \tilde{A}_i^1 + \tilde{B}_i^1 L_i \bar{G}_i^1$ and $\Theta_i^2 = \tilde{C}_i^1 + \tilde{D}_i^1 L_i \bar{G}_i^1$.
Let $\eta_k = x_{k+1}^1 - x_k^1$, and introduce symmetric matrices $U_1$ and $U_2$; then we have
$$\operatorname{sym}\big[ \big( \eta_k^T U_1 - (\omega_k + x_k^1 \omega_k + \omega_k^2)^T U_2 \big) \big( x_{k+1}^1 - x_k^1 - \eta_k \big) \big] = 0.$$
Substituting (43) and (49) into (55) and applying standard matrix-inequality bounds, we obtain
$$\begin{aligned}
&\tilde{V}(k+1, x_{k+1}^1) - \tilde{V}(k, x_k^1) \\
&\quad = \begin{bmatrix} (x_k^1)^T & \omega_k^T & (x_k^1 \omega_k)^T & (\omega_k^2)^T \end{bmatrix} \Theta_i \begin{bmatrix} x_k^1 \\ \omega_k \\ x_k^1 \omega_k \\ \omega_k^2 \end{bmatrix} \\
&\qquad + \operatorname{sym}\big[ \big( \eta_k^T U_1 - (\omega_k + x_k^1 \omega_k + \omega_k^2)^T U_2 \big) \big( \tilde{A}_i^1 x_k^1 + \tilde{B}_i^1 \bar{u}_k + (\tilde{C}_i^1 x_k^1 + \tilde{D}_i^1 \bar{u}_k)\omega_k - x_k^1 - \eta_k \big) \big] \\
&\quad \leq \begin{bmatrix} x_k^1 \\ \omega_k \\ x_k^1 \omega_k \\ \omega_k^2 \\ \eta_k \end{bmatrix}^T
\begin{bmatrix}
\Lambda_{i11} & \Lambda_{i12} & \Lambda_{i13} & \Lambda_{i14} & ((\tilde{A}_i^1)^T + L_i^T (\tilde{B}_i^1)^T - I) U_1 \\
* & \Lambda_{i22} & \Lambda_{i23} & \Lambda_{i24} & H_i^T L_i^T (\tilde{B}_i^1)^T U_1 + U_2 \\
* & * & \Lambda_{i33} & \Lambda_{i34} & \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T U_1 + U_2 \\
* & * & * & \Lambda_{i44} & H_i^T L_i^T (\tilde{B}_i^1)^T U_1 + U_2 \\
* & * & * & * & -2U_1
\end{bmatrix}
\begin{bmatrix} x_k^1 \\ \omega_k \\ x_k^1 \omega_k \\ \omega_k^2 \\ \eta_k \end{bmatrix}
\triangleq \tilde{\lambda}_k^T \Lambda_i \tilde{\lambda}_k,
\end{aligned}$$
where
$$\begin{aligned}
\Lambda_{i11} ={}& 2(\tilde{A}_i^1)^T E_i \tilde{A}_i^1 + 2(\tilde{C}_i^1)^T E_i \tilde{C}_i^1 + 2\bar{G}_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i \bar{G}_i + 2H_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i H_i \\
&+ 2\bar{G}_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{D}_i^1 L_i \bar{G}_i + 2H_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{D}_i^1 L_i H_i - P_i^1, \\
\Lambda_{i12} ={}& \Lambda_{i13} = \Lambda_{i14} = \big( -(\tilde{A}_i^1)^T - L_i^T (\tilde{B}_i^1)^T + I \big) U_2, \\
\Lambda_{i22} ={}& (\tilde{A}_i^1)^T E_i \tilde{A}_i^1 + (\tilde{C}_i^1)^T E_i \tilde{C}_i^1 + \bar{G}_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i \bar{G}_i + H_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{D}_i^1 L_i H_i \\
&+ \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{D}_i^1 L_i \bar{G}_i - 2U_2 \tilde{B}_i^1 L_i, \\
\Lambda_{i23} ={}& -U_2 (\tilde{C}_i^1 + \tilde{D}_i^1 L_i \bar{G}_i) - L_i^T (\tilde{B}_i^1)^T U_2, \qquad
\Lambda_{i24} = -U_2 \tilde{D}_i^1 L_i H_i - L_i^T (\tilde{B}_i^1)^T U_2, \\
\Lambda_{i33} ={}& 2(\tilde{A}_i^1)^T E_i \tilde{A}_i^1 + 2\bar{G}_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i \bar{G}_i + H_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i H_i \\
&+ H_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{D}_i^1 L_i H_i - 2U_2 (\tilde{C}_i^1 + \tilde{D}_i^1 L_i \bar{G}_i), \\
\Lambda_{i34} ={}& -U_2 \tilde{D}_i^1 L_i H_i - \big( (\tilde{C}_i^1)^T + \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T \big) U_2, \\
\Lambda_{i44} ={}& (\tilde{A}_i^1)^T E_i \tilde{A}_i^1 + (\tilde{C}_i^1)^T E_i \tilde{C}_i^1 + \bar{G}_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i \bar{G}_i - 2U_2 \tilde{D}_i^1 L_i H_i \\
&+ H_i^T L_i^T (\tilde{B}_i^1)^T E_i \tilde{B}_i^1 L_i H_i + \bar{G}_i^T L_i^T (\tilde{D}_i^1)^T E_i \tilde{D}_i^1 L_i \bar{G}_i.
\end{aligned}$$
Since $(E_i - I)^T E_i^{-1} (E_i - I) \geq 0$, it can be deduced that $-E_i^{-1} \leq E_i - 2I$. By the Schur complement lemma, $\Lambda_i < 0$ is then equivalent to (51) in Theorem 2.
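The bound $-E_i^{-1} \leq E_i - 2I$ follows from $(E_i - I)^T E_i^{-1}(E_i - I) \geq 0$ for any symmetric positive-definite $E_i$. As a quick numerical sanity check (an illustrative script, not part of the proof), one can verify that $E - 2I + E^{-1}$ is positive semidefinite on random positive-definite samples:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    # random symmetric positive-definite E
    M = rng.standard_normal((4, 4))
    E = M @ M.T + 4 * np.eye(4)
    # (E - 2I) - (-E^{-1}) = E - 2I + E^{-1} must be positive semidefinite
    gap = E - 2 * np.eye(4) + np.linalg.inv(E)
    assert np.linalg.eigvalsh(gap).min() >= -1e-9
```

In the eigenbasis of $E$, each eigenvalue of the gap equals $(\lambda - 1)^2 / \lambda \geq 0$, which is why the check always passes.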
Summing (56) over $k$ from $0$ to $N$ and taking expectations, it is obtained under the zero initial condition that
$$\mathbb{E} \sum_{k=0}^{N} \Delta \tilde{V}(k, x_k^1) < 0.$$
Since $\hat{\Lambda}_i < 0$, inequality (57) implies the stochastic admissibility of the closed-loop system, which completes the proof. □

4. Simulation Example

A three-dimensional discrete-time stochastic singular system (16), driven by a one-dimensional stochastic disturbance $\omega_k$ and a discrete-time Markov chain with four switching modes, is chosen with
$$E = \begin{bmatrix} \frac{5}{3} & 0 & 0 \\ \frac{4}{3} & 2 & 0 \\ \frac{2}{3} & 0 & 0 \end{bmatrix}, \quad
A_1 = \begin{bmatrix} 4.35 & 0.78 & 3.2 \\ 5.44 & 4.19 & 6.4 \\ 2.78 & 0.87 & 3.2 \end{bmatrix}, \quad
A_2 = \begin{bmatrix} 2.82 & 10.57 & 6.8 \\ 1.31 & 14.59 & 13.6 \\ 1.12 & 8.35 & 6.8 \end{bmatrix},$$
$$A_3 = \begin{bmatrix} 3.82 & 16.75 & 13.33 \\ 13.69 & 27.9 & 26.67 \\ 5.53 & 15.1 & 13.33 \end{bmatrix}, \quad
A_4 = \begin{bmatrix} 0.57 & 2.7 & 10.67 \\ 4.37 & 1.4 & 21.33 \\ 0.71 & 1.08 & 10.67 \end{bmatrix}, \quad
B_1 = \begin{bmatrix} 1.1 \\ 2.12 \\ 0.44 \end{bmatrix}, \quad
B_2 = \begin{bmatrix} 0.2 \\ 0.58 \\ 0.08 \end{bmatrix},$$
$$B_3 = \begin{bmatrix} 0.33 \\ 1.62 \\ 0.13 \end{bmatrix}, \quad
B_4 = \begin{bmatrix} 1.73 \\ 3.31 \\ 0.69 \end{bmatrix}, \quad
C_1 = \begin{bmatrix} 1.3 & 0.68 & 0.1667 \\ 2.26 & 0.83 & 0.13 \\ 0.52 & 0.27 & 0.07 \end{bmatrix}, \quad
C_2 = \begin{bmatrix} 0.23 & 0.27 & 0.08 \\ 0.03 & 0.77 & 2.07 \\ 0.09 & 0.11 & 0.03 \end{bmatrix},$$
$$C_3 = \begin{bmatrix} 0.27 & 0.15 & 0.33 \\ 0.39 & 0.64 & 0.07 \\ 0.11 & 0.06 & 0.13 \end{bmatrix}, \quad
C_4 = \begin{bmatrix} 0.43 & 0.7 & 0 \\ 0.61 & 1.02 & 0.4 \\ 0.17 & 0.28 & 0 \end{bmatrix}, \quad
D_1 = \begin{bmatrix} 1.18 \\ 1.37 \\ 0.47 \end{bmatrix}, \quad
D_2 = \begin{bmatrix} 2.43 \\ 4.23 \\ 0.97 \end{bmatrix},$$
$$D_3 = \begin{bmatrix} 1.47 \\ 0.59 \\ 0.59 \end{bmatrix}, \quad
D_4 = \begin{bmatrix} 0.85 \\ 0.36 \\ 0.34 \end{bmatrix}, \quad
H_1 = 1.92, \quad H_2 = 0.24, \quad H_3 = 1.32, \quad H_4 = 1.88,$$
where the three-dimensional state is $x_k = [(x_k^1)^T, x_k^2]^T$ with $x_k^1 = [x_k^{11}, x_k^{12}]^T$. The transition probability matrix is given by
$$[p_{ij}] = \begin{bmatrix} 0.15 & 0.6 & 0.05 & 0.2 \\ 0.05 & 0.2 & 0.6 & 0.15 \\ 0.05 & 0.25 & 0.1 & 0.6 \\ 0.6 & 0.1 & 0.2 & 0.1 \end{bmatrix}.$$
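Switching-mode paths such as those in Figure 2 can be drawn from this chain. The sketch below is illustrative (it uses 0-based mode labels rather than the paper's modes 1-4) and samples one path with NumPy:

```python
import numpy as np

# transition probability matrix from the example (each row sums to 1)
p = np.array([[0.15, 0.60, 0.05, 0.20],
              [0.05, 0.20, 0.60, 0.15],
              [0.05, 0.25, 0.10, 0.60],
              [0.60, 0.10, 0.20, 0.10]])

def sample_modes(p, theta0, N, rng):
    """Sample one path theta(0), ..., theta(N) of the Markov mode process."""
    theta = np.empty(N + 1, dtype=int)
    theta[0] = theta0
    for k in range(N):
        # draw theta(k+1) from row theta(k) of the transition matrix
        theta[k + 1] = rng.choice(p.shape[0], p=p[theta[k]])
    return theta

rng = np.random.default_rng(1)
path = sample_modes(p, theta0=0, N=30, rng=rng)
```

Repeating the call with different seeds produces the independent switching sequences used across the ten simulation runs.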
First, we compute the SVD of the relevant system parameters, transform the system matrices using the SVD results, and construct the LMI conditions from the transformed matrices. The GMJARE is then solved via these LMIs in MATLAB; its solution yields the optimal control gain matrix and hence the optimal control law. Using the MATLAB LMI Toolbox, a feasible solution of the LMI designed in Theorem 2 is obtained, where the weighting matrices in Problem 3 are given by
$$\tilde{Q}_1 = \begin{bmatrix} 0.58 & 0.12 \\ 0.12 & 2.28 \end{bmatrix}, \quad \tilde{Q}_2 = \begin{bmatrix} 0.14 & 0.15 \\ 0.15 & 0.23 \end{bmatrix}, \quad \tilde{Q}_3 = \begin{bmatrix} 1.23 & 1.32 \\ 1.32 & 0.34 \end{bmatrix}, \quad \tilde{Q}_4 = \begin{bmatrix} 0.51 & 0.16 \\ 0.16 & 0.12 \end{bmatrix},$$
$$\tilde{S}_1 = \begin{bmatrix} 0.90 & 0.31 \end{bmatrix}, \quad \tilde{S}_2 = \begin{bmatrix} 0.31 & 0.04 \end{bmatrix}, \quad \tilde{S}_3 = \begin{bmatrix} 0.31 & 0.09 \end{bmatrix}, \quad \tilde{S}_4 = \begin{bmatrix} 0.52 & 0.59 \end{bmatrix},$$
$$\tilde{R}_1 = 4.21, \quad \tilde{R}_2 = 0.94, \quad \tilde{R}_3 = 2.79, \quad \tilde{R}_4 = 3.32,$$
and the obtained parameters are as follows
$$L_1 = \begin{bmatrix} 0.1731 & 0.1424 \end{bmatrix}, \quad L_2 = \begin{bmatrix} 0.0163 & 0.0089 \end{bmatrix}, \quad L_3 = \begin{bmatrix} 0.0067 & 0.0693 \end{bmatrix}, \quad L_4 = \begin{bmatrix} 0.0105 & 0.0496 \end{bmatrix},$$
$$U_1 = \begin{bmatrix} 0.7984 & 0.0184 \\ 0.0184 & 0.8037 \end{bmatrix}, \quad U_2 = \begin{bmatrix} 0.6978 & 0.1086 \\ 0.1086 & 0.8186 \end{bmatrix}.$$
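The SVD-based transformation used in the first step can be sketched as follows. The matrix E below is a generic rank-2 singular matrix chosen purely for illustration (it is not the example's data); the nonsingular factors M = U^T and N = V bring E to the block-diagonal form diag(Sigma_r, 0) underlying the restricted-equivalent coordinates:

```python
import numpy as np

# illustrative singular matrix with rank(E) = 2; any rank-deficient E
# from system (1) would be treated the same way
E = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [2.0, 4.0, 0.0]])

U, s, Vt = np.linalg.svd(E)              # E = U @ diag(s) @ Vt
r = int(np.sum(s > 1e-10))               # numerical rank
M, Nmat = U.T, Vt.T                      # nonsingular transformation pair
Ebar = M @ E @ Nmat                      # = diag(Sigma_r, 0)
assert np.allclose(Ebar[:r, :r], np.diag(s[:r]))
assert np.allclose(Ebar[r:, :], 0) and np.allclose(Ebar[:, r:], 0)
```

Applying the same M and Nmat to the remaining system matrices produces the transformed data on which the LMI conditions of Theorem 2 are built.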
In this simulation example, an indefinite weighting matrix is allowed in the quadratic cost function; that is, the weighting matrices may have negative eigenvalues. This is a significant departure from the traditional linear quadratic optimal control problem, where the weighting matrices are usually required to be positive definite or positive semi-definite. Through a series of equivalent transformations and theoretical derivations, we resolve the difficulties introduced by the indefinite weighting matrices, which provides broader scope for flexible optimization of system performance. In particular, the weighting matrices of the LQOC problem for each mode $i \in \{1, 2, 3, 4\}$ may be indefinite, and the conditions of Theorem 2 are still satisfied. The model accounts for both the stochastic disturbance and the discrete-time Markov chain, and the simulation reports the results under ten independent sets of disturbances together with their average. This comprehensive treatment lets the model better capture real-world uncertainty and dynamic changes, improving the robustness and adaptability of the system; few existing works handle both factors simultaneously. The initial condition is $x_0 = [x_0^{11}, x_0^{12}, x_0^2]^T = [1, 1, 0.02]^T$. The simulation results are given in Figures 1–6.
Figures 1 and 2 show ten paths of the stochastic disturbance and the switching modes, respectively. All ten disturbance paths are independent and fully random, which makes the results more convincing and closer to real-world scenarios. Figures 3–5 show the state trajectories $x_k^{11}$, $x_k^{12}$, and $x_k^2$ of the closed-loop system along the ten disturbance paths, together with their averages over the ten samples; each system state gradually smooths and stabilizes within a finite time. Figure 6 shows the control signals $u_k$, demonstrating that the designed controller is effective for the system. Finally, the optimal output feedback controller is computed by Theorem 2. The figures show that, under the ten independently generated disturbances, every sample trajectory of the closed-loop stochastic singular system reaches stability and remains stable within $t = 10$, confirming that the proposed method performs well.
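A Monte Carlo experiment like the one behind Figures 3–5 can be sketched as follows. The function is an illustrative assumption, not the paper's code: it rolls out a generic state-feedback closed loop $x_{k+1} = (A_i - B_i K_i)x_k + (C_i - D_i K_i)x_k \omega_k$ under Markov switching with standard normal disturbances (the paper instead closes the loop through the output-feedback gains $L_i$), and averages the sample paths.

```python
import numpy as np

def simulate_paths(A, B, C, D, K, p, x0, N, n_paths, seed=0):
    """Monte-Carlo rollout of the switched stochastic closed loop.
    A, B, C, D, K are per-mode lists; p is the mode transition matrix.
    Returns all sample paths X[s, k, :] and their pointwise average."""
    rng = np.random.default_rng(seed)
    L, n = len(A), len(x0)
    X = np.zeros((n_paths, N + 1, n))
    for s in range(n_paths):
        x, i = np.array(x0, dtype=float), 0
        X[s, 0] = x
        for k in range(N):
            w = rng.standard_normal()        # disturbance omega_k
            Acl = A[i] - B[i] @ K[i]         # closed-loop drift
            Ccl = C[i] - D[i] @ K[i]         # closed-loop diffusion
            x = Acl @ x + (Ccl @ x) * w
            X[s, k + 1] = x
            i = rng.choice(L, p=p[i])        # next Markov mode
    return X, X.mean(axis=0)
```

Plotting the rows of `X` and the returned average reproduces the style of Figures 3–5: ten noisy sample trajectories overlaid with their mean.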

5. Conclusions

In conclusion, this study tackled the indefinite LQOC problem in discrete-time stochastic singular systems driven by stochastic disturbances and discrete-time Markov chains. The problem was transformed equivalently, and conditions for its solvability were proposed, yielding optimal control strategies that ensure system regularity and causality. Definiteness conditions were established, guaranteeing unique solutions of the GMJARE. Optimal controls and nonnegative cost values were obtained to ensure system admissibility, and the finite-horizon results were extended to the infinite horizon. An output feedback controller design based on the LMI method was presented and validated through an illustrative example. Future research could explore extensions to more complex system dynamics, investigate robustness to uncertainties, and consider practical implementation aspects for real-world applications. Additionally, incorporating machine learning techniques for adaptive control in stochastic environments could further enhance system performance and robustness.

Author Contributions

Conceptualization, J.X. and B.Z.; methodology, J.X., B.Z. and T.Z.; software, J.X., B.Z. and X.K.; validation, J.X., B.Z. and T.Z.; formal analysis, J.X., B.Z. and T.Z.; investigation, J.X., B.Z. and T.Z.; resources, J.X., B.Z. and X.K.; writing—original draft preparation, J.X. and B.Z.; writing—review and editing, J.X., B.Z., T.Z. and X.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Qingdao Municipality under Grant 23-2-1-156-zyyd-jch.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Ten stochastic disturbance paths.
Figure 2. Markovian switching modes.
Figure 3. State trajectories $x_k^{11}$ of the closed-loop model.
Figure 4. State trajectories $x_k^{12}$ of the closed-loop model.
Figure 5. State trajectories $x_k^{2}$ of the closed-loop model.
Figure 6. System control signal $u_k$.