Article

Infinite Horizon $H_2/H_\infty$ Control for Discrete-Time Mean-Field Stochastic Systems

1 School of Mathematics and Statistics, Shandong Normal University, Jinan 250358, China
2 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
*
Author to whom correspondence should be addressed.
Processes 2023, 11(11), 3248; https://doi.org/10.3390/pr11113248
Submission received: 22 September 2023 / Revised: 28 October 2023 / Accepted: 10 November 2023 / Published: 18 November 2023
(This article belongs to the Special Issue Advances in Nonlinear and Stochastic System Control)

Abstract

In this paper, we deal with the $H_2/H_\infty$ control problem in the infinite horizon for discrete-time mean-field stochastic systems with $(x,u,v)$-dependent noise. First, a stochastic bounded real lemma (SBRL), which is the core of $H_\infty$ analysis, is derived. Second, a sufficient condition in terms of the solution of coupled difference Riccati equations (CDREs) is obtained for the solvability of the above $H_2/H_\infty$ control problem. In addition, an iterative algorithm for solving the CDREs is proposed, and a numerical example is given to verify the feasibility of the developed results.

1. Introduction

Mixed $H_2/H_\infty$ control is a crucial robust control research subject that has been thoroughly investigated by numerous researchers over the past decades. Robust control, as a design method that can effectively suppress the effect of disturbances, has been refined since the 1960s. In engineering practice, however, it is desired that the controller not only minimizes a performance target but also attenuates the influence of disturbances when the worst-case disturbance is applied. Therefore, mixed $H_2/H_\infty$ control is better suited to the needs of engineering practice. Mixed $H_2/H_\infty$ control problems for deterministic systems have been discussed in [1,2] and elsewhere. $H_\infty$ and $H_2/H_\infty$ control problems for stochastic systems and their applications have also been studied extensively in recent years. For example, stochastic $H_2/H_\infty$ control for continuous-time systems with state-dependent noise was discussed in [3]. In [4], the $H_\infty$ control problem for a class of nonlinear systems with state- and disturbance-dependent noise was discussed. Ref. [5] considered $H_\infty$ control for discrete-time linear systems whose states and inputs are affected by random perturbations. Ref. [6] proposed the notion of exact observability and a stochastic PBH criterion for linear autonomous stochastic systems, which provides great convenience for the analysis and synthesis of stochastic systems. Under the assumption that the system is exactly observable, an infinite horizon $H_2/H_\infty$ controller for discrete-time stochastic systems with state- and disturbance-dependent noise was designed in [7]. As research on stochastic $H_2/H_\infty$ control has continued, many researchers have begun to focus on systems with Markov jumps. $H_2/H_\infty$ control for stochastic systems with Markov jumps not only contributes to improving system performance but also has significant practical value in areas such as economics and technology. Ref. [8] investigated the robust control problem for continuous-time systems subject to multiplicative noise and Markovian parameter jumps, and a necessary/sufficient condition for the existence of a controller was presented in terms of two coupled algebraic Riccati equations. Ref. [9] developed a robust control theory for discrete-time stochastic systems subject to independent stochastic perturbations and Markov jumps.
The origins of mean-field theory can be traced back to the early 20th century, when physicists began to focus on complex systems in quantum mechanics and statistical physics. As an approximation method, mean-field theory can effectively solve problems in complex systems. In recent years, it has been applied increasingly widely in fields such as power systems and biology, and many researchers have conducted studies in this area [10,11,12,13,14]. For example, the LQ control problems of discrete-time stochastic mean-field systems in the finite and infinite horizon are discussed in [10] and [11], respectively. Ref. [13] discussed the $H_2/H_\infty$ control problem of discrete-time mean-field stochastic systems in the finite horizon, while [14] addressed the $H_2/H_\infty$ control problem of continuous-time systems in the finite horizon. The problem of mean-field games was studied in [15] to illustrate the feasibility of the mean-field modeling approach used in economics, and a number of issues worthy of further research were mentioned at the end of that article. With the gradual maturation of theoretical studies, mean-field theory has been widely applied; for instance, in [16], mean-field theory provided great convenience for analyzing node interaction behavior in complex networks, and [17] used mean-field theory to deal with large-population stochastic differential games connected to the optimal and robust decentralized control of large-scale multiagent systems.
Ref. [18] addressed finite and infinite horizon stochastic $H_2/H_\infty$ control problems for stochastic systems with $(x,u,v)$-dependent noise. The infinite horizon $H_2/H_\infty$ control problem for discrete-time mean-field stochastic systems with $(x,u,v)$-dependent noise is the topic of this paper. The difference from the stochastic systems considered in [18] is that the system equation considered in this paper includes not only the state $x$, control $u$, and perturbation $v$, but also the expectations of the state, control, and perturbation, i.e., $\mathbb{E}x$, $\mathbb{E}u$, $\mathbb{E}v$, respectively. Expectation terms can reduce the sensitivity of the system to random events; hence, mean-field theory simplifies complex problems and has attracted a great deal of attention. In [19,20,21], the mean-field stochastic maximum principle for dealing with the mean-field linear-quadratic (LQ) optimal control problem was discussed. Separately, there are discussions of mean-field maximum principles for mean-field stochastic differential equations (MFSDEs), backward stochastic differential equations (BSDEs), and forward-backward stochastic differential equations (FBSDEs). Later, [22] extended the results in [21] to the case of an infinite horizon. For discrete-time mean-field systems, ref. [10] investigated the finite horizon LQ optimal control problem. Building on [6,11,23], these works introduced the notion of exact detectability of mean-field systems and derived the existence and uniqueness of the solution of the infinite horizon optimal control problem. This research has enriched the existing theory of stochastic algebraic equations; by providing a deeper understanding of the relationships between $l^2$ stability and optimal control problems, it contributes to the development of more efficient control strategies for linear mean-field stochastic systems. Ref. [24] investigated the stabilization and optimal LQ control problems for infinite horizon discrete-time mean-field systems. Unlike previous works, the authors showed for the first time that, under the exact detectability (exact observability) assumption, the mean-field system is stabilizable in the mean square sense with the optimal controller if and only if coupled algebraic Riccati equations have a unique positive semidefinite (respectively, positive definite) solution. Ref. [12] considered the LQ optimal control problem for a class of mean-field systems with Markov jump parameters, where the unique optimal control is given by the solution to generalized difference Riccati equations. Currently, driven by research on mean-field games and mean-field-type control problems [15], many researchers have contributed to MFSDEs and their applications. Using a purely probabilistic approach, ref. [25] worked on solutions to McKean-Vlasov-type forward-backward stochastic differential equations (FBSDEs). Ref. [26] investigated the Markov framework for mean-field backward stochastic differential equations (MFBSDEs). The authors in [27] investigated the existence and uniqueness of the McKean-Vlasov FBSDE solution.
Nonlinear systems and time-delay systems pose inherent control difficulties due to their complex behavior. Ref. [4] is an example of extending control strategies to address nonlinearities, thus further broadening the scope of control theory. Ref. [28] examined the $H_2/H_\infty$ control problem for nonlinear stochastic systems with time delay and state-dependent noise; the work explored a sufficient condition for the existence of nonlinear stochastic $H_2/H_\infty$ control and a design method for stochastic $H_2/H_\infty$ controllers. In [29], a fuzzy method was used to design multiobjective $H_2/H_\infty$ control for a class of uncertain nonlinear mean-field random jump diffusion systems, and the stability, robustness, and performance optimization were studied in depth. Ref. [30] discussed noncooperative and cooperative multiplayer minmax $H_\infty$ mean-field target tracking game strategies for nonlinear mean-field stochastic systems, with applications in the realm of cyberfinancial systems. Stochastic control problems with time delays are highly challenging in practice, because delays and uncertainties may degrade system performance. Ref. [31] focused on the mixed $H_2/H_\infty$ control problem under an open-loop information pattern with input delay. In [32], the authors investigated state feedback control laws for Markov jump linear systems with state and mode observation delays. Ref. [33] addressed the robust stability problem and mixed $H_2/H_\infty$ control for Markovian jump time-delay systems with uncertain transition probabilities in the discrete-time domain; the obtained results generalize several results in the previous literature that treat Markov transition probabilities as either a priori known or partially unknown.
Recently, robust control for stochastic mean-field systems has received a great deal of attention. For instance, ref. [34] treated the $H_\infty$ output feedback control problem for discrete-time mean-field systems. Ref. [13] discussed the finite horizon $H_2/H_\infty$ control problem for a class of stochastic mean-field systems. Ref. [14] studied a continuous-time stochastic control problem for mean-field stochastic differential systems with random initial values and with diffusion coefficients that depend explicitly on the state, control, and disturbance, together with their expectations. That work first established a stochastic bounded real lemma for continuous-time mean-field stochastic systems, revealing the equivalence between robust stability and the solvability of two indefinite differential Riccati equations, which provided a theoretical basis for $H_\infty$ control. Based on this significant result, an equivalent condition for the existence of a controller was proposed by utilizing the solution of two cross-coupled Riccati equations. In contrast, there are relatively few studies on the robust control of discrete-time mean-field systems in the infinite horizon. Compared with the finite horizon case, the infinite horizon feedback controller is additionally required to ensure that the closed-loop system is stable. The primary accomplishments of this paper are the following: (i) A mean-field stochastic bounded real lemma is obtained. This lemma reveals the equivalence between robust stability and the solvability of algebraic Riccati equations, thereby providing a useful tool for $H_\infty$ analysis; (ii) By means of exact detectability, a sufficient condition for the solvability of the $H_2/H_\infty$ control problem is obtained. This condition offers a theoretical basis for designing robust controllers and ensures the stability of closed-loop systems; (iii) An iterative algorithm is proposed to solve the coupled difference Riccati equations. This algorithm reduces computational complexity and enhances the practicality of controller design.
This paper is structured as follows: Section 2 recalls the notions of $l^2$ stability and exact detectability; the mean-field SBRL is presented in Section 3; Section 4 discusses the infinite horizon $H_2/H_\infty$ control problem for the considered system; an iterative algorithm and a numerical example are given in Section 5.

2. Preliminaries

Consider the following discrete-time mean-field stochastic system:
$$x_{s+1} = \big(A x_s + \bar{A}\,\mathbb{E}x_s + B_1 u_s + \bar{B}_1\,\mathbb{E}u_s + B_2 v_s + \bar{B}_2\,\mathbb{E}v_s\big) + \big(C x_s + \bar{C}\,\mathbb{E}x_s + D_1 u_s + \bar{D}_1\,\mathbb{E}u_s + D_2 v_s + \bar{D}_2\,\mathbb{E}v_s\big)\omega_s,$$
$$z_s = \begin{pmatrix} K x_s \\ F u_s \end{pmatrix}, \quad F^T F = I, \quad x_0 = \zeta_0, \quad s \in \mathbb{N} := \{0,1,2,\dots\}. \tag{1}$$
In (1), $\mathbb{R}^n$ denotes the $n$-dimensional real space, $\mathbb{R}^{n\times m}$ is the set of all real $n\times m$ matrices, and $I$ is the identity matrix. $x_s\in\mathbb{R}^n$, $u_s\in\mathbb{R}^{n_u}$, and $v_s\in\mathbb{R}^{n_v}$ are, respectively, the system state, control input, and external disturbance; $\zeta_0\in\mathbb{R}^n$ is the initial condition. $\{\omega_s, s\in\mathbb{N}\}$ denotes the stochastic disturbance, a second-order process with $\mathbb{E}\{\omega_{s+1}\mid \omega_t, t=0,1,\dots,s\}=0$ and $\mathbb{E}\{(\omega_{s+1})^2\mid \omega_t, t=0,1,\dots,s\}=1$. We assume that $\zeta_0$ and $\{\omega_s, s\in\mathbb{N}\}$ are mutually independent. $A$, $\bar A$, $B_1$, $\bar B_1$, $B_2$, $\bar B_2$, $C$, $\bar C$, $D_1$, $\bar D_1$, $D_2$, $\bar D_2$, $K$, and $F$ are constant matrices of adequate dimensions. Let $\mathcal{F}_s$ be the $\sigma$-algebra generated by $\{\omega_t, t=0,1,\dots,s\}$. $l^2(\Omega,\mathbb{R}^k)$ denotes the space of $\mathbb{R}^k$-valued square integrable random vectors, while $l^2_\omega(\mathbb{N},\mathbb{R}^k)$ is the set of nonanticipative square summable stochastic processes $z=\{z_s: z_s\in\mathbb{R}^k, s\in\mathbb{N}\}$ with $z_s\in l^2(\Omega,\mathbb{R}^k)$ being $\mathcal{F}_{s-1}$-measurable, where $\mathcal{F}_{-1}=\{\phi,\Omega\}$. The norm of $l^2_\omega(\mathbb{N},\mathbb{R}^k)$ is given by $\|z\|_{l^2_\omega(\mathbb{N},\mathbb{R}^k)} = \big(\sum_{s=0}^{\infty}\mathbb{E}\|z_s\|^2\big)^{1/2}$. $\mathcal{H}_n(\mathbb{R})$ denotes the set of all $n\times n$ real symmetric matrices, and $\mathcal{H}_n^{0,+}(\mathbb{R})$ is the subset of all nonnegative definite matrices of $\mathcal{H}_n(\mathbb{R})$. We define $\mathbb{N}:=\{0,1,2,\dots\}$ and, for $S\in\mathbb{N}$, $\mathbb{N}_S:=\{0,1,\dots,S\}$.
Next, we recall the concept of stability for the discrete-time mean-field system and present some results that will be needed in the following study. Consider the following unforced stochastic system:
$$x_{s+1} = \big(A x_s + \bar{A}\,\mathbb{E}x_s + B_2 v_s + \bar{B}_2\,\mathbb{E}v_s\big) + \big(C x_s + \bar{C}\,\mathbb{E}x_s + D_2 v_s + \bar{D}_2\,\mathbb{E}v_s\big)\omega_s, \quad z_s = K x_s, \quad x_0 = \zeta_0. \tag{2}$$
For simplicity, we denote (2) by $[A,\bar A,B_2,\bar B_2;C,\bar C,D_2,\bar D_2]$. In particular, $[A,\bar A;C,\bar C]$ denotes $[A,\bar A,0,0;C,\bar C,0,0]$.
Definition 1 
([11]). $[A,\bar A;C,\bar C]$ is said to be $l^2$-asymptotically stable ($l^2$-stable for short) if $\lim_{s\to\infty}\mathbb{E}\|x_s\|^2 = 0$.
Taking the mathematical expectation in (2), the system equation of E x s can be written as
$$\mathbb{E}x_{s+1} = (A+\bar A)\,\mathbb{E}x_s + (B_2+\bar B_2)\,\mathbb{E}v_s, \quad \mathbb{E}x_0 = \mathbb{E}\zeta_0. \tag{3}$$
By a simple calculation, we obtain the following system equation of x s E x s :
$$x_{s+1}-\mathbb{E}x_{s+1} = A(x_s-\mathbb{E}x_s) + B_2(v_s-\mathbb{E}v_s) + \big[C(x_s-\mathbb{E}x_s) + (C+\bar C)\mathbb{E}x_s + D_2(v_s-\mathbb{E}v_s) + (D_2+\bar D_2)\mathbb{E}v_s\big]\omega_s, \quad x_0-\mathbb{E}x_0 = \zeta_0-\mathbb{E}\zeta_0. \tag{4}$$
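As a quick numerical sanity check of this mean/deviation decomposition, one can simulate a large ensemble of sample paths of the unforced system (2) (with $v \equiv 0$) and compare the empirical mean of $x_s$ with the deterministic recursion (3). The Python sketch below is ours, with hypothetical coefficient matrices chosen only for illustration:

```python
import numpy as np

# Monte Carlo check of the mean dynamics (3): simulate many sample paths of
# the unforced system (2) (v = 0) and compare the empirical mean of x_s with
# the deterministic recursion E x_{s+1} = (A + Abar) E x_s.
# All coefficient matrices below are hypothetical.
rng = np.random.default_rng(0)
n, n_paths, T = 2, 100_000, 10
A = np.array([[0.4, 0.1], [0.0, 0.3]])
Abar = 0.1 * np.eye(n)
C = 0.2 * np.eye(n)
Cbar = np.zeros((n, n))

x = np.tile([1.0, -1.0], (n_paths, 1))      # deterministic zeta_0
mean = x[0].copy()                          # E x_s from recursion (3)
for s in range(T):
    Ex = x.mean(axis=0)                     # empirical surrogate for E x_s
    w = rng.standard_normal((n_paths, 1))   # i.i.d. noise, E w = 0, E w^2 = 1
    x = x @ A.T + Ex @ Abar.T + w * (x @ C.T + Ex @ Cbar.T)
    mean = (A + Abar) @ mean

print(np.abs(x.mean(axis=0) - mean).max())  # small Monte Carlo error
```

The printed deviation shrinks as the number of sample paths grows, as expected from the law of large numbers.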
Theorem 1 
([11]). The following statements are equivalent:
(i) $[A,\bar A;C,\bar C]$ is $l^2$-stable;
(ii) For any $R > 0$, $R+\bar R > 0$, the Lyapunov equations
$$P = A^T P A + C^T P C + R, \quad Q = (A+\bar A)^T Q (A+\bar A) + (C+\bar C)^T P (C+\bar C) + R + \bar R \tag{5}$$
admit a unique solution $(P,Q)$ with $P, Q > 0$;
(iii) There exist $R > 0$, $R+\bar R > 0$ such that the Lyapunov equations (5) admit a unique solution $(P,Q)$ with $P, Q > 0$.
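Condition (ii) of Theorem 1 also suggests a numerical test for $l^2$-stability: run a fixed-point iteration on the coupled Lyapunov pair (5) and check that it converges to positive definite $(P,Q)$. The sketch below is ours (function name and data hypothetical), starting the iteration from zero:

```python
import numpy as np

def solve_coupled_lyapunov(A, Abar, C, Cbar, R, Rbar, tol=1e-12, max_iter=100_000):
    """Fixed-point iteration for the coupled Lyapunov equations (5):
         P = A^T P A + C^T P C + R,
         Q = (A+Abar)^T Q (A+Abar) + (C+Cbar)^T P (C+Cbar) + R + Rbar.
       The iteration converges when [A, Abar; C, Cbar] is l2-stable."""
    n = A.shape[0]
    At, Ct = A + Abar, C + Cbar
    P, Q = np.zeros((n, n)), np.zeros((n, n))
    for _ in range(max_iter):
        P_new = A.T @ P @ A + C.T @ P @ C + R
        Q_new = At.T @ Q @ At + Ct.T @ P_new @ Ct + R + Rbar
        if max(np.abs(P_new - P).max(), np.abs(Q_new - Q).max()) < tol:
            return P_new, Q_new
        P, Q = P_new, Q_new
    raise RuntimeError("iteration did not converge; the system may not be l2-stable")

# hypothetical stable example: both P and Q come out positive definite
A, C = 0.5 * np.eye(2), 0.3 * np.eye(2)
P, Q = solve_coupled_lyapunov(A, np.zeros((2, 2)), C, np.zeros((2, 2)),
                              np.eye(2), np.zeros((2, 2)))
```

For this scalar-like example the $P$-equation reduces to $p = 0.25p + 0.09p + 1$, so every diagonal entry of $P$ equals $1/0.66$.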
When $v_s \equiv 0$, $s\in\mathbb{N}$ in (1), $l^2$ stabilizability can be formulated as follows:
Definition 2 
([11]). $[A,\bar A,B_1,\bar B_1;C,\bar C,D_1,\bar D_1]$ is said to be closed-loop $l^2$-stabilizable if there exists a pair $(U,\bar U)$ such that, for any $\zeta_0\in\mathbb{R}^n$ and under the control $u_s = U x_s + \bar U\,\mathbb{E}x_s$, $s\in\mathbb{N}$, the closed-loop system
$$x_{s+1} = \big[(A+B_1U)x_s + (\bar A + B_1\bar U + \bar B_1 U + \bar B_1\bar U)\mathbb{E}x_s\big] + \big[(C+D_1U)x_s + (\bar C + D_1\bar U + \bar D_1 U + \bar D_1\bar U)\mathbb{E}x_s\big]\omega_s, \quad x_0 = \zeta_0, \quad s\in\mathbb{N}, \tag{6}$$
is $l^2$-stable. In this instance, we refer to $(U,\bar U)$ as a closed-loop $l^2$-stabilizer.
Below, we present the following linear operator:
$$\mathcal{L}(X) = \mathcal{A}X\mathcal{A}^T + \mathcal{C}X\mathcal{C}^T + \hat{\mathcal{C}}X\hat{\mathcal{C}}^T, \quad X\in\mathcal{H}_{2n}(\mathbb{R}),$$
where $\mathcal{H}_{2n}(\mathbb{R}) = \left\{ X = \begin{pmatrix} X_1 & 0 \\ 0 & X_2 \end{pmatrix} \,\middle|\, X_1, X_2 \in \mathcal{H}_n(\mathbb{R}) \right\}$, and
$$\mathcal{A} = \begin{pmatrix} A & 0 \\ 0 & A+\bar A \end{pmatrix}, \quad \mathcal{C} = \begin{pmatrix} C & 0 \\ 0 & 0 \end{pmatrix}, \quad \hat{\mathcal{C}} = \begin{pmatrix} 0 & C+\bar C \\ 0 & 0 \end{pmatrix}.$$
It is obvious that $\mathcal{L}$ is a linear positive operator, i.e., if $X\in\mathcal{H}_{2n}^{+}(\mathbb{R})$, then $\mathcal{L}(X)\in\mathcal{H}_{2n}^{+}(\mathbb{R})$, where $\mathcal{H}_{2n}^{+}(\mathbb{R})$ denotes the set of nonnegative definite matrices in $\mathcal{H}_{2n}(\mathbb{R})$. For any $H_1, H_2\in\mathcal{H}_{2n}(\mathbb{R})$, define the inner product $\langle H_1, H_2\rangle = \mathrm{Tr}(H_1 H_2)$. The adjoint operator of $\mathcal{L}$ is then given by
$$\mathcal{L}^{*}(X) = \mathcal{A}^T X \mathcal{A} + \mathcal{C}^T X \mathcal{C} + \hat{\mathcal{C}}^T X \hat{\mathcal{C}}, \quad X\in\mathcal{H}_{2n}(\mathbb{R}).$$
Furthermore, the spectrum of $\mathcal{L}$ is $\sigma(\mathcal{L}) := \{\lambda \mid \mathcal{L}(X) = \lambda X,\ X^T = X,\ X \neq 0\}$.
The following definition is about the exact detectability of discrete-time mean-field systems.
Definition 3 
([11]). In (2), let $v_s \equiv 0$, $s\in\mathbb{N}$. $[A,\bar A;C,\bar C\,|\,K]$ is said to be exactly detectable if, for any $S\in\mathbb{N}$, $z_s \equiv 0$ (a.s.) for all $s\in\mathbb{N}_S$ implies $\lim_{s\to\infty}\mathbb{E}\|x_s\|^2 = 0$.
Let
$$Y_s = \begin{pmatrix} \mathbb{E}\big[(x_s-\mathbb{E}x_s)(x_s-\mathbb{E}x_s)^T\big] & 0 \\ 0 & \mathbb{E}x_s(\mathbb{E}x_s)^T \end{pmatrix}.$$
It can be seen that $\lim_{s\to\infty}\mathbb{E}\|x_s\|^2 = 0$ is equivalent to $\lim_{s\to\infty} Y_s = 0$. By calculation, one has that
$$\mathbb{E}[z_s z_s^T] = \mathbb{E}\big[K(x_s-\mathbb{E}x_s)(x_s-\mathbb{E}x_s)^T K^T\big] + K(\mathbb{E}x_s)(\mathbb{E}x_s)^T K^T,$$
so $z_s \equiv 0$ if and only if $\mathcal{K}Y_s\mathcal{K}^T = 0$, with $\mathcal{K} = \begin{pmatrix} K & 0 \\ 0 & K \end{pmatrix}$. Now, we introduce the following dynamics:
$$Y_{s+1} = \mathcal{L}(Y_s), \quad Y_0 = \begin{pmatrix} \mathbb{E}\big[(\zeta_0-\mathbb{E}\zeta_0)(\zeta_0-\mathbb{E}\zeta_0)^T\big] & 0 \\ 0 & \mathbb{E}\zeta_0(\mathbb{E}\zeta_0)^T \end{pmatrix}, \quad Z_s = \mathcal{K}Y_s\mathcal{K}^T.$$
The following lemma extends the result of Theorem 5.6 in [11], and the complete proof is omitted here because we could easily demonstrate the following result using Theorem 3 of [35].
Lemma 1. 
$[A,\bar A;C,\bar C\,|\,K]$ is exactly detectable if and only if $\mathcal{K}X\mathcal{K}^T \neq 0$ for every $X \neq 0$ such that $\mathcal{L}(X) = \lambda X$ with $|\lambda| \geq 1$.
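The eigenvalue condition of Lemma 1 can be examined numerically. Using the identity $\mathrm{vec}(MXM^T) = (M\otimes M)\,\mathrm{vec}(X)$, the operator $\mathcal{L}$ defined above admits the matrix representation $\mathcal{A}\otimes\mathcal{A} + \mathcal{C}\otimes\mathcal{C} + \hat{\mathcal{C}}\otimes\hat{\mathcal{C}}$ acting on $\mathrm{vec}(X)$, so computing its spectrum reduces to an ordinary eigenvalue problem. A minimal sketch (helper name ours; data hypothetical):

```python
import numpy as np

def operator_matrix(A, Abar, C, Cbar):
    """Matrix of L(X) = Acal X Acal^T + Ccal X Ccal^T + Chat X Chat^T acting on
    vec(X), via vec(M X M^T) = (M kron M) vec(X). Note: this represents L on the
    full matrix space; restricting to block-diagonal X in H_{2n} filters the
    eigenvectors relevant to Lemma 1."""
    n = A.shape[0]
    Z = np.zeros((n, n))
    Acal = np.block([[A, Z], [Z, A + Abar]])
    Ccal = np.block([[C, Z], [Z, Z]])
    Chat = np.block([[Z, C + Cbar], [Z, Z]])
    return np.kron(Acal, Acal) + np.kron(Ccal, Ccal) + np.kron(Chat, Chat)

# scalar illustration (n = 1): the spectrum lies inside the unit disc,
# consistent with l2-stability of this hypothetical system
M = operator_matrix(np.array([[0.5]]), np.array([[0.0]]),
                    np.array([[0.3]]), np.array([[0.0]]))
rho = np.abs(np.linalg.eigvals(M)).max()
print(rho)  # spectral radius, here 0.5^2 + 0.3^2 = 0.34
```

Eigenvectors of this matrix associated with $|\lambda| \geq 1$ can then be reshaped into candidate matrices $X$ for the test $\mathcal{K}X\mathcal{K}^T \neq 0$.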
The following lemma links $l^2$-stability with the exact detectability of the uncontrolled system; the detailed proof can be found in [23].
Lemma 2. 
$[A,\bar A;C,\bar C]$ is $l^2$-stable if and only if $[A,\bar A;C,\bar C\,|\,K]$ is exactly detectable and the generalized Lyapunov equation
$$X = \mathcal{L}^{*}(X) + \mathcal{K}^T\mathcal{K}$$
admits a unique solution $X = \begin{pmatrix} X_1 & 0 \\ 0 & X_2 \end{pmatrix} \geq 0$.

3. Stochastic Bounded Real Lemma

In this section, we derive a mean-field stochastic bounded real lemma, which plays a crucial role in the analysis of the stochastic disturbance attenuation problem and of $H_2/H_\infty$ control problems. By utilizing two coupled Riccati difference equations, this lemma establishes an equivalent condition guaranteeing the stability of a mean-field stochastic system with an $H_\infty$ norm less than a given disturbance attenuation level $\gamma$.
Definition 4. 
In system (2), assume that the disturbance input is $v_s\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})$ and the controlled output is $z_s\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_z})$. The perturbation operator $L: l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}) \to l^2_\omega(\mathbb{N},\mathbb{R}^{n_z})$ is defined by
$$L v_s := K x(s;0,v), \quad v_s\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}),\ x_0 = 0,$$
with its norm
$$\|L\| = \sup_{\substack{v\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}) \\ v_s\neq 0,\ x_0=0}} \frac{\|z_s\|_{l^2_\omega(\mathbb{N},\mathbb{R}^{n_z})}}{\|v_s\|_{l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})}} = \sup_{\substack{v\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}) \\ v_s\neq 0,\ x_0=0}} \frac{\big(\sum_{s=0}^{\infty}\mathbb{E}\|Kx_s\|^2\big)^{1/2}}{\big(\sum_{s=0}^{\infty}\mathbb{E}\|v_s\|^2\big)^{1/2}}.$$
Remark 1. 
The infinite horizon $H_\infty$ gain is given by the norm of $L$. However, it is rather hard to compute the $H_\infty$ gain from this formula directly. Hence, the objective of $H_\infty$ analysis is to present a necessary and sufficient condition, in terms of the solution to the CDREs, ensuring that the norm of $L$ is below a prescribed level $\gamma > 0$, namely, the bounded real lemma.
The performance function for the infinite horizon $H_\infty$ control is defined as follows:
$$J^{\gamma^2}(\zeta_0, v) = \sum_{s=0}^{\infty}\mathbb{E}\big[\gamma^2\|v_s\|^2 - \|z_s\|^2\big].$$
Lemma 3. 
In system (2), assume that $S\in\mathbb{N}$ is given and that $P_s$ and $Q_s$ are arbitrary matrices in $\mathcal{H}_n(\mathbb{R})$ for $s = 0,1,\dots,S+1$. Then, for any $\zeta_0\in\mathbb{R}^n$ and $v_s\in l^2_\omega(\mathbb{N}_S,\mathbb{R}^{n_v})$, we have
$$J_S^{\gamma^2}(\zeta_0,v) = \sum_{s=0}^{S}\mathbb{E}\big[\gamma^2\|v_s\|^2 - \|z_s\|^2\big] = \sum_{s=0}^{S}\mathbb{E}\left[\begin{pmatrix} x_s-\mathbb{E}x_s \\ v_s-\mathbb{E}v_s \end{pmatrix}^T M_s^{\gamma^2}(P)\begin{pmatrix} x_s-\mathbb{E}x_s \\ v_s-\mathbb{E}v_s \end{pmatrix}\right] + \sum_{s=0}^{S}\mathbb{E}\left[\begin{pmatrix} \mathbb{E}x_s \\ \mathbb{E}v_s \end{pmatrix}^T G_s^{\gamma^2}(P,Q)\begin{pmatrix} \mathbb{E}x_s \\ \mathbb{E}v_s \end{pmatrix}\right] + \zeta_0^T Q_0 \zeta_0 - \mathbb{E}x_{S+1}^T Q_{S+1}\,\mathbb{E}x_{S+1} - \mathbb{E}\big[(x_{S+1}-\mathbb{E}x_{S+1})^T P_{S+1}(x_{S+1}-\mathbb{E}x_{S+1})\big],$$
where
$$\tilde A = A+\bar A, \quad \tilde B_2 = B_2+\bar B_2, \quad \tilde C = C+\bar C, \quad \tilde D_2 = D_2+\bar D_2,$$
$$M_s^{\gamma^2}(P) = \begin{pmatrix} -P_s + A^T P_{s+1} A + C^T P_{s+1} C - K^T K & A^T P_{s+1} B_2 + C^T P_{s+1} D_2 \\ B_2^T P_{s+1} A + D_2^T P_{s+1} C & \gamma^2 I + B_2^T P_{s+1} B_2 + D_2^T P_{s+1} D_2 \end{pmatrix},$$
and
$$G_s^{\gamma^2}(P,Q) = \begin{pmatrix} -Q_s + \tilde A^T Q_{s+1} \tilde A + \tilde C^T P_{s+1} \tilde C - K^T K & \tilde A^T Q_{s+1} \tilde B_2 + \tilde C^T P_{s+1} \tilde D_2 \\ \tilde B_2^T Q_{s+1} \tilde A + \tilde D_2^T P_{s+1} \tilde C & \gamma^2 I + \tilde B_2^T Q_{s+1} \tilde B_2 + \tilde D_2^T P_{s+1} \tilde D_2 \end{pmatrix}.$$
Proof of Lemma 3. 
This is identical to the proof of Lemma 3.1 of [34], so the details are omitted. □
For convenience, we define the following notations:
$$M^{\gamma^2}(P) = \begin{pmatrix} L_s(P) & K_s(P) \\ K_s(P)^T & H_s(P) \end{pmatrix}, \quad G^{\gamma^2}(P,Q) = \begin{pmatrix} \tilde L_s(P,Q) & \tilde K_s(P,Q) \\ \tilde K_s(P,Q)^T & \tilde H_s(P,Q) \end{pmatrix},$$
where
$$L_s(P) = -P + A^T P A + C^T P C - K^T K, \quad K_s(P) = A^T P B_2 + C^T P D_2, \quad H_s(P) = \gamma^2 I + B_2^T P B_2 + D_2^T P D_2,$$
$$\tilde L_s(P,Q) = -Q + \tilde A^T Q \tilde A + \tilde C^T P \tilde C - K^T K, \quad \tilde K_s(P,Q) = \tilde A^T Q \tilde B_2 + \tilde C^T P \tilde D_2, \quad \tilde H_s(P,Q) = \gamma^2 I + \tilde B_2^T Q \tilde B_2 + \tilde D_2^T P \tilde D_2.$$
Theorem 2. 
Assume that $[A,\bar A;C,\bar C]$ is $l^2$-stable. For a given $\gamma > 0$, the following generalized difference Riccati equation (GDRE)
$$\begin{cases} P = A^T P A + C^T P C - K^T K - (A^T P B_2 + C^T P D_2)(\gamma^2 I + B_2^T P B_2 + D_2^T P D_2)^{-1}[\star]^T, \\ Q = \tilde A^T Q \tilde A + \tilde C^T P \tilde C - K^T K - (\tilde A^T Q \tilde B_2 + \tilde C^T P \tilde D_2)(\gamma^2 I + \tilde B_2^T Q \tilde B_2 + \tilde D_2^T P \tilde D_2)^{-1}[\star]^T, \\ H_s(P) = \gamma^2 I + B_2^T P B_2 + D_2^T P D_2 > 0, \\ \tilde H_s(P,Q) = \gamma^2 I + \tilde B_2^T Q \tilde B_2 + \tilde D_2^T P \tilde D_2 > 0 \end{cases} \tag{10}$$
admits a stabilizing solution $(P,Q)$ with $P \leq 0$ and $Q \leq 0$ if and only if $\|L\| < \gamma$. Here, $[\star]$ denotes the matrix $M$ in an expression of the form $MGM^T$; in what follows, this symbol carries the same meaning.
Proof of Theorem 2. 
Necessity. Consider the associated finite horizon GDRE
$$\begin{cases} P_s = A^T P_{s+1} A + C^T P_{s+1} C - K^T K - K_s(P_{s+1}) H_s(P_{s+1})^{-1}[\star]^T, \\ Q_s = \tilde A^T Q_{s+1} \tilde A + \tilde C^T P_{s+1} \tilde C - K^T K - \tilde K_s(P_{s+1},Q_{s+1}) \tilde H_s(P_{s+1},Q_{s+1})^{-1}[\star]^T, \\ H_s(P_{s+1}) = \gamma^2 I + B_2^T P_{s+1} B_2 + D_2^T P_{s+1} D_2 > 0, \\ \tilde H_s(P_{s+1},Q_{s+1}) = \gamma^2 I + \tilde B_2^T Q_{s+1} \tilde B_2 + \tilde D_2^T P_{s+1} \tilde D_2 > 0, \\ P_{S+1} = 0, \quad Q_{S+1} = 0, \quad s\in\mathbb{N}_S, \end{cases} \tag{11}$$
and the corresponding cost functional
$$J_S^{\gamma^2}(\zeta_0,v) = \sum_{s=0}^{S}\mathbb{E}\big[\gamma^2\|v_s\|^2 - \|z_s\|^2\big] = \sum_{s=0}^{S}\mathbb{E}\big[\gamma^2 v_s^T v_s - x_s^T K^T K x_s\big].$$
According to Lemma 3 and [13], we have
$$\min_{v_s\in l^2_\omega(\mathbb{N}_S,\mathbb{R}^{n_v})} J_S^{\gamma^2}(\zeta_0,v) = \zeta_0^T Q^S(0)\zeta_0,$$
and the worst-case disturbance can be written as
$$v_S^{*}(s) = -\tilde H_s(P^S,Q^S)^{-1}\tilde K_s(P^S,Q^S)^T\,\mathbb{E}x_S^{*}(s) - H_s(P^S)^{-1}K_s(P^S)^T\big[x_S^{*}(s) - \mathbb{E}x_S^{*}(s)\big],$$
where $\{x_S^{*}(s), s = 0,1,\dots,S+1\}$ is the corresponding closed-loop state trajectory. From [7], one can obtain that $\lim_{S\to\infty}P^S(s) = P \leq 0$. Moreover, because $Q^S(s)$ is bounded from below and decreases as $S$ increases, we can derive that
$$\lim_{S\to\infty}Q^S(s) = \lim_{S\to\infty}Q^{S-s}(0) = Q.$$
Therefore, GDRE (10) has a solution $(P,Q)$ with $P \leq 0$, $Q \leq 0$. By replacing $K$ with $K_\delta = \begin{pmatrix} K \\ \delta I \end{pmatrix}$ and $z_s$ with $z_{s,\delta} = K_\delta x_s$, we obtain the associated perturbation operator $L_\delta$ and the cost functional
$$J_{S,\delta}^{\gamma^2}(\zeta_0,v) = \sum_{s=0}^{S}\mathbb{E}\big[\gamma^2\|v_s\|^2 - \|z_{s,\delta}\|^2\big] = \sum_{s=0}^{S}\mathbb{E}\big[\gamma^2 v_s^T v_s - x_s^T K^T K x_s - \delta^2 x_s^T x_s\big].$$
Since (2) is $l^2$-stable, $x_s\in l^2_\omega(\mathbb{N},\mathbb{R}^n)$ for every $v_s\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})$; thus, $\|L_\delta\| < \gamma$ for sufficiently small $\delta > 0$. Similarly, it can be inferred that the GDRE
$$\begin{cases} P_s = A^T P_{s+1} A + C^T P_{s+1} C - \delta^2 I - K^T K - K_s(P_{s+1}) H_s(P_{s+1})^{-1}[\star]^T, \\ Q_s = \tilde A^T Q_{s+1} \tilde A + \tilde C^T P_{s+1} \tilde C - K^T K - \delta^2 I - \tilde K_s(P_{s+1},Q_{s+1}) \tilde H_s(P_{s+1},Q_{s+1})^{-1}[\star]^T, \\ H_s(P_{s+1}) = \gamma^2 I + B_2^T P_{s+1} B_2 + D_2^T P_{s+1} D_2 > 0, \\ \tilde H_s(P_{s+1},Q_{s+1}) = \gamma^2 I + \tilde B_2^T Q_{s+1} \tilde B_2 + \tilde D_2^T P_{s+1} \tilde D_2 > 0, \\ P_{S+1} = 0, \quad Q_{S+1} = 0, \quad s\in\mathbb{N}_S, \end{cases}$$
admits a unique solution $(P^{S,\delta}, Q^{S,\delta})$, and
$$\min_{v_s\in l^2_\omega(\mathbb{N}_S,\mathbb{R}^{n_v})} J_{S,\delta}^{\gamma^2}(\zeta_0,v) = \zeta_0^T Q^{S,\delta}(0)\zeta_0.$$
By the time invariance of $P^S(s)$, $Q^S(s)$, $P^{S,\delta}(s)$, and $Q^{S,\delta}(s)$ on $\mathbb{N}_{S+1}$, i.e., $P^S(s) = P^{S-s}(0)$, $Q^S(s) = Q^{S-s}(0)$, $P^{S,\delta}(s) = P^{S-s,\delta}(0)$, and $Q^{S,\delta}(s) = Q^{S-s,\delta}(0)$ for $0 \leq s \leq S$, it can be inferred that, for any $\zeta_0\in\mathbb{R}^n$,
$$\zeta_0^T Q^{S,\delta}(s)\zeta_0 = \zeta_0^T Q^{S-s,\delta}(0)\zeta_0 = \min_{v_s\in l^2_\omega(\mathbb{N}_{S-s},\mathbb{R}^{n_v})} J_{S-s,\delta}^{\gamma^2}(\zeta_0,v) \leq \min_{v_s\in l^2_\omega(\mathbb{N}_{S-s},\mathbb{R}^{n_v})} J_{S-s}^{\gamma^2}(\zeta_0,v) = \zeta_0^T Q^{S-s}(0)\zeta_0 = \zeta_0^T Q^S(s)\zeta_0.$$
As $\zeta_0$ is arbitrary, we have $P^{S,\delta}(s) \leq P^S(s)$ and $Q^{S,\delta}(s) \leq Q^S(s)$ for $s\in\mathbb{N}_S$. Applying the time invariance of $P^{S,\delta}(s)$ and $Q^{S,\delta}(s)$ on $\mathbb{N}_{S+1}$ again, it follows that
$$\lim_{S\to\infty}P^{S,\delta}(s) = \lim_{S\to\infty}P^{S-s,\delta}(0) = P_\delta, \quad \lim_{S\to\infty}Q^{S,\delta}(s) = \lim_{S\to\infty}Q^{S-s,\delta}(0) = Q_\delta,$$
with $P \geq P_\delta$ and $Q \geq Q_\delta$. Moreover, $(P,Q)$ and $(P_\delta,Q_\delta)$ satisfy GDRE (10) and the following GDRE:
$$\begin{cases} P_\delta = A^T P_\delta A + C^T P_\delta C - \delta^2 I - K^T K - (A^T P_\delta B_2 + C^T P_\delta D_2)(\gamma^2 I + B_2^T P_\delta B_2 + D_2^T P_\delta D_2)^{-1}[\star]^T, \\ Q_\delta = \tilde A^T Q_\delta \tilde A + \tilde C^T P_\delta \tilde C - K^T K - \delta^2 I - (\tilde A^T Q_\delta \tilde B_2 + \tilde C^T P_\delta \tilde D_2)(\gamma^2 I + \tilde B_2^T Q_\delta \tilde B_2 + \tilde D_2^T P_\delta \tilde D_2)^{-1}[\star]^T, \\ H_s(P_\delta) = \gamma^2 I + B_2^T P_\delta B_2 + D_2^T P_\delta D_2 > 0, \\ \tilde H_s(P_\delta,Q_\delta) = \gamma^2 I + \tilde B_2^T Q_\delta \tilde B_2 + \tilde D_2^T P_\delta \tilde D_2 > 0, \end{cases} \tag{14}$$
respectively. Next, we shall prove that $[A+B_2V, \bar A+B_2\bar V+\bar B_2V+\bar B_2\bar V; C+D_2V, \bar C+D_2\bar V+\bar D_2V+\bar D_2\bar V]$ is $l^2$-stable, where $V = -H_s(P)^{-1}K_s(P)^T$ and $\tilde V = V+\bar V = -\tilde H_s(P,Q)^{-1}\tilde K_s(P,Q)^T$, with $V_\delta$, $\tilde V_\delta$ defined analogously from $(P_\delta,Q_\delta)$. To this end, it should be noted that (10) and (14) can be rewritten as
$$\begin{cases} P = (A+B_2V)^T P (A+B_2V) + (C+D_2V)^T P (C+D_2V) - K^T K + \gamma^2 V^T V, \\ Q = (\tilde A+\tilde B_2\tilde V)^T Q (\tilde A+\tilde B_2\tilde V) + (\tilde C+\tilde D_2\tilde V)^T P (\tilde C+\tilde D_2\tilde V) - K^T K + \gamma^2 \tilde V^T \tilde V, \end{cases} \tag{15}$$
and
$$\begin{cases} P_\delta = (A+B_2V_\delta)^T P_\delta (A+B_2V_\delta) + (C+D_2V_\delta)^T P_\delta (C+D_2V_\delta) - K^T K - \delta^2 I + \gamma^2 V_\delta^T V_\delta, \\ Q_\delta = (\tilde A+\tilde B_2\tilde V_\delta)^T Q_\delta (\tilde A+\tilde B_2\tilde V_\delta) + (\tilde C+\tilde D_2\tilde V_\delta)^T P_\delta (\tilde C+\tilde D_2\tilde V_\delta) - K^T K - \delta^2 I + \gamma^2 \tilde V_\delta^T \tilde V_\delta. \end{cases} \tag{16}$$
By subtracting (16) from (15), one obtains
$$-(P-P_\delta) + (A+B_2V)^T (P-P_\delta)(A+B_2V) + (C+D_2V)^T (P-P_\delta)(C+D_2V) = -(V-V_\delta)^T(\gamma^2 I + B_2^T P_\delta B_2 + D_2^T P_\delta D_2)(V-V_\delta) - \delta^2 I. \tag{17}$$
We claim that $P - P_\delta$ must be strictly positive definite. If not, since $P - P_\delta \geq 0$, there exists $\zeta_0 \neq 0$ such that $(P-P_\delta)\zeta_0 = 0$. Multiplying (17) by $\zeta_0^T$ on the left and by $\zeta_0$ on the right yields
$$0 \leq \zeta_0^T(A+B_2V)^T(P-P_\delta)(A+B_2V)\zeta_0 + \zeta_0^T(C+D_2V)^T(P-P_\delta)(C+D_2V)\zeta_0 = -\zeta_0^T(V-V_\delta)^T(\gamma^2 I + B_2^T P_\delta B_2 + D_2^T P_\delta D_2)(V-V_\delta)\zeta_0 - \delta^2\zeta_0^T\zeta_0 < 0,$$
which is a contradiction. Hence, $P - P_\delta > 0$. By a similar argument, it can be proven that $Q - Q_\delta > 0$. Let $R_1 = P - P_\delta$ and $R_2 = Q - Q_\delta$. Since
$$-R_1 + (A+B_2V)^T R_1 (A+B_2V) + (C+D_2V)^T R_1 (C+D_2V) < 0, \quad -R_2 + (\tilde A+\tilde B_2\tilde V)^T R_2 (\tilde A+\tilde B_2\tilde V) + (\tilde C+\tilde D_2\tilde V)^T R_1 (\tilde C+\tilde D_2\tilde V) < 0,$$
then, by Theorem 1, we deduce that $[A+B_2V, \bar A+B_2\bar V+\bar B_2V+\bar B_2\bar V; C+D_2V, \bar C+D_2\bar V+\bar D_2V+\bar D_2\bar V]$ is $l^2$-stable. The necessity is proven.
Sufficiency. Considering (11), (13), and Lemma 3, we have
$$J_S^{\gamma^2}(\zeta_0,v) = \sum_{s=0}^{S}\mathbb{E}\big[\gamma^2\|v_s\|^2 - \|z_s\|^2\big] = \sum_{s=0}^{S}\mathbb{E}\left[\begin{pmatrix} x_s-\mathbb{E}x_s \\ v_s-\mathbb{E}v_s \end{pmatrix}^T M^{\gamma^2}(P)\begin{pmatrix} x_s-\mathbb{E}x_s \\ v_s-\mathbb{E}v_s \end{pmatrix}\right] + \sum_{s=0}^{S}\mathbb{E}\left[\begin{pmatrix} \mathbb{E}x_s \\ \mathbb{E}v_s \end{pmatrix}^T G^{\gamma^2}(P,Q)\begin{pmatrix} \mathbb{E}x_s \\ \mathbb{E}v_s \end{pmatrix}\right] + \zeta_0^T Q\zeta_0 - \mathbb{E}x_{S+1}^T Q\,\mathbb{E}x_{S+1} - \mathbb{E}\big[(x_{S+1}-\mathbb{E}x_{S+1})^T P(x_{S+1}-\mathbb{E}x_{S+1})\big]$$
$$= \zeta_0^T Q\zeta_0 - \mathbb{E}x_{S+1}^T Q\,\mathbb{E}x_{S+1} - \mathbb{E}\big[(x_{S+1}-\mathbb{E}x_{S+1})^T P(x_{S+1}-\mathbb{E}x_{S+1})\big] + \sum_{s=0}^{S}\mathbb{E}\big[(\mathbb{E}v_s-\mathbb{E}v_s^{*})^T\tilde H_s(P,Q)(\mathbb{E}v_s-\mathbb{E}v_s^{*})\big] + \sum_{s=0}^{S}\mathbb{E}\big[\big((v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big)^T H_s(P)\big((v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big)\big],$$
where
$$v_s^{*}-\mathbb{E}v_s^{*} = -H_s(P)^{-1}K_s(P)^T(x_s-\mathbb{E}x_s), \quad \mathbb{E}v_s^{*} = -\tilde H_s(P,Q)^{-1}\tilde K_s(P,Q)^T\,\mathbb{E}x_s.$$
Because $[A,\bar A;C,\bar C]$ is $l^2$-stable, it can be inferred that $x_s\in l^2_\omega(\mathbb{N},\mathbb{R}^n)$ whenever $v_s\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})$. Letting $S\to\infty$, we obtain $\lim_{S\to\infty}\mathbb{E}\|x_{S+1}\|^2 = 0$. So
$$J^{\gamma^2}(\zeta_0,v) = \zeta_0^T Q\zeta_0 + \sum_{s=0}^{\infty}\mathbb{E}\big[(\mathbb{E}v_s-\mathbb{E}v_s^{*})^T\tilde H_s(P,Q)(\mathbb{E}v_s-\mathbb{E}v_s^{*})\big] + \sum_{s=0}^{\infty}\mathbb{E}\big[\big((v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big)^T H_s(P)\big((v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big)\big] \geq J^{\gamma^2}(\zeta_0, v = v^{*}) = \zeta_0^T Q\zeta_0.$$
When $\zeta_0 = 0$, $\|L\| \leq \gamma$ can be derived from $J^{\gamma^2}(0,v) \geq 0$. Next, in order to prove that $\|L\| < \gamma$, we define the following operators:
$$L_1: l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}) \to l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}), \quad L_1(v_s-\mathbb{E}v_s) = (v_s-\mathbb{E}v_s) - (v_s^{*}-\mathbb{E}v_s^{*}),$$
$$\bar L_1: l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}) \to l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}), \quad \bar L_1(\mathbb{E}v_s) = \mathbb{E}v_s - \mathbb{E}v_s^{*}.$$
According to (3) and (4), it can be seen that
$$\mathbb{E}v_s - \mathbb{E}v_s^{*} = \mathbb{E}v_s + \tilde H_s(P,Q)^{-1}\tilde K_s(P,Q)^T\,\mathbb{E}x_s$$
and
$$(v_s-\mathbb{E}v_s) - (v_s^{*}-\mathbb{E}v_s^{*}) = (v_s-\mathbb{E}v_s) + H_s(P)^{-1}K_s(P)^T(x_s-\mathbb{E}x_s).$$
Then, $L_1^{-1}$ and $\bar L_1^{-1}$ exist. Therefore, it holds that
$$\mathbb{E}x_{s+1} = \big[(A+\bar A) - (B_2+\bar B_2)\tilde H_s(P,Q)^{-1}\tilde K_s(P,Q)^T\big]\mathbb{E}x_s + (B_2+\bar B_2)(\mathbb{E}v_s - \mathbb{E}v_s^{*}), \quad \mathbb{E}x_0 = 0,$$
with $\mathbb{E}v_s = -\tilde H_s(P,Q)^{-1}\tilde K_s(P,Q)^T\,\mathbb{E}x_s + (\mathbb{E}v_s - \mathbb{E}v_s^{*})$, and
$$x_{s+1}-\mathbb{E}x_{s+1} = \big[A - B_2 H_s(P)^{-1}K_s(P)^T\big](x_s-\mathbb{E}x_s) + B_2\big[(v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big] + \big\{\big[C - D_2 H_s(P)^{-1}K_s(P)^T\big](x_s-\mathbb{E}x_s) + D_2\big[(v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big] + \big[\tilde C - \tilde D_2\tilde H_s(P,Q)^{-1}\tilde K_s(P,Q)^T\big]\mathbb{E}x_s + \tilde D_2\big[\mathbb{E}v_s - \mathbb{E}v_s^{*}\big]\big\}\omega_s, \quad x_0-\mathbb{E}x_0 = 0,$$
with $v_s - \mathbb{E}v_s = -H_s(P)^{-1}K_s(P)^T(x_s-\mathbb{E}x_s) + \big[(v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big]$. Since $H_s(P) > 0$ and $\tilde H_s(P,Q) > 0$, we may take $\epsilon > 0$ such that $H_s(P) \geq \epsilon I$ and $\tilde H_s(P,Q) \geq \epsilon I$. Because $L_1^{-1}$ and $\bar L_1^{-1}$ exist, there exists a constant $c > 0$ such that
$$J^{\gamma^2}(0,v) = \sum_{s=0}^{\infty}\mathbb{E}\big[(\mathbb{E}v_s-\mathbb{E}v_s^{*})^T\tilde H_s(P,Q)(\mathbb{E}v_s-\mathbb{E}v_s^{*})\big] + \sum_{s=0}^{\infty}\mathbb{E}\big[\big((v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big)^T H_s(P)\big((v_s-\mathbb{E}v_s)-(v_s^{*}-\mathbb{E}v_s^{*})\big)\big] \geq \epsilon\big[\|L_1(v_s-\mathbb{E}v_s)\|^2_{l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})} + \|\bar L_1(\mathbb{E}v_s)\|^2_{l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})}\big] \geq c\big[\|v_s-\mathbb{E}v_s\|^2_{l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})} + \|\mathbb{E}v_s\|^2_{l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})}\big] \geq c\|v_s\|^2_{l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})} > 0$$
for all $v \neq 0$, which shows that $\|L\| < \gamma$. The proof is completed. □
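The necessity argument constructs $(P,Q)$ as the limit of the finite horizon recursion (11) started from zero terminal data; since the recursion is time-invariant, repeatedly applying the one-step map is equivalent to letting $S\to\infty$. This suggests the following numerical sketch for computing a candidate stationary solution of GDRE (10). The code is ours (not the algorithm of Section 5), and convergence is only expected when $\|L\| < \gamma$:

```python
import numpy as np

def gdre_solve(A, Abar, B2, B2bar, C, Cbar, D2, D2bar, K, gamma,
               tol=1e-12, max_iter=100_000):
    """Iterate the backward recursion (11) from P = Q = 0 until it settles,
    returning a candidate stationary solution (P, Q) of GDRE (10)."""
    n = A.shape[0]
    At, Bt = A + Abar, B2 + B2bar
    Ct, Dt = C + Cbar, D2 + D2bar
    g2 = gamma ** 2 * np.eye(B2.shape[1])
    P, Q = np.zeros((n, n)), np.zeros((n, n))
    for _ in range(max_iter):
        H = g2 + B2.T @ P @ B2 + D2.T @ P @ D2        # H_s(P), must stay > 0
        Kp = A.T @ P @ B2 + C.T @ P @ D2              # K_s(P)
        P_new = (A.T @ P @ A + C.T @ P @ C - K.T @ K
                 - Kp @ np.linalg.solve(H, Kp.T))
        Ht = g2 + Bt.T @ Q @ Bt + Dt.T @ P @ Dt       # tilde H_s(P, Q)
        Kq = At.T @ Q @ Bt + Ct.T @ P @ Dt            # tilde K_s(P, Q)
        Q_new = (At.T @ Q @ At + Ct.T @ P @ Ct - K.T @ K
                 - Kq @ np.linalg.solve(Ht, Kq.T))
        if max(np.abs(P_new - P).max(), np.abs(Q_new - Q).max()) < tol:
            return P_new, Q_new
        P, Q = P_new, Q_new
    raise RuntimeError("GDRE iteration did not converge")
```

For stable hypothetical scalar data with small gains and a generous $\gamma$, the iterates decrease monotonically from zero, matching the sign convention $P \leq 0$, $Q \leq 0$ of Theorem 2.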

4. Infinite Horizon Mean-Field Stochastic $H_2/H_\infty$ Control

In this section, we handle the infinite horizon $H_2/H_\infty$ control problem for system (1). The design objective is to find a controller that can not only eliminate the effect of the disturbance, but also minimize the output energy when the worst-case disturbance is applied, while ensuring internal stability. Hence, mixed $H_2/H_\infty$ control design, as a multiobjective design method, is better suited to the needs of engineering practice.
Given a disturbance attenuation level $\gamma > 0$, the corresponding performances are characterized by
$$J_1(u,v) = \sum_{s=0}^{\infty}\mathbb{E}\big[\gamma^2\|v_s\|^2 - \|z_s\|^2\big] \tag{18}$$
and
$$J_2(u,v) = \sum_{s=0}^{\infty}\mathbb{E}\|z_s\|^2. \tag{19}$$
The infinite horizon $H_2/H_\infty$ control problem of system (1) can be expressed as follows. Given $\gamma > 0$, find a state feedback control $u_s^{*}\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_u})$ such that the following are achieved:
(i)
$u_s^{*}$ stabilizes system (1), i.e., when $v_s \equiv 0$ and $u_s = u_s^{*}$, the trajectory of (1) with any initial value $x_0 = \zeta_0$ satisfies $\lim_{s\to\infty}\mathbb{E}\|x_s\|^2 = 0$;
(ii)
$$\|L_{u^{*}}\| = \sup_{\substack{v\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v}) \\ v_s\neq 0,\ x_0=0}} \frac{\Big(\sum_{s=0}^{\infty}\mathbb{E}\big[\|Kx_s\|^2 + \|u_s^{*}\|^2\big]\Big)^{1/2}}{\Big(\sum_{s=0}^{\infty}\mathbb{E}\|v_s\|^2\Big)^{1/2}} < \gamma;$$
(iii)
When the worst-case disturbance $v_s^{*}\in l^2_\omega(\mathbb{N},\mathbb{R}^{n_v})$ is applied to (1), $u_s^{*}$ minimizes the output energy $J_2(u,v^{*}) = \sum_{s=0}^{\infty}\mathbb{E}\|z_s\|^2$.
The infinite horizon $H_2/H_\infty$ control problem for the stochastic mean-field system (1) is said to be solvable if the above $(u^{*},v^{*})$ exist. Clearly, $(u^{*},v^{*})$ form a Nash equilibrium of (18) and (19), which satisfies
$$J_1(u^{*},v^{*}) \leq J_1(u^{*},v), \quad J_2(u^{*},v^{*}) \leq J_2(u,v^{*}). \tag{20}$$
The following four coupled matrix-valued equations are introduced before the main result is presented:
$$\left\{ \begin{aligned} P_1 = {} & (A + B_1 U)^T P_1 (A + B_1 U) + (C + D_1 U)^T P_1 (C + D_1 U) - K^T K - U^T U \\ & - K_u(P_1) H_u(P_1)^{-1} K_u(P_1)^T, \\ Q_1 = {} & (\tilde{A} + \tilde{B}_1 \tilde{U})^T Q_1 (\tilde{A} + \tilde{B}_1 \tilde{U}) + (\tilde{C} + \tilde{D}_1 \tilde{U})^T P_1 (\tilde{C} + \tilde{D}_1 \tilde{U}) - K^T K - \tilde{U}^T \tilde{U} \\ & - \tilde{K}_u(P_1, Q_1) \tilde{H}_u(P_1, Q_1)^{-1} \tilde{K}_u(P_1, Q_1)^T, \\ & H_u(P_1) > 0, \quad \tilde{H}_u(P_1, Q_1) > 0, \quad s \in N, \end{aligned} \right. \tag{20}$$
$$V = -H_u(P_1)^{-1} K_u(P_1)^T, \qquad \tilde{V} = V + \bar{V} = -\tilde{H}_u(P_1, Q_1)^{-1} \tilde{K}_u(P_1, Q_1)^T, \tag{21}$$
$$\left\{ \begin{aligned} P_2 = {} & (A + B_2 V)^T P_2 (A + B_2 V) + (C + D_2 V)^T P_2 (C + D_2 V) + K^T K + I \\ & - K_v(P_2) H_v(P_2)^{-1} K_v(P_2)^T, \\ Q_2 = {} & (\tilde{A} + \tilde{B}_2 \tilde{V})^T Q_2 (\tilde{A} + \tilde{B}_2 \tilde{V}) + (\tilde{C} + \tilde{D}_2 \tilde{V})^T P_2 (\tilde{C} + \tilde{D}_2 \tilde{V}) + K^T K + I \\ & - \tilde{K}_v(P_2, Q_2) \tilde{H}_v(P_2, Q_2)^{-1} \tilde{K}_v(P_2, Q_2)^T, \\ & H_v(P_2) > 0, \quad \tilde{H}_v(P_2, Q_2) > 0, \quad s \in N, \end{aligned} \right. \tag{22}$$
$$U = -H_v(P_2)^{-1} K_v(P_2)^T, \qquad \tilde{U} = U + \bar{U} = -\tilde{H}_v(P_2, Q_2)^{-1} \tilde{K}_v(P_2, Q_2)^T, \tag{23}$$
where
$$\begin{aligned} H_u(P_1) & = \gamma^2 I + B_2^T P_1 B_2 + D_2^T P_1 D_2, \\ K_u(P_1) & = (A + B_1 U)^T P_1 B_2 + (C + D_1 U)^T P_1 D_2, \\ \tilde{H}_u(P_1, Q_1) & = \gamma^2 I + \tilde{B}_2^T Q_1 \tilde{B}_2 + \tilde{D}_2^T P_1 \tilde{D}_2, \\ \tilde{K}_u(P_1, Q_1) & = (\tilde{A} + \tilde{B}_1 \tilde{U})^T Q_1 \tilde{B}_2 + (\tilde{C} + \tilde{D}_1 \tilde{U})^T P_1 \tilde{D}_2, \end{aligned}$$
and
$$\begin{aligned} H_v(P_2) & = I + B_1^T P_2 B_1 + D_1^T P_2 D_1, \\ K_v(P_2) & = (A + B_2 V)^T P_2 B_1 + (C + D_2 V)^T P_2 D_1, \\ \tilde{H}_v(P_2, Q_2) & = I + \tilde{B}_1^T Q_2 \tilde{B}_1 + \tilde{D}_1^T P_2 \tilde{D}_1, \\ \tilde{K}_v(P_2, Q_2) & = (\tilde{A} + \tilde{B}_2 \tilde{V})^T Q_2 \tilde{B}_1 + (\tilde{C} + \tilde{D}_2 \tilde{V})^T P_2 \tilde{D}_1. \end{aligned}$$
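For orientation, in the one-dimensional case (all coefficients scalars, with the shorthand $\tilde{a} = a + \bar{a}$, $\tilde{b}_2 = b_2 + \bar{b}_2$, and so on), the $u$-related quantities above reduce to

```latex
\begin{aligned}
H_u(P_1) &= \gamma^2 + (b_2^2 + d_2^2)P_1, &
K_u(P_1) &= \big[(a + b_1U)b_2 + (c + d_1U)d_2\big]P_1, \\
\tilde{H}_u(P_1, Q_1) &= \gamma^2 + \tilde{b}_2^2 Q_1 + \tilde{d}_2^2 P_1, &
\tilde{K}_u(P_1, Q_1) &= (\tilde{a} + \tilde{b}_1\tilde{U})\tilde{b}_2 Q_1
                         + (\tilde{c} + \tilde{d}_1\tilde{U})\tilde{d}_2 P_1,
\end{aligned}
\qquad
V = -\frac{K_u(P_1)}{H_u(P_1)}, \quad
\tilde{V} = -\frac{\tilde{K}_u(P_1, Q_1)}{\tilde{H}_u(P_1, Q_1)}.
```

The $v$-related quantities $H_v$, $K_v$, $\tilde{H}_v$, $\tilde{K}_v$ specialize in exactly the same way, with $\gamma^2$ replaced by 1 and the roles of $(B_1, D_1)$ and $(B_2, D_2)$ exchanged.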
The following lemma will be used in the proof of our main result.
Lemma 4. 
Define the following matrices:
$$\bar{K}_1 = \begin{bmatrix} K & 0 \\ U & 0 \\ I & 0 \end{bmatrix}, \qquad \tilde{K}_1 = \begin{bmatrix} K & 0 \\ U + \bar{U} & 0 \\ I & 0 \end{bmatrix},$$
and
$$\bar{K}_2 = \begin{bmatrix} K & 0 \\ U & 0 \\ H_u(P_1)^{-\frac{1}{2}} K_u(P_1)^T & 0 \end{bmatrix}, \qquad \tilde{K}_2 = \begin{bmatrix} K & 0 \\ U + \bar{U} & 0 \\ \tilde{H}_u(P_1, Q_1)^{-\frac{1}{2}} \tilde{K}_u(P_1, Q_1)^T & 0 \end{bmatrix}.$$
Let
$$K_1 = \begin{bmatrix} \bar{K}_1 & 0 \\ 0 & \tilde{K}_1 \end{bmatrix}, \qquad K_2 = \begin{bmatrix} \bar{K}_2 & 0 \\ 0 & \tilde{K}_2 \end{bmatrix}.$$
Then, the following statements hold:
(i)
If [ A , A ¯ ; C , C ¯ | K ] is exactly detectable, so is [ A + B 1 U , A ¯ + B ¯ 1 U ˜ + B 1 U ¯ ; C + D 1 U , C ¯ + D ¯ 1 U ˜ + D 1 U ¯ | K 2 ] ;
(ii)
If $[A + B_2V, \bar{A} + \bar{B}_2\tilde{V} + B_2\bar{V}; C + D_2V, \bar{C} + \bar{D}_2\tilde{V} + D_2\bar{V} \mid K]$ is exactly detectable, so is $[A + B_1U + B_2V, \bar{A} + B_1\bar{U} + B_2\bar{V} + \bar{B}_1\tilde{U} + \bar{B}_2\tilde{V}; C + D_1U + D_2V, \bar{C} + D_1\bar{U} + D_2\bar{V} + \bar{D}_1\tilde{U} + \bar{D}_2\tilde{V} \mid K_1]$.
Proof of Lemma 4. 
Suppose that $[A, \bar{A}; C, \bar{C} \mid K]$ is exactly detectable, but $[A + B_1U, \bar{A} + \bar{B}_1\tilde{U} + B_1\bar{U}; C + D_1U, \bar{C} + \bar{D}_1\tilde{U} + D_1\bar{U} \mid K_2]$ is not. According to Lemma 3, there exists $X \neq 0$ such that $K_2 X K_2^T = 0$. Since $K_2 X K_2^T = 0$ implies $K X K^T = 0$, this contradicts the exact detectability of $[A, \bar{A}; C, \bar{C} \mid K]$. Along the same lines as the proof of (i), statement (ii) can be demonstrated. □
Theorem 3. 
For (1), suppose that the coupled matrix-valued Equations (20)–(23) admit a solution $(P_1, Q_1; P_2, Q_2; U, \bar{U}; V, \bar{V})$ with $P_1 < 0$, $Q_1 < 0$, $P_2 > 0$, and $Q_2 > 0$. If $[A, \bar{A}; C, \bar{C} \mid K]$ and $[A + B_2V, \bar{A} + \bar{B}_2\tilde{V} + B_2\bar{V}; C + D_2V, \bar{C} + \bar{D}_2\tilde{V} + D_2\bar{V} \mid K]$ are exactly detectable, then the H2/H∞ control problem has a pair of solutions $(u_s^*, v_s^*)$ with
$$u_s^* = U x_s + \bar{U} E x_s, \qquad v_s^* = V x_s + \bar{V} E x_s.$$
Proof of Theorem 3. 
We first define the following matrices:
$$\begin{aligned} & \hat{P}_1 = \begin{bmatrix} P_1 & 0 \\ 0 & Q_1 \end{bmatrix}, \quad A_1 = \begin{bmatrix} A + B_1U & 0 \\ 0 & \tilde{A} + \tilde{B}_1\tilde{U} \end{bmatrix}, \quad C_1 = \begin{bmatrix} C + D_1U & 0 \\ 0 & 0 \end{bmatrix}, \\ & \hat{C}_1 = \begin{bmatrix} 0 & \tilde{C} + \tilde{D}_1\tilde{U} \\ 0 & 0 \end{bmatrix}, \quad \mathcal{U} = \begin{bmatrix} U & 0 \\ 0 & \tilde{U} \end{bmatrix}, \quad H_1 = \begin{bmatrix} H_u(P_1)^{-\frac{1}{2}}K_u(P_1)^T & 0 \\ 0 & \tilde{H}_u(P_1, Q_1)^{-\frac{1}{2}}\tilde{K}_u(P_1, Q_1)^T \end{bmatrix}, \\ & \hat{P}_2 = \begin{bmatrix} P_2 & 0 \\ 0 & Q_2 \end{bmatrix}, \quad A_2 = \begin{bmatrix} A + B_1U + B_2V & 0 \\ 0 & \tilde{A} + \tilde{B}_1\tilde{U} + \tilde{B}_2\tilde{V} \end{bmatrix}, \\ & C_2 = \begin{bmatrix} C + D_1U + D_2V & 0 \\ 0 & 0 \end{bmatrix}, \quad \hat{C}_2 = \begin{bmatrix} 0 & \tilde{C} + \tilde{D}_1\tilde{U} + \tilde{D}_2\tilde{V} \\ 0 & 0 \end{bmatrix}, \\ & H_2 = \begin{bmatrix} H_v(P_2)^{-\frac{1}{2}}K_v(P_2)^T & 0 \\ 0 & \tilde{H}_v(P_2, Q_2)^{-\frac{1}{2}}\tilde{K}_v(P_2, Q_2)^T \end{bmatrix}. \end{aligned}$$
Notice that (20) and (22) can be written as
$$\hat{P}_1 = A_1^T \hat{P}_1 A_1 + C_1^T \hat{P}_1 C_1 + \hat{C}_1^T \hat{P}_1 \hat{C}_1 - K_2^T K_2 \tag{26}$$
and
$$\hat{P}_2 = A_2^T \hat{P}_2 A_2 + C_2^T \hat{P}_2 C_2 + \hat{C}_2^T \hat{P}_2 \hat{C}_2 + K_1^T K_1, \tag{27}$$
respectively, where $K_1$ and $K_2$ are defined in Lemma 4. Since $[A + B_2V, \bar{A} + \bar{B}_2\tilde{V} + B_2\bar{V}; C + D_2V, \bar{C} + \bar{D}_2\tilde{V} + D_2\bar{V} \mid K]$ is exactly detectable, Lemma 4 shows that $[A + B_1U + B_2V, \bar{A} + B_1\bar{U} + B_2\bar{V} + \bar{B}_1\tilde{U} + \bar{B}_2\tilde{V}; C + D_1U + D_2V, \bar{C} + D_1\bar{U} + D_2\bar{V} + \bar{D}_1\tilde{U} + \bar{D}_2\tilde{V} \mid K_1]$ is also exactly detectable; it then follows from Lemma 3 and (27) that $[A + B_1U + B_2V, \bar{A} + B_1\bar{U} + B_2\bar{V} + \bar{B}_1\tilde{U} + \bar{B}_2\tilde{V}; C + D_1U + D_2V, \bar{C} + D_1\bar{U} + D_2\bar{V} + \bar{D}_1\tilde{U} + \bar{D}_2\tilde{V}]$ is $l^2$-stable. Hence,
$$u_s^* = U x_s + \bar{U} E x_s \in l^2_\omega(N, R^{n_u}), \qquad v_s^* = V x_s + \bar{V} E x_s \in l^2_\omega(N, R^{n_v}).$$
From Lemma 3, Lemma 4, and (20), $[A + B_1U, \bar{A} + B_1\bar{U} + \bar{B}_1\tilde{U}; C + D_1U, \bar{C} + D_1\bar{U} + \bar{D}_1\tilde{U}]$ is $l^2$-stable; i.e., $u_s^* = Ux_s + \bar{U}Ex_s$ stabilizes system (1).
Substituting $u_s = u_s^* = Ux_s + \bar{U}Ex_s$ into (1) yields
$$\begin{cases} x_{s+1} = (A + B_1U)x_s + (\bar{A} + B_1\bar{U} + \bar{B}_1\tilde{U})Ex_s + B_2v_s + \bar{B}_2Ev_s \\ \qquad\qquad + \big[(C + D_1U)x_s + (\bar{C} + D_1\bar{U} + \bar{D}_1\tilde{U})Ex_s + D_2v_s + \bar{D}_2Ev_s\big]\omega_s, \\ z_s = \begin{bmatrix} Kx_s \\ F(Ux_s + \bar{U}Ex_s) \end{bmatrix}, \quad F^TF = I, \quad x(0) = \zeta_0 \in R^n. \end{cases}$$
Because $[A + B_1U, \bar{A} + B_1\bar{U} + \bar{B}_1\tilde{U}; C + D_1U, \bar{C} + D_1\bar{U} + \bar{D}_1\tilde{U}]$ is $l^2$-stable, for every $v_s \in l^2_\omega(N, R^{n_v})$ we have $x_s \in l^2_\omega(N, R^n)$. Therefore, Theorem 2 directly gives $L_{u^*} < \gamma$.
Next, we prove that $v^*$ exists with $v_s^* = Vx_s + \bar{V}Ex_s$. Proceeding as in the proof of Lemma 2 and taking (20), (28), and Lemma 3 into consideration, we obtain
$$\begin{aligned} J_1^S(u^*, v) & = \sum_{s=0}^{S} E\big[\gamma^2\|v_s\|^2 - \|z_s\|^2\big] \\ & = \sum_{s=0}^{S} E\begin{bmatrix} x_s - Ex_s \\ v_s - Ev_s \end{bmatrix}^T M_1^{\gamma^2}(P_1) \begin{bmatrix} x_s - Ex_s \\ v_s - Ev_s \end{bmatrix} + \sum_{s=0}^{S} \begin{bmatrix} Ex_s \\ Ev_s \end{bmatrix}^T G_1^{\gamma^2}(P_1, Q_1) \begin{bmatrix} Ex_s \\ Ev_s \end{bmatrix} \\ & \quad + \zeta_0^T Q_1 \zeta_0 - Ex_{S+1}^T Q_1 Ex_{S+1} - E\big[(x_{S+1} - Ex_{S+1})^T P_1 (x_{S+1} - Ex_{S+1})\big] \\ & = \sum_{s=0}^{S} E\Big[\big((v_s - Ev_s) - (v_s^* - Ev_s^*)\big)^T H_u(P_1) \big((v_s - Ev_s) - (v_s^* - Ev_s^*)\big)\Big] + \zeta_0^T Q_1 \zeta_0 \\ & \quad + \sum_{s=0}^{S} (Ev_s - Ev_s^*)^T \tilde{H}_u(P_1, Q_1) (Ev_s - Ev_s^*) - Ex_{S+1}^T Q_1 Ex_{S+1} \\ & \quad - E\big[(x_{S+1} - Ex_{S+1})^T P_1 (x_{S+1} - Ex_{S+1})\big], \end{aligned}$$
where
$$M_1^{\gamma^2}(P_1) = \begin{bmatrix} L_u^1(P_1) & K_u(P_1) \\ K_u(P_1)^T & H_u(P_1) \end{bmatrix}, \qquad G_1^{\gamma^2}(P_1, Q_1) = \begin{bmatrix} \tilde{L}_u(P_1, Q_1) & \tilde{K}_u(P_1, Q_1) \\ \tilde{K}_u(P_1, Q_1)^T & \tilde{H}_u(P_1, Q_1) \end{bmatrix},$$
with
$$\begin{aligned} L_u^1(P_1) & = -P_1 + (A + B_1U)^T P_1 (A + B_1U) + (C + D_1U)^T P_1 (C + D_1U) - K^TK - U^TU, \\ \tilde{L}_u(P_1, Q_1) & = -Q_1 + (\tilde{A} + \tilde{B}_1\tilde{U})^T Q_1 (\tilde{A} + \tilde{B}_1\tilde{U}) + (\tilde{C} + \tilde{D}_1\tilde{U})^T P_1 (\tilde{C} + \tilde{D}_1\tilde{U}) - K^TK - \tilde{U}^T\tilde{U}. \end{aligned}$$
Since $\lim_{S \to \infty} E\|x_S\|^2 = 0$, one can infer that
$$\begin{aligned} J_1(u^*, v) = {} & \sum_{s=0}^{\infty} E\Big[\big((v_s - Ev_s) - (v_s^* - Ev_s^*)\big)^T H_u(P_1) \big((v_s - Ev_s) - (v_s^* - Ev_s^*)\big)\Big] + \zeta_0^T Q_1 \zeta_0 \\ & + \sum_{s=0}^{\infty} (Ev_s - Ev_s^*)^T \tilde{H}_u(P_1, Q_1) (Ev_s - Ev_s^*) \\ \ge {} & J_1(u^*, v^*) = \zeta_0^T Q_1 \zeta_0. \end{aligned}$$
From the above, the worst-case disturbance associated with $u^*$ exists and is given by $v_s^* = Vx_s + \bar{V}Ex_s \in l^2_\omega(N, R^{n_v})$. When $v_s = v_s^*$ is applied to system (1), we obtain
$$\begin{cases} x_{s+1} = (A + B_2V)x_s + (\bar{A} + B_2\bar{V} + \bar{B}_2\tilde{V})Ex_s + B_1u_s + \bar{B}_1Eu_s \\ \qquad\qquad + \big[(C + D_2V)x_s + (\bar{C} + D_2\bar{V} + \bar{D}_2\tilde{V})Ex_s + D_1u_s + \bar{D}_1Eu_s\big]\omega_s, \\ z_s = \begin{bmatrix} Kx_s \\ Fu_s \end{bmatrix}, \quad F^TF = I, \quad x(0) = \zeta_0 \in R^n. \end{cases}$$
Since $[A + B_1U + B_2V, \bar{A} + B_1\bar{U} + B_2\bar{V} + \bar{B}_1\tilde{U} + \bar{B}_2\tilde{V}; C + D_1U + D_2V, \bar{C} + D_1\bar{U} + D_2\bar{V} + \bar{D}_1\tilde{U} + \bar{D}_2\tilde{V}]$ is $l^2$-stable, $[A + B_2V, \bar{A} + \bar{B}_2\tilde{V} + B_2\bar{V}, B_1, \bar{B}_1; C + D_2V, \bar{C} + \bar{D}_2\tilde{V} + D_2\bar{V}, D_1, \bar{D}_1]$ is $l^2$-stable. By applying Theorem 4.1 of [11], we assert that
$$\min_{u \in l^2_\omega(N, R^{n_u})} J_2(u, v^*) = J_2(u^*, v^*) = \zeta_0^T Q_2 \zeta_0.$$
The proof is completed. □
Remark 2. 
According to the sufficient condition stated in Theorem 3, the solvability of the H2/H∞ control problem is reduced to the existence of solutions to the coupled matrix-valued Equations (20)–(23). Since system (1) contains mean-field terms, the Riccati equations in (20)–(23) with respect to $P_1$ and $P_2$ characterize $x_s - Ex_s$, while those with respect to $Q_1$ and $Q_2$ characterize $Ex_s$. If system (1) contains no mean-field terms and the noise does not depend on the control, then Theorem 3 degenerates to Theorem 1 of [7].

5. Iterative Algorithm and Example

The following iterative algorithm can be used to solve the CDREs (20)–(23):
(i)
Given S and the initial conditions P 1 ( S + 1 ) = 0 , Q 1 ( S + 1 ) = 0 , P 2 ( S + 1 ) = 0 and Q 2 ( S + 1 ) = 0 , we can obtain K u ( P 1 ( S + 1 ) ) = 0 , K ˜ u ( P 1 ( S + 1 ) , Q 1 ( S + 1 ) ) = 0 , K v ( P 2 ( S + 1 ) ) = 0 , and K ˜ v ( P 2 ( S + 1 ) , Q 2 ( S + 1 ) ) = 0 , as well as H u ( P 1 ( S + 1 ) ) = H ˜ u ( P 1 ( S + 1 ) , Q 1 ( S + 1 ) ) = γ 2 I , H v ( P 2 ( S + 1 ) ) = H ˜ v ( P 2 ( S + 1 ) , Q 2 ( S + 1 ) ) = I .
(ii)
Given $P_1(s+1)$, $Q_1(s+1)$, $P_2(s+1)$, and $Q_2(s+1)$, together with $K_u(P_1(s+1))$, $\tilde{K}_u(P_1(s+1), Q_1(s+1))$, $K_v(P_2(s+1))$, and $\tilde{K}_v(P_2(s+1), Q_2(s+1))$, compute $H_u(P_1(s+1))$, $\tilde{H}_u(P_1(s+1), Q_1(s+1))$, $H_v(P_2(s+1))$, and $\tilde{H}_v(P_2(s+1), Q_2(s+1))$; the gains $U(s)$, $\tilde{U}(s)$, $V(s)$, and $\tilde{V}(s)$ then follow from (21) and (23). Thus, $P_1(s)$, $Q_1(s)$, $P_2(s)$, and $Q_2(s)$ can be obtained from (20) and (22).
(iii)
Once ( P 1 ( s ) , Q 1 ( s ) ; P 2 ( s ) , Q 2 ( s ) ; U ( s ) , U ˜ ( s ) ; V ( s ) , V ˜ ( s ) ) is obtained, we continue to calculate K u ( P 1 ( s 1 ) ) , K ˜ u ( P 1 ( s 1 ) , Q 1 ( s 1 ) ) , K v ( P 2 ( s 1 ) ) , and K ˜ v ( P 2 ( s 1 ) , Q 2 ( s 1 ) ) , respectively.
(iv)
Set $s = s - 1$ and repeat steps (ii)–(iii) until $\max\{\|P_1(s+1) - P_1(s)\|, \|P_2(s+1) - P_2(s)\|, \|Q_1(s+1) - Q_1(s)\|, \|Q_2(s+1) - Q_2(s)\|\} \le e$, where $e = 10^{-5}$; the resulting values are taken as the solutions of (20)–(23).
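As a concrete illustration, the following is a minimal Python sketch of this backward iteration for the one-dimensional case. It reflects our reading of (20)–(23): since the gains $U$ and $V$ enter each other's $K$-terms within a single step, they are obtained by solving the small linear system they jointly satisfy (and likewise for the tilde gains), a detail the algorithm statement above leaves open.

```python
import numpy as np

def solve_cdres(a, abar, b1, b1bar, b2, b2bar, c, cbar,
                d1, d1bar, d2, d2bar, k, gamma,
                tol=1e-5, max_iter=1000):
    """Iterate the scalar CDREs (20)-(23) backward from zero terminal values."""
    # tilde coefficients act on E x_s
    at, b1t, b2t = a + abar, b1 + b1bar, b2 + b2bar
    ct, d1t, d2t = c + cbar, d1 + d1bar, d2 + d2bar
    p1 = q1 = p2 = q2 = 0.0
    U = Ut = V = Vt = 0.0
    for _ in range(max_iter):
        # H-quantities built from the values at step s+1
        Hu = gamma**2 + (b2**2 + d2**2) * p1
        Hut = gamma**2 + b2t**2 * q1 + d2t**2 * p1
        Hv = 1.0 + (b1**2 + d1**2) * p2
        Hvt = 1.0 + b1t**2 * q2 + d1t**2 * p2
        # U = -Kv/Hv and V = -Ku/Hu depend on each other linearly
        M = np.array([[1.0, p2 * (b1*b2 + d1*d2) / Hv],
                      [p1 * (b1*b2 + d1*d2) / Hu, 1.0]])
        rhs = np.array([-p2 * (a*b1 + c*d1) / Hv,
                        -p1 * (a*b2 + c*d2) / Hu])
        U, V = np.linalg.solve(M, rhs)
        Mt = np.array([[1.0, (q2*b1t*b2t + p2*d1t*d2t) / Hvt],
                       [(q1*b1t*b2t + p1*d1t*d2t) / Hut, 1.0]])
        rhst = np.array([-(q2*at*b1t + p2*ct*d1t) / Hvt,
                         -(q1*at*b2t + p1*ct*d2t) / Hut])
        Ut, Vt = np.linalg.solve(Mt, rhst)
        # K-quantities and one backward step of (20) and (22)
        Ku = ((a + b1*U)*b2 + (c + d1*U)*d2) * p1
        Kut = (at + b1t*Ut)*b2t*q1 + (ct + d1t*Ut)*d2t*p1
        Kv = ((a + b2*V)*b1 + (c + d2*V)*d1) * p2
        Kvt = (at + b2t*Vt)*b1t*q2 + (ct + d2t*Vt)*d1t*p2
        p1n = (a + b1*U)**2*p1 + (c + d1*U)**2*p1 - k**2 - U**2 - Ku**2/Hu
        q1n = (at + b1t*Ut)**2*q1 + (ct + d1t*Ut)**2*p1 - k**2 - Ut**2 - Kut**2/Hut
        p2n = (a + b2*V)**2*p2 + (c + d2*V)**2*p2 + k**2 + 1.0 - Kv**2/Hv
        q2n = (at + b2t*Vt)**2*q2 + (ct + d2t*Vt)**2*p2 + k**2 + 1.0 - Kvt**2/Hvt
        diff = max(abs(p1n - p1), abs(q1n - q1), abs(p2n - p2), abs(q2n - q2))
        p1, q1, p2, q2 = p1n, q1n, p2n, q2n
        if diff < tol:
            break
    return p1, q1, p2, q2, U, Ut, V, Vt

# Run with the parameters of Example 1 below; Theorem 3 requires
# P1, Q1 < 0 and P2, Q2 > 0.
vals = solve_cdres(0.4, 0.45, 0.6, 0.7, 0.65, 0.7, 0.45, 0.55,
                   0.3, 0.3, 0.5, 0.55, 0.65, 3.0)
print(vals)
```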
Example 1. 
Consider the following one-dimensional discrete-time mean-field stochastic system:
$$\begin{cases} x_{s+1} = ax_s + \bar{a}Ex_s + b_1u_s + \bar{b}_1Eu_s + b_2v_s + \bar{b}_2Ev_s \\ \qquad\qquad + \big[cx_s + \bar{c}Ex_s + d_1u_s + \bar{d}_1Eu_s + d_2v_s + \bar{d}_2Ev_s\big]w_s, \\ z_s = \begin{bmatrix} kx_s \\ u_s \end{bmatrix}, \quad x_0 = \zeta_0, \quad s \in N. \end{cases}$$
Let $a = 0.4$, $\bar{a} = 0.45$, $b_1 = 0.6$, $\bar{b}_1 = 0.7$, $b_2 = 0.65$, $\bar{b}_2 = 0.7$, $c = 0.45$, $\bar{c} = 0.55$, $d_1 = 0.3$, $\bar{d}_1 = 0.3$, $d_2 = 0.5$, $\bar{d}_2 = 0.55$, and $k = 0.65$, and take the disturbance attenuation level $\gamma = 3$.
Consider the systems $[a, \bar{a}; c, \bar{c} \mid k]$ and $[a + b_2V, \bar{a} + \bar{b}_2\tilde{V} + b_2\bar{V}; c + d_2V, \bar{c} + \bar{d}_2\tilde{V} + d_2\bar{V} \mid k]$. According to the definition of $\mathcal{L}$, the eigenvalue problem $\mathcal{L}(X) = \lambda X$ can be rewritten, after some algebraic operations, as
$$\begin{bmatrix} a^2 + c^2 & (c + \bar{c})^2 \\ 0 & (a + \bar{a})^2 \end{bmatrix} X = \lambda X \quad \text{and} \quad \begin{bmatrix} (a + b_2V)^2 + (c + d_2V)^2 & (c + d_2V + \bar{c} + \bar{d}_2\tilde{V} + d_2\bar{V})^2 \\ 0 & (a + b_2V + \bar{a} + \bar{b}_2\tilde{V} + b_2\bar{V})^2 \end{bmatrix} X = \lambda X,$$
respectively. Since the eigenvalues of both matrices are less than one, Lemma 1 yields the exact detectability of the two systems.
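The open-loop spectral check is easy to reproduce numerically; a small sketch follows (the closed-loop matrix is checked the same way once $V$ and $\tilde{V}$ are available):

```python
import numpy as np

# Spectral check for the open-loop system [a, abar; c, cbar | k] of Example 1.
# The matrix is upper-triangular, so its eigenvalues are simply the diagonal
# entries a^2 + c^2 and (a + abar)^2.
a, abar, c, cbar = 0.4, 0.45, 0.45, 0.55
L = np.array([[a**2 + c**2, (c + cbar)**2],
              [0.0,         (a + abar)**2]])
eigs = np.linalg.eigvals(L)
print(sorted(abs(eigs)))  # both eigenvalues lie strictly inside the unit circle
```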
Letting $S = 24$ and applying the iterative algorithm, we obtain the solutions of the Riccati equations; see Table 1. The evolution of the solutions is shown in Figure 1.
By applying Theorem 3, we obtain the coefficients of $u_s^* = Ux_s + \bar{U}Ex_s$ and $v_s^* = Vx_s + \bar{V}Ex_s$; see Table 2.
Figure 1. The evolutions of the solutions of Riccati equations.

6. Conclusions

This paper has investigated the infinite horizon H2/H∞ control problem for discrete-time mean-field stochastic systems with (x, u, v)-dependent noise. Firstly, a stochastic-bounded real lemma has been derived for the considered system. Under the condition that the system is exactly detectable, a sufficient condition for the existence of an H2/H∞ controller has been obtained, expressed in terms of the existence of a solution to the coupled Riccati equations. An iterative algorithm for solving the coupled Riccati equations has been proposed, and a numerical example has been presented. Note that Theorem 3 only gives a sufficient condition for the solvability of the H2/H∞ control problem; in future studies, we will explore its necessity and derive a necessary condition.
As stated in the introduction, nonlinear systems and time-delay systems are used extensively in engineering practice. Therefore, it is undoubtedly valuable to extend the design method obtained in this paper to nonlinear mean-field systems and time-delay mean-field systems. However, nonlinearities and time delays cause considerable difficulty in analysis and design; how to establish the stochastic-bounded real lemma is then the key issue in H∞ control. Moreover, H2/H∞ control problems for discrete-time mean-field systems with Markov jumps or Poisson jumps could be further investigated on the basis of this work.

Author Contributions

Methodology, T.H.; writing—original draft, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 62073204.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Limebeer, D.J.; Anderson, B.D.; Hendel, B. A Nash game approach to mixed H2/H∞ control. IEEE Trans. Autom. Control 1994, 39, 69–82. [Google Scholar] [CrossRef]
  2. Wu, C.S.; Chen, B.S. Adaptive attitude control of spacecraft: Mixed H2/H∞ approach. J. Guid. Control Dyn. 2001, 24, 755–766. [Google Scholar] [CrossRef]
  3. Chen, B.S.; Zhang, W. Stochastic H2/H∞ control with state-dependent noise. IEEE Trans. Autom. Control 2004, 49, 45–57. [Google Scholar] [CrossRef]
  4. Zhang, W.; Chen, B.S. State feedback H∞ control for a class of nonlinear stochastic systems. SIAM J. Control Optim. 2006, 44, 1973–1991. [Google Scholar] [CrossRef]
  5. El Bouhtouri, A.; Hinrichsen, D.; Pritchard, A.J. H∞-type control for discrete-time stochastic systems. Int. J. Robust Nonlinear Control 1999, 9, 923–948. [Google Scholar] [CrossRef]
  6. Zhang, W.; Chen, B.S. On stabilizability and exact observability of stochastic systems with their applications. Automatica 2004, 40, 87–94. [Google Scholar] [CrossRef]
  7. Zhang, W.; Huang, Y.; Xie, L. Infinite horizon stochastic H2/H∞ control for discrete-time systems with state and disturbance dependent noise. Automatica 2008, 44, 2306–2316. [Google Scholar] [CrossRef]
  8. Huang, Y.; Zhang, W.; Feng, G. Infinite horizon H2/H∞ control for stochastic systems with Markovian jumps. Automatica 2008, 44, 857–863. [Google Scholar] [CrossRef]
  9. Dragan, V.; Morozan, T.; Stoica, A.M. Mathematical Methods in Robust Control of Discrete-Time Linear Stochastic Systems; Springer: New York, NY, USA, 2010. [Google Scholar]
  10. Elliott, R.; Li, X.; Ni, Y. Discrete time mean-field stochastic linear-quadratic optimal control problems. Automatica 2013, 49, 3222–3233. [Google Scholar] [CrossRef]
  11. Ni, Y.; Elliott, R.; Li, X. Discrete-time mean-field stochastic linear-quadratic optimal control problems, II: Infinite horizon case. Automatica 2015, 57, 65–77. [Google Scholar] [CrossRef]
  12. Ni, Y.; Li, X.; Zhang, J. Mean-field stochastic linear-quadratic optimal control with Markov jump parameters. Syst. Control Lett. 2016, 93, 69–76. [Google Scholar] [CrossRef]
  13. Zhang, W.; Ma, L.; Zhang, T. Discrete-time mean-field stochastic H2/H∞ control. J. Syst. Sci. Complex. 2016, 30, 765–781. [Google Scholar] [CrossRef]
  14. Wang, M.; Meng, Q.; Shen, Y.; Shi, P. Stochastic H2/H∞ control for mean-field stochastic differential systems with (x, u, v)-dependent noise. J. Optim. Theory Appl. 2023, 197, 1024–1060. [Google Scholar] [CrossRef]
  15. Lasry, J.M.; Lions, P.L. Mean-field games. Jpn. J. Math. 2007, 2, 229–260. [Google Scholar] [CrossRef]
  16. Guo, J.; Bai, Y. A note on mean-field theory for scale-free random networks. Dyn. Contin. Discret. Impuls. Syst. 2006, 13, 107. [Google Scholar]
  17. Moon, J.; Başar, T. Linear quadratic risk-sensitive and robust mean field games. IEEE Trans. Autom. Control 2016, 62, 1062–1077. [Google Scholar] [CrossRef]
  18. Zhang, W.; Zhang, H.; Chen, B.S. Stochastic H2/H∞ control with (x, u, v)-dependent noise. IEEE Conf. Decis. Control 2005, 44, 7352–7357. [Google Scholar]
  19. Li, J. Stochastic maximum principle in the mean-field controls. Automatica 2012, 48, 366–373. [Google Scholar] [CrossRef]
  20. Buckdahn, R.; Djehiche, B.; Li, J. A general stochastic maximum principle for SDEs of mean-field type. Appl. Math. Optim. 2011, 64, 197–216. [Google Scholar] [CrossRef]
  21. Yong, J. Linear-quadratic optimal control problems for mean-field stochastic differential equations. SIAM J. Control Optim. 2013, 51, 2809–2838. [Google Scholar] [CrossRef]
  22. Huang, J.; Li, X.; Yong, J. A linear-quadratic optimal control problem for mean-field stochastic differential equations in infinite horizon. Math. Control Relat. Fields 2015, 5, 97–139. [Google Scholar] [CrossRef]
  23. Damm, T. Rational Matrix Equations in Stochastic Control; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  24. Zhang, H.; Qi, Q.; Fu, M. Optimal stabilization control for discrete-time mean-field stochastic systems. IEEE Trans. Autom. Control 2019, 64, 1125–1136. [Google Scholar] [CrossRef]
  25. Buckdahn, R.; Djehiche, B.; Li, J.; Peng, S. Mean-field backward stochastic differential equations: A limit approach. Ann. Probab. Off. J. Inst. Math. Stat. 2009, 37, 1524–1565. [Google Scholar] [CrossRef]
  26. Buckdahn, R.; Li, J.; Peng, S. Mean-field backward stochastic differential equations and related partial differential equations. Stoch. Process. Their Appl. 2009, 119, 3133–3154. [Google Scholar] [CrossRef]
  27. Bayraktar, E.; Zhang, X. Solvability of infinite horizon McKean–Vlasov FBSDEs in mean field control problems and games. Appl. Math. Optim. 2023, 87, 13–26. [Google Scholar] [CrossRef]
  28. Gao, M.; Sheng, L.; Zhang, W. Stochastic H2/H∞ control of nonlinear systems with time-delay and state-dependent noise. Appl. Math. Comput. 2015, 266, 429–440. [Google Scholar] [CrossRef]
  29. Wu, C.F.; Chen, B.S.; Zhang, W. Multiobjective H2/H∞ control design of the nonlinear mean-field stochastic jump-diffusion systems via fuzzy approach. IEEE Trans. Fuzzy Syst. 2018, 27, 686–700. [Google Scholar] [CrossRef]
  30. Chen, B.S.; Lee, M.Y. Noncooperative and cooperative multiplayer minmax H∞ mean-field target tracking game strategy of nonlinear mean-field stochastic systems with application to cyber-financial systems. IEEE Access 2022, 10, 57124–57142. [Google Scholar] [CrossRef]
  31. Li, X.; Xu, J.; Wang, W.; Zhang, H. Mixed H2/H∞ control for discrete-time systems with input delay. IET Control Theory Appl. 2018, 12, 2221–2231. [Google Scholar] [CrossRef]
  32. Mei, W.; Zhao, C.; Ogura, M.; Sugimoto, K. Mixed H2/H∞ control of delayed Markov jump linear systems. IET Control Theory Appl. 2020, 14, 2076–2083. [Google Scholar] [CrossRef]
  33. Qiu, L.; Zhang, B.; Xu, G.; Pan, J.; Yao, F. Mixed H2/H∞ control of Markovian jump time-delay systems with uncertain transition probabilities. Inf. Sci. 2016, 373, 539–556. [Google Scholar] [CrossRef]
  34. Ma, L.; Zhang, W. Output feedback H∞ control for discrete-time mean-field stochastic systems. Asian J. Control 2015, 17, 2241–2251. [Google Scholar] [CrossRef]
  35. Damm, T. On detectability of stochastic systems. Automatica 2007, 43, 928–933. [Google Scholar] [CrossRef]
Table 1. The solutions of the Riccati equations.

| $P_1$ | $Q_1$ | $P_2$ | $Q_2$ |
|---|---|---|---|
| −0.6720 | −1.1627 | 1.8537 | 2.6530 |
Table 2. The coefficients of $u_s^*$ and $v_s^*$.

| $U_s$ | $\tilde{U}_s$ | $V_s$ | $\tilde{V}_s$ |
|---|---|---|---|
| −0.3908 | −0.7044 | 0.0215 | 0.0496 |

Share and Cite

MDPI and ACS Style

Ma, C.; Hou, T. Infinite Horizon H2/H∞ Control for Discrete-Time Mean-Field Stochastic Systems. Processes 2023, 11, 3248. https://doi.org/10.3390/pr11113248
