1. Introduction
Mixed $H_2/H_\infty$ control is a crucial robust control research subject that has been thoroughly investigated by numerous researchers over the past decades. Robust control, as a control design method that can effectively eliminate the effect of disturbances, has been refined since the 1960s. However, in engineering practice, it is desired that the controller not only minimize performance targets but also eliminate the influence of disturbances when the worst-case disturbance is applied. Therefore, mixed $H_2/H_\infty$ control is better suited to the needs of engineering practice. Mixed $H_2/H_\infty$ control problems for deterministic systems have been discussed in [1,2] and elsewhere.
$H_2$ and $H_\infty$ control problems of stochastic systems and their applications have also been extensively studied by many researchers in recent years. For example, stochastic $H_\infty$ control for continuous-time systems with state-dependent noise was discussed in [3]. In [4], the authors discussed the $H_\infty$ control problem for a class of nonlinear systems with state- and disturbance-dependent noise. Ref. [5] considered $H_2/H_\infty$ control for discrete-time linear systems whose states and inputs are affected by random perturbations. Ref. [6] proposed the notion of exact observability and a stochastic PBH criterion for linear autonomous stochastic systems, which provided great convenience for the analysis and synthesis of stochastic systems. Under the assumption that the system is exactly observable, an infinite horizon $H_2/H_\infty$ controller for discrete-time stochastic systems with state- and disturbance-dependent noise was designed in [7]. As research on stochastic $H_2/H_\infty$ control has continued, many researchers have begun to focus on systems with Markov jumps. $H_2/H_\infty$ control for stochastic systems with Markov jumps not only contributes to improving system performance but also holds practical significance for areas such as economics and technology. Ref. [8] investigated the robust control problem for continuous-time systems subject to multiplicative noise and Markovian parameter jumps, and presented a necessary/sufficient condition for the existence of a controller in terms of two coupled algebraic Riccati equations. Ref. [9] developed a robust control theory for discrete-time stochastic systems subject to independent stochastic perturbations and Markov jumps.
The origins of mean-field theory can be traced back to the early 20th century, when physicists began to study complex systems in quantum mechanics and statistical physics. As an approximation method, mean-field theory can effectively address problems in complex systems. In recent years, its applications have become increasingly widespread in fields such as power systems and biology, and many researchers have conducted studies in this area [10,11,12,13,14]. For example, the LQ control problems of discrete-time stochastic mean-field systems in the finite and infinite horizon are discussed in [10] and [11], respectively. Ref. [13] discussed the $H_2/H_\infty$ control problem of discrete-time mean-field stochastic systems in the finite horizon, while [14] addressed the $H_2/H_\infty$ control problem of continuous-time systems in the finite horizon. The problem of mean-field games was studied in [15] to illustrate the feasibility of the mean-field modeling approach used in economics, and a number of issues worthy of further research were mentioned at the end of that article. With the gradual maturation of theoretical studies, mean-field theory has been widely applied: in [16], it provided great convenience for analyzing node interaction behavior in complex networks, and [17] used mean-field theory to deal with large population stochastic differential games connected to the optimal and robust decentralized control of large-scale multiagent systems.
Ref. [18] addressed finite and infinite horizon stochastic $H_2/H_\infty$ control problems for stochastic systems with $(x,u,v)$-dependent noise. The infinite horizon $H_2/H_\infty$ control problem for discrete-time mean-field stochastic systems with $(x,u,v)$-dependent noise is the topic of this paper. The difference from the stochastic systems considered in [18] is that the system equation in this paper includes not only the state $x$, control $u$, and perturbation $v$, but also the expectations of the state, control, and perturbation, i.e., $Ex$, $Eu$, and $Ev$, respectively. Expectation terms can reduce the sensitivity of the system to random events; hence, mean-field theory simplifies complex problems and has attracted a great deal of attention. In [19,20,21], the mean-field stochastic maximum principle for dealing with the mean-field linear-quadratic (LQ) optimal control problem has been discussed. Separately, there are discussions of mean-field maximum principles for mean-field stochastic differential equations (MFSDEs), backward stochastic differential equations (BSDEs), and forward-backward stochastic differential equations (FBSDEs). Later, [22] extended the results in [21] to the case of an infinite horizon. For discrete-time mean-field systems, ref. [10] investigated the finite horizon LQ optimal control problem. Based on [6,11,23], these works gave the notion of exact detectability of mean-field systems and derived the existence and uniqueness of the solution of the infinite horizon optimal control problem. This research has enriched the existing theory of stochastic algebraic equations. By providing a deeper understanding of the relationships between $L^2$-stability and optimal control problems, it contributes to the development of more efficient control strategies for linear mean-field stochastic systems. Ref. [24] investigated the stabilization and optimal LQ control problems for infinite horizon discrete-time mean-field systems. Unlike previous works, the authors showed for the first time that, under the exact detectability (exact observability) assumption, the mean-field system is stabilizable in the mean-square sense with the optimal controller if and only if coupled algebraic Riccati equations have a unique positive semidefinite (positive definite) solution. Ref. [12] considered the LQ optimal control problem for a class of mean-field systems with Markov jump parameters, where the unique optimal control is given by the solution to generalized difference Riccati equations. Currently, driven by research on mean-field games and mean-field-type control problems [15], many researchers have contributed to MFSDEs and their applications. Using a purely probabilistic approach, ref. [25] worked on solutions to McKean-Vlasov-type forward-backward stochastic differential equations (FBSDEs). Ref. [26] investigated the Markov framework for mean-field backward stochastic differential equations (MFBSDEs). The authors in [27] investigated the existence and uniqueness of the McKean-Vlasov FBSDE solution.
Nonlinear systems and time-delay systems pose inherent control difficulties due to their complex behavior. Ref. [4] is an example of extending control strategies to address nonlinearities in systems, thus further broadening the scope of control theory. Ref. [28] examined the $H_\infty$ control problem for nonlinear stochastic systems with time delay and state-dependent noise, exploring a sufficient condition for the existence of nonlinear stochastic $H_\infty$ control and a design method for stochastic $H_\infty$ controllers. In [29], a fuzzy method was used to design multiobjective $H_2/H_\infty$ control for a class of nonlinear mean-field random jump diffusion systems with uncertainty, and the stability, robustness, and performance optimization were studied in depth. Ref. [30] discussed noncooperative and cooperative multiplayer minmax $H_\infty$ mean-field target tracking game strategies for nonlinear mean-field stochastic systems, with applications in the realm of cyberfinancial systems. Stochastic control problems with time delays are highly challenging in practice because delays and uncertainties may degrade system performance. Ref. [31] focused on the mixed $H_2/H_\infty$ control problem under an open-loop information pattern with input delay. In [32], the authors investigated state feedback control laws for Markov jump linear systems with state and mode observation delays. Ref. [33] addressed the robust stability problem and mixed $H_2/H_\infty$ control for Markovian jump time-delay systems with uncertain transition probabilities in the discrete-time domain; the obtained results generalize several results in the previous literature that treat Markov transition probabilities as either a priori known or partially unknown.
Recently, robust controls for stochastic mean-field systems have received a great deal of attention. For instance, ref. [34] treated the $H_\infty$ output feedback control problem for discrete-time mean-field systems. Ref. [13] discussed the finite horizon $H_2/H_\infty$ control problem for a class of stochastic mean-field systems. Ref. [14] studied a continuous-time stochastic control problem for mean-field stochastic differential systems with random initial values and with diffusion coefficients that depend explicitly on the state, control, and disturbance, together with their expectations. That work first established a stochastic bounded real lemma for continuous-time mean-field stochastic systems, revealing the equivalence between robust stability and the solvability of two indefinite differential Riccati equations, which provided a theoretical basis for $H_\infty$ control. Based on this significant result, an equivalent condition for the existence of an $H_2/H_\infty$ controller was proposed by utilizing the solution of two cross-coupled Riccati equations. In contrast, there are relatively few studies on the robust control of discrete-time mean-field systems in the infinite horizon. Compared with the finite horizon case, the infinite horizon feedback controller is additionally required to ensure that the closed-loop system is stable. The primary accomplishments of this paper are the following: (i) a mean-field stochastic bounded real lemma (SBRL) is obtained, which reveals the equivalence between robust stability and the solvability of algebraic Riccati equations, thereby providing a useful tool for the stability analysis of $H_\infty$ control; (ii) by means of exact detectability, a sufficient condition for the solvability of the $H_2/H_\infty$ control problem is obtained, which offers a theoretical basis for designing robust controllers and ensures the stability of closed-loop systems; (iii) an iterative algorithm is proposed to solve the coupled SDREs, which reduces computational complexity and enhances the practicality of controller design.
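To give a concrete flavor of such iterative schemes, the sketch below (with illustrative matrices that do not come from this paper) runs a standard fixed-point iteration on a single discrete-time algebraic Riccati equation; the algorithm of Section 5 iterates on the four coupled equations in the same spirit.

```python
import numpy as np

# Illustrative data (not from the paper): a stable A, so the iteration converges.
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting (plays the role of K'K)
R = np.array([[1.0]])  # control weighting

def riccati_iteration(A, B, Q, R, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration P <- A'PA - A'PB (R + B'PB)^{-1} B'PA + Q."""
    P = Q.copy()
    for _ in range(max_iter):
        G = np.linalg.inv(R + B.T @ P @ B)
        P_next = A.T @ P @ A - A.T @ P @ B @ G @ B.T @ P @ A + Q
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    raise RuntimeError("iteration did not converge")

P = riccati_iteration(A, B, Q, R)
# At convergence, P satisfies the algebraic Riccati equation up to the tolerance.
residual = A.T @ P @ A - A.T @ P @ B @ np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A + Q - P
print(np.max(np.abs(residual)))
```

Convergence of such iterations typically relies on stabilizability and detectability assumptions, mirroring the exact detectability hypotheses used later in this paper.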
This paper is structured as follows: Section 2 recalls the notions of $L^2$-stability and exact detectability; the mean-field SBRL is presented in Section 3; Section 4 discusses the infinite horizon $H_2/H_\infty$ control problem for the considered system; an iterative algorithm and a numerical example are given in Section 5.
2. Preliminaries
Consider the following discrete-time mean-field stochastic system:
In (1), $\mathbb{R}^{n}$ denotes the $n$-dimensional real Euclidean space, $\mathbb{R}^{m\times n}$ is the set of all $m\times n$ real matrices, and $I$ is the identity matrix. $x(t)\in\mathbb{R}^{n}$, $u(t)\in\mathbb{R}^{n_u}$, and $v(t)\in\mathbb{R}^{n_v}$ are, respectively, the system state, control input, and external disturbance; $x_0\in\mathbb{R}^{n}$ is the initial condition. $\{w(t)\}_{t\in\mathbb{N}}$ indicates the stochastic disturbance, which is a second-order process with $Ew(t)=0$ and $E[w(t)w(s)]=\delta_{st}$. We assume that $x_0$ and $\{w(t)\}_{t\in\mathbb{N}}$ are mutually independent. $A$, $\bar A$, $B$, $\bar B$, $B_1$, $\bar B_1$, $C$, $\bar C$, $D$, $\bar D$, $D_1$, $\bar D_1$, $K$, and $F$ are constant matrices of adequate dimensions. Let $\mathcal{F}_t$ be the $\sigma$-algebra generated by $\{w(0),w(1),\ldots,w(t)\}$. $L^{2}_{\mathcal{F}_t}(\mathbb{R}^{k})$ represents the space of $\mathbb{R}^{k}$-valued square integrable random vectors, while $l^{2}_{w}(\mathbb{N},\mathbb{R}^{k})$ is the collection of nonanticipative square summable stochastic processes $y=\{y(t):t\in\mathbb{N}\}$, with $y(t)$ being $\mathcal{F}_{t-1}$-measurable, in which $\mathcal{F}_{-1}=\{\emptyset,\Omega\}$. The norm of $l^{2}_{w}(\mathbb{N},\mathbb{R}^{k})$ is expressed by $\|y\|_{l^{2}_{w}}=\left(\sum_{t=0}^{\infty}E\|y(t)\|^{2}\right)^{1/2}$. $\mathcal{S}_n$ denotes the set of all real symmetric matrices, and $\mathcal{S}_n^{+}$ is the subset of all nonnegative definite matrices of $\mathcal{S}_n$. We define $\mathbb{N}=\{0,1,2,\ldots\}$. For $M,N\in\mathcal{S}_n$, $M>N$ ($M\geq N$) means that $M-N$ is positive definite (positive semidefinite).
Next, we recall the concept of stability for the discrete-time mean-field system and present some results that will be needed in the following study. Consider the following unforced stochastic system:
For simplicity, we denote (2) by $[A,\bar A;C,\bar C]$. In particular, $[A;C]$ indicates $[A,0;C,0]$.
Definition 1 ([11]). $[A,\bar A;C,\bar C]$ is considered to be $L^2$-asymptotically stable ($L^2$-stable for short) if, for any initial value $x_0\in\mathbb{R}^{n}$, $\lim_{t\to\infty}E\|x(t)\|^{2}=0$.

Taking the mathematical expectation in (2), the system equation of $Ex(t)$ can be written as

$$Ex(t+1)=(A+\bar A)\,Ex(t).$$

By a simple calculation, we obtain the following system equation of $x(t)-Ex(t)$:

$$x(t+1)-Ex(t+1)=A\,[x(t)-Ex(t)]+\left\{C\,[x(t)-Ex(t)]+(C+\bar C)\,Ex(t)\right\}w(t).$$
Theorem 1 ([11]). The following statements are equivalent:
(i) $[A,\bar A;C,\bar C]$ is $L^2$-stable;
(ii) for any $R>0$ and $\bar R>0$, the Lyapunov equations (5) admit a unique solution with $P>0$ and $\bar P>0$;
(iii) there exist $R>0$ and $\bar R>0$ such that the Lyapunov equations (5) admit a unique solution with $P>0$ and $\bar P>0$.
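Numerically, Lyapunov conditions of this kind can be checked by vectorization, using the identity $\mathrm{vec}(M^{\top}PM)=(M^{\top}\otimes M^{\top})\mathrm{vec}(P)$. The sketch below assumes a decoupled pair of equations, one for the deviation $x-Ex$ and one for the mean $Ex$, which is consistent with the mean-field stability theory of [11] but is an assumption here; all matrices are illustrative.

```python
import numpy as np

# Illustrative mean-field data (not from the paper).
A    = np.array([[0.5, 0.1],
                 [0.0, 0.4]])
Abar = np.array([[0.1, 0.0],
                 [0.0, 0.1]])
C    = np.array([[0.2, 0.0],
                 [0.1, 0.2]])

def solve_lyapunov(Ms, R):
    """Solve P = sum_i M_i' P M_i + R via vec(M'PM) = (M' (x) M') vec(P)."""
    n = R.shape[0]
    T = sum(np.kron(M.T, M.T) for M in Ms)
    vec_p = np.linalg.solve(np.eye(n * n) - T, R.reshape(-1, order="F"))
    return vec_p.reshape(n, n, order="F")

# Deviation part (x - Ex): P = A'PA + C'PC + R.
P = solve_lyapunov([A, C], np.eye(2))
# Mean part (Ex): Pbar = (A + Abar)' Pbar (A + Abar) + Rbar.
Pbar = solve_lyapunov([A + Abar], np.eye(2))

# Positive definite solutions for R = Rbar = I indicate L2-stability.
print(np.linalg.eigvalsh(P), np.linalg.eigvalsh(Pbar))
```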
When $v(t)\equiv 0$ in (1), stabilizability can be formulated as follows:

Definition 2 ([11]). System (1) with $v\equiv 0$ is said to be closed-loop $L^2$-stabilizable if there exists a pair $(K_1,\bar K_1)$ such that, for any initial value $x_0$ and under the control $u(t)=K_1x(t)+\bar K_1Ex(t)$, the closed-loop system is $L^2$-stable. In this instance, we refer to $(K_1,\bar K_1)$ as a closed-loop $L^2$-stabilizer.

Below, we present a linear operator $\mathcal{L}$ associated with system (2). It is obvious that $\mathcal{L}$ is a linear positive operator, i.e., if $X\in\mathcal{S}_{2n}^{+}$, we have $\mathcal{L}(X)\in\mathcal{S}_{2n}^{+}$, where $\mathcal{S}_{2n}^{+}$ indicates the set of nonnegative definite real matrices of $\mathcal{S}_{2n}$. For any $X,Y\in\mathcal{S}_{2n}$, define the inner product $\langle X,Y\rangle=\operatorname{tr}(X^{\top}Y)$. Therefore, the adjoint operator $\mathcal{L}^{*}$ of $\mathcal{L}$ is determined by $\langle\mathcal{L}(X),Y\rangle=\langle X,\mathcal{L}^{*}(Y)\rangle$. Furthermore, the spectrum of $\mathcal{L}$ is

$$\sigma(\mathcal{L})=\{\lambda\in\mathbb{C}:\mathcal{L}(X)=\lambda X\ \text{for some}\ X\neq 0\}.$$
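The spectrum of a matrix operator of this type can be computed by vectorization. For instance, an operator of the form $X\mapsto AXA^{\top}+CXC^{\top}$ (a simplified stand-in, since the exact definition of $\mathcal{L}$ depends on the mean-field coefficients) has the matrix representation $A\otimes A+C\otimes C$ under column-major vectorization:

```python
import numpy as np

# Illustrative matrices (not from the paper).
A = np.array([[0.6, 0.1],
              [0.0, 0.5]])
C = np.array([[0.2, 0.0],
              [0.0, 0.2]])

# Matrix representation of X -> A X A' + C X C' acting on vec(X) (column-major).
L_mat = np.kron(A, A) + np.kron(C, C)
spectrum = np.linalg.eigvals(L_mat)

# Sanity check: applying the operator directly matches the vectorized form.
X = np.array([[1.0, 2.0],
              [2.0, 3.0]])
direct = A @ X @ A.T + C @ X @ C.T
via_vec = (L_mat @ X.reshape(-1, order="F")).reshape(2, 2, order="F")
print(np.allclose(direct, via_vec), np.max(np.abs(spectrum)))
```

The spectrum lying inside the open unit disk corresponds to stability of the associated moment dynamics.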
The following definition concerns the exact detectability of discrete-time mean-field systems.

Definition 3 ([11]). In (2), let $y(t)=Kx(t)$ be the measurement output. $[A,\bar A;C,\bar C\,|\,K]$ is considered to be exactly detectable if, for any initial value $x_0$, $y(t)=0$ (a.s.) for all $t\in\mathbb{N}$ implies that $\lim_{t\to\infty}E\|x(t)\|^{2}=0$.

It can be seen that $y(t)=0$ a.s. is equivalent to $E\|y(t)\|^{2}=0$. By calculation, $E\|y(t)\|^{2}$ can be expressed in terms of the operator $\mathcal{L}$ and the second moments of the state. Now, we introduce the following dynamics:
The following lemma extends the result of Theorem 5.6 in [11]; the complete proof is omitted here because the result follows readily from Theorem 3 of [35].
Lemma 1. $[A,\bar A;C,\bar C\,|\,K]$ is exactly detectable if and only if, for each eigenvalue $\lambda\in\sigma(\mathcal{L}^{*})$, every nonzero $X$ with $\mathcal{L}^{*}(X)=\lambda X$ yields a nonvanishing output, where $\mathcal{L}^{*}$ is the adjoint operator introduced above.
The following lemma links $L^2$-stability with the exact detectability of the uncontrolled system; the detailed proof can be found in [23].
Lemma 2. Suppose that $[A,\bar A;C,\bar C\,|\,K]$ is exactly detectable. Then $[A,\bar A;C,\bar C]$ is $L^2$-stable if and only if the associated generalized Lyapunov equation admits a unique solution $P\geq 0$.

4. Infinite Horizon Mean-Field Stochastic $H_2/H_\infty$ Control
In this section, we handle the infinite horizon $H_2/H_\infty$ control problem for system (1). The design objective is to find a controller that not only eliminates the effect of the disturbance but also minimizes the output energy when the worst-case disturbance is applied, while ensuring internal stability. Hence, the mixed $H_2/H_\infty$ control design, as a multiobjective design method, is well suited to the needs of engineering practice.
Given the disturbance attenuation level $\gamma>0$, the corresponding performances are characterized by

$$J_1(u,v)=\sum_{t=0}^{\infty}E\left(\gamma^{2}\|v(t)\|^{2}-\|z(t)\|^{2}\right)$$

and

$$J_2(u,v)=\sum_{t=0}^{\infty}E\|z(t)\|^{2}.$$
The infinite horizon $H_2/H_\infty$ control problem of system (1) can be expressed as follows. Given $\gamma>0$, find a state feedback control $u^{*}\in l^{2}_{w}(\mathbb{N},\mathbb{R}^{n_u})$ such that the following are achieved:
- (i) $u^{*}$ stabilizes system (1), i.e., when $v=0$ and $u=u^{*}$, the trajectory of (1) with any initial value satisfies $\lim_{t\to\infty}E\|x(t)\|^{2}=0$;
- (ii) the closed-loop perturbation operator $\mathcal{L}_{u^{*}}$ satisfies $\|\mathcal{L}_{u^{*}}\|<\gamma$;
- (iii) when the worst case disturbance $v^{*}\in l^{2}_{w}(\mathbb{N},\mathbb{R}^{n_v})$ is applied to (1), $u^{*}$ minimizes the output energy $J_2(u,v^{*})$.

The infinite horizon $H_2/H_\infty$ control problem for the stochastic mean-field system (1) is said to be solvable if the previously mentioned $(u^{*},v^{*})$ exist. Clearly, $(u^{*},v^{*})$ constitute the Nash equilibria of (18) and (19), which satisfy

$$J_1(u^{*},v^{*})\leq J_1(u^{*},v),\qquad J_2(u^{*},v^{*})\leq J_2(u,v^{*}).$$
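One common convention takes $J_1(u,v)=\sum_{t}E(\gamma^{2}\|v(t)\|^{2}-\|z(t)\|^{2})$ and $J_2(u,v)=\sum_{t}E\|z(t)\|^{2}$; this convention, and the scalar closed-loop system below, are assumptions for illustration only. Truncated values of both indices can then be estimated by Monte-Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1.5

# Illustrative scalar closed-loop system (not from the paper):
# x(t+1) = a*x(t) + b*v(t) + c*x(t)*w(t),  z(t) = k*x(t),  x(0) = 0.
a, b, c, k = 0.5, 1.0, 0.2, 1.0

def estimate_indices(T=200, paths=2000):
    """Monte-Carlo estimates of truncated J1 and J2 for v(t) = 0.5**t."""
    J1 = J2 = 0.0
    x = np.zeros(paths)
    for t in range(T):
        v = 0.5 ** t
        z = k * x
        J1 += gamma**2 * v**2 - np.mean(z**2)
        J2 += np.mean(z**2)
        w = rng.standard_normal(paths)
        x = a * x + b * v + c * x * w
    return J1, J2

J1, J2 = estimate_indices()
# By construction, J1 + J2 equals gamma^2 times the truncated energy of v.
print(J1, J2)
```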
Before the main result is presented, we introduce four coupled matrix-valued equations, (20)–(23), together with the associated coefficient matrices.
The following lemma will be used in the proof of our main result.
Lemma 4. Define the following matrices:andLetThen, the following statements hold: - (i)
If is exactly detectable, so is ;
- (ii)
If is exactly detectable, so is .
Proof of Lemma 4. Suppose that is exactly detectable, but is not. According to Lemma 3, there exists such that . Since means that , this contradicts that is exactly detectable. Along the same line of the proof of (i), (ii) could be demonstrated. □
Theorem 3. For system (1), suppose that the coupled matrix-valued Equations (20)–(23) admit a solution $(P_1,\bar P_1,P_2,\bar P_2)$ satisfying the corresponding definiteness conditions. If the two associated closed-loop systems are exactly detectable, then the $H_2/H_\infty$ control problem has a pair of solutions $(u^{*},v^{*})$ given by the state feedback gains constructed from $(P_1,\bar P_1,P_2,\bar P_2)$.

Proof of Theorem 3. We first define the closed-loop matrices appearing in Lemma 4.
Notice that (20) and (22) can be written as (27) and (28), respectively, where the coefficient matrices are defined in Lemma 4. Since $[A,\bar A;C,\bar C\,|\,K]$ is exactly detectable, according to Lemma 4, its closed-loop counterpart is also exactly detectable, which implies from Lemma 3 and (27) that the closed-loop system is $L^2$-stable. Hence, from Lemma 3, Lemma 4, and (20), the closed-loop system under $u^{*}$ is $L^2$-stable, i.e., $u^{*}$ stabilizes system (1).

Substituting $u^{*}$ into (1) yields the closed-loop dynamics. Because this closed-loop system is $L^2$-stable, for every $v\in l^{2}_{w}(\mathbb{N},\mathbb{R}^{n_v})$, we have $x\in l^{2}_{w}(\mathbb{N},\mathbb{R}^{n})$. Therefore, it is directly obtained from Theorem 2 that $\|\mathcal{L}_{u^{*}}\|<\gamma$.
Next, we prove that the worst-case disturbance exists and that $u^{*}$ minimizes the output energy. Identically to the proof of Lemma 2, and taking (20), (28), and Lemma 3 into consideration, $J_1(u^{*},v)$ admits a completion-of-squares representation. From this representation, one can infer that the minimizing disturbance is the state feedback law determined by (22); it therefore exists and is denoted by $v^{*}$. When $v^{*}$ is applied to system (1), we obtain the corresponding closed-loop system. Due to the fact that this closed-loop system is $L^2$-stable, the pair $(u^{*},v^{*})$ keeps the resulting dynamics $L^2$-stable. By applying Theorem 4.1 of [11], we assert that $u^{*}$ minimizes $J_2(u,v^{*})$. The proof is completed. □
Remark 2. According to the sufficient condition stated in Theorem 3, the solvability of the $H_2/H_\infty$ control problem is transformed into the existence of solutions to the coupled matrix-valued Equations (20)–(23). Since system (1) contains mean-field terms, two of the Riccati equations in (20)–(23) characterize the process $x(t)-Ex(t)$, while the other two characterize its mean $Ex(t)$. If system (1) does not contain mean-field terms and the noise does not depend on the control, then Theorem 3 degenerates to Theorem 1 of [7].