Robust H∞ Performance Analysis of Uncertain Large-Scale Networked Systems

Abstract: This paper considers the robust H∞ performance problem of continuous-time uncertain large-scale networked systems (LSNSs). The systems consist of numerous arbitrarily connected subsystems, each of which has different dynamics. It is currently computationally difficult to handle such systems with the existing lumped analysis method; therefore, by exploiting the structural properties of the systems, sufficient conditions are derived for robust H∞ performance. Based on these results, an analysis condition that depends only on the parameters of each single subsystem is further obtained. Numerical simulations are presented to verify the validity and superiority of the developed conditions.


Introduction
Large-scale networked systems (LSNSs) have attracted a great deal of attention because they are widely used in unmanned aircraft formation flights [1], smart grids [2,3], automated highway systems [4], and industrial process systems [5,6]. LSNSs contain a great many subsystems, which have various dynamics [7]. Although the subsystems interact with each other in a simple way, the overall plant can exhibit extremely complex behaviors [8]. However, in practical engineering applications, the complexity of LSNSs makes it difficult to avoid model errors between the ideal model and the actual system, which may deteriorate the system performance. To deal with the uncertainty generated by model errors, we introduce the concept of robust H∞ performance and adopt the robust H∞ performance index to characterize the immunity of the system to perturbations. Moreover, high dimensionality is a thorny problem in the analysis and synthesis of LSNSs: the existing analysis method in [9], based on a lumped description, must perform high-dimensional matrix operations, especially the inversion of high-dimensional matrices, which can generate a great computational burden or even fail to complete [10]. Therefore, it is important to make the most of the system's structural properties to obtain effective conditions for robust H∞ performance of LSNSs with model uncertainty.
To reduce the computational burden of high-dimensional matrices in LSNSs, many studies have focused on utilizing the connectivity characteristics of subsystems. Ref. [11] proposed an analysis method based on the Nyquist condition, which depends only on the parameters of a single subsystem. Under the framework of integral quadratic constraints (IQC), linear matrix inequalities (LMIs) were obtained using the sparse connection characteristics of subsystems [12,13]. In [14], the complexity of robust performance analysis for LSNSs was reduced by checking the dynamic response of a single subsystem. Ref. [15] studied robust output tracking problems for linear interconnected systems with time-invariant uncertainties. Ref. [16] proposed a decomposable robust stability analysis method for uncertain LSNSs, and the obtained conditions exploit the structural properties of individual subsystems to reduce the computational burden of performance analysis. From the above results, it can be concluded that making full use of the structural characteristics of subsystems is an effective way of managing uncertain LSNSs.
As the system scale increases, the existing conditions in [9] based on the lumped model inevitably involve inverse operations on high-dimensional matrices, which can generate an extremely high computational burden or even become infeasible. In practical engineering, the existing method can no longer meet the computational requirements of present system development. Therefore, the purpose of this paper is to fully utilize the structural properties of the systems to establish computationally attractive conditions for robust H∞ performance.
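As a generic illustration of this point (not taken from the paper), the sketch below compares inverting a lumped block-diagonal matrix all at once with inverting it block by block. The block-wise route performs N inversions of size n instead of one inversion of size N·n, so its cost grows linearly rather than cubically in the number of subsystems; the dimensions and data here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10, 4  # number of subsystems, per-subsystem block size (illustrative)

def block_diag(*blocks):
    """Assemble square blocks into one block-diagonal matrix."""
    dim = sum(b.shape[0] for b in blocks)
    out = np.zeros((dim, dim))
    k = 0
    for b in blocks:
        m = b.shape[0]
        out[k:k + m, k:k + m] = b
        k += m
    return out

# Per-subsystem blocks, shifted by 2*I to guarantee invertibility.
blocks = [rng.uniform(-0.2, 0.2, (n, n)) + 2.0 * np.eye(n) for _ in range(N)]
A = block_diag(*blocks)

# Lumped route: one inverse of the full (N*n) x (N*n) matrix, O((N*n)^3).
A_inv_lumped = np.linalg.inv(A)

# Structured route: N inverses of n x n blocks, O(N * n^3).
A_inv_structured = block_diag(*[np.linalg.inv(b) for b in blocks])

print(np.allclose(A_inv_lumped, A_inv_structured))  # True
```

Both routes yield the same inverse; only the structured one avoids the high-dimensional operation that makes the lumped method expensive.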
Linear systems are systems that satisfy both superposition and homogeneity. Time-invariant systems are systems whose properties do not change with time; that is, the behavior of the system's response depends only on the input signals and the characteristics of the system, and is independent of the moment at which the input signals are applied. Time-varying systems are systems that change over time; that is, systems that do not satisfy the properties of time-invariant systems and whose output characteristics change explicitly over time. This paper investigates the robust H∞ performance problem of continuous-time linear time-invariant (LTI) uncertain large-scale networked systems (LSNSs). The contributions are summarized as follows:

•
We derive sufficient conditions for robust H∞ performance that depend entirely on the block-diagonal structure of the system parameter matrices and the sparsity of the subsystem connection matrix (SCM). Both conditions are computationally attractive because they avoid certain matrix operations, especially the computationally expensive inversion of high-dimensional matrices.

•
Without loss of generality, we impose a hypothetical condition on the SCM and further derive an analysis condition that can be checked independently for each individual subsystem, which significantly improves computational efficiency.
The paper is organized as follows. Section 2 gives the model of uncertain LSNSs and some lemmas. Section 3 presents robust H∞ performance analysis theorems for uncertain LSNSs. Section 4 contains the simulation results, and Section 5 gives the conclusions.
Notations: col{M_i}_{i=1}^{L} denotes the column vector stacked from the elements M_i, and diag{M_i}_{i=1}^{L} denotes the diagonal matrix with diagonal elements M_i. blockdiag{M_1, ..., M_k} denotes the block-diagonal matrix with diagonal blocks M_1, ..., M_k. R^{m×n} stands for the set of m × n real matrices. R^# denotes a vector space of real vectors with appropriate dimension. The mark "∗" signifies the symmetric entry in a symmetric matrix. The transpose of a matrix M is denoted by M^T, and (∗)^T M X or X^T M (∗) is shorthand for X^T M X. P_n represents the set of n × n positive definite matrices; the subscript may be omitted when the meaning is clear.
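As a concrete reading of these stacking operators (an illustrative sketch, not part of the paper), col, diag, and blockdiag can be realized with NumPy as follows:

```python
import numpy as np

M1 = np.array([[1.0, 2.0], [3.0, 4.0]])
M2 = np.array([[5.0, 6.0], [7.0, 8.0]])

# col{M_i}: stack the elements vertically into one tall matrix.
col = np.vstack([M1, M2])

# blockdiag{M_1, M_2}: each M_i sits on the diagonal, zeros elsewhere.
blockdiag = np.zeros((4, 4))
blockdiag[0:2, 0:2] = M1
blockdiag[2:4, 2:4] = M2

# diag{m_i} with scalar entries is an ordinary diagonal matrix.
diag = np.diag([1.0, 2.0, 3.0])
```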

Problem Formulation and Preliminaries
Consider an uncertain LSNS Π composed of N LTI subsystems; the dynamics of each subsystem Π_i (i = 1, ..., N) can be expressed as follows: where δ_j denotes the real uncertain parameters repeated with multiplicity k_j, j = 1, ..., s, and ∆_j, j = 1, ..., f, represent general full-block stable transfer function matrices (TFMs) of appropriate dimensions. I_{k_j} stands for the k_j-dimensional identity matrix. In addition, the subsystems are connected by where Φ is called the SCM. t and i denote the time variable and the subsystem index, respectively. x_i(t) is the state vector at time t. v_i(t) and z_i(t) are called the internal input vector and the internal output vector, respectively. u_i(t), w_i(t), and y_i(t) are called the external input vector, the perturbation input vector, and the external output vector of the subsystem Π_i, respectively. ∆_i(t) is the uncertainty parameter matrix in the model [17]; p_i(t) and q_i(t) are the external signals describing the uncertainty ∆_i(t) of the system model. Figure 1 shows the structural schematic diagram of uncertain LSNSs.

The dimensions of the vectors x_i(t), v_i(t), z_i(t), u_i(t), w_i(t), y_i(t), p_i(t), and q_i(t) are denoted by n_xi, n_vi, n_zi, n_ui, n_wi, n_yi, n_pi, and n_qi, respectively. For clarity, define the following matrices, where ∗, # ∈ {x, p, q, v, z, w, y, u}. Figure 2 shows the structural diagram of the linear fractional model of the uncertain LSNS.
, and q(t) = col{q_i(t)}_{i=1}^{N}. By algebraic manipulation, the uncertain LSNS Π can be described as follows: where

Figure 2. Uncertain linear fractional model.
The matrix (I − A_zv Φ) is invertible due to the system's well-posedness, and the following lemmas are introduced.

Lemma 1 ([18]). If there exist matrices L ∈ R^{(n_1+n_3)×(n_1+n_3)} and det(I − L_3 L_1) ≠ 0, then the following two inequalities are equivalent:

Lemma 2 ([19]). For matrices M and N with suitable dimensions, if there exists a positive scalar a > 0 such that

Lemma 3 ([20]). Let J and K be symmetric matrices with proper dimensions. Then there exists a scalar b such that J + bK > 0 if and only if f^T J f > 0 for every vector f ≠ 0 satisfying f^T K f = 0.

Lemma 4 ([19]). Consider an N_1 × N_1 (N_1 > 0) block-partitioned LMI in a symmetric matrix variable P, written G(P) < 0, whose coefficient matrices are all block-diagonal with N_2 (N_2 > 0) compatibly dimensioned diagonal blocks. If a full-matrix solution P of the LMI exists, then a feasible block-diagonal solution with compatible dimensions must also exist.

Robust H∞ Performance Analysis
In this section, we put forward sufficient conditions for robust H∞ performance. Ref. [9] provides a method to check the robust H∞ performance of the uncertain system (4).
Lemma 5 ([9]). The uncertain system (4) is quadratically stable and ||G_yu(s)||∞ < γ, γ > 0, if there exist a symmetric matrix P > 0 and a matrix S ∈ K such that

Remark 1. From the quadratic stability of the system, it can be deduced that the system is asymptotically stable for all uncertain parameters. By the Cayley-Hamilton theorem, the matrices A, B, K, C, D, L, M, N, and H in (4) are usually dense. Moreover, these parameter matrices contain matrix-inversion terms. Therefore, for LSNSs composed of many subsystems, the computational cost of the lumped condition becomes increasingly expensive, and even computationally infeasible, as the system dimension increases. To solve these problems, based on Lemma 5, we derive a computationally attractive sufficient condition for robust H∞ performance.
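As a numerical companion to the bound ||G_yu(s)||∞ < γ (an illustrative sanity check on placeholder data, not the paper's LMI test), the H∞ norm of a small stable system can be approximated by gridding the imaginary axis and taking the peak largest singular value of G(jω) = C(jωI − A)^{-1}B + D:

```python
import numpy as np

# Small illustrative stable LTI system (placeholder data, not from the paper).
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def hinf_norm_grid(A, B, C, D, w_max=1e3, n_pts=2000):
    """Approximate ||C (jwI - A)^{-1} B + D||_inf on a log frequency grid."""
    ws = np.logspace(-3, np.log10(w_max), n_pts)
    I = np.eye(A.shape[0])
    peak = 0.0
    for w in ws:
        G = C @ np.linalg.solve(1j * w * I - A, B) + D
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

gamma = 2.0
print(hinf_norm_grid(A, B, C, D) < gamma)  # True: this example's peak gain is about 1.25
```

Gridding only lower-bounds the true norm, which is why LMI conditions such as Lemma 5 are preferred for certification; the sketch is useful only as a cross-check on small examples.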
Theorem 1. The system Π is quadratically stable and ||G_yu(s)||∞ < γ, γ > 0, if there exist a symmetric matrix P > 0, a matrix S ∈ K, and a scalar µ > 0 such that where

Proof of Theorem 1. Inequality (9) can be written as

By Lemma 1, we have

Inequality (11) can then be expressed as

Substituting R_1, R_2, R_3, Ω_11, Ω_12, and Ω_22 into Inequality (12), we obtain

The proof is finished.
Remark 2. In contrast to the condition in Lemma 5, the matrices A * # , B * # , C * , D * # , and E * # , * , # ∈ {x, p, q, v, z, w, y, u} in Theorem 1 are all block-diagonally structured and the SCM Φ is sparse, which sufficiently exploits the structural properties of the system.The condition in Theorem 1 is computationally attractive due to the avoidance of some matrix operations, especially the inversion of the high-dimensional matrix.
Considering that the dimension of the matrix blocks in the condition of Theorem 1 is still large, which may affect computational efficiency, we proceed to derive another sufficient condition for robust H∞ performance of the uncertain LSNS Π.

Theorem 2. The system Π is quadratically stable and ||G_yu(s)||∞ < γ, γ > 0, if there exist a symmetric matrix P > 0, a matrix S ∈ K, and a scalar µ > 0 such that where

Proof of Theorem 2. Inequality (8) can be rewritten as

By partitioning the matrix operation, we obtain

Substituting the parameters A, B, K, C, D, L, M, N, and

Define the matrices

It is obvious that V_2^T (−U_1) V_2 < 0. Let w = V_2 θ, θ ∈ R^#; then, for any w ≠ 0 satisfying W_3 w = 0, we have w^T U_1 w > 0. From Lemma 3 we have

Multiplying Inequality (21) by w^T and w, respectively, we obtain

Substituting the parameter matrices of the system (4) yields

This concludes the proof.
Remark 3. In the analysis and synthesis of LSNSs, the block dimension of the matrices affects computational efficiency: when the block dimension is reduced, computational efficiency increases accordingly. Compared with the condition in Theorem 1, the matrix block dimension in Theorem 2 is reduced from six to four, which makes Theorem 2 more computationally attractive than Theorem 1.

Remark 4.
The simulation results show that the computation in Theorem 1 is more stable, which makes it appropriate for scenarios that require accurate calculations, such as the temperature detection process of a cyclically interconnected chemical reaction device. By contrast, Theorem 2 is highly efficient computationally, and conditions testing a single subsystem can be derived from it, which makes it applicable to scenarios that call for high computational speed, such as controlling UAV groups.
For an LSNS, the LMI-based conditions in Theorems 1 and 2 may still be computationally challenging. Therefore, we introduce a hypothetical condition on the SCM Φ and further exploit the SCM's structural characteristics to derive a condition that relies solely on single-subsystem parameters.
The assumed condition on the SCM Φ is as follows: each row of Φ has exactly one non-zero element, equal to 1, and no column has all of its elements equal to 0. As described in [21], this assumption preserves the generality of the adopted system model, and various connectivity relationships among subsystems can be described in this way. Let , and H_{∗,0} = 0, where ∗ ∈ {v, z}. a_k is a row vector of dimension H_z whose k-th element is 1 and whose other elements are 0. Additionally, the location of the non-zero element in the i-th row of the SCM Φ is denoted by l(i), i = 1, 2, ..., H_v. Then, according to the assumed conditions on Φ, Φ = col{a_l(i)}_{i=1}^{H_v} can be deduced. c(i) represents the number of subsystems directly influenced by the i-th element of the internal output vector z(t). Define the matrices and Σ = diag{Σ_l}_{l=1}^{N}, l = 1, 2, ..., H_v. According to the above conditions, it is clear that By Lemma 4 and the connection characteristics of Φ, we can derive the following sufficient condition.
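The assumption on Φ (exactly one entry equal to 1 per row, no all-zero column) is easy to generate and check mechanically. The following sketch is illustrative only, with the row positions l(i) chosen at random:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_scm(n_rows, n_cols):
    """Random SCM: one entry equal to 1 per row, every column hit at least once."""
    assert n_rows >= n_cols
    # Cover each column once to rule out all-zero columns, then fill the
    # remaining rows with arbitrary column positions l(i).
    targets = np.concatenate([np.arange(n_cols),
                              rng.integers(0, n_cols, n_rows - n_cols)])
    rng.shuffle(targets)
    phi = np.zeros((n_rows, n_cols))
    phi[np.arange(n_rows), targets] = 1.0
    return phi

Phi = random_scm(6, 4)
row_sums = Phi.sum(axis=1)  # all 1: a single non-zero element per row
col_sums = Phi.sum(axis=0)  # each >= 1; column i's sum plays the role of c(i)
```

The column sums recover c(i), the number of subsystems directly influenced by the i-th internal output, directly from the matrix.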
Remark 5. Generally speaking, some subsystem parameters in an LSNS are sometimes considered identical; thus, Theorem 3 is even more computationally advantageous. Compared with the conditions in Theorems 1 and 2, the computational complexity of Theorem 3 depends only linearly on the number of subsystems when each subsystem's dimension is fixed, which means that for LSNSs the computational burden of the condition in Theorem 3 is far smaller than that of Theorems 1 and 2.

Numerical Simulations
In this section, the calculation time costs of different analysis methods are compared by numerical simulations, and the obtained results verify the validity and superiority of the derived methods.
In this simulation, we set n_xi = n_vi = n_zi = n_ui = n_wi = n_yi = n_pi = n_qi = 2. The parameters of each subsystem are drawn randomly from the continuous uniform distribution on the interval [−0.2, 0.2]. The elements of the SCM Φ are generated randomly, with the position of the non-zero element 1 selected arbitrarily. To compare the computing-time trends of the derived methods and Lemma 5, we select different numbers of subsystems N, i.e., N ∈ {2, 8, 10, 15, 20, 25, 30, 38, 40}. The conditions of Theorems 1 and 2 are computed with the algorithm tool DSDP5 developed in [22], while Lemma 5 and Theorem 3 are computed with the LMI toolbox in MATLAB. The computational results are recorded in Tables 1 and 2. More intuitively, Figure 3a,b show the trends of the average computation time and the standard deviation of the computation time against the number of subsystems, respectively. From Figure 3a,b, it is evident that the computation time of all four methods grows with the number of subsystems. As the number of subsystems increases, the growth rate of the computing time of Lemma 5 keeps increasing, whereas the growth rates of Theorems 1-3 are greatly reduced compared with Lemma 5. It is worth noting that the computation time of Theorem 3 varies only linearly with the number of subsystems. When the system size is relatively small, the computational efficiency of the four methods is comparable. However, as the system size increases, the computational efficiency of the conditions in Theorems 1-3 is much higher than that of Lemma 5, because these three methods all avoid some matrix operations, especially the computationally expensive inversion of high-dimensional matrices. In addition, when the system size continues to grow, the conditions in Lemma 5 and Theorems 1 and 2 may become impossible to compute owing to the computer's memory limitations. Because the condition in Theorem 3 relies only on individual subsystem parameters, it can complete the computation for larger systems, so the condition in Theorem 3 has greater computational advantages in robust H∞ performance analysis of uncertain LSNSs.
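Under stated assumptions (all signal dimensions equal to 2, parameters uniform on [−0.2, 0.2], one non-zero entry per row of Φ), the random setup of this simulation can be sketched as follows; the well-posedness condition that (I − A_zv Φ) be invertible is included as a sanity check, and the parameter names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2  # every signal dimension is 2, as in the simulation setup

def assemble(N, rng, n=2):
    """Assemble the lumped A_zv (block diagonal) and a random SCM Phi."""
    # Per-subsystem internal-coupling blocks, uniform on [-0.2, 0.2].
    azv_blocks = [rng.uniform(-0.2, 0.2, (n, n)) for _ in range(N)]
    Azv = np.zeros((N * n, N * n))
    for i, b in enumerate(azv_blocks):
        Azv[i * n:(i + 1) * n, i * n:(i + 1) * n] = b
    # Random SCM: one entry equal to 1 per row, position chosen arbitrarily.
    Phi = np.zeros((N * n, N * n))
    Phi[np.arange(N * n), rng.integers(0, N * n, N * n)] = 1.0
    return Azv, Phi

for N in (2, 8, 10):
    Azv, Phi = assemble(N, rng)
    # Well-posedness requires (I - Azv @ Phi) to be invertible; with entries
    # bounded by 0.2 the product is a small perturbation, so the determinant
    # stays well away from zero.
    well_posed = abs(np.linalg.det(np.eye(N * n) - Azv @ Phi)) > 1e-9
    print(N, well_posed)
```

With entries bounded by 0.2 and two non-zeros per row, the induced infinity norm of A_zv Φ stays below 0.4, so invertibility holds for every random draw, not just these seeds.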

Conclusions
This paper investigates the problem of robust H∞ performance for LSNSs with model uncertainty. An uncertain LSNS model containing numerous subsystems is established, where each subsystem has a certain degree of model uncertainty. By fully utilizing the block-diagonal structural properties of the system matrices and the sparseness of the SCM, sufficient conditions for robust H∞ performance are obtained. The resulting conditions are computationally attractive because they avoid the computationally expensive inverse operations on high-dimensional matrices. On this basis, a condition that depends only on individual subsystem parameters is further derived, which significantly improves the computational efficiency in the analysis and synthesis of uncertain LSNSs.

Figure 1. The general structure of uncertain LSNSs.

Figure 3. Comparison of computation time cost for various methods. (a) Average calculation time. (b) Standard deviation of calculation time.

Table 1. Average calculation time.

Table 2. Standard deviation of calculation time.