Article

Network Structure Identification Based on Measured Output Data Using Koopman Operators

Graduate School of Systems Design, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji 192-0397, Tokyo, Japan
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(1), 89; https://doi.org/10.3390/math11010089
Submission received: 19 November 2022 / Revised: 16 December 2022 / Accepted: 20 December 2022 / Published: 26 December 2022
(This article belongs to the Section Network Science)

Abstract:
This paper considers the problem of identifying the network structures of interconnected dynamical systems from measured output data. In particular, we propose an identification method based on the measured output data of each node in the network, whose dynamics are unknown. The proposed identification method consists of three steps: first, we consider the outputs of the nodes to be the full states of the node dynamics, and the unmeasurable hidden states to be dynamical inputs with unknown dynamics. In the second step, we define the dynamical inputs as new variables and identify the dynamics of the network system from measured output data using Koopman operators. Finally, we extract the network structure from the identified dynamics as the information transmitted via the network. We show that the identified coupling functions, which represent the network structures, are in fact projections of the dynamical inputs onto the space spanned by some observable functions. Numerical examples illustrate the validity of the obtained results.

1. Introduction

Network theory plays an important role in various fields such as engineering [1,2,3,4], finance [5] and bioscience [6]. In particular, inferring the structure of a network from measured data has become an important task, and many practical problems can be cast in this form, e.g., modeling human brains [7] or detecting the sources of rumor propagation [8]. The network identification problem has been studied from many perspectives, and various methods have been developed. Data correlation [9,10,11,12,13,14,15,16] plays an important role in network identification: in the presence of noise [9,14,15] or unmeasured hidden signals [16,17], the time derivative of the dynamical correlation matrix among the nodes reveals the relationship between the adjacency matrix of the network and the covariance matrix of the noise. With the help of basis functions, nonlinearity is identified via higher-order data correlations [13,18]. Furthermore, sparse identification (compressive sensing) techniques [19,20] are employed to identify the structure of networks [21,22]: the linear evolution of the data is identified by finding the sparsest matrix that maps the data to the next step. Taylor expansions or trigonometric basis functions also make such methods applicable to networks with nonlinear couplings [13,23,24]. In [25], Koopman operator theory and sparse identification techniques are combined to achieve accurate identification of nonlinear coupling functions. Moreover, adaptive master–slave synchronization-based methods [26,27], statistics-based methods [28] and variable (or phase) resetting-based methods [15,29] have also been successful.
Although most of the above-mentioned methods assume that all the states of the nodes are measurable, in practice there may exist states that cannot be measured, known as hidden variables. In [10,11,17], the hidden variables are treated as colored noise, and the identification is performed in the subspace spanned by the measurable states. In [16], networks with hidden nodes are identified by reconstructing the adjacency matrix corresponding to the network and calculating the covariance of the obtained results, based on the principle that unknown input signals from hidden nodes enlarge the covariance. In [30], this method is extended to networks with transmission time delays. In this paper, we consider networks of interconnected dynamical systems and model the network structures as coupling functions that describe the inputs to the nodes via the network. Specifically, we focus on the case where only the output data of the nodes are measurable. The goal of this paper is thus reduced to identifying the coupling functions from measured output data.
Generally speaking, the statistics-based [28] and correlation-based methods [9,22] detect whether transmission between nodes exists; therefore, these methods can only recover the topology of the network. Nonlinearity in network structures, modeled as nonlinear coupling functions, is commonly identified by approximating the dynamics that govern the nonlinear evolution of the measured data with the help of basis functions [13,21,23,25,31]. In this paper, compared to the widely studied case where all the states of the nodes can be measured, a major difficulty in obtaining the possibly nonlinear coupling function is that it may depend on the unmeasurable states, so the dynamics cannot be fully revealed from measured data. On the other hand, when the dynamics of the nodes without inputs are unknown and data of unforced nodes cannot be measured, it is also challenging to distinguish the coupling function from the identified dynamics of all the nodes. To solve these problems, we propose an identification method that makes it possible to obtain the possibly nonlinear coupling functions solely from measured output data. The proposed method consists of the following three steps. First, we reformulate the dynamical model of the network: we consider the outputs of each node as the full states and the dynamics of the output signals as the dynamics of the node. In the second step, the unmeasurable hidden states are modeled as unknown dynamical inputs. Defining such dynamical inputs as new variables, time series data of the new variables can be calculated, and the dynamics of these variables can be identified with the help of Koopman operators. In the third step, the network dynamics are identified in terms of the outputs and the new variables using measured data, and the network structure is extracted as the data transmission in the network.
If the dimension of the output is so low that the dynamics of the network cannot be embedded into the space spanned by the outputs and the new variables, then additional variables are introduced based on past data [32]. We show that the obtained coupling function is a projection of the future values of the dynamical inputs onto the space spanned by some pre-defined observable functions, and discuss the causes of identification errors. Compared to previous efforts, the contribution of this paper is two-fold:
  • The proposed method is applicable to networks whose nodes have unknown dynamics and only the outputs can be measured;
  • The proposed method is applicable to the case where data transmission in the network is represented by nonlinear functions.
With these advantages, the proposed method is expected to be applicable to practical problems such as analyzing electronic circuits with limited measurements and modeling nervous systems.
This paper is organized as follows. In Section 2, we provide brief introductions to the Koopman operator and Koopman mode decomposition. Section 3 describes the proposed identification method, and Section 4 presents numerical identification examples to show the validity and usefulness of the obtained results. In Section 5, we summarize the paper and conclude with some remarks and discussions.
Throughout this paper, $\mathbb{N}$, $\mathbb{R}$ and $\mathbb{C}$ denote the sets of all natural numbers, all real numbers and all complex numbers, respectively. $\|x\|_p$ denotes the $p$-norm of an $n$-dimensional vector $x$, defined by $\|x\|_p = (\sum_{i=1}^{n} |x_i|^p)^{1/p}$, where $p = 1, 2, \ldots$, and $\|A\|_F$ denotes the Frobenius norm of a matrix $A \in \mathbb{C}^{n \times m}$, defined by $\|A\|_F = (\sum_{i=1}^{n} \sum_{j=1}^{m} |a_{ij}|^2)^{1/2}$. For constants, vectors, matrices or functions $v_1, \ldots, v_n$, $\mathrm{col}_{i=1}^{n}(v_i)$ or $\mathrm{col}(v_1, \ldots, v_n)$ denotes the stacked vector (or matrix) $(v_1^*, \ldots, v_n^*)^*$, where $v_i^*$ is the Hermitian transpose of $v_i$. For a vector $a = \mathrm{col}(a_1, \ldots, a_n)$, $\mathrm{span}\{a\}$ denotes the space spanned by all its entries, i.e., $\mathrm{span}\{a\} = \{f \mid f = \sum_{i=1}^{n} c_i a_i,\ c_i \in \mathbb{C}\}$, and $\mathrm{span}\{a\}^n$ denotes the space of $n$-dimensional vectors whose entries are contained in $\mathrm{span}\{a\}$. For a functional space $\mathcal{G}$ and its subspace $\mathcal{F}$, $P_{\mathcal{F}}\varphi$ denotes the projection operator defined by $P_{\mathcal{F}}\varphi = \operatorname{argmin}_{\phi \in \mathcal{F}} \|\phi - \varphi\|$, where $\varphi \in \mathcal{G}$ and $\|\cdot\|$ is a norm defined on $\mathcal{G}$. $e_i$ denotes the $i$th standard basis vector of appropriate dimension, i.e., the $i$th entry of $e_i$ is 1 and the other entries are 0. $x^{i+}$ denotes the $i$-step evolution of a variable $x$, i.e., $x^{i+}[k] = x[k+i]$ where $k \in \mathbb{N}$; in particular, $x^+ = x^{1+}$.

2. Preliminary: The Koopman Operator

In this paper, we consider Koopman operators in discrete-time settings [33,34]. For the discrete-time system described by
$$x^+ = f(x), \tag{1}$$
where $x \in \mathbb{R}^n$ and $f : \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear vector field, define $\psi : \mathbb{R}^n \to \mathbb{C}$ to be an observable (function), and denote the space of all the observables by $\mathcal{F}$. Define the space of all the operators that map $\mathcal{F}$ to $\mathcal{F}$ by $\mathcal{B}$, i.e., $\mathcal{B} = L(\mathcal{F}, \mathcal{F})$. Then, the evolution of the states of system (1) can be described by the evolution of the observable functions governed by the Koopman operator $K \in \mathcal{B}$ defined by
$$K\psi(x) = \psi(f(x)). \tag{2}$$
The Koopman operator is an infinite-dimensional operator that preserves all the nonlinear characteristics of the vector field $f(\cdot)$, and it is linear in the sense that $K(a\psi_1 + b\psi_2) = aK\psi_1 + bK\psi_2$ for any $a, b \in \mathbb{C}$ and $\psi_1, \psi_2 \in \mathcal{F}$. Define $\phi_1, \phi_2, \ldots$ and $\lambda_1, \lambda_2, \ldots$ to be the eigenfunctions and corresponding eigenvalues of the Koopman operator $K$, such that $K\phi_i = \lambda_i \phi_i$. If the eigenfunctions span $\mathcal{F}$, then for any $\xi \in \mathcal{F}$, there exist $c_i$ for $i = 1, 2, \ldots$ such that $\xi = \sum_{i=1}^{\infty} c_i \phi_i$ and
$$K\xi = \sum_{i=1}^{\infty} c_i K\phi_i = \sum_{i=1}^{\infty} c_i \lambda_i \phi_i. \tag{3}$$
Equation (3) is called the Koopman mode decomposition of K ξ ( x ) = ξ ( f ( x ) ) , and c i is called the eigenmode of ξ ( f ( x ) ) with respect to span { ϕ i } .
To obtain a $q$-dimensional approximation of $K$, we consider the first $q$ dominant pairs of eigenvalues and eigenfunctions of the Koopman operator $K$, denoted by $((\lambda_1, \phi_1), \ldots, (\lambda_q, \phi_q))$. Define the space spanned by the $q$ eigenfunctions by $\mathcal{F}_q$, i.e., $\mathcal{F}_q = \mathrm{span}\{\phi_1, \ldots, \phi_q\}$. For any $\varphi \in \mathcal{F}_q$, there exists a $c_\varphi \in \mathbb{C}^q$ such that $\varphi = c_\varphi^* \Phi$ and
$$K\varphi = c_\varphi^* \Lambda \Phi,$$
where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_q)$ and $\Phi = \mathrm{col}(\phi_1, \ldots, \phi_q)$. If the set of eigenfunctions $\{\phi_1, \ldots, \phi_q\}$ is sufficiently rich, then the spanned space can be considered to contain all the observable functions, i.e., $\mathcal{F}_q \approx \mathcal{F}$. Furthermore, consider an observable function set $\{\psi_1, \ldots, \psi_q\}$ of $q$ linearly independent elements, and define $\Psi = \mathrm{col}(\psi_1, \ldots, \psi_q)$. Supposing that $\{\psi_1, \ldots, \psi_q\}$ also forms a basis of $\mathcal{F}_q$, there exists an invertible linear map $P$ that maps $\Psi$ to $\Phi$, i.e., $\Phi = P\Psi$, and
$$K\Psi = KP^{-1}\Phi = P^{-1}\Lambda\Phi = P^{-1}\Lambda P\Psi.$$
Setting $A = P^{-1}\Lambda P$, $A$ is a $q$-dimensional approximation of the Koopman operator $K$, such that $A\Psi(x) = \Psi(f(x))$. For any $\varphi \in \mathcal{F}$ that has an approximation $\varphi \approx c^*\Psi \in \mathcal{F}_q$, a $q$-dimensional approximation of $K\varphi$ can be obtained as
$$K\varphi \approx c^* A\Psi. \tag{4}$$
In this paper, a Koopman operator acts on a vector in an entry-wise manner, i.e., K v = col i = 1 N ( K v i ) for v = col i = 1 N ( v i ) , where v i F .
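In practice, the finite-dimensional approximation $A$ is computed from snapshot data by least squares over a chosen observable dictionary, in the spirit of (4). A minimal sketch (the toy map `f` and the dictionary `psi` are illustrative choices, not from the paper):

```python
import numpy as np

# Toy scalar dynamics x+ = f(x). The dictionary psi = (1, x, x^2) spans a
# Koopman-invariant subspace for this linear map, so the q x q matrix A
# reproduces the action of K exactly on span{psi}.
def f(x):
    return 0.5 * x

def psi(x):
    return np.array([1.0, x, x ** 2])

xs = np.linspace(-1.0, 1.0, 50)                    # sampled states
X = np.column_stack([psi(x) for x in xs])          # q x M snapshot matrix
Y = np.column_stack([psi(f(x)) for x in xs])       # evolved snapshots

# A = argmin_H ||H X - Y||_F: a finite-dimensional approximation of K.
A = Y @ np.linalg.pinv(X)
```

For any $\varphi = c^*\Psi$, $K\varphi$ is then approximated by $c^* A\Psi$ as in (4); in this toy case the approximation is exact because the dictionary is invariant.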

3. Network Structure Identification Based on Output Data

Consider the network system of N nodes described by
$$x_i^+ = f(x_i) + Bu_i(x), \quad w_i = Cx_i, \quad u_i(x) = \sum_{j=1}^{N} a_{ij}\, g(x_i, x_j), \tag{5}$$
where $x_i \in \mathbb{R}^n$, $x = \mathrm{col}_{i=1}^N(x_i) \in \mathbb{R}^{Nn}$ denotes the states, $f : \mathbb{R}^n \to \mathbb{R}^n$ is locally Lipschitz continuous, $w_i \in \mathbb{R}^m$ is the output of node $i$, and $u_i(x) : \mathbb{R}^{Nn} \to \mathbb{R}^m$ denotes the transmission sent to node $i$ via the network, where $a_{ij}$ is the $(i, j)$ entry of the adjacency matrix associated with the network topology. Define $w = \mathrm{col}_{i=1}^N(w_i)$ and rewrite the dynamics of the network as
$$x^+ = f(x) + Bg(x), \quad w = Cx, \tag{6}$$
where $f(x) = \mathrm{col}_{i=1}^N(f(x_i))$, $g(x) = \mathrm{col}_{i=1}^N(u_i)$, $B = I_N \otimes B$, $C = I_N \otimes C$, and $\otimes$ denotes the Kronecker product. Here, the network structure is fully contained in $g(x)$, so the network structure identification problem reduces to identifying $g(x)$. Specifically, we consider the case where a linear approximation of the dynamical model is known, and attempt to identify the network structure from measured output data. We first reformulate the dynamical model of the nodes such that the data of the new variables used to describe the dynamics can be calculated from the output, and then describe a numerical method to obtain the coupling function. The identification method is considered in two cases, according to whether the dimension of the output is sufficiently high or not.
We first construct the dynamical model of the nodes such that the outputs $w_i$ correspond to the full states, and the unmeasurable states act as unknown dynamical inputs. Let $Fx$ be a linear approximation of $f(x)$, where $F \in \mathbb{R}^{n \times n}$; write $f(x) = Fx + \tau(x)$, where $\tau(x)$ denotes the modeling error. Let $T \in \mathbb{R}^{(n-m) \times n}$ be such that $\mathrm{col}(C, T)$ has full rank, and rewrite (5) as
$$w_i^+ = Cx_i^+ = CFx_i + C\tau(x_i) + CBu_i(x), \tag{7a}$$
$$Tx_i^+ = TFx_i + T\tau(x_i) + TBu_i(x). \tag{7b}$$
Since only data of the outputs $w_i$ of the nodes can be measured, we rewrite $CFx_i$ as $CFx_i = F_1 w_i + F_2 x_i$, where $F_1 \in \mathbb{R}^{m \times m}$ consists of the first $m$ columns of $CF\,\mathrm{col}(C, T)^{-1}$ and $F_2 \in \mathbb{R}^{m \times n}$. Define $y_i = F_2 x_i + C\tau(x_i) + CBu_i(x)$ for $i = 1, \ldots, N$. Substituting $y_i$ into (7a), $y_i$ is considered a dynamical input to $w_i$, i.e., $w_i^+ = F_1 w_i + y_i$, where the dynamics of $y_i$ are unknown. The merits of defining the variables $y_i$ are two-fold: the dynamics of $w_i$ become fully known, and $y_i[k]$ can be obtained as $w_i[k+1] - F_1 w_i[k]$. We consider two cases, according to whether the dimension of $\mathrm{col}(w_i, y_i)$ is smaller than the dimension of $x_i$ or not. Note that though $y_i$ is unknown, data for $y_i$ can be calculated from $w_i$, and the dimension of $y_i$ can be inferred by checking the linear dependency of the data series.
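The computation of the dynamical-input data is purely algebraic: since $w_i^+ = F_1 w_i + y_i$, samples of $y_i$ follow from consecutive output samples. A small sketch (the values of `F1` and the synthetic outputs are placeholders; real `w` would be sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)
m, M = 2, 100
# Known linear part F1 of the output dynamics (placeholder values).
F1 = np.array([[1.0, 0.1], [-0.1, 1.1]])

# Measured outputs w[0], ..., w[M+1]; synthetic here.
w = rng.standard_normal((M + 2, m))

# y[k] = w[k+1] - F1 w[k] for k = 0, ..., M: the unmeasured dynamical input.
y = w[1:] - w[:-1] @ F1.T
```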
We first consider the case where $\dim(y_i) + m \geq n$. In this case, no additional variables are required to span the dynamics of $x_i$, so let the dynamics of $y_i$ be described by $y_i^+ = h_i(w, y)$, where $y = \mathrm{col}_{i=1}^N(y_i) \in \mathbb{R}^{Nm}$ and $h_i : \mathbb{R}^{Nm} \times \mathbb{R}^{Nm} \to \mathbb{R}^m$. Reformulate the dynamical models of the nodes as
$$w_i^+ = F_1 w_i + y_i, \tag{8a}$$
$$y_i^+ = h_i(w, y), \tag{8b}$$
for $i = 1, \ldots, N$, and rewrite the dynamics of the network as
$$w^+ = F_1 w + y, \tag{9a}$$
$$y^+ = h(w, y), \tag{9b}$$
where $h(w, y) = \mathrm{col}_{i=1}^N(h_i(w, y))$, $F_1 = I_N \otimes F_1$, and $\otimes$ denotes the Kronecker product.
Next, we attempt to extract the dynamics of $y$, i.e., $h(w, y)$, from the measured data $w[k]$ and $y[k]$ using Koopman operator theory. Define observables $\psi_i(w, y) \in \mathcal{F}$ for $i = 1, \ldots, q$, where $\mathcal{F}$ is the space of all complex-valued scalar functions, i.e., $\mathcal{F} = \{f \mid f : \mathbb{R}^{Nm} \times \mathbb{R}^{Nm} \to \mathbb{C}\}$. Let $K$ denote the Koopman operator associated with dynamics (9), which governs the evolution of the observables, i.e.,
$$K\psi_i(w, y) = \psi_i(w^+, y^+) = \psi_i(F_1 w + y, h(w, y)), \tag{10}$$
for $i = 1, \ldots, q$. Define $\Psi(w, y) = \mathrm{col}(\psi_1, \ldots, \psi_q) : \mathbb{R}^{Nm} \times \mathbb{R}^{Nm} \to \mathbb{C}^q$ to be an observable set. Then, for any observable $\varphi \in \mathcal{F}$ that can be approximated as $\varphi \approx c_\varphi \Psi$, $c_\varphi \in \mathbb{C}^{1 \times q}$, it follows from (4) that a $q$-dimensional approximation of $K\varphi$ can be obtained as
$$K\varphi \approx c_\varphi A\Psi,$$
where $A \in \mathbb{C}^{q \times q}$ is such that $K\Psi \approx A\Psi$. Here, $c_\varphi$ and $A$ numerically satisfy $c_\varphi = \operatorname{argmin}_c \|\varphi - c\Psi\|_{L^2}$ and $A = \operatorname{argmin}_H \|\Psi(F_1 w + y, h(w, y)) - H\Psi\|_{L^2}$, respectively. As a result, by considering the states $y$ as observables, the right-hand side of (9b) can be approximated as
$$h(w, y) = Ky \approx C_y A\Psi(w, y),$$
where $C_y$ is such that $y \approx C_y \Psi(w, y)$. Note that the existence of the expansion matrix $C_y$ can always be ensured by including the states $w$ and $y$ as observables in $\Psi(w, y)$. The identification problem is then reduced to obtaining $A$, a matrix approximation of $K$.
To obtain $A$ from data, suppose that $M + 2$ steps of data, $w[0], w[1], \ldots, w[M+1]$, are measured. Then, data for $y[k]$ can be obtained as $y[k] = w[k+1] - F_1 w[k]$ for $k = 0, \ldots, M$. Define the data matrices $X, Y \in \mathbb{C}^{q \times M}$ as
$$X = \begin{pmatrix} \Psi(w[0], y[0]) & \cdots & \Psi(w[M-1], y[M-1]) \end{pmatrix}, \tag{11a}$$
$$Y = \begin{pmatrix} \Psi(w[1], y[1]) & \cdots & \Psi(w[M], y[M]) \end{pmatrix}. \tag{11b}$$
According to the definition of $K$, i.e., (10), the entries in $Y$ contain the evolution of the observations in $X$, so $KX = Y$ holds, and the desired $A$ can be obtained as the transition matrix that maps $X$ to $Y$, i.e.,
$$A = \operatorname{argmin}_H \|HX - Y\|_F^2, \tag{12}$$
where $\|\cdot\|_F$ denotes the Frobenius norm. Denote the optimal solution of optimization problem (12) by $A_{opt}$; an approximation of dynamics (9b) is then obtained as
$$y^{+,id} = h^{id}(w, y) = C_y A_{opt} \Psi(w, y). \tag{13}$$
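The construction of the data matrices and the least-squares step above can be sketched as follows (the dictionary `Psi` and the placeholder snapshot data are illustrative; in practice `w` and `y` come from measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
Nm, M = 4, 300                      # stacked output/input dimension, data length

# Placeholder snapshot data standing in for measured w[k] and computed y[k].
w = rng.standard_normal((M + 1, Nm))
y = rng.standard_normal((M + 1, Nm))

def Psi(wk, yk):
    # Observable vector containing 1, w, y and elementwise products (a design choice).
    return np.concatenate(([1.0], wk, yk, wk * yk))

q = 1 + 3 * Nm
X = np.column_stack([Psi(w[k], y[k]) for k in range(M)])         # q x M
Y = np.column_stack([Psi(w[k + 1], y[k + 1]) for k in range(M)])

# A_opt = argmin_H ||H X - Y||_F, as in (12).
A_opt = Y @ np.linalg.pinv(X)

# C_y picks y back out of Psi, since y = C_y Psi(w, y) for this layout.
C_y = np.zeros((Nm, q))
C_y[:, 1 + Nm:1 + 2 * Nm] = np.eye(Nm)

def h_id(wk, yk):
    # Identified dynamics (13): y+ approximated by C_y A_opt Psi(w, y).
    return C_y @ A_opt @ Psi(wk, yk)
```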
Next, we obtain information about the network structure from the identified dynamics of $y$, i.e., $h^{id}(w, y)$. Specifically, we extract the information sent to the nodes from other nodes via the network [31]. If all the connections in the network were cut, then $g(x) = 0$ and $y_i^+$ would depend only on $y_i$ and $w_i$. This fact indicates that the transmission from node $j$ to node $i$ can be identified as the terms in the dynamical model of node $i$ that depend on $w_j$ or $y_j$. It follows from (13) that for the $i$th node,
$$y_i^+ = C_i C_y A_{opt} \Psi(w, y), \tag{14}$$
where $C_i \in \mathbb{R}^{m \times Nm}$ is such that $y_i = C_i y$. Define $\Psi_i(w_i, y_i) \in \mathbb{R}^q$ to be the observable vector obtained from $\Psi(w, y)$ by setting to 0 all the entries that depend on $w_j$ or $y_j$ for any $j \neq i$, and define $\Psi^i(w, y) = \Psi(w, y) - \Psi_i(w_i, y_i)$. Note that $\Psi_i(w_i, y_i)$ contains only observables that depend on the states of node $i$. Then, (14) can be rewritten as
$$y_i^+ = C_i C_y A_{opt} \Psi_i(w_i, y_i) + C_i C_y A_{opt} \Psi^i(w, y). \tag{15}$$
Here, $C_i C_y A_{opt} \Psi^i(w, y)$ corresponds to the information sent from other nodes to node $i$ via the network, and is considered the network structure to be identified. As a result, the dynamics of the network are identified as
$$w^+ = F_1 w + y, \tag{16a}$$
$$y^{+,id} = \mathrm{col}_{i=1}^N(C_i C_y A_{opt} \Psi_i(w_i, y_i)) + g^{id}(w, y), \tag{16b}$$
where
$$g^{id}(w, y) = \mathrm{col}_{i=1}^N(C_i C_y A_{opt} \Psi^i(w, y)) \tag{17}$$
is the identified coupling function.
Remark 1.
Since $Tx_i$ in (7b) is $(n-m)$-dimensional, theoretically only $n - m$ linearly independent variables are required to fully describe the dynamics. Without loss of generality, define $\tilde{y}_i$ to be a vector consisting of the first $n - m$ linearly independent entries of $y_i$, and define $\tilde{y} = \mathrm{col}_{i=1}^N(\tilde{y}_i) \in \mathbb{R}^{N(n-m)}$. Then, dynamics (9b) can be rewritten as
$$y^+ = h(w, \tilde{y}), \tag{18}$$
where $h : \mathbb{R}^{Nm} \times \mathbb{R}^{N(n-m)} \to \mathbb{R}^{Nm}$.
Next, we consider the case where $\dim(y_i) + m < n$. In this case, with $y_i$ defined by $y_i = F_2 x_i + C\tau(x_i) + CBu_i(x)$ as in (8), $\dim(\mathrm{col}(w_i, y_i)) < \dim(x_i)$, which means that additional variables are required for the dynamics of $x_i$ to be fully embedded into the space spanned by the new variables. Here, we make use of delay coordinates [32] to complement the dimension. Let $r$ be the smallest integer such that $r \cdot \dim(y_i) + m \geq n$, and rewrite the dynamics of the network (9) as
$$w^+ = F_1 w + y, \tag{19a}$$
$$y^{r+} = h(w, y, \ldots, y^{(r-1)+}), \tag{19b}$$
where $F_1$ is defined as in (9). Further, define $z_i = y^{i+} \in \mathbb{R}^{Nm}$ for $i = 1, \ldots, r-1$ and define $z = \mathrm{col}(z_1, \ldots, z_{r-1})$. Then, for each node, $r \cdot \dim(y_i) + m \geq n$ holds, and the dynamics of $x$ can be embedded into the space spanned by $w$, $y$ and $z$. The identification is then performed by finding an approximation $A_{opt}$ of the Koopman operator defined by
$$K\psi_i(w, y, z_1, \ldots, z_{r-1}) = \psi_i(F_1 w + y, z_1, z_2, \ldots, h(w, y, z)), \tag{20}$$
using the data matrices defined by
$$X = \begin{pmatrix} \Psi(w[0], y[0], z[0]) & \cdots & \Psi(w[M-1], y[M-1], z[M-1]) \end{pmatrix}, \tag{21a}$$
$$Y = \begin{pmatrix} \Psi(w[1], y[1], z[1]) & \cdots & \Psi(w[M], y[M], z[M]) \end{pmatrix}, \tag{21b}$$
constructed from $M + r$ steps of measured data of $w$. The dynamics of the network are identified as
$$w^+ = F_1 w + y, \quad y^+ = z_1, \quad z_1^+ = z_2, \quad \ldots, \quad z_{r-1}^+ = \mathrm{col}_{i=1}^N(C_i C_z A_{opt} \Psi_i(w, y, z)) + g^{id}(w, y, z),$$
where $C_z \in \mathbb{R}^{Nm \times q}$ is such that $z_{r-1} \approx C_z \Psi(w, y, z_1, \ldots, z_{r-1})$, $\Psi_i$ contains only observables that depend on the states of node $i$, and
$$g^{id}(w, y, z) = \mathrm{col}_{i=1}^N(C_i C_z A_{opt} \Psi^i(w, y, z)) \tag{22}$$
is the identified coupling function, where $\Psi^i = \Psi - \Psi_i$ as in (15). Note that the case where $\dim(y_i) + m \geq n$ can be considered a special case where $r = 1$ and $z$ does not exist.
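The delay coordinates $z_i = y^{i+}$ are built by stacking shifted copies of the $y$ data. A small sketch (sizes are arbitrary, and the synthetic `y` stands in for data computed from the outputs):

```python
import numpy as np

rng = np.random.default_rng(2)
Nm, steps, r = 3, 50, 3                 # toy sizes: dim(y), samples, delay depth

# y[k] computed from the outputs as y[k] = w[k+1] - F1 w[k]; synthetic here.
y = rng.standard_normal((steps, Nm))

def delay_stack(y, r):
    # Row k is col(y[k], z_1[k], ..., z_{r-1}[k]) with z_i[k] = y[k + i].
    M = len(y) - r                      # number of usable snapshots
    return np.hstack([y[i:i + M] for i in range(r)])

YZ = delay_stack(y, r)
```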
The proposed identification algorithm is summarized as Algorithm 1.
Algorithm 1 Proposed identification algorithm
Input: measured time series of the outputs of the nodes, F x i as an approximation of f ( x i ) , C, and n
Output: g i d ( w , y ) or g i d ( w , y , z )
   1. Initialization: find $r$ as the minimum integer with which $r \cdot \dim(y_i) + m \geq n$
   2. Matrices construction:
   if r = 1 then
      design an observable set Ψ ( w , y ) and construct data matrices X , Y with (11)
   else
      design an observable set Ψ ( w , y , z ) and construct data matrices X , Y with (21)
   end if
   3. Approximate the Koopman operator: calculate A o p t with (12)
   4. Obtain the coupling function:
   if r = 1 then
      obtain g i d ( w , y ) with (17)
   else
      obtain g i d ( w , y , z ) with (22)
   end if
   5. Output: g i d ( w , y ) or g i d ( w , y , z )
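Algorithm 1 can be exercised end-to-end on a toy linear two-node network where the true coupling is known, so the recovered coefficients can be checked. A sketch (all dynamics and values below are illustrative, not the paper's examples; with a linear dictionary and known part $F_1 = I$, the coefficient on the other node's output in the identified $y_i^+$ should recover the true coupling gain):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-node network. Per node: measured x1+ = x1 + 0.1 x2, hidden
# x2+ = 0.9 x2 + 0.2 * (other node's output); output w = x1. With the known
# part F1 = I, y = w+ - w = 0.1 x2, and the true hidden-input dynamics are
# y_i+ = 0.9 y_i + 0.02 w_j (coupling gain 0.02 from the other node j).
def step(x):
    x11, x12, x21, x22 = x
    return np.array([x11 + 0.1 * x12, 0.9 * x12 + 0.2 * x21,
                     x21 + 0.1 * x22, 0.9 * x22 + 0.2 * x11])

def Psi(w, y):
    return np.concatenate(([1.0], w, y))     # linear dictionary suffices here

Xc, Yc = [], []
for _ in range(200):                          # short trajectories, random starts
    traj = [rng.standard_normal(4)]
    for _ in range(2):
        traj.append(step(traj[-1]))
    w = np.array([[s[0], s[2]] for s in traj])   # outputs of both nodes
    y = w[1:] - w[:-1]                           # y[k] = w[k+1] - w[k]
    Xc.append(Psi(w[0], y[0]))
    Yc.append(Psi(w[1], y[1]))

# Step 3 of Algorithm 1: A_opt = argmin_H ||H X - Y||_F.
A_opt = np.array(Yc).T @ np.linalg.pinv(np.array(Xc).T)

# Rows of C_y A_opt giving y+; dictionary columns are [1, w1, w2, y1, y2].
CyA = A_opt[3:5, :]
```

Here `CyA[0]` should be close to `[0, 0, 0.02, 0.9, 0]`: the entry on the $w_2$ column is the identified coupling into node 1, while the (near-zero) $w_1$ entry correctly reports no self-coupling term.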
Next, we consider the relationship between the identified coupling function g i d ( w , y , z ) and the original g ( x ) , and discuss the identification error in terms of the difference between the original and the identified dynamics of the measured output w.
Proposition 1.
If the amount of data is sufficiently large ($M \to \infty$) and the data are uniformly random, then for $i = 1, \ldots, N$ and $j = 1, \ldots, m$, the $j$th component of the identified coupling function for the $i$th node, i.e., $e_j^* C_i g^{id}(w, y, z)$, is a projection of $e_j^* (w_i^+ - F_1 w_i)^{r+}$ onto $\mathrm{span}\{\Psi^i\}$:
$$e_j^* C_i g^{id}(w, y, z) = P_{\mathrm{span}\{\Psi^i(w, y, z)\}}\big(e_j^* (w_i^+ - F_1 w_i)^{r+}\big) \tag{23}$$
with respect to dynamics (19).
Proof. 
We first show the relationship between the theoretical and the identified dynamics of $y$, and then show the validity of Equation (23).
As shown in [25,35], the solution of optimization problem (12), i.e., $A_{opt}$, minimizes $\|A\Psi(w, y, z) - K\Psi(w, y, z)\|_{L^2}$ over the manifold where the data are measured, so $A_{opt}$ is an approximation of the Koopman operator defined in (10), in the sense that for any complex-valued scalar function $\varphi(w, y, z)$ [35],
$$P_{\mathrm{span}\{\Psi\}} K (P_{\mathrm{span}\{\Psi\}} \varphi) = c_\varphi A_{opt} \Psi(w, y, z),$$
where $c_\varphi = \operatorname{argmin}_c \|\varphi - c\Psi(w, y, z)\|_{L^2}$. Since we can design $\Psi$ to contain $w$, $y$ and $z$ as observables, the evolution of the $j$th entry of $y_i$ is obtained as
$$e_j^* C_i y^{r+,id} = e_j^* C_i z_{r-1}^{+,id} = e_j^* C_i C_z A_{opt} \Psi(w, y, z) = P_{\mathrm{span}\{\Psi\}}(e_j^* C_i z_{r-1}^+) = P_{\mathrm{span}\{\Psi\}}(e_j^* C_i y^{r+}),$$
i.e., the identified evolution $e_j^* C_i y^{r+,id}$ is a projection of the true evolution $e_j^* C_i y^{r+}$ onto the space spanned by the observables in $\Psi(w, y, z)$.
On the other hand, by the construction of $g^{id}(w, y, z)$ in (22), the coupling function of node $i$ is obtained as a linear combination of the observables in $\Psi^i(w, y, z)$, i.e.,
$$e_j^* C_i g^{id}(w, y, z) = P_{\mathrm{span}\{\Psi^i(w, y, z)\}}(e_j^* C_i z_{r-1}^{+,id}) = P_{\mathrm{span}\{\Psi^i(w, y, z)\}} P_{\mathrm{span}\{\Psi\}}(e_j^* C_i y^{r+}).$$
Making use of dynamics (19a), i.e., $w^+ - F_1 w = y$, and the fact that $\mathrm{span}\{\Psi^i(w, y, z)\} \subseteq \mathrm{span}\{\Psi(w, y, z)\}$, we have
$$e_j^* C_i g^{id}(w, y, z) = P_{\mathrm{span}\{\Psi^i(w, y, z)\}}(e_j^* C_i y^{r+}) = P_{\mathrm{span}\{\Psi^i(w, y, z)\}}\big(e_j^* (w_i^+ - F_1 w_i)^{r+}\big),$$
where we substituted $C_i y = y_i = w_i^+ - F_1 w_i$. □
Proposition 1 means that the coupling function is obtained in terms of the $r$-step- and $(r+1)$-step-future values of $w$, which are estimated using the dynamics identified from measured data. On the other hand, the identification error of the proposed method depends on two factors, namely the design of the observable set $\Psi(w, y, z)$ and the amount of measured data [25,31]. To reduce the identification error, one could design the observable set to be sufficiently rich to obtain a close approximation of $h(w, y, z)$, and increase the amount of measurements.
Remark 2.
The Koopman mode decomposition theory guarantees that the evolution of any coupling function can be decomposed into a linear combination of the evolutions of an infinite number of eigenfunctions, i.e., $K\varphi = \sum_{j=1}^{\infty} c_j \lambda_j \phi_j$ for $\varphi = \sum_{j=1}^{\infty} c_j \phi_j \in \mathcal{F}$, and such a decomposition holds globally in the space of state variables. In practical situations, since we can only handle a finite number of variables, the dynamic-mode-decomposition-like numerical method we employed is only able to recover point spectra of the Koopman operator [36,37]. As a result, the proposed method gives an optimal coefficient matrix of the observable vector such that the corresponding linear combination approximates the coupling function over a certain set where the data are measured.
Remark 3.
The observable set should be designed sufficiently rich to reach high accuracy, and the computational cost blows up as the number of nodes or the complexity of the coupling function increases. Since there is no a priori knowledge about the coupling functions, we can only design the observable set to be as rich as possible, e.g., using trigonometric bases or power series, in the hope that the spanned space contains the coupling function. Increasing the number of observables also leads to higher computational costs, which reveals a trade-off between computational cost and the identification accuracy of the nonlinearity.

4. Numerical Examples

4.1. Identification without Dimension Complement

Consider a network of eight interconnected systems described by
$$x_{i1}^+ = x_{i1} + 0.1\, x_{i2},$$
$$x_{i2}^+ = x_{i2} + 0.1\,(-x_{i1} + x_{i2} - x_{i1}^2 x_{i2} + x_{i3}),$$
$$x_{i3}^+ = x_{i3} + 0.1\,(-x_{i3} + u_i), \quad u_i = \sum_{j=1}^{8} a_{ij}(x_{j2} - x_{i2}), \quad w_i = \mathrm{col}(x_{i1}, x_{i2}),$$
for $i = 1, \ldots, 8$, which are Van der Pol oscillators with damped inputs, discretized using the first-order Euler method with time steps of length 0.1. Suppose that only linearized models of the oscillators are known:
$$w_{i1}^+ = w_{i1} + 0.1\, w_{i2} + \tau_{i1}(x), \quad w_{i2}^+ = -0.1\, w_{i1} + 1.1\, w_{i2} + \tau_{i2}(x), \quad x_{i3}^+ = 0.9\, x_{i3} + \tau_{i3}(x).$$
Define $y_{i1} = \tau_{i1}(x)$, $y_{i2} = \tau_{i2}(x)$ and consider the nodes described by $(w_i, y_i)$:
$$w_{i1}^+ = w_{i1} + 0.1\, w_{i2} + y_{i1}, \quad w_{i2}^+ = -0.1\, w_{i1} + 1.1\, w_{i2} + y_{i2}, \quad y_{i1}^+ = h_{i1}(w, y), \quad y_{i2}^+ = h_{i2}(w, y),$$
where $w = \mathrm{col}_{i=1}^{8}(w_i)$ and $y = \mathrm{col}_{i=1}^{8}(y_i)$. Rewrite the dynamics of the network as
$$w^+ = Fw + y, \quad y^+ = h(w, y),$$
where $F = I_8 \otimes \begin{pmatrix} 1 & 0.1 \\ -0.1 & 1.1 \end{pmatrix}$.
Suppose that 5000 three-step trajectories of the outputs, denoted by ${}^j w, {}^j w^+, {}^j w^{2+}$, are measured from random initial points distributed uniformly in $[-2, 2]^{16}$. Then, data for $y$ and $y^+$ can be calculated as
$${}^j y_{i1} = {}^j w_{i1}^+ - {}^j w_{i1} - 0.1\, {}^j w_{i2}, \quad {}^j y_{i2} = {}^j w_{i2}^+ + 0.1\, {}^j w_{i1} - 1.1\, {}^j w_{i2},$$
$${}^j y_{i1}^+ = {}^j w_{i1}^{2+} - {}^j w_{i1}^+ - 0.1\, {}^j w_{i2}^+, \quad {}^j y_{i2}^+ = {}^j w_{i2}^{2+} + 0.1\, {}^j w_{i1}^+ - 1.1\, {}^j w_{i2}^+,$$
for $j = 1, \ldots, 5000$. Define the Koopman operator $K$ by
$$K\psi(w, y) = \psi(Fw + y, h(w, y)),$$
where $\psi : \mathbb{R}^{16} \times \mathbb{R}^{16} \to \mathbb{C}$. To obtain an approximation of $K$, define $\Psi(w, y) : \mathbb{R}^{16} \times \mathbb{R}^{16} \to \mathbb{R}^{321}$ as
$$\Psi(w, y) = \mathrm{col}(1, w, y, w_{i1} w_{i2}, w_{i1} y_{i1}, w_{i1} y_{i2}, w_{i2} y_{i1}, w_{i2} y_{i2}, y_{i1} y_{i2}, w_{i1}^2, w_{i2}^2, y_{i1}^2, y_{i2}^2, w_{i1}^2 w_{i2}, w_{i1} w_{i2}^2, w_{i1}^2 y_{i1}, w_{i1} y_{i1}^2, w_{i1}^2 y_{i2}, w_{i1} y_{i2}^2, w_{i2}^2 y_{i1}, w_{i2} y_{i1}^2, w_{i2}^2 y_{i2}, w_{i2} y_{i2}^2, y_{i1}^2 y_{i2}, y_{i1} y_{i2}^2, w_{i1} w_{i2} y_{i1}, w_{i1} w_{i2} y_{i2}, \cos w_{i1}, \cos w_{i2}, \cos y_{i1}, \cos y_{i2}, \sin w_{i1}, \sin w_{i2}, \sin y_{i1}, \sin y_{i2}, \cos w_{i1} \sin w_{i1}, \cos w_{i2} \sin w_{i2}, \cos y_{i1} \sin y_{i1}, \cos y_{i2} \sin y_{i2}),$$
where a variable with subscript $i$ abbreviates the corresponding stacked states of all the nodes, e.g., $y_{i1} w_{i1}$ abbreviates $\mathrm{col}_{i=1}^N(y_{i1} w_{i1})$. Define the data matrices $X, Y$ by
$$X = \begin{pmatrix} \Psi({}^1 w, {}^1 y) & \cdots & \Psi({}^{5000} w, {}^{5000} y) \end{pmatrix}, \quad Y = \begin{pmatrix} \Psi({}^1 w^+, {}^1 y^+) & \cdots & \Psi({}^{5000} w^+, {}^{5000} y^+) \end{pmatrix}.$$
Then, a matrix approximation of $K$ can be obtained as $A = \operatorname{argmin}_H \|HX - Y\|_F$, and $h(w, y)$ can be obtained as
$$h^{id}(w, y) = C_y A \Psi(w, y),$$
where $C_y = \begin{pmatrix} 0_{16 \times 17} & I_{16} & 0_{16 \times 288} \end{pmatrix}$ is such that $C_y \Psi(w, y) = y$. The entries of the obtained $C_y A$ are plotted in Figure 1, where the color at coordinate $(i, j)$ represents the value of the entry $[C_y A]_{ij}$.
Next, the first node is taken as an illustrative example to show the process of obtaining the coupling function. The dynamics of $y_{11}$ and $y_{12}$ are identified as
$$y_{11}^{+,id} = e_1 h^{id} = 0,$$
$$y_{12}^{+,id} = e_9 h^{id} = 0.084\psi_2 - 0.029\psi_{10} + 0.010\psi_{15} + 0.010\psi_{17} + 0.899\psi_{26} - 0.180\psi_{114} - 0.022\psi_{122} - 0.100\psi_{130} - 0.001\psi_{162} - 0.020\psi_{210} - 0.093\psi_{258} + 0.010\psi_{266} + 0.009\psi_{290} - 0.001\psi_{298}$$
$$= 0.084 w_{11} - 0.029 w_{12} + \underline{0.010 w_{62} + 0.010 w_{82}} + 0.899 y_{12} - 0.180 w_{11}^2 w_{21} - 0.022 w_{11} w_{21}^2 - 0.100 w_{11}^2 y_{12} - 0.001 w_{12}^2 y_{12} - 0.020 w_{11} w_{12} y_{11} - 0.093 \sin w_{11} + 0.010 \sin w_{12} + 0.009 \cos w_{11} \sin w_{11} - 0.001 \cos w_{12} \sin w_{12},$$
where coefficients with magnitude smaller than $5 \times 10^{-4}$ are omitted. Here, only the terms $0.010\psi_{15} + 0.010\psi_{17}$ contain states sent from other nodes, and they are hence considered the coupling function. As a result, the dynamics of the first node are identified as
$$w_{11}^+ = w_{11} + 0.1 w_{12} + y_{11}, \quad w_{12}^+ = -0.1 w_{11} + 1.1 w_{12} + y_{12}, \quad y_{11}^+ = 0, \quad y_{12}^+ = \sum_{i \in N_1} [C_y A]_{(9,i)} \psi_i + 0.01 w_{62} + 0.01 w_{82},$$
where $N_1 = \{2, 10, 26, 114, 122, 130, 162, 210, 258, 266, 290, 298\}$. By performing the same procedure for all nodes, the topology of the network is identified as shown in Figure 2. A comparison between a system reconstructed from the identified result and the original system is shown in Figure 3. As the figure shows, starting from the same initial position, the reconstructed system tracks the original for the first 50 steps and then deviates due to identification errors.
Theoretically, complex-valued observables are required to ensure that the space spanned by the observables is invariant under the action of Koopman operators. However, for the purpose of coupling function identification, instead of the full spectral properties, we are only interested in $K\,\mathrm{col}(w, y, z)$, i.e., the evolution of the states as observables. As shown in Proposition 1 and in Section 4 of [25], what the proposed method obtains is a projection of the coupling function onto the span of the observables, so we only have to make sure that the coupling function is contained in the span of the observables to reach high accuracy. As a result, in this section, real-valued observables are employed.

4.2. Identification with Dimension Complement

In this subsection, consider the case where the dimension of the output is so low that the dynamics of the nodes cannot be embedded into the space spanned by w and y, i.e., the case where r > 1 . Consider a network of 12 nodes described by
x i , 1 + = 0.9 x i , 1 + x i , 2 + x i , 3 ,
x i , 2 + = 0.9 x i , 2 + x i , 3 + u i ,
x i , 3 + = 0.01 x i , 3 + 1.2 ( x i , 1 + x i , 2 + x i , 3 ) ( x i , 1 + x i , 2 + x i , 3 1 ) , u i = 0.1 j = 1 12 a i j ( x i , 2 x j , 2 ) , w i = x i , 1 ,
for i = 1, …, 12. The trajectory of node i described by (27) with u_i = 0 is shown in Figure 4.
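As a quick numerical sanity check, a single uncoupled node (u_i = 0) of the model above can be rolled out. This is only a sketch: the sign of the x_{i,2} update and the initial condition are assumptions recovered from the true-state relations given later in this subsection, not values taken verbatim from the measured data.

```python
import numpy as np

def node_step(x, u=0.0):
    """One step of a single node of model (27).

    Note: the -0.9 coefficient on x2 is reconstructed from the
    true-state relations later in the section; treat it as an assumption.
    """
    x1, x2, x3 = x
    s = x1 + x2 + x3
    return np.array([
        0.9 * x1 + x2 + x3,
        -0.9 * x2 + x3 + u,
        0.01 * x3 + 1.2 * s * (s - 1.0),
    ])

# roll out a short uncoupled trajectory (u_i = 0), cf. Figure 4;
# the initial condition here is arbitrary
x = np.array([0.1, 0.0, 0.0])
traj = [x]
for _ in range(200):
    x = node_step(x)
    traj.append(x)
traj = np.asarray(traj)  # shape (201, 3)
```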
Suppose that only an inaccurate linearized model in the space spanned by the output is known, i.e., w_i^+ = 1.0 w_i + τ_i(x), where 1.0 w_i is the inaccurate known part and τ_i(x) denotes the unknown dynamics. Reformulate the dynamics of the nodes into the following form
w_i^+ = w_i + y_i,  y_i^+ = z_i,  z_i^+ = h_i(w, y, z),
and rewrite the dynamics of the network into
w^+ = w + y,
y^+ = z,
z^+ = h(w, y, z),   (28)
where w = col_{i=1}^{12}(w_i) ∈ R^{12}, y = col_{i=1}^{12}(y_i) ∈ R^{12} and z = col_{i=1}^{12}(z_i) ∈ R^{12}. Let ψ(w, y, z): R^{36} → C denote an observable and define the Koopman operator K by
K ψ(w, y, z) = ψ(w + y, z, h(w, y, z)).
Next, consider the problem of obtaining an approximation of K. Define a set of observable functions Ψ(w, y, z): R^{36} → R^{109} by
Ψ(w, y, z) = col(1, w, y, z, col_{i=1}^{12}(w_i y_i), col_{i=1}^{12}(w_i z_i), col_{i=1}^{12}(y_i z_i), col_{i=1}^{12}(w_i^2), col_{i=1}^{12}(y_i^2), col_{i=1}^{12}(z_i^2)),
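The dictionary Ψ is straightforward to implement. The following sketch (the helper name `dictionary` is ours, not from the paper) stacks the 1 + 3·12 + 6·12 = 109 observables in the order stated above:

```python
import numpy as np

def dictionary(w, y, z):
    """Stack the 109 observables of Psi(w, y, z) for 12-dimensional w, y, z.

    Ordering follows the definition above: the constant 1, then w, y, z,
    the per-node products w_i*y_i, w_i*z_i, y_i*z_i, and the squares.
    """
    return np.concatenate((
        [1.0], w, y, z,
        w * y, w * z, y * z,
        w ** 2, y ** 2, z ** 2,
    ))

psi = dictionary(np.ones(12), np.zeros(12), np.ones(12))  # shape (109,)
```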
and suppose that a 203-step-long trajectory of the outputs of the coupled nodes in the network is measured, denoted by w[k] ∈ R^{12} for k = 1, …, 203. Then, data series of y and z are obtained as
y[k] = w[k+1] - w[k],  k = 1, …, 202,   z[k] = y[k+1],  k = 1, …, 201,
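The data series above amount to simple differencing of the measured outputs. In the sketch below the trajectory is a random placeholder, since the actual measurements are not reproduced here:

```python
import numpy as np

# placeholder for the measured output trajectory w[1..203]
# (the real data are not reproduced here); shape (203, 12)
rng = np.random.default_rng(0)
w_data = rng.standard_normal((203, 12))

# y[k] = w[k+1] - w[k] and z[k] = y[k+1], as defined above
y_data = np.diff(w_data, axis=0)  # shape (202, 12)
z_data = y_data[1:]               # shape (201, 12)
# col(w[k], y[k], z[k]) is then available for k = 1, ..., 201
```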
and col(w[k], y[k], z[k]) for k = 1, …, 201 is a 201-step-long trajectory of system (28). Define data matrices X, Y ∈ R^{109×200} by
X = [ Ψ(w[1], y[1], z[1])  ⋯  Ψ(w[200], y[200], z[200]) ],
Y = [ Ψ(w[2], y[2], z[2])  ⋯  Ψ(w[201], y[201], z[201]) ],
and a matrix approximation A of the Koopman operator K can be obtained as
A = argmin_H ‖HX - Y‖_F.
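The least-squares problem argmin_H ‖HX − Y‖_F has the closed-form solution A = Y X⁺, with X⁺ the Moore–Penrose pseudoinverse. A minimal sketch, checked on a small synthetic linear map rather than the paper's data:

```python
import numpy as np

def edmd_matrix(X, Y):
    """A = argmin_H ||H X - Y||_F via the pseudoinverse: A = Y X^+."""
    return Y @ np.linalg.pinv(X)

# sanity check on a synthetic linear map: data generated by Y = A0 X
# should return A ~= A0 when X has full row rank
A0 = np.array([[0.9, 0.1], [-0.1, 0.8]])
X = np.random.default_rng(1).standard_normal((2, 50))
Y = A0 @ X
A = edmd_matrix(X, Y)
```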
As the result of the identification, the obtained CA matrix, where C is such that CΨ(w, y, z) = col(w, y, z), is shown in Figure 5; it corresponds to the identified dynamics of w, y and z. The function h(w, y, z) is identified as
h^{id}(w, y, z) = C_z A Ψ(w, y, z),
where C_z ∈ R^{12×109} is such that z = C_z Ψ(w, y, z), i.e., the sub-matrix consisting of the 25th to 36th rows of CA. As an illustrative example, the 26th row of CA, i.e., the second row of C_z A, which corresponds to the unknown dynamics of the second node h_2(w, y, z), reads
e_{26} C A Ψ(w, y, z) = -3.8742 ψ_3 - 0.1419 ψ_5 - 3.6510 ψ_{15} - 0.1290 ψ_{17} - 2.0900 ψ_{27} - 0.1000 ψ_{29} + 7.6560 ψ_{39} + 0.2640 ψ_{41} + 2.6400 ψ_{51} + 2.6400 ψ_{63} + 4.0656 ψ_{75} + 0.1452 ψ_{77} + 3.6120 ψ_{87} + 0.1200 ψ_{89} + 1.2000 ψ_{99}
= -3.8742 w_2 - 0.1419 w_4 - 3.6510 y_2 - 0.1290 y_4 - 2.0900 z_2 - 0.1000 z_4 + 7.6560 w_2 y_2 + 0.2640 w_4 y_4 + 2.6400 w_2 z_2 + 2.6400 y_2 z_2 + 4.0656 w_2^2 + 0.1452 w_4^2 + 3.6120 y_2^2 + 0.1200 y_4^2 + 1.2000 z_2^2,
where entries smaller than O(10^{-4}) are omitted. It can be seen that node 2 only receives a transmission from node 4, which is explicitly identified as the node-4-dependent terms in e_{26} C A Ψ(w, y, z), i.e.,
g_2^{id} = -0.1419 w_4 - 0.1290 y_4 - 0.1000 z_4 + 0.2640 w_4 y_4 + 0.1452 w_4^2 + 0.1200 y_4^2.
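Extracting the topology from C_z A amounts to scanning each row i for non-negligible coefficients on observables that depend on another node j, exactly as done for node 2 above. The sketch below assumes the column layout of the dictionary Ψ defined earlier; the helper name and the tolerance value are ours:

```python
import numpy as np

def identify_topology(CzA, n=12, tol=1e-4):
    """Infer a directed adjacency matrix from the identified C_z A (n x 109).

    An edge j -> i is declared when row i of C_z A carries a coefficient
    above `tol` on an observable involving node j (the columns of w_j, y_j,
    z_j, their pairwise products, and their squares). The column layout
    assumed here matches the dictionary Psi defined above.
    """
    # start columns of the nine single-node blocks (after the constant):
    # w, y, z, w*y, w*z, y*z, w^2, y^2, z^2 -- each block has width n
    blocks = [1 + k * n for k in range(9)]
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            cols = [b + j for b in blocks]
            if np.any(np.abs(CzA[i, cols]) > tol):
                adj[i, j] = 1  # node i receives from node j
    return adj

# hypothetical example: node 2 (row index 1) receives only from node 4
CzA = np.zeros((12, 109))
CzA[1, 1 + 3] = -0.1419  # coefficient of w_4 in the second row
adj = identify_topology(CzA)
```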
Figure 6 shows comparisons of trajectories of system (28) with the identified function h^{id}(w, y, z) and of the original system starting from the same initial values. Specifically, the new states w, y, z and the corresponding true values w^t, y^t, z^t, calculated by
w_i^t = x_{i,1},
y_i^t = x_{i,2} + x_{i,3} - 0.1 x_{i,1},
z_i^t = -0.09 x_{i,1} - 1.0 x_{i,2} + 0.91 x_{i,3} + 1.2 (x_{i,1} + x_{i,2} + x_{i,3})(x_{i,1} + x_{i,2} + x_{i,3} - 1.0) + u_i(x),
respectively, are compared. As the figures show, the errors between the trajectories stay small for around 250 steps and then grow due to small identification errors. The identified network topology is shown in Figure 7, and the coupling function is obtained from h^{id} using (22). This example also indicates that inaccuracy of the a priori linear model in the space spanned by the outputs does not invalidate the proposed identification method.

5. Conclusions

In this paper, we considered the identification problem of network structures using measured output data. We modeled the structures of networks as coupling functions that describe the data transmission in the networks and proposed methods to identify these coupling functions from measured data. The identification is performed in three steps, and the coupling functions are extracted using the outputs and newly defined variables. The proposed method makes it possible to identify possibly nonlinear coupling functions from measured output data of networks whose nodes have unknown dynamics. Numerical identification examples showed the usefulness and validity of the obtained results.

Author Contributions

Conceptualization, Z.M. and T.O.; Methodology, Z.M.; Formal analysis, Z.M.; Writing—original draft, Z.M.; Writing—review & editing, Z.M. and T.O.; Supervision, T.O.; Funding acquisition, T.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by JSPS KAKENHI 20K04538.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cavraro, G.; Kekatos, V. Graph Algorithms for Topology Identification Using Power Grid Probing. IEEE Control Syst. Lett. 2018, 2, 689–694. [Google Scholar] [CrossRef] [Green Version]
  2. Qu, F.; Tian, E.; Zhao, X. Chance-Constrained H∞ State Estimation for Recursive Neural Networks Under Deception Attacks and Energy Constraints: The Finite-Horizon Case. IEEE Trans. Neural Networks Learn. Syst. 2022, 1–12. [Google Scholar] [CrossRef] [PubMed]
  3. Zha, L.; Liao, R.; Liu, J.; Xie, X.; Tian, E.; Cao, J. Dynamic Event-Triggered Output Feedback Control for Networked Systems Subject to Multiple Cyber Attacks. IEEE Trans. Cybern. 2022, 52, 13800–13808. [Google Scholar] [CrossRef] [PubMed]
  4. Li, Y.; Song, F.; Liu, J.; Xie, X.; Tian, E. Decentralized event-triggered synchronization control for complex networks with nonperiodic DoS attacks. Int. J. Robust Nonlinear Control 2022, 32, 1633–1653. [Google Scholar] [CrossRef]
  5. Marti, G.; Nielsen, F.; Bińkowski, M.; Donnat, P. A review of two decades of correlations, hierarchies, networks and clustering in financial markets. In Progress in Information Geometry: Theory and Applications; Nielsen, F., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 245–274. [Google Scholar] [CrossRef]
  6. Van den Heuvel, M.P.; Hulshoff Pol, H.E. Exploring the brain network: A review on resting-state fMRI functional connectivity. Eur. Neuropsychopharmacol. 2010, 20, 519–534. [Google Scholar] [CrossRef] [PubMed]
  7. Roebroeck, A.; Formisano, E.; Goebel, R. The identification of interacting networks in the brain using fMRI: Model selection, causality and deconvolution. NeuroImage 2011, 58, 296–302. [Google Scholar] [CrossRef] [PubMed]
  8. Shelke, S.; Attar, V. Source detection of rumor in social network—A review. Online Soc. Netw. Media 2019, 9, 30–42. [Google Scholar] [CrossRef]
  9. Ren, J.; Wang, W.X.; Li, B.; Lai, Y.C. Noise Bridges Dynamical Correlation and Topology in Coupled Oscillator Networks. Phys. Rev. Lett. 2010, 104, 058701. [Google Scholar] [CrossRef] [Green Version]
  10. Shi, R.; Jiang, W.; Wang, S. Detecting network structures from measurable data produced by dynamics with hidden variables. Chaos Interdiscip. J. Nonlinear Sci. 2020, 30, 013138. [Google Scholar] [CrossRef]
  11. Zhang, C.; Chen, Y.; Hu, G. Network reconstructions with partially available data. Front. Phys. 2017, 12, 128906. [Google Scholar] [CrossRef]
  12. Lai, P.Y. Reconstructing network topology and coupling strengths in directed networks of discrete-time dynamics. Phys. Rev. E 2017, 95, 022311. [Google Scholar] [CrossRef] [PubMed]
  13. Chen, Y.; Zhang, Z.; Chen, T.; Wang, S.; Hu, G. Reconstruction of noise-driven nonlinear networks from node outputs by using high-order correlations. Sci. Rep. 2017, 7, 44639. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Zhang, Z.; Zheng, Z.; Niu, H.; Mi, Y.; Wu, S.; Hu, G. Solving the inverse problem of noise-driven dynamic networks. Phys. Rev. E 2015, 91, 012814. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Levnajic, Z.; Pikovsky, A. Network Reconstruction from Random Phase Resetting. Phys. Rev. Lett. 2011, 107, 034101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Su, R.Q.; Wang, W.X.; Lai, Y.C. Detecting hidden nodes in complex networks from time series. Phys. Rev. E 2012, 85, 065201. [Google Scholar] [CrossRef] [Green Version]
  17. Ching, E.S.C.; Tam, P.H. Effects of hidden nodes on the reconstruction of bidirectional networks. Phys. Rev. E 2018, 98, 062318. [Google Scholar] [CrossRef] [Green Version]
  18. Smelyanskiy, V.N.; Luchinsky, D.G.; Stefanovska, A.; McClintock, P.V.E. Inference of a Nonlinear Stochastic Model of the Cardiorespiratory Interaction. Phys. Rev. Lett. 2005, 94, 098101. [Google Scholar] [CrossRef] [Green Version]
  19. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  20. Foucart, S.; Rauhut, H. A Mathematical Introduction to Compressive Sensing; Birkhäuser Basel: Basel, Switzerland, 2013. [Google Scholar] [CrossRef]
  21. Mei, G.; Wu, X.; Wang, Y.; Hu, M.; Lu, J.A.; Chen, G. Compressive-Sensing-Based Structure Identification for Multilayer Networks. IEEE Trans. Cybern. 2018, 48, 754–764. [Google Scholar] [CrossRef]
  22. Sanandaji, B.M.; Vincent, T.L.; Wakin, M.B. Exact Topology Identification of Large-Scale Interconnected Dynamical Systems from Compressive Observations. In Proceedings of the 2011 American Control Conference, San Francisco, CA, USA, 29 June–1 July 2011; pp. 649–656. [Google Scholar] [CrossRef]
  23. Li, G.; Wu, X.; Liu, J.; Lu, J.A.; Guo, C. Recovering network topologies via Taylor expansion and compressive sensing. Chaos Interdiscip. J. Nonlinear Sci. 2015, 25, 043102. [Google Scholar] [CrossRef]
  24. Shen, Y.; Baingana, B.; Giannakis, G.B. Kernel-Based Structural Equation Models for Topology Identification of Directed Networks. IEEE Trans. Signal Process. 2017, 65, 2503–2516. [Google Scholar] [CrossRef]
  25. Mei, Z.; Oguchi, T. Network Structure Identification via Koopman Analysis and Sparse Identification. Nonlinear Theory Its Appl. 2022, 13, 477–492. [Google Scholar] [CrossRef]
  26. Yu, D.; Righero, M.; Kocarev, L. Estimating Topology of Networks. Phys. Rev. Lett. 2006, 97, 188701. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Chen, L.; Lu, J.; Tse, C.K. Synchronization: An Obstacle to Identification of Network Topology. IEEE Trans. Circuits Syst. II Express Briefs 2009, 56, 310–314. [Google Scholar] [CrossRef]
  28. Chiuso, A.; Pillonetto, G. A Bayesian approach to sparse dynamic network identification. Automatica 2012, 48, 1553–1565. [Google Scholar] [CrossRef] [Green Version]
  29. Shi, R.; Deng, C.; Wang, S. Detecting directed interactions of networks by random variable resetting. EPL 2018, 124, 18002. [Google Scholar] [CrossRef] [Green Version]
  30. Su, R.Q.; Wang, W.X.; Wang, X.; Lai, Y.C. Data-based reconstruction of complex geospatial networks, nodal positioning and detection of hidden nodes. R. Soc. Open Sci. 2016, 3, 150577. [Google Scholar] [CrossRef]
  31. Mei, Z.; Oguchi, T. A real-time identification method of network structure in complex network systems. Int. J. Syst. Sci. 2022, 1–16. [Google Scholar] [CrossRef]
  32. Takens, F. Detecting strange attractors in turbulence. In Proceedings of the Dynamical Systems and Turbulence, Warwick 1980; Rand, D., Young, L.S., Eds.; Springer: Berlin/Heidelberg, Germany, 1981; pp. 366–381. [Google Scholar] [CrossRef]
  33. Koopman, B.O. Hamiltonian systems and transformation in Hilbert space. Proc. Natl. Acad. Sci. USA 1931, 17, 315–318. [Google Scholar] [CrossRef] [Green Version]
  34. Budisic, M.; Mohr, R.; Mezic, I. Applied Koopmanism. Chaos Interdiscip. J. Nonlinear Sci. 2012, 22, 047510. [Google Scholar] [CrossRef] [Green Version]
  35. Korda, M.; Mezić, I. On convergence of extended dynamic mode decomposition to the Koopman operator. J. Nonlinear Sci. 2018, 28, 687–710. [Google Scholar] [CrossRef] [Green Version]
  36. Schmid, P.J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 2010, 656, 5–28. [Google Scholar] [CrossRef] [Green Version]
  37. Williams, M.O.; Kevrekidis, I.G.; Rowley, C.W. A Data–Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition. J. Nonlinear Sci. 2015, 25, 1307–1346. [Google Scholar] [CrossRef]
Figure 1. Entries of the obtained matrix C_y A ∈ R^{16×321}, where the value of entry [C_y A]_{ij} is represented by the color at coordinate (i, j). The network topology can be inferred from the non-diagonal entries in the 8×8 sub-block of rows 9 to 16 and columns 10 to 17. Note that the first observable is defined as ψ_1 = 1, so the coefficients of the observable ψ_2 = w_{11} appear in the second column.
Figure 2. The identified network topology.
Figure 3. Comparison of the first components of the 12 nodes in the original network and the reconstructed network starting from the same initial positions.
Figure 4. Trajectory of system i described by (27) with u i = 0 .
Figure 5. The obtained C A R 36 × 109 shown in color.
Figure 6. Comparison between the trajectories of the reconstructed system using the identification results and the original system. Note that the time interval of k [ 200 , 300 ] is shown.
Figure 7. The identified network topology.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
