Article

Identification of Linear Time-Invariant Systems with Dynamic Mode Decomposition

by Jan Heiland 1,†,‡ and Benjamin Unger 2,*,‡
1 Max Planck Institute for Dynamics of Complex Technical Systems, 39106 Magdeburg, Germany
2 Stuttgart Center for Simulation Science, University of Stuttgart, 70563 Stuttgart, Germany
* Author to whom correspondence should be addressed.
† Current address: Faculty of Mathematics, Otto von Guericke University Magdeburg, 39106 Magdeburg, Germany.
‡ These authors contributed equally to this work.
Mathematics 2022, 10(3), 418; https://doi.org/10.3390/math10030418
Submission received: 14 September 2021 / Revised: 14 December 2021 / Accepted: 24 December 2021 / Published: 28 January 2022

Abstract: Dynamic mode decomposition (DMD) is a popular data-driven framework to extract linear dynamics from complex high-dimensional systems. In this work, we study the system identification properties of DMD. We first show that DMD is invariant under linear transformations in the image of the data matrix. If, in addition, the data are constructed from a linear time-invariant system, then we prove that DMD can recover the original dynamics under mild conditions. If the linear dynamics are discretized with a Runge–Kutta method, then we further quantify the error of the DMD approximation and show that for one-stage Runge–Kutta methods even the continuous dynamics can be recovered with DMD. A numerical example illustrates the theoretical findings.

1. Introduction

Dynamical systems play a fundamental role in many modern modeling approaches of physical and chemical phenomena. The need for high fidelity models often results in large-scale dynamical systems, which are computationally demanding to solve, analyze, and optimize. Thus the last three decades have seen significant efforts to replace the so-called full-order model, which is considered the truth model, with a computationally cheaper surrogate model. In the context of model order reduction, we refer the interested reader to the monographs [1,2,3,4,5]. Often, the surrogate model is constructed by projecting the dynamical system onto a low-dimensional manifold, thus requiring a state-space description of the differential equation.
If a mathematical model is not available or not suited for modification, data-driven methods, such as the Loewner framework [6,7], vector fitting [8,9,10], operator inference [11], or dynamic mode decomposition (DMD) [12] may be used to create a low-dimensional realization directly from the measurement or simulation data of the system. Suppose the dynamical system that creates the data is linear. In that case, the Loewner framework and vector fitting are—under some technical assumptions—able to recover the original dynamical system and hence serve as system identification tools. Despite the popularity of DMD, a similar analysis seems to be missing, and this paper aims to close this gap.
Since DMD creates a discrete, linear time-invariant dynamical system from data, we are interested in answering the following questions:
  • What is the impact of transformations of the data on the resulting DMD approximation?
  • Assume that the data used to generate the DMD approximation are obtained from a linear differential equation. Can we estimate the error between the continuous dynamics and the DMD approximation?
  • Are there situations in which we are even able to recover the original dynamical system from its DMD approximation?
It is essential to know how the data for the construction of the DMD model are generated to answer these questions. Assuming exact measurements of the solution may be valid from a theoretical perspective only. Instead, we take the view of a numerical analyst and assume that the data are obtained via time integration of the dynamics with a general Runge–Kutta method (RKM) with known order of convergence. We emphasize that for linear time-invariant systems, a RKM may not be the method of choice; see, for instance, [13]. Nevertheless, RKMs are a common numerical technique to solve general differential equations, which is our main reason to consider RKMs in the following.
We summarize the questions graphically in Figure 1. There, the dashed lines represent the questions that we aim to answer in this paper.
Our main results are the following:
  • We show in Theorem 1 that, on the image of the data matrix, DMD is invariant under linear transformations of the data.
  • Theorem 2 details that DMD is able to identify discrete-time dynamics, i.e., for every initial value in the image of the data, the DMD approximation exactly recovers the discrete-time dynamics.
  • In Theorem 3, we show that if the DMD approximation is constructed from data that are obtained via a RKM, then the approximation error of DMD with respect to the ordinary differential equation is of the order of the error of the RKM. If a one-stage RKM is used and the data are sufficiently rich, then the continuous-time dynamics, i.e., the matrix F in Figure 1, can be recovered; cf. Lemma 1.
To render the manuscript self-contained, we recall important definitions and results for RKM and DMD in the upcoming Section 2.1 and Section 2.2, respectively, before we present our analysis in Section 3. We conclude with a numerical example to confirm the theoretical findings.

Notation

As is standard, $\mathbb{N}$, $\mathbb{R}$, and $\mathbb{R}[t]$ denote the positive integers, the real numbers, and the polynomials with real coefficients, respectively. For any $n, m \in \mathbb{N}$, we denote with $\mathbb{R}^{n \times m}$ the set of $n \times m$ matrices with real entries. The set of nonsingular matrices of size $n \times n$ is denoted with $\operatorname{GL}_n(\mathbb{R})$. Let $A = [a_{ij}] \in \mathbb{R}^{n \times m}$, $B \in \mathbb{R}^{p \times q}$, and $x_i \in \mathbb{R}^n$ ($i = 1, \dots, k$). The transpose and the Moore–Penrose pseudoinverse of $A$ are denoted with $A^T$ and $A^\dagger$, respectively. The Kronecker product $\otimes$ is defined as

$$A \otimes B = \begin{bmatrix} a_{11}B & \cdots & a_{1m}B \\ \vdots & & \vdots \\ a_{n1}B & \cdots & a_{nm}B \end{bmatrix} \in \mathbb{R}^{np \times mq}.$$

We use $\operatorname{span}\{x_1, \dots, x_k\}$ to denote the linear span of the vectors $x_1, \dots, x_k$ and also casually write $\operatorname{span}\{X\} = \operatorname{span}\{x_1, \dots, x_k\}$ for the column space of the matrix $X$ with $x_1, \dots, x_k$ as its columns. For $A \in \mathbb{R}^{n \times n}$ and a vector $x_0 \in \mathbb{R}^n$, we denote the reachable space as $\mathcal{C}(x_0, A) = \operatorname{span}\{x_0, A x_0, \dots, A^{n-1} x_0\}$. The Stiefel manifold of $n \times r$ dimensional matrices with real entries is denoted by

$$\operatorname{St}(n, r) = \left\{ U \in \mathbb{R}^{n \times r} \mid U^T U = I_r \right\}, \tag{1}$$

where $I_r$ denotes the $r \times r$ identity matrix. For a continuously differentiable function $x \colon I \to \mathbb{R}^n$ from the interval $I \subseteq \mathbb{R}$ to the vector space $\mathbb{R}^n$, we use the notation $\dot{x} := \tfrac{\mathrm{d}}{\mathrm{d}t} x$ to denote the derivative with respect to the independent variable $t$, which we refer to as the time.

2. Preliminaries

As outlined in the introduction, DMD creates a finite-dimensional linear model to approximate the original dynamics. Thus, in view of possibly exact system identification, we need to assume that the data that are fed to the DMD algorithm are obtained from a linear ODE, which in the sequel is denoted by
$$\dot{x}(t) = F x(t) \tag{2a}$$

with given matrix $F \in \mathbb{R}^{n \times n}$. To fix a solution of (2a), we prescribe the initial condition

$$x(0) = x_0 \in \mathbb{R}^n, \tag{2b}$$

and denote the solution of the initial value problem (IVP) as $x(t; x_0) := \exp(F t)\, x_0$. For the analysis of DMD, we assume that the matrix $F$ is not available. Instead, the question is to what extent DMD is able to recover the matrix $F$ solely from measurements of the state variable $x$.
Remark 1.
While a DMD approximation, despite its linearity, may well reproduce trajectories of nonlinear systems (see, for example, [14]), the question of DMD being able to recover the full dynamics has to focus on linear systems. Here, the key observation is that a DMD approximation is a finite-dimensional linear map. In contrast, the encoding of nonlinear systems via a linear operator necessarily needs an infinite-dimensional mapping.

2.1. Runge–Kutta Methods

To solve the IVP (2) numerically, we employ a RKM, which is a common one-step method to approximate ordinary and differential-algebraic equations [15,16]. More precisely, given a step size $h > 0$, the solution of the IVP (2) is approximated via the sequence $x_i \approx x(t_0 + i h)$ given by

$$x_{i+1} = x_i + h \sum_{j=1}^{s} \beta_j k_j, \tag{3a}$$

with the so-called internal stages $k_j \in \mathbb{R}^n$ (implicitly) defined via

$$k_j = F x_i + h \sum_{\ell=1}^{s} \alpha_{j,\ell}\, F k_\ell \quad \text{for } j = 1, \dots, s, \tag{3b}$$
where $s \in \mathbb{N}$ denotes the number of stages in the RKM. Using the matrix notation $A = [\alpha_{j,\ell}] \in \mathbb{R}^{s \times s}$ and $\beta = [\beta_j] \in \mathbb{R}^s$, the $s$-stage RKM defined via (3) is conveniently summarized with the pair $(A, \beta)$. Note that we restrict our presentation to linear time-invariant dynamics, and hence, do not require the full Butcher tableau.
Since the ODE (2a) is linear, we can rewrite the internal stages as

$$\begin{bmatrix} I_n - h\alpha_{1,1}F & -h\alpha_{1,2}F & \cdots & -h\alpha_{1,s}F \\ -h\alpha_{2,1}F & I_n - h\alpha_{2,2}F & \cdots & -h\alpha_{2,s}F \\ \vdots & & \ddots & \vdots \\ -h\alpha_{s,1}F & \cdots & -h\alpha_{s,s-1}F & I_n - h\alpha_{s,s}F \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \\ \vdots \\ k_s \end{bmatrix} = \begin{bmatrix} F x_i \\ F x_i \\ \vdots \\ F x_i \end{bmatrix}. \tag{4}$$

Setting $k := \begin{bmatrix} k_1^T & \cdots & k_s^T \end{bmatrix}^T \in \mathbb{R}^{sn}$ and $e := \begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}^T \in \mathbb{R}^s$, the linear system in (4) can be written as

$$\left( I_s \otimes I_n - h A \otimes F \right) k = (e \otimes F)\, x_i, \tag{5}$$
where $\otimes$ denotes the Kronecker product. If $h$ is small enough, the matrix $I_s \otimes I_n - h A \otimes F$ is invertible, and thus, we obtain the discrete linear system

$$x_{i+1} = x_i + h \sum_{j=1}^{s} \beta_j k_j = x_i + h (\beta^T \otimes I_n)\, k = x_i + h (\beta^T \otimes I_n) \left( I_s \otimes I_n - h A \otimes F \right)^{-1} (e \otimes F)\, x_i = A_h x_i,$$

with (using the identity $I_s \otimes I_n = I_{sn}$)

$$A_h := I_n + h (\beta^T \otimes I_n) \left( I_{sn} - h A \otimes F \right)^{-1} (e \otimes F). \tag{6}$$
Example 1.
The explicit (or forward) Euler method is given as $(A, \beta) = (0, 1)$, and according to (6) we obtain the well-known formula $A_h = I_n + h F$. For the implicit (or backward) Euler method $(A, \beta) = (1, 1)$, the discrete system matrix is given by

$$A_h = I_n + h (I_n - h F)^{-1} F = (I_n - h F)^{-1} \left( I_n - h F + h F \right) = (I_n - h F)^{-1}.$$
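The assembly of $A_h$ from (6) is a direct Kronecker-product computation. The following sketch (assuming NumPy is available; the matrix F below is an arbitrary illustration, not taken from the paper) builds $A_h$ for a general pair $(A, \beta)$ and checks it against the two closed forms of Example 1:

```python
import numpy as np

def rk_discrete_matrix(A, beta, F, h):
    """Assemble A_h = I_n + h (beta^T x I_n)(I_sn - h A x F)^{-1}(e x F), cf. (6)."""
    s, n = beta.size, F.shape[0]
    e = np.ones((s, 1))
    # K = (I_sn - h A x F)^{-1} (e x F), solved instead of explicitly inverted
    K = np.linalg.solve(np.eye(s * n) - h * np.kron(A, F), np.kron(e, F))
    return np.eye(n) + h * np.kron(beta.reshape(1, -1), np.eye(n)) @ K

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative system matrix
h = 0.05

# explicit Euler (A, beta) = (0, 1): closed form A_h = I + h F
Ah_explicit = rk_discrete_matrix(np.zeros((1, 1)), np.ones(1), F, h)
assert np.allclose(Ah_explicit, np.eye(2) + h * F)

# implicit Euler (A, beta) = (1, 1): closed form A_h = (I - h F)^{-1}
Ah_implicit = rk_discrete_matrix(np.ones((1, 1)), np.ones(1), F, h)
assert np.allclose(Ah_implicit, np.linalg.inv(np.eye(2) - h * F))
```

The same routine handles any number of stages $s$, since the Kronecker structure of (5) is independent of the particular Butcher pair.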
To guarantee that the representation (6) is valid, we make the following assumption throughout the manuscript.
Assumption A1.
For any $s$-stage RKM $(A, \beta)$ and any dynamical system matrix $F \in \mathbb{R}^{n \times n}$, we assume that the step size $h$ is chosen such that the matrix $I_{sn} - h A \otimes F$ is nonsingular.
Remark 2.
Using Assumption A1, the matrix $I_{sn} - h A \otimes F$ is nonsingular, and thus, there exists a polynomial $p = \sum_{k=0}^{sn-1} p_k t^k \in \mathbb{R}[t]$ of degree at most $sn - 1$, depending on the step size $h$, such that

$$\left( I_{sn} - h A \otimes F \right)^{-1} = p\!\left( I_{sn} - h A \otimes F \right) = \sum_{k=0}^{sn-1} p_k \left( I_{sn} - h A \otimes F \right)^k = \sum_{k=0}^{sn-1} p_k \sum_{\rho=0}^{k} \binom{k}{\rho} (-1)^\rho h^\rho \left( A^\rho \otimes F^\rho \right),$$

where the last equality follows from the binomial theorem. Consequently, we have

$$A_h = I_n + \sum_{k=0}^{sn-1} p_k \sum_{\rho=0}^{k} \binom{k}{\rho} (-1)^\rho h^{\rho+1} \left( \beta^T A^\rho e \right) F^{\rho+1}. \tag{7}$$
Rearranging the terms together with the Cayley–Hamilton theorem implies the existence of a polynomial $\tilde{p} \in \mathbb{R}[t]$ of degree at most $n$ such that $A_h = \tilde{p}(F)$. As a direct consequence, we see that any eigenvector of $F$ is an eigenvector of $A_h$, and thus, $A_h$ is diagonalizable if $F$ is diagonalizable.
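The consequence of Remark 2 is easy to observe numerically: every eigenvector of F stays an eigenvector of $A_h$, whatever RKM is used. A minimal sketch with the explicit Euler method (F below is an illustrative example, not from the paper):

```python
import numpy as np

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative; eigenvalues -1 and -2
h = 0.1
Ah = np.eye(2) + h * F                     # explicit Euler: A_h = p~(F) with p~(t) = 1 + h t

lam, V = np.linalg.eig(F)
for j in range(2):
    v = V[:, j]
    # every eigenvector of F is an eigenvector of A_h, with eigenvalue p~(lambda_j)
    assert np.allclose(Ah @ v, (1.0 + h * lam[j]) * v)
```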
Having computed the matrix $A_h$, the question that remains to be answered is the quality of the approximation $x(ih; x_0) \approx x_i$, which yields the following well-known definition (cf. [15]).
Definition 1.
A RKM $(A, \beta)$ has order $p$ if there exists a constant $C \ge 0$ (independent of $h$) such that

$$\| x(h; x_0) - x_1 \| \le C h^{p+1} \tag{8}$$

holds, where $x_1 = A_h x_0$ with $A_h$ defined as in (6).
For one-step methods, it is well known that the local errors, as estimated in (8) for the initial time step, essentially accumulate in the global error, such that the following estimate holds:

$$\| x(Nh; x_0) - x_N \| \le C h^p;$$
see, e.g., ([15], Thm. II.3.6).

2.2. Dynamic Mode Decomposition

For $i = 0, \dots, m$, assume data points $x_i \in \mathbb{R}^n$ are available. If not explicitly stated, we do not make any assumption on $m$. The idea of DMD is to determine a linear time-invariant relation between the data, i.e., finding a matrix $A_{\mathrm{DMD}} \in \mathbb{R}^{n \times n}$ such that the data approximately satisfy

$$x_{i+1} \approx A_{\mathrm{DMD}}\, x_i \quad \text{for } i = 0, 1, \dots, m-1.$$
Following [17], we introduce
$$X := \begin{bmatrix} x_0 & \cdots & x_{m-1} \end{bmatrix} \in \mathbb{R}^{n \times m} \quad \text{and} \quad Z := \begin{bmatrix} x_1 & \cdots & x_m \end{bmatrix} \in \mathbb{R}^{n \times m}. \tag{9}$$
Then, the DMD approximation matrix is defined as the minimum-norm solution of

$$\min_{M \in \mathbb{R}^{n \times n}} \| Z - M X \|_F, \tag{10}$$

where $\|\cdot\|_F$ denotes the Frobenius norm. It is easy to show that the minimum-norm solution is given by $A_{\mathrm{DMD}} = Z X^\dagger$ [12], where $X^\dagger$ denotes the Moore–Penrose pseudoinverse of $X$. This motivates the following definition.
Definition 2.
Consider the data $x_i \in \mathbb{R}^n$ for $i = 0, 1, \dots, m$ and associated data matrices $X$ and $Z$ defined in (9). Then, the matrix $A_{\mathrm{DMD}} = Z X^\dagger$ is called the DMD matrix for $(x_i)_{i=0}^m$. If the eigendecomposition of $A_{\mathrm{DMD}}$ exists, then the eigenvalues and eigenvectors of $A_{\mathrm{DMD}}$ are called DMD eigenvalues and DMD modes of $(x_i)_{i=0}^m$, respectively.
The Moore–Penrose pseudoinverse, and thus also the DMD matrix, can be computed via the singular value decomposition (SVD); see, for example, ([18], Ch. 5.5.4). Let

$$\begin{bmatrix} U & \bar{U} \end{bmatrix} \begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V & \bar{V} \end{bmatrix}^T = X$$

denote the SVD of $X$, with $r := \operatorname{rank}(X)$, $U \in \operatorname{St}(n, r)$, $\Sigma \in \mathbb{R}^{r \times r}$ with $\operatorname{rank}(\Sigma) = r$, and $V \in \operatorname{St}(m, r)$, where we use the Stiefel manifold as defined in (1). Then,

$$X^\dagger = \begin{bmatrix} V & \bar{V} \end{bmatrix} \begin{bmatrix} \Sigma^{-1} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} U & \bar{U} \end{bmatrix}^T = V \Sigma^{-1} U^T$$

and, thus,

$$A_{\mathrm{DMD}} = Z V \Sigma^{-1} U^T. \tag{11}$$

For later reference, we call $U \Sigma V^T = X$ the trimmed SVD of $X$.
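The SVD-based computation above can be sketched in a few lines (assuming NumPy; the matrix A and the data below are an arbitrary illustration, not from the paper):

```python
import numpy as np

def dmd_matrix(X, Z, tol=1e-10):
    """A_DMD = Z X^+ via the trimmed SVD X = U Sigma V^T, cf. (11)."""
    U, sig, Vt = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(sig > tol * sig[0]))      # numerical rank r = rank(X)
    return Z @ Vt[:r, :].T @ np.diag(1.0 / sig[:r]) @ U[:, :r].T

# sanity check with data generated by x_{i+1} = A x_i
A = np.array([[0.9, 0.2], [0.0, 0.8]])
x = [np.array([1.0, 1.0])]
for _ in range(5):
    x.append(A @ x[-1])
X = np.column_stack(x[:-1])
Z = np.column_stack(x[1:])
assert np.allclose(dmd_matrix(X, Z), A)      # here rank(X) = n = 2, so DMD recovers A
```

The truncation to the numerical rank implements the "trimmed" SVD; without it, tiny singular values would be inverted and amplify rounding errors.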

3. System Identification and Error Analysis

In this section, we present our main results. Before discussing system identification for discrete-time (cf. Section 3.2) and continuous-time (cf. Section 3.3) dynamical systems via DMD, we study the impact of transformations of the data on DMD in Section 3.1.

3.1. Data Scaling and Invariance of the DMD Approximation

Scaling and more general transformations of data are often used to improve the performance of the methods that work on the data. Since DMD is inherently related to the Moore–Penrose inverse, we first study the impact of a nonsingular matrix $T \in \operatorname{GL}_n(\mathbb{R})$ on the generalized inverse. To this purpose, consider a matrix $X \in \mathbb{R}^{n \times m}$ with $r := \operatorname{rank}(X)$. Let $X = U \Sigma V^T$ denote the trimmed SVD of $X$ with $U \in \operatorname{St}(n, r)$, $\Sigma \in \operatorname{GL}_r(\mathbb{R})$, and $V \in \operatorname{St}(m, r)$. Let $T U = Q R$ denote the QR-decomposition of $T U$ with $Q \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{n \times r}$. We immediately obtain $\operatorname{rank}(R \Sigma) = r$. Let $R \Sigma = \hat{U} \hat{\Sigma} \hat{V}^T$ denote the trimmed SVD of $R \Sigma$ with $\hat{U} \in \operatorname{St}(n, r)$, $\hat{\Sigma} \in \operatorname{GL}_r(\mathbb{R})$, and $\hat{V} \in \operatorname{St}(r, r)$. Since $\hat{V}$ is square and orthogonal, we immediately infer

$$\hat{V} \hat{V}^T = I_r. \tag{13}$$

It is easy to see that the matrices $U_T := Q \hat{U} \in \mathbb{R}^{n \times r}$ and $V_T := V \hat{V} \in \mathbb{R}^{m \times r}$ satisfy $U_T^T U_T = I_r = V_T^T V_T$, i.e., $U_T \in \operatorname{St}(n, r)$ and $V_T \in \operatorname{St}(m, r)$. The trimmed SVD of $T X$ is thus given by

$$T X = T U \Sigma V^T = Q R \Sigma V^T = Q \hat{U} \hat{\Sigma} \hat{V}^T V^T = U_T \hat{\Sigma} V_T^T.$$

We conclude

$$(T X)^\dagger\, T X = V_T V_T^T = V \hat{V} \hat{V}^T V^T = V V^T = X^\dagger X,$$
where we used the identity (13). We have thus shown the following result.
Proposition 1.
Let $X \in \mathbb{R}^{n \times m}$ and $T \in \operatorname{GL}_n(\mathbb{R})$. Then, $(T X)^\dagger (T X) = X^\dagger X$.
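Proposition 1 can be checked numerically in a few lines (assuming NumPy; the matrices below are randomly generated illustrations with a deliberately rank-deficient X):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))
X[3] = X[0] + X[1]                 # make X rank-deficient: rank(X) = 3 < 4
T = rng.standard_normal((4, 4))    # a generic random T is nonsingular, i.e., T in GL_4(R)

lhs = np.linalg.pinv(T @ X) @ (T @ X)
rhs = np.linalg.pinv(X) @ X
assert np.allclose(lhs, rhs)       # (TX)^+ (TX) = X^+ X
```

Both sides are the orthogonal projector onto the row space of X, which is exactly why the transformation T drops out.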
With these preparations, we can now show that the DMD approximation is partially invariant to general regular transformations applied to the training data. More precisely, a data transformation only affects the part of the DMD approximation that is not in the image of the data.
Theorem 1.
For given data $(x_i)_{i=0}^m$, consider the matrices $X$ and $Z$ as defined in (9) and the corresponding DMD matrix $A_{\mathrm{DMD}} \in \mathbb{R}^{n \times n}$. Consider $T \in \operatorname{GL}_n(\mathbb{R})$ and let

$$\tilde{X} := T X \quad \text{and} \quad \tilde{Z} := T Z$$

be the matrices of the transformed data. Let $\tilde{A}_{\mathrm{DMD}} = \tilde{Z} \tilde{X}^\dagger$ denote the DMD matrix for the transformed data. Then, the DMD matrix is invariant under the transformation in the image of $X$, i.e.,

$$A_{\mathrm{DMD}} X = T^{-1} \tilde{A}_{\mathrm{DMD}} T X = T^{-1} \tilde{A}_{\mathrm{DMD}} \tilde{X}.$$

Moreover, if $T$ is unitary or $\operatorname{rank}(X) = n$, then

$$A_{\mathrm{DMD}} = T^{-1} \tilde{A}_{\mathrm{DMD}} T. \tag{14}$$
Proof. 
Using Proposition 1, we obtain

$$T^{-1} \tilde{A}_{\mathrm{DMD}} T X = T^{-1} T Z (T X)^\dagger T X = Z X^\dagger X = A_{\mathrm{DMD}} X.$$

If $T$ is unitary or $\operatorname{rank}(X) = n$, then we immediately obtain $(T X)^\dagger = X^\dagger T^{-1}$, and thus

$$T^{-1} \tilde{A}_{\mathrm{DMD}} T = T^{-1} T Z (T X)^\dagger T = Z X^\dagger T^{-1} T = A_{\mathrm{DMD}},$$
which concludes the proof. □
While Theorem 1 states that DMD is invariant under transformations in the image of the data matrix, the invariance in the orthogonal complement of the image of the data matrix, i.e., equality (14), is, in general, not satisfied. We illustrate this observation in the numerical simulations in Section 4 and in the following analytical example.
Example 2.
Consider the data vectors $x_i := \begin{bmatrix} i+1 & 0 \end{bmatrix}^T$ for $i = 0, 1, 2$ and $T := \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$. Then,

$$X = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}, \quad Z = \begin{bmatrix} 2 & 3 \\ 0 & 0 \end{bmatrix}, \quad X^\dagger = \frac{1}{5} \begin{bmatrix} 1 & 0 \\ 2 & 0 \end{bmatrix}, \quad T X = \begin{bmatrix} 1 & 2 \\ 1 & 2 \end{bmatrix}, \quad (T X)^\dagger = \frac{1}{10} \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}.$$

We thus obtain

$$A_{\mathrm{DMD}} = \frac{1}{5} \begin{bmatrix} 8 & 0 \\ 0 & 0 \end{bmatrix}, \quad \tilde{A}_{\mathrm{DMD}} = \frac{1}{5} \begin{bmatrix} 4 & 4 \\ 4 & 4 \end{bmatrix}, \quad \text{and} \quad T^{-1} \tilde{A}_{\mathrm{DMD}} T = \frac{1}{5} \begin{bmatrix} 8 & 4 \\ 0 & 0 \end{bmatrix},$$
confirming that DMD is invariant under transformations in the image of the data, but not in the orthogonal complement.
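The matrices of Example 2 can be reproduced directly (assuming NumPy):

```python
import numpy as np

X = np.array([[1.0, 2.0], [0.0, 0.0]])
Z = np.array([[2.0, 3.0], [0.0, 0.0]])
T = np.array([[1.0, 0.0], [1.0, 1.0]])

A_dmd = Z @ np.linalg.pinv(X)
A_tilde = (T @ Z) @ np.linalg.pinv(T @ X)
back = np.linalg.inv(T) @ A_tilde @ T

assert np.allclose(A_dmd, np.array([[8.0, 0.0], [0.0, 0.0]]) / 5)
assert np.allclose(back, np.array([[8.0, 4.0], [0.0, 0.0]]) / 5)
assert np.allclose(A_dmd @ X, back @ X)   # invariance on the image of X ...
assert not np.allclose(A_dmd, back)       # ... but not on its orthogonal complement
```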
Remark 3.
One can show that, in the setting of Theorem 1, the matrix $\hat{M} := T A_{\mathrm{DMD}} T^{-1}$ is a minimizer (not necessarily the minimum-norm solution) of

$$\min_{M \in \mathbb{R}^{n \times n}} \| \tilde{Z} - M \tilde{X} \|_F.$$

3.2. Discrete-Time Dynamics

In this subsection, we focus on the identification of discrete-time dynamics, exemplified by the discrete-time system

$$x_{i+1} = A x_i \tag{15}$$
with initial value $x_0 \in \mathbb{R}^n$ and system matrix $A \in \mathbb{R}^{n \times n}$. The question that we want to answer is to what extent DMD is able to recover the matrix $A$ solely from data.
Proposition 2.
Consider data $(x_i)_{i=0}^m$ generated by (15), associated data matrices $X, Z$ as defined in (9), and the corresponding DMD matrix $A_{\mathrm{DMD}}$. Moreover, let $U \Sigma V^T = X$ with $U \in \operatorname{St}(n, r)$, $\Sigma \in \operatorname{GL}_r(\mathbb{R})$, $V \in \operatorname{St}(m, r)$, and $r := \operatorname{rank}(X)$ denote the trimmed SVD of $X$. Then,

$$A_{\mathrm{DMD}} = A\, U U^T. \tag{16}$$
Proof. 
By assumption, we have $X = \begin{bmatrix} x_0 & A x_0 & \cdots & A^{m-1} x_0 \end{bmatrix}$ and $Z = A X = A U \Sigma V^T$. We conclude

$$A_{\mathrm{DMD}} = Z X^\dagger = A U \Sigma V^T V \Sigma^{-1} U^T = A U U^T. \qquad \square$$
Remark 4.
We immediately conclude that DMD recovers the true dynamics, i.e., $A_{\mathrm{DMD}} = A$, whenever $\operatorname{rank}(X) = n$. This is the case if and only if $(A, x_0)$ is controllable, i.e., $\mathcal{C}(x_0, A)$ has dimension $n$, and the data set is sufficiently rich, i.e., $m \ge n$.
Our next theorem identifies the part of the dynamics that is exactly recovered in the case $\operatorname{rank}(X) < n$, which occurs if $(A, x_0)$ is not controllable or if $m < n$.
Theorem 2.
Consider the setting of Proposition 2. If $\operatorname{span}\{U\}$ is $A_{\mathrm{DMD}}$-invariant, then the DMD approximation is exact in the image of $U$, i.e.,

$$\left( A^i - A_{\mathrm{DMD}}^i \right) x_0 = 0 \quad \text{for all } i \ge 0 \text{ and } x_0 \in \operatorname{span}\{U\}. \tag{17}$$

If, in addition, $\ker(A) \cap \operatorname{span}\{U\}^\perp = \{0\}$, then also the converse direction holds.
Proof. 
Let $x_0 \in \operatorname{span}\{U\}$. Since $\operatorname{span}\{U\}$ is $A_{\mathrm{DMD}}$-invariant, we conclude $A_{\mathrm{DMD}}^i x_0 \in \operatorname{span}\{U\}$ for $i \ge 0$, i.e., there exists $y_i \in \mathbb{R}^r$ such that $A_{\mathrm{DMD}}^i x_0 = U y_i$. Using Proposition 2 and $U^T U = I_r$, we conclude

$$A_{\mathrm{DMD}}^{i+1} x_0 = A_{\mathrm{DMD}} A_{\mathrm{DMD}}^i x_0 = A_{\mathrm{DMD}} U y_i = A U U^T U y_i = A\, A_{\mathrm{DMD}}^i x_0.$$

The proof of (17) follows via induction over $i$. For the converse direction, let $x = x_U + x_{U^\perp}$ with $x_U \in \operatorname{span}\{U\}$ and $0 \ne x_{U^\perp} \in \operatorname{span}\{U\}^\perp$. Proposition 2 and (17) imply

$$\left( A - A_{\mathrm{DMD}} \right) x = A x_{U^\perp} \ne 0,$$
which completes the proof. □
Remark 5.
The proof of Theorem 2 details that $\operatorname{span}\{U\}$ is $A_{\mathrm{DMD}}$-invariant if and only if $\operatorname{span}\{U\}$ is $A$-invariant. Moreover, $\operatorname{span}\{U\} = \operatorname{span}\{X\}$ implies that this condition can be checked easily during the data-generation process. If we further assume that the data are generated via (15), then this is the case whenever

$$\operatorname{rank}\left( \begin{bmatrix} x_0 & \cdots & x_i \end{bmatrix} \right) = \operatorname{rank}\left( \begin{bmatrix} x_0 & \cdots & x_{i+1} \end{bmatrix} \right)$$

for some $i \ge 0$.
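The rank-stagnation criterion of Remark 5 can be monitored while the data are collected. A sketch (the matrix A and the initial value are illustrative, chosen so that one mode is not excited and the rank stagnates below n):

```python
import numpy as np

A = np.diag([0.9, 0.5, 0.1])
x0 = np.array([1.0, 1.0, 0.0])     # the third mode is never excited
snapshots = [x0]
while True:
    snapshots.append(A @ snapshots[-1])
    Xi = np.column_stack(snapshots)
    # stop as soon as span{x_0,...,x_i} = span{x_0,...,x_{i+1}}
    if np.linalg.matrix_rank(Xi[:, :-1]) == np.linalg.matrix_rank(Xi):
        break

assert np.linalg.matrix_rank(np.column_stack(snapshots)) == 2   # stagnates at 2 < n = 3
```

Once the rank stagnates, collecting further snapshots along the same trajectory cannot enlarge the subspace on which DMD is exact.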

3.3. Continuous-Time Dynamics and RK Approximation

Suppose now that the data $(x_i)_{i=0}^m$ are generated by a continuous process, i.e., via the dynamical system (2). In this case, we are interested in recovering the continuous dynamics from the DMD approximation. As a consequence of Theorem 2, we immediately obtain the following result for exact sampling.
Corollary 1.
Let $A_{\mathrm{DMD}}$ be the DMD matrix for the sequence $x_i = \exp(i F h)\, x_0 \in \mathbb{R}^n$ for $i = 1, \dots, m$ with $m \ge n$. Then,

$$x(ih; \tilde{x}_0) = A_{\mathrm{DMD}}^i\, \tilde{x}_0$$

if and only if $\tilde{x}_0 \in \operatorname{span}\{x_0, \dots, x_m\}$, where $x(t; \tilde{x}_0)$ denotes the solution of the IVP (2) with initial value $\tilde{x}_0$.
Proof. 
The assertion follows immediately from Proposition 2 with the observation that $\exp(i F h)$ is nonsingular. □
We conclude that we can recover the continuous dynamics with the matrix logarithm (see [19] for further details), whenever $\operatorname{rank}(X) = n$. In practical applications, an exact evaluation of the flow map is typically not possible. Instead, a numerical time-integration method is used to approximate the continuous dynamics.
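With exact sampling and rank(X) = n, we have $A_{\mathrm{DMD}} = \exp(hF)$, so F is recovered from a matrix logarithm. A sketch using an eigendecomposition-based logarithm (valid here because the illustrative F is diagonalizable and the discrete eigenvalues are real and positive; in general one would use, e.g., scipy.linalg.logm):

```python
import numpy as np

def expm_diag(F, t):
    """exp(tF) for a diagonalizable F via its eigendecomposition."""
    lam, V = np.linalg.eig(F)
    return (V @ np.diag(np.exp(t * lam)) @ np.linalg.inv(V)).real

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative; eigenvalues -1 and -2
h, m = 0.1, 5
x = [np.array([1.0, 2.0])]
for _ in range(m):
    x.append(expm_diag(F, h) @ x[-1])      # exact sampling x_i = exp(i h F) x_0

X, Z = np.column_stack(x[:-1]), np.column_stack(x[1:])
A_dmd = Z @ np.linalg.pinv(X)              # equals exp(h F) since rank(X) = n = 2

# matrix logarithm of A_dmd (its eigenvalues here are real and positive)
mu, W = np.linalg.eig(A_dmd)
F_rec = ((W @ np.diag(np.log(mu)) @ np.linalg.inv(W)) / h).real
assert np.allclose(F_rec, F)
```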
Suppose we use a RKM with constant step size $h > 0$ to obtain a numerical approximation $(x_i)_{i=0}^m \subseteq \mathbb{R}^n$ of the IVP (2) and use these data to construct the DMD matrix $A_{\mathrm{DMD}} \in \mathbb{R}^{n \times n}$ as in Definition 2. If we now want to use the DMD matrix to obtain an approximation for a different initial condition, say $x(0) = \tilde{x}_0$, we are interested in quantifying the error

$$\| x(ih; \tilde{x}_0) - A_{\mathrm{DMD}}^i\, \tilde{x}_0 \|.$$
Theorem 3.
Suppose that the sequence $(x_i)_{i=0}^m$, with $x_i \in \mathbb{R}^n$ for $i = 0, \dots, m$, is generated from the linear IVP (2) via a RKM of order $p$ and step size $h > 0$ and satisfies

$$\operatorname{span}\{x_0, \dots, x_{m-1}\} = \operatorname{span}\{x_0, \dots, x_m\}.$$

Let $A_{\mathrm{DMD}} \in \mathbb{R}^{n \times n}$ denote the associated DMD matrix. Then, there exists a constant $C \ge 0$ such that

$$\| x(ih; \tilde{x}_0) - A_{\mathrm{DMD}}^i\, \tilde{x}_0 \| \le C h^p \tag{18}$$

holds for any $\tilde{x}_0 \in \operatorname{span}\{x_0, \dots, x_{m-1}\}$.
Proof. 
Since the data $(x_i)_{i=0}^m$ are generated from a RKM, there exists a matrix $A_h \in \mathbb{R}^{n \times n}$ such that $x_{i+1} = A_h x_i$ for $i = 0, \dots, m-1$. Let $\tilde{x}_0 \in \operatorname{span}\{x_0, \dots, x_{m-1}\}$. Then, Theorem 2 implies $A_h^i \tilde{x}_0 = A_{\mathrm{DMD}}^i \tilde{x}_0$ for any $i \ge 0$. Thus, the result follows from the classical error estimates for RKMs (see, for example, [15], Thm. II.3.6) and from the equality

$$\| x(ih; \tilde{x}_0) - A_{\mathrm{DMD}}^i\, \tilde{x}_0 \| = \| x(ih; \tilde{x}_0) - A_h^i\, \tilde{x}_0 \| \le C h^p$$

for some $C \ge 0$, since the RKM is of order $p$. □
The proof details that, due to Proposition 2, we are essentially able to recover the discrete dynamics $A_h$ obtained from the RKM via DMD, provided that $\operatorname{rank}(X) = n$. As laid out in Remark 4, this condition is equivalent to $(A_h, x_0)$ being controllable, for which the controllability of $(F, x_0)$ is a necessary condition.
The question that remains to be answered is whether it is possible to recover the continuous dynamic matrix $F$ from the discrete dynamics $A_{\mathrm{DMD}}$ (respectively, $A_h$), provided that the Runge–Kutta scheme used to discretize the continuous dynamics is known. For any 1-stage Runge–Kutta method $(\alpha, \beta)$, i.e., $s = 1$ in (3), this is indeed the case, since then (6) simplifies to

$$A_h = I_n + h \beta\, (I_n - h \alpha F)^{-1} F,$$

which yields

$$F = \frac{1}{h}\, (A_h - I_n) \left( \alpha A_h + (\beta - \alpha) I_n \right)^{-1}. \tag{19}$$
Combining (19) with Proposition 2 yields the following result.
Lemma 1.
Suppose that the sequence $(x_i)_{i=0}^m \subseteq \mathbb{R}^n$ is generated from the linear IVP (2) via the 1-stage Runge–Kutta method $(\alpha, \beta)$ with step size $h > 0$. Let $A_{\mathrm{DMD}} \in \mathbb{R}^{n \times n}$ denote the associated DMD matrix. If $\operatorname{rank}\left( \begin{bmatrix} x_0 & \cdots & x_{m-1} \end{bmatrix} \right) = n$, then

$$F = \frac{1}{h}\, (A_{\mathrm{DMD}} - I_n) \left( \alpha A_{\mathrm{DMD}} + (\beta - \alpha) I_n \right)^{-1},$$

provided that the inverse exists.
If the assumption of Lemma 1 holds, then we can recover the continuous dynamic matrix from the DMD approximation. The corresponding formula for popular 1-stage methods is presented in Table 1.
In this scenario, let us emphasize that, once F is recovered, we can compute the discrete dynamics for any time step, not only for the step size h used to generate the data.
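Lemma 1 can be sketched end-to-end for the implicit Euler method (α, β) = (1, 1); the matrix F below is an illustrative example, not taken from the paper:

```python
import numpy as np

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative system matrix
h, m = 0.05, 6
Ah = np.linalg.inv(np.eye(2) - h * F)      # implicit Euler discretization, cf. Example 1

x = [np.array([1.0, 1.0])]
for _ in range(m):
    x.append(Ah @ x[-1])
X, Z = np.column_stack(x[:-1]), np.column_stack(x[1:])
A_dmd = Z @ np.linalg.pinv(X)              # recovers A_h since rank(X) = n = 2

alpha, beta = 1.0, 1.0                     # implicit Euler Butcher pair
F_rec = (A_dmd - np.eye(2)) @ np.linalg.inv(alpha * A_dmd + (beta - alpha) * np.eye(2)) / h
assert np.allclose(F_rec, F)               # continuous dynamics recovered, cf. Lemma 1
```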
The situation is different for $s \ge 2$, as we illustrate with the following example.
Example 3.
For given $h > 0$, consider $F_1 := 0$ and $F_2 := -\frac{2}{h}$. Then, for Heun's method, i.e., $A = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$ and $\beta = \begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \end{bmatrix}$, we obtain $A_h = p(F)$ with $p(x) = 1 + h x + \frac{h^2}{2} x^2$, and thus $p(F_1) = p(F_2)$. In particular, we cannot distinguish the two continuous-time dynamics in this specific scenario.
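Example 3 can be confirmed by evaluating the stability polynomial of Heun's method at both scalar systems:

```python
h = 0.1
p = lambda x: 1.0 + h * x + 0.5 * h**2 * x**2   # stability polynomial of Heun's method
F1, F2 = 0.0, -2.0 / h
assert abs(p(F1) - p(F2)) < 1e-12               # both evaluate to 1: identical A_h
```

Since both continuous systems are mapped to the same discrete matrix, no amount of data generated with this step size can tell them apart.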

4. Numerical Examples

To illustrate our analytical findings, we constructed a dynamical system that exhibits some fast dynamics that is stable but not exponentially stable and has a nontrivial but exactly computable flow map. In this way, we can check the approximation both qualitatively and quantitatively. In addition, the system can be scaled to arbitrary state-space dimensions. Most importantly, for our purposes, the system is designed such that for any initial value, the space not reached by the system is at least as large as the reachable space. The complete code of our numerical examples can be found in the supplementary material.
With $N \in \mathbb{N}$ and $\Delta := \operatorname{diag}(0, 1, \dots, N-1)$, we consider the continuous-time dynamics (2) with

$$F := \begin{bmatrix} 0 & 2\Delta \\ 0 & -\frac{1}{2}\Delta \end{bmatrix} \quad \text{and} \quad \exp(t F) = \begin{bmatrix} I & 4 \left( I - \exp\!\left( -\frac{t}{2}\Delta \right) \right) \\ 0 & \exp\!\left( -\frac{t}{2}\Delta \right) \end{bmatrix}.$$

Starting with an initial value $x_0 \in \mathbb{R}^{2N}$, we can thus generate exact snapshots of the solution via $x(t) = \exp(t F)\, x_0$, as well as the controllability space

$$\mathcal{C}(F, x_0) = \operatorname{span}\left\{ x_0,\ F x_0,\ F^2 x_0,\ \dots,\ F^{2N-1} x_0 \right\}.$$
One can confirm that $\dim(\mathcal{C}(F, x_0)) \le N$, with equality if, for example, the initial state

$$x_0 = \begin{bmatrix} x_{0,1} \\ x_{0,2} \end{bmatrix}$$

has no zero entries in its lower part $x_{0,2} \in \mathbb{R}^N$. Due to (7), we immediately infer

$$\dim(\mathcal{C}(A_h, x_0)) \le N$$
for any $A_h$ obtained by a Runge–Kutta method. We conclude that DMD is at most capable of reproducing solutions that evolve in $\mathcal{C}(F, x_0)$. Indeed, as outlined in Proposition 2, all components of any other initial value $\tilde{x}_0$ that lie in the orthogonal complement of $\mathcal{C}(F, x_0)$ are set to zero in the first DMD iteration.
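The construction above can be reproduced in a few lines (assuming NumPy; F and exp(tF) as defined in this section):

```python
import numpy as np

N = 5
D = np.diag(np.arange(N, dtype=float))                  # Delta = diag(0, ..., N-1)
F = np.block([[np.zeros((N, N)), 2 * D],
              [np.zeros((N, N)), -0.5 * D]])

def flow(t):
    """Closed-form exp(tF) from the block-triangular structure of F."""
    E = np.diag(np.exp(-0.5 * t * np.arange(N)))
    return np.block([[np.eye(N), 4 * (np.eye(N) - E)],
                     [np.zeros((N, N)), E]])

x0 = np.arange(1.0, 2 * N + 1)                          # x0 = [1, 2, ..., 10]
# the Krylov matrix [x0, F x0, ..., F^{2N-1} x0] has rank N, not 2N
K = np.column_stack([np.linalg.matrix_power(F, k) @ x0 for k in range(2 * N)])
assert np.linalg.matrix_rank(K) == N
# sanity check of the closed-form flow: semigroup property exp(0.3 F) = exp(0.2 F) exp(0.1 F)
assert np.allclose(flow(0.2) @ flow(0.1), flow(0.3))
```

The rank check makes the central point of the example concrete: half of the state space is unreachable from this initial value, so DMD can at best be exact on an N-dimensional subspace.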
For our numerical experiments, we set $N := 5$, $x_0 := [1, 2, \dots, 10]^T$, and consider the time grid $t_i := ih$ for $i = 0, 1, \dots, 100$ with uniform step size $h = 0.1$. An SVD of the exactly sampled data,

$$\begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} V^T = \begin{bmatrix} x_0 & x(h; x_0) & x(2h; x_0) & \cdots & x(10; x_0) \end{bmatrix},$$

reveals that the solution space is indeed of dimension $N = 5$ and defines the bases $U_1, U_2 \in \operatorname{St}(10, 5)$ of $\mathcal{C}(F, x_0)$ and its orthogonal complement, respectively.
For our numerical experiment, whose results are depicted in Figure 2, we choose the initial values
$$\tilde{x}_0 := U_1 e \in \operatorname{span}(U_1) \quad \text{and} \quad \hat{x}_0 := U_2 e \in \operatorname{span}(U_2) = \operatorname{span}(U_1)^\perp,$$

with $e = [1, 1, 1, 1, 1]^T$. The exact solutions for both initial values are presented in Figure 2a,b, respectively. Our simulations confirm the following:
  • As predicted by Theorem 2, the DMD approximation for the initial value x ˜ 0 , depicted in Figure 2c, exactly recovers the exact solution, while the DMD approximation for the initial value x ^ 0 (cf. Figure 2d) is identically zero.
  • If we first transform the data with the matrix
    $$T = \begin{bmatrix} I_N & I_N \\ I_N & -I_N \end{bmatrix} \in \operatorname{GL}_{2N}(\mathbb{R}),$$
    then compute the DMD approximation, and then transform the results back, the DMD approximation for x ˜ 0 remains unchanged (see Figure 2e), confirming (14) from Theorem 1. In contrast, the prediction of the dynamics for x ^ 0 changes (see Figure 2f), highlighting that DMD is not invariant under state-space transformations in the orthogonal complement of the data.
The presented numerical example was chosen to illustrate the importance of the reachable space. Computing a subspace numerically is a delicate task, in particular if, as in our example, the ratio of the largest and the smallest entry in the controllability matrix is of size $(1/2)^{2N-3}(N-1)^{2N} / (1/2)^{2N-1} = 4(N-1)^{2N}$, which leads to huge rounding errors already for moderate $N$. This mainly concerns the separation of the reachable and the unreachable subspace, which, however, can be monitored in a general implementation. Since standard SVD implementations compute the dominant directions (and, thus, the Moore–Penrose inverse) with high accuracy, these numerical issues are less severe for quantitative approximations with DMD.

5. Conclusions

This work highlighted fundamental properties of the DMD approach when applied to linear problems, both in continuous and discrete time. Depending on how the initial data relate to the reachable space, DMD can recover the exact discrete-time dynamics. If, in addition, the discrete-time data are generated from a continuous-time system via time discretization with a Runge–Kutta scheme, then the error of the DMD approximation is of the same order as that of the time-integration method. As a by-product of our analysis, we made explicit a relation between the Moore–Penrose inverse and regular transformations that, to our knowledge, has not been stated before. Although the findings mainly confirm what should be expected, the underlying principles, such as controllability, can be expected to generalize to nonlinear problems.

Supplementary Materials

The following are available at https://www.mdpi.com/article/10.3390/math10030418/s1. Python script to reproduce the numerical results.

Author Contributions

All authors have contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

B. Unger acknowledges funding from the DFG under Germany’s Excellence Strategy–EXC 2075–390740016 and is thankful for support by the Stuttgart Center for Simulation Science (SimTech).

Data Availability Statement

The code to produce the numerical example is attached to this manuscript as Supplementary Material.

Acknowledgments

We thank Robert Altmann for inviting us to the Sion workshop, where we started this work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DMD    dynamic mode decomposition
IVP    initial value problem
ODE    ordinary differential equation
RKM    Runge–Kutta method
SVD    singular value decomposition

References

  1. Benner, P.; Cohen, A.; Ohlberger, M.; Willcox, K. Model Reduction and Approximation; SIAM: Philadelphia, PA, USA, 2017.
  2. Quarteroni, A.; Manzoni, A.; Negri, F. Reduced Basis Methods for Partial Differential Equations: An Introduction; UNITEXT; Springer: Berlin, Germany, 2016.
  3. Antoulas, A.C. Approximation of Large-Scale Dynamical Systems; Advances in Design and Control; SIAM: Philadelphia, PA, USA, 2005; p. 489.
  4. Hesthaven, J.S.; Rozza, G.; Stamm, B. Certified Reduced Basis Methods for Parametrized Partial Differential Equations; Springer: Berlin, Germany, 2016.
  5. Antoulas, A.C.; Beattie, C.A.; Gugercin, S. Interpolatory Methods for Model Reduction; SIAM: Philadelphia, PA, USA, 2020.
  6. Mayo, A.J.; Antoulas, A.C. A framework for the solution of the generalized realization problem. Linear Algebra Appl. 2007, 425, 634–662.
  7. Beattie, C.; Gugercin, S. Realization-independent H2-approximation. In Proceedings of the 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012; pp. 4953–4958.
  8. Gustavsen, B.; Semlyen, A. Rational approximation of frequency domain responses by vector fitting. IEEE Trans. Power Deliv. 1999, 14, 1052–1061.
  9. Drmač, Z.; Gugercin, S.; Beattie, C. Quadrature-Based Vector Fitting for Discretized H2 Approximation. SIAM J. Sci. Comput. 2015, 37, A625–A652.
  10. Drmač, Z.; Gugercin, S.; Beattie, C. Vector Fitting for Matrix-valued Rational Approximation. SIAM J. Sci. Comput. 2015, 37, A2345–A2379.
  11. Peherstorfer, B.; Willcox, K. Data-driven operator inference for nonintrusive projection-based model reduction. Comput. Methods Appl. Mech. Engrg. 2016, 306, 196–215.
  12. Kutz, J.; Brunton, S.; Brunton, B.; Proctor, J. Dynamic Mode Decomposition; SIAM: Philadelphia, PA, USA, 2016.
  13. Moler, C.; Van Loan, C. Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later. SIAM Rev. 2003, 45, 3–49.
  14. Mezić, I. Spectral Properties of Dynamical Systems, Model Reduction and Decompositions. Nonlinear Dyn. 2005, 41, 309–325.
  15. Hairer, E.; Nørsett, S.; Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems; Springer Series in Computational Mathematics; Springer: Berlin, Germany, 2008.
  16. Kunkel, P.; Mehrmann, V. Differential-Algebraic Equations. Analysis and Numerical Solution; European Mathematical Society: Zürich, Switzerland, 2006.
  17. Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Kutz, J.N. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn. 2014, 1, 391–421.
  18. Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996.
  19. Higham, N. Functions of Matrices: Theory and Computation; Other Titles in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2008.
Figure 1. Problem setup.
Figure 2. Comparison of the exact solution, the DMD approximation, and the DMD approximation based on transformed data for initial values inside the reachable subspace, i.e., $\tilde{x}_0 \in \mathcal{C}(F, x_0)$, and outside the reachable subspace, i.e., $\hat{x}_0 \notin \mathcal{C}(F, x_0)$. (a) Exact solution with initial value $\tilde{x}_0$. (b) Exact solution with initial value $\hat{x}_0$. (c) DMD approximation with initial value $\tilde{x}_0$. (d) DMD approximation with initial value $\hat{x}_0$. (e) DMD with transformed data with initial value $\tilde{x}_0$. (f) DMD with transformed data with initial value $\hat{x}_0$.
Table 1. Identification of continuous-time systems via DMD with 1-stage Runge–Kutta methods.

Method                  $(\alpha, \beta)$       Lemma 1
explicit Euler          $(0, 1)$                $F = \frac{1}{h}(A_{\mathrm{DMD}} - I_n)$
implicit Euler          $(1, 1)$                $F = \frac{1}{h}(I_n - A_{\mathrm{DMD}}^{-1})$
implicit midpoint rule  $(\frac{1}{2}, 1)$      $F = \frac{2}{h}(A_{\mathrm{DMD}} - I_n)(A_{\mathrm{DMD}} + I_n)^{-1}$