Article

On Alternative Algorithms for Computing Dynamic Mode Decomposition

by
Gyurhan Nedzhibov
Faculty of Mathematics and Informatics, Shumen University, 9700 Shumen, Bulgaria
Computation 2022, 10(12), 210; https://doi.org/10.3390/computation10120210
Submission received: 16 November 2022 / Revised: 25 November 2022 / Accepted: 28 November 2022 / Published: 1 December 2022
(This article belongs to the Special Issue Mathematical Modeling and Study of Nonlinear Dynamic Processes)

Abstract
Dynamic mode decomposition (DMD) is a data-driven, modal decomposition technique that describes spatiotemporal features of high-dimensional dynamic data. The method is equation-free in the sense that it does not require knowledge of the underlying governing equations. The main purpose of this article is to introduce new alternatives to the currently accepted algorithm for calculating the dynamic mode decomposition. We present two new algorithms which are more economical from a computational point of view, which is an advantage when working with large data. With a few illustrative examples, we demonstrate the applicability of the introduced algorithms.

1. Introduction

Dynamic mode decomposition (DMD) was first introduced by Schmid [1] as a method for analyzing data from numerical simulations and laboratory experiments in the field of fluid dynamics. The method constitutes a mathematical technique for identifying spatiotemporal coherent structures from high-dimensional data. It can be viewed as a numerical approximation to Koopman spectral analysis, and in this sense it is applicable to nonlinear dynamical systems (see [2,3,4]). The DMD method combines the favorable features of two of the most powerful data-analytic tools: proper orthogonal decomposition (POD) in space and the Fourier transform in time. DMD has gained popularity and has been applied to a variety of dynamical systems in many different fields, such as video processing [5], epidemiology [6], neuroscience [7], financial trading [8,9,10], robotics [11], cavity flows [12,13] and various jets [2,14]. For a review of the DMD literature, we refer the reader to [15,16,17,18]. Since its initial introduction, along with its wide application in various fields, the DMD method has undergone various modifications and improvements. For some recent results on DMD for non-uniformly sampled data, higher-order DMD, parallel implementations of DMD and some derivative DMD techniques, we refer the reader to [19,20,21,22,23,24,25,26]; see also [27,28,29,30,31,32,33].
Our goal in the present work is to introduce alternative algorithms for calculating DMD. The new approaches calculate the DMD modes of the Koopman operator using a simpler formula than the standard DMD algorithm. The remainder of this work is organized as follows: in the rest of Section 1, we briefly describe the standard DMD algorithm; in Section 2, we propose and discuss the new approaches for DMD computation; in Section 3, we present numerical results; and Section 4 contains the conclusion.

1.1. Description of the Standard DMD Algorithm

Originally, the DMD technique was formulated in terms of a companion matrix [1,2], emphasizing its connections to the Arnoldi algorithm and Koopman operator theory. Later, an SVD-based algorithm was presented in [12]. This algorithm is more numerically stable and is now a commonly accepted approach for performing DMD decomposition. We describe this algorithm in the following. Throughout the paper, we use the following notations: uppercase Latin letters for matrices, lowercase Latin or Greek letters for scalars, and lowercase bold letters for vectors.
Consider a sequential set of data arranged in an n × (m + 1) matrix
Z = [x_0, \ldots, x_m],  (1)
where n is the number of state variables and m + 1 is the number of observations (snapshots). The data x_i could come from measurements, experiments, or simulations collected at times t_i from a given dynamical system; we assume that the data are equispaced in time, with a time step \Delta t. We assume that the data Z are generated by linear dynamics, i.e., that there exists a linear operator A such that
x_{k+1} = A x_k, \quad k = 0, \ldots, m-1.  (2)
The goal of the DMD method is to find an eigendecomposition of the (unknown) operator A. To proceed, we use an arrangement of the data set into two large data matrices
X = [x_0, \ldots, x_{m-1}] \quad \text{and} \quad Y = [x_1, \ldots, x_m].  (3)
Therefore, expression (2) is equivalent to
Y = A X.  (4)
Then the DMD of the data matrix Z is given by the eigendecomposition of A. We can approximate the operator A by using the singular value decomposition (SVD) of the data matrix, X = U \Sigma V^{*}, where U is an n × n unitary matrix, \Sigma is an n × m rectangular diagonal matrix with non-negative real numbers on the diagonal, V is an m × m unitary matrix, and V^{*} is the conjugate transpose of V; see [34]. Then, from (4), we obtain
A = Y X^{\dagger} = Y V \Sigma^{-1} U^{*},  (5)
where X^{\dagger} is the pseudoinverse of X; see [35]. It should be noted that calculating the eigendecomposition of the n × n matrix A can be prohibitively expensive if n is large, i.e., if n ≫ m. As a result, the goal is to compute eigenvectors and eigenvalues without explicitly representing or manipulating A. A low-rank approximation matrix \tilde{A} is constructed for this purpose, and its eigendecomposition is calculated to obtain the DMD modes and eigenvalues. The DMD modes and DMD eigenvalues are intended to approximate the eigenvectors and eigenvalues of A.
A reduced SVD, X = U_r \Sigma_r V_r^{*}, can be used to obtain the low-rank approximation matrix \tilde{A}, where U_r is n × r, \Sigma_r is r × r diagonal, V_r is m × r, and r is the rank of X (r ≤ m). Then, using (5), we obtain the low-dimensional representation
\tilde{A} = U_r^{*} A U_r = U_r^{*} Y V_r \Sigma_r^{-1}.  (6)
The following algorithm (Algorithm 1) provides a robust method for computing DMD modes and eigenvalues.
In its original form [1], the DMD algorithm differs slightly from the one described above. The only difference is that the DMD modes (at Step 5) are computed by the formula
\Phi = U W,  (7)
where W is the eigenvector matrix of \tilde{A}. The DMD modes calculated by Algorithm 1 are called exact DMD modes, because Tu et al. [16] prove that they are exact eigenvectors of the matrix A. The modes computed by (7) are referred to as projected DMD modes. It is worth noting that the DMD method is generalized and extended to a larger class of data sets in [16], where the assumption of evenly spaced measurements is relaxed.
Algorithm 1 Exact DMD
Input: Data matrices X and Y, and rank r.
Output: DMD modes Φ and eigenvalues Λ
1: Procedure DMD(X, Y, r)
2:    [U, \Sigma, V] = SVD(X, r)        (reduced rank-r SVD of X)
3:    \tilde{A} = U^{*} Y V \Sigma^{-1}        (low-rank approximation of A)
4:    [W, \Lambda] = EIG(\tilde{A})        (eigendecomposition of \tilde{A})
5:    \Phi = Y V \Sigma^{-1} W        (DMD modes of A)
6: End Procedure
Finally, knowing the DMD modes \Phi and eigenvalues \Lambda = \mathrm{diag}\{\lambda_i\}, we can reconstruct the time series of the data set Z in (1) by the expression
\hat{x}_k = \Phi \Lambda^{k} b,  (8)
where b = \Phi^{\dagger} x_0 is the vector of initial amplitudes of the DMD modes.
The discrete-time DMD eigenvalues \lambda_j can also be converted to continuous-time eigenvalues (Fourier modes)
\omega_j = \ln(\lambda_j)/\Delta t, \quad j = 1, \ldots, r.
A continuous time dynamical system can be reconstructed as a function of time by the expression
\hat{x}(t) = \Phi \exp(\Omega t)\, b,  (9)
where \Omega = \mathrm{diag}\{\omega_1, \ldots, \omega_r\}. A prediction of the future state of the system is obtained from expression (9) for any time t.
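The standard pipeline above (reduced SVD, low-rank operator, eigendecomposition, modes, reconstruction) can be sketched in a few lines of NumPy. This is an illustrative implementation under assumed variable names, not the author's code; the reconstruction helper follows the formula \hat{x}_k = \Phi \Lambda^k b with b = \Phi^{\dagger} x_0.

```python
import numpy as np

def exact_dmd(X, Y, r):
    """Exact DMD (Algorithm 1): rank-r DMD modes Phi and eigenvalues Lam."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)   # reduced SVD of X
    U, s, V = U[:, :r], s[:r], Vh[:r, :].conj().T      # truncate to rank r
    Atilde = U.conj().T @ Y @ V / s                    # A~ = U* Y V Sigma^{-1}
    Lam, W = np.linalg.eig(Atilde)                     # eigendecomposition of A~
    Phi = (Y @ V / s) @ W                              # exact DMD modes of A
    return Phi, Lam

def dmd_reconstruct(Phi, Lam, x0, k):
    """Reconstruct snapshot k via x_k = Phi Lam^k b, with b = pinv(Phi) x0."""
    b = np.linalg.pinv(Phi) @ x0
    return Phi @ (Lam ** k * b)
```

For data generated by exactly linear dynamics with r equal to the rank of X, the returned pairs are eigenpairs of the underlying operator and the reconstruction matches the snapshots to machine precision.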

1.2. Matrix Similarity

Here, we will briefly describe an important matrix technique called similarity transformation, which we will use in the next section.
Definition 1. 
Let A and B be n × n matrices. If there exists a non-singular n × n matrix P such that
A = P^{-1} B P,
then we say that A and B are similar to each other.
We will state some well-known properties of similar matrices; see [36].
Lemma 1. 
If A and B are similar, then they have the same rank.
Lemma 2. 
If A and B are similar, then they have the same eigenvalues.
It is easy to show that if A and B are similar and x is an eigenvector of B, then P 1 x is an eigenvector of A = P 1 B P .
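Lemma 2 and the eigenvector relation above can be verified numerically; the following is a small illustrative check with randomly generated matrices (all names are made up for the example).

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))      # generically non-singular
A = np.linalg.inv(P) @ B @ P         # A = P^{-1} B P, similar to B

lam_B, W = np.linalg.eig(B)
lam_A = np.linalg.eigvals(A)
# Lemma 2: similar matrices share the same eigenvalues (up to ordering)
assert np.allclose(np.sort_complex(lam_A), np.sort_complex(lam_B), atol=1e-6)

# if x is an eigenvector of B, then P^{-1} x is an eigenvector of A
x = W[:, 0]
y = np.linalg.solve(P, x)            # y = P^{-1} x
assert np.allclose(A @ y, lam_B[0] * y)
```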

2. New DMD Algorithms

In this section, we introduce two new alternatives to the standard DMD algorithm.

2.1. An Alternative of Exact DMD Algorithm

The DMD algorithms presented in the previous section exploit the low-dimensional structure in the data to construct a low-rank approximation of the operator A that best approximates the dynamics of the data set.
We suggest that the modal structures can be extracted from the matrix
\hat{A} = \Sigma_r^{-1} U_r^{*} Y V_r,  (10)
rather than the matrix \tilde{A} defined by (6). The two matrices \tilde{A} and \hat{A} are similar, with transformation matrix \Sigma_r:
\hat{A} = \Sigma_r^{-1} \tilde{A} \Sigma_r,  (11)
and therefore they have the same eigenvalues. As a result, the eigenvectors of \tilde{A} can be expressed in terms of the eigenvectors of \hat{A}.
Let
\hat{A} \hat{W} = \hat{W} \Lambda  (12)
be an eigendecomposition of the matrix \hat{A}. Then, using relations (7), (11) and (12), we easily obtain the expression
\tilde{A} \Sigma_r \hat{W} = \Sigma_r \hat{W} \Lambda,
which yields the formula
\Phi = U_r \Sigma_r \hat{W}  (13)
for the DMD modes. Expression (13) corresponds to the projected DMD modes defined by (7). To be thorough, we will prove that the matrix
\hat{\Phi} = Y V_r \hat{W}  (14)
corresponds to the exact DMD modes \Phi = Y V \Sigma^{-1} W defined at Step 5 of Algorithm 1; see [37].
Theorem 1. 
Let (\lambda, w), with \lambda \neq 0, be an eigenpair of \hat{A} defined by (10). Then the corresponding eigenpair of A is (\lambda, \varphi), where
\varphi = Y V_r w.
Proof. 
By using the reduced SVD X = U_r \Sigma_r V_r^{*} and the pseudoinverse of X,
X^{\dagger} = V_r \Sigma_r^{-1} U_r^{*},
we obtain the expression
A = Y X^{\dagger} = Y V_r \Sigma_r^{-1} U_r^{*}.
Let us now express A\varphi:
A \varphi = Y V_r \Sigma_r^{-1} U_r^{*} Y V_r w,
which implies, by using (10),
A \varphi = Y V_r \hat{A} w = \lambda\, Y V_r w = \lambda \varphi.
In addition, \varphi \neq 0: if Y V_r w = 0, then \Sigma_r^{-1} U_r^{*} Y V_r w = \hat{A} w = 0, which would imply \lambda = 0, a contradiction. Hence, \varphi is an eigenvector of A with eigenvalue \lambda. The proof is completed.    □
Now, we are ready to formulate an alternative to the exact DMD method described above (Algorithm 1); see Algorithm 2 below.
According to Theorem 1, the modes in (14) generated by Algorithm 2 are eigenvectors of the matrix A. Although the matrices \hat{A} and \tilde{A} are computationally comparable, since they consist of the same factors in permuted order, the calculation of the DMD modes \hat{\Phi} in Algorithm 2 is more economical than the calculation of the modes \Phi in Algorithm 1.
Algorithm 2 Alternative exact DMD
Input: Data matrices X and Y, and rank r.
Output: DMD modes Φ ^ and eigenvalues Λ
1: Procedure DMD(X, Y, r)
2:    [U, \Sigma, V] = SVD(X, r)        (reduced rank-r SVD of X)
3:    \hat{A} = \Sigma^{-1} U^{*} Y V        (low-rank approximation of A)
4:    [\hat{W}, \Lambda] = EIG(\hat{A})        (eigendecomposition of \hat{A})
5:    \hat{\Phi} = Y V \hat{W}        (DMD modes of A)
6: End Procedure
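Algorithm 2 can be sketched in NumPy as follows; this is an illustrative implementation under assumed names, with the left multiplication by \Sigma^{-1} realized as a row-wise division.

```python
import numpy as np

def alternative_dmd(X, Y, r):
    """Alternative exact DMD (Algorithm 2): modes via A^ = Sigma^{-1} U* Y V."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r, :].conj().T
    Ahat = (U.conj().T @ Y @ V) / s[:, None]   # row scaling = left-multiply by Sigma^{-1}
    Lam, What = np.linalg.eig(Ahat)            # same eigenvalues as A~ (similarity)
    Phi_hat = Y @ V @ What                     # modes Phi^ = Y V W^: one product fewer
    return Phi_hat, Lam
```

By Theorem 1, for linear data of full effective rank, each returned column of `Phi_hat` with nonzero eigenvalue is an eigenvector of the underlying operator.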

2.2. A New DMD Algorithm for Full Rank Dataset

We will assume in this section that X \in \mathbb{R}^{n \times m} is a full-rank matrix, i.e., r = \mathrm{rank}(X) = m, where n > m. Our goal is to obtain a more efficient algorithm for calculating DMD modes and eigenvalues in this particular case.
We suggest that the modal structures can be extracted from the matrix
\bar{A} = V_r \hat{A} V_r^{*},  (15)
where V_r is the unitary matrix from the SVD X = U_r \Sigma_r V_r^{*}. The matrices \hat{A} and \bar{A} are obviously similar. From (10) and (15), we obtain the expression
\bar{A} = V_r \Sigma_r^{-1} U_r^{*} Y,  (16)
which yields
\bar{A} = X^{\dagger} Y,  (17)
where X^{\dagger} is the Moore–Penrose pseudoinverse of X. Denote the eigendecomposition of \bar{A} by
\bar{A} \bar{W} = \bar{W} \Lambda,  (18)
where the columns of \bar{W} are eigenvectors and \Lambda is a diagonal matrix containing the corresponding eigenvalues. From the definition of \hat{A} and relations (15) and (18), we deduce
A U_r \Sigma_r V_r^{*} \bar{W} = U_r \Sigma_r V_r^{*} \bar{W} \Lambda,  (19)
or, equivalently,
A (X \bar{W}) = (X \bar{W}) \Lambda.  (20)
Thus, we have shown that
\Phi = X \bar{W}  (21)
is the matrix of DMD modes. These are the projected DMD modes (see Theorem 3 below). We express the exact DMD modes in the next theorem.
Theorem 2. 
Let (\lambda, \bar{w}), with \lambda \neq 0, be an eigenpair of \bar{A} defined by (17). Then the corresponding eigenpair of A is (\lambda, \bar{\varphi}), where
\bar{\varphi} = Y \bar{w}.  (22)
Proof. 
Let us express A\bar{\varphi} by using (4):
A \bar{\varphi} = Y V_r \Sigma_r^{-1} U_r^{*} Y \bar{w}.
From the last relation and (16), we get
A \bar{\varphi} = Y \bar{A} \bar{w} = \lambda\, Y \bar{w} = \lambda \bar{\varphi}.
Furthermore, \bar{\varphi} \neq 0: if Y \bar{w} = 0, then V_r \Sigma_r^{-1} U_r^{*} Y \bar{w} = \bar{A} \bar{w} = 0, implying \lambda = 0, a contradiction. Hence, \bar{\varphi} is an eigenvector of A with eigenvalue \lambda.    □
Next, we summarize the above results in the form of an algorithm.
We intentionally omitted Step 1 of Algorithm 1 (or Algorithm 2) in Algorithm 3, because the pseudoinverse of the full-rank matrix X can be calculated not only by SVD but also by the formula
X^{\dagger} = (X^{*} X)^{-1} X^{*}.
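For a tall matrix of full column rank, this normal-equations formula agrees with the SVD-based pseudoinverse; a small illustrative NumPy check with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 3))              # tall, generically full column rank
Xdag = np.linalg.inv(X.T @ X) @ X.T          # X^+ = (X* X)^{-1} X*
assert np.allclose(Xdag, np.linalg.pinv(X))  # matches the SVD-based pseudoinverse
assert np.allclose(Xdag @ X, np.eye(3))      # X^+ is a left inverse of X
```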
Among the described algorithms, Algorithm 3 has the greatest computational advantage in the case of full-rank data. We will now prove that the modes in expression (21) are projected DMD modes.
Algorithm 3 DMD Algorithm for full rank dataset
Input: Data matrices X and Y.
Output: DMD modes Φ ¯ and eigenvalues Λ
1: Procedure DMD(X, Y)
2:    \bar{A} = X^{\dagger} Y        (low-rank approximation of A)
3:    [\bar{W}, \Lambda] = EIG(\bar{A})        (eigendecomposition of \bar{A})
4:    \bar{\Phi} = Y \bar{W}        (DMD modes of A)
5: End Procedure
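A NumPy sketch of Algorithm 3 (illustrative, with assumed names) is particularly short, since no SVD step is needed:

```python
import numpy as np

def full_rank_dmd(X, Y):
    """DMD for a full-column-rank X (Algorithm 3)."""
    Abar = np.linalg.pinv(X) @ Y        # A_bar = X^+ Y, an m x m matrix
    Lam, Wbar = np.linalg.eig(Abar)     # eigendecomposition of A_bar
    Phi_bar = Y @ Wbar                  # exact DMD modes (Theorem 2)
    return Phi_bar, Lam
```

The columns of `Phi_bar` with nonzero eigenvalues satisfy the eigenrelation for the DMD operator A = Y X^{\dagger}, which is what Theorem 2 asserts.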
Theorem 3. 
Let (\lambda, \bar{w}), with \lambda \neq 0, be an eigenpair of \bar{A} defined by (15), and let P_X denote the orthogonal projection matrix onto the column space of X. Then the vector
\varphi = X \bar{w}  (23)
is an eigenvector of P_X A with eigenvalue \lambda. Furthermore, if \bar{\varphi} = Y \bar{w} is given by (22), then P_X \bar{\varphi} = \lambda \varphi.
Proof. 
From the reduced SVD X = U_r \Sigma_r V_r^{*}, we obtain the orthogonal projection onto the column space of X as P_X = X X^{\dagger} = U_r U_r^{*}. From (17) and the relation Y = A X, we get
X^{\dagger} A X = \bar{A},
which implies
P_X A \varphi = X X^{\dagger} A X \bar{w} = X \bar{A} \bar{w} = \lambda\, X \bar{w} = \lambda \varphi.
According to the previous expression, \varphi is an eigenvector of P_X A with eigenvalue \lambda. Let us now express
P_X \bar{\varphi} = X X^{\dagger} Y \bar{w} = U_r \Sigma_r V_r^{*} (V_r \Sigma_r^{-1} U_r^{*} Y) \bar{w} = X \bar{A} \bar{w} = \lambda\, X \bar{w} = \lambda \varphi,
which proves the statement of the theorem. □

2.3. In Terms of Companion Matrix

Let us consider the case where the last snapshot x_m in the data set (1) lies in the column space of X, i.e., x_m is a linear combination of x_0, \ldots, x_{m-1}. Therefore,
\mathrm{Im}(Y) \subseteq \mathrm{Im}(X).
In this case, the matrix \bar{A} defined by (16) is of Frobenius companion type and relates the data sets exactly, Y = X \bar{A}, even if the data are generated by nonlinear dynamics. Moreover, in this case, the projected DMD modes (23) and the exact DMD modes (22) are identical.
Theorem 4. 
If the columns of Y are spanned by those of X, then the DMD modes (23) are eigenvectors of the operator A defined by (5).
Proof. 
From the statement of the theorem, it follows that
P_X Y = Y,
where P_X = X X^{\dagger} = U_r U_r^{*} is the orthogonal projection onto the image of X. From the previous relation and the reduced SVD of X, we obtain
P_X A = P_X Y V_r \Sigma_r^{-1} U_r^{*} = Y V_r \Sigma_r^{-1} U_r^{*} = A.
Finally, we can show that \varphi = X \bar{w} defined by (23) is an eigenvector of A. The following relations hold:
A \varphi = A X \bar{w} = P_X A X \bar{w} = X X^{\dagger} A X \bar{w} = X \bar{A} \bar{w} = \lambda\, X \bar{w} = \lambda \varphi,
which proves the theorem. □
In this case, a reconstruction of the data matrix Y using the relation Y = X \bar{A} yields
Y = X \bar{W} \Lambda \bar{W}^{-1},  (24)
where \bar{A} = \bar{W} \Lambda \bar{W}^{-1} is the eigendecomposition of \bar{A} defined by (18). Using the fact that \bar{A} is a Frobenius companion matrix, together with notation (21), we obtain
Y = \Phi \Lambda V(\lambda),  (25)
where V(\lambda) is a Vandermonde matrix, i.e.,
Y = \begin{bmatrix} | & | & \\ \phi_1 & \phi_2 & \cdots \\ | & | & \end{bmatrix} \begin{bmatrix} \lambda_1 & & \\ & \lambda_2 & \\ & & \ddots \end{bmatrix} \begin{bmatrix} 1 & \lambda_1 & \cdots & \lambda_1^{m-2} \\ 1 & \lambda_2 & \cdots & \lambda_2^{m-2} \\ & & \ddots & \end{bmatrix},
where \phi_i and \lambda_i are the DMD modes and eigenvalues, respectively. In this formulation, each mode \phi_i is scaled by its associated \lambda_i. The Vandermonde matrix captures the iterative exponentiation of the DMD eigenvalues. The representations (24) and (25) give a factorization of the data into spatial modes, amplitudes, and temporal dynamics. Moreover, the amplitudes in this case coincide with the DMD eigenvalues and do not depend on the initial condition.
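The companion-matrix case can be checked numerically: if the last snapshot is a linear combination of the previous ones, Algorithm 3 recovers the companion matrix exactly and the data factor into modes, eigenvalues, and temporal dynamics. The sketch below constructs such a data set synthetically (all names and the random data are made up for the example).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 7
c = rng.standard_normal(m)            # coefficients of x_m in the previous snapshots
C = np.zeros((m, m))                  # Frobenius companion matrix
C[1:, :-1] = np.eye(m - 1)            # subdiagonal shifts the snapshots forward
C[:, -1] = c                          # last column holds the coefficients
X = rng.standard_normal((n, m))       # generic full-column-rank data matrix
Y = X @ C                             # exact linear relation Y = X A_bar

Abar = np.linalg.pinv(X) @ Y          # Algorithm 3 recovers the companion matrix
assert np.allclose(Abar, C)

Lam, W = np.linalg.eig(Abar)
Phi = X @ W                           # projected DMD modes
# data factorization: spatial modes x eigenvalues x temporal dynamics
assert np.allclose(Phi @ np.diag(Lam) @ np.linalg.inv(W), Y)
```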

2.4. Computational Cost and Memory Requirement

Table 1 gives a brief summary of the main matrices in the three algorithms considered. The representations of the corresponding reduced order approximations of the Koopman operator are shown, as well as the formulas for calculating the DMD modes in three cases.
Although the structures of the three low-rank approximation matrices \tilde{A}, \hat{A} and \bar{A} are similar, the mode formulas \hat{\Phi} and \bar{\Phi} have a simpler form than \Phi. In Algorithm 2, three matrices need to be stored and two matrix multiplications performed, while in Algorithm 3 it is necessary to store only two matrices and to perform one matrix multiplication.
Since the reduced matrix \tilde{A} in Algorithm 1 has the same size as the corresponding matrices \hat{A} and \bar{A} in the alternative algorithms, all three require the same resources to compute their spectral decompositions. To estimate the computational cost of the three algorithms, we ignore the comparable computations and focus on the differing ones. While in Algorithms 1 and 2 the calculation of the reduced matrices \tilde{A} and \hat{A} involves an SVD of X and a matrix multiplication, in Algorithm 3 the matrix \bar{A} is calculated via the pseudoinverse of X. The DMD modes for the three algorithms are calculated by the corresponding matrix multiplications, as indicated in Table 1. The computational costs are shown in Table 2; see Golub and Van Loan [38].
From the memory point of view, the matrices that require the same amount of storage in all three algorithms are the data matrix Y, the reduced matrix (\tilde{A}, \hat{A} or \bar{A}), and the eigenvector matrix (W, \hat{W} or \bar{W}). The number of floating-point numbers to be stored for the reduced-order matrix and the eigenvector matrix is r^2 in all three algorithms. The difference in required memory is determined by the matrices needed to calculate the DMD modes. The number of floating-point numbers that must be stored for the DMD calculations is shown in Table 3.

3. Numerical Illustrative Examples

In this section, we compare the results obtained by the standard DMD algorithm and the new algorithms (Algorithms 2 and 3) introduced in Section 2. All considered examples are well known in the literature. All numerical experiments and simulations were performed on Windows 7 with MATLAB R2013a, on an Acer Aspire 571G laptop with an Intel(R) Core(TM) i3-2328M CPU @ 2.2 GHz and 4 GB RAM.
We should note that all three algorithms considered in the present work require some of the most computationally expensive functions, svd and eig, for calculating the SVD and the spectral decomposition of matrices, respectively.

3.1. Example 1: Spatiotemporal Dynamics of Two Signals

We consider an illustrative example of two mixed spatiotemporal signals,
f_1(x, t) = \operatorname{sech}(x + 6)\, e^{i 3.8 t} \quad \text{and} \quad f_2(x, t) = 2 \operatorname{sech}(x) \tanh(x)\, e^{i 2.2 t},
and the mixed signal
X(t) = f_1(x, t) + f_2(x, t).
The two signals f 1 , f 2 and mixed signal X are illustrated in Figure 1a–c.
Figure 1d depicts the singular values of data matrix X, indicating that the data can be adequately represented by the rank r = 2 approximation.
We perform a rank-2 DMD reconstruction of data by using standard DMD (Algorithm 1) and Alternative DMD (Algorithm 2). These reconstructions are shown in Figure 2a,b.
The two reconstructions are nearly exact, with the DMD modes and eigenvalues matching those of the underlying signals f_1 and f_2. Both algorithms reproduce the same continuous-time DMD eigenvalues, \omega_1 = 2.2i and \omega_2 = 3.8i, whose imaginary parts correspond to the frequencies of oscillation.
The panels of Figure 3 compare the first two DMD modes, plotting the true modes alongside the modes extracted by the standard DMD (Algorithm 1) and the alternative DMD (Algorithm 2). The modes produced by the two algorithms agree to nearly machine precision.
Table 4 compares the execution time results of simulations using Algorithms 1 and 2.
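This example is easy to reproduce; the sketch below uses assumed space and time grids (the paper's exact discretization is not shown here) and applies the reduced operator of Algorithm 2 to recover the two oscillation frequencies.

```python
import numpy as np

xi = np.linspace(-10, 10, 400)               # assumed spatial grid
t = np.linspace(0, 4 * np.pi, 200)           # assumed time grid
dt = t[1] - t[0]
Xg, Tg = np.meshgrid(xi, t, indexing="ij")

f1 = 1 / np.cosh(Xg + 6) * np.exp(3.8j * Tg)             # sech(x+6) e^{i 3.8 t}
f2 = 2 / np.cosh(Xg) * np.tanh(Xg) * np.exp(2.2j * Tg)   # 2 sech(x) tanh(x) e^{i 2.2 t}
Z = f1 + f2                                  # mixed signal, n x (m+1)

X, Y = Z[:, :-1], Z[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                        # the data are exactly rank 2
U, s, V = U[:, :r], s[:r], Vh[:r, :].conj().T
Ahat = (U.conj().T @ Y @ V) / s[:, None]     # reduced operator of Algorithm 2
Lam = np.linalg.eigvals(Ahat)
omega = np.log(Lam) / dt                     # continuous-time eigenvalues
# the imaginary parts recover the frequencies 2.2 and 3.8 (up to ordering)
```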

3.2. Example 2: Re = 100 Flow around a Cylinder Wake

We consider a time series of fluid vorticity fields for the wake behind a circular cylinder at Reynolds number Re = 100. The Reynolds number is defined as Re = D U / \nu, where D is the cylinder diameter, U is the free-stream velocity, and \nu is the kinematic viscosity of the fluid. It quantifies the ratio of inertial to viscous forces.
This example is taken from [17]; see also [39]. We use the same data set, which is publicly available at www.siam.org/books/ot149/flowdata. The collected data consist of m = 150 snapshots sampled at regular intervals in time, spanning five periods of vortex shedding. An example of a vorticity field is shown in Figure 4.
We applied Algorithms 1 and 2 to obtain the DMD decomposition and reconstruction of the data. The two algorithms reproduce the same DMD eigenvalues and modes.
The quality of the low-rank approximations is measured by the relative error
e_{DMD} = \frac{\| x - \hat{x} \|_2}{\| x \|_2},  (27)
where \hat{x} is the DMD reconstruction of the data using expression (9). Both the standard DMD and the alternative DMD reconstructions have the same error; see Table 5.
See Figure 5 for DMD eigenvalues and singular values of the data matrix X.
Figure 6 shows the first six DMD modes computed by Algorithms 1 and 2, respectively. The only difference is in the visualization: contour lines are added to the DMD modes obtained by Algorithm 2 (otherwise, the pictures are identical).
It can be seen from Figure 5 and Figure 6 that the two algorithms produce the same DMD eigenvalues and DMD modes.
Table 6 shows the execution time of this task by Algorithms 1 and 2.

3.3. Example 3: DMD with Different Koopman Observables

We consider the nonlinear Schrödinger (NLS) equation
i q_t + \frac{1}{2} q_{\xi\xi} + |q|^2 q = 0,  (28)
where q(\xi, t) is a function of space and time. This equation can be rewritten in the equivalent form
q_t = \frac{i}{2} q_{\xi\xi} + i |q|^2 q.  (29)
Fourier transforming in \xi gives the differential equation in the Fourier-domain variable \hat{q}:
\hat{q}_t = -\frac{i k^2}{2} \hat{q} + i \widehat{|q|^2 q}.  (30)
By discretizing in the spatial variable, we can generate a numerical approximation to the solution of (28); see [17].
The following parameters were used in the simulation: a total of 21 slices of time data over the interval t \in [0, 2\pi); the state variables are an n-dimensional spatial discretization of q(\xi, t), with n = 400. The resulting snapshots q(\xi, t_k) are the columns of the generated data matrix. We analyze the two-soliton solution with initial condition q(\xi, 0) = 2 \operatorname{sech}(\xi). The result of this simulation is shown in Figure 7a.
We performed a low-rank DMD approximation (r = 10) with the standard DMD method, as shown in Figure 7b. In this case, by standard DMD approximation it is meant that the state vectors x coincide with the Koopman observables
g_{DMD}(x) = x = q(\xi, t).
The obtained approximation is not satisfactory.
To improve the approximation, we can use another Koopman observable,
g_1(x) = \begin{bmatrix} x \\ |x|^2 x \end{bmatrix},
which is based on the NLS nonlinearity; see also [17].
In this case, we define new input data matrices corresponding to X and Y defined by (3):
X_1 = \begin{bmatrix} X \\ |X|^2 X \end{bmatrix} \quad \text{and} \quad Y_1 = \begin{bmatrix} Y \\ |Y|^2 Y \end{bmatrix},  (31)
respectively. The DMD approach is then applied in the usual way, with the matrices X_1 and Y_1 in place of X and Y. The new approximation gives a superior reconstruction, as is evident from Figure 8. We performed DMD reconstructions using both Algorithms 1 and 2.
It can be seen from Figure 9b that both algorithms reproduce the same DMD eigenvalues. To measure the quality of approximations, the relative error formula defined by (27) is used. Both reconstructions, by standard DMD (Algorithm 1) and alternative DMD (Algorithm 2), have the same error curve; see Figure 9a.
Algorithms 1 and 2 are compared in terms of execution times; the results are included in Table 7.

3.4. Example 4: Standing Wave

It is known that the standard DMD algorithm is not able to represent a standing wave in the data [16]. For example, if measurements of a single sine wave are collected, DMD fails to capture periodic oscillations in the data.
In this case, the data matrix X contains a single row,
X = [x_1 \; x_2 \; \cdots \; x_m],  (32)
where each x_i is a scalar, and DMD fails to reconstruct the data. There is a simple solution to this rank-deficiency problem, which involves stacking multiple time-shifted copies of the data into augmented data matrices:
X_{aug} = \begin{bmatrix} x_1 & x_2 & \cdots & x_{m-s} \\ x_2 & x_3 & \cdots & x_{m-s+1} \\ \vdots & \vdots & & \vdots \\ x_s & x_{s+1} & \cdots & x_{m-1} \end{bmatrix}, \quad Y_{aug} = \begin{bmatrix} x_2 & x_3 & \cdots & x_{m-s+1} \\ x_3 & x_4 & \cdots & x_{m-s+2} \\ \vdots & \vdots & & \vdots \\ x_{s+1} & x_{s+2} & \cdots & x_m \end{bmatrix}.  (33)
Thus, using delay coordinates increases the rank of the data matrix X_{aug}. In fact, we can increase the number s of delay coordinates until the data matrix numerically reaches full rank. Then, we perform the DMD technique on the augmented matrices X_{aug} and Y_{aug}.
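The delay-coordinate rescue is easy to demonstrate on a pure sine wave; the sketch below (illustrative, with assumed grid and s = 2 delay coordinates) shows that DMD on the augmented matrices recovers the oscillation frequency that a scalar operator cannot represent.

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 201)
dt = t[1] - t[0]
x = np.sin(2 * t)                          # scalar standing-wave measurements

# without augmentation the DMD operator is a single scalar: no oscillation.
# stack s = 2 time-shifted copies into augmented (Hankel-type) matrices
m, s = len(x), 2
Xaug = np.vstack([x[0:m - s], x[1:m - s + 1]])
Yaug = np.vstack([x[1:m - s + 1], x[2:m - s + 2]])

A = Yaug @ np.linalg.pinv(Xaug)            # 2 x 2 DMD operator on delay coordinates
lam = np.linalg.eigvals(A)
omega = np.log(lam) / dt                   # continuous-time eigenvalues
# the imaginary parts recover the frequency +/- 2 of sin(2t)
```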
We can demonstrate the rank-deficiency issue with an example from finance, by considering the price evolution of a single commodity. This problem is quite similar to the standing-wave problem. Let us consider the price evolution of Brent Crude Oil for the period 1 February 2022–28 February 2022, containing 20 trading days; see Figure 10.
The data matrix X is a single row as in (32), containing m = 20 elements, where each x_i is the closing price on the corresponding day. We construct the augmented matrices X_{aug} and Y_{aug} as in (33). In this case, we can choose s \in [10, 18], which ensures that the matrices X_{aug} and Y_{aug} have at least as many rows as columns. For each s in this interval, we obtain a full-rank matrix X_{aug}.
Therefore, in this case, we can use the alternative DMD algorithm for full-rank data matrices, Algorithm 3. We applied Algorithms 1 and 3 to the augmented data matrices X_{aug} and Y_{aug} for each s \in [10, 18]. The results show that the best approximation of the measured data is obtained at the highest rank of X_{aug}, r = 10, with s = 10. In each case, the two algorithms reproduce the same approximation. Figure 10 shows the two approximations for rank(X_{aug}) = 10, where it can be seen that both algorithms approximate the actual price very closely.
Execution times for Algorithms 1 and 3 are computed with the dataset of this example. Table 8 presents a comparison between the two algorithms.

4. Conclusions

The purpose of this study was to introduce two new algorithms for computing approximate DMD modes and eigenvalues. We proved that each pair (\hat{\varphi}, \lambda) and (\bar{\varphi}, \lambda) generated by Algorithms 2 and 3, respectively, is an eigenvector–eigenvalue pair of the Koopman operator A (Theorems 1 and 2). The matrices of DMD modes \hat{\Phi} and \bar{\Phi} from Algorithms 2 and 3 have a simpler form than the DMD mode matrix \Phi from Algorithm 1; they need less memory and require fewer matrix multiplications.
We demonstrated the performance of the presented algorithms with numerical examples from different fields of application. From the obtained results, we conclude that the introduced approaches give results identical to those of the exact DMD method. A comparison of simulation times shows that the new algorithms attain better efficiency. The presented results show that the introduced algorithms are alternatives to the standard DMD algorithm and can be used in various fields of application.
This study motivates several further investigations. Future work on the proposed algorithms will consist of their application to a wider class of dynamical systems, particularly those dealing with full-rank data. Applications to other known methods that use approximate linear dynamics, such as embedding with Kalman filters, will be sought. It may be possible to develop alternatives to some known variants of the DMD method, such as DMD with control and higher-order DMD.
An interesting direction for future work is the optimization of the introduced algorithms in relation to the required computing resources. One line of work is to implement these algorithms using parallel computing.

Funding

This paper was written with the financial support of Shumen University under Grant RD-08-144/01.03.2022.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Schmid, P.J.; Sesterhenn, J. Dynamic mode decomposition of numerical and experimental data. In Proceedings of the 61st Annual Meeting of the APS Division of Fluid Dynamics, San Antonio, TX, USA, 23–25 November 2008; American Physical Society: Washington, DC, USA, 2008. [Google Scholar]
  2. Rowley, C.W.; Mezić, I.; Bagheri, S.; Schlatter, P.; Henningson, D.S. Spectral analysis of nonlinear flows. J. Fluid Mech. 2009, 641, 115–127. [Google Scholar] [CrossRef] [Green Version]
  3. Mezić, I. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dyn. 2005, 41, 309–325. [Google Scholar] [CrossRef]
  4. Chen, K.K.; Tu, J.H.; Rowley, C.W. Variants of dynamic mode decomposition: Boundary condition, Koopman, and Fourier analyses. J. Nonlinear Sci. 2012, 22, 887–915. [Google Scholar] [CrossRef]
  5. Grosek, J.; Nathan Kutz, J. Dynamic Mode Decomposition for Real-Time Background/Foreground Separation in Video. arXiv 2014, arXiv:1404.7592. [Google Scholar]
6. Proctor, J.L.; Eckhoff, P.A. Discovering dynamic patterns from infectious disease data using dynamic mode decomposition. Int. Health 2015, 7, 139–145.
7. Brunton, B.W.; Johnson, L.A.; Ojemann, J.G.; Kutz, J.N. Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. J. Neurosci. Methods 2016, 258, 1–15.
8. Mann, J.; Kutz, J.N. Dynamic mode decomposition for financial trading strategies. Quant. Financ. 2016, 16, 1643–1655.
9. Cui, L.X.; Long, W. Trading Strategy Based on Dynamic Mode Decomposition: Tested in Chinese Stock Market. Phys. A Stat. Mech. Its Appl. 2016, 461, 498–508.
10. Kuttichira, D.P.; Gopalakrishnan, E.A.; Menon, V.K.; Soman, K.P. Stock price prediction using dynamic mode decomposition. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 13–16 September 2017; pp. 55–60.
11. Berger, E.; Sastuba, M.; Vogt, D.; Jung, B.; Ben Amor, H. Estimation of perturbations in robotic behavior using dynamic mode decomposition. Adv. Robot. 2015, 29, 331–343.
12. Schmid, P.J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 2010, 656, 5–28.
13. Seena, A.; Sung, H.J. Dynamic mode decomposition of turbulent cavity flows for self-sustained oscillations. Int. J. Heat Fluid Flow 2011, 32, 1098–1110.
14. Schmid, P.J. Application of the dynamic mode decomposition to experimental data. Exp. Fluids 2011, 50, 1123–1130.
15. Mezić, I. Analysis of fluid flows via spectral properties of the Koopman operator. Annu. Rev. Fluid Mech. 2013, 45, 357–378.
16. Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Kutz, J.N. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn. 2014, 1, 391–421.
17. Kutz, J.N.; Brunton, S.L.; Brunton, B.W.; Proctor, J.L. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2016; pp. 1–234. ISBN 978-1-611-97449-2.
18. Bai, Z.; Kaiser, E.; Proctor, J.L.; Kutz, J.N.; Brunton, S.L. Dynamic Mode Decomposition for Compressive System Identification. AIAA J. 2020, 58, 561–574.
19. Le Clainche, S.; Vega, J.M.; Soria, J. Higher order dynamic mode decomposition of noisy experimental data: The flow structure of a zero-net-mass-flux jet. Exp. Therm. Fluid Sci. 2017, 88, 336–353.
20. Anantharamu, S.; Mahesh, K. A parallel and streaming dynamic mode decomposition algorithm with finite precision error analysis for large data. J. Comput. Phys. 2019, 380, 355–377.
21. Sayadi, T.; Schmid, P.J. Parallel data-driven decomposition algorithm for large-scale datasets: With application to transitional boundary layers. Theor. Comput. Fluid Dyn. 2016, 30, 415–428.
22. Maryada, K.R.; Norris, S.E. Reduced-communication parallel dynamic mode decomposition. J. Comput. Sci. 2022, 61, 101599.
23. Li, B.; Garicano-Mena, J.; Valero, E. A dynamic mode decomposition technique for the analysis of non-uniformly sampled flow data. J. Comput. Phys. 2022, 468, 111495.
24. Smith, E.; Variansyah, I.; McClarren, R. Variable Dynamic Mode Decomposition for Estimating Time Eigenvalues in Nuclear Systems. arXiv 2022, arXiv:2208.10942.
25. Jovanović, M.R.; Schmid, P.J.; Nichols, J.W. Sparsity-promoting dynamic mode decomposition. Phys. Fluids 2014, 26, 024103.
26. Guéniat, F.; Mathelin, L.; Pastur, L.R. A dynamic mode decomposition approach for large and arbitrarily sampled systems. Phys. Fluids 2015, 27, 025113.
27. Cassamo, N.; van Wingerden, J.W. On the Potential of Reduced Order Models for Wind Farm Control: A Koopman Dynamic Mode Decomposition Approach. Energies 2020, 13, 6513.
28. Ngo, T.T.; Nguyen, V.; Pham, X.Q.; Hossain, M.A.; Huh, E.N. Motion Saliency Detection for Surveillance Systems Using Streaming Dynamic Mode Decomposition. Symmetry 2020, 12, 1397.
29. Babalola, O.P.; Balyan, V. WiFi Fingerprinting Indoor Localization Based on Dynamic Mode Decomposition Feature Selection with Hidden Markov Model. Sensors 2021, 21, 6778.
30. Lopez-Martin, M.; Sanchez-Esguevillas, A.; Hernandez-Callejo, L.; Arribas, J.I.; Carro, B. Novel Data-Driven Models Applied to Short-Term Electric Load Forecasting. Appl. Sci. 2021, 11, 5708.
31. Surasinghe, S.; Bollt, E.M. Randomized Projection Learning Method for Dynamic Mode Decomposition. Mathematics 2021, 9, 2803.
32. Li, C.Y.; Chen, Z.; Tse, T.K.; Weerasuriya, A.U.; Zhang, X.; Fu, Y.; Lin, X. A parametric and feasibility study for data sampling of the dynamic mode decomposition: Range, resolution, and universal convergence states. Nonlinear Dyn. 2022, 107, 3683–3707.
33. Mezic, I. On Numerical Approximations of the Koopman Operator. Mathematics 2022, 10, 1180.
34. Trefethen, L.N.; Bau, D. Numerical Linear Algebra; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1997.
35. Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 1996.
36. Lancaster, P.; Tismenetsky, M. The Theory of Matrices; Academic Press: San Diego, CA, USA, 1985.
37. Nedzhibov, G. Dynamic Mode Decomposition: A new approach for computing the DMD modes and eigenvalues. Ann. Acad. Rom. Sci. Ser. Math. Appl. 2022, 14, 5–16.
38. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2012; Volume 3.
39. Bagheri, S. Koopman-mode decomposition of the cylinder wake. J. Fluid Mech. 2013, 726, 596–623.
Figure 1. Spatiotemporal dynamics of the two signals (a) f1(x, t) and (b) f2(x, t), and of the mixed signal (c) x = f1 + f2. Singular values of X are shown in (d).
Figure 2. Rank-2 reconstructions of the signal X by the standard DMD (a) and the alternative DMD (b).
Figure 3. First two DMD modes: the true modes, the modes extracted by the standard DMD, and the modes extracted by the alternative DMD.
Figure 4. Some vorticity field snapshots for the wake behind a cylinder at R e = 100 are shown in (ac).
Figure 5. Singular values of X (a) and DMD eigenvalues computed by Algorithms 1 and 2 (b).
Figure 6. The first six DMD modes computed by Algorithm 1 are shown in (af). Corresponding DMD modes computed by Algorithm 2 are in (gl).
Figure 7. Full simulation of the NLS Equation (7) in (a) and its reconstruction by the standard DMD algorithm in (b), where the observable is g_DMD(x) = x with x = q(ξ, t).
Figure 8. DMD reconstructions based on the new observable g1 defined by (31): by Algorithm 1 in (a) and by Algorithm 2 in (b).
Figure 9. Relative errors (a) and DMD eigenvalues (b).
Figure 10. Two approximations of the Brent Crude Oil price for the period 1 February 2022–28 February 2022 by the standard DMD and alternative DMD approaches.
Table 1. Reduced matrices and DMD modes.

| | Algorithm 1 (r ≤ m) | Algorithm 2 (r < m) | Algorithm 3 (r = m) |
|---|---|---|---|
| Reduced matrix | Ã = U_r^* Y V_r Σ_r^{-1} | Â = Σ_r^{-1} U_r^* Y V_r | Ā = X^† Y |
| DMD modes | Φ = Y V_r Σ_r^{-1} W | Φ̂ = Y V_r Ŵ | Φ̄ = Y W̄ |
Table 2. Computational costs.

| Cost of | Algorithm 1 | Algorithm 2 | Algorithm 3 |
|---|---|---|---|
| SVD of X | 6nm^2 + 20m^3 | 6nm^2 + 20m^3 | – |
| Reduced matrix | 2r^3 + 2nr^2 + r | 2r^3 + 2nr^2 + r | n^2 r + nr^2 |
| DMD modes Φ | 2r^3 + 2nr^2 + r | 2r^3 + (2n − 2)r^2 | nr^2 |
| Total cost | 6nm^2 + 20m^3 + 4r^3 + 4nr^2 | 6nm^2 + 20m^3 + 4r^3 + (4n − 2)r^2 | n^2 r + 2nr^2 |
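The total-cost expressions in Table 2 are easy to compare numerically for concrete problem sizes. A small helper (our own, for illustration only) evaluates the leading-order flop counts:

```python
def dmd_total_cost(n, m, r, algorithm):
    """Leading-order flop estimates from Table 2.

    n, m : dimensions of the snapshot matrices X and Y.
    r    : truncation rank; Algorithm 3 assumes r = m (no SVD is computed).
    """
    if algorithm == 1:
        return 6*n*m**2 + 20*m**3 + 4*r**3 + 4*n*r**2
    if algorithm == 2:
        return 6*n*m**2 + 20*m**3 + 4*r**3 + (4*n - 2)*r**2
    if algorithm == 3:
        return n**2 * r + 2*n*r**2
    raise ValueError("algorithm must be 1, 2 or 3")

# Algorithm 2 is cheaper than Algorithm 1 by exactly 2*r**2 flops at this
# order, and for moderately tall data the SVD-free Algorithm 3 wins outright.
n, m, r = 500, 100, 50
savings = dmd_total_cost(n, m, r, 1) - dmd_total_cost(n, m, r, 2)
```

For very tall matrices (n much larger than m) the n^2 r term eventually dominates Algorithm 3, so the SVD-based variants regain the advantage there.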
Table 3. Memory requirements for DMD mode matrices.

| Matrix | Algorithm 1 | Algorithm 2 | Algorithm 3 |
|---|---|---|---|
| Y | nm | nm | nm |
| V_r | rm | rm | – |
| Σ_r^{-1} | r | – | – |
| Total memory | (n + r)m + r | (n + r)m | nm |
Table 4. Execution time (in seconds) by Algorithms 1 and 2.

| Number of cycles (k) | Standard DMD (Algorithm 1) | Alternative DMD (Algorithm 2) |
|---|---|---|
| k = 1000 | 0.5407 | 0.4864 |
| k = 10,000 | 4.2264 | 4.2013 |
Table 5. Relative errors for DMD reconstructions by Algorithms 1 and 2.

| | Standard DMD (Algorithm 1) | Alternative DMD (Algorithm 2) |
|---|---|---|
| Relative error | e_st.DMD = 5.3685 × 10^{-4} | e_alt.DMD = 5.3685 × 10^{-4} |
Table 6. Execution time (in seconds) by Algorithms 1 and 2.

| Number of cycles (k) | Standard DMD (Algorithm 1) | Alternative DMD (Algorithm 2) |
|---|---|---|
| k = 100 | 15.6083 | 15.5615 |
| k = 1000 | 156.3974 | 154.8962 |
Table 7. Execution time (in seconds) by Algorithms 1 and 2.

| Number of cycles (k) | Standard DMD (Algorithm 1) | Alternative DMD (Algorithm 2) |
|---|---|---|
| k = 1000 | 0.5873 | 0.5295 |
| k = 10,000 | 5.5346 | 5.0998 |
Table 8. Execution time (in seconds) by Algorithms 1 and 3.

| Number of cycles (k) | Standard DMD (Algorithm 1) | Alternative DMD (Algorithm 3) |
|---|---|---|
| k = 1000 | 0.1933 | 0.1324 |
| k = 10,000 | 1.6319 | 0.9783 |