An Alternative Framework for Dynamic Mode Decomposition with Control

by
Gyurhan Nedzhibov
Faculty of Mathematics and Computer Science, Konstantin Preslavsky University of Shumen, 9700 Shumen, Bulgaria
AppliedMath 2025, 5(2), 60; https://doi.org/10.3390/appliedmath5020060
Submission received: 24 March 2025 / Revised: 13 May 2025 / Accepted: 14 May 2025 / Published: 23 May 2025

Abstract

Dynamic mode decomposition with control (DMDc) is a widely used technique for analyzing dynamic systems influenced by external control inputs. It is a recent development and an extension of dynamic mode decomposition (DMD) tailored for input–output systems. In this work, we investigate and analyze an alternative approach for computing DMDc. Compared to the traditional formulation, the proposed method restructures the computation by decoupling the influence of the state and control components, allowing for a more modular and interpretable implementation. The algorithm avoids compound operator approximations typical of standard approaches, which makes it potentially more efficient in real-time applications or systems with streaming data. The new scheme aims to improve computational efficiency while maintaining the reliability and accuracy of the decomposition. We provide a theoretical proof that the dynamic modes produced by the proposed method are exact eigenvectors of the corresponding Koopman operator. Compared to the standard DMDc approach, the new algorithm is shown to be more efficient, requiring fewer calculations and less memory. Numerical examples are presented to demonstrate the theoretical results and illustrate potential applications of the modified approach. The results highlight the promise of this alternative formulation for advancing data-driven modeling and control in various engineering and scientific domains.

1. Introduction

With advancements in technology, capabilities in data storage, transfer rates, and computational power are rapidly increasing, leading to the generation of massive datasets. This creates a growing need for new techniques for the real-time control of complex systems. The modeling and control of such systems are central to various applications in physical, biological, and engineering domains. These include epidemic control, internet traffic regulation, energy infrastructure optimization, and more.
The analysis and control of complex systems require the development of innovative quantitative methods and data-driven techniques. One such approach is dynamic mode decomposition (DMD) [1], which has become a prominent tool for identifying spatiotemporal coherent structures from high-dimensional data. DMD provides a numerical approximation to Koopman spectral analysis and can therefore be applied to nonlinear dynamical systems [2,3]. Over the past decade, the DMD method has gained popularity across a wide range of applications, including video processing [4], epidemiology [5], neuroscience [6], financial trading [7,8,9], robotics [10], cavity flows [11,12], and various jet flows [2,13]. For an overview of the DMD literature, we refer the reader to [14,15,16,17].
In recent years, a variety of advanced control strategies and mathematical models have been proposed to address complex systems in different fields. For instance, Liang et al. [18] explored decentralized control for networked systems under asymmetric information conditions, while Fahim et al. [19] developed a mathematical framework for liquidity risk contagion in banking systems using optimal control approaches. Shahgholian and Fathollahi [20] focused on improving load frequency control in multi-resource energy systems, particularly through superconducting magnetic energy storage, and Ndeke et al. [21] provided a detailed methodology for modeling voltage source converters in electrical circuits.
In practical settings, the modeling of high-dimensional, complex dynamical systems often requires accounting for external inputs or control actions. The standard DMD framework does not include control effects and may therefore fail to accurately predict system behavior under actuation. To address this, a modification known as dynamic mode decomposition with control (DMDc) was proposed in [22]. DMDc incorporates both state measurements and external inputs to capture the underlying system dynamics. In this work, we present an alternative formulation for implementing the DMDc method. Before introducing the new approach, we will briefly review the standard DMD and DMDc methodologies.

1.1. Dynamic Mode Decomposition (DMD Method)

For completeness, we begin with a brief overview of the standard DMD framework, following the formulation in [14]. Let us consider a sequential set of snapshot vectors:
$$D = \{x_0, \ldots, x_m\}, \tag{1}$$
where each snapshot $x_k \in \mathbb{R}^n$. These data vectors may originate from simulations, experiments, or real-world measurements collected at discrete times $t_k$, assumed to be uniformly spaced by a time interval $\Delta t$, over the interval $[t_0, t_m]$.
The DMD method begins by arranging the data into two large matrices:
$$X = [x_0, \ldots, x_{m-1}], \qquad Y = [x_1, \ldots, x_m]. \tag{2}$$
The central assumption of the method is that a linear operator $A$ exists such that
$$x_{k+1} = A x_k, \tag{3}$$
which, in matrix form, leads to
$$Y = A X. \tag{4}$$
Performing a spectral decomposition of $A$ yields the so-called dynamic mode decomposition of the dataset $D$. The eigenvectors and eigenvalues of $A$ are approximated and referred to as the DMD modes and DMD eigenvalues, respectively.
To compute the best-fit approximation of $A$, we apply the Moore–Penrose pseudoinverse $X^\dagger$ to both sides of Equation (4):
$$A \approx Y X^\dagger. \tag{5}$$
A practical approach for computing this approximation uses the singular value decomposition (SVD) of $X = U_X \Sigma_X V_X^*$:
$$A \approx Y V_X \Sigma_X^{-1} U_X^*, \tag{6}$$
where $U_X \in \mathbb{R}^{n \times n}$ and $V_X \in \mathbb{R}^{m \times m}$ are unitary matrices, and $\Sigma_X \in \mathbb{R}^{n \times m}$ is a diagonal matrix of singular values. The symbol $*$ denotes the complex conjugate transpose.
In high-dimensional settings where $n \gg m$, direct analysis of $A$ becomes computationally expensive. DMD addresses this by projecting $A$ onto a lower-dimensional subspace using a reduced SVD. This is motivated by the often low-dimensional nature of the essential system dynamics.
Let the reduced (or truncated) SVD of $X$ be as follows:
$$X \approx U_r \Sigma_r V_r^*, \tag{7}$$
where $r$ is the truncation rank, $U_r \in \mathbb{R}^{n \times r}$, $\Sigma_r \in \mathbb{R}^{r \times r}$, and $V_r \in \mathbb{R}^{m \times r}$. Substituting into Equation (5), we obtain a reduced-order representation of $A$. By taking the basis transformation $U_r^* X = \tilde{X}$, the reduced-order model can be derived:
$$U_r^* Y = U_r^* A X = (U_r^* Y V_r \Sigma_r^{-1})(U_r^* X) = \tilde{A} \tilde{X},$$
which yields
$$\tilde{Y} = \tilde{A} \tilde{X}, \tag{8}$$
where $\tilde{Y} = U_r^* Y$, and $\tilde{A} = U_r^* Y V_r \Sigma_r^{-1}$ and $\tilde{X} = U_r^* X$ are the reduced operator and reduced snapshot matrix, respectively.
By performing the eigendecomposition of $\tilde{A}$,
$$\tilde{A} W = W \Lambda, \tag{9}$$
we obtain the eigenvalues $\Lambda$ and eigenvectors $W$ of the reduced system. Notably, the eigenvalues of $\tilde{A}$ are also eigenvalues of the original matrix $A$.
To compute the DMD modes of the full system, we lift the reduced eigenvectors back into the original space. Two formulations are commonly used:
  • The exact DMD modes, introduced by Tu et al. [14]:
    $$\Phi = Y V_r \Sigma_r^{-1} W; \tag{10}$$
  • The projected DMD modes, as originally proposed in [1]:
    $$\Phi = U_r W. \tag{11}$$
The modes from Equation (10) are exact eigenvectors of A, while those from Equation (11) represent their projection onto the POD subspace.
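The procedure above can be sketched compactly in NumPy. This is a minimal sketch rather than reference code; the 2-state system $A$ below is a hypothetical choice, used only to verify that the snapshots reproduce its eigenvalues.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD of the snapshot pair (X, Y), truncated to rank r."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)   # X = U_X S_X V_X^*
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r, :].conj().T   # rank-r truncation
    Atilde = Ur.conj().T @ Y @ Vr / sr                 # reduced operator
    lam, W = np.linalg.eig(Atilde)                     # reduced eigenpairs
    Phi = Y @ Vr / sr @ W                              # exact DMD modes
    return lam, Phi

# Hypothetical test system x_{k+1} = A x_k with known eigenvalues 0.9 and 0.5.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])
snaps = [np.array([1.0, 1.0])]
for _ in range(9):
    snaps.append(A @ snaps[-1])
data = np.column_stack(snaps)
lam, Phi = dmd(data[:, :-1], data[:, 1:], r=2)
print(np.sort(lam.real))   # approximately [0.5, 0.9]
```

Because the snapshots here are generated exactly by a linear map, the computed DMD eigenvalues coincide with the spectrum of $A$ and the exact modes satisfy $A\Phi = \Phi\Lambda$.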

1.2. Dynamic Mode Decomposition with Control (DMDc)

Dynamic mode decomposition with control (DMDc) is an extension of the classical DMD method, developed to analyze multidimensional systems that are influenced by external inputs or disturbances [22]. Unlike standard DMD, which considers only the autonomous system dynamics, DMDc incorporates time-dependent control inputs, enabling it to distinguish between internal system behavior and input-driven effects.
To account for the control influence, a new data matrix, analogous to the snapshot matrices $X$ and $Y$ from (2), is introduced:
$$\Gamma = [u_0, \ldots, u_{m-1}], \tag{12}$$
where each $u_i \in \mathbb{R}^q$, and $q$ is the number of input variables.
The DMDc model assumes a linear relationship between consecutive system states and the inputs:
$$x_{k+1} = A x_k + B u_k, \tag{13}$$
where $A \in \mathbb{R}^{n \times n}$ is the system (DMD) operator and $B \in \mathbb{R}^{n \times q}$ is the input matrix. Using the snapshot matrices, this model can be written as follows:
$$Y = A X + B \Gamma. \tag{14}$$
Assuming $A$ and $B$ are unknown, we rewrite Equation (14) in block matrix form:
$$Y = [A \;\; B] \begin{bmatrix} X \\ \Gamma \end{bmatrix} = G \Omega, \tag{15}$$
where
$$G = [A \;\; B] \in \mathbb{R}^{n \times (n+q)}, \qquad \Omega = \begin{bmatrix} X \\ \Gamma \end{bmatrix} \in \mathbb{R}^{(n+q) \times m}. \tag{16}$$
The DMDc procedure seeks the best-fit approximation of $G$ that satisfies the relation $Y \approx G \Omega$. To achieve this, we apply a truncated singular value decomposition (SVD) to the matrix $\Omega$:
$$\Omega \approx U_\Omega \Sigma_\Omega V_\Omega^*, \tag{17}$$
where $U_\Omega \in \mathbb{R}^{(n+q) \times p}$, $\Sigma_\Omega \in \mathbb{R}^{p \times p}$, and $V_\Omega \in \mathbb{R}^{m \times p}$.
Using this decomposition, the matrix $G$ is approximated by
$$G \approx Y V_\Omega \Sigma_\Omega^{-1} U_\Omega^*. \tag{18}$$
We partition $U_\Omega$ as
$$U_\Omega = \begin{bmatrix} U_1 \\ U_2 \end{bmatrix},$$
with $U_1 \in \mathbb{R}^{n \times p}$ and $U_2 \in \mathbb{R}^{q \times p}$. The matrices $A$ and $B$ are then approximated as
$$A \approx \bar{A} = Y V_\Omega \Sigma_\Omega^{-1} U_1^*, \qquad B \approx \bar{B} = Y V_\Omega \Sigma_\Omega^{-1} U_2^*. \tag{19}$$
To obtain a reduced-order representation of the dynamics, we project the system onto a lower-dimensional subspace. Unlike standard DMD, where the projection is based on $X$, here we use the SVD of $Y$ to define the transformation:
$$Y \approx U_Y \Sigma_Y V_Y^*, \tag{20}$$
where $U_Y \in \mathbb{R}^{n \times r}$, $\Sigma_Y \in \mathbb{R}^{r \times r}$, and $V_Y \in \mathbb{R}^{m \times r}$. Typically, the reduced dimension $r$ is smaller than $p$ from the decomposition of $\Omega$.
Applying this projection yields the reduced-order operators
$$\tilde{A} = U_Y^* \bar{A} U_Y = U_Y^* Y V_\Omega \Sigma_\Omega^{-1} U_1^* U_Y, \qquad \tilde{B} = U_Y^* \bar{B} = U_Y^* Y V_\Omega \Sigma_\Omega^{-1} U_2^*, \tag{21}$$
with $\tilde{A} \in \mathbb{R}^{r \times r}$ and $\tilde{B} \in \mathbb{R}^{r \times q}$. Alternatively, one may choose to project using the SVD of $X$, but this leads to a different reduced basis.
The DMD modes are derived from the eigendecomposition of $\tilde{A}$:
$$\tilde{A} W = W \Lambda, \tag{22}$$
where the eigenvalues $\Lambda$ describe the system's temporal behavior, and the DMDc modes are obtained via
$$\Phi = Y V_\Omega \Sigma_\Omega^{-1} U_1^* U_Y W. \tag{23}$$
The following algorithm (Algorithm 1) summarizes the steps of implementing the DMDc scheme:
Algorithm 1 Standard DMDc procedure
  • Collect data and construct the matrices $X$, $Y$, and $\Gamma$ as in (2) and (12). Define the combined matrix $\Omega$ as in (16).
  • Perform a reduced SVD of $\Omega$, truncated to rank $p$:
    $$\Omega \approx U_\Omega \Sigma_\Omega V_\Omega^*.$$
  • Perform a reduced SVD of $Y$, truncated to rank $r$:
    $$Y \approx U_Y \Sigma_Y V_Y^*.$$
  • Compute the reduced-order operators:
    $$\tilde{A} = U_Y^* Y V_\Omega \Sigma_\Omega^{-1} U_1^* U_Y, \qquad \tilde{B} = U_Y^* Y V_\Omega \Sigma_\Omega^{-1} U_2^*.$$
  • Compute the spectral decomposition of $\tilde{A}$:
    $$\tilde{A} W = W \Lambda.$$
  • Calculate the DMDc modes:
    $$\Phi = Y V_\Omega \Sigma_\Omega^{-1} U_1^* U_Y W.$$
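For concreteness, these steps can be sketched in NumPy. This is a minimal sketch, not the authors' implementation; the controlled test system ($A$, $B$, and the random input below) is hypothetical, and the ranks are chosen as $p = n + q$ and $r = n$ so that no information is truncated and the known spectrum is recovered.

```python
import numpy as np

def dmdc_standard(X, Y, Gamma, p, r):
    """Standard DMDc (Algorithm 1); p and r are the truncation ranks."""
    n = X.shape[0]
    Omega = np.vstack([X, Gamma])                          # combined data matrix
    Uo, so, Voh = np.linalg.svd(Omega, full_matrices=False)
    Uo, so, Vo = Uo[:, :p], so[:p], Voh[:p, :].conj().T    # rank-p SVD of Omega
    U1, U2 = Uo[:n, :], Uo[n:, :]                          # partition of U_Omega
    Uy = np.linalg.svd(Y, full_matrices=False)[0][:, :r]   # rank-r left basis of Y
    YVS = Y @ Vo / so                                      # Y V_Omega Sigma_Omega^{-1}
    Atilde = Uy.conj().T @ YVS @ U1.conj().T @ Uy          # reduced operator
    Btilde = Uy.conj().T @ YVS @ U2.conj().T               # reduced input matrix
    lam, W = np.linalg.eig(Atilde)
    Phi = YVS @ U1.conj().T @ Uy @ W                       # DMDc modes
    return lam, Phi, Btilde

# Hypothetical controlled system x_{k+1} = A x_k + B u_k with eig(A) = {0.9, 0.5}.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
B = np.array([[1.0],
              [0.5]])
rng = np.random.default_rng(0)
u = rng.standard_normal((1, 11))
x = np.zeros((2, 12))
x[:, 0] = [1.0, 1.0]
for k in range(11):
    x[:, k + 1] = A @ x[:, k] + B @ u[:, k]
lam, Phi, Btilde = dmdc_standard(x[:, :-1], x[:, 1:], u, p=3, r=2)
print(np.sort(lam.real))   # approximately [0.5, 0.9]
```

With persistently exciting random inputs, the identified operator matches $A$ and the modes satisfy the eigenvalue relation, despite the closed-form model never being given to the algorithm.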

2. Alternative DMDc Algorithm

To obtain a low-rank approximation of the operator A from Equation (13), the standard DMDc algorithm exploits the low-dimensional structure of the data. In this section, we propose an alternative formulation of the DMDc method, as also presented in [23].
Let the data matrices $X$, $Y$, and $\Gamma$ be defined as in Equations (2) and (12). Define the block matrix $\Omega$ as in (16). We aim to compute the best-fit linear operator $G$ from Equation (16) using the Moore–Penrose pseudoinverse:
$$G = Y \Omega^\dagger, \tag{24}$$
where $\Omega^\dagger$ denotes the pseudoinverse of $\Omega$. If the rows of $\Omega$ are linearly independent, $\Omega^\dagger$ can be computed via the SVD or as $\Omega^\dagger = \Omega^* (\Omega \Omega^*)^{-1}$.
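For a full-row-rank $\Omega$, the closed form above agrees with the SVD-based pseudoinverse; a quick NumPy check with an arbitrary random matrix (the sizes below are illustrative) confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
Omega = rng.standard_normal((4, 20))   # n + q = 4 rows, m = 20 columns: full row rank
pinv_closed_form = Omega.T @ np.linalg.inv(Omega @ Omega.T)   # Omega^* (Omega Omega^*)^{-1}
pinv_svd = np.linalg.pinv(Omega)                              # SVD-based pseudoinverse
print(np.allclose(pinv_closed_form, pinv_svd))   # True
```

In practice the SVD route is preferred when $\Omega \Omega^*$ is ill-conditioned or rank-deficient, since small singular values can then be truncated.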
We now express $\Omega^\dagger$ in block form to separate the contributions of the state and control inputs:
$$\Omega^\dagger = [\Omega_1 \;\; \Omega_2], \tag{25}$$
where $\Omega_1 \in \mathbb{R}^{m \times n}$ corresponds to the state snapshot matrix $X$, and $\Omega_2 \in \mathbb{R}^{m \times q}$ corresponds to the control input matrix $\Gamma$. This decomposition allows us to isolate the influence of $X$ and $\Gamma$ on $Y$.
Substituting into (24), the operators $A$ and $B$ are approximated as
$$A \approx \hat{A} = Y \Omega_1, \qquad B \approx \hat{B} = Y \Omega_2, \tag{26}$$
where $\hat{A}$ and $\hat{B}$ are the least-squares estimates of the system and input matrices, obtained by projecting $Y$ onto the subspaces associated with $X$ and $\Gamma$.
If a reduced SVD of $\Omega$ as in (17) is used to form the pseudoinverse, then the estimates $\hat{A}$ and $\hat{B}$ coincide with the matrices $\bar{A}$ and $\bar{B}$ defined in (19).
We now perform a reduced-order SVD on $\Omega_1$ (instead of $Y$) to obtain a compact representation of the system dynamics:
$$\Omega_1 \approx \hat{U} \hat{\Sigma} \hat{V}^*, \tag{27}$$
where $\hat{U} \in \mathbb{R}^{m \times r}$ contains the leading left singular vectors, $\hat{\Sigma} \in \mathbb{R}^{r \times r}$ is a diagonal matrix with the top $r$ singular values, and $\hat{V} \in \mathbb{R}^{n \times r}$ contains the right singular vectors. The truncation parameter $r$ defines the rank of the reduced subspace.
Using this decomposition, we define reduced-order approximations of $A$ and $B$ via projection onto the subspace defined by $\hat{V}$:
$$\tilde{A} = \hat{V}^* \hat{A} \hat{V} = \hat{V}^* Y \hat{U} \hat{\Sigma}, \qquad \tilde{B} = \hat{V}^* \hat{B} = \hat{V}^* Y \Omega_2, \tag{28}$$
where $\tilde{A} \in \mathbb{R}^{r \times r}$ and $\tilde{B} \in \mathbb{R}^{r \times q}$ are reduced approximations of the system and input matrices. Note that this reduced-order formulation differs from the one in (21) obtained via the standard DMDc method.
We compute the eigenstructure of $\tilde{A}$:
$$\tilde{A} W = W \Lambda, \tag{29}$$
where $W$ contains the eigenvectors of $\tilde{A}$ and $\Lambda$ is a diagonal matrix of its eigenvalues.
To reconstruct the DMD modes in the full state space, we apply the transformation
$$\Phi = Y \hat{U} \hat{\Sigma} W, \tag{30}$$
where the columns of $\Phi$ are the DMD modes of the approximated operator $A$.
These steps are summarized in Algorithm 2:
Algorithm 2 Alternative DMDc Algorithm
  • Collect data and construct the matrices $X$, $Y$, and $\Gamma$ as in (2) and (12). Define $\Omega$ as in (16).
  • Compute the pseudoinverse $\Omega^\dagger$ and express it in block form:
    $$\Omega^\dagger = [\Omega_1 \;\; \Omega_2].$$
  • Perform a truncated SVD of $\Omega_1$, with truncation rank $r$:
    $$\Omega_1 \approx \hat{U} \hat{\Sigma} \hat{V}^*.$$
  • Compute the reduced-order operators:
    $$\tilde{A} = \hat{V}^* Y \hat{U} \hat{\Sigma}, \qquad \tilde{B} = \hat{V}^* Y \Omega_2.$$
  • Compute the eigendecomposition of $\tilde{A}$:
    $$\tilde{A} W = W \Lambda.$$
  • Reconstruct the DMD modes:
    $$\Phi = Y \hat{U} \hat{\Sigma} W.$$
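Algorithm 2 can likewise be sketched in NumPy. Again, this is a minimal sketch rather than the author's code; the test system is hypothetical, and $r$ is taken equal to the state dimension so that truncation plays no role.

```python
import numpy as np

def dmdc_alternative(X, Y, Gamma, r):
    """Alternative DMDc (Algorithm 2) with truncation rank r."""
    n = X.shape[0]
    Omega = np.vstack([X, Gamma])
    Omega_pinv = np.linalg.pinv(Omega)                      # pseudoinverse of Omega
    Omega1, Omega2 = Omega_pinv[:, :n], Omega_pinv[:, n:]   # block form of Omega^+
    U, s, Vh = np.linalg.svd(Omega1, full_matrices=False)
    Uh, sh, Vhh = U[:, :r], s[:r], Vh[:r, :]                # Omega1 ~ Uh diag(sh) Vhh
    Atilde = Vhh @ Y @ Uh * sh                              # V^* Y U Sigma
    Btilde = Vhh @ Y @ Omega2                               # V^* Y Omega2
    lam, W = np.linalg.eig(Atilde)
    Phi = Y @ Uh * sh @ W                                   # modes Phi = Y U Sigma W
    return lam, Phi, Btilde

# Hypothetical controlled system x_{k+1} = A x_k + B u_k with eig(A) = {0.9, 0.5}.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
B = np.array([[1.0],
              [0.5]])
rng = np.random.default_rng(0)
u = rng.standard_normal((1, 11))
x = np.zeros((2, 12))
x[:, 0] = [1.0, 1.0]
for k in range(11):
    x[:, k + 1] = A @ x[:, k] + B @ u[:, k]
lam, Phi, Btilde = dmdc_alternative(x[:, :-1], x[:, 1:], u, r=2)
print(np.sort(lam.real))   # approximately [0.5, 0.9]
```

Note that no SVD of $Y$ and no compound operator products are needed: the reduced operator comes directly from the SVD of the state block of $\Omega^\dagger$.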
In what follows, we will demonstrate that Algorithm 2 identifies all nonzero eigenvalues of the operator A. We begin by proving the following theorem.
Theorem 1.
Let the matrix $\tilde{A}$, defined in (28), have an eigenpair $(\lambda, w)$ with $\lambda \neq 0$. Then the pair $(\lambda, \phi)$ is an eigenpair of $A$, where
$$\phi = Y \hat{U} \hat{\Sigma} w. \tag{31}$$
Proof. 
We aim to show that if $\tilde{A} w = \lambda w$, then $\phi$, defined via (31), satisfies $A \phi = \lambda \phi$; i.e., it is an eigenvector of $A$ with eigenvalue $\lambda$.
Recall from (26) that we approximate $A \approx \hat{A} = Y \Omega_1$, and from the reduced SVD in (27) that $\Omega_1 \approx \hat{U} \hat{\Sigma} \hat{V}^*$. Combining these, we write the action of $A$ on $\phi$ as
$$A \phi = Y \Omega_1 \phi = Y \hat{U} \hat{\Sigma} \hat{V}^* \phi.$$
Substituting the definition of $\phi$ from (31),
$$A \phi = Y \hat{U} \hat{\Sigma} \hat{V}^* Y \hat{U} \hat{\Sigma} w.$$
The inner factor $\hat{V}^* Y \hat{U} \hat{\Sigma}$ is precisely the matrix $\tilde{A}$ defined in (28). Hence,
$$A \phi = Y \hat{U} \hat{\Sigma} \tilde{A} w.$$
Since $(\lambda, w)$ is an eigenpair of $\tilde{A}$,
$$A \phi = \lambda Y \hat{U} \hat{\Sigma} w = \lambda \phi.$$
Thus, $\phi$ satisfies the eigenvalue equation for $A$.
It remains to verify that $\phi \neq 0$. Suppose that $\phi = 0$; i.e.,
$$Y \hat{U} \hat{\Sigma} w = 0.$$
Multiplying both sides by $\hat{V}^*$, we obtain
$$\tilde{A} w = \hat{V}^* Y \hat{U} \hat{\Sigma} w = 0,$$
which implies $\lambda = 0$, contradicting the assumption $\lambda \neq 0$. Hence $\phi \neq 0$, and we conclude that $(\lambda, \phi)$ is indeed an eigenpair of $A$. The theorem is proved.    □
We will now show that Algorithm 2 produces all nonzero eigenvalues of $A$. Assume that $A \phi = \lambda \phi$ for $\lambda \neq 0$, and let $w = \hat{V}^* \phi$. Then
$$\tilde{A} w = \hat{V}^* Y \hat{U} \hat{\Sigma} w = \hat{V}^* Y \hat{U} \hat{\Sigma} \hat{V}^* \phi = \hat{V}^* A \phi = \lambda \hat{V}^* \phi = \lambda w.$$
Moreover, $w \neq 0$: if $\hat{V}^* \phi = 0$, then $Y \hat{U} \hat{\Sigma} \hat{V}^* \phi = A \phi = 0$, which would imply $\lambda = 0$. Hence, $w$ is an eigenvector of $\tilde{A}$ with corresponding eigenvalue $\lambda$, and it is discovered by Algorithm 2.
The alternative DMDc algorithm constructs a low-rank approximation of the system operator $A$ by projecting the data onto a reduced subspace derived from the singular value decomposition (SVD) of the input data matrix. This approach yields a reduced-order operator $\tilde{A}$ that captures the dominant system dynamics. The corresponding eigenvectors are then lifted back to the original state space via the transformation $\Phi = Y \hat{U} \hat{\Sigma} W$, recovering the DMD modes.
However, due to the nature of this projection, only the nonzero eigenvalues of $A$ are preserved: the transformation $\Phi = Y \hat{U} \hat{\Sigma} W$ maps any eigenvector associated with a zero eigenvalue to the zero vector, so such modes are effectively discarded. While this filtering enhances numerical robustness by eliminating uninformative or noise-dominated directions, it also means that the algorithm does not recover DMD modes corresponding to structurally zero dynamics. In some control or system identification tasks, these missing modes may still carry important information.
Zero eigenvalues typically correspond to stationary modes or equilibrium components that do not evolve over time. Although they reflect invariant properties of the system, such modes are often disregarded in dynamic analyses like DMD, where the focus lies on capturing temporal evolution and transient behavior. Nevertheless, in scenarios where equilibrium behavior is of interest, such as in stability analysis or controller design, these zero modes may be of practical significance.
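This filtering can be observed directly in a small NumPy experiment. The system below is hypothetical, chosen so that its $A$ matrix has a structurally zero eigenvalue; with no truncation ($r = n$), the reduced operator still reports the zero eigenvalue, but the lifted mode associated with it collapses to the zero vector.

```python
import numpy as np

# Hypothetical system whose A has a structural zero eigenvalue: eig(A) = {0.8, 0}.
A = np.array([[0.8, 1.0],
              [0.0, 0.0]])
B = np.array([[0.3],
              [1.0]])
rng = np.random.default_rng(2)
u = rng.standard_normal((1, 11))
x = np.zeros((2, 12))
x[:, 0] = [1.0, 0.5]
for k in range(11):
    x[:, k + 1] = A @ x[:, k] + B @ u[:, k]
X, Y, Gamma = x[:, :-1], x[:, 1:], u

Omega1 = np.linalg.pinv(np.vstack([X, Gamma]))[:, :2]   # state block of Omega^+
U, s, Vh = np.linalg.svd(Omega1, full_matrices=False)
Atilde = Vh @ Y @ U * s             # reduced operator with r = n = 2 (no truncation)
lam, W = np.linalg.eig(Atilde)
Phi = Y @ U * s @ W                 # lifted DMD modes
k0 = np.argmin(np.abs(lam))         # index of the numerically zero eigenvalue
print(np.linalg.norm(Phi[:, k0]))   # essentially zero: this mode is filtered out
```

The mode belonging to the nonzero eigenvalue survives the lifting with a nonzero norm, while the zero-eigenvalue direction is mapped to the origin, matching the discussion above.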

Computational Complexity

The overall computational structure of the alternative Algorithm 2 is comparable to that of the classical DMDc (Algorithm 1), with similar effort required for Steps 1–3 and Step 5. However, the key differences arise in Steps 4 and 6, where Algorithm 2 offers a more streamlined approach. These differences are summarized in Table 1, which compares the reduced operators and the expressions used for computing the DMD modes in both methods.
From a computational standpoint, Algorithm 1 requires a greater number of matrix multiplications and intermediate storage. Specifically, it involves at least six intermediate matrices and five matrix multiplications to compute the reduced operator A ˜ . In contrast, Algorithm 2 uses only four intermediate matrices and requires three multiplications due to its more direct formulation. The computation of the DMD modes Φ is also more straightforward in Algorithm 2.
This reduction in computational complexity makes Algorithm 2 particularly suitable for high-dimensional systems and for use in online or real-time applications, where efficiency and scalability are critical. Moreover, the simplified structure facilitates easier implementation and adaptation to large-scale, data-driven control problems.

3. Numerical Illustrations

In this section, we present a series of numerical experiments to demonstrate the performance of the proposed algorithm. Dynamic mode decomposition with control (DMDc) is a generalization of the standard DMD framework that captures the underlying dynamics of a system through measurements of both the state and external inputs.
The new formulation (Algorithm 2), introduced in Section 2, offers significant computational advantages over the classical DMDc algorithm (Algorithm 1). In all examples below, we compare the performance and outputs of both methods.
Example 1.
Unstable linear system with inputs.
Algorithm 2 is first applied to a simple two-dimensional unstable linear system with input-driven stabilization. This canonical setup has been considered in several prior works; see [15,22,24]. The system is defined as follows:
$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}_{k+1} = \begin{bmatrix} \lambda & 0 \\ 0 & \mu \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}_k + \begin{bmatrix} 0 \\ \delta \end{bmatrix} u_k. \tag{32}$$
For $|\lambda| > 1$ or $|\mu| > 1$, the system is unstable. We set the parameters to $\lambda = 0.3$, $\mu = 1.2$, and $\delta = 0.5$, and apply a proportional feedback controller $u_k = -[x_2]_k$. This stabilizes the system by moving the unstable eigenvalue inside the unit circle. The fixed point of the system is at the origin $x_1 = x_2 = 0$, as illustrated in Figure 1.
We perform ten iterations with the initial condition $[5 \;\; 4]^T$ and construct the following data matrices from the first five snapshots:
$$X = \begin{bmatrix} 5 & 1.5 & 0.45 & 0.135 & 0.0405 \\ 4 & 2.8 & 1.96 & 1.372 & 0.9604 \end{bmatrix}, \qquad \Gamma = \begin{bmatrix} 2 & 1.4 & 0.98 & 0.686 & 0.4802 \end{bmatrix},$$
$$Y = \begin{bmatrix} 1.5 & 0.45 & 0.135 & 0.0405 & 0.0121 \\ 2.8 & 1.96 & 1.372 & 0.9604 & 0.6723 \end{bmatrix}.$$
Both algorithms produce identical DMD modes,
$$\phi_1 = \begin{bmatrix} 0.3 \\ 0 \end{bmatrix}, \qquad \phi_2 = \begin{bmatrix} 0 \\ 0.56 \end{bmatrix},$$
with corresponding DMD eigenvalues $\lambda_1 = 0.3$ and $\lambda_2 = 0.56$.
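These eigenvalues can be checked directly from the snapshot matrices as printed above. The NumPy sketch below applies the full-rank form of Algorithm 2, $\hat{A} = Y \Omega_1$; small deviations come only from the rounding of the printed entries.

```python
import numpy as np

# Snapshot matrices exactly as printed above.
X = np.array([[5.0, 1.5, 0.45, 0.135, 0.0405],
              [4.0, 2.8, 1.96, 1.372, 0.9604]])
Y = np.array([[1.5, 0.45, 0.135, 0.0405, 0.0121],
              [2.8, 1.96, 1.372, 0.9604, 0.6723]])
Gamma = np.array([[2.0, 1.4, 0.98, 0.686, 0.4802]])

Omega1 = np.linalg.pinv(np.vstack([X, Gamma]))[:, :2]   # state block of Omega^+
Ahat = Y @ Omega1                                       # least-squares estimate of A
lam = np.sort(np.linalg.eigvals(Ahat).real)
print(lam)   # approximately [0.3, 0.56]
```

Because the input here is perfectly correlated with $x_2$, the matrix $\Omega$ is rank-deficient and the pseudoinverse returns the minimum-norm solution, which is why the second identified eigenvalue is $0.56$ rather than $\mu$.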
Example 2.
Nonlinear system with inputs.
Next, we consider a nonlinear dynamical system with external input:
$$\dot{x}_1 = \mu x_1, \qquad \dot{x}_2 = \lambda (x_2 - x_1^2) + \delta u, \tag{33}$$
where $\lambda = 0.5$, $\mu = 1.5$, and $\delta = 1$, following examples in [14,24,25,26].
To enable a Koopman-based analysis, we augment the state with a nonlinear observable:
$$y = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_1^2 \end{bmatrix}. \tag{34}$$
In these coordinates, the system becomes linear (the $-\lambda$ entry follows from $\dot{y}_2 = \lambda(y_2 - y_3) + \delta u$):
$$\dot{y} = \begin{bmatrix} \mu & 0 & 0 \\ 0 & \lambda & -\lambda \\ 0 & 0 & 2\mu \end{bmatrix} y + \begin{bmatrix} 0 \\ \delta \\ 0 \end{bmatrix} u. \tag{35}$$
We collect measurements of the observables over 10 iterations, starting from the initial condition $[2 \;\; 3 \;\; 4]^T$. The input $u$ consists of random signals sampled from a uniform distribution. The following matrices are constructed from the first five time snapshots:
$$X = \begin{bmatrix} 2 & 3 & 4.5 & 6.75 & 10.125 \\ 3 & 0.745 & 5.697 & 14.542 & 45.506 \\ 4 & 12 & 36 & 108 & 324 \end{bmatrix}, \qquad \Gamma = \begin{bmatrix} 0.245 & 0.07 & 0.608 & 1.222 & 0.316 \end{bmatrix},$$
$$Y = \begin{bmatrix} 3 & 4.5 & 6.75 & 10.125 & 15.1875 \\ 0.745 & 5.697 & 14.542 & 45.506 & 139.563 \\ 12 & 36 & 108 & 324 & 972 \end{bmatrix}.$$
Both algorithms yield the same DMD modes (see Figure 2), corresponding to the eigenvalues $\lambda_1 = 1.5$, $\lambda_2 = 0.5$, and $\lambda_3 = 3$.
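The reported agreement can be reproduced in a small experiment. Since the discretization used to generate the printed snapshots is not specified, the sketch below uses a hypothetical discrete-time surrogate of the lifted linear system above, whose state matrix is chosen to have the reported eigenvalues $1.5$, $0.5$, and $3$, and verifies that the reduced operators of both algorithms share the same spectrum.

```python
import numpy as np

# Hypothetical discrete-time surrogate of the lifted system, eig(Ad) = {1.5, 0.5, 3}.
Ad = np.array([[1.5, 0.0,  0.0],
               [0.0, 0.5, -0.5],
               [0.0, 0.0,  3.0]])
Bd = np.array([[0.0], [1.0], [0.0]])
rng = np.random.default_rng(3)
y = np.zeros((3, 10))
y[:, 0] = [2.0, 3.0, 4.0]
u = rng.uniform(size=(1, 9))          # uniformly distributed random inputs
for k in range(9):
    y[:, k + 1] = Ad @ y[:, k] + Bd @ u[:, k]
X, Y, Gamma = y[:, :-1], y[:, 1:], u
Omega = np.vstack([X, Gamma])

# Algorithm 1 without truncation (p = 4, r = 3).
Uo, so, Voh = np.linalg.svd(Omega, full_matrices=False)
U1 = Uo[:3, :]
Uy = np.linalg.svd(Y, full_matrices=False)[0]
A1 = Uy.T @ (Y @ Voh.T / so) @ U1.T @ Uy

# Algorithm 2 without truncation (r = 3).
Omega1 = np.linalg.pinv(Omega)[:, :3]
U, s, Vh = np.linalg.svd(Omega1, full_matrices=False)
A2 = Vh @ Y @ U * s

e1 = np.sort(np.linalg.eigvals(A1).real)
e2 = np.sort(np.linalg.eigvals(A2).real)
print(e1, e2)   # both approximately [0.5, 1.5, 3.0]
```

With persistently exciting inputs, both reduced operators are similarity transformations of the same least-squares estimate, so their spectra coincide up to numerical round-off.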
Example 3.
Large-scale stable linear systems.
Finally, we examine a large-scale linear system where the number of measurements significantly exceeds the dimensionality of the underlying dynamics.
The system exhibits a low-dimensional attractor. We construct a discrete-time linear state-space model using MATLAB’s drss function. The configuration includes:
  • State dimension: 5
  • Input dimension: 2
  • Output dimension: 50
This yields matrices $A \in \mathbb{R}^{5 \times 5}$, $B \in \mathbb{R}^{5 \times 2}$, $C \in \mathbb{R}^{50 \times 5}$, and $D \in \mathbb{R}^{50 \times 2}$. The input sequence $\Gamma$ is generated via MATLAB's randn function. The initial state is set to $x_0 = [1, 1, 1, 1, 1]^T$. Using these, we compute the data matrices $X$ and $Y$, whose dynamics are shown in Figure 3.
The DMD modes and eigenvalues computed using both algorithms are displayed in Figure 4.
The results show complete agreement between the classical and alternative algorithms in both computed eigenvalues and DMD modes across all examples.

4. Conclusions

In this paper, we proposed an alternative formulation of dynamic mode decomposition with control (DMDc), based on a structured low-rank approximation of the data matrices. We introduced and analyzed a new algorithm (Algorithm 2), which is shown to be more computationally efficient in terms of resource usage than the standard approach. Unlike traditional DMDc, this method constructs a reduced-order model using the singular value decomposition (SVD) of the block matrix $\Omega_1$. This allows for the direct computation of the reduced dynamics $\tilde{A}$ and control matrix $\tilde{B}$ without requiring a full SVD of the output matrix $Y$. Theoretical analysis has shown that all nonzero eigenvalues of the original system matrix $A$ are captured by the eigenvalues of $\tilde{A}$, and a precise transformation allows recovery of the corresponding DMD modes.
The proposed algorithm simplifies the computational pathway by decoupling the effects of state and control inputs during model identification. This decoupling eliminates the need for certain matrix inversions and complex operator approximations inherent in traditional DMDc methods. As a result, it reduces memory usage and minimizes matrix multiplications. These features make the method particularly advantageous for large-scale systems, streaming data environments, and real-time control applications, where both computational speed and resource efficiency are critical. A comparison of matrix operations shows that Algorithm 2 requires storing fewer matrices and performing fewer multiplications than the standard approach, all while maintaining accuracy. Numerical examples have confirmed that the proposed method yields results identical to those obtained from the standard DMDc algorithm. Overall, the proposed method provides a practical, computationally efficient, and theoretically sound alternative to standard DMDc. Its modular structure and minimal resource footprint make it a promising tool for data-driven modeling and control in high-dimensional or rapidly changing environments.
However, despite its computational advantages, the proposed approach has certain limitations. One important feature of the formulation is that modes associated with zero eigenvalues are naturally excluded through the projection mechanism, which can enhance numerical robustness by focusing the model on the most dynamically significant behaviors. While this exclusion is often beneficial, it may be problematic in systems where zero dynamics have physical significance or are essential for stability analysis. In such cases, the method might overlook important behaviors. Additionally, the algorithm assumes that the data matrices possess the low-rank structure required for truncation and are sufficiently well-conditioned, which may not hold in the presence of high noise levels or missing data.
Future work could focus on addressing these limitations by integrating regularization techniques, robust estimation strategies, and methods for explicitly capturing zero dynamics when necessary. Furthermore, extending the algorithm to work within adaptive or streaming frameworks for online learning and control would be a valuable direction, particularly for real-time applications in large-scale systems or robotics.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Schmid, P.J.; Sesterhenn, J. Dynamic mode decomposition of numerical and experimental data. In Proceedings of the 61st Annual Meeting of the APS Division of Fluid Dynamics, San Antonio, TX, USA, 23–25 November 2008; American Physical Society: Washington, DC, USA, 2008. [Google Scholar]
  2. Rowley, C.W.; Mezić, I.; Bagheri, S.; Schlatter, P.; Henningson, D.S. Spectral analysis of nonlinear flows. J. Fluid Mech. 2009, 641, 115–127. [Google Scholar] [CrossRef]
  3. Mezić, I. Spectral properties of dynamical systems, model reduction and decompositions. Nonlin. Dynam. 2005, 41, 309–325. [Google Scholar] [CrossRef]
  4. Grosek, J.; Kutz, J.N. Dynamic Mode Decomposition for Real-Time Background/Foreground Separation in Video. arXiv 2014, arXiv:1404.7592. [Google Scholar]
  5. Proctor, J.L.; Eckhoff, P.A. Discovering dynamic patterns from infectious disease data using dynamic mode decomposition. Int. Health 2015, 7, 139–145. [Google Scholar] [CrossRef]
  6. Brunton, B.W.; Johnson, L.A.; Ojemann, J.G.; Kutz, J.N. Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. J. Neurosci. Methods 2016, 258, 1–15. [Google Scholar] [CrossRef]
  7. Mann, J.; Kutz, J.N. Dynamic mode decomposition for financial trading strategies. Quant. Financ. 2016, 16, 1643–1655. [Google Scholar] [CrossRef]
  8. Cui, L.; Long, W. Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market. In Physica A: Statistical Mechanics and Its Applications; Elsevier: Amsterdam, The Netherlands, 2016; Volume 461, pp. 498–508. [Google Scholar]
  9. Kuttichira, D.P.; Gopalakrishnan, E.A.; Menon, V.K.; Soman, K.P. Stock price prediction using dynamic mode decomposition. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 13–16 September 2017; pp. 55–60. [Google Scholar]
  10. Berger, E.; Sastuba, M.; Vogt, D.; Jung, B.; Amor, H.B. Estimation of perturbations in robotic behavior using dynamic mode decomposition. J. Adv. Robot. 2015, 29, 331–343. [Google Scholar] [CrossRef]
  11. Schmid, P.J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 2010, 656, 5–28. [Google Scholar] [CrossRef]
  12. Seena, A.; Sung, H.J. Dynamic mode decomposition of turbulent cavity flows for self-sustained oscillations. Int. J. Heat Fluid Flow 2011, 32, 1098–1110. [Google Scholar] [CrossRef]
  13. Schmid, P.J. Application of the dynamic mode decomposition to experimental data. Exp. Fluids 2011, 50, 1123–1130. [Google Scholar] [CrossRef]
  14. Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Kutz, J.N. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn. 2014, 1, 391–421. [Google Scholar] [CrossRef]
  15. Kutz, J.N.; Brunton, S.L.; Brunton, B.W.; Proctor, J. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems; SIAM: Philadelphia, PA, USA, 2016; pp. 1–234. ISBN 978-1-611-97449-2. [Google Scholar]
  16. Bai, Z.; Kaiser, E.; Proctor, J.L.; Kutz, J.N.; Brunton, S.L. Dynamic Mode Decomposition for Compressive System Identification. AIAA J. 2020, 58, 561–574. [Google Scholar] [CrossRef]
  17. Brunton, S.L.; Budišić, M.; Kaiser, E.; Kutz, J.N. Modern Koopman Theory for Dynamical Systems. SIAM Rev. 2020, 64, 229–340. [Google Scholar] [CrossRef]
  18. Liang, X.; Qi, Q.; Zhang, H.; Xie, L. Decentralized Control for Networked Control Systems With Asymmetric Information. IEEE Trans. Autom. Control 2022, 67, 2076–2083. [Google Scholar] [CrossRef]
  19. Fahim, S.; Mourad, H.; Lahby, M. Modeling and Mathematical Analysis of Liquidity Risk Contagion in the Banking System Using an Optimal Control Approach. AppliedMath 2025, 5, 20. [Google Scholar] [CrossRef]
  20. Shahgholian, G.; Fathollahi, A. Advancing Load Frequency Control in Multi-Resource Energy Systems Through Superconducting Magnetic Energy Storage. AppliedMath 2025, 5, 1. [Google Scholar] [CrossRef]
  21. Ndeke, C.B.; Adonis, M.; Almaktoof, A. Basic Circuit Model of Voltage Source Converters: Methodology and Modeling. AppliedMath 2024, 4, 889–907. [Google Scholar] [CrossRef]
  22. Proctor, J.L.; Brunton, S.L.; Kutz, J.N. Dynamic mode decomposition with control. SIAM J. Appl. Dyn. Syst. 2016, 15, 142–161. [Google Scholar] [CrossRef]
  23. Nedzhibov, G. An Improved Approach for Implementing Dynamic Mode Decomposition with Control. Computation 2023, 11, 201. [Google Scholar] [CrossRef]
  24. Proctor, J.L.; Brunton, S.L.; Kutz, J.N. Generalizing Koopman theory to allow for inputs and control. SIAM J. Appl. Dyn. Syst. 2018, 17, 909–930. [Google Scholar] [CrossRef]
  25. Brunton, S.L.; Brunton, B.W.; Proctor, J.L.; Kutz, J.N. Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLoS ONE 2016, 11, e0150171. [Google Scholar] [CrossRef] [PubMed]
  26. Proctor, J.L.; Brunton, S.L.; Kutz, J.N. Including inputs and control within equation-free architectures for complex systems. Eur. Phys. J. Spec. Top. 2016, 225, 2413–2434. [Google Scholar] [CrossRef]
Figure 1. Linear dynamics governed by Equation (32).
Figure 2. DMD modes computed by Algorithms 1 and 2.
Figure 3. State trajectories of Example 3.
Figure 4. First panel: DMD eigenvalues. Remaining panels: DMD modes computed by Algorithms 1 and 2.
Table 1. Reduced-order approximations and DMD modes.
                     Algorithm 1                                                      Algorithm 2
Reduced operator     $\tilde{A} = U_Y^* Y V_\Omega \Sigma_\Omega^{-1} U_1^* U_Y$      $\tilde{A} = \hat{V}^* Y \hat{U} \hat{\Sigma}$
DMD modes            $\Phi = Y V_\Omega \Sigma_\Omega^{-1} U_1^* U_Y W$               $\Phi = Y \hat{U} \hat{\Sigma} W$