Efficient space-time reduced order model for linear dynamical systems in Python using less than 120 lines of code

A classical reduced order model (ROM) for dynamical problems typically involves only the spatial reduction of a given problem. Recently, a novel space-time ROM for linear dynamical problems has been developed, which further reduces the problem size by introducing a temporal reduction in addition to a spatial reduction without much loss in accuracy. The authors show a speed-up of three orders of magnitude with a relative error of less than 10^{-5} for a large-scale Boltzmann transport problem. In this work, we present for the first time the derivation of the space-time Petrov-Galerkin projection for linear dynamical systems and its corresponding block structures. Utilizing these block structures, we demonstrate the ease of construction of the space-time ROM method with two model problems: 2D diffusion and 2D convection diffusion, with and without a linear source term. For each problem, we demonstrate the entire process of generating the full order model (FOM) data, constructing the space-time ROM, and predicting the reduced-order solutions, all in less than 120 lines of Python code. We compare our Petrov-Galerkin method with the traditional Galerkin method and show that the space-time ROMs can achieve O(100) speed-ups with relative errors of O(10^{-3}) to O(10^{-4}) for these problems. Finally, we present an error analysis for the space-time Petrov-Galerkin projection and derive an error bound, which shows an improvement compared to traditional spatial Galerkin ROM methods.


Introduction
Many computational models for physical simulations are formulated as linear dynamical systems. Examples of linear dynamical systems include, but are not limited to, the Schrödinger equation that arises in quantum mechanics, the computational model for the signal propagation and interference in electric circuits, storm surge prediction models before an advancing hurricane, vibration analysis in large structures, thermal analysis in various media, neurotransmission models in the nervous system, various computational models for micro-electro-mechanical systems, and various particle transport simulations. These linear dynamical systems can quickly become large scale and computationally expensive, which prevents fast generation of solutions. Thus, areas in design optimization, uncertainty quantification, and controls where large parameter sweeps need to be done can become intractable, and this motivates the need for developing a Reduced Order Model (ROM) that can accelerate the solution process without loss in accuracy.
Many ROM approaches for linear dynamical systems have been developed, and they can be broadly categorized as data-driven or non data-driven approaches. We give a brief background of some of the methods here. For the non data-driven approaches, there are several methods, including: balanced truncation methods [2][3][4][5][6][7][8][9][10], moment-matching methods [11][12][13][14][15], and Proper Generalized Decomposition (PGD) [16] and its extensions [17][18][19][20][21][22][23][24][25][26]. The balanced truncation method is by far the most popular, but it requires the solution of two Lyapunov equations to construct bases, which is a formidable task in large-scale problems. Moment-matching methods were originally developed as non data-driven methods, although later papers extended them to data-driven settings. They provide a computationally efficient framework using Krylov subspace techniques in an iterative fashion, where only matrix-vector multiplications are required. The optimal H2 tangential interpolation for nonparametric systems [12] is also available. Proper Generalized Decomposition was first developed as a numerical method for solving boundary value problems. It utilizes techniques to separate space and time for an efficient solution procedure and is considered a model reduction technique. For a detailed description of PGD, we refer to a short review paper [27]. Many data-driven ROM approaches have been developed as well. When datasets are available either from experiments or high-fidelity simulations, they can contain rich information about the system of interest, and utilizing this information in the construction of a ROM can produce an optimal basis. Although there are some data-driven moment-matching works available [28,29], two popular methods are Dynamic Mode Decomposition (DMD) and Proper Orthogonal Decomposition (POD). DMD generates reduced modes that embed an intrinsic temporal behavior and was first developed by Peter Schmid [30]. The method has been actively
developed and extended to many applications [31][32][33][34][35][36][37][38]. For a more detailed description of DMD, we refer to this preprint [39] and book [40]. POD utilizes the method of snapshots to obtain an optimal basis of a system and typically applies only to spatial projections, although temporal projection techniques have been developed as well [41][42][43][44][45][46][47][48][49][50][51][52][53].
In our paper, we focus on building a space-time ROM where both spatial and temporal projections are applied to achieve an optimal reduction. This method has been developed by previous authors [54][55][56][57], and a space-time ROM for large-scale linear dynamical systems has been recently introduced [1]. The authors show a speed-up of more than 8,000 with good accuracy for a large-scale transport problem. In our work, we present several new contributions to the space-time ROM development: • We derive the block structures of least-squares Petrov-Galerkin space-time ROM operators, compare them with the Galerkin space-time ROM operators, and show that the computational cost saving due to the block structure is a factor of the FOM spatial degrees of freedom.
• We present an error analysis of both Galerkin and least-squares Petrov-Galerkin space-time ROMs and demonstrate the growth rate of the stability constant with the actual space-time operators used in our numerical results.
• Utilizing the block structures derived, we demonstrate the ease of implementing both Galerkin and least-squares Petrov-Galerkin space-time ROMs and provide the source code for three canonical problems. For each problem, we cover the entire space-time ROM process in less than 120 lines of Python code, which includes sweeping a wide parameter space and generating data from the full order model, constructing the space-time ROM, and generating the ROM prediction in the online phase.
• Finally, we present our results for the two model problems, compare the speed-up and relative error between the Galerkin and Petrov-Galerkin methods, and show that they give similar results.
We hope that by providing full access to the Python source codes, researchers can easily apply space-time ROMs to their linear dynamical problem of interest. Furthermore, we have curated the source codes to be simple and short so that they may be easily extended in various multi-query problem settings, such as design optimization [58][59][60][61][62][63][64], uncertainty quantification [65][66][67], and optimal control problems [68][69][70].

Organization of the paper
The paper is organized in the following way: Section 2 describes parametric linear dynamical systems and the space-time formulation. Section 3 introduces the linear subspace solution representation in Section 3.1 and the space-time ROM formulation using Galerkin projection in Section 3.2 and least-squares Petrov-Galerkin projection in Section 3.3. Then, both space-time ROMs are compared in Section 3.4. Section 4 describes how to generate the space-time basis. We investigate block structures of the space-time ROM basis in Section 5.1. We introduce the block structures of the Galerkin space-time ROM operators derived in [1] in Section 5.2. In Section 5.3, we derive the least-squares Petrov-Galerkin space-time ROM operators in terms of the blocks. Then, we compare the Galerkin and least-squares Petrov-Galerkin block structures in Section 5.4. We compute the computational complexity of forming the space-time ROM operators in Section 5.5. The error analysis is presented in Section 6. We demonstrate the performance of both Galerkin and least-squares Petrov-Galerkin space-time ROMs in two numerical experiments in Section 7. Finally, the paper is concluded with a summary and future works in Section 8. Note that we use "least-squares Petrov-Galerkin" and "Petrov-Galerkin" interchangeably throughout the paper. Appendix A presents six Python codes, each with fewer than 120 lines, that are used to generate our numerical results.

Linear dynamical systems
We consider the parameterized linear dynamical system shown in Equation (2.1):

du(t; µ)/dt = A(µ) u(t; µ) + B(µ) f(t; µ), u(0; µ) = u_0(µ), (2.1)

where µ ∈ Ω_µ ⊂ R^{n_µ} denotes a parameter vector, u : [0, T] × R^{n_µ} → R^{N_s} denotes a time-dependent state variable function, u_0 : R^{n_µ} → R^{N_s} denotes an initial state, and f : [0, T] × R^{n_µ} → R^{N_i} denotes a time-dependent input variable function. The operators A : R^{n_µ} → R^{N_s×N_s} and B : R^{n_µ} → R^{N_s×N_i} are real-valued matrices that are independent of the state variables.
Although any time integrator can be used, for demonstration purposes we choose to apply a backward Euler time integration scheme, shown in Equation (2.2):

(I_{N_s} − ∆t^{(k)} A(µ)) u^{(k)}(µ) = u^{(k−1)}(µ) + ∆t^{(k)} B(µ) f^{(k)}(µ), (2.2)

where I_{N_s} ∈ R^{N_s×N_s} is the identity matrix, ∆t^{(k)} is the kth time step size with T = Σ_{k=1}^{N_t} ∆t^{(k)} and t^{(k)} = Σ_{j=1}^{k} ∆t^{(j)}, and u^{(k)}(µ) := u(t^{(k)}; µ) and f^{(k)}(µ) := f(t^{(k)}; µ) are the state and input vectors at the kth time step, where k ∈ N(N_t). The Full Order Model (FOM) solves Equation (2.2) for every time step, where its spatial dimension is N_s and the temporal dimension is N_t. Each time step of the FOM can be written out and collected into a single matrix system, shown in Equation (2.3). This is known as the space-time formulation:

A_st(µ) u_st(µ) = f_st(µ) + u_st,0(µ), (2.3)

where A_st(µ) is block lower bidiagonal, with diagonal blocks I_{N_s} − ∆t^{(k)} A(µ) and sub-diagonal blocks −I_{N_s}; u_st(µ) stacks u^{(1)}(µ), . . . , u^{(N_t)}(µ); f_st(µ) stacks ∆t^{(k)} B(µ) f^{(k)}(µ); and u_st,0(µ) contains u_0(µ) in its first block and zeros elsewhere. The space-time system matrix A_st has dimensions R^{n_µ} → R^{N_s N_t × N_s N_t}, the space-time state vector u_st has dimensions R^{n_µ} → R^{N_s N_t}, the space-time input vector f_st has dimensions R^{n_µ} → R^{N_s N_t}, and the space-time initial vector u_st,0 has dimensions R^{n_µ} → R^{N_s N_t}. Although it seems that the solution can be found in a single solve, in practice there is no computational saving gained from doing so, since the block structure of the space-time system solves the system in a time-marching fashion anyway. However, we formulate the problem in this way because our reduced order model (ROM) formulation can reduce and solve the space-time system efficiently. In the following sections, we describe the parametric Galerkin and least-squares Petrov-Galerkin ROM formulations.
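To make the space-time formulation in Equation (2.3) concrete, the following sketch (not one of the paper's Appendix A codes) assembles the block lower-bidiagonal system with SciPy sparse matrices and verifies that a single space-time solve reproduces plain time marching. The function name `spacetime_system` and the row-wise `f_hist` layout for B(µ)f^{(k)}(µ) are our own illustrative choices:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def spacetime_system(A, dt, Nt, f_hist, u0):
    """Assemble the block lower-bidiagonal backward Euler space-time system.

    A      : (Ns, Ns) dense system matrix A(mu)
    f_hist : (Nt, Ns) array holding B(mu) f^{(k)}(mu) row-wise (assumed layout)
    Returns the sparse A_st and the right-hand side f_st + u_st0.
    """
    Ns = A.shape[0]
    D = sp.identity(Ns) - dt * sp.csr_matrix(A)   # diagonal blocks I - dt*A
    sub = sp.diags([np.ones(Nt - 1)], [-1])       # temporal sub-diagonal pattern
    A_st = sp.kron(sp.identity(Nt), D) + sp.kron(sub, -sp.identity(Ns))
    rhs = dt * f_hist.reshape(Nt * Ns)            # f_st
    rhs[:Ns] += u0                                # u_st0 enters first block only
    return sp.csr_matrix(A_st), rhs

# solving the space-time system reproduces plain backward Euler time marching
Ns, Nt, dt = 4, 6, 0.1
rng = np.random.default_rng(0)
A = -np.eye(Ns) + 0.1 * rng.standard_normal((Ns, Ns))
u0 = rng.standard_normal(Ns)
f_hist = rng.standard_normal((Nt, Ns))
A_st, rhs = spacetime_system(A, dt, Nt, f_hist, u0)
u_st = spla.spsolve(A_st, rhs)

u, march = u0.copy(), []
for k in range(Nt):
    u = np.linalg.solve(np.eye(Ns) - dt * A, u + dt * f_hist[k])
    march.append(u)
assert np.allclose(u_st, np.concatenate(march))
```

The final assertion is exactly the point made in the text: the monolithic space-time solve and time marching are algebraically identical, so the payoff comes only after projection.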

Space-time reduced order models
We investigate two projection-based space-time ROM formulations: the Galerkin and least-squares Petrov-Galerkin formulations. Here, we use "least-squares Petrov-Galerkin" and "Petrov-Galerkin" interchangeably throughout the paper.

Linear subspace solution representation
Both the Galerkin and Petrov-Galerkin methods reduce the number of space-time degrees of freedom by approximating the space-time state variables as a smaller linear combination of space-time basis vectors:

u_st(µ) ≈ Φ_st û_st(µ), (3.1)

where û_st(µ) : R^{n_µ} → R^{n_s n_t} with n_s ≪ N_s and n_t ≪ N_t. The space-time basis, Φ_st ∈ R^{N_s N_t × n_s n_t}, is defined by its columns,

Φ_st := [φ_st,1, . . . , φ_st,n_s n_t], (3.2)

where the column index takes the form i + n_s(j − 1) with i ∈ N(n_s), j ∈ N(n_t). Substituting Equation (3.1) into the space-time formulation in Equation (2.3) gives an over-determined system of equations:

A_st(µ) Φ_st û_st(µ) ≈ f_st(µ) + u_st,0(µ). (3.3)

This over-determined system of equations can be closed by either the Galerkin or Petrov-Galerkin projection.

Galerkin projection
In the Galerkin formulation, Equation (3.3) is closed by the Galerkin projection, where both sides of the equation are multiplied by Φ_st^T. Thus, we solve the following reduced system for the unknown generalized coordinates, û_st:

Φ_st^T A_st(µ) Φ_st û_st(µ) = Φ_st^T (f_st(µ) + u_st,0(µ)). (3.4)

For notational simplicity, let us define the reduced space-time system matrix as Â_st,g(µ) := Φ_st^T A_st(µ) Φ_st, the reduced space-time input vector as f̂_st,g(µ) := Φ_st^T f_st(µ), and the reduced space-time initial state vector as û0_st,g(µ) := Φ_st^T u_st,0(µ).

Least-squares Petrov-Galerkin projection
In the least-squares Petrov-Galerkin formulation, we first define the space-time residual as

r_st(û_st; µ) := f_st(µ) + u_st,0(µ) − A_st(µ) Φ_st û_st, (3.5)

where r_st : R^{n_s n_t} × R^{n_µ} → R^{N_s N_t}. Note that Equation (3.5) corresponds to an over-determined system. To close the system and solve for the unknown generalized coordinates, û_st, the least-squares Petrov-Galerkin method minimizes the squared norm of the residual vector function:

û_st = argmin_{v ∈ R^{n_s n_t}} ||r_st(v; µ)||_2^2. (3.6)

The solution to Equation (3.6) satisfies the normal equations

Φ_st^T A_st(µ)^T A_st(µ) Φ_st û_st(µ) = Φ_st^T A_st(µ)^T (f_st(µ) + u_st,0(µ)). (3.7)

For notational simplicity, let us define the reduced space-time system matrix as Â_st,pg(µ) := Φ_st^T A_st(µ)^T A_st(µ) Φ_st, the reduced space-time input vector as f̂_st,pg(µ) := Φ_st^T A_st(µ)^T f_st(µ), and the reduced space-time initial state vector as û0_st,pg(µ) := Φ_st^T A_st(µ)^T u_st,0(µ).
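The two closures above differ only in which operator multiplies the over-determined system. A minimal dense sketch (ours, not the paper's Appendix code) makes the contrast explicit; the function names are hypothetical:

```python
import numpy as np

def galerkin_solve(A_st, rhs_st, Phi):
    """Galerkin closure: multiply both sides by Phi^T, cf. Eq. (3.4)."""
    return np.linalg.solve(Phi.T @ A_st @ Phi, Phi.T @ rhs_st)

def lspg_solve(A_st, rhs_st, Phi):
    """LSPG closure: normal equations of min ||rhs - A_st Phi u||_2,
    cf. Eq. (3.7)."""
    APhi = A_st @ Phi
    return np.linalg.solve(APhi.T @ APhi, APhi.T @ rhs_st)

rng = np.random.default_rng(1)
A_st = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
Phi, _ = np.linalg.qr(rng.standard_normal((8, 3)))
# when the true solution lies in range(Phi), both projections recover it
u_true = Phi @ rng.standard_normal(3)
rhs = A_st @ u_true
assert np.allclose(Phi @ galerkin_solve(A_st, rhs, Phi), u_true)
assert np.allclose(Phi @ lspg_solve(A_st, rhs, Phi), u_true)
```

In practice the normal equations are shown only for clarity; a QR-based least-squares solve of A_st Φ_st û ≈ rhs is the numerically safer route for LSPG.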

Comparison of Galerkin and Petrov-Galerkin projections
The reduced space-time system matrices, reduced space-time input vectors, and reduced space-time initial state vectors for Galerkin and Petrov-Galerkin projections are presented in Table 1.
Table 1: Comparison of Galerkin and Petrov-Galerkin projections

                       Galerkin                 Petrov-Galerkin
System matrix          Φ_st^T A_st(µ) Φ_st      Φ_st^T A_st(µ)^T A_st(µ) Φ_st
Input vector           Φ_st^T f_st(µ)           Φ_st^T A_st(µ)^T f_st(µ)
Initial state vector   Φ_st^T u_st,0(µ)         Φ_st^T A_st(µ)^T u_st,0(µ)

Space-time basis generation

In this section, we repeat Section 4.1 in [1] to be self-contained.
Figure 1: Illustration of spatial and temporal bases construction, using SVD with n_µ = 3. The right singular vector, v_i, describes three different temporal behaviors of a left singular basis vector w_i, i.e., three different temporal behaviors of a spatial mode. Each temporal behavior is denoted as v_i^1, v_i^2, and v_i^3.
We follow the method of snapshots described by Sirovich [71]. First, let {µ_1, . . . , µ_{n_µ}} be a set of parameter samples at which we run full order model simulations. Let U^p := [u^{(1)}(µ_p), . . . , u^{(N_t)}(µ_p)] ∈ R^{N_s×N_t} be a full order model solution matrix for a sample parameter, µ_p ∈ Ω_µ. Then concatenating all the solution matrices defines a snapshot matrix, U ∈ R^{N_s×n_µ N_t}, i.e., U := [U^1, . . . , U^{n_µ}]. We use Proper Orthogonal Decomposition (POD) to construct the spatial basis, Φ_s. POD [41] obtains Φ_s by choosing the leading n_s columns of the left singular matrix, W, of the following Singular Value Decomposition (SVD), with ℓ ≡ min(N_s, n_µ N_t) and n_s < n_µ N_t:

U = W Σ V^T, (4.1)

where W ∈ R^{N_s×ℓ} and V ∈ R^{n_µ N_t×ℓ} are orthogonal matrices and Σ ∈ R^{ℓ×ℓ} is a diagonal matrix with the singular values on its diagonal. The spatial POD basis, Φ_s, minimizes ||U − Φ_s Φ_s^T U||_F over all Φ_s ∈ R^{N_s×n_s} with orthonormal columns, where ||·||_F denotes the Frobenius norm. The POD procedure thus seeks the n_s-dimensional subspace that optimally represents the solution snapshots, U. The equivalent summation form is written in (4.3):

U = Σ_{i=1}^{ℓ} σ_i w_i v_i^T, (4.3)

where σ_i ∈ R is the ith singular value and w_i and v_i are the ith left and right singular vectors, respectively. Note that v_i describes n_µ different temporal behaviors of w_i. For example, Figure 1 illustrates the case of n_µ = 3, where v_i^1, v_i^2, and v_i^3 describe three different temporal behaviors of a specific spatial basis vector, i.e., w_i. For general n_µ, we note that v_i describes n_µ different temporal behaviors of the ith spatial basis vector, i.e., φ_s,i = w_i. We set the ith temporal snapshot matrix to be Υ_i := [v_i^1, . . . , v_i^{n_µ}] ∈ R^{N_t×n_µ}, with v_i^p(k) = v_i(k + N_t(p − 1)), where v_i(j), j ∈ N(n_µ N_t), is the jth component of the vector. The SVD of Υ_i is

Υ_i = Λ_i Σ_i Ψ_i^T. (4.4)

Then, choosing the leading n_t columns of Λ_i yields the temporal basis, Φ_t^i, for the ith spatial basis vector. Finally, we can construct a space-time basis vector, φ_st,i+n_s(j−1) ∈ R^{N_s N_t}, in Equation (3.2) as

φ_st,i+n_s(j−1) = φ_t,ij ⊗ φ_s,i, (4.6)

where ⊗ denotes the Kronecker product, φ_s,i ∈ R^{N_s} is the ith vector of the spatial basis, Φ_s, and φ_t,ij ∈ R^{N_t} is the jth vector of the temporal basis, Φ_t^i, that describes a temporal behavior of φ_s,i.
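The two-stage SVD above can be sketched in a few lines of NumPy. This is our own condensed illustration of the procedure (the function name `pod_bases` is ours); it assumes the snapshot columns are ordered parameter-by-parameter, as in the concatenation U = [U^1, . . . , U^{n_µ}]:

```python
import numpy as np

def pod_bases(U, n_mu, Nt, ns, nt):
    """Spatial basis via POD of the snapshot matrix U (Ns x n_mu*Nt),
    then one temporal basis per spatial mode from the right singular vectors."""
    W, S, Vt = np.linalg.svd(U, full_matrices=False)
    Phi_s = W[:, :ns]                        # leading ns left singular vectors
    Phi_t = []
    for i in range(ns):
        # v_i holds n_mu temporal behaviours of spatial mode i; reshape it
        # into the Nt x n_mu temporal snapshot matrix Upsilon_i
        Ups = Vt[i].reshape(n_mu, Nt).T
        Lam = np.linalg.svd(Ups, full_matrices=False)[0]
        Phi_t.append(Lam[:, :nt])            # leading nt temporal modes
    return Phi_s, Phi_t
```

Note that rescaling a column of Υ_i by σ_i would not change its left singular vectors, so the temporal basis is insensitive to that choice.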

Space-time reduced order models in block structure
We avoid building the space-time basis vectors defined in Equation (4.6) explicitly because they require a large amount of memory for storage. Instead, we exploit the block structure of the matrices to save computational cost and storage in memory. Section 5.2 introduces such block structures for the space-time Galerkin projection, while Section 5.3 shows block structures for the space-time Petrov-Galerkin projection. First, we introduce common block structures that appear in both the Galerkin and Petrov-Galerkin projections in Section 5.1.

Block structures of space-time basis
Following the notation of [1], we define the block structure of the space-time basis to be

Φ_st = [Φ_st^{(1)T}, . . . , Φ_st^{(N_t)T}]^T, with Φ_st^{(k)} = [Φ_s Φ_t^{(k,1)}, . . . , Φ_s Φ_t^{(k,n_t)}] ∈ R^{N_s×n_s n_t},

where the kth time step of the temporal basis matrix is a diagonal matrix defined as

Φ_t^{(k,j)} := diag(φ_t,1j(k), . . . , φ_t,n_s j(k)) ∈ R^{n_s×n_s}.
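A small sketch (ours, with a hypothetical function name) shows how the kth N_s-row block of Φ_st can be built on the fly from Φ_s and the diagonal temporal blocks, and checks it column-by-column against the Kronecker definition in Equation (4.6):

```python
import numpy as np

def spacetime_basis_block(Phi_s, Phi_t_list, k):
    """k-th time-step block (Ns x ns*nt) of Phi_st, built on the fly.

    Column i + ns*j of the block equals Phi_t_list[i][k, j] * Phi_s[:, i],
    i.e. Phi_s scaled by the diagonal matrix Phi_t^{(k,j)}."""
    ns = Phi_s.shape[1]
    nt = Phi_t_list[0].shape[1]
    blocks = []
    for j in range(nt):
        D = np.diag([Phi_t_list[i][k, j] for i in range(ns)])
        blocks.append(Phi_s @ D)
    return np.hstack(blocks)
```

Only an N_s × n_s n_t slab ever exists in memory, which is the point of the block formulation.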

Block structures of Galerkin projection
As shown in Table 1, the reduced space-time Galerkin system matrix, Â_st,g(µ), is Â_st,g(µ) = Φ_st^T A_st(µ) Φ_st. We define the block structure of this matrix as Â_st,g(µ) = [Â_st,g^{(j',j)}(µ)], j', j ∈ N(n_t), so that we can exploit the block structure of these matrices and avoid forming the entire space-time matrix. Using the orthonormality of Φ_s, we derive that the (j', j)th block matrix, Â_st,g^{(j',j)}(µ) ∈ R^{n_s×n_s}, is

Â_st,g^{(j',j)}(µ) = Σ_{k=1}^{N_t} Φ_t^{(k,j')} Φ_s^T (I_{N_s} − ∆t^{(k)} A(µ)) Φ_s Φ_t^{(k,j)} − Σ_{k=2}^{N_t} Φ_t^{(k,j')} Φ_t^{(k−1,j)}.

The reduced space-time Galerkin input vector is f̂_st,g(µ) = Φ_st^T f_st(µ). Again, utilizing the block structure of the matrices, we compute the jth block vector, f̂_st,g(µ)^{(j)} ∈ R^{n_s}, to be

f̂_st,g(µ)^{(j)} = Σ_{k=1}^{N_t} ∆t^{(k)} Φ_t^{(k,j)} Φ_s^T B(µ) f^{(k)}(µ).

Finally, the space-time Galerkin initial vector, û0_st,g(µ) ∈ R^{n_s n_t}, can be computed blockwise, where the jth block vector, û0_st,g(µ)^{(j)} ∈ R^{n_s}, is

û0_st,g(µ)^{(j)} = Φ_t^{(1,j)} Φ_s^T u_0(µ), (5.9)

since u_st,0(µ) is nonzero only in its first time block.
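A direct translation of the Galerkin block formula above can be checked against the dense definition Φ_st^T A_st Φ_st. The sketch below is ours (hypothetical function name, uniform time step assumed) and relies on Φ_s having orthonormal columns:

```python
import numpy as np

def galerkin_blocks(A, dt, Nt, Phi_s, Phi_t):
    """Assemble \hat{A}_{st,g} block by block, never forming A_st or Phi_st.

    Phi_t : list over spatial modes i of (Nt x nt) temporal bases.
    Assumes Phi_s has orthonormal columns and a uniform time step dt."""
    Ns, ns = Phi_s.shape
    nt = Phi_t[0].shape[1]
    Ar = Phi_s.T @ (np.eye(Ns) - dt * A) @ Phi_s   # reduced diagonal block
    D = lambda k, j: np.diag([Phi_t[i][k, j] for i in range(ns)])
    Ahat = np.zeros((ns * nt, ns * nt))
    for jp in range(nt):
        for j in range(nt):
            blk = sum(D(k, jp) @ Ar @ D(k, j) for k in range(Nt))
            blk -= sum(D(k, jp) @ D(k - 1, j) for k in range(1, Nt))
            Ahat[jp * ns:(jp + 1) * ns, j * ns:(j + 1) * ns] = blk
    return Ahat
```

Each block costs only small n_s × n_s products, which is where the complexity saving of Section 5.5 comes from.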

Block structures of least-squares Petrov-Galerkin projection
As shown in Table 1, the reduced space-time Petrov-Galerkin system matrix, Â_st,pg(µ), is Â_st,pg(µ) = Φ_st^T A_st(µ)^T A_st(µ) Φ_st. We define the block structure of this matrix as Â_st,pg(µ) = [Â_st,pg^{(j',j)}(µ)], j', j ∈ N(n_t), so that we can exploit the block structure of these matrices and avoid forming the entire space-time matrix. Writing D^{(k)}(µ) := I_{N_s} − ∆t^{(k)} A(µ) and using the orthonormality of Φ_s, we derive that the (j', j)th block matrix, Â_st,pg^{(j',j)}(µ) ∈ R^{n_s×n_s}, is

Â_st,pg^{(j',j)}(µ) = Σ_{k=1}^{N_t} Φ_t^{(k,j')} Φ_s^T D^{(k)}(µ)^T D^{(k)}(µ) Φ_s Φ_t^{(k,j)} + Σ_{k=1}^{N_t−1} Φ_t^{(k,j')} Φ_t^{(k,j)} − Σ_{k=2}^{N_t} Φ_t^{(k−1,j')} Φ_s^T D^{(k)}(µ) Φ_s Φ_t^{(k,j)} − Σ_{k=2}^{N_t} Φ_t^{(k,j')} Φ_s^T D^{(k)}(µ)^T Φ_s Φ_t^{(k−1,j)}. (5.12)

The reduced space-time Petrov-Galerkin input vector is f̂_st,pg(µ) = Φ_st^T A_st(µ)^T f_st(µ). Again, utilizing the block structure of the matrices, we compute the jth block vector, f̂_st,pg(µ)^{(j)} ∈ R^{n_s}, to be

f̂_st,pg(µ)^{(j)} = Σ_{k=1}^{N_t} ∆t^{(k)} Φ_t^{(k,j)} Φ_s^T D^{(k)}(µ)^T B(µ) f^{(k)}(µ) − Σ_{k=2}^{N_t} ∆t^{(k)} Φ_t^{(k−1,j)} Φ_s^T B(µ) f^{(k)}(µ). (5.14)

Finally, the space-time Petrov-Galerkin initial vector, û0_st,pg(µ) ∈ R^{n_s n_t}, can be computed blockwise, where the jth block vector, û0_st,pg(µ)^{(j)} ∈ R^{n_s}, is

û0_st,pg(µ)^{(j)} = Φ_t^{(1,j)} Φ_s^T D^{(1)}(µ)^T u_0(µ). (5.16)
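An equivalent, and in code somewhat simpler, way to realize the Petrov-Galerkin blocks is to accumulate Z^{(k)}, the kth N_s-row block of A_st Φ_st, one time step at a time: then Â_st,pg = Σ_k Z^{(k)T} Z^{(k)} and f̂_st,pg + û0_st,pg = Σ_k Z^{(k)T} (f_st + u_st,0)^{(k)}. The sketch below is our own illustration (hypothetical names, uniform time step):

```python
import numpy as np

def lspg_operators(A, dt, Nt, Phi_s, Phi_t, f_hist, u0):
    """LSPG reduced operators accumulated one time block at a time.

    Z^(k) = (I - dt*A) Phi_st^(k) - Phi_st^(k-1) is the k-th block row of
    A_st Phi_st; only an Ns x ns*nt slab is ever stored.
    f_hist : (Nt, Ns) array of B(mu) f^{(k)}(mu) values (assumed layout)."""
    Ns, ns = Phi_s.shape
    nt = Phi_t[0].shape[1]
    D = lambda k, j: np.diag([Phi_t[i][k, j] for i in range(ns)])
    block = lambda k: np.hstack([Phi_s @ D(k, j) for j in range(nt)])
    Ahat = np.zeros((ns * nt, ns * nt))
    fhat = np.zeros(ns * nt)
    prev = np.zeros((Ns, ns * nt))
    for k in range(Nt):
        Z = (np.eye(Ns) - dt * A) @ block(k) - prev
        g = dt * f_hist[k] + (u0 if k == 0 else 0.0)  # (f_st + u_st0), block k
        Ahat += Z.T @ Z
        fhat += Z.T @ g
        prev = block(k)
    return Ahat, fhat
```

This running-accumulation form avoids writing out the four sums of Equation (5.12) explicitly while producing the same operators.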

Comparison of Galerkin and Petrov-Galerkin block structures
The block structures of space-time reduced order model operators are summarized in Table 2.

Computational complexity of forming space-time ROM operators
To compute the computational complexity of forming the reduced space-time system matrices, input vectors, and initial state vectors for the Galerkin and Petrov-Galerkin projections, we assume that A(µ) ∈ R^{N_s×N_s} is a band matrix with bandwidth b and that B(µ) is the identity matrix, I_{N_s}. The band structure of A(µ) is often seen in mathematical models because of local approximations to derivative terms. Then, the bandwidth of A_st(µ) ∈ R^{N_s N_t×N_s N_t} formed with the backward Euler scheme is N_s. We also assume that the spatial basis vectors φ_s,i ∈ R^{N_s}, i ∈ N(n_s), and temporal basis vectors φ_t,ij ∈ R^{N_t}, i ∈ N(n_s), j ∈ N(n_t), are given. Let us first consider the computational cost without the use of block structures. Constructing the space-time basis costs O(N_s N_t n_s n_t). Computing the reduced space-time system matrix is then dominated by the banded product A_st(µ) Φ_st, which costs O(N_s^2 N_t n_s n_t); the block structures remove this factor of N_s. In summary, the computational complexities of forming the space-time ROM operators in the training phase for the Galerkin and Petrov-Galerkin projections are presented in Table 3. We observe that substantial computational cost is saved by making use of the block structures when forming the space-time reduced order models.

Error analysis
We present an error analysis of the space-time ROM method, based on [1]. An a posteriori error bound is derived in this section. Here, we drop the parameter dependence for notational simplicity.
Theorem 6.1. We define the error at the kth time step as e^{(k)} ≡ u^{(k)} − ũ^{(k)} ∈ R^{N_s}, where u^{(k)} ∈ R^{N_s} denotes the FOM solution, ũ^{(k)} ∈ R^{N_s} denotes the approximate solution, and k ∈ N(N_t). Let A_st ∈ R^{N_s N_t×N_s N_t} be the space-time system matrix, r^{(k)} ∈ R^{N_s} be the residual computed using the FOM solution at the kth time step, and r̃^{(k)} ∈ R^{N_s} be the residual computed using the approximate solution at the kth time step. For example, r^{(k)} and r̃^{(k)} after applying the backward Euler scheme with a uniform time step become

r^{(k)} = u^{(k−1)} + ∆t B f^{(k)} − (I_{N_s} − ∆t A) u^{(k)} = 0,
r̃^{(k)} = ũ^{(k−1)} + ∆t B f^{(k)} − (I_{N_s} − ∆t A) ũ^{(k)},

with ũ^{(0)} = u_0. Then, the error bound is given by

max_{k∈N(N_t)} ||e^{(k)}||_2 ≤ η max_{k∈N(N_t)} ||r̃^{(k)}||_2, (6.3)

where η ≡ √N_t ||(A_st)^{−1}||_2 denotes the stability constant.
Proof. Let us define the space-time residual as

r_st(w_st) := f_st + u_st,0 − A_st w_st,

with r_st : R^{N_s N_t} → R^{N_s N_t}. Then, we have

r_st(u_st) = 0, (6.5)
r_st(ũ_st) = f_st + u_st,0 − A_st ũ_st, (6.6)

where u_st ∈ R^{N_s N_t} is the space-time FOM solution and ũ_st ∈ R^{N_s N_t} is the approximate space-time solution. Subtracting Equation (6.6) from Equation (6.5) gives

A_st e_st = r_st(ũ_st), (6.7)

where e_st ≡ u_st − ũ_st ∈ R^{N_s N_t}. Inverting A_st yields e_st = (A_st)^{−1} r_st(ũ_st). (6.8) Taking the ℓ2 norm and applying Hölder's inequality gives

||e_st||_2 ≤ ||(A_st)^{−1}||_2 ||r_st(ũ_st)||_2.

Since max_k ||e^{(k)}||_2 ≤ ||e_st||_2 and ||r_st(ũ_st)||_2 = (Σ_k ||r̃^{(k)}||_2^2)^{1/2} ≤ √N_t max_k ||r̃^{(k)}||_2, we have

max_k ||e^{(k)}||_2 ≤ √N_t ||(A_st)^{−1}||_2 max_k ||r̃^{(k)}||_2,

which is equivalent to the error bound in (6.3).
A numerical demonstration with space-time system matrices, A_st, that have the same structure as the ones used in Section 7.1 and Section 7.2.1 shows that the magnitude of ||(A_st)^{−1}||_2 increases linearly for small N_t, while it eventually flattens for large N_t, as shown in Fig. 2.
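The stability constant η can be evaluated directly from the definition, since ||(A_st)^{−1}||_2 = 1/σ_min(A_st). The following dense sketch (ours; fine only for modest N_s N_t) reproduces the qualitative growth discussed above:

```python
import numpy as np

def stability_constant(A, dt, Nt):
    """eta = sqrt(Nt) * ||A_st^{-1}||_2 for the backward Euler space-time
    matrix with uniform time step (dense SVD; illustrative sizes only)."""
    Ns = A.shape[0]
    A_st = np.kron(np.eye(Nt), np.eye(Ns) - dt * A)
    for k in range(1, Nt):
        A_st[k * Ns:(k + 1) * Ns, (k - 1) * Ns:k * Ns] = -np.eye(Ns)
    # ||A_st^{-1}||_2 is the reciprocal of the smallest singular value
    return np.sqrt(Nt) / np.linalg.svd(A_st, compute_uv=False)[-1]
```

Because A_st(N_t)^{−1} contains A_st(N_t')^{−1} as a principal sub-block for N_t' < N_t, η is nondecreasing in N_t, consistent with Fig. 2.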

Numerical results
In this section, we apply both the space-time Galerkin and Petrov-Galerkin ROMs to two model problems: (i) a 2D linear diffusion equation in Section 7.1 and (ii) a 2D linear convection-diffusion equation in Section 7.2, and we demonstrate their accuracy and speed-up. The space-time ROMs are trained with solution snapshots associated with parameters in a chosen domain and are used to predict the solution for a parameter that is not included in the training parameter domain. We refer to this as the predictive case. The accuracy of the space-time ROM solution ũ_st(µ) is assessed by its relative error,

||ũ_st(µ) − u_st(µ)||_2 / ||u_st(µ)||_2,

and by the ℓ2 norm of the space-time residual, ||r_st(ũ_st; µ)||_2. The computational cost is measured in terms of CPU wall-clock time. The online speed-up is evaluated by dividing the wall-clock time of the FOM by that of the online phase of the ROM. For multi-query problems, the total speed-up is evaluated by dividing the time of all FOMs by the time of all ROMs, including training time. All calculations are performed on an Intel(R) Core(TM) i9-10900T CPU @ 1.90GHz and DDR4 Memory @ 2933MHz.
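The two accuracy metrics above are one-liners; a small helper (ours, hypothetical name) makes the reporting in the following subsections unambiguous:

```python
import numpy as np

def rom_metrics(u_st, u_st_rom, A_st, rhs_st):
    """Relative error and space-time residual norm used to assess a ROM.

    rhs_st is f_st + u_st0, so the residual is rhs_st - A_st @ u_st_rom."""
    rel_err = np.linalg.norm(u_st_rom - u_st) / np.linalg.norm(u_st)
    res = np.linalg.norm(rhs_st - A_st @ u_st_rom)
    return rel_err, res
```

Multiply `rel_err` by 100 to obtain the percentage values quoted in the text.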

2D linear diffusion equation
We consider a parameterized 2D linear diffusion equation with a source term and the initial condition u(x, y, t = 0) = 0. The backward Euler scheme with uniform time step size 2/N_t is employed, where we set N_t = 50. For spatial differentiation, the second-order central difference scheme is implemented for the diffusion terms. Discretizing the space domain into N_x = 70 and N_y = 70 uniform meshes in the x and y directions, respectively, gives N_s = (N_x − 1) × (N_y − 1) = 4,761 grid points, excluding boundary grid points. As a result, there are 238,050 free degrees of freedom in space-time.
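A spatial operator of this kind is a standard five-point stencil; the sketch below (our own illustration, not the paper's Appendix code) assembles the second-order central-difference Laplacian on the interior grid with homogeneous Dirichlet boundaries and confirms the N_s = 4,761 count quoted above:

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(Nx, Ny, Lx=1.0, Ly=1.0):
    """Second-order central-difference Laplacian on the interior
    (Nx-1) x (Ny-1) grid, homogeneous Dirichlet boundaries assumed."""
    dx, dy = Lx / Nx, Ly / Ny
    ex = np.ones(Nx - 1)
    ey = np.ones(Ny - 1)
    Dxx = sp.diags([ex[:-1], -2 * ex, ex[:-1]], [-1, 0, 1]) / dx**2
    Dyy = sp.diags([ey[:-1], -2 * ey, ey[:-1]], [-1, 0, 1]) / dy**2
    # Kronecker assembly: 1D operators lifted to the 2D tensor-product grid
    return sp.kron(sp.identity(Ny - 1), Dxx) + sp.kron(Dyy, sp.identity(Nx - 1))

A = laplacian_2d(70, 70)
assert A.shape == (4761, 4761)   # Ns = 69 * 69 = 4,761, as in the text
```

Scaling this operator by the diffusion coefficient, and adding an upwinded first-derivative stencil, gives the convection-diffusion operator used in Section 7.2.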
The final time snapshots of the FOM, the Galerkin space-time ROM, and the Petrov-Galerkin space-time ROM are seen in Fig. 6. Both ROMs have a basis size of n_s = 5 and n_t = 3, resulting in a reduction factor of (N_s N_t)/(n_s n_t) = 15,870. For the Galerkin method, the FOM and space-time ROM simulations with n_s = 5 and n_t = 3 take an average time of 6.1816 × 10^{−1} and 1.7646 × 10^{−3} seconds, respectively, resulting in a speed-up of 350.31. For the Petrov-Galerkin method, the FOM and space-time ROM simulations take an average time of 6.0809 × 10^{−1} and 1.6171 × 10^{−3} seconds, respectively, resulting in a speed-up of 376.04. For accuracy, the Galerkin method results in a 1.210 × 10^{−2} % relative error and a 1.249 × 10^{−2} space-time residual norm, while the Petrov-Galerkin method results in a 2.626 × 10^{−2} % relative error and a 1.029 × 10^{−2} space-time residual norm.
We investigate numerical tests to see the generalization capability of both the Galerkin and Petrov-Galerkin ROMs. The train parameter set, (µ1, µ2) ∈ {(−0.9, −0.9), (−0.9, −0.5), (−0.5, −0.9), (−0.5, −0.5)}, is used to train space-time ROMs with a basis of n_s = 5 and n_t = 3. The trained ROMs then solve predictive cases with the test parameter set, (µ1, µ2) ∈ {µ1 | µ1 = −1.7 + 1.5/14 i, i = 0, 1, . . . , 14} × {µ2 | µ2 = −1.7 + 1.5/14 j, j = 0, 1, . . . , 14}. Fig. 7 shows the relative errors over the test parameter set. The Galerkin and Petrov-Galerkin ROMs are the most accurate within the range of the train parameter points, i.e., [−0.9, −0.5] × [−0.9, −0.5]. As the parameter points go beyond the train parameter domain, the accuracy of the Galerkin and Petrov-Galerkin ROMs starts to deteriorate gradually. This implies that the Galerkin and Petrov-Galerkin ROMs have a trust region, which should be determined by the application space. For the Galerkin ROM, the online speed-up is about 389 on average, and the total times for the ROM and FOM are 107.14 and 132.66 seconds, respectively, resulting in a total speed-up of 1.24. For the Petrov-Galerkin ROM, the online speed-up is about 386 on average, and the total times for the ROM and FOM are 117.96 and 132.42 seconds, respectively, resulting in a total speed-up of 1.12. Since the training time does not depend on the number of test cases, we expect more speed-up for a larger number of test cases.

2D linear convection diffusion equation

The backward Euler scheme with uniform time step size 1/N_t is employed, where we set N_t = 50. For spatial differentiation, a second-order central difference scheme for the diffusion terms and a first-order backward difference scheme for the convection terms are implemented. Discretizing the space domain into N_x = 70 and N_y = 70 uniform meshes in the x and y directions, respectively, gives N_s = (N_x − 1) × (N_y − 1) = 4,761 grid points, excluding boundary grid points. As a result, there are 238,050 free degrees of freedom in space-time.
For the training phase, we collect solution snapshots associated with the following parameters: (µ1, µ2) ∈ {(0.03, 0.33), (0.03, 0.35), (0.05, 0.33), (0.05, 0.35)}, at which the FOM is solved. The Galerkin and Petrov-Galerkin space-time ROMs solve Equation (7.6) with the target parameter (µ1, µ2) = (0.04, 0.34). Figs. 9, 10, and 11 show the relative errors, the space-time residuals, and the online speed-ups as functions of the reduced dimensions n_s and n_t. We observe that both the Galerkin and Petrov-Galerkin ROMs with n_s = 5 and n_t = 3 achieve good accuracy (i.e., relative errors of 0.049% and 0.059%, respectively) and speed-up (i.e., 451.17 and 370.74, respectively). We also observe that the relative error of the Galerkin projection is smaller, but its space-time residual is larger than that of the Petrov-Galerkin projection. This is because the Petrov-Galerkin space-time ROM solution minimizes the space-time residual. The final time snapshots of the FOM, the Galerkin space-time ROM, and the Petrov-Galerkin space-time ROM are seen in Fig. 12. Both ROMs have a basis size of n_s = 5 and n_t = 3, resulting in a reduction factor of (N_s N_t)/(n_s n_t) = 15,870. For the Galerkin method, the FOM and space-time ROM simulations take an average time of 6.1562 × 10^{−1} and 1.3645 × 10^{−3} seconds, respectively, resulting in a speed-up of 451.17. For the Petrov-Galerkin method, the FOM and space-time ROM simulations take an average time of 5.7617 × 10^{−1} and 1.5541 × 10^{−3} seconds, respectively, resulting in a speed-up of 370.74. For accuracy, the Galerkin method results in a 4.898 × 10^{−2} % relative error and a 1.503 space-time residual norm, while the Petrov-Galerkin method results in a 5.878 × 10^{−2} % relative error and a 1.459 space-time residual norm.

With source term
We consider a parameterized 2D linear convection diffusion equation with a source term f(x, y, t), which is given by f(x, y, t) = 10, and the initial condition u(x, y, t = 0) = 0. (7.12) The backward Euler scheme with uniform time step size 2/N_t is employed, where we set N_t = 50. For spatial differentiation, a second-order central difference scheme for the diffusion terms and a first-order backward difference scheme for the convection terms are implemented. Discretizing the space domain into N_x = 70 and N_y = 70 uniform meshes in the x and y directions, respectively, gives N_s = (N_x − 1) × (N_y − 1) = 4,761 grid points, excluding boundary grid points. As a result, there are 238,050 free degrees of freedom in space-time.
For the training phase, we collect solution snapshots associated with the following parameters: (µ1, µ2) ∈ {(0.195, 0.018), (0.195, 0.022), (0.205, 0.018), (0.205, 0.022)}, at which the FOM is solved. The Galerkin and Petrov-Galerkin space-time ROMs solve Equation (7.9) with the target parameter (µ1, µ2) = (0.2, 0.02). Figs. 14, 15, and 16 show the relative errors, the space-time residuals, and the online speed-ups as functions of the reduced dimensions n_s and n_t. We observe that both the Galerkin and Petrov-Galerkin ROMs with n_s = 19 and n_t = 3 achieve good accuracy (i.e., relative errors of 0.217% and 0.265%, respectively) and speed-up (i.e., 153.87 and 139.88, respectively). We also observe that the relative error of the Galerkin projection is smaller, but its space-time residual is larger than that of the Petrov-Galerkin projection. This is because the Petrov-Galerkin space-time ROM solution minimizes the space-time residual. The final time snapshots of the FOM, the Galerkin space-time ROM, and the Petrov-Galerkin space-time ROM are seen in Fig. 17. Both ROMs have a basis size of n_s = 19 and n_t = 3, resulting in a reduction factor of (N_s N_t)/(n_s n_t) = 4,176. For the Galerkin method, the FOM and space-time ROM simulations take an average time of 6.1209 × 10^{−1} and 3.9780 × 10^{−3} seconds, respectively, resulting in a speed-up of 153.87. For the Petrov-Galerkin method, the FOM and space-time ROM simulations take an average time of 5.8780 × 10^{−1} and 4.2020 × 10^{−3} seconds, respectively, resulting in a speed-up of 139.89. For accuracy, the Galerkin method results in a 2.174 × 10^{−1} % relative error and a 1.564 × 10^3 space-time residual norm, while the Petrov-Galerkin method results in a 2.652 × 10^{−1} % relative error and a 1.550 × 10^3 space-time residual norm.

Conclusion
In this work, we have formulated Galerkin and Petrov-Galerkin space-time ROMs using block structures, which enable us to implement the space-time ROM operators efficiently. We also presented an a posteriori error bound for both Galerkin and Petrov-Galerkin space-time ROMs. We demonstrated that both Galerkin and Petrov-Galerkin space-time ROMs solve 2D linear diffusion problems and 2D linear convection diffusion problems accurately and efficiently. Both space-time reduced order models were able to achieve O(10^{−3}) to O(10^{−4}) relative errors with O(10^2) speed-ups. We also presented the Python codes used for the numerical examples in Appendix A so that readers can easily reproduce our numerical results. Furthermore, each Python code is less than 120 lines, demonstrating the ease of implementing our space-time ROMs. We used a linear-subspace-based ROM, which is suitable for accelerating physical simulations whose solution space has a small Kolmogorov n-width. However, the linear-subspace-based ROM is not able to represent advection-dominated or sharp-gradient solutions with a small number of bases. To address this challenge, a nonlinear-manifold-based ROM can be used; recently, nonlinear-manifold-based ROMs have been developed for spatial ROMs [72,73]. In future work, we aim to develop a nonlinear-manifold-based space-time ROM.

Fig. 2(a) shows ||(A_st)^{−1}||_2 for the backward Euler time integrator with uniform time step size. Combined with √N_t, the growth rate of the stability constant η is shown in Fig. 2(b). This error bound shows a marked improvement over the bounds for the spatial Galerkin and Petrov-Galerkin ROMs, which grow exponentially in time [1].

Figure 2: Growth rate of the stability constant in Theorem 6.1. The backward Euler time stepping scheme with uniform time step size ∆t = 10^{−2} is used.
(a) Relative errors vs reduced dimensions for Galerkin projection; (b) Relative errors vs reduced dimensions for Petrov-Galerkin projection.
(a) Space-time residuals vs reduced dimensions for Galerkin projection; (b) Space-time residuals vs reduced dimensions for Petrov-Galerkin projection.
(a) Speedups vs reduced dimensions for Galerkin projection; (b) Speedups vs reduced dimensions for Petrov-Galerkin projection.

Figure 7: The comparison of the Galerkin and Petrov-Galerkin ROMs for predictive cases

Figure 8: Plot of Equation (7.8).
(a) Relative errors vs reduced dimensions for Galerkin projection; (b) Relative errors vs reduced dimensions for Petrov-Galerkin projection.
(a) Space-time residuals vs reduced dimensions for Galerkin projection; (b) Space-time residuals vs reduced dimensions for Petrov-Galerkin projection.
The train parameter set, (µ1, µ2) ∈ {(0.03, 0.33), (0.03, 0.35), (0.05, 0.33), (0.05, 0.35)}, is used to train space-time ROMs with a basis of n_s = 5 and n_t = 3. The trained ROMs then solve predictive cases with the test parameter set, (µ1, µ2) ∈ {µ1 | µ1 = 0.01 + 0.06/11 i, i = 0, 1, . . . , 11} × {µ2 | µ2 = 0.31 + 0.06/11 j, j = 0, 1, . . . , 11}. Fig. 13 shows the relative errors over the test parameter set. The Galerkin and Petrov-Galerkin ROMs are the most accurate within the range of the train parameter points, i.e., [0.03, 0.05] × [0.33, 0.35]. As the parameter points go beyond the train parameter domain, the accuracy of the Galerkin and Petrov-Galerkin ROMs starts to deteriorate gradually. This implies that the Galerkin and Petrov-Galerkin ROMs have a trust region, which should be determined by the application. For the Galerkin ROM, the online speed-up is about 387 on average, and the total times for the ROM and FOM are 65.03 and 83.89 seconds, respectively, resulting in a total speed-up of 1.29. For the Petrov-Galerkin ROM, the online speed-up is about 385 on average, and the total times for the ROM and FOM are 70.55 and 83.34 seconds, respectively.
(a) Speedups vs reduced dimensions for Galerkin projection; (b) Speedups vs reduced dimensions for Petrov-Galerkin projection.

Figure 13: The comparison of the Galerkin and Petrov-Galerkin ROMs for predictive cases.
(a) Relative errors vs reduced dimensions for Galerkin projection; (b) Relative errors vs reduced dimensions for Petrov-Galerkin projection.

Figure 14: 2D linear convection diffusion equation with source term. Relative errors vs reduced dimensions.
(a) Space-time residuals vs reduced dimensions for Galerkin projection; (b) Space-time residuals vs reduced dimensions for Petrov-Galerkin projection.

Figure 15: 2D linear convection diffusion equation with source term. Space-time residuals vs reduced dimensions.
Fig. 18 shows the relative errors over the test parameter set. The Galerkin and Petrov-Galerkin ROMs are the most accurate within the range of the train parameter points, i.e., [0.195, 0.205] × [0.018, 0.022]. As the parameter points go beyond the train parameter domain, the accuracy of the Galerkin and Petrov-Galerkin ROMs starts to deteriorate gradually.
(a) Speedups vs reduced dimensions for Galerkin projection; (b) Speedups vs reduced dimensions for Petrov-Galerkin projection.

Figure 16: 2D linear convection diffusion equation with source term. Speedups vs reduced dimensions.

Figure 18: The comparison of the Galerkin and Petrov-Galerkin ROMs for predictive cases

Table 3: Comparison of Galerkin and Petrov-Galerkin computational complexities. With the block structures, forming the space-time ROM operators costs O(N_s N_t n_s n_t) for both the Galerkin and Petrov-Galerkin projections.