# Reduced Order Modeling Using Advection-Aware Autoencoders


## Abstract


## 1. Introduction

## 2. Related Work

## 3. Methodology

#### 3.1. Autoencoders

#### 3.2. Advection-Aware Autoencoder Design

#### 3.3. Long Short-Term Memory (LSTM) Network

## 4. Results

#### 4.1. Linear Advection Problem

#### 4.2. Advecting Viscous Shock Problem

#### 4.2.1. AA Autoencoder Models for Varying Advection Strength

#### 4.2.2. LSTM Models for System Dynamics

The modal trajectories in panels (**a**)–(**c**) of Figure 16 are obtained with LSTM models trained on the latent space coefficients corresponding to snapshots in the parametric training dataset, i.e., $Re=50,300,500$, respectively. The modal trajectories in panel (**d**) are obtained by evaluating the latent space coefficients for a test parameter value $Re=400$ using the AA3 model and then training an LSTM model to learn the evolution of these coefficients. As mentioned before, the LSTM models are trained on the first $90\%$ of the time steps of each time series, and the boundary of the training data is marked by a dashed vertical line in each plot. The encoded true snapshots and the LSTM predictions for both the training and the test parameter values display a high degree of agreement, especially for time steps within the LSTM training window. It is also encouraging that, for a short interval beyond the training window, even the extrapolatory predictions of the trained LSTM models agree with the encoded true snapshots. This behavior can also be seen in Figure 17, where the true decoder of the AA3 model is applied to the LSTM predictions and the results are compared with the high-dimensional snapshots. These plots show the predicted and high-dimensional solution snapshots at four intermediate times, with the x-axis representing the spatial grid. The plotting time steps are distributed uniformly throughout the simulation time window and are chosen so that the final time step $t=1.90$ lies outside the LSTM training window. The trained autoencoder (AE)-LSTM model clearly captures the advecting viscous shock-like feature fairly well, even for the extrapolatory time step. This demonstrates the ease of constructing dynamics models in a latent space defined by the parametric AA autoencoder model, even with a standard implementation of a simple, lightweight LSTM network. While some discrepancies emerge over extrapolatory and longer prediction windows, these can be attributed to the well-known issues with autoregressive modeling of time series data using standard LSTM networks [70]. However, for applications where time series predictions are desired over shorter windows, the proposed AA autoencoder + LSTM approach shows the capacity for effective extrapolatory predictions.
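The autoregressive rollout described above can be sketched in a few lines. Here `step` is a hypothetical stand-in for the trained LSTM's one-step latent prediction (the actual network weights and architecture are not reproduced); any map $\mathbb{R}^k \to \mathbb{R}^k$ illustrates the mechanics of feeding each prediction back in as the next input.

```python
import numpy as np

# Autoregressive rollout of a one-step latent-dynamics model, as used to
# extrapolate beyond the LSTM training window. `step` is a placeholder
# for the trained LSTM's one-step prediction.
def rollout(step, z0, n_steps):
    traj = [np.asarray(z0, dtype=float)]
    for _ in range(n_steps):
        traj.append(step(traj[-1]))   # feed each prediction back in
    return np.stack(traj)             # shape (n_steps + 1, k)

# Toy stand-in dynamics: a slow contraction of the latent coefficients.
A = 0.99 * np.eye(3)
traj = rollout(lambda z: A @ z, z0=np.ones(3), n_steps=100)
print(traj.shape)             # (101, 3)
print(round(traj[-1, 0], 4))  # 0.99**100 ≈ 0.366
```

Because each step consumes the previous prediction, errors compound over long horizons, which is the source of the longer-window discrepancies noted above.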

In panel (**a**) of Figure 19, pLSTM predictions of the latent space evolution for $Re=50$ are computed by choosing the encoded high-dimensional solution at a randomly selected time, $t=0.26$, as the initial data and marching forward recursively until $t=2$. Similarly, in panel (**b**), the initial point is chosen at $t=0.50$ and the latent space evolution for $Re=500$ is computed recursively. In both cases, the predicted trajectories show remarkable agreement with the encoded high-dimensional solution trajectories. Finally, in panels (**c**) and (**d**), the pLSTM latent space predictions are decoded using model AA3 and compared with the high-dimensional solution snapshots at four intermediate time steps. Again, the predicted solutions closely align with the true high-dimensional snapshot data.
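A minimal sketch of a parametric recursive rollout follows. It assumes, for illustration only, that the pLSTM conditions on the parameter by augmenting the latent input with a normalized parameter value `mu`; `step` is again a hypothetical stand-in for the trained network, and the toy dynamics below are not the paper's model.

```python
import numpy as np

# Parametric rollout: the one-step model sees the latent state augmented
# with a (normalized) parameter, so a single network can serve all Re
# values. This input-augmentation scheme is an assumption for the sketch.
def parametric_rollout(step, z0, mu, n_steps):
    z = np.asarray(z0, dtype=float)
    traj = [z]
    for _ in range(n_steps):
        z = step(np.concatenate([z, [mu]]))  # input in R^(k+1) -> R^k
        traj.append(z)
    return np.stack(traj)

# Toy stand-in: decay rate depends on the parameter mu = Re / Re_max.
step = lambda x: (1.0 - 0.01 * x[-1]) * x[:-1]
traj = parametric_rollout(step, z0=np.ones(2), mu=0.5, n_steps=10)
print(traj.shape)  # (11, 2)
```

Starting the recursion from any encoded snapshot, as in panels (**a**,**b**) above, amounts to passing that snapshot's latent vector as `z0`.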

## 5. Conclusions and Future Work

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Kutz, J.N. Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data; Oxford University Press, Inc.: Oxford, UK, 2013.
- Holmes, P.; Lumley, J.L.; Berkooz, G. Turbulence, Coherent Structures, Dynamical Systems and Symmetry; Cambridge Monographs on Mechanics, Cambridge University Press: Cambridge, UK, 1996.
- Benner, P.; Gugercin, S.; Willcox, K. A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems. SIAM Rev. **2015**, 57, 483–531.
- Brunton, S.L.; Noack, B.R.; Koumoutsakos, P. Machine Learning for Fluid Mechanics. Annu. Rev. Fluid Mech. **2020**, 52, 477–508.
- Hesthaven, J.S.; Rozza, G.; Stamm, B. Certified Reduced Basis Methods for Parametrized Partial Differential Equations; Springer: Cham, Switzerland, 2016; pp. 1–131.
- Rowley, C.W.; Dawson, S.T. Model Reduction for Flow Analysis and Control. Annu. Rev. Fluid Mech. **2017**, 49, 387–417.
- Taira, K.; Brunton, S.L.; Dawson, S.T.; Rowley, C.W.; Colonius, T.; McKeon, B.J.; Schmidt, O.T.; Gordeyev, S.; Theofilis, V.; Ukeiley, L.S. Modal analysis of fluid flows: An overview. AIAA J. **2017**, 55, 4013–4041.
- Berkooz, G.; Holmes, P.; Lumley, J.L. The proper orthogonal decomposition in the analysis of turbulent flows. Annu. Rev. Fluid Mech. **1993**, 25, 539–575.
- Lozovskiy, A.; Farthing, M.; Kees, C.; Gildin, E. POD-based model reduction for stabilized finite element approximations of shallow water flows. J. Comput. Appl. Math. **2016**, 302, 50–70.
- Lozovskiy, A.; Farthing, M.; Kees, C. Evaluation of Galerkin and Petrov–Galerkin model reduction for finite element approximations of the shallow water equations. Comput. Methods Appl. Mech. Eng. **2017**, 318, 537–571.
- Carlberg, K.; Barone, M.; Antil, H. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction. J. Comput. Phys. **2017**, 330, 693–734.
- Dutta, S.; Rivera-Casillas, P.; Cecil, O.; Farthing, M. pyNIROM—A suite of python modules for non-intrusive reduced order modeling of time-dependent problems. Softw. Impacts **2021**, 10, 100129.
- Alla, A.; Kutz, J.N. Nonlinear model order reduction via dynamic mode decomposition. SIAM J. Sci. Comput. **2017**, 39, B778–B796.
- Wu, Z.; Brunton, S.L.; Revzen, S. Challenges in Dynamic Mode Decomposition. J. R. Soc. Interface **2021**, 18, 20210686.
- Xiao, D.; Fang, F.; Pain, C.C.; Navon, I.M. A parameterized non-intrusive reduced order model and error analysis for general time-dependent nonlinear partial differential equations and its applications. Comput. Methods Appl. Mech. Eng. **2017**, 317, 868–889.
- Dutta, S.; Farthing, M.W.; Perracchione, E.; Savant, G.; Putti, M. A greedy non-intrusive reduced order model for shallow water equations. J. Comput. Phys. **2021**, 439, 110378.
- Guo, M.; Hesthaven, J.S. Data-driven reduced order modeling for time-dependent problems. Comput. Methods Appl. Mech. Eng. **2019**, 345, 75–99.
- Xiao, D. Error estimation of the parametric non-intrusive reduced order model using machine learning. Comput. Methods Appl. Mech. Eng. **2019**, 355, 513–534.
- Hesthaven, J.S.; Ubbiali, S. Non-intrusive reduced order modeling of nonlinear problems using neural networks. J. Comput. Phys. **2018**, 363, 55–78.
- Wan, Z.Y.; Vlachas, P.; Koumoutsakos, P.; Sapsis, T. Data-assisted reduced-order modeling of extreme events in complex dynamical systems. PLoS ONE **2018**, 13, e0197704.
- Maulik, R.; Mohan, A.; Lusch, B.; Madireddy, S.; Balaprakash, P.; Livescu, D. Time-series learning of latent-space dynamics for reduced-order model closure. Phys. D Nonlinear Phenom. **2020**, 405, 132368.
- Chen, R.T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural Ordinary Differential Equations. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS’18), Montréal, QC, Canada, 2–8 December 2018; pp. 6572–6583.
- Dutta, S.; Rivera-Casillas, P.; Farthing, M.W. Neural Ordinary Differential Equations for Data-Driven Reduced Order Modeling of Environmental Hydrodynamics. In Proceedings of the AAAI 2021 Spring Symposium on Combining Artificial Intelligence and Machine Learning with Physical Sciences, Virtual Meeting, 22–24 March 2021; CEUR-WS: Stanford, CA, USA, 2021.
- Wu, P.; Sun, J.; Chang, X.; Zhang, W.; Arcucci, R.; Guo, Y.; Pain, C.C. Data-driven reduced order model with temporal convolutional neural network. Comput. Methods Appl. Mech. Eng. **2020**, 360, 112766.
- Taddei, T.; Perotto, S.; Quarteroni, A. Reduced basis techniques for nonlinear conservation laws. ESAIM Math. Model. Numer. Anal. **2015**, 49, 787–814.
- Greif, C.; Urban, K. Decay of the Kolmogorov N-width for wave problems. Appl. Math. Lett. **2019**, 96, 216–222.
- Carlberg, K.; Farhat, C.; Cortial, J.; Amsallem, D. The GNAT method for nonlinear model reduction: Effective implementation and application to computational fluid dynamics and turbulent flows. J. Comput. Phys. **2013**, 242, 623–647.
- Nair, N.J.; Balajewicz, M. Transported snapshot model order reduction approach for parametric, steady-state fluid flows containing parameter-dependent shocks. Int. J. Numer. Methods Eng. **2019**, 117, 1234–1262.
- Rim, D.; Moe, S.; LeVeque, R.J. Transport reversal for model reduction of hyperbolic partial differential equations. SIAM/ASA J. Uncertain. Quantif. **2018**, 6, 118–150.
- Reiss, J.; Schulze, P.; Sesterhenn, J.; Mehrmann, V. The shifted proper orthogonal decomposition: A mode decomposition for multiple transport phenomena. SIAM J. Sci. Comput. **2018**, 40, A1322–A1344.
- Rim, D.; Peherstorfer, B.; Mandli, K.T. Manifold approximations via transported subspaces: Model reduction for transport-dominated problems. arXiv **2019**, arXiv:1912.13024.
- Cagniart, N.; Maday, Y.; Stamm, B. Model order reduction for problems with large convection effects. In Contributions to Partial Differential Equations and Applications; Springer International Publishing: Cham, Switzerland, 2019; pp. 131–150.
- Taddei, T. A registration method for model order reduction: Data compression and geometry reduction. SIAM J. Sci. Comput. **2020**, 42, A997–A1027.
- Peherstorfer, B. Model reduction for transport-dominated problems via online adaptive bases and adaptive sampling. SIAM J. Sci. Comput. **2020**, 42, A2803–A2836.
- Kashima, K. Nonlinear model reduction by deep autoencoder of noise response data. In Proceedings of the 2016 IEEE 55th Conference on Decision and Control (CDC), Las Vegas, NV, USA, 12–14 December 2016; pp. 5750–5755.
- Lee, K.; Carlberg, K.T. Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. J. Comput. Phys. **2020**, 404, 108973.
- Kim, Y.; Choi, Y.; Widemann, D.; Zohdi, T. A fast and accurate physics-informed neural network reduced order model with shallow masked autoencoder. J. Comput. Phys. **2022**, 451, 110841.
- Willcox, K. Unsteady flow sensing and estimation via the gappy proper orthogonal decomposition. Comput. Fluids **2006**, 35, 208–226.
- Chaturantabut, S.; Sorensen, D.C. Nonlinear model reduction via Discrete Empirical Interpolation. SIAM J. Sci. Comput. **2010**, 32, 2737–2764.
- Mendible, A.; Brunton, S.L.; Aravkin, A.Y.; Lowrie, W.; Kutz, J.N. Dimensionality reduction and reduced-order modeling for traveling wave physics. Theor. Comput. Fluid Dyn. **2020**, 34, 385–400.
- Haasdonk, B.; Ohlberger, M. Adaptive basis enrichment for the reduced basis method applied to finite volume schemes. In Proceedings of the Fifth International Symposium on Finite Volumes for Complex Applications, Aussois, France, 8–13 June 2008; pp. 471–479.
- Chen, P.; Quarteroni, A.; Rozza, G. A weighted empirical interpolation method: A priori convergence analysis and applications. ESAIM Math. Model. Numer. Anal. **2014**, 48, 943–953.
- Amsallem, D.; Farhat, C. An online method for interpolating linear parametric reduced-order models. SIAM J. Sci. Comput. **2011**, 33, 2169–2198.
- Maday, Y.; Stamm, B. Locally adaptive greedy approximations for anisotropic parameter reduced basis spaces. SIAM J. Sci. Comput. **2013**, 35, A2417–A2441.
- Peherstorfer, B.; Butnaru, D.; Willcox, K.; Bungartz, H.J. Localized discrete empirical interpolation method. SIAM J. Sci. Comput. **2014**, 36, A168–A192.
- Carlberg, K. Adaptive h-refinement for reduced-order models. Int. J. Numer. Methods Eng. **2015**, 102, 1192–1210.
- Peherstorfer, B.; Willcox, K. Online adaptive model reduction for nonlinear systems via low-rank updates. SIAM J. Sci. Comput. **2015**, 37, A2123–A2150.
- Tenenbaum, J. Mapping a manifold of perceptual observations. In Advances in Neural Information Processing Systems; Jordan, M., Kearns, M., Solla, S., Eds.; MIT Press: Cambridge, MA, USA, 1998; Volume 10.
- Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science **2000**, 290, 2323–2326.
- Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. **2003**, 15, 1373–1396.
- van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. **2008**, 9, 2579–2605.
- Kohonen, T. Self-organized formation of topologically correct feature maps. Biol. Cybern. **1982**, 43, 59–69.
- Mika, S.; Schölkopf, B.; Smola, A.; Müller, K.R.; Scholz, M.; Rätsch, G. Kernel PCA and de-noising in feature spaces. In Advances in Neural Information Processing Systems; Kearns, M., Solla, S., Cohn, D., Eds.; MIT Press: Cambridge, MA, USA, 1999; Volume 11.
- Walder, C.; Schölkopf, B. Diffeomorphic dimensionality reduction. In Advances in Neural Information Processing Systems 21; Koller, D., Schuurmans, D., Bengio, Y., Bottou, L., Eds.; Curran Associates Inc.: Red Hook, NY, USA, 2009; pp. 1713–1720.
- Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science **2006**, 313, 504–507.
- Masci, J.; Meier, U.; Cireşan, D.; Schmidhuber, J. Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction. In Artificial Neural Networks and Machine Learning—ICANN 2011; Honkela, T., Duch, W., Girolami, M., Kaski, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 52–59.
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico, 2–4 May 2016; pp. 1–16.
- Champion, K.; Lusch, B.; Nathan Kutz, J.; Brunton, S.L. Data-driven discovery of coordinates and governing equations. Proc. Natl. Acad. Sci. USA **2019**, 116, 22445–22451.
- Dutta, S.; Rivera-Casillas, P.; Cecil, O.M.; Farthing, M.W.; Perracchione, E.; Putti, M. Data-driven reduced order modeling of environmental hydrodynamics using deep autoencoders and neural ODEs. arXiv **2021**, arXiv:2107.02784.
- Maulik, R.; Lusch, B.; Balaprakash, P. Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders. Phys. Fluids **2021**, 33, 037106.
- Wehmeyer, C.; Noé, F. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics. J. Chem. Phys. **2018**, 148, 241703.
- Nishizaki, H. Data augmentation and feature extraction using variational autoencoder for acoustic modeling. In Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 12–15 December 2017; pp. 1222–1227.
- Bakarji, J.; Champion, K.; Kutz, J.N.; Brunton, S.L. Discovering Governing Equations from Partial Measurements with Deep Delay Autoencoders. arXiv **2022**, arXiv:2201.05136.
- Erichson, N.B.; Muehlebach, M.; Mahoney, M.W. Physics-informed Autoencoders for Lyapunov-stable Fluid Flow Prediction. arXiv **2019**, arXiv:1905.10866.
- Gonzalez, F.J.; Balajewicz, M. Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems. arXiv **2018**, arXiv:1808.01346.
- Mojgani, R.; Balajewicz, M. Low-Rank Registration Based Manifolds for Convection-Dominated PDEs. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), Vancouver, BC, Canada, 2–9 February 2021; pp. 399–407.
- Plaut, E. From Principal Subspaces to Principal Components with Linear Autoencoders. arXiv **2018**, arXiv:1804.10253.
- Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. **2016**, 28, 2222–2232.
- Eivazi, H.; Veisi, H.; Naderi, M.H.; Esfahanian, V. Deep neural networks for nonlinear model order reduction of unsteady flows. Phys. Fluids **2020**, 32, 105104.
- Maulik, R.; Lusch, B.; Balaprakash, P. Non-autoregressive time-series methods for stable parametric reduced-order models. Phys. Fluids **2020**, 32, 087115.
- del Águila Ferrandis, J.; Triantafyllou, M.S.; Chryssostomidis, C.; Karniadakis, G.E. Learning functionals via LSTM neural networks for predicting vessel dynamics in extreme sea states. Proc. R. Soc. A **2021**, 477, 20190897.
- Chattopadhyay, A.; Mustafa, M.; Hassanzadeh, P.; Kashinath, K. Deep Spatial Transformers for Autoregressive Data-Driven Forecasting of Geophysical Turbulence. In Proceedings of the 10th International Conference on Climate Informatics, Oxford, UK, 22–25 September 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 106–112.
- Pathak, J.; Hunt, B.; Girvan, M.; Lu, Z.; Ott, E. Model-Free Prediction of Large Spatiotemporally Chaotic Systems from Data: A Reservoir Computing Approach. Phys. Rev. Lett. **2018**, 120, 24102.
- Usman, A.; Rafiq, M.; Saeed, M.; Nauman, A.; Almqvist, A.; Liwicki, M. Machine learning-accelerated computational fluid dynamics. Proc. Natl. Acad. Sci. USA **2021**, 118, e2101784118.
- Stabile, G.; Zancanaro, M.; Rozza, G. Efficient geometrical parametrization for finite-volume-based reduced order methods. Int. J. Numer. Methods Eng. **2020**, 121, 2655–2682.

**Figure 2.** Advection-aware autoencoder architecture. An encoder network ${\mathit{\chi}}_{e}$ extracts the dominant features of $v\in {\mathbb{R}}^{N}$ into a compressed latent space $z\in {\mathbb{R}}^{k}$. One decoder network ${\mathit{\varphi}}_{s}$ maps the latent vector to the shifted snapshot, ${v}_{s}\in {\mathbb{R}}^{N}$. The second decoder network ${\mathit{\chi}}_{d}$ maps the latent vector back to an approximation of the original snapshot, $\tilde{v}\in {\mathbb{R}}^{N}$.
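The forward pass through this architecture can be sketched in numpy. The weights below are random placeholders and each network is reduced to a single layer purely for illustration; the trained parameters, layer counts, and activations of the actual AA models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 128, 5  # full-order and latent dimensions (illustrative values)

def dense(w, b, x, act=np.tanh):
    """One fully connected layer: act(w @ x + b)."""
    return act(w @ x + b)

# Encoder chi_e: R^N -> R^k
W_e, b_e = 0.1 * rng.standard_normal((k, N)), np.zeros(k)
# Decoder phi_s: R^k -> R^N (shifted-snapshot branch)
W_s, b_s = 0.1 * rng.standard_normal((N, k)), np.zeros(N)
# Decoder chi_d: R^k -> R^N (reconstruction branch)
W_d, b_d = 0.1 * rng.standard_normal((N, k)), np.zeros(N)

v = rng.standard_normal(N)      # a full-order snapshot
z = dense(W_e, b_e, v)          # latent vector z
v_s = dense(W_s, b_s, z)        # predicted shifted snapshot v_s
v_tilde = dense(W_d, b_d, z)    # reconstruction v~ of the input
print(z.shape, v_s.shape, v_tilde.shape)  # (5,) (128,) (128,)
```

The key structural point is that both decoders share the single latent vector `z` produced by the encoder.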

**Figure 3.** The relative information content for different numbers of retained POD modes. The singular values are computed by taking an SVD of the high-fidelity snapshots for an advecting circular Gaussian pulse (see Equation (6)) of varying width ($\sigma $) traveling at a constant speed $c=1$.
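The relative information content (RIC) plotted here can be computed directly from the singular values of the snapshot matrix. The energy-ratio definition below is the standard one; the paper's exact convention is assumed to match.

```python
import numpy as np

def relative_information_content(snapshots, k):
    """RIC of the leading k POD modes.

    snapshots: (N, Nt) array whose columns are solution snapshots.
    Returns sum of the k largest squared singular values divided by
    the total sum of squared singular values.
    """
    s = np.linalg.svd(snapshots, compute_uv=False)
    return np.sum(s[:k] ** 2) / np.sum(s ** 2)

# Example: a rank-2 snapshot matrix is fully captured by 2 modes.
x = np.linspace(0.0, 1.0, 200)
snaps = np.outer(np.sin(np.pi * x), np.ones(50)) \
      + np.outer(np.cos(np.pi * x), np.linspace(0.0, 1.0, 50))
print(round(relative_information_content(snaps, 2), 6))  # → 1.0
```

For advection-dominated snapshots, as in the figure, the RIC rises slowly with `k`, which is precisely the slow Kolmogorov n-width decay that motivates the AA architecture.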

**Figure 4.** Training characteristics of two AA autoencoder networks trained using a parametric dataset of snapshots for a 1D advecting Gaussian pulse parameterized with varying support of the pulse profile, $\sigma =\{5,10,16\}$. AA1 denotes the model trained with the input features augmented by parameter values, while AA2 denotes the model where both input features and output labels are augmented. The left panel shows the decay of training and validation losses during training. The right panel shows the evolution of the loss components during training.
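Since the AA autoencoder has two decoder branches, its training loss combines two reconstruction terms, one per decoder. The sketch below assumes a plain unweighted sum of mean-squared errors; the paper's actual weighting of the two components may differ, so treat this purely as an illustration of the two-term structure.

```python
import numpy as np

def aa_loss(v, v_s, v_tilde_pred, v_s_pred):
    """Two-component AA reconstruction loss (unweighted sum assumed).

    v            : true snapshot
    v_s          : true shifted snapshot
    v_tilde_pred : chi_d decoder output (reconstruction of v)
    v_s_pred     : phi_s decoder output (prediction of v_s)
    """
    mse = lambda a, b: np.mean((a - b) ** 2)
    return mse(v, v_tilde_pred) + mse(v_s, v_s_pred)

v, v_s = np.ones(8), np.zeros(8)
print(aa_loss(v, v_s, v, v_s))  # perfect reconstruction → 0.0
print(aa_loss(v, v_s, v_s, v))  # both branches wrong → 2.0
```

Tracking the two terms separately during training is what produces the per-component curves shown in the right panels of Figures 4 and 9.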

**Figure 5.** Prediction performance of ${\mathit{\varphi}}_{s}$ and ${\mathit{\chi}}_{d}$ decoders on training data. (**a**,**b**) predictions of shifted and true snapshots, respectively, for pulse size $\sigma =5$, and (**c**,**d**) predictions of shifted and true snapshots, respectively, for pulse size $\sigma =16$ at an intermediate time $t=6.92$ min using the AA1 model. (**e**) Relative errors for the decoder predictions using different values of the parameter from the training set.

**Figure 6.** Prediction performance of ${\mathit{\varphi}}_{s}$ and ${\mathit{\chi}}_{d}$ decoders on unseen data. (**a**,**b**) predictions of shifted and true snapshots, respectively, for unseen pulse size $\sigma =8$ at time $t=6.92$ min using the AA1 model. (**c**) Relative errors for the decoder predictions using two different values of the parameter from the unseen test set.

**Figure 7.** Time evolution of the high-fidelity snapshots for the advecting viscous shock problem (see Equation (9)), parameterized with variable Reynolds number, $Re$.

**Figure 8.** The relative information content for different numbers of retained POD modes. The singular values are computed by taking an SVD of the high-fidelity snapshots for the advecting viscous shock problem (see Equation (9)), parameterized with variable Reynolds number, $Re$.

**Figure 9.** Training characteristics of two AA autoencoder networks trained using a parametric dataset of snapshots for the advecting viscous shock problem parameterized with variable Reynolds number, $Re=\{50,150,300,500\}$. AA3 denotes the model trained with the entire time snapshot history for every parameter value, while AA4 denotes the model trained with the initial $90\%$ of the time snapshot history for each parameter value. The left panel shows the decay of training and validation losses during training. The right panel shows the evolution of the loss components during training.

**Figure 10.** Prediction performance of ${\mathit{\chi}}_{d}$ and ${\mathit{\varphi}}_{s}$ decoders on training data. Predictions of (**a**) true and (**b**) shifted snapshots for a training parameter value, $Re=50$, using models AA3 and AA4. The left column shows the predicted solutions, the center column shows the high-fidelity solutions, and the right column shows the error between the two.

**Figure 11.** Prediction performance of ${\mathit{\chi}}_{d}$ and ${\mathit{\varphi}}_{s}$ decoders on training data. Predictions of (**a**) true and (**b**) shifted snapshots for a training parameter value, $Re=500$, using models AA3 and AA4. The left column shows the predicted solutions, the center column shows the high-fidelity solutions, and the right column shows the error between the two.

**Figure 12.** Relative errors of (**a**) ${\mathit{\varphi}}_{s}$ and (**b**) ${\mathit{\chi}}_{d}$ predictions using the AA3 and AA4 models for snapshots generated with the training parameter values.

**Figure 13.** Prediction performance of ${\mathit{\chi}}_{d}$ and ${\mathit{\varphi}}_{s}$ decoders on unseen data. Predictions of (**a**) true and (**b**) shifted snapshots for a test parameter value, $Re=400$, using models AA3 and AA4. The left column shows the predicted solutions, the center column shows the high-fidelity solutions, and the right column shows the error between the two.

**Figure 14.** Prediction performance of ${\mathit{\chi}}_{d}$ and ${\mathit{\varphi}}_{s}$ decoders on unseen data. Predictions of (**a**) true and (**b**) shifted snapshots for a test parameter value, $Re=600$, using models AA3 and AA4. The left column shows the predicted solutions, the center column shows the high-fidelity solutions, and the right column shows the error between the two.

**Figure 15.** Relative errors of ${\mathit{\varphi}}_{s}$ (**left**) and ${\mathit{\chi}}_{d}$ (**right**) predictions using the AA3 and AA4 models for snapshots generated with the unseen test parameter values.

**Figure 16.** Comparing latent space predictions obtained using a parametric AA autoencoder and LSTM models. The LSTM models are trained separately for each training parameter value as presented in (**a**) $Re=50$, (**b**) $Re=300$, (**c**) $Re=500$, and for a test parameter value shown in (**d**) $Re=400$. All LSTM models are trained using the first $90\%$ of the total time steps (as demarcated by the vertical lines in each figure), and the remaining time steps are used for evaluating extrapolatory predictions.

**Figure 17.** Comparing predictions of the high-dimensional solutions for the parametric advecting viscous shock problem using an AA autoencoder and LSTM models. The LSTM models are individually trained for each parameter value and are presented as (**a**) $Re=50$, (**b**) $Re=300$, (**c**) $Re=500$, and (**d**) $Re=400$.

**Figure 18.** Comparing predictions obtained using a parametric AA autoencoder and a parametric LSTM (pLSTM) model. (**a**,**b**) show the latent space predictions using the pLSTM model and the encoded high-dimensional snapshots for training parameter values $Re=150$ and $Re=300$, respectively. (**c**,**d**) compare the corresponding decoded pLSTM predictions with the high-dimensional snapshots at four intermediate time steps.

**Figure 19.** Comparing recursive predictions obtained using a parametric AA autoencoder and a parametric LSTM (pLSTM) model. (**a**,**b**) show the recursive latent space predictions using the pLSTM model and the encoded high-dimensional snapshots for training parameter values $Re=50$ and $Re=500$, respectively. (**c**,**d**) compare the corresponding decoded pLSTM predictions with the high-dimensional snapshots at four intermediate time steps.

| Hyperparameters | AA1 | AA2 |
| --- | --- | --- |
| Input/Output | Augmented input, non-augmented output | Augmented input and output |
| Hidden Units (50–1500) | 629, 251, 62 | 629, 251, 62 |
| Batch Size (8–128) | 32 | 24 |
| Latent Dimension (5–50) | 15 | 15 |
| Activation (ReLU, selu, linear, tanh, swish) | selu | swish |

| Hyperparameters | AA3 | AA4 |
| --- | --- | --- |
| Input/Output | Augmented input and output | Augmented input and output |
| Hidden Units (50–150) | 50 | 100, 50 |
| Batch Size (8–128) | 24 | 24 |
| Latent Dimension (3–10) | 5 | 5 |
| Activation (selu, tanh, swish) | swish | swish |
| Initial Learning Rate ($1\times {10}^{-3}$–$1\times {10}^{-5}$) | $5\times {10}^{-4}$ | $3\times {10}^{-4}$ |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Dutta, S.; Rivera-Casillas, P.; Styles, B.; Farthing, M.W.
Reduced Order Modeling Using Advection-Aware Autoencoders. *Math. Comput. Appl.* **2022**, *27*, 34.
https://doi.org/10.3390/mca27030034
