# Machine Learning Methods for Multiscale Physics and Urban Engineering Problems


## Abstract


## 1. Introduction

## 2. The Mysteriously Long Time Dynamics of Spin Ice

#### 2.1. Ice Is Only One-Third Frozen

#### 2.2. Spin Ice: An Idealized Ice

#### 2.3. Magnetic Monopoles and Transitions between Ice States

#### 2.4. Heat Capacity and the Existence of Spin Ice in Dy${}_{2}$Ti${}_{2}$O${}_{7}$

#### 2.5. Supercooling and Listening to Monopole Behavior

#### 2.6. Spin Ice: A Clean Glass?

## 3. Approximate Hamiltonians for Molecular Dynamics

## 4. Dynamics of Moist Atmosphere

#### 4.1. Moist Rayleigh–Bénard Convection

#### 4.2. Moist Geostrophic Turbulence

## 5. Urban Engineering with Modern Remote Sensing

#### 5.1. Urban Data Acquisition Complexities

#### 5.2. Machine Learning Prospects for LiDAR

## 6. Spatiotemporal Prediction Techniques

- 1. Particle systems defined by either a two-body force, $\mathbf{F}\left({\mathbf{\Delta}\mathbf{r}}_{jk}\right)$, or a potential energy, $V\left({r}_{k}\right)$, given ${\mathbf{r}}_{0},{\mathbf{v}}_{0}$:
  - i. $\frac{d{\mathbf{r}}_{k}}{dt}={\mathbf{v}}_{k}\left(t\right)$;
  - ii. $\frac{d{\mathbf{v}}_{k}}{dt}={\mathbf{a}}_{k}\left(t\right)$;
  - iii. ${\mathbf{a}}_{k}\left(t\right)$ obtained via ${\sum}_{j}\mathbf{F}\left({\mathbf{\Delta}\mathbf{r}}_{jk}\right)$ or via $V\left({r}_{k}\right)$, then back to i. (repeat);

- (b) $\dot{\mathbf{r}}=f(\mathbf{r},t)$;
- (c) Euler–Lagrange equations of motion;
- (d) Hamiltonian formulation.
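The stepping loop in item 1 can be sketched numerically. The following is a minimal illustration (not from the original text), assuming unit masses and a hypothetical harmonic two-body force in place of a physical potential:

```python
import numpy as np

def pair_force(dr):
    """Hypothetical two-body force F(Δr): simple harmonic attraction."""
    return -1.0 * dr

def accel(r):
    """a_k = Σ_j F(r_k - r_j) over all other particles j (unit masses)."""
    n = len(r)
    a = np.zeros_like(r)
    for k in range(n):
        for j in range(n):
            if j != k:
                a[k] += pair_force(r[k] - r[j])
    return a

def step(r, v, dt):
    """One velocity-Verlet step: advance r from v, then v from a (items i-iii)."""
    v_half = v + 0.5 * dt * accel(r)
    r_new = r + dt * v_half
    v_new = v_half + 0.5 * dt * accel(r_new)
    return r_new, v_new

# two particles oscillating about their common center of mass
r = np.array([[1.0, 0.0], [-1.0, 0.0]])
v = np.zeros_like(r)
for _ in range(1000):
    r, v = step(r, v, dt=0.01)
```

By symmetry of the pair force, total momentum is conserved exactly, and the symplectic update keeps the motion bounded over long runs.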

- 2. Field evolution systems $u(\mathbf{r},t)$ [104]:
  - (a) Partial Differential Equations (PDEs): $\frac{\partial u}{\partial t}=f(u,\mathbf{r},t)$;
  - (b) Finite Difference/Element Equations: $u(t+\Delta t)=g(u,\mathbf{r},t)$.
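A field-evolution update of type 2(b) can be sketched as an explicit finite-difference step for the 1-D heat equation; the grid size, diffusivity, and fixed boundaries below are illustrative choices, not from the text:

```python
import numpy as np

def heat_step(u, dt, dx, kappa=1.0):
    """Explicit update u(t+Δt) = g(u) for ∂u/∂t = κ ∂²u/∂x²
    with fixed (Dirichlet) boundary values."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + kappa * dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
    return u_new

x = np.linspace(0.0, 1.0, 51)
dx = x[1] - x[0]
dt = 0.4 * dx**2          # respects the explicit stability limit dt <= dx²/(2κ)
u = np.sin(np.pi * x)     # this mode decays like exp(-π² κ t)
for _ in range(500):
    u = heat_step(u, dt, dx)
```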

- 3.
  - (a) via Linear Operations (Markovian);
  - (b) via Gaussian Processes (GP);
  - (c) via Machine Learning (ML);
  - (d)

- 4. Modal Decomposition—numerous applicable techniques for time-dependent amplitudes/power spectra.
  - (a) Traditional analytic modes—Fourier, Bessel, Spherical Harmonics, Laguerre, Legendre;
  - (b) Generalized modes—Galerkin methods;
  - (c) Data/Experimentally Constrained Modes—Empirical Orthogonal Functions (EOF, POD, PCA, etc.);
  - (d) In each case, the modes are spatial and static, while the calculated amplitudes, ${A}_{n}\left(t\right)$, are time-dependent;
  - (e) When constrained by physical principles, the modes must evolve smoothly—no discontinuities.
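Item 4(c) can be illustrated with a small POD/EOF computation: the SVD of a snapshot matrix yields static spatial modes and time-dependent amplitudes $A_n(t)$. The synthetic two-mode field below is an assumption for demonstration:

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 64)     # space
t = np.linspace(0, 10, 200)         # time

# synthetic field: two static spatial modes with time-dependent amplitudes,
# snapshots stacked as rows (time × space)
field = (np.outer(np.sin(t), np.sin(x))
         + 0.3 * np.outer(np.cos(3*t), np.sin(2*x)))

# POD/EOF: SVD of the snapshot matrix; rows of Vt are the static spatial
# modes, and U*S gives the time-dependent amplitudes A_n(t)
U, S, Vt = np.linalg.svd(field, full_matrices=False)
amplitudes = U * S          # A_n(t) for each mode n
modes = Vt                  # spatial, static
```

The singular-value spectrum reveals that only two modes carry the signal, and the product of amplitudes and modes reconstructs the field exactly.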

- 5. Virial Theorem—used to preserve the statistical features of a system's time evolution:
  - (a) Couples the time-average of the kinetic energy of a system of particles to its internal forces;
  - (b) For systems in equilibrium, relates the kinetic energy to the temperature of the system;
  - (c) For power law-based forces, relates the kinetic energy to the potential energy;
  - (d) Used in systems where the total energy is difficult to account for, but the time-averaged system is well known (e.g., stellar dynamics, thermodynamics).
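Item 5(c) admits a quick numerical check. For a power-law potential $V\propto x^{n}$ the theorem gives $2\langle T\rangle = n\langle V\rangle$; for a 1-D harmonic oscillator ($n=2$) this reduces to $\langle T\rangle = \langle V\rangle$. The oscillator parameters below are arbitrary illustrative choices:

```python
import numpy as np

# 1-D harmonic oscillator, V(x) = ½ k x² (power law with n = 2)
k, m, A = 1.0, 1.0, 1.5
w = np.sqrt(k / m)
t = np.linspace(0, 2*np.pi/w, 10001)   # one full period
x = A * np.cos(w*t)
v = -A * w * np.sin(w*t)

# time averages over one period: the virial theorem predicts ⟨T⟩ = ⟨V⟩
T_avg = np.mean(0.5 * m * v**2)
V_avg = np.mean(0.5 * k * x**2)
```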

- 6.
- 7. Physics-Informed Neural Networks (PINN) [111]—related to spatiotemporal prediction schemes by directly extracting the components of a partial differential equation selected from a library of differential operators. This assists in understanding how a system evolves over time by determining its equation of evolution.
- 8. Quantum Mechanics [112]—an admixture of complex field evolution via a PDE (Schrödinger/Dirac equation) with a statistical interpretation.
- 9. Hybrid Approaches
  - (a) Given the methodology (toolset) listed above, find a suitable architecture to apply that yields the best interpretation of the system's evolution;
  - (b) As an example, consider a data-driven approach, where a Hopfield NN is employed to reduce the complexity of a large time-dependent data set. Assuming the data follow a particular model, and targeting a particular reduction in order, first project the current state vector of the data onto the best orthogonal domain appropriate to the problem—usually achieved by a linear operation (LA). Next, map the data from an input space to some desired output space using either $GP$ or $ML$, where the output space contains far less information (Model Order Reduction, POD, Krylov subspace reduction). This step is effectively Variational Auto-Encoding (VAE). From this mid-point state vector, begin to unpack the information by performing the inverse functional mapping ($G{P}^{-1}$ or $M{L}^{-1}$), finally bringing the state vector back to the linear space and, via a linear inversion, back to the original input data space—effectively decoding the state from the VAE.
  - (c) $$\underbrace{\mathbb{O}}_{\text{output}\,=\,\text{input}} \;=\; {\widehat{LA}}^{-1}\left[{\widehat{GP}}^{-1}\left[\underbrace{\begin{pmatrix}\mathbf{V}\\ \mathbf{A}\\ \mathbf{E}\end{pmatrix}}_{\substack{\text{midpoint}\\ \text{state}}} = \underbrace{\widehat{GP}}_{\substack{\text{Functional}\\ \text{Mapping}}}\;\underbrace{\widehat{LA}}_{\text{Linear Op}}\;\underbrace{\mathbb{I}}_{\text{input}}\right]\right]$$
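The encode/decode chain in 9(b)–(c) can be sketched with stand-ins: an SVD projection plays the role of $\widehat{LA}$, and an invertible elementwise map (tanh/arctanh) stands in for a trained $\widehat{GP}$ or $\widehat{ML}$. Both stand-ins are assumptions for illustration, not the pipeline an actual implementation would use:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))  # input data (I)

# LA: project onto an orthogonal basis (here, the principal directions)
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
LA = lambda x: (x - mu) @ Vt.T          # linear encode
LA_inv = lambda z: z @ Vt + mu          # linear decode (inverse rotation)

# GP/ML stand-in: an invertible nonlinear functional map (assumption:
# tanh/arctanh in place of a trained GP or network)
GP = lambda z: np.tanh(z / 20.0)
GP_inv = lambda m: 20.0 * np.arctanh(m)

midpoint = GP(LA(X))                 # VAE-like midpoint state
X_out = LA_inv(GP_inv(midpoint))     # decode: output ≈ input
```

Because every stage here is exactly invertible, the round trip reproduces the input; a real VAE-style reduction would instead truncate the midpoint state and recover the input only approximately.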

- 1. Linear or Non-linear-based;
- 2. Data-Driven or Equation-based (or both);
- 3. Deterministic or Statistical—as a process;
- 4. Continuous or Discrete—numerically and/or analytically;
- 5. Boundary Conditions—fixed or dynamic;
- 6. Constraints—strong/weak/statistical in nature.

- 1. Analysis type—Least squares (fitted) versus Projection (onto a basis set);
- 2. Predictive—yes or no—this is in contrast with results that are more probabilistic in nature (those that generate a PDF/ensemble result);
- 3. Is a PDF generated?
- 4. Simulation—versus a direct future prediction from a data-driven result (time-series/forecasting);
- 5. Interpolation—is the method good or bad at interpolation?
- 6. Extrapolation—how well does it perform (good/bad)?
- 7. Quality of result:
  - (a) Is the technique excellent at forecasting over a short time-frame?
  - (b) Forecasting over a long time-frame?
  - (c) Ensemble forecasting—out of 100 possible simulated projections, x% show the following…;
  - (d)

## 7. Dimension Reduction Techniques

**Principal Component Analysis** (PCA) is probably one of the most commonly used techniques to extract such patterns. In effect, one can view the dataset $\mathbf{X}$ as an $N\times M$ matrix and perform a Singular Value Decomposition to rewrite it as $\mathbf{X}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top}$.
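A minimal numerical sketch of this decomposition, using synthetic low-rank data assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 100, 8
X = rng.normal(size=(N, 3)) @ rng.normal(size=(3, M))   # data of intrinsic rank 3

# SVD: X = U Σ Vᵀ; the leading right-singular vectors are the principal components
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# truncating to the top-k singular values gives the best rank-k approximation
k = 3
X_k = U[:, :k] * S[:k] @ Vt[:k]
```

Since the data were built with three independent factors, the truncated reconstruction is exact and the remaining singular values vanish to machine precision.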

## 8. Multi-Resolution Gaussian Processes

## 9. Approximate Bayesian Computations

#### 9.1. Bayesian Analysis

Algorithm 1: Accept/Reject Sampler.

Algorithm 2: Likelihood-Free Posterior Sampler ([180]).
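The accept/reject logic of a likelihood-free sampler can be sketched for a toy Gaussian model; the model, prior, summary statistic, and tolerance below are illustrative assumptions, not the text's own example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: y_i ~ Normal(θ, 1) with prior θ ~ Normal(0, 10)
n = 50
theta_true = 2.0
y0 = rng.normal(theta_true, 1.0, size=n)   # "observed" data

def abc_rejection(y0, eps, n_draws=20000):
    """Likelihood-free rejection sampler: keep θ' whenever the summary
    S(y) = ȳ of the simulated data falls within ε of the observed summary."""
    s0 = y0.mean()
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, 10.0)                 # θ' ~ prior
        y_sim = rng.normal(theta, 1.0, size=len(y0))  # y' ~ model(θ')
        if abs(y_sim.mean() - s0) <= eps:             # ρ(S(y'), S(y0)) ≤ ε
            accepted.append(theta)
    return np.array(accepted)

samples = abc_rejection(y0, eps=0.1)
```

No likelihood is ever evaluated: the accepted draws approximate the posterior because only parameters that reproduce data resembling $y_0$ survive the tolerance check.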

#### 9.2. Different Implementations of ABC

#### 9.2.1. Fixed Cutoff Sampler

Algorithm 3: Summary Statistic Rejection Sampler (Algorithm D in [181]).

#### 9.2.2. ABC as K-Nearest Neighbors

The `R` implementation `abc.abc()` [182] by default returns the ${\theta}^{\prime}$ such that $S\left({y}^{\prime}\right)$ are amongst the ${k}_{N}=\lfloor N/10\rfloor$ nearest neighbors of $S\left({y}_{0}\right)$.
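The k-nearest-neighbor acceptance rule can be sketched as follows. The toy Gaussian model is an assumption, and the sketch mirrors the default behavior described above rather than the `abc` package's exact internals:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: y_i ~ Normal(θ, 1), prior θ ~ Normal(0, 10)
n_obs, N = 50, 5000
y0 = rng.normal(2.0, 1.0, size=n_obs)   # "observed" data
s0 = y0.mean()                          # observed summary S(y0)

thetas = rng.normal(0.0, 10.0, size=N)                     # N prior draws
sims = rng.normal(thetas[:, None], 1.0, size=(N, n_obs))   # one data set per draw
dists = np.abs(sims.mean(axis=1) - s0)                     # ρ(S(y'), S(y0))

# accept the k_N = ⌊N/10⌋ draws whose summaries are nearest to S(y0)
k_N = N // 10
keep = np.argsort(dists)[:k_N]
posterior_sample = thetas[keep]
```

Rather than fixing a tolerance $\epsilon$ in advance, this variant fixes the acceptance fraction, which implicitly sets $\epsilon$ to the corresponding quantile of the observed discrepancies.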

#### 9.2.3. ABC MCMC Samplers

Algorithm 4: Exact MCMC Sampler (Algorithm F in [181]).

#### 9.3. ABC Strengths and Weaknesses

#### 9.4. Examples

#### 9.4.1. Example: Normal-Normal Hierarchical Model

- 1. Euclidean distance between summary statistics $S\left(y\right)=\overline{y}$, i.e.,$$\rho\left(S\left({y}_{0}\right),S\left({y}^{\prime}\right)\right)=\parallel {\overline{y}}_{0}-{\overline{y}}^{\prime}{\parallel}_{2}.$$
- 2. Euclidean distance between sufficient statistics: $S\left(y\right)={[\overline{y}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\frac{1}{n}\sum {({y}_{i}-\overline{y})}^{2}]}^{\top}$.
- 3. Wasserstein-1 distance between identity statistics $S\left(y\right)=y$. This is implemented in terms of the empirical distribution function ${F}_{y}\left(t\right)={\sum}_{i=1}^{n}{w}_{i}^{\left(y\right)}{1}_{\{{y}_{i}\le t\}}$ with normalizing weights ${w}_{i}^{\left(y\right)}$, and the corresponding distribution ${F}_{{y}_{0}}\left(t\right)={\sum}_{j=1}^{n}{w}_{j}^{\left({y}_{0}\right)}{1}_{\{{y}_{0,j}\le t\}}$ for ${y}_{0}$. The Wasserstein-1 distance is given by$${W}_{1}({F}_{y},{F}_{{y}_{0}})={\int}_{-\infty}^{\infty}|{F}_{y}\left(t\right)-{F}_{{y}_{0}}\left(t\right)|\,dt$$ and is computed via `transport` [189].
- 4. K-L divergence between identity statistics $S\left(y\right)=y$, i.e.,$${D}_{\mathrm{KL}}({p}_{{y}_{0}}\parallel {p}_{y})={\int}_{-\infty}^{\infty}{p}_{{y}_{0}}\left(t\right)\log\left(\frac{{p}_{{y}_{0}}\left(t\right)}{{p}_{y}\left(t\right)}\right)dt,$$ estimated via `FNN` [190].
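For two equally weighted samples of the same size, the $W_1$ integral between empirical CDFs reduces to the mean absolute difference of the sorted values. A sketch of that special case (the reduction holds only for equal-size, equal-weight samples; the general weighted case requires integrating the CDF difference, as done by dedicated packages):

```python
import numpy as np

def wasserstein1(y, y0):
    """W₁ between the empirical CDFs of two equally weighted samples of the
    same size: ∫|F_y − F_{y0}| dt reduces to the mean absolute difference
    of the sorted values."""
    return np.mean(np.abs(np.sort(y) - np.sort(y0)))

rng = np.random.default_rng(5)
a = rng.normal(0.0, 1.0, size=1000)
b = a + 0.7          # same shape, shifted by 0.7: W₁ is exactly the shift
```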

- 1. Approximation Error: use of $\epsilon >0$ in acceptance criteria.
- 2. Information Loss: use of a non-sufficient summary statistic.
- 3. Monte Carlo Error: estimating quantities using finite ABC samples.

**Approximation Error:** We investigate the ABC estimates of $\mathbb{E}[\vartheta \mid y]$ and $\mathbb{E}[{\sigma}^{2}\mid y]$ over a range of $\epsilon$ values. The normal–normal hierarchical model is still employed, again under an NIG$(0,0.01,1,1)$ prior. We use the Euclidean distance between values of the sufficient statistic $s\left(y\right)=\left(\overline{y},\phantom{\rule{0.277778em}{0ex}}\frac{1}{n}{\sum}_{i=1}^{n}{({y}_{i}-\overline{y})}^{2}\right)$ as the measure of discrepancy. Figure 9 and Figure 10 present the mean absolute error of these estimates averaged over 100 replications. In all cases, posterior estimation improves as $\epsilon \to 0$.

**Information Loss:** Consider the task of estimating the posterior marginal means of $\vartheta$ and ${\sigma}^{2}$. We do so using samples from ${\pi}_{ABC}$ based on both the sufficient statistic ${S}_{1}\left(y\right):=(\overline{y},\frac{1}{n}{\sum}_{i=1}^{n}{({y}_{i}-\overline{y})}^{2})$ and the (not sufficient) summary statistic ${S}_{2}\left(y\right):=\overline{y}$. This process is repeated 100 times to produce the mean error estimates seen in Table 3.

**Monte Carlo Error:** Finally, ABC ultimately produces samples, so using them in estimation incurs sampling error. Below, we fix $\epsilon =0.05$, use a sufficient statistic in our sampler, and again estimate the posterior mean of $\vartheta$ based on $M\in \{100,200,\ldots ,1000\}$ samples. This process is repeated 100 times. The results superficially mirror those seen in Figure 9 for shrinking $\epsilon \to 0$: the mean absolute error falls with growing $M$. However, since $\epsilon =0.05$ is fixed, we do not expect the error to shrink beyond the bias inherent in any ABC sample with $\epsilon >0$. As an illustration, the middle panel of Figure 11 depicts the mean absolute error of ABC estimates of $\mathbb{E}[\vartheta \mid {y}_{0}]$ based on $M$ samples. The mean runtime (in seconds) is depicted in red.

#### 9.4.2. Example: ABC Parallelization

#### 9.5. Example: Population Genetics and Ancestral Inference

`ms` software by [194], with true parameters ${\vartheta}_{0}=50$, $\alpha =30$, and $n=20$ observations.

## 10. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Mauro, J.C.; Yue, Y.; Ellison, A.J.; Gupta, P.K.; Allan, D.C. Viscosity of glass-forming liquids. Proc. Natl. Acad. Sci. USA
**2009**, 106, 19780–19784. [Google Scholar] [CrossRef] [PubMed] - Curtin, C. Fact or Fiction?: Glass Is a (Supercooled) Liquid. Scientific American, 22 February 2007.
- Ngai, K. Why the glass transition problem remains unsolved? J. Non-Cryst. Solids
**2007**, 353, 709–718. [Google Scholar] [CrossRef] - Famprikis, T.; Canepa, P.; Dawson, J.A.; Islam, M.S.; Masquelier, C. Fundamentals of inorganic solid-state electrolytes for batteries. Nat. Mater.
**2019**, 18, 1278–1291. [Google Scholar] [CrossRef] [PubMed] - Winton, N. Solid-State Batteries Promise Electric Car Popularity Boost, But Technical Mountains Await. Forbes. 2021. Available online: https://www.forbes.com/sites/neilwinton/2021/11/28/solid-state-batteries-promise-electric-car-popularity-boost-but-technical-mountains-await/?sh=2ac61496632f (accessed on 15 August 2022).
- Pauling, L. The structure and entropy of ice and of other crystals with some randomness of atomic arrangement. J. Am. Chem. Soc.
**1935**, 57, 2680–2684. [Google Scholar] [CrossRef] - Salzmann, C.G. Advances in the experimental exploration of water’s phase diagram. J. Chem. Phys.
**2019**, 150, 060901. [Google Scholar] [CrossRef] - Gasser, T.M.; Thoeny, A.V.; Fortes, A.D.; Loerting, T. Structural characterization of ice XIX as the second polymorph related to ice VI. Nat. Commun.
**2021**, 12, 1128. [Google Scholar] [CrossRef] - Andreanov, A.; Chalker, J.T.; Saunders, T.E.; Sherrington, D. Spin-glass transition in geometrically frustrated antiferromagnets with weak disorder. Phys. Rev. B
**2010**, 81, 014406. [Google Scholar] [CrossRef] - Mauro, J.C. Topological constraint theory of glass. Am. Ceram. Soc. Bull.
**2011**, 90, 31. [Google Scholar] - Bramwell, S.T.; Gingras, M.J. Spin ice state in frustrated magnetic pyrochlore materials. Science
**2001**, 294, 1495–1501. [Google Scholar] [CrossRef] - Anderson, P.W. Ordering and antiferromagnetism in ferrites. Phys. Rev.
**1956**, 102, 1008. [Google Scholar] [CrossRef] - Ramirez, A.P.; Hayashi, A.; Cava, R.J.; Siddharthan, R.; Shastry, B. Zero-point entropy in ‘spin ice’. Nature
**1999**, 399, 333–335. [Google Scholar] [CrossRef] - Harris, M.J.; Bramwell, S.; McMorrow, D.; Zeiske, T.; Godfrey, K. Geometrical frustration in the ferromagnetic pyrochlore Ho
_{2}Ti_{2}O_{7}. Phys. Rev. Lett.**1997**, 79, 2554. [Google Scholar] [CrossRef] - Materials Project Dy2Ti2O7 Webpage. Available online: https://materialsproject.org/materials/mp-676874/ (accessed on 25 May 2022).
- Ashcroft, N.; Mermin, N. Solid State Physics; Saunders College: Philadelphia, PA, USA, 1976. [Google Scholar]
- Samarakoon, A.M.; Barros, K.; Li, Y.W.; Eisenbach, M.; Zhang, Q.; Ye, F.; Sharma, V.; Dun, Z.; Zhou, H.; Grigera, S.A.; et al. Machine-learning-assisted insight into spin ice Dy
_{2}Ti_{2}O_{7}. Nat. Commun.**2020**, 11, 892. [Google Scholar] [CrossRef] [PubMed] - Voter, A.F. Introduction to the kinetic Monte Carlo method. In Radiation Effects in Solids; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1–23. [Google Scholar]
- Bramwell, S.T.; Harris, M.J. The history of spin ice. J. Phys. Condens. Matter
**2020**, 32, 374010. [Google Scholar] [CrossRef] [PubMed] - Miao, L.; Lee, Y.; Mei, A.; Lawler, M.; Shen, K. Two-dimensional magnetic monopole gas in an oxide heterostructure. Nat. Commun.
**2020**, 11, 1341. [Google Scholar] [CrossRef] - Dirac, P.A.M. The theory of magnetic poles. Phys. Rev.
**1948**, 74, 817. [Google Scholar] [CrossRef] - Pomaranski, D.; Yaraskavitch, L.; Meng, S.; Ross, K.; Noad, H.; Dabkowska, H.; Gaulin, B.; Kycia, J. Absence of Pauling’s residual entropy in thermally equilibrated Dy
_{2}Ti_{2}O_{7}. Nat. Phys.**2013**, 9, 353–356. [Google Scholar] [CrossRef] - Giblin, S.R.; Twengström, M.; Bovo, L.; Ruminy, M.; Bartkowiak, M.; Manuel, P.; Andresen, J.C.; Prabhakaran, D.; Balakrishnan, G.; Pomjakushina, E.; et al. Pauling Entropy, Metastability, and Equilibrium in Dy
_{2}Ti_{2}O_{7}Spin Ice. Phys. Rev. Lett.**2018**, 121, 067202. [Google Scholar] [CrossRef] - Tomasello, B.; Castelnovo, C.; Moessner, R.; Quintanilla, J. Correlated Quantum Tunneling of Monopoles in Spin Ice. Phys. Rev. Lett.
**2019**, 123, 067204. [Google Scholar] [CrossRef] - Kassner, E.R.; Eyvazov, A.B.; Pichler, B.; Munsie, T.J.; Dabkowska, H.A.; Luke, G.M.; Davis, J.S. Supercooled spin liquid state in the frustrated pyrochlore Dy
_{2}Ti_{2}O_{7}. Proc. Natl. Acad. Sci. USA**2015**, 112, 8549–8554. [Google Scholar] [CrossRef] - Dusad, R.; Kirschner, F.K.; Hoke, J.C.; Roberts, B.R.; Eyal, A.; Flicker, F.; Luke, G.M.; Blundell, S.J.; Davis, J. Magnetic monopole noise. Nature
**2019**, 571, 234–239. [Google Scholar] [CrossRef] [PubMed] - Samarakoon, A.M.; Grigera, S.; Tennant, D.A.; Kirste, A.; Klemke, B.; Strehlow, P.; Meissner, M.; Hallén, J.N.; Jaubert, L.; Castelnovo, C.; et al. Anomalous magnetic noise in an imperfectly flat landscape in the topological magnet Dy
_{2}Ti_{2}O_{7}. Proc. Natl. Acad. Sci. USA**2022**, 119, e2117453119. [Google Scholar] [CrossRef] [PubMed] - Lenosky, T.J.; Sadigh, B.; Alonso, E.; Bulatov, V.V.; de la Rubia, T.D.; Kim, J.; Voter, A.F.; Kress, J.D. Highly optimized empirical potential model of silicon. Model. Simul. Mater. Sci. Eng.
**2000**, 8, 825–841. [Google Scholar] [CrossRef] - Hennig, R.G.; Lenosky, T.J.; Trinkle, D.R.; Rudin, S.P.; Wilkins, J.W. Classical potential describes martensitic phase transformations between the α, β and ω titanium phases. Phys. Rev. B
**2008**, 78, 054121. [Google Scholar] [CrossRef] - Saito, Y.; Sasaki, N.; Moriya, H.; Kagatsume, A.; Noro, S. Parameter optimization of Tersoff interatomic potentials using a genetic algorithm. Jpn. Soc. Mech. Eng. A
**2001**, 44, 207–213. [Google Scholar] [CrossRef] - Sastry, K.; Johnson, D.D.; Thompson, A.L.; Goldberg, D.E.; Martinez, T.J.; Leiding, J.; Owens, J. Optimization of semiempirical quantum chemistry methods via multiobjective genetic algorithms: Accurate photodynamics for larger molecules and longer time scales. Mat. Man. Proc.
**2007**, 22, 553–561. [Google Scholar] [CrossRef] - Van de Walle, A.; Ceder, G. Automating first-principles phase diagram calculations. J. Phase Equil.
**2002**, 23, 248. [Google Scholar] - Brown, K.S.; Sethna, J.P. Statistical mechanical approaches to models with many poorly known parameters. Phys. Rev. E
**2003**, 68, 021904. [Google Scholar] [CrossRef] - Frederiksen, S.L.; Jacobsen, K.W.; Brown, K.S.; Sethna, J.P. Bayesian ensemble approach to error estimation of interatomic potentials. Phys. Rev. Lett.
**2004**, 93, 165501. [Google Scholar] [CrossRef] - Ercolessi, F.; Adams, J.B. Interatomic Potentials from First-Principles Calculations: The Force-Matching Method. Europhys. Lett. EPL
**1994**, 26, 583–588. [Google Scholar] [CrossRef] - Fischer, C.C.; Tibbetts, K.J.; Morgan, D.; Ceder, G. Predicting crystal structure by merging data mining with quantum mechanics. Nat. Mater.
**2006**, 5, 641–646. [Google Scholar] [CrossRef] [PubMed] - Zhang, P.; Trinkle, D.R. Database optimization for empirical interatomic potential models. Model. Simul. Mater. Sci. Eng.
**2015**, 23, 065011. [Google Scholar] [CrossRef] - Zhang, P.; Trinkle, D.R. A modified embedded atom method potential for interstitial oxygen in titanium. Comput. Mater. Sci.
**2016**, 124, 204–210. [Google Scholar] [CrossRef] - Vita, J.A.; Trinkle, D.R. Exploring the necessary complexity of interatomic potentials. Comp. Mater. Sci.
**2021**, 200, 110752. [Google Scholar] [CrossRef] - Jones, J.E. On the determination of molecular fields. —II. From the equation of state of a gas. Proc. R. Soc. Lond. Ser. A Contain. Pap. Math. Phys. Character
**1924**, 106, 463–477. [Google Scholar] [CrossRef] - Buckingham, R.A. The classical equation of state of gaseous helium, neon and argon. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci.
**1938**, 168, 264–283. [Google Scholar] [CrossRef] - Daw, M.S.; Baskes, M.I. Embedded-atom method: Derivation and application to impurities, surfaces, and other defects in metals. Phys. Rev. B
**1984**, 29, 6443–6453. [Google Scholar] [CrossRef] - Baskes, M.I. Modified embedded-atom potentials for cubic materials and impurities. Phys. Rev. B
**1992**, 46, 2727–2742. [Google Scholar] [CrossRef] - Tersoff, J. New empirical model for the structural properties of silicon. Phys. Rev. Lett.
**1986**, 56, 632–635. [Google Scholar] [CrossRef] - Stillinger, F.H.; Weber, T.A. Computer simulation of local order in condensed phases of silicon. Phys. Rev. B
**1985**, 31, 5262–5271. [Google Scholar] [CrossRef] - Van Duin, A.C.T.; Dasgupta, S.; Lorant, F.; Goddard, W.A. ReaxFF: A Reactive Force Field for Hydrocarbons. J. Phys. Chem. A
**2001**, 105, 9396–9409. [Google Scholar] [CrossRef] - Brenner, D.W.; Shenderova, O.A.; Harrison, J.A.; Stuart, S.J.; Ni, B.; Sinnott, S.B. A second-generation reactive empirical bond order (REBO) potential energy expression for hydrocarbons. J. Phys. Condens. Matter
**2002**, 14, 783–802. [Google Scholar] [CrossRef] - Shan, T.R.; Devine, B.D.; Kemper, T.W.; Sinnott, S.B.; Phillpot, S.R. Charge-optimized many-body potential for the hafnium/hafnium oxide system. Phys. Rev. B
**2010**, 81, 125328. [Google Scholar] [CrossRef] - Behler, J.; Parrinello, M. Generalized Neural-Network Representation of High-Dimensional Potential-Energy Surfaces. Phys. Rev. Lett.
**2007**, 98, 146401. [Google Scholar] [CrossRef] - Bartók, A.P.; Payne, M.C.; Kondor, R.; Csányi, G. Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, without the Electrons. Phys. Rev. Lett.
**2010**, 104, 136403. [Google Scholar] [CrossRef] - Botu, V.; Ramprasad, R. Learning scheme to predict atomic forces and accelerate materials simulations. Phys. Rev. B
**2015**, 92, 094306. [Google Scholar] [CrossRef] - Thompson, A.; Swiler, L.; Trott, C.; Foiles, S.; Tucker, G. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials. J. Comput. Phys.
**2015**, 285, 316–330. [Google Scholar] [CrossRef] - Artrith, N.; Urban, A. An implementation of artificial neural-network potentials for atomistic materials simulations: Performance for TiO
_{2}. Comput. Mater. Sci.**2016**, 114, 135–150. [Google Scholar] [CrossRef] - Khorshidi, A.; Peterson, A.A. Amp: A modular approach to machine learning in atomistic simulations. Comput. Phys. Commun.
**2016**, 207, 310–324. [Google Scholar] [CrossRef] - Shapeev, A.V. Moment Tensor Potentials: A Class of Systematically Improvable Interatomic Potentials. Multiscale Model. Simul.
**2016**, 14, 1153–1173. [Google Scholar] [CrossRef] - Schütt, K.T.; Sauceda, H.E.; Kindermans, P.J.; Tkatchenko, A.; Müller, K.R. SchNet – A deep learning architecture for molecules and materials. J. Chem. Phys.
**2018**, 148, 241722. [Google Scholar] [CrossRef] [PubMed] - Wang, H.; Zhang, L.; Han, J.; E, W. DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics. Comput. Phys. Commun.
**2018**, 228, 178–184. [Google Scholar] [CrossRef] - Wood, M.A.; Thompson, A.P. Extending the accuracy of the SNAP interatomic potential form. J. Chem. Phys.
**2018**, 148, 241721. [Google Scholar] [CrossRef] [PubMed] - Zhang, Y.; Hu, C.; Jiang, B. Embedded Atom Neural Network Potentials: Efficient and Accurate Machine Learning with a Physically Inspired Representation. J. Phys. Chem. Lett.
**2019**, 10, 4962–4967. [Google Scholar] [CrossRef] [PubMed] - Pun, G.P.P.; Batra, R.; Ramprasad, R.; Mishin, Y. Physically informed artificial neural networks for atomistic modeling of materials. Nat. Commun.
**2019**, 10, 2339. [Google Scholar] [CrossRef] - Gayatri, R.; Moore, S.; Weinberg, E.; Lubbers, N.; Anderson, S.; Deslippe, J.; Perez, D.; Thompson, A.P. Rapid Exploration of Optimization Strategies on Advanced Architectures using TestSNAP and LAMMPSl. arXiv
**2020**, arXiv:2011.12875. [Google Scholar] - Xie, Y.; Vandermause, J.; Sun, L.; Cepellotti, A.; Kozinsky, B. Bayesian force fields from active learning for simulation of inter-dimensional transformation of stanene. NPJ Comput. Mater.
**2021**, 7, 40. [Google Scholar] [CrossRef] - Behler, J. Atom-centered symmetry functions for constructing high-dimensional neural network potentials. J. Chem. Phys.
**2011**, 134, 074106. [Google Scholar] [CrossRef] - Bartók, A.P.; Kondor, R.; Csányi, G. On representing chemical environments. Phys. Rev. B
**2013**, 87, 184115. [Google Scholar] [CrossRef] - Fellinger, M.R.; Park, H.; Wilkins, J.W. Force-matched embedded-atom method potential for niobium. Phys. Rev. B
**2010**, 81, 144119. [Google Scholar] [CrossRef] - Park, H.; Fellinger, M.R.; Lenosky, T.J.; Tipton, W.W.; Trinkle, D.R.; Rudin, S.P.; Woodward, C.; Wilkins, J.W.; Hennig, R.G. Ab initio based empirical potential used to study the mechanical properties of molybdenum. Phys. Rev. B
**2012**, 85, 214121. [Google Scholar] [CrossRef] - Yang, C.; Qi, L. Modified embedded-atom method potential of niobium for studies on mechanical properties. Comput. Mater. Sci.
**2019**, 161, 351–363. [Google Scholar] [CrossRef] - Zuo, Y.; Chen, C.; Li, X.; Deng, Z.; Chen, Y.; Behler, J.; Csányi, G.; Shapeev, A.V.; Thompson, A.P.; Wood, M.A.; et al. Performance and Cost Assessment of Machine Learning Interatomic Potentials. J. Phys. Chem. A
**2020**, 124, 731–745. [Google Scholar] [CrossRef] - Stevens, B. Atmospheric Moist Convection. Annu. Rev. Earth Planet. Sci.
**2005**, 33, 605–643. [Google Scholar] [CrossRef] - Pauluis, O.M.; Schumacher, J. Idealized moist Rayleigh-Benard convection with piecewise linear equation of state. Commun. Math. Sci.
**2010**, 8, 295–319. [Google Scholar] [CrossRef] - Pauluis, O.M.; Schumacher, J. Self-aggregation of clouds in conditionally unstable moist convection. Proc. Natl. Acad. Sci. USA
**2011**, 108, 12623–12628. [Google Scholar] [CrossRef] [PubMed] - Chien, M.H.; Pauluis, O.M.; Almgren, A.S. Hurricane-like Vortices in Conditionally Unstable Moist Convection. J. Adv. Model. Earth Syst.
**2022**, 14, e2021MS002846. [Google Scholar] [CrossRef] - Almgren, A.S.; Bell, J.B.; Colella, P.; Howell, L.H.; Welcome, M.L. A Conservative Adaptive Projection Method for the Variable Density Incompressible Navier–Stokes Equations. J. Comput. Phys.
**1998**, 142, 1–46. [Google Scholar] [CrossRef] - Emanuel, K.A. An Air-Sea Interaction Theory for Tropical Cyclones. Part I: Steady-State Maintenance. J. Atmos. Sci.
**1986**, 43, 585–605. [Google Scholar] [CrossRef] - Lorenz, E.N. Available Potential Energy and the Maintenance of the General Circulation. Tellus
**1955**, 7, 157–167. [Google Scholar] [CrossRef] - Stone, P.H. Baroclinic Adjustment. J. Atmos. Sci.
**1978**, 35, 561–571. [Google Scholar] [CrossRef] - Lorenz, E.N. Available energy and the maintenance of a moist circulation. Tellus
**1978**, 30, 15–31. [Google Scholar] [CrossRef] - Charney, J.G. The Dynamics of Long Waves in a Baroclinic Westerly Current. J. Meteorol.
**1947**, 4, 136–162. [Google Scholar] [CrossRef] - Eady, E.T. Long Waves and Cyclone Waves. Tellus
**1949**, 1, 33–52. [Google Scholar] [CrossRef] - Phillips, N.A. Energy Transformations and Meridional Circulations associated with simple Baroclinic Waves in a two-level, Quasi-geostrophic Model. Tellus
**1954**, 6, 274–286. [Google Scholar] [CrossRef] - Lapeyre, G.; Held, I.M. The Role of Moisture in the Dynamics and Energetics of Turbulent Baroclinic Eddies. J. Atmos. Sci.
**2004**, 61, 1693–1710. [Google Scholar] [CrossRef] - Emanuel, K.A.; Neelin, J.D.; Bretherton, C.S. On large-scale circulations in convecting atmospheres. Q. J. R. Meteorol. Soc.
**1994**, 120, 1111–1143. [Google Scholar] [CrossRef] - Hinks, T.; Carr, H.; Laefer, D.F. Flight optimization algorithms for aerial LiDAR capture for urban infrastructure model generation. J. Comput. Civ. Eng.
**2009**, 23, 330–339. [Google Scholar] [CrossRef] - Stanley, M.H.; Laefer, D.F. Metrics for aerial, urban lidar point clouds. ISPRS J. Photogramm. Remote. Sens.
**2021**, 175, 268–281. [Google Scholar] [CrossRef] - Vo, A.V.; Laefer, D.F.; Byrne, J. Optimizing Urban LiDAR Flight Path Planning Using a Genetic Algorithm and a Dual Parallel Computing Framework. Remote Sens.
**2021**, 13, 4437. [Google Scholar] [CrossRef] - Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens.
**2015**, 104, 88–100. [Google Scholar] [CrossRef] - Zolanvari, S.I.; Laefer, D.F. Slicing Method for curved façade and window extraction from point clouds. ISPRS J. Photogramm. Remote Sens.
**2016**, 119, 334–346. [Google Scholar] [CrossRef] - Soilán, M.; Truong-Hong, L.; Riveiro, B.; Laefer, D. Automatic extraction of road features in urban environments using dense ALS data. Int. J. Appl. Earth Obs. Geoinf.
**2018**, 64, 226–236. [Google Scholar] [CrossRef] - Aljumaily, H.; Laefer, D.F.; Cuadra, D.; Velasco, M. Voxel Change: Big Data–Based Change Detection for Aerial Urban LiDAR of Unequal Densities. J. Surv. Eng.
**2021**, 147, 04021023. [Google Scholar] [CrossRef] - Laefer, D.F.; Abuwarda, S.; Vo, A.V.; Truong-Hong, L.; Gharibi, H. 2015 Aerial Laser and Photogrammetry Survey of Dublin City Collection Record. 2017. Available online: https://archive.nyu.edu/handle/2451/38684 (accessed on 15 August 2022).
- O’Donnell, J.; Truong-Hong, L.; Boyle, N.; Corry, E.; Cao, J.; Laefer, D.F. LiDAR point-cloud mapping of building façades for building energy performance simulation. Autom. Constr.
**2019**, 107, 102905. [Google Scholar] [CrossRef] - Vo, A.; Hewage, C.; Le Khac, N.; Bertolotto, M.; Laefer, D. A parallel algorithm for local point density index computation of large point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2021**, VIII-4/W2-2021, 75–82. [Google Scholar] [CrossRef] - Majgaonkar, O.; Panchal, K.; Laefer, D.; Stanley, M.; Zaki, Y. Assessing LiDAR Training Data Quantities for Classification Models. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci.
**2021**, XLVI-4/W4-2021, 101–106. [Google Scholar] [CrossRef] - Wen, C.; Yang, L.; Li, X.; Peng, L.; Chi, T. Directionally constrained fully convolutional neural network for airborne LiDAR point cloud classification. ISPRS J. Photogramm. Remote Sens.
**2020**, 162, 50–62. [Google Scholar] [CrossRef] - Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Webb, G.I.; Sammut, C.; Perlich, C.; Horváth, T.; Wrobel, S.; Korb, K.B.; Raedt, L. Learning curves in machine learning. In Encyclopedia of Machine Learning; Springer: Berlin/Heidelberg, Germany, 2011; pp. 577–580. [Google Scholar]
- Cohen, O.; Malka, O.; Ringel, Z. Learning curves for overparametrized deep neural networks: A field theory perspective. Phys. Rev. Res.
**2021**, 3, 023034. [Google Scholar] [CrossRef] - Emmert-Streib, F.; Yang, Z.; Feng, H.; Tripathi, S.; Dehmer, M. An Introductory Review of Deep Learning for Prediction Models With Big Data. Front. Artif. Intell.
**2020**, 3, 4. [Google Scholar] [CrossRef] - Gershenfeld, N. The Nature of Mathematical Modeling; Cambridge University Press: New York, NY, USA, 1999. [Google Scholar]
- Goldstein, H. Classical Mechanics; Addison-Wesley: Boston, MA, USA, 1980. [Google Scholar]
- Arnold, V. Mathematical Methods of Classical Mechanics; Springer: Berlin/Heidelberg, Germany, 1989; Volume 60. [Google Scholar]
- Wiggins, S. Introduction to Applied Nonlinear Dynamical Systems and Chaos; Texts in Applied Mathematics; Springer: New York, NY, USA, 1990; Volume 2. [Google Scholar]
- Strogatz, S.H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering; Westview Press: Boulder, CO, USA, 2000. [Google Scholar]
- Melia, F. Electrodynamics; University of Chicago Press: Chicago, IL, USA, 2001. [Google Scholar]
- Koopman, B.O. Hamiltonian Systems and Transformation in Hilbert Space. Proc. Natl. Acad. Sci. USA
**1931**, 17, 315–318. [Google Scholar] [CrossRef] [PubMed] - Koopman, B.O.; Neumann, J.V. Dynamical Systems of Continuous Spectra. Proc. Natl. Acad. Sci. USA
**1932**, 18, 255–263. [Google Scholar] [CrossRef] [PubMed] - Gardner, M. Mathematical Games: The fantastic combinations of John Conway’s new solitaire game “life”. Sci. Am.
**1970**, 223, 120–123. [Google Scholar] - von Neumann, J. Theory of Self-Reproducing Automata; Burks, A.W., Ed.; University of Illinois Press: Urbana, IL, USA, 1966. [Google Scholar]
- Schmid, P.J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech.
**2010**, 656, 5–28. [Google Scholar] [CrossRef] - Proctor, J.L.; Brunton, S.L.; Kutz, J.N. Dynamic Mode Decomposition with Control. SIAM J. Appl. Dyn. Syst.
**2016**, 15, 142–161. [Google Scholar] [CrossRef] - Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys.
**2019**, 378, 686–707. [CrossRef] - Liboff, R. Introductory Quantum Mechanics; Addison-Wesley: Boston, MA, USA, 1987. [Google Scholar]
- Higham, D.J. An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations. SIAM Rev.
**2001**, 43, 525–546. [Google Scholar] [CrossRef] - Versteeg, H.K.; Malalasekera, W. An Introduction to Computational Fluid Dynamics—The Finite Volume Method; Addison-Wesley-Longman: Albany, NY, USA, 1995; pp. I–X, 1–257. [Google Scholar]
- Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng.
**1960**, 82, 35–45. [Google Scholar] [CrossRef] - Humpherys, J.; Redd, P.; West, J. A Fresh Look at the Kalman Filter. SIAM Rev.
**2012**, 54, 801–823. [Google Scholar] [CrossRef] - Kuznetsov, P.I.; Stratonovich, R.L.; Tikhonov, V.I. (Eds.) Non-Linear Transformations of Stochastic Processes; Pergamon Press: Oxford, UK, 1965. [Google Scholar]
- Metropolis, N.; Ulam, S. The Monte Carlo method. J. Am. Stat. Assoc.
**1949**, 44, 335. [Google Scholar] [CrossRef] [PubMed] - Haile, J.M.; Johnston, I.; Mallinckrodt, A.J.; McKay, S. Molecular dynamics simulation: Elementary methods. Comput. Phys.
**1993**, 7, 625. [Google Scholar] [CrossRef] - Rapaport, D.C. Molecular dynamics simulation. Comput. Sci. Eng.
**1999**, 1, 70–71. [Google Scholar] [CrossRef] - Rapaport, D.C. The Art of Molecular Dynamics Simulation, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar] [CrossRef]
- Allen, M.P.; Tildesley, D.J. Computer Simulation of Liquids; Clarendon Press: New York, NY, USA, 1989. [Google Scholar]
- Onsager, L. Crystal Statistics. I. A Two-Dimensional Model with an Order-Disorder Transition. Phys. Rev.
**1944**, 65, 117–149. [Google Scholar] [CrossRef] - De Broglie, L. Non-Linear Wave Mechanics: A Causal Interpretation; Elsevier: Amsterdam, The Netherlands, 1960; 304p. [Google Scholar]
- Komech, A. Quantum Mechanics for Mathematicians (Nonlinear PDEs Point of View). 2005. Available online: http://xxx.lanl.gov/abs/math-ph/0505059 (accessed on 15 August 2022).
- Arter, W.; Osojnik, A.; Cartis, C.; Madho, G.; Jones, C.; Tobias, S. Data assimilation approach to analysing systems of ordinary differential equations. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar] [CrossRef]
- Kuznetsov, L.; Ide, K.; Jones, C.K.R. A method for direct assimilation of Lagrangian data. In Proceedings of the EGS-AGU-EUG Joint Assembly, Nice, France, 6–11 April 2003; p. 4837. [Google Scholar]
- Lin, K.; Chrzan, D.C. Kinetic Monte Carlo simulation of dislocation dynamics. Phys. Rev. B
**1999**, 60, 3799. [Google Scholar] [CrossRef] - Trinkle, D.R.; Hennig, R.G.; Srinivasan, S.G.; Hatch, D.M.; Jones, M.D.; Stokes, H.T.; Albers, R.C.; Wilkins, J.W. A new mechanism for the alpha to omega martensitic transformation in pure Titanium. Phys. Rev. Lett.
**2003**, 91, 025701. [Google Scholar] [CrossRef] - Fattah, E.A.; Niekerk, J.V.; Rue, H. Smart Gradient—An adaptive technique for improving gradient estimation. Found. Data Sci.
**2022**, 4, 123–136. [Google Scholar] [CrossRef] - Hasselmann, K. PIPs and POPs: The reduction of complex dynamical systems using principal interaction and oscillation patterns. J. Geophys. Res. Atmos.
**1988**, 93, 11015–11021. [Google Scholar] [CrossRef] - Friedlingstein, P.; Meinshausen, M.; Arora, V.K.; Jones, C.D.; Anav, A.; Liddicoat, S.K.; Knutti, R. Uncertainties in CMIP5 Climate Projections due to Carbon Cycle Feedbacks. J. Clim.
**2014**, 27, 511–526. [Google Scholar] [CrossRef] - Collins, G.W. The Virial Theorem in Stellar Astrophysics; Pachart Publishing House: Tucson, AZ, USA, 1978. [Google Scholar]
- Singh, G. Machine Learning Models in Stock Market Prediction. arXiv
**2022**, arXiv:2202.09359. [Google Scholar] [CrossRef] - Candes, E.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory
**2006**, 52, 489–509. [Google Scholar] [CrossRef] - Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Nathan Kutz, J. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn.
**2014**, 1, 391–421. [Google Scholar] [CrossRef] - Kennedy, M.C.; O’Hagan, A. Predicting the output from a complex computer code when fast approximations are available. Biometrika
**2000**, 87, 1–13. [Google Scholar] [CrossRef] - Lavin, A.; Zenil, H.; Paige, B.; Krakauer, D.; Gottschlich, J.; Mattson, T.; Anandkumar, A.; Choudry, S.; Rocki, K.; Baydin, A.G.; et al. Simulation Intelligence: Towards a New Generation of Scientific Methods. arXiv
**2021**, arXiv:2112.03235. [Google Scholar] - Kumar, P.; Chandra, R.; Bansal, C.; Kalyanaraman, S.; Ganu, T.; Grant, M. Micro-climate Prediction-Multi Scale Encoder-decoder based Deep Learning Framework. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Virtual, 14–18 August 2021; pp. 3128–3138. [Google Scholar]
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv
**2017**, arXiv:1711.10561. [Google Scholar] - Ghosh, A.; Elhamod, M.; Lee, W.C.; Karpatne, A.; Podolskiy, V.A. Physics-Informed Machine Learning for Optical Modes in Composites. arXiv
**2021**, arXiv:2112.07625. [Google Scholar] - Sun, L.; Gao, H.; Pan, S.; Wang, J.X. Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. Comput. Methods Appl. Mech. Eng.
**2020**, 361, 112732. [Google Scholar] [CrossRef] - Raissi, M.; Wang, Z.; Triantafyllou, M.S.; Karniadakis, G.E. Deep learning of vortex-induced vibrations. J. Fluid Mech.
**2019**, 861, 119–137. [Google Scholar] [CrossRef] - Zhu, Y.; Zabaras, N.; Koutsourelakis, P.S.; Perdikaris, P. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. J. Comput. Phys.
**2019**, 394, 56–81. [Google Scholar] [CrossRef] - Daw, A.; Maruf, M.; Karpatne, A. PID-GAN: A GAN Framework based on a Physics-informed Discriminator for Uncertainty Quantification with Physics. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Virtual, 14–18 August 2021; pp. 237–247. [Google Scholar]
- Meng, X.; Karniadakis, G.E. A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems. J. Comput. Phys.
**2020**, 401, 109020. [Google Scholar] [CrossRef] - Berg, J.; Nyström, K. A unified deep artificial neural network approach to partial differential equations in complex geometries. Neurocomputing
**2018**, 317, 28–41. [Google Scholar] [CrossRef] - Muralidhar, N.; Islam, M.R.; Marwah, M.; Karpatne, A.; Ramakrishnan, N. Incorporating prior domain knowledge into deep neural networks. In Proceedings of the 2018 IEEE International Conference on Big Data (Big Data), Seattle, WA, USA, 10–13 December 2018; pp. 36–45. [Google Scholar]
- Bao, T.; Jia, X.; Zwart, J.; Sadler, J.; Appling, A.; Oliver, S.; Johnson, T.T. Partial Differential Equation Driven Dynamic Graph Networks for Predicting Stream Water Temperature. In Proceedings of the 2021 IEEE International Conference on Data Mining (ICDM), Auckland, New Zealand, 7–10 December 2021; pp. 11–20. [Google Scholar]
- Jia, X.; Chen, S.; Xie, Y.; Yang, H.; Appling, A.; Oliver, S.; Jiang, Z. Modeling Reservoir Release Using Pseudo-Prospective Learning and Physical Simulations to Predict Water Temperature. In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM), Virtually, 28–30 April 2022; pp. 91–99. [Google Scholar]
- Chen, S.; Appling, A.; Oliver, S.; Corson-Dosch, H.; Read, J.; Sadler, J.; Zwart, J.; Jia, X. Heterogeneous stream-reservoir graph networks with data assimilation. In Proceedings of the 2021 IEEE International Conference on Data Mining (ICDM), Auckland, New Zealand, 7–10 December 2021; pp. 1024–1029. [Google Scholar]
- Hall, E.J.; Taverniers, S.; Katsoulakis, M.A.; Tartakovsky, D.M. Ginns: Graph-informed neural networks for multiscale physics. J. Comput. Phys.
**2021**, 433, 110192. [Google Scholar] [CrossRef] - Yin, M.; Zhang, E.; Yu, Y.; Karniadakis, G.E. Interfacing finite elements with deep neural operators for fast multiscale modeling of mechanics problems. Comput. Methods Appl. Mech. Eng.
**2022**, 115027. [Google Scholar] [CrossRef] - Lin, C.; Maxey, M.; Li, Z.; Karniadakis, G.E. A seamless multiscale operator neural network for inferring bubble dynamics. J. Fluid Mech.
**2021**, 929. [Google Scholar] [CrossRef] - Williams, C.K.; Rasmussen, C.E. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006; Volume 2. [Google Scholar]
- Raissi, M.; Karniadakis, G.E. Hidden physics models: Machine learning of nonlinear partial differential equations. J. Comput. Phys.
**2018**, 357, 125–141. [Google Scholar] [CrossRef] - Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Numerical Gaussian processes for time-dependent and nonlinear partial differential equations. SIAM J. Sci. Comput.
**2018**, 40, A172–A198. [Google Scholar] [CrossRef] - Lindgren, F.; Rue, H.; Lindström, J. An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach. J. R. Stat. Soc. Ser. B Stat. Methodol.
**2011**, 73, 423–498. [Google Scholar] [CrossRef] - Hanuka, A.; Huang, X.; Shtalenkova, J.; Kennedy, D.; Edelen, A.; Zhang, Z.; Lalchand, V.; Ratner, D.; Duris, J. Physics model-informed Gaussian process for online optimization of particle accelerators. Phys. Rev. Accel. Beams
**2021**, 24, 072802. [Google Scholar] [CrossRef] - Gupta, K.; Vats, D.; Chatterjee, S. Bayesian equation selection on sparse data for discovery of stochastic dynamical systems. arXiv
**2021**, arXiv:2101.04437. [Google Scholar] - Perdikaris, P.; Raissi, M.; Damianou, A.; Lawrence, N.D.; Karniadakis, G.E. Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. Proc. R. Soc. A Math. Phys. Eng. Sci.
**2017**, 473, 20160751. [Google Scholar] [CrossRef] [PubMed] - Perdikaris, P.; Venturi, D.; Karniadakis, G.E. Multifidelity information fusion algorithms for high-dimensional systems and massive data sets. SIAM J. Sci. Comput.
**2016**, 38, B521–B538. [Google Scholar] [CrossRef] - Peherstorfer, B.; Willcox, K.; Gunzburger, M. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Rev.
**2018**, 60, 550–591. [Google Scholar] [CrossRef] - Rasp, S. Coupled online learning as a way to tackle instabilities and biases in neural network parameterizations. arXiv
**2019**, arXiv:1907.01351. [Google Scholar] - Sharma, S.; Chatterjee, S. Winsorization for Robust Bayesian Neural Networks. Entropy
**2021**, 23, 1546. [Google Scholar] [CrossRef] [PubMed] - Cardelli, L.; Kwiatkowska, M.; Laurenti, L.; Patane, A. Robustness guarantees for Bayesian inference with Gaussian processes. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 7759–7768. [Google Scholar]
- Le Gratiet, L.; Garnier, J. Recursive co-kriging model for design of computer experiments with multiple levels of fidelity. Int. J. Uncertain. Quantif.
**2014**, 4, 365–386. [Google Scholar] [CrossRef] - Willard, J.; Jia, X.; Xu, S.; Steinbach, M.; Kumar, V. Integrating scientific knowledge with machine learning for engineering and environmental systems. arXiv
**2020**, arXiv:2003.04919. [Google Scholar] [CrossRef] - Karpatne, A.; Atluri, G.; Faghmous, J.H.; Steinbach, M.; Banerjee, A.; Ganguly, A.; Shekhar, S.; Samatova, N.; Kumar, V. Theory-Guided Data Science: A New Paradigm for Scientific Discovery from Data. IEEE Trans. Knowl. Data Eng.
**2017**, 29, 2318–2331. [Google Scholar] [CrossRef] - Jia, X.; Willard, J.; Karpatne, A.; Read, J.; Zwart, J.; Steinbach, M.; Kumar, V. Physics guided RNNs for modeling dynamical systems: A case study in simulating lake temperature profiles. In Proceedings of the 2019 SIAM International Conference on Data Mining, Calgary, AB, USA, 2–4 May 2019; pp. 558–566. [Google Scholar]
- Wang, Z.; Xing, W.; Kirby, R.; Zhe, S. Multi-fidelity high-order Gaussian processes for physical simulation. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Virtual, 13–15 April 2021; pp. 847–855. [Google Scholar]
- Cutajar, K.; Pullin, M.; Damianou, A.; Lawrence, N.; González, J. Deep gaussian processes for multi-fidelity modeling. arXiv
**2019**, arXiv:1903.07320. [Google Scholar] - Williams, C.K.; Rasmussen, C.E.; Scwaighofer, A.; Tresp, V. Observations on the Nyström Method for Gaussian Process Prediction; Max Planck Institute for Biological Cybernetics: Tübingen, Germany, 2002. [Google Scholar]
- Quinonero-Candela, J.; Rasmussen, C.E. A unifying view of sparse approximate Gaussian process regression. J. Mach. Learn. Res.
**2005**, 6, 1939–1959. [Google Scholar] - Hristopulos, D.T. Stochastic Local Interaction (SLI) model: Bridging machine learning and geostatistics. Comput. Geosci.
**2015**, 85, 26–37. [Google Scholar] [CrossRef] - Hristopulos, D.T.; Agou, V.D. Stochastic local interaction model with sparse precision matrix for space–time interpolation. Spat. Stat.
**2020**, 40, 100403. [Google Scholar] [CrossRef] - Berger, J.O. Statistical Decision Theory and Bayesian Analysis; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
- Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
- Barber, S.; Voss, J.; Webster, M. The rate of convergence for approximate Bayesian computation. Electron. J. Stat.
**2015**, 9, 80–105. [Google Scholar] [CrossRef] - Sisson, S.A.; Fan, Y.; Beaumont, M. Handbook of Approximate Bayesian Computation; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
- Marjoram, P.; Molitor, J.; Plagnol, V.; Tavaré, S. Markov chain Monte Carlo without likelihoods. Proc. Natl. Acad. Sci. USA
**2003**, 100, 15324–15328. [Google Scholar] [CrossRef] - Csillery, K.; Francois, O.; Blum, M.G.B. abc: An R package for approximate Bayesian computation (ABC). Methods Ecol. Evol.
**2012**, 3, 475–479. [Google Scholar] [CrossRef] - Biau, G.; Cérou, F.; Guyader, A. New insights into approximate Bayesian computation. Ann. l’IHP Probab. Stat.
**2015**, 51, 376–403. [Google Scholar] [CrossRef] - Weiss, A.J.; Möstl, C.; Amerstorfer, T.; Bailey, R.L.; Reiss, M.A.; Hinterreiter, J.; Amerstorfer, U.A.; Bauer, M. Analysis of coronal mass ejection flux rope signatures using 3DCORE and approximate Bayesian Computation. Astrophys. J. Suppl. Ser.
**2021**, 252, 9. [Google Scholar] [CrossRef] - Schaaf, A.; de la Varga, M.; Wellmann, F.; Bond, C.E. Constraining stochastic 3-D structural geological models with topology information using approximate Bayesian computation in GemPy 2.1. Geosci. Model Dev.
**2021**, 14, 3899–3913. [Google Scholar] [CrossRef] - Pacchiardi, L.; Künzli, P.; Schöngens, M.; Chopard, B.; Dutta, R. Distance-learning for approximate bayesian computation to model a volcanic eruption. Sankhya B
**2021**, 83, 288–317. [Google Scholar] [CrossRef] - Vrugt, J.A.; Sadegh, M. Toward diagnostic model calibration and evaluation: Approximate Bayesian computation. Water Resour. Res.
**2013**, 49, 4335–4345. [Google Scholar] [CrossRef] - Picchini, U. Inference for SDE models via approximate Bayesian computation. J. Comput. Graph. Stat.
**2014**, 23, 1080–1100. [Google Scholar] [CrossRef] - Schuhmacher, D.; Bähre, B.; Gottschlich, C.; Hartmann, V.; Heinemann, F.; Schmitzer, B. Transport: Computation of Optimal Transport Plans and Wasserstein Distances, 2020. R Package Version 0.12-2. Available online: https://cran.microsoft.com/snapshot/2022-07-06/web/packages/transport/index.html (accessed on 15 August 2022).
- Beygelzimer, A.; Kakadet, S.; Langford, J.; Arya, S.; Mount, D.; Li, S. FNN: Fast Nearest Neighbor Search Algorithms and Applications, 2019. R Package Version 1.1.3. Available online: https://cran.microsoft.com/snapshot/2022-07-06/web/packages/FNN/index.html (accessed on 15 August 2022).
- Vo, A.V.; Hewage, C.N.L.; Russo, G.; Chauhan, N.; Laefer, D.F.; Bertolotto, M.; Le-Khac, N.A.; Oftendinger, U. Efficient LiDAR point cloud data encoding for scalable data management within the Hadoop eco-system. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5644–5653. [Google Scholar]
- Tavaré, S.; Balding, D.J.; Griffiths, R.C.; Donnelly, P. Inferring coalescence times from DNA sequence data. Genetics
**1997**, 145, 505–518. [Google Scholar] [CrossRef] [PubMed] - Pritchard, J.K.; Seielstad, M.T.; Perez-Lezaun, A.; Feldman, M.W. Population growth of human Y chromosomes: A study of Y chromosome microsatellites. Mol. Biol. Evol.
**1999**, 16, 1791–1798. [Google Scholar] [CrossRef] [PubMed] - Hudson, R.R. Generating samples under a Wright–Fisher neutral model of genetic variation. Bioinformatics
**2002**, 18, 337–338. [Google Scholar] [CrossRef] [PubMed]

**Figure 1.** Water ice is not fully frozen. (**Left**): The crystal structure of Ice I${}_{\mathrm{c}}$. Oxygen atoms (red) occupy the diamond lattice, and hydrogen atoms (blue) occupy one of two possible (white) locations on the line between two oxygen atoms. Each oxygen atom obeys the “ice rules”: it must have two hydrogen atoms close to it and two far from it, a requirement imposed by the covalent bonding of two H atoms to each O atom. (**Right**): Sketch of the complex phase diagram of water ice, with pressure on a log scale and temperature on a linear scale (for more details, see Ref. [7]). Each ice phase is given a Roman numeral. All of the higher-temperature phases exhibit “proton disorder”, with hydrogen atoms free to choose lattice sites as in ice I${}_{\mathrm{c}}$.

**Figure 2.** Spin ice is a magnetic ice. (**Left**): A cubic unit cell of the crystal Dy${}_{2}$Ti${}_{2}$O${}_{7}$ showing the positions of the Dy atoms as large blue spheres. The information needed to produce this figure can be found in the Dy${}_{2}$Ti${}_{2}$O${}_{7}$ .cif file at the Materials Project [15]. (**Right**): Each Dy atom behaves like an atomic-sized bar magnet, conventionally displayed as a vector called a “spin vector”. The ice rules are obeyed in Dy${}_{2}$Ti${}_{2}$O${}_{7}$ at low temperatures, where each tetrahedron replaces an oxygen atom, each spin replaces a hydrogen atom, and two spin vectors point into each tetrahedron and two point out.

**Figure 3.** Spin-flip process and monopole hopping. (**Left**): Starting from an ice-rule-obeying spin configuration as in Figure 2, a pair of monopoles is formed after one spin flip as shown. Now, one tetrahedron has three spin vectors pointing out and one pointing in (a 3-in-1-out monopole), denoted by a red sphere, and one tetrahedron has one spin vector pointing out and three pointing in (a 1-in-3-out monopole), denoted by a green sphere. (**Right**): After a second spin flip, one of three things can happen: the monopoles annihilate and the two tetrahedra return to obeying the ice rules; another pair of monopoles is created, with one tetrahedron containing two (a 4-in-0-out or 0-in-4-out monopole, not shown); or, most commonly, one of the tetrahedra returns to obeying the ice rules and the monopole moves to another tetrahedron. This last possibility leads to a gas of monopoles that move around the system much like electrons and holes do in semiconductors [20].

**Figure 4.** Spin ice rings and cylinders. (**Left**): Rings enable the study of monopoles as an electrically neutral fluid. The flow of monopoles around a ring enables the study [25] of the monopoles’ fluid properties; it is found that the monopoles behave as a supercooled liquid at low temperatures. (**Right**): Noise detected in a coil wrapped around a cylinder of Dy${}_{2}$Ti${}_{2}$O${}_{7}$ enables a study of the random motion of the monopoles. The detected noise demonstrates that this motion happens slowly, at audio frequencies (20–20,000 Hz) that are far below the expected MHz range of single spin-flip processes.
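The pair-creation, hopping, and annihilation moves described in Figures 3 and 4 amount to simple bookkeeping on local "charges", which a short sketch can make concrete. The one-dimensional ring below is our own toy analog, not the pyrochlore lattice of Dy${}_{2}$Ti${}_{2}$O${}_{7}$: one spin flip creates a charge pair, a neighboring flip moves a charge, and re-flipping would annihilate the pair.

```python
import numpy as np

# Deliberately simplified 1D analog of Figure 3 (our toy construction):
# spins sit on the bonds of a ring of N sites and point left (-1) or
# right (+1). A site's "charge" is its net in-flux; charge 0 is the
# ice-rule analog, and a +2/-2 pair plays the role of the
# 3-in-1-out / 1-in-3-out monopole pair.
N = 50
spins = np.ones(N, dtype=int)  # all aligned: every site has charge 0

def charges(s):
    # in-flux from the left bond minus out-flux onto the right bond
    return np.roll(s, 1) - s

assert np.all(charges(spins) == 0)  # ice-rule state: no monopoles

# One spin flip creates a +2/-2 monopole pair on adjacent sites.
spins[10] *= -1
q = charges(spins)
print(sorted(q[q != 0].tolist()))  # -> [-2, 2]

# Flipping a neighboring spin moves one monopole (hopping); flipping
# the same spin back instead would annihilate the pair.
spins[11] *= -1
q = charges(spins)
print(np.nonzero(q)[0])  # the two charges now sit two sites apart
```

Repeating the neighboring-flip move sends one charge on a walk around the ring, the minimal picture of the monopole gas whose random motion produces the coil noise of Figure 4.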

**Figure 5.** Instantaneous snapshots of vertically integrated moist buoyancy for the (**a**) no-rotation case and (**b**) rotation case in the $A=40$ domain, normalized by half the buoyancy difference between the upper and lower boundaries. The normalized integrated buoyancy for an air column with the same moist buoyancy as the lower boundary is 0, and that for an air column with the same moist buoyancy as the upper boundary is $-2$.

**Figure 6.** The barotropic and baroclinic vorticities, and the effective heat that acts as a source for the baroclinic vorticity. The top row is an entirely dry case, so the effective heat is a proxy for temperature alone; the moist stratification in the dry case is defined as $\mu =1$. The bottom row is a moist case, so the effective heat is constructed from a combination of temperature and water vapor content; the moist stratification in this case is given by $\mu =4.0$. The dry parameters are the same between the two cases. Note the different color scales: the dry case is much less energetic than the saturated one.

**Figure 7.** Geometric basis for predicting LiDAR data capture resolution from an aerial platform (fixed-wing or rotary), where ${\theta}_{H}$ is the data capture angle from nadir and ${\theta}_{L}$ is the sensor beam spread.
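The two angles in Figure 7 determine the ground footprint of a single pulse, and hence the achievable capture resolution. A minimal sketch under a flat-terrain assumption (the altitude and angle values are illustrative, not taken from the text): the footprint widens off nadir, roughly as $1/{\mathrm{cos}}^{2}{\theta}_{H}$ for small beam spread.

```python
import math

def lidar_footprint(h, theta_h_deg, theta_l_deg):
    """Along-scan ground footprint (m) of one LiDAR pulse fired at
    off-nadir angle theta_H with beam spread theta_L from altitude h,
    assuming flat terrain (our simplification of Figure 7)."""
    a = math.radians(theta_h_deg)
    b = math.radians(theta_l_deg)
    return h * (math.tan(a + b / 2) - math.tan(a - b / 2))

# At nadir with a small beam spread, the footprint is roughly h * theta_L.
print(round(lidar_footprint(500.0, 0.0, 0.03), 3))   # -> 0.262
# The same pulse captured 20 degrees off nadir covers more ground,
# i.e., a coarser effective resolution at the scan edges.
print(round(lidar_footprint(500.0, 20.0, 0.03), 3))
```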

**Figure 8.** (**Rows**): Estimating the posterior mean and variance of $\vartheta $, top and bottom, respectively. (**Columns**): Prior hyperparameter $\lambda =0.01,1$. Distance metrics: sufficient Euclidean (orange), insufficient Euclidean (green), Wasserstein (blue), K–L divergence (purple).

**Figure 9.** Estimating $\mathbb{E}[\vartheta \mid Y]$ with multiple configurations of Algorithm 3 for varying n (line type: $n=10$ solid, $n=25$ dashed), $\epsilon $ (horizontal axis), true data-generating model (facets clockwise from top left: $(\vartheta ,{\sigma}^{2})=(1,1),\phantom{\rule{0.277778em}{0ex}}(\vartheta ,{\sigma}^{2})=(1,2),\phantom{\rule{0.277778em}{0ex}}(\vartheta ,{\sigma}^{2})=(10,1),\phantom{\rule{0.277778em}{0ex}}(\vartheta ,{\sigma}^{2})=(10,2)$), M (line color: $M=50$ orange, $M=100$ green, $M=200$ blue).

**Figure 10.** Estimating $\mathbb{E}[{\sigma}^{2}\mid Y]$ with multiple configurations of Algorithm 3 for varying n (line type: $n=10$ solid, $n=25$ dashed), $\epsilon $ (horizontal axis), true data-generating model (facets clockwise from top left: $(\vartheta ,{\sigma}^{2})=(1,1),\phantom{\rule{0.277778em}{0ex}}(\vartheta ,{\sigma}^{2})=(1,2),\phantom{\rule{0.277778em}{0ex}}(\vartheta ,{\sigma}^{2})=(10,1),\phantom{\rule{0.277778em}{0ex}}(\vartheta ,{\sigma}^{2})=(10,2)$), M (line color: $M=50$ orange, $M=100$ green, $M=200$ blue).

**Figure 11.** (**Left panel**): Estimation bias as a function of $\epsilon $ given a fixed computational budget. (**Middle panel**): Mean absolute error of ABC estimates of $\mathbb{E}[\vartheta \mid {y}_{0}]$ based on M samples; mean runtime (in seconds) given in red. (**Right panel**): Parallelizing ABC across multiple cores (1 orange, 4 green, 8 blue, 16 purple) affects runtimes (${\mathrm{log}}_{2}$ seconds).

**Figure 12.** Posterior density estimates for $(\alpha ,{\vartheta}_{0})$ under varying summary statistics (clockwise): average Hamming distance, number of segregating sites, Tajima’s D and Fay–Wu’s ${H}_{0}$, and all four statistics. Shading corresponds to $\{0.25,0.5,0.75,0.9\}$ highest posterior density contours. The true data-generating $(\alpha ,{\vartheta}_{0})$ is shown as a black point in all frames.
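The ABC experiments summarized in Figures 8–12 rest on rejection sampling: draw a parameter from the prior, simulate data from it, and keep the draw when a summary statistic of the simulated data falls within $\epsilon $ of the observed summary. A minimal sketch in Python, using a toy Gaussian-mean setup of our own (with the sample mean as a sufficient summary), not the exact models behind the figures:

```python
import numpy as np

def abc_rejection(y_obs, prior_draw, simulate, summary, eps, M, rng):
    """Rejection ABC: keep prior draws whose simulated summary lies
    within eps of the observed summary statistic."""
    s_obs = summary(y_obs)
    accepted = []
    while len(accepted) < M:
        theta = prior_draw(rng)
        y_sim = simulate(theta, len(y_obs), rng)
        if abs(summary(y_sim) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(0)
# Toy setup (our assumption): data are N(theta, 1) with a N(0, 10^2)
# prior on theta; the sample mean is a sufficient summary here.
y0 = rng.normal(1.0, 1.0, size=25)

draws = abc_rejection(
    y_obs=y0,
    prior_draw=lambda r: r.normal(0.0, 10.0),
    simulate=lambda th, n, r: r.normal(th, 1.0, size=n),
    summary=np.mean,
    eps=0.2,
    M=200,
    rng=rng,
)
print(round(float(draws.mean()), 2))  # close to the sample mean of y0
```

Shrinking `eps` tightens the match at the cost of more rejections, which is exactly the accuracy-versus-runtime trade-off visible in Figure 11; an insufficient summary statistic leaves parts of the parameter unidentified, as Table 3 shows for ${\sigma}^{2}$.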

**Table 1.** Comparison of generalized spatiotemporal techniques. YES ≡ ✓, NO ≡ ✕. Entries that are not applicable are indicated by a dash (-).

Columns 2–8 give the method classification; columns 9–15 give the results classification.

| Spatiotemporal Schemes | Linear/Non-Linear | Data/Eqn-Driven | Deterministic/Statistical | Continuous/Discrete | Boundary Conditions | Constraints/Conserved | Least Squares/Projection | Predictive | Generate PDF | Simulation | Scaling | Interpolation | Extrapolation | Quality/Limitations |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (notation) | L/NL | D/E | D/S | C/D | | | LS/PR | | | | | | | $\mathit{h}=|\mathbf{\Delta}\mathit{r}|,\mathbf{\Delta}\mathit{t}$ |
| Particle Dynamical Systems | - | E | D | C | - | $(E,\mathbf{p})$ | - | ✓ | ∼ | ✓ | ✓ | - | - | ${\mathcal{P}}_{n}(\mathbf{\Delta}t)$ |
| Field Evolution | - | E | D | C | varied | $(E,\mathbf{p})$ | PR | - | ✓ | ✓ | - | ✓ | ∼ | - |
| Lagrangian/Hamiltonian | - | E | D | C | varied | $(E,\mathbf{p})$ | PR | ✓ | ✓ | ✓ | ✓ | - | - | - |
| PDE-Linear | L | E | D | C | varied | - | PR | ✓ | - | ✓ | - | ✓ | ∼ | - |
| PDE-Non-Linear | NL | E | D | C | varied | - | LS/PR | ✓ | - | ✓ | - | ✓ | ∼ | - |
| SDE [113] | NL | D | S | C | - | - | LS | partially | ✓ | ✓ | - | ✓ | ✓ | - |
| Finite Element/Finite Difference [114] | L | - | - | - | per element | smooth | LS | - | - | - | - | ✓ | ✓ | ${\mathcal{P}}_{n}\left(h\right)$ |
| Forward Time Evolution | - | D/E | D | C/D | initial | rules | LS | ✓ | - | ✓ | - | - | ✓ | ${\mathcal{P}}_{n}\left(\mathrm{iter}\right)$ |
| Markovian | L | D/E | D | C/D | initial | - | LS/PR | ✓ | ∼ | ✓ | - | ✓ | ∼ | ${\mathcal{P}}_{n}\left(\mathrm{iter}\right)$ |
| Gaussian Process | - | D/E | S | C/D | - | asymptotic bound | LS | ✓ | ✓ | ✓ | ✕ | ✓ | ✕ | - |
| Machine Learning | NL | D | ∼D+S | C+D | - | - | LS | mostly | ✓/✕ | ∼ | ∼ | ✓ | ∼ | ${N}_{\mathrm{nodes}}$ |
| Discrete Operations | L/NL | E | D | D | - | rules | LS | ✓ | - | ✓ | - | - | - | - |
| Modal Decomposition | L | D/E | - | C | - | orthogonality | PR | ✕ | - | - | ✓ | ✓ | ✕ | ${N}_{max}<\infty $ |
| Virial Theorem | NL | E | D/S | C | - | ✓ | LS | - | - | - | - | - | - | - |

**Table 2.** Comparison of specific spatiotemporal techniques. YES ≡ ✓, NO ≡ ✕. Entries that are not applicable are indicated by a dash (-).

Each row marks which schemes a given technique employs.

| Specific Spatiotemporal Schemes | Particle/Dynamical | Field/Lattice Evolution | Lagr/Hamiltonian | PDE-Linear | PDE-Non-Linear | Stochastic DE | Finite Element/Diff. | Forward Time Evol. | Markov/Kernel | Modal Decomposition | Virial Theorem | Discrete Ops. | Gaussian Proc. | Neural Net |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Monte Carlo [118] | ✕ | ✕ | ✓ | - | - | - | ✕ | ✕ | - | - | - | - | - | - |
| Molecular Dynamics [119,120,121] | ✓ | ✕ | ✓ | - | - | ✓ | ✓ | ✓ | - | - | ✓ | - | - | - |
| Fluid Advection (CFD) [122] | ✓ | ✕ | ✕ | - | ✓ | - | ✓ | ✓ | ✓ | - | - | - | - | - |
| Ising Model [123] | - | ✓ | ✓ | - | - | - | - | ✓ | - | - | - | ✓ | - | - |
| Quantum Mechanics [112,124,125] | - | ✓ | ✓ | ✓ | ✓ | - | - | - | - | ✓ | - | - | - | - |
| Data Assimilation [126,127] | - | ✓ | ✕ | ✓ | ✓ | ✓ | - | - | - | - | - | - | - | - |
| Cellular Automata (Life) [107,108] | - | ✓ | ✕ | - | - | - | - | ✓ | - | - | - | ✓ | - | - |
| Kinetic Monte Carlo [128,129] | - | ✕ | - | - | - | - | - | - | - | - | - | - | - | ✓ |
| LiDAR [130] | - | ✕ | - | ✕ | ✕ | - | ✓ | - | ✓ | - | - | ✓ | - | ✓ |
| Spin Ice [13,20] | - | ✓ | ✓ | - | - | - | - | - | - | - | - | ✓ | - | ✓ |
| Rayleigh–Bénard Moist Conv. [71] | - | ✓ | - | - | ✓ | - | - | ✓ | - | - | - | - | ✓ | ✓ |
| DMD [109,110] | - | ✓ | ✕ | - | - | - | - | - | - | ✓ | - | - | - | - |
| Physics-Informed NN [111] | - | ✕ | ✕ | - | - | - | - | ✓ | - | - | - | - | - | ✓ |
| Ocean Modeling - POM/ROM [131] | - | ✓ | - | ✕ | ✓ | ✓ | - | ✓ | - | - | - | - | - | - |
| Climate Modeling [132] | - | ✓ | - | ✕ | ✓ | ✓ | - | ✓ | - | - | - | - | - | - |
| Galactic Structure [133] | ✓ | ✕ | - | - | - | - | - | - | - | - | ✓ | - | - | - |
| Stock Futures [134] | - | - | - | - | - | ✓ | ✓ | ✓ | - | - | - | ✓ | ✓ | ✓ |
| Compressive Sensing [135] | - | - | - | - | - | ✓ | - | - | ✓ | - | - | ✓ | - | - |

**Table 3.** ABC estimation errors under summary statistics ${S}_{1}$ and ${S}_{2}$. ${S}_{2}$ does not contain any information on the variance of the generated data, and the estimates concerning ${\sigma}^{2}$ correspondingly suffer.

Columns 2–3 give the ${\mathbb{E}}_{{\mathit{\pi}}_{\mathbf{ABC}}}[\mathit{\vartheta}\mid \mathit{Y}]$ error; columns 4–5 give the ${\mathbb{E}}_{{\mathit{\pi}}_{\mathbf{ABC}}}[{\mathit{\sigma}}^{2}\mid \mathit{Y}]$ error.

| $\mathbf{\epsilon}$ | ${\mathit{S}}_{\mathbf{1}}$ | ${\mathit{S}}_{\mathbf{2}}$ | ${\mathit{S}}_{\mathbf{1}}$ | ${\mathit{S}}_{\mathbf{2}}$ |
|---|---|---|---|---|
| 0.01 | 0.033 (0.029) | 0.040 (0.036) | 0.069 (0.074) | 1.269 (1.673) |
| 0.1 | 0.035 (0.031) | 0.043 (0.038) | 0.069 (0.064) | 1.233 (1.937) |
| 0.2 | 0.038 (0.032) | 0.045 (0.040) | 0.085 (0.074) | 1.118 (1.188) |
| 0.3 | 0.040 (0.033) | 0.048 (0.052) | 0.093 (0.071) | 1.344 (4.544) |
| 0.4 | 0.042 (0.036) | 0.051 (0.042) | 0.105 (0.078) | 1.134 (1.327) |
| 0.5 | 0.045 (0.037) | 0.053 (0.045) | 0.121 (0.087) | 1.131 (0.995) |

**Table 4.** ABC runtimes in seconds for generating M samples when parallelized across 1, 4, 8, and 16 cores.

| M | 1 Core | 4 Cores | 8 Cores | 16 Cores |
|---|---|---|---|---|
| 1 | 0.46 (0.07) | 0.51 (0.05) | 0.51 (0.08) | 0.58 (0.05) |
| ${2}^{2}$ | 0.57 (0.10) | 0.51 (0.07) | 0.58 (0.14) | 0.65 (0.12) |
| ${2}^{4}$ | 0.83 (0.30) | 0.62 (0.16) | 0.60 (0.10) | 0.68 (0.10) |
| ${2}^{6}$ | 1.88 (1.36) | 0.92 (0.38) | 0.79 (0.27) | 0.83 (0.20) |
| ${2}^{8}$ | 6.15 (5.13) | 2.25 (1.47) | 1.58 (1.01) | 1.42 (0.68) |
| ${2}^{10}$ | 23.56 (20.03) | 7.48 (5.98) | 4.16 (3.07) | 3.46 (2.29) |
| ${2}^{12}$ | 93.00 (74.08) | 26.10 (23.15) | 14.35 (11.15) | 11.66 (11.26) |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Sharma, S.; Thompson, M.; Laefer, D.; Lawler, M.; McIlhany, K.; Pauluis, O.; Trinkle, D.R.; Chatterjee, S.
Machine Learning Methods for Multiscale Physics and Urban Engineering Problems. *Entropy* **2022**, *24*, 1134.
https://doi.org/10.3390/e24081134
