A Taxonomic Survey of Physics-Informed Machine Learning

Abstract: Physics-informed machine learning (PIML) refers to the emerging area of extracting physically relevant solutions to complex multiscale modeling problems lacking sufficient quantity and veracity of data, using learning models informed by physically relevant prior information. This work discusses recent critical advancements in the PIML domain. In particular, we highlight novel methods and applications of domain decomposition in physics-informed neural networks (PINNs). Additionally, we explore recent work on neural operator learning for intuiting relationships in physics systems traditionally modeled by sets of complex governing equations and solved with expensive differentiation techniques. Finally, expansive applications of traditional physics-informed machine learning and its potential limitations are discussed. In addition to summarizing recent work, we propose a novel taxonomic structure that catalogs physics-informed machine learning by how the physics information is derived and injected into the machine learning process. The taxonomy has the explicit objectives of facilitating interdisciplinary collaboration in methodology, promoting a wider characterization of the types of physics problems served by physics-informed learning machines, and assisting in identifying suitable targets for future work. In summary, the twofold goal of this work is to survey recent advancements and to introduce a taxonomic catalog for applications of physics-informed machine learning.


Introduction
Building reliable multiphysics models is an essential operation in nearly any area of scientific research. However, solving and interpreting said models can be an essential limitation for many. As complexity and dimensionality in physical models grow, so too does the expense associated with traditional differentiation methods, such as finite element mesh construction. In addition to numerical hurdles, challenging experimental observation or otherwise unavailable data can also limit the applicability of certain modeling approaches. Furthermore, traditional learning machines fail to learn relationships in many complex physical systems due to the typical imbalance of data on the one hand and the lack of physically relevant knowledge on the other. Problems lacking trustworthy observational data are typically modeled by precise systems of complex equations with initial conditions and tuned coefficients.
This work details recent advancements in physics-informed machine learning. Physics-informed machine learning is a tool by which researchers can extract physically relevant solutions to multiscale modeling problems. Crucially, physics-informed learning machines have been shown to accurately learn general solutions to complex physical processes having sparse, multifidelity, and/or otherwise incomplete data by leveraging knowledge of the underlying physical features. What differentiates physics-informed learning from traditional statistical models is the tangible inclusion of physically relevant prior knowledge. The constitutional need for qualitatively defined, physically relevant learning interventions further increases the need for a qualitative taxonomy.
Most notable among the recent advancements, we focus on the increasing parallelism of the physics-informed neural network algorithm and the introduction of neural operators for learning systems of differential equations. In 2021, Karniadakis et al. [1] provided a comprehensive review of the methods leveraged in physics-informed learning and formed an outline of biases catalyzed by prior physical knowledge. Karniadakis et al. assert as a key point, "Physics-informed machine learning integrates seamlessly data and mathematical physics models, even in partially understood, uncertain and high-dimensional contexts" [1]. This comprehensive review primarily details physics-informed neural learning machines for applicability to a diverse set of difficult, ill-posed, and inverse problems. Karniadakis et al. go on to discuss domain decomposition for scalability and operator learning as future areas of research. Reflecting a recent expansion in scope, physics-informed learning machines have been applied in diverse fields including fluids [2], heat transfer [3], COVID-19 spread [4,5], and cardiac modeling [6-8]. Cai et al. [2] offer a review of physics-informed machine learning implementations for three-dimensional wake flows, supersonic flows, and biomedical flows. High-dimensional and noisy data from fluid flows are prohibitively difficult to train on with traditional learning algorithms; this review highlights the applicability of physics-informed neural networks to this problem in fluid flow modeling. For heat transfer problems, Cai et al. [3] discuss a variety of physics-informed machine learning approaches to convection heat transfer problems with unknown boundary conditions, including several forced convection and mixed convection problems. Again, Cai et al.
showcase diverse applications of physics-informed neural networks, applying neural learning machines in traditionally impractical settings where injecting physically relevant prior information makes neural network modeling viable. In 2022, Nguyen et al. [4] provided an SEIRP-informed neural network with architecture and training routine changes defined by governing compartmental infection model equations. Additionally, Cai et al. [5] propose the fractional PINN (fPINN), a physics-informed neural network created for the rapidly mutable COVID-19 variants and trained with Caputo-Hadamard derivatives in the loss propagation of the training process. Cuomo et al. [9] provide a summary of several physics-informed machine learning use cases. The wide range of apt applications for physics-informed machine learning further perpetuates the need for qualitative discussion and subclassification.
The need for computational methods, especially where problems are modeled by complex and/or multiscale systems of nonlinear equations, is growing rapidly. A substantial amount of scholarly attention has recently been devoted to methods advancing the data-driven learning of partial differential equations. Raissi et al. [10] were able to infer lift and drag from velocity measurements and flow visualizations, with the Navier-Stokes equations enforced using automatic differentiation to obtain the required derivatives and compute the residuals composing loss-step augmentations. A similar process is common for learning-process augmentations in physics-informed learning. Raissi et al. [11] introduced a hidden physics model for learning nonlinear partial differential equations (PDEs) from noisy and limited experimental data by leveraging the underpinning physical laws. This approach uses Gaussian processes to balance model complexity and data fitting. The hidden physics model was applied to the data-driven discovery of PDEs, such as the Navier-Stokes, Schrödinger, and Kuramoto-Sivashinsky equations. Later, in another paper, two neural networks were employed for similar problems [12]. The first neural network models the prior of the unknown solution, and the second neural network models the nonlinear dynamics of the system. In another work, the same group used deep neural networks combined with a multistep time-stepping scheme to identify nonlinear dynamics from noisy data [13]. The effectiveness of this approach was shown for nonlinear and chaotic dynamics, the Lorenz system, fluid flow, and Hopf bifurcation. In 2019, Raissi et al. [14] also proposed two types of algorithms: continuous-time and discrete-time models. The so-titled physics-informed neural network is tailored to two classes of problems: (a) the data-driven solution of PDEs and (b) the data-driven discovery of PDEs.
The approach's effectiveness was demonstrated for several problems, including the Navier-Stokes, Burgers, and Schrödinger equations. Consequently, the PINN has been adapted to intuit governing equations or solution spaces for many types of physical systems. For Reynolds-averaged Navier-Stokes problems, Wang et al. [15] propose a physics-informed random forest model for assisting data-driven flow modeling. Other research introduces physics constraints, consequently biasing the learning process with prior relevant knowledge. Sirignano et al. [16] accurately solved high-dimensional free-boundary partial differential equations and proved the approximation utility of neural networks for quasilinear partial differential equations. Han et al. [17] proposed a deep learning method for high-dimensional partial differential equations with backward stochastic differential equation reformulations. Rudy et al. [18] propose a method for estimating governing equations of numerically derived flow data using automatic model selection. Long et al. [19] proposed another data-driven approach for governing equation discovery, and Leake et al. [20] introduce the combination of the deep theory of functional connections with neural networks to estimate solutions to partial differential equations. The preceding selection includes examples of the transition from solving partial differential equations with expensive techniques to learning solutions with high-throughput learning machines, and we cover some works on the latter topic next. Figure 1 shows how physics-informed machine learning is used in the learning process to accelerate training and to make models applicable to problems whose data inconsistencies have posed obstacles to traditional learning.
In neural networks, as Figure 1 shows, (1) learning bias in the form of physically relevant modeling equations is used directly in training error propagation, (2) inductive bias, through neural model architecture augmentation, is used to introduce previously understood physical structures, and (3) observational bias in the form of data-gathering techniques is used to introduce physical bias via informed data structures of simulation or observation. Here, i denotes the input features, and h_i denotes an arbitrary number of hidden layers of arbitrary size. A, B, and C are physically relevant structures. For example, in compartmental epidemiology, the population bins and their underlying governing equations are differentiated in process (1). Biases are explored further in Section 2.2.
This survey aims to summarize recent advances in physics-informed machine learning, including improved parallelism and the introduction of the neural operator while also discussing the broadening scope to which physics-informed machine learning is being applied. Most crucially, we introduce a taxonomic system for classifying the source and effect of physics-based information augmentations on learning machines to categorize existing work and promote the wide applicability of PIML by highlighting problems well posed for the informed learning machine paradigm.
The body of work surveyed herein was obtained by Google Scholar searches with the keywords "physics-informed machine learning" or "physics-informed neural networks". We restricted the search results to recent articles appearing since 2021. Both queries returned more than 17,000 publications as of January 2023, so further filtering by keywords appearing in the title was needed. Publications chosen for this survey are meant to be representative of recent trends and were chosen for their usefulness in discussing the proposed taxonomic structure. Hence, for the purposes of this work, an individual paper's impact was considered less important than its utility for taxonomic discussion. Moreover, some additional papers predating our search, which are foundational to the selected works, have been added for clarity of the narrative. Given the acute popularity of these methods, an exhaustive survey is infeasible. Thus, a representative group of research has been selected to cover recent trends and to adequately inform our taxonomic structure, which is itself necessitated by the wide range of new work.

Taxonomy
Machine learning techniques for learning initial-intermediate state relationships in systems traditionally modeled by complex collections of differential equations, solved with conventional numerical methods, are experiencing growing popularity and vitally increasing applicability. Thus, so too must grow our understanding of how information gathered from the physical understanding of systems is employed to facilitate the learning process and how physically derived information is injected into the machine learning pipeline. To this end, a taxonomy to classify physics-informed machine learning applications is proposed. In summary, physics information is implicitly driven by numerically derived data or explicitly driven by well-understood relationships in the physical system, where both drivers are often generated with or informed by traditionally studied governing differential equations. There are several typical sources for such prior physical knowledge. Data can enforce physics information in several ways: physical symmetries can be introduced, or implicit relationships from high-fidelity simulations can implicitly affect model convergence. Additionally, physically relevant information in the form of high-fidelity models can be imposed on the learning process. Governing equations can be incorporated directly into the optimization process, or learning machine architectures can be tweaked to reflect physically important model qualities.
Toward increasing explainability in physics-informed machine learning, the taxonomic system affords a researcher the framework to answer two fundamental questions. The first relates to the origin of the physics information applied to the learning process, or how the information is driven; in other words, what is the driver of physically relevant priors? Two general answers are proposed: physics-model and physics-data drivers. The second asks in what way physics-derived information is utilized in the learning process; in other words, how is physical bias introduced to the learning process? Three biases, outlined by Karniadakis et al. [1], are proposed as answers to the second question: observational, inductive, and learning biases.
The proposed taxonomic structure promotes collaboration toward future work in physics-informed machine learning by offering a structure through which researchers utilizing scientific machine learning may discern strategies by which physics-derived information can bolster the learning process. Furthermore, the taxonomic structure illuminates how model-driven analyses of physical systems can be supplemented by cost-mitigating methods of machine learning. We next identify the basic components of physics-informed machine learning models to motivate the design of the proposed taxonomy. Figure 2 gives an overview of how the different drivers inform the various biases, and the resulting classification of surveyed works is collected in Table 1.

Driver
The pipeline of physics-informed machine learning is given in Figure 3. The primary consideration when confronting the explainability of PIML modeling is the means by which the physics-based information is obtained for use in the machine learning process. This question crucially differentiates the qualifications for physics-based learning while further broadening the scope and promoting the introduction of physics-informed machine learning. The proposed taxonomy draws two distinctions: first, physics information is enforced on the learning process through governing equations or physical structural patterns, and second, physics information is encoded implicitly into the data through simulation or observation. The data may be structured in a physically relevant way, or physics information is derived from traditional multiphysics models directly, as typically expressed by the governing equations. Figure 3 adopts the images of heat transfer problems, neural networks, and generic datasets as stand-ins for any physics-based model, learning machine, or physically structured dataset, respectively. In our generic example, u and v are physically relevant parameters in the feature set X, and f(u, v, t) are labels. Consequently, ∂u/∂t and ∂v/∂t are governing physics equations underpinning the relationship between features and labels. Traditionally, expensive differentiation is employed to realize the predictive power of the physics-based model. However, Figure 3 displays the alternative PIML pipeline in the context of the two possible drivers of prior physics information.
Physics-model driven. First, and most tangibly, physics information can be derived from preexisting physical models of a given problem. Commonly, governing equations are directly infused into the learning process. The learning machine might also realize these augmentations in the form of architectural changes that reflect physical symmetries. The features of the machine learning model are explicitly constrained by the problem's underlying physics.
Physics-data driven. Second, and expressed broadly, physics information can be derived implicitly from numerically data-driven methods. This is not to say that all data-driven machine learning is "physics informed". Yet, if data have a physical structure informed by high-fidelity, numerically derived training data or by empirical data quantifiable by a trustworthy physical model, such a model is, at least indirectly, constrained by the well-understood underlying physical priors. After all, it is the same relationships traditionally modeled by complex systems of differential equations that researchers are attempting to learn, in lieu of differentiating through expensive means.

Bias
In the taxonomy of PIML, biases describe how physics information is enforced in the machine learning application. Stemming from two different drivers, three resultant biases are defined. Karniadakis et al. [1], in their 2021 review of physics-informed machine learning, provided a categorization of bias in the machine learning process. An interpretation of these processes is given in Figure 4, which adopts the same general abstractions as those in Figure 3 and includes the generic representation of a physics model's governing equations by ∂u/∂t and ∂v/∂t. The variety of drivers and biases is given in Table 1. As exemplified by the neural network in Figure 4, learning bias (1) is typically enforced in the training error, inductive bias (2) is applied in the architecture, and observational bias (3) is applied at the data level.
Observational bias. Physics information can be incorporated into the learning process through data. Models learning from numerically derived, data-driven methods intuit physically relevant relationships in the structure of data that have been produced, ipso facto, by the researcher's understanding of the underlying physics. The abundance of various sensors makes physically relevant observational data for multiphysics modeling problems equally abundant. Much work has addressed the need to gain maximum generalizability from sparse simulated data and other sources of multifidelity data.
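As a minimal sketch of observational bias, consider training data generated by a numerical integrator: the data themselves carry the physical structure, so any model fit to them is implicitly constrained by the physics. The decay system, step size, and function names below are illustrative choices, not drawn from any surveyed work.

```python
import numpy as np

# Observational bias, sketched: the training set is produced by a physical
# model. Here the toy system du/dt = -k*u is integrated with forward Euler,
# and the resulting (t, u) pairs would serve as the training data.
def simulate_decay(u0=1.0, k=2.0, dt=1e-3, steps=1000):
    """Generate physics-structured observations for du/dt = -k*u."""
    t = np.arange(steps + 1) * dt
    u = np.empty(steps + 1)
    u[0] = u0
    for n in range(steps):
        u[n + 1] = u[n] + dt * (-k * u[n])  # forward Euler step
    return t, u

t, u = simulate_decay()
# The simulated data closely track the analytic solution u0*exp(-k*t),
# so the physics is encoded in the data themselves.
err = np.max(np.abs(u - np.exp(-2.0 * t)))
```

A learner trained on these pairs never sees the governing equation explicitly, yet its fitted relationship is biased toward it through the data alone.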
Inductive bias. Physics information is directly injected into the learning process through architecture-level decisions of the model. Various partitions of the model can be trained in a multitask fashion to implicitly satisfy the underlying physics. Architecture-level changes induce bias on the training process by influencing modeling choices with intrinsic physical principles.
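One common architecture-level intervention is a hard constraint on the network output. In the sketch below, a hypothetical two-layer perceptron is wrapped in an output transform x(1 - x)N(x) so that homogeneous Dirichlet boundary conditions u(0) = u(1) = 0 hold exactly for any weights; the specific transform and network are illustrative assumptions, not a published architecture.

```python
import numpy as np

# Inductive bias, sketched: the architecture itself guarantees the boundary
# conditions, rather than penalizing violations in the loss.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)

def raw_net(x):
    """Unconstrained two-layer perceptron (stand-in for any hidden stack)."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def constrained_u(x):
    """Output transform x*(1-x)*N(x): Dirichlet conditions built in."""
    return x * (1.0 - x) * raw_net(x)

x = np.array([[0.0], [0.5], [1.0]])
u = constrained_u(x)  # exactly zero at x = 0 and x = 1, untrained or not
```

Because the constraint is structural, the optimizer never spends capacity learning the boundary behavior.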
Learning bias. Physics information is given as an informed biasing of the optimization step in the machine learning model. Often, loss functions are directly informed by calculating residuals of underlying physics equations. Rather than implicitly influencing the training process, learning bias imposes explicit constraints on the model in a multitask learning process, where the model is trained against penalties informed by the underlying physical features.
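A minimal sketch of learning bias follows: a composite loss sums a data misfit and the mean squared residual of a governing equation, here du/dt = -u. Published PINNs typically obtain derivatives by automatic differentiation; the finite-difference residual, the weighting, and all function names below are simplifying assumptions.

```python
import numpy as np

# Learning bias, sketched: candidate solutions are scored on both data fit
# and the residual of the governing equation at collocation points.
def composite_loss(u_fn, t_data, u_data, t_col, w_phys=1.0):
    """Data loss + physics-residual loss for du/dt = -u."""
    data_loss = np.mean((u_fn(t_data) - u_data) ** 2)
    eps = 1e-5
    dudt = (u_fn(t_col + eps) - u_fn(t_col - eps)) / (2 * eps)  # central diff
    residual = dudt + u_fn(t_col)          # residual of du/dt + u = 0
    phys_loss = np.mean(residual ** 2)
    return data_loss + w_phys * phys_loss

t_data = np.linspace(0.0, 1.0, 5)
u_data = np.exp(-t_data)                   # sparse observations
t_col = np.linspace(0.0, 1.0, 50)          # collocation points

loss_true = composite_loss(lambda t: np.exp(-t), t_data, u_data, t_col)
loss_bad = composite_loss(lambda t: 1.0 - t, t_data, u_data, t_col)
```

The exact solution drives both terms toward zero, while a physically inconsistent candidate is penalized even where it happens to fit the sparse data.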

Taxonomy Tableau
The works surveyed in this paper include taxonomically distinguishable implementations of the physics-informed machine learning paradigm and are grouped by the classifying drivers and biases as explained above. Articles that share multiple methods or otherwise cannot be categorized are excluded for the sake of readability.
It becomes abundantly obvious that the extremely popular physics-informed neural network paradigm dominates our space of surveyed methods. In fact, physics-informed neural networks, which train on some numerical data, train with loss estimates derived from the governing equations, and augment the model architecture to fit physically relevant parameters, typically check each box of the proposed taxonomic catalog. This fact, however, does not diminish the utility of the taxonomy; instead, it highlights how the minutiae differentiating applications of physics-informed learning machines can be discussed rather precisely within the confines of the taxonomy framework. In other words, the differences between two methods that appear the same in name and taxonomic qualities can be discussed in the context of how each of their drivers or biases is individually implemented. Additionally, many applications of physics-informed neural networks include implementations of several problems. For this reason, if a taxonomic quality is found in any of the individual examples, it is included in Table 1.
The proposed taxonomic structure facilitates the tangible description and discussion concerning the qualities by which physics-informed learning machines and their applications might be differentiated. For example, two models might both include learning biases where one model calculates Caputo-Hadamard fractional representations of governing equations and another calculates algebraic differential equation solutions.

Discussion of Relevant Applications
We next discuss advances in physics-informed machine learning. One major consideration in recent work has been increasing potential parallelism. Spearheading advancements in the parallelism of PIML are techniques for domain decomposition in physics-informed neural networks. Implementations of the extended physics-informed neural network algorithm employ domain decomposition of the kind traditionally accomplished by expensive meshed differentiation. Progress in physics-data-driven machine learning has an important feedback relationship with physics-model-driven machine learning. Select methods by which physics-data-driven machine learning is performed are also discussed next. Namely, we revisit methods for approximating solutions to largely nonlinear differential equations with neural operators. This feedback relationship has led to the introduction of the physics-informed neural operator and other methods for learning functionals with physics-based constraints.

Domain Decomposition
One important applicability of physics-informed neural networks is in the regime of problems traditionally resolved with expensive meshing methods, such as finite element methods. In several applications, PINNs reduce cost while maintaining substantial accuracy, though training time and residual calculations can still be concerning. Toward mitigating cost, much work has improved the parallelization of the PINN architecture. Jagtap et al. [22] introduced XPINN, a generally applicable space-time decomposition for easily parallelized subdomains governed by individually interconnected sub-PINNs. XPINN's wide applicability to forward and inverse modeling problems was demonstrated. Impressively complex subdomains can be solved reliably with XPINN due to the formulation of relatively simple interfacing conditions. XPINN employs model-driven inductive and learning biases by augmenting network shape and loss functions based upon physical priors. XPINN is a broad example that encompasses each of the branches of the taxonomy. Shukla et al. [24] provide a parallel implementation of the previously proposed cPINN [25] and XPINN on two-dimensional steady-state incompressible Navier-Stokes equations, the viscous Burgers equation, and an inverse problem of steady-state heat conduction with variable conductivity. This work shows the advantages and disadvantages of each method and their use in tandem. Further, optimization of the distributed computing process is given. Each implementation further exemplifies the applicability of domain decomposition methods for physics-informed neural networks toward arbitrarily shaped and complex subdomains. Physics-model-driven inductive and learning biases, fundamentally similar to those of XPINN, were discussed by Shukla et al. [24] and Jagtap et al. [25]. Physics-data drivers are available depending upon modeling choices. Jagtap et al. [26] outlined XPINN's effectiveness toward inverse problems in supersonic flows.
Enforced physics information includes governing equations as well as entropy conditions, displaying the utility of additional physics information beyond the governing system of the model. In this study, the conservative-form Euler equations are given as ∂_t U + ∇ · G(U) = 0. Here, both the conservation laws, F := ∂_t U + ∇ · G(U), and the entropy condition, η_t + φ_1x + φ_2y ≤ 0, are applied as learning biases. Papadopoulos et al. [27] provide an XPINN implementation for steady-state heat transfer in composite materials with interface interaction.
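An inequality constraint such as the entropy condition above can enter the loss as a one-sided penalty that activates only where the condition is violated. The hinge-style penalty below is a generic sketch of this idea, not the exact formulation of [26].

```python
import numpy as np

# Hedged sketch: an inequality constraint of the form
# eta_t + phi_1x + phi_2y <= 0 folded into a learning bias as a one-sided
# (hinge) penalty. `entropy_expr` holds pointwise values of the left side.
def entropy_penalty(entropy_expr):
    """Mean squared hinge penalty: zero wherever the inequality holds."""
    return np.mean(np.maximum(entropy_expr, 0.0) ** 2)

satisfied = np.array([-0.5, -0.1, 0.0])   # inequality holds everywhere
violated = np.array([-0.5, 0.2, 0.4])     # positive entries violate it

p_ok = entropy_penalty(satisfied)
p_bad = entropy_penalty(violated)
```

Adding such a term to the training loss pushes the learned solution toward the entropy-admissible branch without affecting regions where the condition already holds.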
The general adaptability of domain decomposition in physics-informed neural networks is further exemplified by the APINN [28] proposed by Hu et al., who provide a variable decomposition technique for fine-tuning subdomain boundaries. APINN utilizes a gating network to mimic XPINN and provide soft domain decomposition. The hp-VPINN [50] constitutes an additional method for domain decomposition: a variational method for neural network approximation via high-order polynomial projections, enabling efficient domain decomposition in physics-informed neural networks. Finally, Xu et al. [29] provide an MPI implementation for physics-constrained learning (PCL) [30,31], employing the halo domain decomposition method. PCL carefully couples artificial neural networks and finite element models. The constitutive law in the finite element model is approximated using the neural learning machine.
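The interface conditions central to these decompositions can be sketched as a continuity penalty tying subdomain models together at shared points. The quadratic "models" below are stand-ins for sub-PINNs, and the function names and penalty form are illustrative assumptions rather than any surveyed implementation.

```python
import numpy as np

# Hedged sketch of the interface idea behind XPINN-style decomposition:
# each subdomain gets its own model, and a penalty ties the subdomain
# solutions together at shared interface points.
def u_left(x):   # placeholder model responsible for x in [0, 0.5]
    return x ** 2

def u_right(x):  # placeholder model responsible for x in [0.5, 1]
    return 0.5 * x - 0.1

def interface_loss(x_iface):
    """Squared mismatch of the subdomain solutions on the interface."""
    return np.mean((u_left(x_iface) - u_right(x_iface)) ** 2)

x_iface = np.array([0.5])            # shared interface point
mismatch = interface_loss(x_iface)   # added to each sub-PINN's training loss
```

During training, this term (often alongside flux-continuity and residual-continuity analogues) drives the independently parallelized sub-models toward a single coherent solution.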

Neural Operator Learning
One example of advancement in physics-data-driven machine learning worth highlighting for its consequent physics-model-driven adaptation is the neural operator. The ubiquity of the operator approach toward learning solutions to complex physical problems has led to incorporating physics-model-based information into the operator learning process. The neural operator approximates latent operators which govern a mapping between input parameters and solutions. As noted by Lu et al. [51], for any point y in the domain of G(u), the output G(u)(y) is a real number. Hence, the network receives two component inputs, u and y, and outputs G(u)(y). For operator learning, sufficiently numerous discrete sensor values of the underlying function are utilized for training. The neural operator abstracts complex multiphysics modeling problems as control function maps. Importantly, the introduction of the neural operator and the functional learning paradigm drastically changes how model-driven inductive biases can be used in physics-informed machine learning. Li et al. [32] present a framework for the use of Fourier transform layers in neural operator networks, applying the method to many examples, including Navier-Stokes, Burgers, and Darcy flow problems. The Fourier neural operator performs zero-shot super-resolution. Kovachki et al. [33] conducted a study on the general applicability of Fourier neural operators to inverse problems governed by highly nonlinear differential equations. Another neural operator learning machine, the DeepONet, has received recent attention. Deng et al. [34] studied the convergence of operator learning by branch and trunk networks in the context of the Burgers equation and advection-diffusion problems. Several important theorems regarding the convergence of functional learning machines are also included.
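The branch-trunk structure can be sketched in a few lines: a branch net encodes u through its values at m fixed sensors, a trunk net encodes the query point y, and G(u)(y) is their inner product. Single tanh layers with random weights stand in for the trained sub-networks here; all names and sizes are illustrative assumptions.

```python
import numpy as np

# Minimal DeepONet-style forward pass, sketched with placeholder weights.
rng = np.random.default_rng(1)
m, p = 20, 8                               # number of sensors, latent width
B = rng.normal(size=(m, p))                # branch weights (hypothetical)
T = rng.normal(size=(1, p))                # trunk weights (hypothetical)

def G(u_sensors, y):
    """Approximate operator output G(u)(y) as branch(u) . trunk(y)."""
    branch = np.tanh(u_sensors @ B)        # encode the sampled function u
    trunk = np.tanh(np.atleast_2d(y) @ T)  # encode the query location y
    return (branch @ trunk.T).item()       # scalar G(u)(y)

sensors = np.linspace(0.0, 1.0, m)
u_sensors = np.sin(2 * np.pi * sensors)    # input function u at the sensors
out = G(u_sensors, 0.3)
```

Training then fits the branch and trunk weights so that this inner product matches operator outputs across many (u, y) pairs.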
Most importantly, the neural operator learning paradigm exemplifies a predisposition to the introduction of learning, inductive, or observational biases with model and data drivers.

Physics-Informed Neural Operators
Neural operators are included in our discussion of physics-informed machine learning for their obvious potential as physics-model-driven learning machines, regardless of whether the methods discussed previously have distinctly implemented drivers of physics-informed machine learning. Indeed, advancements in neural operators have already led to the use of physics-informed neural operators (PINOs). Li et al. [35] employed Fourier neural operator layers alongside physics-informed learning with observational biases. Learning bias is introduced via physics-informed residuals computed by automatic differentiation via autograd, function-wise differentiation, or Fourier continuation. The physics-informed neural operator is applied to a wide range of examples including Burgers, Darcy, and Kolmogorov flow problems. Wang et al. [36] provide the framework for physics-informed DeepONets, applying it to a Burgers transport problem and a two-dimensional eikonal equation, among other PDE models. A physics-model-informed learning bias is introduced by augmenting loss calculations by merging latent representations of solutions. Toward its general applicability, the DeepONet does not specify the architecture of its constituent branch and trunk networks, affording the use of many learning architectures.
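As a sketch of the Fourier route to residual computation on a periodic domain, derivatives can be obtained by multiplying Fourier coefficients by ik and transforming back. This is a generic spectral derivative for illustration, not the exact implementation of [35].

```python
import numpy as np

# Hedged sketch of Fourier-based differentiation of the kind usable for
# physics residuals on a periodic domain: differentiate in Fourier space.
def spectral_derivative(u, L=2 * np.pi):
    """du/dx of periodic samples u on a uniform grid of total length L."""
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
du = spectral_derivative(np.sin(x))              # should match cos(x)
err = np.max(np.abs(du - np.cos(x)))
```

For smooth periodic fields, the spectral derivative is accurate to near machine precision, which keeps residual-based loss terms from being polluted by differentiation error.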
Regularization mechanisms can also force functional learning toward a desired partial differential equation formulation. Goswami et al. [37] have given a variational physics-informed DeepONet that can be applied to quasibrittle materials modeling. Two well-studied fracture models are used to benchmark the variational approach. By extension of the underlying models, learning bias is introduced into the functional learning framework in the loss calculations. Additionally, Schiassi et al. [38] propose an extreme learning machine (X-TFC) approach for learning functionals, employing physics-model-informed learning bias for several optimal control problems by imposing initial and boundary constraints in the extreme learning machine algorithm with constrained expressions.
Physics-informed neural networks have become widely applied in research areas where physics-model- and physics-data-driven information is available. Consequently, an all-encompassing discussion of applications is difficult. Instead, the remaining focus of the discussion will be on work that has advanced the theory of machine learning via physics-informed neural networks and on discussions of the limitations facing physics-informed machine learning.

Learning Processes
Some work has been done on advancing the specific mechanisms of physics-informed learning machines. As mentioned previously, De Ryck et al. [49] have formulated error estimates for PINNs approximating the Navier-Stokes equations. Wu et al. [52] have proposed an adaptive method for the formulation and calculation of residuals, reducing the number of residual points required. Jagtap et al. [39] provided an adaptive activation function method for improving accuracy and reducing expense, applying a PINN to the Burgers equation and deep neural networks to MNIST and CIFAR-10, among other examples. In another work, Jagtap et al. describe a technique for locally adaptive activation functions. Work has also been conducted on improving the scope of model types to which PINNs are applicable. Several complexity-reduction methods have been proposed to manage particularly stiff underlying equations that model chemical kinetics [40,41]. Additionally, PINNs have been adapted to serve fractional expressions of differential equations with a Caputo-Hadamard augmentation [5,42]. Jagtap et al. [43] have also proposed models that train on multifidelity data from observation and simulation, applied to the Serre-Green-Naghdi equations. Finally, novel types of learning machines have been introduced to the physics-informed paradigm. McClenny et al. [44] and Rodriguez et al. [45] have proposed attention-based mechanisms for physics-informed learning, and, in multiple papers, Schiassi et al. have employed the theory of functional connections to ease the computational expense of working with complex, constrained PDEs [38,46-48].
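The adaptive activation idea can be sketched as a trainable slope parameter a rescaling the activation input, in the spirit of Jagtap et al. [39]; the fixed scale n, the choice of tanh, and the omission of the training loop are simplifying assumptions.

```python
import numpy as np

# Hedged sketch of an adaptive activation: a trainable slope parameter a
# rescales the input as tanh(n * a * x), with n a fixed scale factor; a is
# optimized jointly with the network weights (training loop omitted here).
def adaptive_tanh(x, a, n=10):
    """tanh activation with trainable slope a and fixed scale n."""
    return np.tanh(n * a * x)

x = np.linspace(-1.0, 1.0, 5)
flat = adaptive_tanh(x, a=0.01)    # small a: nearly linear response
sharp = adaptive_tanh(x, a=1.0)    # large a: strongly saturated response
```

Letting the optimizer choose a effectively tunes the activation's frequency response per layer or per neuron, which is the reported source of the accuracy and convergence gains.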

Limitations
As with other multiphysics and multiscale modeling, traditional physics-informed learning machines are often plagued by high dimensionality and model complexity. Models demanding high-order differentiation incur high computational expense and make optimization difficult. Spectral bias toward certain solution frequencies can force the model to inaccurate equilibria. In complex models, the residual calculations required for training can still be prohibitively costly. The addition of Fourier features and other mathematical innovations has begun to address the frequency bias issue.
Wang et al. [53] studied the limitations of physics-informed neural networks using neural tangent kernel (NTK) theory and showed that the function learned by PINNs is biased toward the dominant eigendirections of the data. They proposed a new architecture with coordinate embedding layers that leads to robust and accurate estimation of the target function. This architecture showed excellent performance on wave propagation and reaction-diffusion dynamics, where regular PINNs often fail. In another paper, Wang et al. [54] examine PINN training failures and propose a neural-tangent-kernel-guided optimization method for addressing convergence rate issues.
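A common coordinate embedding of the kind discussed above is the random Fourier feature map: low-dimensional inputs are lifted to [cos(2πBx), sin(2πBx)] before entering the network, so that higher input frequencies become learnable and the spectral bias is mitigated. The sketch below assumes a 1-D input, 16 features, and a bandwidth sigma = 5.0; all of these are illustrative choices rather than values from the cited papers.

```python
import numpy as np

def fourier_features(x, B):
    """Map coordinates x of shape (N, d) to (N, 2m) Fourier features.

    B has shape (m, d); its entries set which input frequencies the
    embedding exposes to the downstream network.
    """
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

rng = np.random.default_rng(0)
sigma = 5.0                                 # bandwidth: larger favors higher frequencies
B = sigma * rng.standard_normal((16, 1))    # m = 16 random frequencies for 1-D input
x = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
z = fourier_features(x, B)                  # z feeds the first dense layer of the PINN
```

The bandwidth sigma acts as a tuning knob: too small and the embedding changes little, too large and the network overfits high-frequency noise.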
Another important limitation on physics-informed machine learning is data acquisition and benchmarking. In many problems where the physics-informed machine learning architecture is applicable, the right data are simply not available, and much work has gone into models capable of learning general solutions from sparse and incomplete data. Benchmarking for physics-informed machine learning is difficult in general, but comparison with traditional methods and the development of baseline tools are beginning to address the concern. Mishra et al. [55] provide a robust justification for the use of physics-informed neural networks in data assimilation and unique continuation inverse problems; estimates on the PINN generalization error are given via conditional stability estimates. Finally, Aditi et al. [56] explore possible failure modes of physics-informed neural networks. The authors conclude that the generic formulation of physics-informed neural networks, which utilizes soft regularization, is susceptible to the burden of ill-posed problems. They note that the loss landscape of complex PINNs can be difficult to optimize and introduce regularization techniques to ease training.
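The soft-regularization formulation whose failure modes are examined above can be made concrete with a toy sketch: the governing equation enters the objective as a weighted penalty on the residual rather than as a hard constraint. The toy problem u'(t) = -u(t) with u(0) = 1, the finite-difference surrogate for the derivative, and the weight lam are all illustrative assumptions.

```python
import numpy as np

def composite_loss(u, t, lam=1.0):
    """Initial-condition term plus weighted PDE-residual term for u' = -u.

    Sketch of a soft-constraint PINN objective: the physics enters only as
    a penalty, so a poorly chosen `lam` or an ill-posed problem can leave
    the optimizer stuck in a trivial minimum.
    """
    du = np.gradient(u, t)               # finite-difference surrogate for u'
    residual = du + u                    # residual r(t) = u' + u
    loss_res = np.mean(residual ** 2)    # physics penalty (soft constraint)
    loss_ic = (u[0] - 1.0) ** 2          # initial-condition penalty
    return loss_ic + lam * loss_res

t = np.linspace(0.0, 1.0, 200)
exact = np.exp(-t)                       # true solution: near-zero loss
wrong = np.ones_like(t)                  # constant guess: violates the residual
```

Comparing composite_loss(exact, t) with composite_loss(wrong, t) shows how the penalty separates candidate solutions; the failure modes arise when the two loss terms compete on very different scales.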

Conclusions
A well-defined taxonomy can provide a guide for first-time researchers entering the field of physics-informed machine learning and serve as a tool for seasoned researchers to sift through large volumes of new work while providing insight into novel use cases of the physics-informed learning paradigm. Recent advancements in techniques for domain decomposition, and in methods for addressing particularly ill-posed models that are difficult to study traditionally or with learning machines, are important to the development of physics-informed machine learning. Work on improving the training regimen of physics-informed learning machines must continue as its application broadens. Decreasing the burden of complex loss calculations and creating tailored optimization algorithms will continue to be of paramount importance moving forward.
Future work in this field can surely benefit from a taxonomic structure, whether for assisting in building physics-informed models or for providing a framework by which they can be studied. Furthermore, as future work pushes the boundary of PIML use cases, the taxonomy must remain adaptive. The current framework is not, and need not be, fully comprehensive. Further exploration of gaps in the taxonomy will catalyze fruitful exploration of the applicability of physics-informed machine learning. This applicability is limited only to systems with some well-understood prior physical knowledge, which is not much of a limitation considering the breadth of well-studied physical systems. Most crucially, future explorations of physics-informed machine learning are primarily limited by the number of researchers and students who understand its robust applicability. This is the primary motivation for providing a framework for taxonomic surveys of physics-informed machine learning.