Abstract
This work investigates the use of physics-informed neural networks (PINNs) for solving representative classes of differential and integro-differential equations, including the Burgers, Poisson, and Volterra equations. The examples presented are chosen to address both symmetric and asymmetric domains. PINNs integrate prior physical knowledge with the approximation capabilities of neural networks, allowing the modeling of physical phenomena without explicit domain discretization. In addition to evaluating accuracy against analytical solutions (where available) and established numerical methods, the study systematically examines the impact of key hyperparameters—such as the number of hidden layers, neurons per layer, and training points—on solution quality and stability. The impact of a symmetric domain on solution speed is also analyzed. The experimental results highlight the strengths and limitations of PINNs and provide practical guidelines for their effective application as an alternative or complement to traditional computational approaches.
1. Introduction
The increase in computational power, the growing importance of computer simulations (e.g., digital twins), and the development of soft computing and artificial intelligence systems are driving the creation of various new computational methods and approaches. A fundamental challenge in computational methods is solving initial-boundary value problems (e.g., differential equations, integro-differential equations, and their systems). One of the important and relatively recent approaches to solving such problems involves neural networks, specifically physics-informed neural networks (PINNs). This approach incorporates knowledge of the governing equations, initial and boundary conditions, and possibly additional information directly into the neural network. The network then uses this information to train the model. Once the model is trained, the values of the unknown functions can be obtained directly at specified points in the domain. Some of the earliest works describing PINNs can be seen in the articles [1,2,3], which focus on the structure of such networks and their application to solving forward and inverse problems.
An interesting work by Cuomo et al. [4] presents the structure of PINN-type networks along with various variants. The authors provide a comprehensive review of such networks. In [5], the application of PINNs in solving a variational calculus problem is discussed. The neural network-based approach proved effective for the problem considered. Additionally, the study compares PINNs with the Differential Transform Method (DTM). The article [6] focuses on the presentation of physics-guided neural networks (PgNNs), physics-informed neural networks (PiNNs), and physics-encoded neural networks (PeNNs) in the context of fluid and solid mechanics. The conducted experiments demonstrate that the proper use of PINNs can be an effective tool in numerical simulations. In [7], PINNs were used to solve a traffic state estimation problem. The authors showed that applying PINNs to the LWR physical traffic flow model is effective, as evidenced by the experimental results. More information on potential improvements and different variants of PINN networks—such as self-adaptive loss balancing, auxiliary PINNs, adaptive collocation point movement, and adaptive loss weighting—can be found in references [8,9,10,11]. The work in [12] proposes an anti-derivatives approximator, offering a new architectural perspective to enhance the approximation of derivatives within PINNs. Self-Adaptive PINNs represent a recent paradigm for training physics-informed neural networks, where the weights of different loss components are treated as trainable parameters. This approach allows the network to dynamically balance competing objectives during optimization, improving stability and reducing the need for manual hyperparameter tuning. Several recent works have demonstrated the effectiveness of SA-PINNs in enhancing convergence and accuracy [13]. More information regarding PINNs and their applications can be found in [14,15,16,17,18,19].
In this study, we focus on three representative equations: the Poisson equation, Burgers’ equation, and the Volterra integro-differential equation. While simplified, each of these test cases reflects important classes of real-world phenomena. The Poisson equation underlies models of electrostatics, incompressible fluid flow, and steady-state heat transfer. The Burgers’ equation, often regarded as a prototype for the Navier–Stokes equations, is used to study nonlinear wave propagation, turbulence, and shock formation. Volterra-type integro-differential equations naturally arise in viscoelastic materials, population dynamics, and systems with memory effects. By studying these canonical problems, we can systematically evaluate the strengths and limitations of PINNs in controlled settings, while maintaining direct relevance to practical applications. Section 2 provides an overview of PINN architecture. It discusses the loss function, possible network architectures, and how such networks operate. Section 3 is dedicated to numerical experiments and demonstrates the effectiveness of PINNs on three selected examples. The focus is placed on the impact of hyperparameters on method performance, and comparisons are also made with classical numerical methods and exact solutions. The tests indicate that PINNs can be an effective tool for solving various initial-boundary value problems. Finally, Section 4 presents the conclusions.
2. Overview of Physics-Informed Neural Networks
Physics-informed neural networks (PINNs) represent a significant advancement in the field of neural networks, offering a completely new approach by integrating knowledge of physical laws into the training of deep learning models. This is achieved by embedding the governing physical laws, often expressed as differential equations, directly into the loss functions of the neural networks. This integration guides the learning process, encouraging the network to find solutions that not only fit any available data but also adhere to the fundamental physical principles governing the system. PINNs are also sometimes referred to as Theory-Trained Neural Networks (TTNs), emphasizing the incorporation of theoretical knowledge into the learning process. PINNs can approximate solutions to forward and inverse PDE problems without the need for a discretized mesh, which is a common requirement in traditional numerical methods such as the Finite Element or Finite Difference methods. By incorporating physical constraints, PINNs can generalize well even with limited or imperfect data [20]. This is the reason PINNs are considered a powerful tool for problems where data is sparse but the physics is well understood, such as fluid dynamics, material science, and inverse modeling. This capability is particularly valuable in situations where obtaining complete or high-quality data is challenging. PINNs have found extensive applications across diverse scientific and engineering disciplines such as computational fluid dynamics, heat transfer, structural mechanics, and geophysics.
2.1. Structure of the Physics-Informed Neural Networks
The core of the PINN architecture is typically a standard feed-forward neural network. Architectures such as recurrent neural networks or convolutional neural networks can also be used, depending on the problem. The core structure of a PINN can be broken down into two primary components.
2.1.1. Neural Network Architecture
PINNs utilize a neural network, most commonly a feedforward neural network or a multilayer perceptron, as a universal function approximator. This network takes spatio-temporal coordinates $(x, t)$ or other relevant input parameters as input. The output is the predicted solution to the differential equations describing the physical system. Mathematically, the neural network can be represented as
$$\hat{u}(x, t) = \mathcal{N}(x, t; \theta),$$
where $\hat{u}$ is the predicted solution, $(x, t)$ are the input coordinates, and $\theta$ denotes the parameters of the neural network. The universal approximation theorem ensures that, with sufficient depth and width, the network can approximate any continuous function to arbitrary accuracy [21].
2.1.2. Physics-Informed Loss Function
The feature that sets PINNs apart from other neural networks is the incorporation of physical laws into the loss function, which guides the training process. The total loss consists of two main components, $\mathcal{L}_{\text{data}}$ and $\mathcal{L}_{\text{physics}}$. The component $\mathcal{L}_{\text{data}}$ enforces agreement with observed or boundary-condition data, while $\mathcal{L}_{\text{physics}}$ ensures that the solution satisfies the governing differential equations.
The term $\mathcal{L}_{\text{data}}$ ensures that the neural network’s predictions match available experimental or simulation data. For example, if we have data points $\{(x_i, u_i)\}_{i=1}^{N_d}$, the data loss can be defined by the equation
$$\mathcal{L}_{\text{data}} = \frac{1}{N_d} \sum_{i=1}^{N_d} \left| \hat{u}(x_i; \theta) - u_i \right|^2.$$
The term $\mathcal{L}_{\text{physics}}$ encodes the governing physical laws, typically represented as PDEs. Consider a general PDE of the form
$$\mathcal{D}[u](x) = 0, \quad x \in \Omega,$$
where $\mathcal{D}$ is a differential operator. The physics loss, $\mathcal{L}_{\text{physics}}$, is defined as follows:
$$\mathcal{L}_{\text{physics}} = \frac{1}{N_c} \sum_{j=1}^{N_c} \left| \mathcal{D}[\hat{u}](x_j) \right|^2,$$
where $\{x_j\}_{j=1}^{N_c}$ are collocation points sampled from the domain. The derivatives of $\hat{u}$ required for $\mathcal{L}_{\text{physics}}$ are computed using automatic differentiation, a key feature of PINNs [22].
The mathematical formulation of the physics loss often involves the Mean Squared Error (MSE) of the PDE residual, calculated at a set of collocation points within the problem domain. This MSE provides a quantitative measure of how well the neural network’s predicted solution satisfies the governing physical laws.
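To make the two loss components concrete, the following minimal PyTorch sketch assembles a data loss and a physics loss for a toy 1D model problem $u''(x) = f(x)$; the network architecture, the source term f, and the point sets are illustrative assumptions rather than the configuration used in the experiments of Section 3.

```python
import torch
import torch.nn as nn

# Small fully connected network u_hat(x; theta); the architecture is illustrative.
net = nn.Sequential(nn.Linear(1, 20), nn.Tanh(),
                    nn.Linear(20, 20), nn.Tanh(),
                    nn.Linear(20, 1))

def f(x):  # assumed source term of the model problem u'' = f
    return -torch.pi**2 * torch.sin(torch.pi * x)

# Collocation points for the physics loss and (synthetic) points for the data loss.
x_c = torch.rand(200, 1, requires_grad=True)   # interior collocation points
x_d = torch.tensor([[0.0], [1.0]])             # e.g., boundary "data" points
u_d = torch.zeros_like(x_d)                    # observed/boundary values

u_c = net(x_c)
# First and second derivatives of the network output via automatic differentiation.
du = torch.autograd.grad(u_c, x_c, torch.ones_like(u_c), create_graph=True)[0]
d2u = torch.autograd.grad(du, x_c, torch.ones_like(du), create_graph=True)[0]

loss_physics = torch.mean((d2u - f(x_c)) ** 2)   # MSE of the PDE residual
loss_data = torch.mean((net(x_d) - u_d) ** 2)    # MSE against data/boundary values
loss = loss_data + loss_physics                  # total loss to be minimized
```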
2.2. Automatic Differentiation
Automatic differentiation is an important technique in PINNs that enables the network to learn solutions to differential equations. The applications of automatic differentiation will be presented in Section 3. PINNs are designed to solve differential equations by leveraging deep neural networks: the network takes the time variable t and the spatial coordinates x as inputs and outputs an approximation $\hat{u}(x, t)$ of the solution. To check how well this approximate solution satisfies a given PDE, we need to compute its partial derivatives with respect to the input variables. Automatic differentiation makes these derivatives analytically accurate at any given collocation point within the domain. These derivatives are then plugged into the PDE to form the residual term. For example, for a PDE of the form $u_t + \mathcal{D}[u] = 0$, the residual would be $r = \hat{u}_t + \mathcal{D}[\hat{u}]$.
When a neural network is trained by minimizing the composite loss function, automatic differentiation is also used to compute the gradients of the total loss function with respect to the network parameters $\theta$. This is the standard backpropagation algorithm, which is a special case of reverse-mode automatic differentiation. These gradients guide the update of the network’s weights and biases to reduce the loss. Notable features of automatic differentiation include high accuracy, computational efficiency, ease of implementation, and the ability to handle complex geometries and high dimensions.
2.3. How PINNs Work
PINNs function by utilizing a neural network, often a deep learning model, to approximate the solution of a given differential equation. The core of the PINN approach is the incorporation of the differential equation itself into the network’s training process as an additional term within the loss function [23]. This is made possible through the use of automatic differentiation, a powerful technique that allows for the efficient and accurate computation of the derivatives of the neural network’s output with respect to its input variables such as space and time. These computed derivatives enable the evaluation of the residual of the differential equation. During training, the neural network’s parameters, such as weights and biases, are adjusted by an optimization algorithm to minimize a loss function. This loss function typically comprises the error in satisfying the differential equation (the physics loss) and, optionally, terms that quantify the error between the network’s predictions and any available labeled data, as well as the error in satisfying the specified boundary and initial conditions [10]. By minimizing this combined loss, the PINN is guided towards learning a solution that is not only consistent with any provided data but also adheres to the known physical laws encoded in the differential equation.
A distinctive feature of PINNs is the integration of the governing equations into the training loop. Through automatic differentiation, the derivatives of $\hat{u}$ with respect to x (and t) are efficiently computed. These derivatives are then used to formulate the residual of the governing equation, $\mathcal{D}[\hat{u}] + \mathcal{I}[\hat{u}] = 0$, where $\mathcal{D}$ and $\mathcal{I}$ denote differential and integral operators applied to $\hat{u}$, respectively. The loss function is constructed to penalize deviations from the equation’s residual and from the initial and boundary conditions:
$$\mathcal{L}(\theta) = \lambda_{\text{PDE}}\, \mathcal{L}_{\text{PDE}} + \lambda_{\text{BC}}\, \mathcal{L}_{\text{BC/IC}} + \lambda_{\text{data}}\, \mathcal{L}_{\text{data}},$$
where each term represents the MSE at collocation, boundary, or data points. The weights $\lambda_{\text{PDE}}$, $\lambda_{\text{BC}}$, and $\lambda_{\text{data}}$ control the contribution of each loss component.
The flexibility of PINNs allows them to solve both forward and inverse problems. In forward problems, the objective is to compute the solution u given known coefficients and source terms. In inverse problems, unknown parameters in the PDE are treated as trainable variables and inferred simultaneously with the solution.
In Figure 1, we illustrate the architecture and working mechanism of PINNs. The process begins with a neural network $\mathcal{N}(x; \theta)$, where x represents the input variables and $\theta$ denotes the trainable parameters of the network. The network—composed of multiple hidden layers with activation functions denoted by $\sigma$—outputs an approximation $\hat{u}$ of the solution to a given differential equation. This predicted output is then used to compute the residuals of the governing equations within the domain. These include differential operators $\mathcal{D}$ and possibly integral operators $\mathcal{I}$, which are evaluated through automatic differentiation. The residuals are substituted into a composite physics expression $\mathcal{F}$, representing the original ODE, PDE, or IDE.
Figure 1.
Schematic diagram of a physics-informed neural network.
Simultaneously, the network output is also tested against the initial and boundary conditions via constraint functions $\mathcal{B}_1$, $\mathcal{B}_2$, etc., forming the condition residual. Both the domain-based residual and the boundary-based residual are combined into a total loss function. This loss function, typically composed of multiple weighted terms, quantifies how well the neural network output adheres to the physical laws and constraints. It is then minimized using gradient-based optimization to update the neural network parameters $\theta$, guiding the network toward a solution that satisfies the physical system.
2.4. Theoretical Properties and Analysis
2.4.1. Some Popular Activation Functions
A crucial component of neural networks is the activation function, which introduces nonlinearity into the model. This nonlinearity is essential because without it, a neural network, regardless of its depth, would essentially function as a single linear layer, severely limiting its ability to learn complex patterns and relationships present in real-world data. Various types of activation functions are commonly used in neural networks, each with its own characteristics and suitability for different tasks. Among the most popular are the Rectified Linear Unit (ReLU) and its variants, such as Leaky ReLU, Parametric ReLU (PReLU), and the Exponential Linear Unit (ELU). Other notable activation functions are Sigmoid, tanh, Swish, and Softmax.
2.4.2. Error and Convergence Analysis
To understand the theory behind the working principles of PINNs, we need to study the approximation capabilities. In this section, we will discuss key theoretical aspects of PINNs, including convergence and error analysis.
Convergence refers to the process where the neural network progressively approaches a solution that satisfies both the physics constraints and any given initial and boundary conditions. For linear PDEs, Shin et al. [24] proved that, as the number of training points grows, any sequence of PINN minimizers converges to the true solution. In particular, for second-order linear elliptic and parabolic PDEs, the PINN minimizer trained with n collocation points converges strongly to the unique PDE solution; if the initial/boundary conditions are enforced at all collocation points, convergence is maintained in an even stronger norm.
On the optimization side, overfitting can occur if PINNs fit the data points too precisely without capturing the global physics. Doumèche et al. [25] showed that regularization is needed to prevent overfitting and to make the neural network’s learning more reliable and consistent. With a standard (ridge) penalty on the weights, the trained PINN risk converges to the minimum possible risk in the network class as the amount of data increases. We describe these results below.
Let denote the class of ridge functions, where n represents the input dimension of the ridge function, denotes the number of extracted features, refers to the number of ridge components, and is the parametric function. Then
where is the ridge hyperparameter. We denote by a minimizing sequence of this risk, i.e.,
Theorem 1
(after Doumèche et al. [25]). Consider the ridge PINN problem (1) over the class , where . Assume that the condition function h is Lipschitz and that the involved differential operators are polynomial. Assume, in addition, that the ridge parameter is of the form
Then, almost surely,
Theorem 2
(after Doumèche et al. [25]). [The ridge PINN is asymptotically unbiased] Under the same assumptions as in Theorem 1, one has, almost surely,
The fundamental results in [24] show that, if a PINN is trained at increasingly many collocation points, its error converges strongly; in the limit, the PINN solution matches the exact solution in the uniform norm. In the work of Yoo and Lee [26], a stable error bound for 1D linear elliptic boundary-value problems has been proved: the error norm of the PINN approximation is bounded by the PINN loss, independently of the differential equation coefficients.
De Ryck and Mishra [27] carried out a detailed analysis for linear and semi-linear parabolic PDEs in high dimensions. They show that there exist PINNs that achieve arbitrary accuracy in approximating the solution, with the network size scaling only polynomially in the dimension d and in the reciprocal of the accuracy. The construction of their work is summarized in the following theorem.
Theorem 3
(after De Ryck and Mishra [27]). Let and let . For every , let , with bounded first partial derivatives, let be a probability space, and let be a function that satisfies
Moreover, assume that for every , there exist tanh neural networks and with, respectively, and neurons and weights that grow as and such that
Then there exist constants C such that, for every and , there exists a constant and a tanh neural network with at most neurons and weights that grow at most as such that
Moreover, is defined as
where is the solution of the stochastic differential equation
and is independent of d.
They further prove quantitative generalization bounds: if the PINN training loss is driven below a prescribed tolerance, then the solution error is of the same order. They present this result precisely in the following theorem.
Theorem 4
(after De Ryck and Mishra [27]). Let u be a (classical) solution to a linear Kolmogorov equation
where $\mu$ and $\sigma$ are affine functions, $\nabla$ denotes the gradient and $H$ the Hessian. Let $u_\theta$ be a PINN and let the residuals be defined by
Then
where
2.5. Training and Optimization in Neural Networks
Training a neural network involves adjusting its parameters to minimize the loss function. This minimization is performed by iteratively adjusting the internal parameters, i.e., the weights (W) and biases (b). One training iteration typically consists of four steps, which we briefly describe below.
- (i)
- Forward Pass: Input data is fed into the network and propagates through its layers. Each neuron in a layer receives inputs from the previous layer, applies a weighted sum and an activation function, and passes the output to the next layer. This process continues until an output is generated by the final layer.
- (ii)
- Loss Computation: The network’s output is compared to the true target values using a loss function. This function quantifies the error or discrepancy between the predicted output and the expected output. Common loss functions that measure prediction errors include the Mean Squared Error and Cross-Entropy. A higher loss value indicates a greater error.
- (iii)
- Backward Pass: The error calculated by the loss function is propagated backward through the network. This step computes the gradient of the loss function with respect to each weight and bias in the network by using the chain rule.
- (iv)
- Parameter Update: Using the gradients computed during backpropagation, an optimizer algorithm adjusts the weights and biases of the network. The goal is to update the parameters in a way that reduces the loss function. The size of the steps taken during this adjustment is controlled by the learning rate.
The training process involves performing many such iterations, often grouped into epochs, where one epoch represents a full pass through the entire training dataset. The cycle repeats for each batch of training data. The efficiency and overall success of the neural network training process are profoundly influenced by the selection of the optimization algorithm and the careful tuning of hyperparameters. Modern optimizers like Adam [28] adapt learning rates during training, which often leads to faster convergence and improved performance compared to traditional methods. Other optimization algorithms used in neural networks include RMSprop and Adagrad. Each optimizer has its own strengths and weaknesses, and the most suitable choice can depend on the specific network architecture, the input data, and the nature of the problem being solved. Experimenting with different optimizers is common practice to find the one that yields the best results for a given task.
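The four steps can be summarized in a few lines of code. The PyTorch sketch below uses a toy regression target as a stand-in for the composite PINN loss; the network size, learning rate, and iteration count are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy setup: a small network and a data-fitting loss (a placeholder for a PINN loss).
net = nn.Sequential(nn.Linear(1, 20), nn.Tanh(), nn.Linear(20, 1))
inputs = torch.linspace(0.0, 1.0, 100).reshape(-1, 1)
targets = torch.sin(torch.pi * inputs)
loss_fn = nn.MSELoss()

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)   # Adam with a fixed learning rate

for step in range(5000):
    optimizer.zero_grad()                 # reset accumulated gradients
    prediction = net(inputs)              # (i) forward pass
    loss = loss_fn(prediction, targets)   # (ii) loss computation
    loss.backward()                       # (iii) backward pass: reverse-mode autodiff (backpropagation)
    optimizer.step()                      # (iv) parameter update guided by the gradients
```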
3. Application of PINNs to Selected Equations
This section is dedicated to the practical applications of physics-informed neural networks in the domain of ordinary differential equations, partial differential equations, and integro-differential equations. The manuscript considers the following equations: the Poisson equation, the Burgers’ equation, and the Volterra integro-differential equation. For clarity and to better justify the choice of test problems, we briefly outline the key differences between the Poisson Equation (2) and the Burgers’ Equation (3). The Poisson equation in the considered form is a linear, second-order elliptic equation depending only on the spatial variable, thus describing a stationary boundary-value problem. In contrast, the Burgers’ equation is an evolutionary equation, which for $\nu > 0$ has a parabolic character, involves the time derivative, and requires both an initial condition and boundary conditions. Unlike the linear diffusion operator in the Poisson equation, Burgers’ equation contains the nonlinear advection term $u\,u_x$, leading to much more complex phenomena such as the formation of steep gradients or shock waves in the limit $\nu \to 0$. These differences also result in distinct solution behaviors: for the Poisson equation, solutions are smooth (for smooth input data) and free of discontinuities, whereas the Burgers’ equation can generate boundary layers and high-gradient structures, requiring careful consideration when choosing numerical methods. In practice, this means that solutions of the Poisson equation can be relatively easily approximated using PINNs, while for the Burgers’ equation, the additional challenge lies in accurately capturing the temporal dynamics and the nonlinear advection term. This sometimes necessitates denser sampling in regions with steep gradients or appropriate weighting of the loss function components during the training of the network. To explicitly quantify the accuracy of our implementations, we directly compared the PINN solutions with the exact analytical solutions for each test case. This provides implementation-specific error bounds, expressed as mean and maximum errors, which are reported in the corresponding tables and figures.
We calculated the solutions and compared them using different hyperparameters in the constructed network. We also considered some traditional numerical methods for benchmarking. For the sake of completeness, our focus was on nonlinear problems.
3.1. Poisson Equation
As our first problem to be tested with physics-informed neural networks (PINNs), we consider a nonlinear yet simple one-dimensional Poisson equation. The Poisson equation arises in various fields such as electrostatics, fluid dynamics, and heat transfer [29,30]. Numerical and exact solutions of the Poisson equation have been obtained using different approaches; for example, R.W. Klopfenstein and C. Wu studied the Poisson equation for semiconductors doped with an ion-implanted profile using a mesh-based method of solution [31], while S. Bhardwaj et al. studied a solution of the Poisson equation using deep neural networks [32].
In one dimension (1D), the Poisson equation is simpler but still captures the essential features of the problem. We consider the equation of the form
with boundary conditions , and . The exact solution for this problem is given as . When this problem was solved using PINNs, quite satisfactory results were obtained. We tested different hyperparameters by changing the number of training points in the domain, the number of neurons, and the number of hidden layers.
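All experiments in this section were implemented with the DeepXDE library (see Section 4). As a hedged illustration of the workflow—not the exact script used for Equation (2)—the sketch below solves a stand-in 1D Poisson problem, $-u''(x) = 2$ on $(0, 1)$ with homogeneous Dirichlet conditions and exact solution $x(1 - x)$, using one of the configurations from Table 1 (two hidden layers of 20 neurons, tanh activation). The API names follow recent DeepXDE releases; older versions use slightly different module paths and the `epochs` keyword instead of `iterations`.

```python
import deepxde as dde

# Illustrative stand-in problem (not the paper's Equation (2)):
#   -u''(x) = 2 on (0, 1), u(0) = u(1) = 0, exact solution u(x) = x(1 - x).

def pde(x, u):
    d2u_dx2 = dde.grad.hessian(u, x)      # u'' via automatic differentiation
    return -d2u_dx2 - 2.0                  # PDE residual

def exact(x):
    return x * (1.0 - x)

geom = dde.geometry.Interval(0.0, 1.0)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)

data = dde.data.PDE(geom, pde, bc, num_domain=200, num_boundary=2,
                    solution=exact, num_test=100)
net = dde.nn.FNN([1] + [20] * 2 + [1], "tanh", "Glorot uniform")  # 2 hidden layers, 20 neurons

model = dde.Model(data, net)
model.compile("adam", lr=1e-3, metrics=["l2 relative error"])
losshistory, train_state = model.train(iterations=10000)
```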
The performance of the PINNs in solving the one-dimensional Poisson equation is summarized in Table 1. To compute the errors, a grid independent of the training points was used. The table reveals that the mean absolute error varies significantly with different hyperparameter combinations. Notably, the smallest error was achieved with higher neuron counts and intermediate layer depths, suggesting that a moderate network size can yield highly accurate results.
Table 1.
Performance of different hyperparameter combinations for Equation (2).
Figure 2 compares the PINNs solution with the exact solution, showcasing the network’s ability to closely match the theoretical result. The close alignment between the predicted and exact solutions confirms the robustness of PINNs for this class of problems. A plot of the absolute error as a function of x shows that the error remains small across the entire domain.
Figure 2.
Plots of the approximated and exact solution (a) and the absolute approximation error (b).
The train loss and test loss during the process of solving Equation (2) are shown in Figure 3. It can be seen that the training loss and test loss exhibit a strong correlation, with both decreasing sharply in the early steps before stabilizing. The minimal gap between training and test loss highlights the model’s balanced capacity.
Figure 3.
Train and test loss during solution of (2).
The influence of the number of hidden layers and neurons on the mean and maximum errors was also investigated. These errors were computed on a uniform test grid of dimension 100, independent of the training data. In all experiments, 200 collocation points and iterations were used. The last column in Table 2 (Params) refers to the total number of network parameters (weights and biases). The tests showed that the training time increases almost linearly with the number of parameters (correlation coefficient ). The lowest mean and maximum errors were obtained for moderately small architectures, e.g., (20 neurons, two hidden layers, parameters). These networks achieved errors on the order of with relatively short computation time (≈13 s). For very large architectures (e.g., with parameters), the results were significantly worse, with errors reaching the range of . This indicates optimization difficulties and instability of PINNs in this regime, likely due to the limited number of iterations (all tests used iterations). The best trade-offs between accuracy and training time are achieved by shallower and moderately wide networks, e.g., or . Based on the data in Table 2, Figure 4 and Figure 5 illustrate the mean error as a function of the number of network parameters, as well as the computation time as a function of the number of parameters.
Table 2.
Performance of different hyperparameter combinations for Poisson equation.
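The Params column can be reproduced directly from the layer sizes. The helper below counts the weights and biases of a fully connected network, assuming equal hidden-layer widths and a one-dimensional input (the spatial coordinate x) and output; the example configuration is illustrative.

```python
def mlp_param_count(n_inputs: int, n_neurons: int, n_hidden: int, n_outputs: int = 1) -> int:
    """Total number of weights and biases of a fully connected network."""
    sizes = [n_inputs] + [n_neurons] * n_hidden + [n_outputs]
    return sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))

# Example: 2 hidden layers with 20 neurons for the 1D Poisson problem (input: x).
print(mlp_param_count(1, 20, 2))   # -> 481
```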
Figure 4.
Mean error as a function of the number of network parameters (Poisson equation).
Figure 5.
Training time as a function of the number of network parameters (Poisson equation).
In Figure 6, the error distribution at the grid points is shown for four selected network architectures, while Figure 7 shows prediction from PINN for two hidden layers and 20 neurons.

Figure 6.
Error distribution obtained from the PINN for selected hyperparameters concerning the number of hidden layers and neurons (Poisson equation).
Figure 7.
Prediction from PINN for 2 hidden layers and 20 neurons (Poisson equation).
The results, summarized in Table 3, confirm the expected trend that the mean error decreases as the number of training points increases, thus providing numerical evidence for the theoretical convergence. We note, however, that simply increasing the number of training points is not sufficient by itself. To achieve stable convergence, it is also necessary to appropriately adjust the training procedure, in particular the number of iterations and the learning rate schedule of the Adam optimizer. Our experiments implement such a multi-stage training scheme, ensuring that the optimizer has the capacity to effectively use the additional information provided by more collocation points. In the cases of 5000 and 10,000 training points (compared to the other cases), the number of training iterations was significantly increased.
Table 3.
Mean and maximum error for different numbers of training points in the Poisson equation experiment.
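The multi-stage training scheme mentioned above can be realized in DeepXDE by recompiling the same model with progressively smaller Adam learning rates and continuing training, since recompilation preserves the current network weights. The stage lengths and learning rates below are illustrative, not the exact schedule used in our runs; `model` refers to a compiled dde.Model such as the one in the Poisson sketch above.

```python
# Stage 1: coarse training with a larger learning rate.
model.compile("adam", lr=1e-3)
model.train(iterations=10000)

# Stages 2 and 3: refine with smaller learning rates; the network weights are kept.
model.compile("adam", lr=1e-4)
model.train(iterations=20000)

model.compile("adam", lr=1e-5)
model.train(iterations=20000)
```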
Additional experiments on the Poisson equation were conducted using several commonly employed activation functions, including ReLU, Sigmoid, sin, Swish, tanh, and ELU. The results are summarized in Table 4. The comparison clearly shows that tanh yields the lowest errors among the tested functions, with Swish and Sigmoid also performing competitively. In contrast, ReLU and ELU result in significantly larger prediction errors, confirming that smoother nonlinearities are more suitable for this PDE problem. These results justify our choice of tanh as the primary activation function in this paper, while also highlighting the potential of Swish and Sigmoid as alternatives in related applications.
Table 4.
Mean and maximum error for different activation functions in the Poisson equation experiment.
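A sweep such as the one in Table 4 only requires changing the activation string when building the network. The snippet below sketches the idea using DeepXDE's built-in activation names (their availability may vary between DeepXDE versions); `data` and `exact` refer to the objects defined in the Poisson sketch above, and the error metrics are computed on an independent test grid.

```python
import numpy as np
import deepxde as dde

results = {}
x_test = np.linspace(0.0, 1.0, 101)[:, None]
for act in ["relu", "sigmoid", "sin", "swish", "tanh", "elu"]:
    net = dde.nn.FNN([1] + [20] * 2 + [1], act, "Glorot uniform")
    model = dde.Model(data, net)               # `data` from the Poisson sketch above
    model.compile("adam", lr=1e-3)
    model.train(iterations=10000)
    err = np.abs(model.predict(x_test) - exact(x_test))
    results[act] = (err.mean(), err.max())     # mean and maximum error on the test grid
```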
3.2. Burgers’ Equation
Burgers’ equation is a fundamental partial differential equation (PDE) that combines nonlinear advection and diffusion, serving as a simplified model for fluid dynamics, shock waves, and traffic flow [33]. Burgers’ equation is a one-dimensional analog of the Navier–Stokes equations, making it a valuable tool for studying such phenomena. It is given by the following:
$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}, \qquad (3)$$
with boundary conditions $u(a, t) = g_1(t)$, $u(b, t) = g_2(t)$ and initial condition $u(x, 0) = \varphi(x)$. In this case:
- u is the solution as a function of space and time, $u = u(x, t)$;
- $\nu$ is the viscosity coefficient, which controls the smoothness of the solution;
- $\varphi$, $g_1$, and $g_2$ are known functions.
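Within the PINN framework, the residual of Equation (3) is formed directly from the network output by automatic differentiation. A DeepXDE-style sketch is shown below; the viscosity value is a placeholder, and the network input is assumed to be ordered as (x, t).

```python
import deepxde as dde

nu = 0.1  # placeholder viscosity; the experiments use the value specified in the text

def burgers_residual(x, u):
    # x[:, 0:1] is the spatial coordinate, x[:, 1:2] is time.
    du_x = dde.grad.jacobian(u, x, i=0, j=0)
    du_t = dde.grad.jacobian(u, x, i=0, j=1)
    du_xx = dde.grad.hessian(u, x, i=0, j=0)
    return du_t + u * du_x - nu * du_xx      # residual of u_t + u u_x - nu u_xx = 0
```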
Burgers’ equation has been studied extensively in the mathematical literature; for example, S.-S. Xie et al. solved it using a reproducing kernel function [34], and B. Inan et al. solved it using implicit and fully implicit exponential finite difference methods [35]. An even more extensive overview of the literature on Burgers’ equation can be found in the work of M.P. Bonkile et al. [36].
In this work we consider Equation (3) for with the initial condition
and homogeneous boundary conditions
This equation was solved by S. Kutluay et al. in [37] using the explicit and exact-explicit finite difference methods with two different initial conditions. Below, we briefly describe these two methods and then compare their results with those obtained by the physics-informed neural networks.
3.2.1. Explicit Finite Difference Method
In this approach, the Burgers’ equation is discretized using a standard explicit scheme applied to the linear heat equation obtained via the Hopf–Cole transformation. The spatial domain is divided into N intervals with step size h, and the time domain is discretized with step size k. The finite difference approximation for the linear heat equation is given by the following:
$$\theta_i^{j+1} = (1 - 2r)\,\theta_i^j + r\left(\theta_{i+1}^j + \theta_{i-1}^j\right),$$
where $r = \nu k / h^2$ and $\theta_i^j$ approximates the solution at grid point $(x_i, t_j)$. The boundary conditions are handled separately for $i = 0$ and $i = N$. The stability condition for this method is $r = \nu k / h^2 \le 1/2$. Once the solution $\theta$ is computed, the Hopf–Cole transformation is applied to obtain the numerical solution of the Burgers’ equation:
$$u_i^j = -2\nu\,\frac{\theta_{i+1}^j - \theta_{i-1}^j}{2h\,\theta_i^j}.$$
This method is straightforward but requires careful attention to the stability constraint, especially for small values of the viscosity $\nu$.
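For reference, a compact NumPy sketch of this explicit scheme is given below. It assumes the classical initial condition u(x, 0) = sin(πx) on [0, 1] with homogeneous Dirichlet data for u (and a mirror-type boundary treatment for the transformed variable), which may differ in detail from the boundary handling used in [37]; the viscosity, grid size, and final time are illustrative.

```python
import numpy as np

nu, N, T = 0.1, 40, 0.5                 # viscosity, spatial intervals, final time (illustrative)
h = 1.0 / N
k = 0.4 * h**2 / nu                      # time step respecting the stability bound nu*k/h**2 <= 1/2
r = nu * k / h**2
x = np.linspace(0.0, 1.0, N + 1)

# Hopf-Cole transform of the assumed initial condition u(x, 0) = sin(pi x):
# theta(x, 0) = exp(-(1 - cos(pi x)) / (2 pi nu)).
theta = np.exp(-(1.0 - np.cos(np.pi * x)) / (2.0 * np.pi * nu))

t = 0.0
while t < T:
    new = theta.copy()
    new[1:-1] = (1 - 2 * r) * theta[1:-1] + r * (theta[2:] + theta[:-2])
    # Mirror (zero-flux) boundary treatment for theta, consistent with u = 0 at the ends.
    new[0] = (1 - 2 * r) * theta[0] + 2 * r * theta[1]
    new[-1] = (1 - 2 * r) * theta[-1] + 2 * r * theta[-2]
    theta, t = new, t + k

# Discrete Hopf-Cole back-transform: u = -2 nu theta_x / theta (central differences inside).
u = np.zeros_like(x)
u[1:-1] = -nu * (theta[2:] - theta[:-2]) / (h * theta[1:-1])
```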
3.2.2. Exact-Explicit Finite Difference Method
This approach derives an exact solution to the finite difference scheme itself, rather than discretizing the continuous equation. The method assumes a product solution of the form , separating spatial () and temporal () components. The spatial part yields the following:
while the temporal part satisfies the following:
The complete solution combines these through superposition:
The coefficients are determined from the initial condition using Fourier cosine series. Finally, the Hopf–Cole transformation converts this to the Burgers’ equation solution:
This method provides an exact solution to the discrete equations, converging to the Fourier series solution as the discretization parameters tend to zero.
3.2.3. Accuracy Comparison Between Classical Numerical Methods and PINNs
We solved Equation (3) using PINNs and then compared the results with those obtained from the explicit and exact-explicit methods, and with the exact solution [37]. The solutions are obtained using 200 training points and 10 hidden layers, each with five neurons, while the activation function is tanh. Table 5 shows a comparison of the numerical solutions obtained using the explicit method, the exact-explicit method, and PINNs with the exact solution at times and . We can see that all methods closely follow the exact solution, but there are small differences. The PINNs method gives values that are very close to the exact solution, especially at points where x is around .
Table 5.
Comparison of numerical solutions using explicit method, exact-explicit method, and PINNs with exact solution at and .
A comparison of exact solution and approximate solutions is given in Figure 8. Figure 8a plots these solutions visually. The PINNs, Explicit, and Exact-Explicit solutions all follow the exact solution curve very well. However, the small differences are more noticeable in Figure 8b, which shows the absolute error between each method and the exact solution. The Explicit method has the largest error near , while PINNs and Exact-Explicit methods have smaller and more consistent errors.
Figure 8.
Plots of the approximated and exact solution (a) and the absolute approximation error (b).
Figure 9 shows the training and testing loss of the PINNs model. Both the training and testing losses decrease quickly in the first few steps and continue to go down gradually. This suggests that the model is learning well and generalizing properly without overfitting.
Figure 9.
Train and test loss.
To further validate the performance of the physics-informed neural networks approach in solving Burgers’ equation, we investigated the evolution of the solution at a fixed spatial point . Table 6 presents a comparison between the PINNs solution and traditional numerical methods (explicit and exact-explicit schemes) against the exact solution for various time values. It is evident from the table that the PINNs approach provides results with high accuracy, closely matching the exact solution.
Table 6.
Comparison of the numerical solutions with exact solution at different times for .
As time progresses from to , the solution exhibits a smooth decay, which is characteristic of the dissipative nature of Burgers’ equation with viscosity . This behavior is clearly illustrated in Figure 10, where the PINNs prediction aligns tightly with both explicit schemes and the exact solution.
Figure 10.
Comparison of solutions for different values of t and .
As part of the tests, a comparison was also conducted with another numerical method described in [38]. This method depends on the mesh density (parameter N). Table 7 presents the values obtained with the referenced numerical method, the results from PINN, as well as the exact solutions. Figure 11 shows the error plots for the referenced method (for ) and for the PINN results at . The errors of the PINN solution are comparable to those reported in [38]. However, it can be observed that for , the errors obtained with PINN are smaller.
Table 7.
Comparison of the numerical solution (BDF-1) with the exact solution and PINN method at different space points for .
Figure 11.
Comparison of absolute errors of the BDF-1 method for different values of N with the PINN method as a function of the spatial coordinate x at .
Another test involved examining how the weights in the loss function assigned to the PDE residual and to the initial–boundary conditions affected the obtained results. For this purpose, a set of several weight combinations was assumed, and after training the model, the mean error computed on a grid was evaluated (see Table 8). In this test, the network architecture and hyperparameters were as follows: hidden layers: 5; neurons per layer: 10; Adam optimizer; collocation points inside the domain: 200; boundary/initial condition points: 128; number of iterations: 10,000. It is also assumed that .
Table 8.
The effect of loss function weights on the mean and maximum error of the results.
The weights assigned to individual components of the loss function have a moderate impact on the obtained errors. In particular, it can be clearly observed that when the weights for the initial–boundary conditions are several times larger than those for the PDE inside the domain, the resulting solution errors are significantly reduced.
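In DeepXDE, such a weighting is passed at compile time through the `loss_weights` argument, whose entries follow the order of the loss terms (the PDE residual first, then the boundary and initial conditions in the order they were supplied to the data object). The values below merely illustrate weighting the conditions more strongly than the PDE term; `data` denotes a dde.data.TimePDE object for the Burgers problem built with [bc, ic] as its conditions.

```python
import deepxde as dde

net = dde.nn.FNN([2] + [10] * 5 + [1], "tanh", "Glorot uniform")   # 5 hidden layers, 10 neurons
model = dde.Model(data, net)

# One weight per loss component: [PDE residual, boundary condition, initial condition].
model.compile("adam", lr=1e-3, loss_weights=[1, 5, 5])
model.train(iterations=10000)
```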
The effect of the distribution of collocation points on the mean and max errors in the domain, computed on a grid, was also examined (see Table 9). The impact of the number of collocation points inside domain on this error was also investigated (see Table 10). In these tests, equal weights were used, while the settings of the remaining parameters and the network architecture are the same as in the previous test.
Table 9.
Effect of collocation point distribution type on the mean and maximum error.
Table 10.
Effect of the number of collocation points on the mean and maximum error.
The differences in the obtained errors when changing the distribution of collocation points are minor. The smallest errors were obtained using the Hammersley method. When increasing the number of collocation points, the obtained errors were slightly smaller. For example, taking 10,000 points results in the max error , while for 100 points it was ≈0.031. However, even with 50 points, the results are satisfactory. Increasing the number of collocation points only slightly reduces the errors while extending the computation time. Additionally, the training times of the model are provided in Table 10. Increasing the number of collocation points leads to longer computation times; however, the growth is not linearly proportional. For example, for 1000 collocation points, the computation time was 34 s, whereas for 10,000 points, the model required approximately 120 s to train.
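The sampling strategy of the interior collocation points is controlled by the `train_distribution` argument of the DeepXDE data object, as sketched below; `geomtime`, `burgers_residual`, `bc`, and `ic` are placeholders for the Burgers geometry, residual, and conditions, and the point counts are illustrative.

```python
import deepxde as dde

# Quasi-random Hammersley sampling of the interior collocation points.
data = dde.data.TimePDE(
    geomtime, burgers_residual, [bc, ic],
    num_domain=200, num_boundary=80, num_initial=48,
    train_distribution="Hammersley",
)
```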
The final test conducted on this example was to examine the impact of the network architecture (the number of hidden layers and neurons per layer) on the obtained errors. The results of this test are shown in Table 11. The last column in Table 11, named Params, refers to the total number of network parameters (the number of weights and biases). An insufficient number of hidden layers or neurons in the network is not enough for the model to provide satisfactory predictions. The mean and maximum errors for a single hidden layer are unsatisfactory, as are the errors for two hidden layers with five neurons. Moving from one to two layers yields the largest quality jump (e.g., with 10 neurons Mean Error drops to at similar training time). The best results were obtained with six layers and 30 neurons. Increasing the number of hidden layers and neurons has a significant impact on the computation time. The training time grows almost linearly with the number of parameters (correlation ). Large and deep models may require significantly longer training time without a guaranteed improvement in error. The results indicate that some deeper yet still moderately wide models (e.g., ) are more efficient per parameter and per unit of time than very wide ones (e.g., , ), which suffer from optimization difficulties. Based on the data in the Table, Figure 12 and Figure 13 show the mean error as a function of the number of network parameters, as well as the computation time as a function of the number of parameters. The solution obtained from PINN for best case is presented in Figure 14. Figure 15 shows the error distribution obtained from the PINN model for selected network architecture settings. The smallest errors were obtained with six hidden layers and 30 neurons (Figure 15c), while the largest errors occurred with one hidden layer and 10 neurons (Figure 15a).
Table 11.
Performance of different hyperparameter combinations concerning number of hidden layers and neurons.
Figure 12.
Mean error as a function of the number of network parameters (Burgers’ equation).
Figure 13.
Training time as a function of the number of network parameters (Burgers’ equation).
Figure 14.
Prediction from PINN for 6 hidden layers and 30 neurons (Burgers’ equation).
Figure 15.
Error distribution obtained from the PINN for selected hyperparameters concerning the number of hidden layers and neurons (Burgers’ equation).
The results demonstrate that physics-informed neural networks (PINNs) can effectively solve the Burgers’ equation with high accuracy compared to traditional methods such as the explicit and exact-explicit schemes, producing smaller approximation errors across most of the domain.
3.3. Volterra Integro-Differential Equation
Volterra integro-differential equations (VIDEs) are a class of functional equations that combine differential and integral operations on an unknown function. A standard form of a first-order VIDE is as follows:
$$u'(t) = f\bigl(t, u(t)\bigr) + \int_{0}^{t} K(t, s)\, u(s)\, ds,$$
where $K(t, s)$ is a known kernel function and f defines the local dynamics. VIDEs naturally arise in systems where the future state depends not only on the current state but also on the historical evolution of the system [39].
Volterra integro-differential equations appear in various scientific and engineering contexts such as population dynamics, heat transfer in materials with memory, neuroscience, finance, etc. A detailed analysis and overview on these models can be seen in the works [40,41]. Thus, these equations are crucial for modeling systems where historical data fundamentally influences future dynamics.
Traditional numerical techniques, such as quadrature methods, finite difference schemes, or collocation methods, often face difficulties due to high computational cost from evaluating the integral term at each time step, stability issues over long time intervals, and accuracy demands for capturing both differential and integral contributions. These challenges intensify for higher-order VIDEs due to the presence of multiple derivatives and their interactions with history-dependent terms. PINNs offer a modern approach by embedding the physical laws into the loss function of a neural network.
In this section, we solve a fourth-order Volterra integro-differential equation using PINNs and then compare the results with a traditional numerical method in which the same equation is solved using the variational iteration method with collocation. First, we describe this method and its convergence analysis, and then we compare the results through figures and tables. The equation is solved by Otaide and Oluwayemi in their work “Numerical treatment of linear Volterra integro differential equations using variational iteration algorithm with collocation” [42]. The equation is described as
The exact solution for this problem is as follows:
In the work of Otaide and Oluwayemi [42], the equation is solved using fourth-kind Chebyshev polynomials combined with the variational iteration method and a collocation technique, resulting in a hybrid approach that integrates variational iteration with collocation. The method is briefly described in Section 3.3.1.
Before comparing the results obtained with PINNs to those from the method described in Section 3.3.1, computations were performed on various network architectures (see Table 12). Similar to the previous examples, the lowest mean errors were achieved by moderate architectures: (Mean ), (Mean ), and (Mean ). Analogous to the previously analyzed examples, Figure 16 presents the error distribution at the grid points for four sample cases from Table 12. The solution obtained with PINN using six hidden layers and 50 neurons is shown in Figure 17.
Table 12.
Performance of different hyperparameter combinations for VIDE.
Figure 16.
Error distribution obtained from the PINN for selected hyperparameters concerning the number of hidden layers and neurons (Volterra equation).
Figure 17.
Prediction from PINN for 6 hidden layers and 50 neurons (VIDE).
3.3.1. The Standard Variational Iteration Method Combined with Shifted Chebyshev Polynomials of the Fourth Kind
Consider a Volterra integro-differential equation of the form
where $u(t)$ is the unknown function, $f(t)$ is a known forcing term, and $K(t, s)$ is the kernel. The correction functional for the Variational Iteration Method (VIM) is
with $\lambda$ as the Lagrange multiplier, determined optimally using variational principles. The subscript m indicates the mth iteration, and $\tilde{u}_m$ is considered a restricted variation (i.e., $\delta \tilde{u}_m = 0$) during the optimization.
To discretize the problem, we apply the standard collocation technique by choosing collocation points uniformly distributed over
where N denotes the number of collocation points. This discretization converts the continuous problem into a system of algebraic equations.
For better approximation, we expand the solution using Chebyshev polynomials of the fourth kind $W_n(x)$, which are orthogonal on $[-1, 1]$ with respect to the weight function
$$w(x) = \sqrt{\frac{1 - x}{1 + x}}.$$
These polynomials satisfy the recurrence relation
$$W_0(x) = 1, \quad W_1(x) = 2x + 1, \quad W_{n+1}(x) = 2x\,W_n(x) - W_{n-1}(x).$$
Chebyshev polynomials exhibit excellent approximation properties, such as rapid convergence and minimization of the maximum error.
To adapt the Chebyshev polynomials to the problem interval $[a, b]$, we employ the shifted Chebyshev polynomials defined by the following:
$$W_n^*(t) = W_n\!\left(\frac{2(t - a)}{b - a} - 1\right),$$
with the following recurrence relation:
$$W_0^*(t) = 1, \quad W_1^*(t) = \frac{2(2t - a - b)}{b - a} + 1, \quad W_{n+1}^*(t) = \frac{2(2t - a - b)}{b - a}\,W_n^*(t) - W_{n-1}^*(t).$$
These shifted polynomials retain the favorable properties of their unshifted counterparts while matching the domain of the problem. The hybrid method approximates the solution as
$$u(t) \approx u_N(t) = \sum_{i=0}^{N} c_i\, W_i^*(t),$$
and iteratively refines it using the VIM correction functional, where $c_i$ are the coefficients to be determined.
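For completeness, the shifted fourth-kind Chebyshev basis used by the reference method can be generated directly from the three-term recurrence given above; the sketch below evaluates W*_0, …, W*_N on a generic interval [a, b].

```python
import numpy as np

def shifted_chebyshev_fourth_kind(t, N, a=0.0, b=1.0):
    """Evaluate the shifted fourth-kind Chebyshev polynomials W*_0..W*_N at the points t."""
    t = np.asarray(t, dtype=float)
    s = 2.0 * (t - a) / (b - a) - 1.0         # map [a, b] -> [-1, 1]
    W = np.empty((N + 1,) + s.shape)
    W[0] = 1.0
    if N >= 1:
        W[1] = 2.0 * s + 1.0                  # W_1(x) = 2x + 1 for the fourth kind
    for n in range(1, N):
        W[n + 1] = 2.0 * s * W[n] - W[n - 1]  # three-term recurrence
    return W

# Example: basis values at a few points of [0, 1].
vals = shifted_chebyshev_fourth_kind(np.linspace(0.0, 1.0, 5), N=4)
```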
3.3.2. Convergence of the Method
When solving a differential equation numerically using iterative methods, it is important to prove that the method actually converges, i.e., that the sequence of approximations produced by the method becomes closer and closer to the true solution as the number of iterations tends to infinity. In this case, the analysis relies on Banach’s Fixed-Point Theorem.
Theorem 5
Let X be a Banach space with norm $\|\cdot\|$, and let $T\colon X \to X$ be the linear operator defined by the following:
where $u_N$ is the approximate solution using the shifted Chebyshev polynomials $W_n^*$. If T is a contraction, i.e., there exists $\gamma \in (0, 1)$ such that
$$\|T[u] - T[v]\| \le \gamma\, \|u - v\| \quad \text{for all } u, v \in X,$$
then
- 1.
- T has a unique fixed point $u^* \in X$.
- 2.
- The sequence $\{u_m\}$ generated by $u_{m+1} = T[u_m]$ converges to $u^*$ for any initial guess $u_0 \in X$.
Proof.
By the Banach Fixed-Point Theorem, T has a unique fixed point $u^*$ since X is complete and T is a contraction. For $p \ge 1$, the triangle inequality and the contraction property yield
$$\|u_{m+p} - u_m\| \le \sum_{k=0}^{p-1} \|u_{m+k+1} - u_{m+k}\| \le \frac{\gamma^m}{1 - \gamma}\, \|u_1 - u_0\|.$$
The geometric series converges, so the right-hand side tends to zero as $m \to \infty$; hence $\{u_m\}$ is a Cauchy sequence and thus convergent. At the fixed point $u^*$, the iteration reduces to
$$u^* = T[u^*],$$
implying that $u^*$ satisfies the original equation. Hence, $u^*$ solves the VIDE. □
Proposition 1.
According to Theorem 5, for the linear mapping defined as the following:
or equivalently
this is a necessary condition for the variational iteration approach to converge. The sequence $\{u_m\}$ converges to a fixed point of T, which is also a solution of (5).
3.3.3. Comparison of Results
Now we show the comparison between the results for Equation (5) obtained with the standard variational iteration method in [42] and with PINNs. When solving Volterra integro-differential equations by PINNs, the integral term is approximated via numerical quadrature; here, Gaussian quadrature with a prescribed degree was used to approximate the integral. In Figure 18a, the plot shows excellent agreement between the PINN approximation and the exact analytical solution over the considered interval. Figure 18b illustrates the absolute error between the approximate and exact solutions, showing that the error remains minimal throughout the domain, with a slight increase near the upper boundary.
Figure 18.
Plots of the approximated and exact solution (a) and the absolute approximation error (b).
The results are obtained using 50 training points and three layers of neurons, each with eight neurons, while the activation function is tanh.
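One way to realize the quadrature treatment described above is sketched below in PyTorch: for each collocation point $t_i$, the Volterra integral $\int_0^{t_i} K(t_i, s)\,u(s)\,ds$ is approximated with Gauss–Legendre nodes mapped onto $[0, t_i]$, with the network evaluated at those nodes. The kernel, the network architecture, and the quadrature degree are placeholders; this is not the DeepXDE implementation used for the reported results.

```python
import numpy as np
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 8), nn.Tanh(),
                    nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 1))   # 3 hidden layers, 8 neurons

def kernel(t, s):                      # placeholder kernel K(t, s)
    return torch.exp(t - s)

# Gauss-Legendre rule of a prescribed degree on the reference interval [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(10)
nodes = torch.tensor(nodes, dtype=torch.float32)
weights = torch.tensor(weights, dtype=torch.float32)

def volterra_integral(t):
    """Approximate int_0^t K(t, s) u(s) ds for a batch of collocation points t (shape [n, 1])."""
    s = 0.5 * t * (nodes + 1.0)                   # nodes mapped to [0, t], shape [n, q]
    w = 0.5 * t * weights                         # scaled weights, shape [n, q]
    u_s = net(s.reshape(-1, 1)).reshape(s.shape)  # network evaluated at the quadrature nodes
    return torch.sum(w * kernel(t, s) * u_s, dim=1, keepdim=True)

t_c = torch.rand(50, 1, requires_grad=True)       # collocation points in (0, 1)
integral_term = volterra_integral(t_c)            # enters the VIDE residual together with the
                                                  # derivatives of net(t_c) obtained by autodiff
```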
In Table 13, we provide numerical values of the solution at selected time points, comparing the results from the variational iteration method [42], PINNs, and the exact solution. It can be seen that, at the beginning of the interval, both solutions perfectly match the exact solution. The solutions of [42] are generally better than PINNs for small t, while for larger values of t PINNs, somewhat unexpectedly, outperform the solutions of [42].
Table 13.
Comparison of approximate and exact solutions.
Comparing the error metrics in Table 14, we can see that the mean and median errors for PINNs are an order of magnitude smaller than those of the reference solutions [42]; for example, the PINNs’ mean error is ∼27 times smaller than the reference error. The standard deviation for PINNs is also much lower, which indicates more consistent performance compared to the reference solutions [42].
Table 14.
Error metrics comparison between PINNs and solutions [42].
In Figure 19, we give a comparative analysis of the absolute errors between the PINNs and the reference solutions [42]. In Figure 19a, the line plot shows that the PINNs approach consistently maintains a significantly lower error across the entire time interval compared to solutions [42]. The error for the reference method [42] grows steadily with time and reaches above near , whereas the error for the PINNs solution remains very close to zero.
Figure 19.
Illustration of comparative analysis of the absolute errors, (a) absolute error for the reference solution and that obtained with PINN; (b) boxplot of the statistical summary of the error distributions.
In Figure 19b, the boxplot provides a statistical summary of the error distributions. The median error and interquartile range for the PINNs is lower than those of reference solutions [42]. Moreover, the PINNs error exhibits minimal spread and lower maximum error.
These comparisons clearly demonstrate that the PINN-based approach outperforms the traditional method in both average accuracy and robustness, making it a more reliable choice for solving this kind of Volterra integro-differential equation.
4. Conclusions
This work investigated the application of physics-informed neural networks (PINNs) to solve the Poisson equation, Burgers’ equation, and a fourth-order Volterra integro-differential equation. Comparative analysis with traditional numerical methods such as finite difference schemes and variational iteration techniques highlighted the flexibility and accuracy of PINNs, especially in scenarios where classical approaches encounter limitations. A systematic study of key hyperparameters—including the distribution of collocation points, network architecture, and activation functions—revealed their significant influence on both training efficiency and solution accuracy. Theoretical results further demonstrated that, for second-order linear elliptic and parabolic PDEs, PINN solutions converge strongly to the exact solution, with convergence in an even stronger norm when boundary or initial conditions are enforced at all collocation points.
While the findings support the potential of PINNs as a viable alternative to conventional numerical solvers, challenges such as sensitivity to hyperparameter choices and high computational cost persist. All experiments were conducted using the DeepXDE library [43], which proved to be a reliable platform for model implementation and evaluation. Future work may focus on improving training strategies, automating hyperparameter tuning, and extending the framework to more complex or high-dimensional PDEs and inverse problems.
Author Contributions
Conceptualization, R.B., M.P. and D.A.M.; methodology, R.B., M.P. and D.A.M.; software, R.B., M.P. and D.A.M.; validation, R.B., M.P. and D.A.M.; investigation, R.B., M.P. and D.A.M.; writing—original draft preparation, R.B., M.P. and D.A.M.; writing—review and editing, R.B., M.P. and D.A.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10561. [Google Scholar] [CrossRef]
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10566. [Google Scholar] [CrossRef]
- Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific machine learning through physics–informed neural networks: Where we are and what’s next. J. Sci. Comput. 2022, 92, 88. [Google Scholar]
- Brociek, R.; Pleszczyński, M. Differential Transform Method and Neural Network for Solving Variational Calculus Problems. Mathematics 2024, 12, 2182. [Google Scholar] [CrossRef]
- Faroughi, S.A.; Pawar, N.M.; Fernandes, C.; Raissi, M.; Das, S.; Kalantari, N.K.; Kourosh Mahjour, S. Physics-Guided, Physics-Informed, and Physics-Encoded Neural Networks and Operators in Scientific Computing: Fluid and Solid Mechanics. J. Comput. Inf. Sci. Eng. 2024, 24, 040802. [Google Scholar] [CrossRef]
- Usama, M.; Ma, R.; Hart, J.; Wojcik, M. Physics-Informed Neural Networks (PINNs)-Based Traffic State Estimation: An Application to Traffic Network. Algorithms 2022, 15, 447. [Google Scholar] [CrossRef]
- Wang, S.; Wang, H.; Perdikaris, P. On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 2021, 384, 113938. [Google Scholar] [CrossRef]
- Yuan, L.; Ni, Y.Q.; Deng, X.Y.; Hao, S. A-PINN: Auxiliary physics informed neural networks for forward and inverse problems of nonlinear integro-differential equations. J. Comput. Phys. 2022, 462, 111260. [Google Scholar] [CrossRef]
- Xiang, Z.; Peng, W.; Liu, X.; Yao, W. Self-adaptive loss balanced Physics-informed neural networks. Neurocomputing 2022, 496, 11–34. [Google Scholar] [CrossRef]
- Dwivedi, V.; Parashar, N.; Srinivasan, B. Distributed learning machines for solving forward and inverse problems in partial differential equations. Neurocomputing 2021, 420, 299–316. [Google Scholar] [CrossRef]
- Lee, J. Anti-derivatives approximator for enhancing physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 2024, 426, 117000. [Google Scholar] [CrossRef]
- McClenny, L.D.; Braga-Neto, U.M. Self-adaptive physics-informed neural networks. J. Comput. Phys. 2023, 474, 111722. [Google Scholar] [CrossRef]
- Brociek, R.; Pleszczyński, M. Differential Transform Method (DTM) and Physics-Informed Neural Networks (PINNs) in Solving Integral–Algebraic Equation Systems. Symmetry 2024, 16, 1619. [Google Scholar] [CrossRef]
- Ren, Z.; Zhou, S.; Liu, D.; Liu, Q. Physics-Informed Neural Networks: A Review of Methodological Evolution, Theoretical Foundations, and Interdisciplinary Frontiers Toward Next-Generation Scientific Computing. Appl. Sci. 2025, 15, 92. [Google Scholar] [CrossRef]
- Lawal, Z.K.; Yassin, H.; Lai, D.T.C.; Che Idris, A. Physics-Informed Neural Network (PINN) Evolution and Beyond: A Systematic Literature Review and Bibliometric Analysis. Big Data Cogn. Comput. 2022, 6, 140. [Google Scholar] [CrossRef]
- Coutinho, E.J.R.; Dall’Aqua, M.; McClenny, L.; Zhong, M.; Braga-Neto, U.; Gildin, E. Physics-informed neural networks with adaptive localized artificial viscosity. J. Comput. Phys. 2023, 489, 112265. [Google Scholar] [CrossRef]
- Diao, Y.; Yang, J.; Zhang, Y.; Zhang, D.; Du, Y. Solving multi-material problems in solid mechanics using physics-informed neural networks based on domain decomposition technology. Comput. Methods Appl. Mech. Eng. 2023, 413, 116120. [Google Scholar] [CrossRef]
- Lazovskaya, T.; Malykhina, G.; Tarkhov, D. Physics-Based Neural Network Methods for Solving Parameterized Singular Perturbation Problem. Computation 2021, 9, 97. [Google Scholar] [CrossRef]
- Jagtap, A.D.; Mao, Z.; Adams, N.; Karniadakis, G.E. Physics-informed neural networks for inverse problems in supersonic flows. J. Comput. Phys. 2022, 466, 111402. [Google Scholar] [CrossRef]
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
- Baydin, A.G.; Pearlmutter, B.A.; Radul, A.A.; Siskind, J.M. Automatic differentiation in machine learning: A survey. J. Mach. Learn. Res. 2018, 18, 1–43. [Google Scholar]
- Uddin, Z.; Ganga, S.; Asthana, R.; Ibrahim, W. Wavelets based physics informed neural networks to solve non-linear differential equations. Sci. Rep. 2023, 13, 2882. [Google Scholar] [CrossRef] [PubMed]
- Shin, Y.; Darbon, J.; Karniadakis, G.E. On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs. arXiv 2020, arXiv:2004.01806. [Google Scholar] [CrossRef]
- Doumèche, N.; Biau, G.; Boyer, C. Convergence and error analysis of PINNs. arXiv 2023, arXiv:2305.01240. [Google Scholar] [CrossRef]
- Yoo, J.; Lee, H. Robust error estimates of PINN in one-dimensional boundary value problems for linear elliptic equations. arXiv 2024, arXiv:2407.14051. [Google Scholar] [CrossRef]
- De Ryck, T.; Mishra, S. Error analysis for physics-informed neural networks (PINNs) approximating Kolmogorov PDEs. Adv. Comput. Math. 2022, 48, 79. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Holm, D.D. Applications of Poisson geometry to physical problems. Geom. Topol. Monogr. 2011, 17, 221–384. [Google Scholar]
- Nolasco, C.; Jácome, N.; Hurtado-Lugo, N. Applications of the Poisson and diffusion equations to materials science. J. Phys. Conf. Ser. 2020, 1587, 012014. [Google Scholar]
- Klopfenstein, R.; Wu, C. Computer solution of one-dimensional Poisson’s equation. IEEE Trans. Electron Devices 1975, 22, 329–333. [Google Scholar] [CrossRef]
- Bhardwaj, S.; Gohel, H.; Namuduri, S. A Multiple-Input Deep Neural Network Architecture for Solution of One-Dimensional Poisson Equation. IEEE Antennas Wirel. Propag. Lett. 2019, 18, 2244–2248. [Google Scholar] [CrossRef]
- Kraichnan, R.H. Lagrangian-history statistical theory for Burgers’ equation. Phys. Fluids 1968, 11, 265–277. [Google Scholar] [CrossRef]
- Xie, S.S.; Heo, S.; Kim, S.; Woo, G.; Yi, S. Numerical solution of one-dimensional Burgers’ equation using reproducing kernel function. J. Comput. Appl. Math. 2008, 214, 417–434. [Google Scholar] [CrossRef]
- Inan, B.; Bahadir, A.R. Numerical solution of the one-dimensional Burgers’ equation: Implicit and fully implicit exponential finite difference methods. Pramana 2013, 81, 547–556. [Google Scholar] [CrossRef]
- Bonkile, M.P.; Awasthi, A.; Lakshmi, C.; Mukundan, V.; Aswin, V. A systematic literature review of Burgers’ equation with recent advances. Pramana 2018, 90, 69. [Google Scholar] [CrossRef]
- Kutluay, S.; Bahadir, A.; Özdeş, A. Numerical solution of one-dimensional Burgers equation: Explicit and exact-explicit finite difference methods. J. Comput. Appl. Math. 1999, 103, 251–261. [Google Scholar] [CrossRef]
- Mukundan, V.; Awasthi, A. Efficient numerical techniques for Burgers’ equation. Appl. Math. Comput. 2015, 262, 282–297. [Google Scholar] [CrossRef]
- Brunner, H. Collocation Methods for Volterra Integral and Related Functional Differential Equations; Cambridge University Press: Cambridge, UK, 2004; Volume 15. [Google Scholar]
- Volterra, V. Leçons sur la Théorie Mathématique de la Lutte pour la Vie; Gauthier Villars: Paris, France, 1931. [Google Scholar]
- Joseph, D.D.; Preziosi, L. Heat waves. Rev. Mod. Phys. 1989, 61, 41. [Google Scholar] [CrossRef]
- Otaide, I.J.; Oluwayemi, M.O. Numerical treatment of linear volterra integro differential equations using variational iteration algorithm with collocation. Partial. Differ. Equ. Appl. Math. 2024, 10, 100693. [Google Scholar] [CrossRef]
- Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).