Communication

A Comparative Study of the Explicit Finite Difference Method and Physics-Informed Neural Networks for Solving the Burgers’ Equation

Svetislav Savović, Miloš Ivanović and Rui Min
1 Faculty of Science, University of Kragujevac, R. Domanovića 12, 34000 Kragujevac, Serbia
2 Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
* Author to whom correspondence should be addressed.
Axioms 2023, 12(10), 982; https://doi.org/10.3390/axioms12100982
Submission received: 27 September 2023 / Revised: 12 October 2023 / Accepted: 16 October 2023 / Published: 18 October 2023

Abstract: The Burgers’ equation is solved using the explicit finite difference method (EFDM) and physics-informed neural networks (PINN). We compare our numerical results, obtained using the EFDM and PINN for three test problems with various initial conditions and Dirichlet boundary conditions, with the analytical solutions, and, while both approaches yield very good agreement, the EFDM results are more closely aligned with the analytical solutions. Since there is good agreement between all of the numerical findings from the EFDM, PINN, and analytical solutions, both approaches are competitive and deserving of recommendation. The conclusions that are provided are significant for simulating a variety of nonlinear physical phenomena, such as those that occur in flood waves in rivers, chromatography, gas dynamics, and traffic flow. Additionally, the concepts of the solution techniques used in this study may be applied to the development of numerical models for this class of nonlinear partial differential equations by present and future model developers of a wide range of diverse nonlinear physical processes.

1. Introduction

For many years, both in fluid mechanics and in heat transfer, significant research has been devoted to analytical techniques and numerical simulations of the non-linear partial differential equations (PDEs) encountered in computational fluid dynamics. The complexity of non-linear PDEs increases as more and more real-world characteristics affecting engineering systems are taken into account. The Burgers’ equation is one of the most famous equations that combines non-linear propagation effects and diffusive effects. Due to its applicability in a variety of domains, such as gas dynamics [1], heat conduction [2], elasticity [3], and solute transport in ground water [4], the study of the general properties of the Burgers’ equation has attracted significant interest. In addition, it can be used to evaluate different numerical algorithms. Because of its broad range of applications, several studies have explored the features of its solution using different analytical and numerical techniques. With the aid of the Hopf–Cole transformation, Rodin [5] studied a few approximate and exact solutions to the boundary value problem for Burgers’ equation. Benton and Platzman [6] provided 35 different analytic solutions to Burgers’ equation with various initial conditions. By using group actions on coset bundles, Wolf et al. [7] found a method to extend the analytical solution of Burgers’ equation to n-dimensional situations. Nerney et al. [8] expanded the solutions to curvilinear coordinate systems. Using Hopf–Cole and Darboux transformations, Kudryavtsev and Sapozhnikov [9] proposed a method to find the exact solution of the inhomogeneous Burgers’ equation.

Significant efforts have also been made over the past few decades to create reliable numerical techniques for handling the Burgers’ equation. Kutluay et al. [10] transformed the Burgers’ equation into the heat equation with insulated boundary conditions using the Hopf–Cole method and solved it with explicit and exact-explicit FD schemes. A two-level, three-point explicit FD scheme that is second-order accurate in time and fourth-order accurate in space was created by Hassanien et al. [11]; according to a von Neumann stability analysis, the approach is unconditionally stable. In the fully implicit FD scheme proposed by Bahadir [12], the resulting non-linear system is solved using Newton’s method. Kadalbajoo et al. [13] proposed an implicit method for solving the Burgers’ equation. For a numerical simulation of the Burgers’ equation, Mukundan and Awasthi [14] suggested a numerical approach based on the semi-discretization technique and an implicit FDM. All of these numerical approaches require the modeled problem to be set up on a mesh (grid) of finitely many points. Even though they are regarded as elegant and useful strategies, they become harder to apply as the number of dimensions grows: because of their mesh-based design, a rise in dimensionality is accompanied by a sharp increase in computational operations and resource requirements.

As a result of the continued research into artificial intelligence and improvements in computing power, a completely new family of modeling techniques, such as PINN, has emerged. Raissi et al. [15] recently demonstrated how PINN can be successfully employed for solving the Burgers’ equation.
It has been shown that PINN is also an effective tool for solving the nonlinear Schrödinger equation, the Allen–Cahn equation, the Navier–Stokes equations, and the Korteweg–de Vries equation, as well as high-dimensional inverse problems [15].
In this work, the EFDM and PINN are employed for solving the Burgers’ equation. The FDM and PINN are two different approaches for solving PDEs. The FDM is a numerical method for approximating a function’s derivatives at discrete points inside a domain. The process requires creating a grid of discrete points within the domain, at which the function values are calculated. The approximate derivatives are then obtained from differences between the function values at neighboring points. The accuracy of the FDM depends on the grid step size and on the approximation order used to generate the derivatives. PINN is a machine-learning-based approach for solving PDEs. In PINN, a neural network is trained to learn the underlying physics of a system and to approximately solve a PDE. The residual of the PDE, which quantifies how well the predicted solution satisfies the governing equation, is computed, and the neural network is trained to minimize it. The main benefit of PINN is that it can handle complicated boundary conditions and geometries, which can be difficult to model using conventional numerical techniques. On the other hand, it may be computationally expensive and may require a lot of data for training. The choice of hyper-parameters, such as the number of layers and neurons in the neural network, may also have an impact on PINN performance. In this study, our numerical results for three test problems with various initial conditions and Dirichlet boundary conditions, obtained using the EFDM and PINN, are compared to the analytical solutions reported in the literature.

2. The Burgers’ Equation

We consider the Burgers’ equation:
$$\frac{\partial u(x,t)}{\partial t} = \nu\,\frac{\partial^2 u(x,t)}{\partial x^2} - u(x,t)\,\frac{\partial u(x,t)}{\partial x}, \qquad x \in [0,1],\; t \in [0,T] \tag{1}$$
with the initial condition:
$$u(x,0) = u_0(x), \qquad 0 < x < 1 \tag{2}$$
and the boundary conditions:
$$u(0,t) = 0 = u(1,t), \qquad 0 < t \le T \tag{3}$$
where ν > 0 is a parameter, u ∂u(x,t)/∂x is the non-linear term, and u0(x) is a given, sufficiently smooth function. Burgers’ Equation (1) can describe the behavior of fluid flow and can be used to model various physical phenomena, such as shock waves and turbulence. In Equation (1), ν is a kinematic viscosity parameter and the term ∂u(x,t)/∂t is the time derivative of the velocity, which describes how the velocity of the fluid changes with time. The term u ∂u(x,t)/∂x represents the non-linear advection of the velocity field, which describes how the fluid carries its own velocity along with it as it flows. The term ν ∂²u(x,t)/∂x² represents the diffusion of the velocity field due to the viscosity of the fluid; it describes how the velocity field spreads out over time and space due to the internal friction of the fluid. Burgers’ equation therefore describes the balance between the advection of the velocity field and its diffusion due to viscosity. When ν approaches zero, Equation (1) becomes the inviscid Burgers’ equation, which is a model for nonlinear wave propagation.

3. Explicit Finite Difference Method

Using the EFDM, where the forward FD scheme is used to represent the derivative term ∂u(x,t)/∂t ≈ (u_i^{j+1} − u_i^j)/Δt and central FD schemes are used to represent the derivative terms ∂u(x,t)/∂x ≈ (u_{i+1}^j − u_{i−1}^j)/(2Δx) and ∂²u(x,t)/∂x² ≈ (u_{i+1}^j − 2u_i^j + u_{i−1}^j)/(Δx)², Equation (1) is written in the following form:
$$\frac{u_i^{j+1} - u_i^j}{\Delta t} = \nu\,\frac{u_{i+1}^j - 2u_i^j + u_{i-1}^j}{(\Delta x)^2} - u_i^j\,\frac{u_{i+1}^j - u_{i-1}^j}{2\Delta x} \tag{4}$$
where u_i^j ≡ u(x_i, t_j) and the indexes i and j refer to the discrete step lengths ∆x and ∆t for the coordinate x and time t, respectively. The grid dimensions in the x and t directions are K = 1/Δx and M = T/Δt, respectively. Using the FD scheme, the initial condition (2) and boundary conditions (3) are given as:
$$u_i^0 = u_0(x_i); \qquad 0 < x_i < 1,\; i = 1,2,\dots,K \qquad (t = 0) \tag{5}$$
$$u_0^j = 0 = u_K^j, \qquad j = 0,1,\dots,M \qquad (x = 0 \text{ and } x = 1) \tag{6}$$
Equation (4) represents a formula for u_i^{j+1} at the (i, j + 1)th mesh point in terms of the known values along the jth time row. The truncation error for the difference Equation (4) is O(∆t, (Δx)²). The truncation error can be decreased using small enough values of ∆t and ∆x until the accuracy attained is within the error tolerance.
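For illustration, a minimal NumPy sketch of the explicit update (4), together with the discrete conditions (5) and (6), might look as follows; this is our own illustrative code rather than the implementation used to produce the results below, and the function and variable names (burgers_efdm, u0, and so on) are assumptions. The example call uses the step lengths ∆x = 0.01, ∆t = 0.0001 and the Test problem 1 initial condition quoted in Section 5.

```python
import numpy as np

def burgers_efdm(u0, nu, dx, dt, T):
    """March the explicit scheme (4) from t = 0 to t = T.

    u0 : initial values u_i^0 on the grid x_i = i*dx, i = 0, ..., K
    Returns the solution at the final time level.
    """
    M = round(T / dt)                      # number of time steps
    u = np.array(u0, dtype=float)
    for _ in range(M):
        un = u.copy()
        # Interior points: forward difference in time, central differences in space
        u[1:-1] = (un[1:-1]
                   + dt * nu * (un[2:] - 2.0 * un[1:-1] + un[:-2]) / dx**2
                   - dt * un[1:-1] * (un[2:] - un[:-2]) / (2.0 * dx))
        u[0] = u[-1] = 0.0                 # Dirichlet boundary conditions (6)
    return u

# Example: Test problem 1 with u(x, 0) = sin(pi * x), nu = 0.05, T = 0.5
dx, dt = 0.01, 0.0001
x = np.linspace(0.0, 1.0, round(1.0 / dx) + 1)
u_final = burgers_efdm(np.sin(np.pi * x), nu=0.05, dx=dx, dt=dt, T=0.5)
```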

4. Physics-Informed Neural Networks

4.1. The Basic Concept of Physics-Informed Neural Networks in Solving PDEs

A machine learning method called the PINN can be used to approximatively solve PDEs. A general form of PDEs with corresponding initial and boundary conditions is:
$$\begin{aligned}
&\frac{\partial u(x,t)}{\partial t} + \mathcal{N}[u(x,t)] = 0, && x \in \Omega,\; t \in [0,T]\\
&u(x, t=0) = h(x), && x \in \Omega\\
&u(x,t) = g(x,t), && x \in \partial\Omega_g,\; t \in [0,T]
\end{aligned} \tag{7}$$
Here, N is a differential operator, x ∈ Ω ⊂ R^d and t ∈ R represent the spatial and temporal coordinates, respectively, Ω ⊂ R^d is the computational domain, ∂Ω_g ⊆ ∂Ω is the part of the boundary on which the boundary conditions are imposed, and u(x,t) is the solution of the PDE with the initial condition h(x) and boundary conditions g(x,t).
In the original formulation [16], an approximator network and a residual network are the two subnets that make up PINN. After receiving the input ( x , t ) and going through the training process, the approximator network outputs an approximate solution u ( x , t ) . A grid of points, referred to as collocation points, sampled at random or on a regular basis from the simulation domain, is used by the approximator network to train. The weights and biases of the approximator network make up a set of trainable parameters, trained by minimizing a composite loss function of the following form:
$$L = L_r + L_0 + L_b \tag{8}$$
where:
$$\begin{aligned}
L_r &= \frac{1}{N_r}\sum_{i=1}^{N_r}\left|\frac{\partial u(x_i,t_i)}{\partial t} + \mathcal{N}[u(x_i,t_i)]\right|^2\\
L_0 &= \frac{1}{N_0}\sum_{i=1}^{N_0}\left|u(x_i,t_i) - h_i\right|^2\\
L_b &= \frac{1}{N_b}\sum_{i=1}^{N_b}\left|u(x_i,t_i) - g_i\right|^2
\end{aligned} \tag{9}$$
Here, L_r, L_0, and L_b represent the residuals of the governing equations, the initial conditions, and the boundary conditions, respectively. N_r, N_0, and N_b are the numbers of collocation points in the computational domain, on the initial slice, and on the boundary, respectively. The residual network, a non-trainable component of the PINN model, calculates these residuals. PINN needs derivatives of the outputs with respect to the inputs x and t to calculate the residual L_r. Such a calculation is performed through automatic differentiation, which relies on the fact that combining derivatives of the constituent operations by the chain rule produces the derivative of the entire composition. This technique is a key enabler for the development of PINNs and is the main element that differentiates PINNs from comparable efforts in the early 1990s, which relied on the manual derivation of back-propagation rules. Nowadays, automatic differentiation capabilities are well implemented in most deep learning frameworks, such as TensorFlow and PyTorch, avoiding tedious derivations or numerical discretization while computing derivatives of all orders in space–time.
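To make the role of automatic differentiation concrete, the following PyTorch sketch computes the derivatives needed for the Burgers residual entering L_r from a small fully connected approximator network. The architecture, the value of ν, and all names here are illustrative assumptions and not the configuration used in this work.

```python
import torch

# A small fully connected approximator network u(x, t); architecture is illustrative.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def burgers_residual(x, t, nu=0.05):
    """Pointwise residual u_t + u * u_x - nu * u_xx at collocation points (x, t)."""
    x.requires_grad_(True)
    t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    u_t = grad(u, t)          # du/dt via reverse-mode automatic differentiation
    u_x = grad(u, x)          # du/dx
    u_xx = grad(u_x, x)       # d2u/dx2 (differentiating the graph of u_x again)
    return u_t + u * u_x - nu * u_xx

# Residual loss L_r over N_r randomly sampled collocation points in [0, 1] x [0, 1]
x_r, t_r = torch.rand(5080, 1), torch.rand(5080, 1)
loss_r = torch.mean(burgers_residual(x_r, t_r) ** 2)
```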
A schematic of the PINN is shown in Figure 1, in which a simple partial differential equation ∂f/∂x + ∂f/∂y = u is used as an example. The approximator network is used to approximate the solution u(x, t), which then goes to the residual network to calculate the residual loss L_r, the boundary condition loss L_b, and the initial condition loss L_0. The weights and biases of the approximator network are trained using a composite loss function consisting of the residuals L_r, L_0, and L_b through a gradient descent technique based on back-propagation.

4.2. Implementation of PINN in Solving the Burgers’ Equation

To conduct the PINN model development for solving the Burgers’ equation, we employed the DeepXDE library [17]. Our PINN has two inputs (x, t) and contains three layers consisting of 20 neurons each. All neurons exhibit tanh activation. The set of collocation points consists of three subsets. The largest subset of 5080 contains the collocation points that belong to the general problem domain. The second and third subsets are smaller, with 320 and 160 collocation points, and their purpose is to enforce the boundary and initial conditions, respectively. These conditions are identical in all our test problems. The PINN training process consists of two phases. In the first phase, we optimize the weights and biases using the Adam algorithm for 15,000 epochs with a learning rate of 10⁻³. In the second phase, after this “global” search is completed, the Limited-Memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm acts to get closer to the optimal solution, following [18]. The whole training process takes approximately 50 s on an NVIDIA Tesla T4 GPU accelerator. In practice, it is very likely that using different hyper-parameters, such as other activation functions, training techniques, and different PINN topologies, would result in better solutions. However, since hyper-parameter tuning is a tedious and time-consuming process and is outside the scope of our study, we selected the hyper-parameter values that are most prevalent in the Burgers’ problem literature.
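A condensed DeepXDE sketch of this setup for Test problem 1, using the hyper-parameters listed above, might read as follows. It is our reconstruction rather than the authors’ exact script; the time-domain length and the viscosity value are illustrative, while the network size, activation, collocation-point counts, and two-phase training follow the numbers quoted in the text.

```python
import numpy as np
import deepxde as dde

nu = 0.05  # illustrative kinematic viscosity

def pde(x, u):
    # x[:, 0:1] is the spatial coordinate, x[:, 1:2] is time
    u_x = dde.grad.jacobian(u, x, i=0, j=0)
    u_t = dde.grad.jacobian(u, x, i=0, j=1)
    u_xx = dde.grad.hessian(u, x, i=0, j=0)
    return u_t + u * u_x - nu * u_xx

geom = dde.geometry.Interval(0, 1)
timedomain = dde.geometry.TimeDomain(0, 1)            # T = 1, illustrative
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

bc = dde.icbc.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.icbc.IC(geomtime, lambda x: np.sin(np.pi * x[:, 0:1]),
                 lambda _, on_initial: on_initial)

data = dde.data.TimePDE(geomtime, pde, [bc, ic],
                        num_domain=5080, num_boundary=320, num_initial=160)
net = dde.nn.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)

model.compile("adam", lr=1e-3)
model.train(iterations=15000)   # "global" Adam phase ('iterations' in recent DeepXDE versions)
model.compile("L-BFGS")
model.train()                   # local L-BFGS refinement
```

The two compile/train calls mirror the two-phase training described above: a global search with Adam followed by L-BFGS refinement.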
The purpose of this work is to compare the accuracy of the numerical results obtained using the EFDM and PINN for three test problems of Burgers’ equation with respect to the analytical solutions available in the literature.

5. Results and Discussion

To illustrate the accuracy of the EFD scheme and PINN, several numerical computations are carried out for three test problems.
Test problem 1: Consider the Burgers’ equation:
$$\frac{\partial u(x,t)}{\partial t} = \nu\,\frac{\partial^2 u(x,t)}{\partial x^2} - u(x,t)\,\frac{\partial u(x,t)}{\partial x}, \qquad x \in [0,1],\; t \in [0,T] \tag{10}$$
with the initial condition:
$$u(x,0) = \sin(\pi x), \qquad 0 < x < 1 \tag{11}$$
and the boundary conditions:
$$u(0,t) = 0 = u(1,t), \qquad 0 < t \le T \tag{12}$$
The analytical solution of the problem is given as [19]:
$$u(x,t) = 2\pi\nu\,\frac{\displaystyle\sum_{n=1}^{\infty} C_n \exp(-n^2\pi^2\nu t)\, n \sin(n\pi x)}{\displaystyle C_0 + \sum_{n=1}^{\infty} C_n \exp(-n^2\pi^2\nu t)\cos(n\pi x)} \tag{13}$$
where:
$$C_0 = \int_0^1 \exp\left\{-\frac{1}{2\pi\nu}\left[1 - \cos(\pi x)\right]\right\} dx \tag{14}$$
$$C_n = 2\int_0^1 \exp\left\{-\frac{1}{2\pi\nu}\left[1 - \cos(\pi x)\right]\right\}\cos(n\pi x)\, dx \tag{15}$$
Equation (4) represents the EFD scheme of this test problem, the boundary conditions are given in (6), and the initial condition (11) becomes:
$$u_i^0 = \sin(\pi x_i); \qquad 0 < x_i < 1,\; i = 1,2,\dots,K \qquad (t = 0) \tag{16}$$
Figure 2 and Figure 3 compare our numerical solutions of the Burgers’ Equation (10) obtained using the EFD scheme (step lengths are ∆x = 0.01 and ∆t = 0.0001) and PINN with the analytical solutions (13) at different times T, for kinematic viscosity parameters ν = 0.5 and 0.05, respectively. A good agreement between our numerical solutions and the analytical solutions can be seen. Because Figure 2 and Figure 3 are insufficient for an exact comparison of the two numerical methods, we considered the root mean square error defined by:
$$\mathrm{Error} = \sqrt{\frac{1}{K}\sum_{i=1}^{K}\left(u_i^{\mathrm{method}} - u_i^{\mathrm{analyt}}\right)^2} \tag{17}$$
where K is the total number of observed points along the x axis. Equation (17) was taken as the error function, representing an accuracy evaluation of the method. As the error value decreases, the method gives a better distribution u(x,t) over a given time interval. As an additional illustration, Figure 4 shows the physical behavior of the EFD and PINN solutions of Test problem 1 in 3D at different times for ν = 0.05.
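As a concrete illustration of how the reference values and the error measure can be evaluated, the sketch below computes the coefficients (14) and (15) by numerical quadrature, sums the series (13) truncated at n_max terms, and applies the root mean square error (17). The truncation level and all names are our illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def analytical_tp1(x, t, nu, n_max=50):
    """Fourier-series solution (13)-(15) of Test problem 1, truncated at n_max terms."""
    f = lambda xi: np.exp(-(1.0 - np.cos(np.pi * xi)) / (2.0 * np.pi * nu))
    C0 = quad(f, 0.0, 1.0)[0]
    Cn = np.array([2.0 * quad(lambda xi: f(xi) * np.cos(n * np.pi * xi), 0.0, 1.0)[0]
                   for n in range(1, n_max + 1)])
    n = np.arange(1, n_max + 1)
    decay = Cn * np.exp(-n**2 * np.pi**2 * nu * t)      # C_n exp(-n^2 pi^2 nu t)
    num = 2.0 * np.pi * nu * np.sum(decay * n * np.sin(np.pi * np.outer(x, n)), axis=1)
    den = C0 + np.sum(decay * np.cos(np.pi * np.outer(x, n)), axis=1)
    return num / den

def rms_error(u_method, u_analytical):
    """Root mean square error (17) over the K observed grid points."""
    return np.sqrt(np.mean((u_method - u_analytical) ** 2))

# Example (hypothetical): compare an EFDM solution u_final on grid x at T = 0.5, nu = 0.05
# u_exact = analytical_tp1(x, 0.5, 0.05)
# print(rms_error(u_final, u_exact))
```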
Table 1 represents the accuracy of the EFDM and PINN for two kinematic viscosity parameters v. It can be noted that the EFDM provides a better match with the analytical solution.
Test problem 2: Consider the Burgers’ equation:
$$\frac{\partial u(x,t)}{\partial t} = \nu\,\frac{\partial^2 u(x,t)}{\partial x^2} - u(x,t)\,\frac{\partial u(x,t)}{\partial x}, \qquad x \in [0,1],\; t \in [0,T] \tag{18}$$
with the initial condition:
$$u(x,0) = 4x(1 - x), \qquad 0 < x < 1 \tag{19}$$
and the boundary conditions:
$$u(0,t) = 0 = u(1,t), \qquad 0 < t \le T \tag{20}$$
The analytical solution of the problem is given as [19]:
$$u(x,t) = 2\pi\nu\,\frac{\displaystyle\sum_{n=1}^{\infty} D_n \exp(-n^2\pi^2\nu t)\, n \sin(n\pi x)}{\displaystyle D_0 + \sum_{n=1}^{\infty} D_n \exp(-n^2\pi^2\nu t)\cos(n\pi x)} \tag{21}$$
where:
$$D_0 = \int_0^1 \exp\left\{-\frac{1}{3\nu}\left[x^2(3 - 2x)\right]\right\} dx \tag{22}$$
$$D_n = 2\int_0^1 \exp\left\{-\frac{1}{3\nu}\left[x^2(3 - 2x)\right]\right\}\cos(n\pi x)\, dx \tag{23}$$
Equation (4) represents the EFD solution of this test problem, the boundary conditions are given in Equation (6), and the initial condition (19) becomes:
$$u_i^0 = 4x_i(1 - x_i); \qquad 0 < x_i < 1,\; i = 1,2,\dots,K \qquad (t = 0) \tag{24}$$
Figure 5 and Figure 6 compare our numerical solutions of the Burgers’ Equation (18) obtained using the EFD scheme (step lengths are Δx = 0.01 and Δt = 0.0001) and PINN with the analytical solutions (21) at different times T for kinematic viscosity parameters ν = 0.5 and 0.1, respectively. A good agreement between these solutions can be seen. Figure 7 depicts the physical behavior of the EFD and PINN solutions of Test problem 2 in 3D at different times for ν = 0.5.
The accuracy of the EFDM and PINN for two kinematic viscosity parameters v is given in Table 2. It can be seen that the numerical results obtained using the EFDM are in better agreement with the analytical solution.
Test problem 3: Consider the Burgers’ equation:
$$\frac{\partial u(x,t)}{\partial t} = \nu\,\frac{\partial^2 u(x,t)}{\partial x^2} - u(x,t)\,\frac{\partial u(x,t)}{\partial x}, \qquad x \in [0,1],\; t \in [0,T] \tag{25}$$
with the initial condition:
$$u(x,0) = \frac{2\nu\pi\sin(\pi x)}{m + \cos(\pi x)}, \qquad 0 < x < 1 \tag{26}$$
where m > 1 is a parameter, and the boundary conditions:
$$u(0,t) = 0 = u(1,t), \qquad 0 < t \le T \tag{27}$$
The analytical solution of the problem is given as [20]:
$$u(x,t) = \frac{2\nu\pi\exp(-\pi^2\nu t)\sin(\pi x)}{m + \exp(-\pi^2\nu t)\cos(\pi x)}, \qquad m > 1 \tag{28}$$
Equation (4) represents the EFD solution of this test problem, the boundary conditions are given in Equation (6), and the initial condition (26) becomes:
$$u_i^0 = \frac{2\nu\pi\sin(\pi x_i)}{m + \cos(\pi x_i)}; \qquad 0 < x_i < 1,\; i = 1,2,\dots,K \qquad (t = 0) \tag{29}$$
Figure 8 and Figure 9 compare our numerical solutions of the Burgers’ Equation (25) obtained using the EFD scheme (step lengths are Δx = 0.01 and Δt = 0.0001) and PINN with the analytical solutions (28) (we used the parameter m = 2) at different times T for kinematic viscosity parameters ν = 0.5 and 0.02, respectively. A good agreement between these solutions can be seen. Figure 10 shows the physical behavior of the EFD and PINN solutions of Test problem 3 in 3D at different times for ν = 0.02.
Table 3 represents the accuracy of the EFDM and PINN for two kinematic viscosity parameters ν. It can be noted that the EFDM provides a better match with the analytical solution. It can also be seen that, with a decreasing kinematic viscosity parameter, the error decreases both for the EFDM and PINN.
It is worth noting that, in this work, we used a small enough value of ∆t in order to achieve the stability of the EFD scheme. As an illustration, a similar situation arises for solving a Lax–Wendroff-modified differential equation for linear and nonlinear advection [21]. Alternatively, one can adopt unconditionally stable algorithms, such as the unconditionally positive finite difference method [22], Dufort–Frankel [23], and Leapfrog–Hopscotch scheme [24]. On the other hand, one should also mention that the Burgers’ equation with Neumann boundary conditions was solved using a domain decomposition method [25].
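For orientation only, a frozen-coefficient von Neumann analysis of the forward-time, centered-space discretization of the linear advection–diffusion model equation (with a constant advective velocity a standing in for u) yields the commonly quoted heuristic restrictions

$$\frac{\nu\,\Delta t}{(\Delta x)^2} \le \frac{1}{2}, \qquad \frac{a^2\,\Delta t}{2\nu} \le 1.$$

Since the Burgers’ equation itself is nonlinear, these bounds are only a guideline; with the step lengths ∆x = 0.01 and ∆t = 0.0001 used here, the diffusive ratio νΔt/(Δx)² ranges from 0.02 (for ν = 0.02) to 0.5 (for ν = 0.5), i.e., it remains within the first bound.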

6. Conclusions

In solving nonlinear parabolic differential equations of the Burgers’ type, we compared our numerical results obtained using the EFDM and PINN with the analytical solutions reported in the literature. To the best of our knowledge, this work is the first to compare the accuracy of the EFDM and PINN for solving the Burgers’ equation with three different initial conditions and Dirichlet boundary conditions. We demonstrated that, although both approaches yield very good agreement with the analytical solutions, the EFD scheme with sufficiently fine step lengths ∆x and ∆t attains higher accuracy than PINN. Since all the numerical results obtained by both methods showed reasonably good agreement with the analytical solutions, the two methods are competitive and worth recommending. Current and future developers of models for a broad range of nonlinear physical processes may draw on the ideas of the solution methods employed in this study to further develop numerical models for this class of nonlinear partial differential equations. The presented results are important when modeling various nonlinear physical processes with the Burgers’ equation, including those which arise in gas dynamics, traffic flow, chromatography, and flood waves in rivers.

Author Contributions

Conceptualization, S.S. and M.I.; methodology, S.S.; software, S.S. and M.I.; validation, S.S., M.I. and R.M.; formal analysis, R.M.; investigation, M.I.; resources, R.M.; data curation, M.I.; writing—original draft preparation, S.S. and M.I.; writing—review and editing, S.S. and M.I.; visualization, R.M.; supervision, S.S.; project administration, R.M.; funding acquisition, R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Serbian Ministry of Science, Technological Development and Innovations (Agreement No. 451-03-47/2023-01/200122), by a grant from the Science Fund of the Republic of Serbia (Agreement No. CTPCF-6379382), by the National Natural Science Foundation of China (62111530238, 62003046), by the Guangdong Basic and Applied Basic Research Foundation (2021A1515011997), by the Special Project in the Key Field of the Guangdong Provincial Department of Education (2021ZDZX1050), and by the Innovation Team Project of the Guangdong Provincial Department of Education (2021KCXTD014).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Korshunova, A.A.; Rozanova, O.S. The Riemann Problem for the Stochastically Perturbed Non-Viscous Burgers Equation and the Pressureless Gas Dynamics Model. In Proceedings of the International Conference Days on Diffraction 2009, St. Petersburg, Russia, 26–29 May 2009.
2. Hills, R.G. Model Validation: Model Parameter and Measurement Uncertainty. J. Heat Transf. 2005, 128, 339–351.
3. Sugimoto, N.; Kakutani, T. ‘Generalized Burgers’ Equation’ for Nonlinear Viscoelastic Waves. Wave Motion 1985, 7, 447–458.
4. Yonti Madie, C.; Kamga Togue, F.; Woafo, P. Numerical Solution of the Burgers’ Equation Associated with the Phenomena of Longitudinal Dispersion Depending on Time. Heliyon 2022, 8, e09776.
5. Rodin, E.Y. On Some Approximate and Exact Solutions of Boundary Value Problems for Burgers’ Equation. J. Math. Anal. Appl. 1970, 30, 401–414.
6. Benton, E.R.; Platzman, G.W. A Table of Solutions of the One-Dimensional Burgers Equation. Q. Appl. Math. 1972, 30, 195–212.
7. Wolf, K.B.; Hlavatý, L.; Steinberg, S. Nonlinear Differential Equations as Invariants under Group Action on Coset Bundles: Burgers and Korteweg-de Vries Equation Families. J. Math. Anal. Appl. 1986, 114, 340–359.
8. Nerney, S.; Schmahl, E.J.; Musielak, Z.E. Limits to Extensions of Burgers’ Equation. Q. Appl. Math. 1996, 54, 385–393.
9. Kudryavtsev, A.G.; Sapozhnikov, O.A. Determination of the Exact Solutions to the Inhomogeneous Burgers Equation with the Use of the Darboux Transformation. Acoust. Phys. 2011, 57, 311–319.
10. Kutluay, S.; Bahadir, A.R.; Özdeş, A. Numerical Solution of One-Dimensional Burgers Equation: Explicit and Exact-Explicit Finite Difference Methods. J. Comput. Appl. Math. 1999, 103, 251–261.
11. Hassanien, I.A.; Salama, A.A.; Hosham, H.A. Fourth-Order Finite Difference Method for Solving Burgers’ Equation. Appl. Math. Comput. 2005, 170, 781–800.
12. Bahadır, A.R. A Fully Implicit Finite-Difference Scheme for Two-Dimensional Burgers’ Equations. Appl. Math. Comput. 2003, 137, 131–137.
13. Kadalbajoo, M.K.; Sharma, K.K.; Awasthi, A. A Parameter-Uniform Implicit Difference Scheme for Solving Time-Dependent Burgers’ Equations. Appl. Math. Comput. 2005, 170, 1365–1393.
14. Mukundan, V.; Awasthi, A. Linearized Implicit Numerical Method for Burgers’ Equation. Nonlinear Eng. 2016, 5, 219–234.
15. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations. J. Comput. Phys. 2019, 378, 686–707.
16. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-Driven Solutions of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10561.
17. Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A Deep Learning Library for Solving Differential Equations. SIAM Rev. 2021, 63, 208–228.
18. Markidis, S. The Old and the New: Can Physics-Informed Deep-Learning Replace Traditional Linear Solvers? Front. Big Data 2021, 4, 669097.
19. Cole, J.D. On a Quasi-Linear Parabolic Equation Occurring in Aerodynamics. Q. Appl. Math. 1951, 9, 225–236.
20. Wood, W.L. An Exact Solution for Burger’s Equation. Commun. Numer. Methods Eng. 2006, 22, 797–798.
21. Winnicki, I.; Jasinski, J.; Pietrek, S. New Approach to the Lax-Wendroff Modified Differential Equation for Linear and Nonlinear Advection. Numer. Methods Partial Differ. Equ. 2019, 35, 2275–2304.
22. Savović, S.; Drljača, B.; Djordjevich, A. A Comparative Study of Two Different Finite Difference Methods for Solving Advection–Diffusion Reaction Equation for Modeling Exponential Traveling Wave. Ric. Mat. 2022, 71, 245–252.
23. Yang, X.; Ralescu, D.A. A Dufort–Frankel Scheme for One-Dimensional Uncertain Heat Equation. Math. Comput. Simul. 2021, 181, 98–112.
24. Nagy, Á.; Omle, I.; Kareem, H.; Kovács, E.; Barna, I.F.; Bognar, G. Stable, Explicit, Leapfrog-Hopscotch Algorithms for the Diffusion Equation. Computation 2021, 9, 92.
25. Bakodah, H.O.; Al-Zaid, N.A.; Mirzazadeh, M.; Zhou, Q. Decomposition Method for Solving Burgers’ Equation with Dirichlet and Neumann Boundary Conditions. Optik 2017, 130, 1339–1346.
Figure 1. The architecture of a PINN and the standard training loop of a PINN constructed for solving a simple partial differential equation, where PDE and Cond denote governing equations, while R and I represent their residuals. The approximator network is subjected to a training process and provides an approximate solution. The residual network is a non-trainable part of PINN capable of computing derivatives of the approximator network outputs with respect to the inputs, resulting in the composite loss function, denoted by MSE.
Figure 2. EFD and PINN solutions (open symbols) compared to analytical solutions (solid lines) of Test problem 1 at different times T = 0.02, 0.05, and 0.1 for ν = 0.5.
Figure 3. EFD and PINN solutions (open symbols) compared to analytical solutions (solid lines) of Test problem 1 at different times T = 0.5, 0.7, and 0.9 for ν = 0.05.
Figure 4. (a) EFD and (b) PINN solutions of Test problem 1 in 3D at different times for ν = 0.05.
Figure 5. EFD and PINN solutions (open symbols) compared to analytical solutions (solid lines) of Test problem 2 at different times T = 0.05, 0.25, and 0.5 for ν = 0.5.
Figure 6. EFD and PINN solutions (open symbols) compared to analytical solutions (solid lines) of Test problem 2 at different times T = 0.3, 0.5, and 0.7 for ν = 0.1.
Figure 7. (a) EFD and (b) PINN solutions of Test problem 2 in 3D at different times for ν = 0.5.
Figure 8. EFD and PINN solutions (open symbols) compared to analytical solutions (solid lines) of Test problem 3 at different times T = 0.2, 0.4, and 0.8 for ν = 0.5.
Figure 9. EFD and PINN solutions (open symbols) compared to analytical solutions (solid lines) of Test problem 3 at different times T = 0.5, 1, and 2 for ν = 0.02.
Figure 10. (a) EFD and (b) PINN solutions of Test problem 3 in 3D at different times for ν = 0.02.
Table 1. The accuracy of EFDM and PINN for different kinematic viscosity coefficients ν.

ν       T      Error (EFDM)    Error (PINN)
0.5     0.02   5.14 × 10⁻⁷     2.56 × 10⁻⁵
0.5     0.05   5.07 × 10⁻⁷     4.96 × 10⁻⁵
0.5     0.1    5.43 × 10⁻⁵     9.51 × 10⁻⁵
0.05    0.5    4.43 × 10⁻⁷     7.09 × 10⁻⁶
0.05    0.7    2.38 × 10⁻⁷     1.46 × 10⁻⁶
0.05    0.9    7.03 × 10⁻⁸     1.02 × 10⁻⁶
Table 2. The accuracy of EFDM and PINN for different kinematic viscosity coefficients ν.

ν       T      Error (EFDM)    Error (PINN)
0.5     0.05   5.36 × 10⁻⁸     2.16 × 10⁻⁴
0.5     0.25   2.37 × 10⁻⁷     2.27 × 10⁻⁶
0.5     0.5    1.14 × 10⁻⁷     1.57 × 10⁻⁴
0.1     0.3    3.80 × 10⁻⁹     9.09 × 10⁻⁷
0.1     0.5    6.19 × 10⁻⁷     1.65 × 10⁻⁴
0.1     0.7    4.34 × 10⁻⁷     4.79 × 10⁻⁵
Table 3. The accuracy of EFDM and PINN for different kinematic viscosity coefficients ν.

ν       T      Error (EFDM)    Error (PINN)
0.5     0.2    6.05 × 10⁻⁵     9.72 × 10⁻⁴
0.5     0.4    6.07 × 10⁻⁵     7.56 × 10⁻⁴
0.5     0.8    1.24 × 10⁻⁵     2.32 × 10⁻⁴
0.02    0.5    3.85 × 10⁻⁶     2.15 × 10⁻⁵
0.02    1      7.45 × 10⁻⁶     2.33 × 10⁻⁵
0.02    2      1.12 × 10⁻⁵     3.27 × 10⁻⁴