Article

# Space–Time Spectral Collocation Method for Solving Burgers Equations with the Convergence Analysis

by Yu Huang 1,2, Hojjat Ahsani Tehrani 2, Svetlin Georgiev Georgiev 3, Emran Tohidi 4 and Stanford Shateyi 5,*
1 College of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Faculty of Mathematical Sciences, Shahrood University of Technology, Shahrood 3619995161, Iran
3 Department of Applied Mathematics, Sorbonne University, 75005 Paris, France
4 Department of Mathematics, Kosar University of Bojnord, Bojnord P. O. Box 9415615458, Iran
5 Department of Mathematics and Applied Mathematics, University of Venda, P. Bag X5050, Thohoyandu 0950, South Africa
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(12), 1439; https://doi.org/10.3390/sym11121439
Submission received: 27 August 2019 / Revised: 3 October 2019 / Accepted: 4 October 2019 / Published: 22 November 2019

## Abstract

This article deals with a numerical approach based on the symmetric space–time Chebyshev spectral collocation method for solving different types of Burgers equations with Dirichlet boundary conditions. In this method, the variables of the equation are first approximated by interpolating polynomials and then discretized at the Chebyshev–Gauss–Lobatto points. This yields a system of algebraic equations whose solution is the set of unknown coefficients of the approximate solution of the main problem. We investigate the convergence of the suggested numerical scheme and compare the proposed method with several recent approaches by examining some test problems.

## 1. Introduction

Many phenomena in physics, biology and engineering can be modelled mathematically by partial differential equations (PDEs). The Burgers equation is one of the most important PDEs and has been studied extensively in recent years by many researchers [1,2]. This equation describes various kinds of phenomena in plasma physics, solid state physics, optical fibers, fluid dynamics, chemical kinetics, non-linear acoustics, gas dynamics, traffic flow, etc.
Also, the generalized Burgers–Fisher equation is one of the most important classes of non-linear PDEs, appearing in several categories of applications such as shock-wave formation, turbulence, heat conduction, sound waves in a viscous medium, and other applied branches of science and engineering [3]. Moreover, the Burgers–Huxley equation is an evolution equation that describes nerve pulse propagation in biological systems, from which molecular properties can be computed. The generalized Burgers–Huxley equation was investigated to describe the interaction between reaction mechanisms, convection effects, and diffusion transport [4].
Since an analytical closed-form solution is generally unavailable for non-linear PDEs, numerical methods are widely used to solve them. There are several effective numerical methods for PDEs, especially for the Burgers equation. In [5], a comprehensive review of some techniques is presented. Berger and Kohn in [6] used the mesh refinement method. Budd et al. in [7] applied mesh movement. Soheili et al. used a moving-mesh PDE (MMPDE) approach [8]. In [9], Ramadan et al. suggested a method based on collocation of septic B-splines over finite elements for numerical solutions of the non-linear Burgers equation. Ramadan and El-Danaf considered the solution of the modified Burgers equation using the collocation method with quintic splines [10]. Haq et al. in [11] formulated a simple classical radial basis function (RBF) collocation method for the numerical solution of the non-linear dispersive and dissipative Burgers equation. Both Oruç et al. in [12] and Lepik in [13] investigated numerical solutions using Haar wavelets. Inan and Bahadir described implicit and fully implicit exponential finite difference methods [14]. Saka and Dağ used a quintic B-spline collocation procedure [15]. Irk in [16] used a sextic B-spline collocation technique. In [17], Demiray suggested the hyperbolic tangent method and presented a travelling wave solution for the perturbed Burgers equation. Hon and Mao in [18] applied the multiquadric (MQ) as a spatial approximation scheme. Schulze-Halberg discussed a linearization method for solving the Burgers equation with time-dependent coefficients and a non-linear forcing term [19]. In [20], Seydaoglu presented high-order splitting methods. Guo et al. proposed a high-order finite-volume compact scheme [21]. In [22], Mukundan and Awasthi used new efficient numerical techniques for solving the one-dimensional quasi-linear Burgers equation.
Hammad and El-Azab solved the Burgers–Huxley and Burgers–Fisher equations in [23], with discretization in time by a new linear approximation scheme and in space by a high-order compact finite difference method. In [24], Arora and Kumar used a new numerical method entitled the "modified cubic B-spline differential quadrature method (MCB-DQM)" to find approximate solutions of the Burgers equations. El-Wakil et al. presented the Burgers equation and some other PDEs with self-similar solutions [25]. Singh et al. in [26] approximated numerical solutions for the generalized Burgers–Huxley (gBH) equation using the MCB-DQM. The scheme was based on the differential quadrature method, in which the weighting coefficients were computed using modified cubic B-splines as a set of basis functions.
The spectral collocation (SC) method is one of the most important methods for solving continuous-time problems, including systems of ODEs and PDEs, in various fields of science and engineering. In this method, an interpolating polynomial is applied to approximate the unknown function. In fact, the unknown function in the problem can be expressed in terms of its approximate values at special nodes. This method yields good results in comparison with other methods, since it uses orthogonal polynomials, for instance Legendre and Chebyshev polynomials.
Weinan in [27] analyzed numerical methods for some evolutionary equations which admit semigroup formulations and showed the spectral accuracy of the spectral and pseudospectral methods for the Burgers equation. Xiao et al. used the non-linear Petrov–Galerkin method to reduce the order of the Navier–Stokes equations and improved the stability of reduced-order model (ROM) results without tuning parameters; their numerical results also show that the proposed POD Petrov–Galerkin method gives more accurate and stable results than the corresponding POD Bubnov–Galerkin method [28]. In [29], a generalized Langevin equation is investigated and it is shown that the form of its coefficients depends critically on the assumption of continuity of the reconstructed trajectory. In [30], the authors use a 4-bit lattice Boltzmann model to solve the two-dimensional unsteady Burgers equation. In [31], numerical solutions of the 2D Burgers equation are computed using higher-order accurate finite difference schemes; more precisely, the author used the fourth-order accurate Du Fort–Frankel scheme.
In this paper, we apply the symmetric space-time Chebyshev SC (CSC) method to solve Burgers equations and compare the associated results with those of the aforementioned well-known methods. Through numerical examples, we show that the CSC method is more effective than other methods and achieves more precise results for the solution of Burgers equations. In fact, we see that both the number of discretization points (i.e., the CGL points in the CSC method) and the error of the CSC method are smaller than those of the other methods used to solve Burgers equations.
This paper is organized as follows. In Section 2, the Burgers equation and its different types are introduced. The CSC method for the Burgers equation is implemented in Section 3. In Section 4, the convergence of the CSC method is analyzed. Next, the two-dimensional Burgers equation is introduced in Section 5 and the CSC method is applied to solve it. Then, Section 6 contains numerical examples for the Burgers equations in both the one- and two-dimensional cases; figures of the errors (associated with the proposed method) are also depicted in some cases, confirming the efficiency of our suggested numerical scheme. Finally, conclusions are drawn.

## 2. Burgers Equation

Three important types of Burgers equations are as follows:
• For a field $V(\cdot,\cdot)$ and diffusion coefficient (or viscosity, as in the original fluid-mechanical context) $\varepsilon$, the general form of the viscous Burgers equation is as follows
$V_t + V V_x - \varepsilon V_{xx} = 0, \quad a \le x \le b, \ t > 0. \qquad (1)$
• The Burgers–Fisher equation is a non-linear PDE of second order of the form
$V_t - V_{xx} + \alpha(t)\, V V_x = \beta(t)\, V (1 - V), \quad a \le x \le b, \qquad (2)$
where $α ( . )$ and $β ( . )$ are given functions. It plays an important role in various fields of gas dynamics, traffic flow, physics applications, financial and applied mathematics [23].
• The Burgers–Huxley equation is as follows
$V_t + \alpha V^{\delta} V_x - \beta V_{xx} = \gamma V (1 - V^{\delta})(V^{\delta} - \eta), \quad 0 \le x \le 1, \qquad (3)$
where $\alpha, \beta, \delta, \gamma$ and $\eta$ are given constants. The Burgers–Huxley equation is a non-linear PDE describing the interaction between reaction mechanisms, convection effects, and diffusion transport [23].
In this paper, we represent the Burgers Equations (1)–(3) as the following general form
$V_t = \Lambda(t, V, V_x, V_{xx}). \qquad (4)$
The initial and boundary conditions (in Dirichlet form) for the Burgers Equation (4) are usually given as follows
$V(T_0, x) = f(x), \quad a \le x \le b, \qquad (5)$
$V(t, a) = g_1(t), \quad t \in [T_0, T_1], \qquad (6)$
$V(t, b) = g_2(t), \quad t \in [T_0, T_1]. \qquad (7)$

## 3. Implementing the CSC Method

Here, the CSC method for the Burgers Equation (4) with conditions (5)–(7) is presented. The CSC method [32,33,34] is one of the most efficient numerical methods for continuous-time problems, and some researchers have recently applied it to special problems (see [35,36,37]). One of its most important advantages in comparison with other approximate methods is the high degree of accuracy that CSC approximations offer. Moreover, the underlying polynomial space of the CSC method is spanned by the Chebyshev polynomials, which are orthogonal on the interval $[-1, 1]$ with respect to the weight function $w(t) = \frac{1}{\sqrt{1 - t^2}}$. In the CSC method, we use nodes in the interval $[-1, 1]$ to discretize the problem. Hence, we first transform the variables of Equation (4) to this interval by the following relations
$t = \frac{T_1 - T_0}{2}\,\bar{t} + \frac{T_1 + T_0}{2}, \quad t \in [T_0, T_1], \ \bar{t} \in [-1, 1], \qquad x = \frac{b - a}{2}\,\bar{x} + \frac{b + a}{2}, \quad x \in [a, b], \ \bar{x} \in [-1, 1]. \qquad (8)$
Therefore, the Burgers problem (4)–(7) is converted into the following form
$U_{\bar{t}} = \psi\big(\bar{t}, U(\bar{t}, \bar{x}), U_{\bar{x}}(\bar{t}, \bar{x}), U_{\bar{x}\bar{x}}(\bar{t}, \bar{x})\big), \quad U(-1, \bar{x}) = F(\bar{x}), \quad U(\bar{t}, -1) = G_1(\bar{t}), \quad U(\bar{t}, 1) = G_2(\bar{t}), \qquad (9)$
where
$\psi\big(\bar{t}, U, U_{\bar{x}}, U_{\bar{x}\bar{x}}\big) = \frac{T_1 - T_0}{2}\, \Lambda\Big(\frac{T_1 - T_0}{2}\bar{t} + \frac{T_1 + T_0}{2},\; U,\; \frac{2}{b - a}\, U_{\bar{x}},\; \frac{4}{(b - a)^2}\, U_{\bar{x}\bar{x}}\Big),$
$U(\bar{t}, \bar{x}) = V\Big(\frac{T_1 - T_0}{2}\bar{t} + \frac{T_1 + T_0}{2},\; \frac{b - a}{2}\bar{x} + \frac{b + a}{2}\Big),$
$U_{\bar{x}}(\bar{t}, \bar{x}) = \frac{b - a}{2}\, V_x\Big(\frac{T_1 - T_0}{2}\bar{t} + \frac{T_1 + T_0}{2},\; \frac{b - a}{2}\bar{x} + \frac{b + a}{2}\Big),$
$U_{\bar{x}\bar{x}}(\bar{t}, \bar{x}) = \Big(\frac{b - a}{2}\Big)^2 V_{xx}\Big(\frac{T_1 - T_0}{2}\bar{t} + \frac{T_1 + T_0}{2},\; \frac{b - a}{2}\bar{x} + \frac{b + a}{2}\Big),$
$F(\bar{x}) = f\Big(\frac{b - a}{2}\bar{x} + \frac{b + a}{2}\Big), \quad G_1(\bar{t}) = g_1\Big(\frac{T_1 - T_0}{2}\bar{t} + \frac{T_1 + T_0}{2}\Big), \quad G_2(\bar{t}) = g_2\Big(\frac{T_1 - T_0}{2}\bar{t} + \frac{T_1 + T_0}{2}\Big).$
To discretize system (9), we use the CGL points on $[ − 1 , 1 ]$ which are defined by the following relations
$\bar{x}_k = \bar{t}_k = \cos\Big(\frac{N - k}{N}\pi\Big), \quad k = 0, 1, \ldots, N,$
where they are the roots of $(1 - \bar{t}^2)\,\frac{dT_N}{d\bar{t}}$, and $T_N(\cdot)$ is the Chebyshev polynomial of degree $N$. It should be noted that the Chebyshev polynomials are expressed by
$T_j(\bar{t}) = \cos\big(j \cos^{-1}(\bar{t})\big), \quad \bar{t} \in [-1, 1], \ j = 0, 1, \ldots, N,$
and it is easy to show that
$T_0(\bar{t}) = 1, \quad T_1(\bar{t}) = \bar{t}, \quad T_{j+1}(\bar{t}) = 2\bar{t}\, T_j(\bar{t}) - T_{j-1}(\bar{t}), \quad j = 1, 2, \ldots.$
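As a quick numerical illustration (in Python, used here in place of the paper's MATLAB setting), the sketch below checks that the three-term recurrence reproduces the closed form $T_j(\bar{t}) = \cos(j \cos^{-1} \bar{t})$, and that $|T_N|$ attains its extremal value $1$ at the CGL points:

```python
import numpy as np

N = 8
tbar = np.linspace(-1, 1, 201)

# three-term recurrence: T_{j+1} = 2 t T_j - T_{j-1}
T = [np.ones_like(tbar), tbar.copy()]
for j in range(1, N):
    T.append(2 * tbar * T[j] - T[j - 1])

# the closed form T_j(t) = cos(j arccos t) agrees with the recurrence
for j in range(N + 1):
    assert np.allclose(T[j], np.cos(j * np.arccos(tbar)))

# CGL points cos((N-k)pi/N): |T_N| attains its extremal value 1 there
cgl = np.cos((N - np.arange(N + 1)) * np.pi / N)
assert np.allclose(np.abs(np.cos(N * np.arccos(cgl))), 1.0)
print(cgl)
```

Note that with this indexing the nodes are ascending, with $\bar{t}_0 = -1$ and $\bar{t}_N = 1$.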
For interpolating in the CSC method, the following Lagrange polynomials are used
$L_k(\bar{t}) = \prod_{\substack{j = 0 \\ j \ne k}}^{N} \frac{\bar{t} - \bar{t}_j}{\bar{t}_k - \bar{t}_j} = \frac{2}{N \mu_k} \sum_{j=0}^{N} \frac{1}{\mu_j}\, T_j(\bar{t}_k)\, T_j(\bar{t}), \quad k = 0, 1, \ldots, N, \ \bar{t} \in [-1, 1],$
where
$\mu_j = \begin{cases} 2, & \text{if } j = 0, N, \\ 1, & \text{if } 1 \le j \le N - 1, \end{cases}$
and we have
$L_j(\bar{t}_k) = \delta_{jk} = \begin{cases} 1, & \text{if } j = k, \\ 0, & \text{if } j \ne k. \end{cases} \qquad (12)$
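The two representations of the cardinal polynomial $L_k$ (the product form and the Chebyshev-series form), together with the property $L_j(\bar{t}_k) = \delta_{jk}$, can be verified numerically. The following Python sketch (an illustration, not the paper's code) does so for a small $N$:

```python
import numpy as np

N = 6
tk = np.cos((N - np.arange(N + 1)) * np.pi / N)   # CGL nodes, ascending
mu = np.ones(N + 1); mu[0] = mu[N] = 2.0

def T(j, t):
    # Chebyshev polynomial via the closed form cos(j arccos t)
    return np.cos(j * np.arccos(np.clip(t, -1, 1)))

def L_prod(k, t):
    # product form of the Lagrange cardinal polynomial
    out = np.ones_like(t)
    for j in range(N + 1):
        if j != k:
            out *= (t - tk[j]) / (tk[k] - tk[j])
    return out

def L_cheb(k, t):
    # Chebyshev-series form from the text
    return (2.0 / (N * mu[k])) * sum(T(j, tk[k]) * T(j, t) / mu[j] for j in range(N + 1))

t = np.linspace(-1, 1, 101)
for k in range(N + 1):
    assert np.allclose(L_prod(k, t), L_cheb(k, t))       # the two forms agree
    assert np.allclose(L_prod(k, tk), np.eye(N + 1)[k])  # cardinal property L_j(t_k) = delta_jk
print("Lagrange identities verified")
```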
In the CSC method, to approximate the solution of Burgers Equation (9), we use the following polynomial interpolation
$U(\bar{t}, \bar{x}) \approx U^N(\bar{t}, \bar{x}) = \sum_{i=0}^{N} \sum_{j=0}^{N} \bar{a}_{ij}^N\, L_i(\bar{t})\, L_j(\bar{x}). \qquad (13)$
By (12), we have
$U^N(\bar{t}_i, \bar{x}_j) = \bar{a}_{ij}^N. \qquad (14)$
To express the derivatives $U_{\bar{t}}^N(\cdot,\cdot)$, $U_{\bar{x}}^N(\cdot,\cdot)$ and $U_{\bar{x}\bar{x}}^N(\cdot,\cdot)$ in terms of $U^N(\cdot,\cdot)$ at the node points, we can use multiplication by the differentiation matrix $D = (D_{kj})$ and get
$U_{\bar{t}}^N(\bar{t}_p, \bar{x}_k) = \sum_{i=0}^{N} \bar{a}_{ik}^N D_{pi}, \quad U_{\bar{x}}^N(\bar{t}_p, \bar{x}_k) = \sum_{j=0}^{N} \bar{a}_{pj}^N D_{kj}, \quad U_{\bar{x}\bar{x}}^N(\bar{t}_p, \bar{x}_k) = \sum_{j=0}^{N} \bar{a}_{pj}^N \hat{D}_{kj}, \quad k, p = 0, 1, \ldots, N, \qquad (15)$
where
$D_{kj} = L_j'(\bar{t}_k) = \begin{cases} \dfrac{\mu_k}{\mu_j} \dfrac{(-1)^{k+j}}{\bar{t}_k - \bar{t}_j}, & \text{if } j \ne k, \\[4pt] -\dfrac{\bar{t}_k}{2(1 - \bar{t}_k^2)}, & \text{if } 0 < j = k < N, \\[4pt] -\dfrac{2N^2 + 1}{6}, & \text{if } j = k = 0, \\[4pt] \dfrac{2N^2 + 1}{6}, & \text{if } j = k = N, \end{cases} \qquad (16)$
$\hat{D} = D \cdot D = (\hat{D}_{kj})$, where $\hat{D}_{kj} = \sum_{l=0}^{N} D_{kl} D_{lj}$, $k, j = 0, 1, \ldots, N$. In fact, multiplication by the matrix $D$ maps the vector of state values at the CGL points to the vector of approximate derivatives at these points.
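The entries of $D$ can be tabulated directly and sanity-checked: since differentiation by $D$ is exact for polynomials of degree at most $N$, applying $D$ to the nodal values of a low-degree polynomial must reproduce its derivative to roundoff. The following Python sketch (illustrative; it uses the ascending node ordering $\bar{t}_k = \cos(\frac{N-k}{N}\pi)$ from the text) builds $D$ from the explicit formulas:

```python
import numpy as np

N = 12
t = np.cos((N - np.arange(N + 1)) * np.pi / N)  # ascending CGL nodes

mu = np.ones(N + 1); mu[0] = mu[N] = 2.0
# explicit entries of D = (L_j'(t_k)) for the CGL nodes
D = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    for j in range(N + 1):
        if k != j:
            D[k, j] = (mu[k] / mu[j]) * (-1) ** (k + j) / (t[k] - t[j])
for k in range(1, N):                            # interior diagonal entries
    D[k, k] = -t[k] / (2.0 * (1.0 - t[k] ** 2))
D[0, 0] = -(2 * N ** 2 + 1) / 6.0                # corner entries (t_0 = -1, t_N = 1)
D[N, N] = (2 * N ** 2 + 1) / 6.0

# sanity check: D differentiates any polynomial of degree <= N exactly
p = t ** 5 - 3 * t ** 2 + 1
dp = 5 * t ** 4 - 6 * t
print(np.abs(D @ p - dp).max())
```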
Now, by relations (14) and (15), the system (9) can be converted into the following system of algebraic equations
$\sum_{i=0}^{N} \bar{a}_{ik}^N D_{pi} - \psi\Big(\bar{t}_p,\, \bar{a}_{pk}^N,\, \sum_{j=0}^{N} \bar{a}_{pj}^N D_{kj},\, \sum_{j=0}^{N} \bar{a}_{pj}^N \hat{D}_{kj}\Big) = 0,$
$\bar{a}_{0k}^N = F(\bar{x}_k), \quad k = 0, 1, \ldots, N,$
$\bar{a}_{p0}^N = G_1(\bar{t}_p), \quad \bar{a}_{pN}^N = G_2(\bar{t}_p), \quad p = 1, \ldots, N. \qquad (17)$
By solving the above system for $(\bar{a}_{pk}^N;\ p, k = 0, 1, \ldots, N)$, we obtain the continuous approximate solution (13) and the pointwise approximate values (14), respectively.
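To make the assembly of system (17) concrete, the following self-contained Python sketch (a stand-in for the MATLAB/FSOLVE computations reported later in the paper) solves the viscous Burgers Equation (1) with $\varepsilon = 0.5$ on $[0, 1] \times [0, 1]$, using the exact solution $U(t, x) = x/(1 + t)$, which satisfies (1) because $U_{xx} = 0$, to supply the initial and boundary data. The differentiation matrix is built here from barycentric weights, an equivalent alternative to the explicit formula (16); the grid size $N = 10$ and the test solution are choices made for this illustration only:

```python
import numpy as np
from scipy.optimize import fsolve

N = 10
# Chebyshev-Gauss-Lobatto nodes on [-1, 1], ascending
s = np.cos((N - np.arange(N + 1)) * np.pi / N)

def diff_matrix(nodes):
    """First-derivative collocation matrix via barycentric weights."""
    n = len(nodes)
    w = np.array([1.0 / np.prod(nodes[j] - np.delete(nodes, j)) for j in range(n)])
    D = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            if j != k:
                D[k, j] = (w[j] / w[k]) / (nodes[k] - nodes[j])
        D[k, k] = -D[k].sum()   # rows of D annihilate constants
    return D

# map [-1, 1] to t in [0, 1] and x in [0, 1]; chain rule scales D by 2/(interval length)
t = 0.5 * (s + 1.0); x = 0.5 * (s + 1.0)
Dt = 2.0 * diff_matrix(s)       # d/dt
Dx = 2.0 * diff_matrix(s)       # d/dx
Dxx = Dx @ Dx
eps = 0.5

def residual(a_flat):
    a = a_flat.reshape(N + 1, N + 1)     # a[p, k] ~ U(t_p, x_k)
    Ut = Dt @ a                          # time derivative at all nodes
    Ux = a @ Dx.T                        # space derivative
    Uxx = a @ Dxx.T
    R = Ut + a * Ux - eps * Uxx          # Burgers residual U_t + U U_x - eps U_xx
    R[0, :] = a[0, :] - x                # initial condition U(0, x) = x
    R[:, 0] = a[:, 0]                    # boundary U(t, 0) = 0
    R[:, -1] = a[:, -1] - 1.0 / (1.0 + t)  # boundary U(t, 1) = 1/(1+t)
    return R.ravel()

a0 = np.tile(x, (N + 1, 1)).ravel()      # initial guess: initial profile at every time level
sol = fsolve(residual, a0).reshape(N + 1, N + 1)

exact = x[None, :] / (1.0 + t[:, None])  # U(t, x) = x/(1+t) solves Burgers exactly
err = np.abs(sol - exact).max()
print(err)
```

Since the $t$-dependence $1/(1 + t)$ is smooth, the maximum nodal error decays spectrally in $N$ in this experiment.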

## 4. The Convergence of the Method

In this section, we first give the definition of the modulus of continuity and then analyze the convergence of the presented method.
Assume that $\bar{\Omega} = [-1, 1] \times [-1, 1]$. By $C^r(\bar{\Omega})$ we denote the space of continuous functions with continuous derivatives up to order $r$.
Definition 1.
A function $W : \mathbb{R}^+ \to \mathbb{R}^+$ with the following properties is called a modulus of continuity [38]:
1. $W$ is increasing,
2. $\lim_{z \to 0} W(z) = 0$,
3. for any $z_1, z_2 \in \mathbb{R}^+$, $W(z_1 + z_2) \le W(z_1) + W(z_2)$,
4. there exists a constant $c$ such that $c\, W(z) \ge z$ for all $0 < z \le 2$.
An important family of moduli of continuity is given by
$W(z) = z^{\alpha}, \quad 0 < \alpha \le 1.$
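For this family, the defining properties are easy to check numerically. In the sketch below (illustrative Python), subadditivity holds and the constant $c = 2^{1-\alpha}$ realizes the last property on $(0, 2]$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5
W = lambda z: z ** alpha

# subadditivity: W(z1 + z2) <= W(z1) + W(z2) for z^alpha with 0 < alpha <= 1
z1, z2 = rng.uniform(0, 1, 1000), rng.uniform(0, 1, 1000)
assert np.all(W(z1 + z2) <= W(z1) + W(z2) + 1e-12)

# c * W(z) >= z on (0, 2] with c = 2**(1 - alpha), since z**(1-alpha) <= 2**(1-alpha)
z = np.linspace(1e-6, 2, 1000)
c = 2 ** (1 - alpha)
assert np.all(c * W(z) >= z - 1e-12)
print("W(z) = z**alpha is a modulus of continuity")
```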
Now, assume that $B_2$ is the closed unit ball in $\mathbb{R}^2$. We say that a continuous function $f(\cdot,\cdot)$ on $\bar{\Omega}$ admits $W(\cdot)$ as a modulus of continuity if the following value is finite
$|f(\cdot,\cdot)|_W = \sup\Big\{ \frac{|f(\bar{t}, \bar{x}) - f(\tilde{t}, \tilde{x})|}{W\big(\|(\bar{t}, \bar{x}) - (\tilde{t}, \tilde{x})\|_\infty\big)} : (\bar{t}, \bar{x}), (\tilde{t}, \tilde{x}) \in \bar{\Omega}, \ (\bar{t}, \bar{x}) \ne (\tilde{t}, \tilde{x}) \Big\},$
where
$\|(\bar{t}, \bar{x}) - (\tilde{t}, \tilde{x})\|_\infty = \max\{ |\bar{t} - \tilde{t}|, |\bar{x} - \tilde{x}| \}.$
By $C_W^1(B_2)$ we denote the space of all functions $f(\cdot,\cdot)$ on $B_2$ with continuous first-order partial derivatives, endowed with the following norm
$\|f(\cdot,\cdot)\|_{1,W} = \|f(\cdot,\cdot)\|_\infty + \|f_{\bar{t}}(\cdot,\cdot)\|_\infty + \|f_{\bar{x}}(\cdot,\cdot)\|_\infty + |f_{\bar{t}}(\cdot,\cdot)|_W + |f_{\bar{x}}(\cdot,\cdot)|_W.$
Next, define
$C_W^1(\bar{\Omega}) = \big\{ f(\cdot,\cdot) \in C^1(\bar{\Omega}) : \forall (\tilde{t}, \tilde{x}) \in \bar{\Omega}, \ \exists \text{ a map } \phi : B_2 \to \bar{\Omega} \ \text{s.t.} \ (\tilde{t}, \tilde{x}) \in \mathrm{int}(\phi(B_2)) \ \text{and} \ f \circ \phi(\cdot,\cdot) \in C_W^1(B_2) \big\}.$
It can be proved that if $\bar{\Omega} = \bigcup_{i=1}^{l} \mathrm{int}(\phi_i(B_2))$ for some $\phi_1, \ldots, \phi_l$, then $f(\cdot,\cdot) \in C_W^1(\bar{\Omega})$ if and only if $f \circ \phi_i(\cdot,\cdot) \in C_W^1(B_2)$ for each $i = 1, \ldots, l$. Moreover, $C_W^1(\bar{\Omega})$ with the norm
$\|f(\cdot,\cdot)\|_{1,W} = \sum_{i=1}^{l} \|f \circ \phi_i(\cdot,\cdot)\|_{1,W},$
is a Banach space (for more details, see [38]). In what follows, we denote the space of all polynomials of total degree at most $2N$ on $\bar{\Omega}$ by $\mathrm{Pol}(N, N, \bar{\Omega})$, i.e.,
$\mathrm{Pol}(N, N, \bar{\Omega}) = \Big\{ \eta(\tilde{t}, \tilde{x}) = \sum_{i=0}^{N} \sum_{j=0}^{N} \gamma_{ij}\, \tilde{t}^i \tilde{x}^j : (\tilde{t}, \tilde{x}) \in \bar{\Omega}, \ \gamma_{ij} \in \mathbb{R} \Big\}.$
Theorem 1.
For any $f ( · , · ) ∈ C W 1 ( Ω ¯ )$, there is a polynomial $η ( · , · ) ∈ P o l ( N , N , Ω ¯ )$ such that
$\|f(\cdot,\cdot) - \eta(\cdot,\cdot)\|_\infty \le \frac{c_0 c_1}{2N}\, W\Big(\frac{1}{2N}\Big),$
where $c_1 = \|f(\cdot,\cdot)\|_{1,W}$ and $c_0$ is a constant that depends on $W(\cdot)$ but is independent of $N$.
Proof.
The proof is a result of Theorem 2.1 in [38]. □
To prove the existence of solutions of the system (17), we relax it into the following form
$\Big| \sum_{i=0}^{N} \bar{a}_{ik}^N D_{pi} - \psi\Big(\bar{t}_p,\, \bar{a}_{pk}^N,\, \sum_{j=0}^{N} \bar{a}_{pj}^N D_{kj},\, \sum_{j=0}^{N} \bar{a}_{pj}^N \hat{D}_{kj}\Big) \Big| \le \frac{N}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad p, k = 1, 2, \ldots, N-1,$
$|\bar{a}_{p0}^N - G_1(\bar{t}_p)| \le \frac{N}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad |\bar{a}_{pN}^N - G_2(\bar{t}_p)| \le \frac{N}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad p = 0, 1, \ldots, N,$
$|\bar{a}_{0k}^N - F(\bar{x}_k)| \le \frac{N}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad k = 0, 1, \ldots, N, \qquad (25)$
where $N$ is sufficiently large and $W(\cdot)$ is a given modulus of continuity. Since $\lim_{N \to \infty} \frac{N}{2N-1} W\big(\frac{1}{2N-1}\big) = 0$, any solution $\bar{a}^N = (\bar{a}_{pk}^N;\ p, k = 0, 1, \ldots, N)$ of the system of algebraic inequalities (25) tends to a solution of the system of algebraic Equations (17) as $N$ tends to infinity.
We assume that $ψ$ has bounded and continuous derivatives with respect to its arguments. Hence, there exists a constant M such that
$\big| \psi\big(\bar{t}, U, U_{\bar{x}}, U_{\bar{x}\bar{x}}\big) - \psi\big(\bar{t}, \tilde{U}, \tilde{U}_{\bar{x}}, \tilde{U}_{\bar{x}\bar{x}}\big) \big| \le M\, |U - \tilde{U}|. \qquad (26)$
Now we will show that the system (25) has at least one solution $a ¯ N$.
Theorem 2.
Let $U(\cdot,\cdot)$ be a solution of the system (9) with $U(\cdot,\cdot) \in C_W^1(\bar{\Omega})$. Then there is a positive integer $K$ such that for any $N \ge K$, the system (25) has a solution
$\bar{a}^N = (\bar{a}_{pk}^N;\ p, k = 0, 1, \ldots, N),$
which satisfies
$|U(\bar{t}_p, \bar{x}_k) - \bar{a}_{pk}^N| \le \frac{L}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad p, k = 0, 1, \ldots, N,$
where L is a positive constant independent of N.
Proof of Theorem 2.
Assume that $\eta(\cdot,\cdot)$ in $\mathrm{Pol}(N-1, N, \bar{\Omega})$ is the best polynomial approximation of $U_{\bar{t}}(\cdot,\cdot)$. By Theorem 1, we get
$\|U_{\bar{t}}(\bar{t}, \bar{x}) - \eta(\bar{t}, \bar{x})\|_\infty \le \frac{\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad (\bar{t}, \bar{x}) \in \bar{\Omega}, \qquad (29)$
where $γ$ is a constant independent of N. We define
$\tilde{U}(\bar{t}, \bar{x}) = U(-1, \bar{x}) + \int_{-1}^{\bar{t}} \eta(\tau, \bar{x})\, d\tau, \quad (\bar{t}, \bar{x}) \in \bar{\Omega}, \qquad (30)$
and
$\bar{a}_{pk}^N = \tilde{U}(\bar{t}_p, \bar{x}_k), \quad p, k = 0, 1, \ldots, N. \qquad (31)$
We will see that $a ¯ N = ( a ¯ p k N ; p , k = 0 , 1 , … , N )$ satisfies system (25). By (29)–(31), for $( t ¯ , x ¯ ) ∈ Ω ¯$, we get
$|U(\bar{t}, \bar{x}) - \tilde{U}(\bar{t}, \bar{x})| = \Big| \int_{-1}^{\bar{t}} \big( U_{\bar{t}}(\tau, \bar{x}) - \eta(\tau, \bar{x}) \big)\, d\tau \Big| \le \int_{-1}^{\bar{t}} \big| U_{\bar{t}}(\tau, \bar{x}) - \eta(\tau, \bar{x}) \big|\, d\tau \le \frac{\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big) \int_{-1}^{\bar{t}} d\tau \le \frac{2\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big). \qquad (32)$
Now, by the relation (30), the function $\tilde{U}(\cdot, \bar{x})$, $\bar{x} \in [-1, 1]$, is a polynomial of degree at most $N$ in $\bar{t}$. Hence, its derivative values at the CGL nodes $\bar{t}_0, \bar{t}_1, \ldots, \bar{t}_N$ are obtained exactly by multiplying its nodal values by the differentiation matrix $D$ defined by (16). Thus, we have
$\sum_{i=0}^{N} \bar{a}_{ik}^N D_{pi} = \tilde{U}_{\bar{t}}(\bar{t}_p, \bar{x}_k), \quad p, k = 0, 1, \ldots, N. \qquad (33)$
Therefore, by the relations (26) and (32), we get
$\Big| \sum_{i=0}^{N} \bar{a}_{ik}^N D_{pi} - \psi\Big(\bar{t}_p,\, \bar{a}_{pk}^N,\, \sum_{j=0}^{N} \bar{a}_{pj}^N D_{kj},\, \sum_{j=0}^{N} \bar{a}_{pj}^N \hat{D}_{kj}\Big) \Big| \le \big| \tilde{U}_{\bar{t}}(\bar{t}_p, \bar{x}_k) - U_{\bar{t}}(\bar{t}_p, \bar{x}_k) \big| + \Big| U_{\bar{t}}(\bar{t}_p, \bar{x}_k) - \psi\Big(\bar{t}_p,\, \bar{a}_{pk}^N,\, \sum_{j=0}^{N} \bar{a}_{pj}^N D_{kj},\, \sum_{j=0}^{N} \bar{a}_{pj}^N \hat{D}_{kj}\Big) \Big| = \big| \eta(\bar{t}_p, \bar{x}_k) - U_{\bar{t}}(\bar{t}_p, \bar{x}_k) \big| + \Big| \psi\big(\bar{t}_p, U(\bar{t}_p, \bar{x}_k), U_{\bar{x}}(\bar{t}_p, \bar{x}_k), U_{\bar{x}\bar{x}}(\bar{t}_p, \bar{x}_k)\big) - \psi\Big(\bar{t}_p,\, \bar{a}_{pk}^N,\, \sum_{j=0}^{N} \bar{a}_{pj}^N D_{kj},\, \sum_{j=0}^{N} \bar{a}_{pj}^N \hat{D}_{kj}\Big) \Big| \le \big| \eta(\bar{t}_p, \bar{x}_k) - U_{\bar{t}}(\bar{t}_p, \bar{x}_k) \big| + M\, \big| U(\bar{t}_p, \bar{x}_k) - \bar{a}_{pk}^N \big| \le \frac{\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big) + M\, \frac{2\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big) = \frac{\gamma (2M + 1)}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad p, k = 1, \ldots, N-1, \qquad (34)$
where M is a Lipschitz constant which satisfies the relation (26). Furthermore, for boundary conditions, we get
$|\tilde{U}(-1, \bar{x}_k) - F(\bar{x}_k)| \le |\tilde{U}(-1, \bar{x}_k) - U(-1, \bar{x}_k)| + |U(-1, \bar{x}_k) - F(\bar{x}_k)| \le \frac{2\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \quad k = 0, 1, \ldots, N. \qquad (35)$
Also, for all $p = 0 , 1 , … , N$,
$|\tilde{U}(\bar{t}_p, -1) - G_1(\bar{t}_p)| = |\tilde{U}(\bar{t}_p, -1) - U(\bar{t}_p, -1)| \le \frac{2\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big), \qquad (36)$
$|\tilde{U}(\bar{t}_p, 1) - G_2(\bar{t}_p)| = |\tilde{U}(\bar{t}_p, 1) - U(\bar{t}_p, 1)| \le \frac{2\gamma}{2N-1}\, W\Big(\frac{1}{2N-1}\Big). \qquad (37)$
Hence, if we select $K$ such that
$\max\{\gamma(2M + 1),\, 2\gamma\} \le K,$
then for $N \ge K$, by (34)–(37), $\bar{a}^N$ satisfies the system (25). This completes the proof. □
Now, we will show that the sequence of the solutions for the system (25) and the sequence of their interpolating polynomials converge to the solution of the system (9).
Theorem 3.
Let $\{\bar{a}^N = (\bar{a}_{pk}^N;\ p, k = 0, 1, \ldots, N)\}_{N=K}^{\infty}$ be a sequence of solutions of the system (25) and $\{U^N(\cdot,\cdot)\}_{N=K}^{\infty}$ be the sequence of their interpolating polynomials defined by (13). Also, assume that for any $\bar{x}$ in $[-1, 1]$, the sequence $\{(U^N(-1, \bar{x}), U_{\bar{t}}^N(\cdot,\cdot))\}_{N=K}^{\infty}$ has a subsequence $\{(U^{N_i}(-1, \bar{x}), U_{\bar{t}}^{N_i}(\cdot,\cdot))\}_{i=0}^{\infty}$ that converges uniformly to $(\phi_\infty(\bar{x}), \lambda(\cdot,\cdot))$, where $\lambda(\cdot,\cdot) \in C^2(\bar{\Omega})$, $\phi_\infty(\cdot) \in C^2([-1, 1])$ and $\lim_{i \to \infty} N_i = \infty$. Then the function
$\tilde{U}(\bar{t}, \bar{x}) = \lim_{i \to \infty} U^{N_i}(\bar{t}, \bar{x}), \qquad (38)$
for $( t ¯ , x ¯ ) ∈ Ω ¯$, is a solution of the system (9).
Proof of Theorem 3.
By our assumptions, we have
$\tilde{U}(\bar{t}, \bar{x}) = \phi_\infty(\bar{x}) + \int_{-1}^{\bar{t}} \lambda(\tau, \bar{x})\, d\tau. \qquad (39)$
We first show that $\tilde{U}(\bar{t}, \bar{x})$ satisfies the system (9) for $\bar{t} \in [-1, 1]$ and $\bar{x} = \bar{x}_k$, $k = 0, 1, \ldots, N$. Assume that $\tilde{U}(\cdot, \bar{x}_k)$ for some $k = 1, \ldots, N-1$ does not satisfy the first equation of (9). Then there is a $\tau$ in $(-1, 1)$ such that
$\tilde{U}_{\bar{t}}(\tau, \bar{x}_k) - \psi\big(\tau, \tilde{U}(\tau, \bar{x}_k), \tilde{U}_{\bar{x}}(\tau, \bar{x}_k), \tilde{U}_{\bar{x}\bar{x}}(\tau, \bar{x}_k)\big) \ne 0.$
Since the CGL nodes ${ t ¯ p } p = 0 N$ are dense in $[ − 1 , 1 ]$ when $N → ∞$, there is a sequence ${ t ¯ l N i } i = 1 ∞$ such that $0 < l N i < N i$ and $lim i → ∞ t ¯ l N i = τ$. Thus,
$\lim_{i \to \infty} \Big[ \tilde{U}_{\bar{t}}(\bar{t}_{l_{N_i}}, \bar{x}_k) - \psi\big(\bar{t}_{l_{N_i}}, \tilde{U}(\bar{t}_{l_{N_i}}, \bar{x}_k), \tilde{U}_{\bar{x}}(\bar{t}_{l_{N_i}}, \bar{x}_k), \tilde{U}_{\bar{x}\bar{x}}(\bar{t}_{l_{N_i}}, \bar{x}_k)\big) \Big] = \tilde{U}_{\bar{t}}(\tau, \bar{x}_k) - \psi\big(\tau, \tilde{U}(\tau, \bar{x}_k), \tilde{U}_{\bar{x}}(\tau, \bar{x}_k), \tilde{U}_{\bar{x}\bar{x}}(\tau, \bar{x}_k)\big) \ne 0. \qquad (40)$
On the other hand, since $lim i → ∞ N i 2 N i − 1 W ( 1 2 N i − 1 ) = 0$, by (25) we obtain
$\lim_{i \to \infty} \Big[ \tilde{U}_{\bar{t}}(\bar{t}_{l_{N_i}}, \bar{x}_k) - \psi\big(\bar{t}_{l_{N_i}}, \tilde{U}(\bar{t}_{l_{N_i}}, \bar{x}_k), \tilde{U}_{\bar{x}}(\bar{t}_{l_{N_i}}, \bar{x}_k), \tilde{U}_{\bar{x}\bar{x}}(\bar{t}_{l_{N_i}}, \bar{x}_k)\big) \Big] = 0,$
which contradicts (40). Thus, $\tilde{U}(\bar{t}, \bar{x})$ (for all $\bar{t} \in [-1, 1]$ and $\bar{x} = \bar{x}_k$, $k = 1, \ldots, N-1$) satisfies the first equation of (9). It can also be easily shown that $\tilde{U}(\cdot, \bar{x}_k)$, $k = 0, 1, \ldots, N$, satisfies the boundary conditions. For example, we show that $\tilde{U}(-1, \bar{x}_k) = F(\bar{x}_k)$ for $k = 0, 1, \ldots, N$. We have
$0 \le |\tilde{U}(-1, \bar{x}_k) - F(\bar{x}_k)| = \big| \lim_{i \to \infty} U^{N_i}(-1, \bar{x}_k) - F(\bar{x}_k) \big| = \lim_{i \to \infty} |U^{N_i}(-1, \bar{x}_k) - F(\bar{x}_k)| = \lim_{i \to \infty} |\bar{a}_{0k}^{N_i} - F(\bar{x}_k)| \le \lim_{i \to \infty} \frac{N_i}{2N_i - 1}\, W\Big(\frac{1}{2N_i - 1}\Big) = 0.$
Hence, $\tilde{U}(-1, \bar{x}_k) = F(\bar{x}_k)$ for all $k = 0, 1, \ldots, N$. Now, the nodes $\{\bar{x}_k\}_{k=0}^N$ become dense in $[-1, 1]$ as $N \to \infty$. Therefore the function $\tilde{U}(\cdot,\cdot)$, defined by (38), is a solution of (9) on $\bar{\Omega} = [-1, 1] \times [-1, 1]$. This completes the proof. □

## 5. The Generalization of the CSC Method for Two-Dimensional Burgers Equation

In this section, we introduce the two-dimensional Burgers equation and solve it by using the CSC method. The two-dimensional Burgers equation is as follows
$U_t(x, y, t) + U(x, y, t)\, U_x(x, y, t) + U(x, y, t)\, U_y(x, y, t) = \varepsilon \big( U_{xx}(x, y, t) + U_{yy}(x, y, t) \big), \quad (x, y, t) \in [0, 1] \times [0, 1] \times [0, T], \qquad (41)$
and the time initial and space boundary conditions are as follows
$U(x, y, 0) = f(x, y), \qquad (42)$
$U(0, y, t) = h_1(y, t), \qquad (43)$
$U(1, y, t) = h_2(y, t), \qquad (44)$
$U(x, 0, t) = h_3(x, t), \qquad (45)$
$U(x, 1, t) = h_4(x, t). \qquad (46)$
As explained in Section 3, $\bar{x}_k$, $\bar{y}_n$ and $\bar{t}_p$ denote the CGL points.
Now, to obtain the numerical solution of the two-dimensional Burgers Equation (41) by the CSC method, the interpolating polynomial is as follows
$U(\bar{x}, \bar{y}, \bar{t}) \approx U^N(\bar{x}, \bar{y}, \bar{t}) = \sum_{i=0}^{N} \sum_{j=0}^{N} \sum_{l=0}^{N} \bar{a}_{ijl}^N\, L_i(\bar{x})\, L_j(\bar{y})\, L_l(\bar{t}). \qquad (47)$
By (12), we have
$U^N(\bar{x}_k, \bar{y}_n, \bar{t}_p) = \bar{a}_{knp}^N. \qquad (48)$
To represent the derivatives $U_{\bar{t}}^N, U_{\bar{x}}^N, U_{\bar{y}}^N, U_{\bar{x}\bar{x}}^N$ and $U_{\bar{y}\bar{y}}^N$ in terms of $U^N$ at the node points $(\bar{x}_k, \bar{y}_n, \bar{t}_p)$, using multiplication by the matrix $D = (D_{kj})$, we have
$U_{\bar{t}}^N(\bar{x}_k, \bar{y}_n, \bar{t}_p) = \sum_{l=0}^{N} \bar{a}_{knl}^N D_{pl}, \quad U_{\bar{x}}^N(\bar{x}_k, \bar{y}_n, \bar{t}_p) = \sum_{i=0}^{N} \bar{a}_{inp}^N D_{ki}, \quad U_{\bar{y}}^N(\bar{x}_k, \bar{y}_n, \bar{t}_p) = \sum_{j=0}^{N} \bar{a}_{kjp}^N D_{nj}, \quad U_{\bar{x}\bar{x}}^N(\bar{x}_k, \bar{y}_n, \bar{t}_p) = \sum_{i=0}^{N} \bar{a}_{inp}^N \hat{D}_{ki}, \quad U_{\bar{y}\bar{y}}^N(\bar{x}_k, \bar{y}_n, \bar{t}_p) = \sum_{j=0}^{N} \bar{a}_{kjp}^N \hat{D}_{nj}, \quad k, n, p = 0, 1, \ldots, N, \qquad (49)$
where $D k j$ and $D ^ k j$ were defined in the Section 3.
Now, by applying relations (48) and (49), Equation (41) can be rewritten as
$\sum_{l=0}^{N} \bar{a}_{knl}^N D_{pl} + \bar{a}_{knp}^N \Big( \sum_{i=0}^{N} \bar{a}_{inp}^N D_{ki} + \sum_{j=0}^{N} \bar{a}_{kjp}^N D_{nj} \Big) - \varepsilon \Big( \sum_{i=0}^{N} \bar{a}_{inp}^N \hat{D}_{ki} + \sum_{j=0}^{N} \bar{a}_{kjp}^N \hat{D}_{nj} \Big) = 0,$
$\bar{a}_{kn0}^N = f(\bar{x}_k, \bar{y}_n), \quad k, n = 0, 1, \ldots, N,$
$\bar{a}_{0np}^N = h_1(\bar{y}_n, \bar{t}_p), \quad \bar{a}_{Nnp}^N = h_2(\bar{y}_n, \bar{t}_p), \quad n = 0, 1, \ldots, N, \ p = 1, \ldots, N,$
$\bar{a}_{k0p}^N = h_3(\bar{x}_k, \bar{t}_p), \quad \bar{a}_{kNp}^N = h_4(\bar{x}_k, \bar{t}_p), \quad k = 1, \ldots, N-1, \ p = 1, \ldots, N. \qquad (50)$
Now, we can solve the above system for the unknowns $(\bar{a}_{knp}^N;\ k, n, p = 0, 1, \ldots, N)$ to obtain the numerical solution.
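The tensor-product derivative relations (49) can be checked on a smooth test field: applying the one-dimensional differentiation matrix along each index of a three-dimensional array of nodal values must reproduce each partial derivative exactly for polynomial data of degree at most $N$. The following Python sketch is illustrative only; the `einsum` subscripts encode the sums in (49):

```python
import numpy as np

N = 8
nodes = np.cos((N - np.arange(N + 1)) * np.pi / N)  # ascending CGL nodes

def diff_matrix(x):
    # barycentric construction of the collocation derivative matrix
    n = len(x)
    w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(n)])
    D = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            if j != k:
                D[k, j] = (w[j] / w[k]) / (x[k] - x[j])
        D[k, k] = -D[k].sum()
    return D

D = diff_matrix(nodes)
X, Y, T = np.meshgrid(nodes, nodes, nodes, indexing="ij")
U = X ** 2 * Y + T ** 3                     # smooth polynomial test field on the tensor grid

Ux = np.einsum("ki,inp->knp", D, U)         # differentiate along the x index
Uy = np.einsum("nj,kjp->knp", D, U)         # along the y index
Ut = np.einsum("pl,knl->knp", D, U)         # along the t index

assert np.allclose(Ux, 2 * X * Y)
assert np.allclose(Uy, X ** 2)
assert np.allclose(Ut, 3 * T ** 2)
print("tensor-product derivatives verified")
```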

## 6. Numerical Examples

In the following examples, we use the Levenberg–Marquardt algorithm via the FSOLVE command in MATLAB to solve the algebraic system (17). We calculate the $L_2$ and $L_\infty$ errors as follows
$L_2 = \|U(\bar{t}, \cdot) - U^N(\bar{t}, \cdot)\|_2 = \Big( \sum_{j=0}^{N} |U(\bar{t}, x_j) - U^N(\bar{t}, x_j)|^2 \Big)^{1/2}, \quad L_\infty = \|U(\bar{t}, \cdot) - U^N(\bar{t}, \cdot)\|_\infty = \max_{0 \le j \le N} |U(\bar{t}, x_j) - U^N(\bar{t}, x_j)|,$
where $x j , j = 0 , 1 , … , N$, are the collocation points, $t ¯ ∈ [ − 1 , 1 ]$ is a given point and $U ( · , · )$ and $U N ( · , · )$ are the analytical and approximate solutions, respectively. Also, the absolute error can be obtained by
$E(\bar{t}, \bar{x}) = |U(\bar{t}, \bar{x}) - U^N(\bar{t}, \bar{x})|, \quad (\bar{t}, \bar{x}) \in [-1, 1] \times [-1, 1].$
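A small Python sketch of these error measures (with stand-in data, since no computed solution is available here) also records the general fact that the discrete $L_\infty$ error never exceeds the discrete $L_2$ error:

```python
import numpy as np

# hypothetical exact and approximate values at the collocation points x_j
xj = np.cos((8 - np.arange(9)) * np.pi / 8)
U_exact = np.tanh(xj)
U_approx = U_exact + 1e-4 * np.sin(5 * xj)   # stand-in for a computed solution

L2 = np.sqrt(np.sum((U_exact - U_approx) ** 2))
Linf = np.max(np.abs(U_exact - U_approx))
# max_j |e_j| <= sqrt(sum_j e_j^2), so Linf <= L2 always
assert Linf <= L2
print(L2, Linf)
```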
Example 1.
Consider the Burgers–Fisher Equation (2), where $T 0 = − 0.2$, $T 1 = 0$, $a = − 1$, $b = 0$, $α ( t ) = 24$ and $β ( t ) = − 48$ for $t ∈ [ T 0 , T 1 ]$. Also, we assume that the boundary conditions are given by
$U(t, -1) = \frac{1}{2} - \frac{1}{2} \tanh[6(-1 - 8t)], \quad U(t, 0) = \frac{1}{2} - \frac{1}{2} \tanh[6(-8t)],$
and the initial condition is as follows
$U(-0.2, x) = \frac{1}{2} - \frac{1}{2} \tanh[6(x + 1.6)].$
Then the exact solution is
$U(t, x) = \frac{1}{2} - \frac{1}{2} \tanh[6(x - 8t)].$
We obtain the numerical results by the CSC method at $t = -0.1$, $-0.05$, $-0.04$, $-0.035$, $-0.03$ for $N = 30 \times 30$, which are shown in Table 1. We observe that our numerical results are better than those of the MMPDE methods [8], which are obtained for $\Delta t = 10^{-6}$ and $\Delta x = \frac{1}{60}$ (or N = 200,000 × 60). In Table 1, the $L_2$ errors are presented. In Figure 1 and Figure 2, we show the approximate solution and the absolute error, respectively. In Figure 3, we represent the exact and approximate solutions for $t = -0.1$, $-0.05$ and $-0.03$. We also illustrate the $L_2$ errors in Figure 4.
Example 2.
Consider the Burgers–Fisher Equation (2), where $T_0 = -0.05$, $T_1 = 0.05$, $a = -1$, $b = 0$, $\alpha(t) = 20$ and $\beta(t) = -1 + 3\sin t$ for $t \in [T_0, T_1]$. Also, consider the boundary conditions
$U(t, -1) = \frac{\cosh[-1 - 3\cos t] + \sinh[-1 - 3\cos t] - 5}{-4\cosh[-1 - 3\cos t] + 6\sinh[-1 - 3\cos t]}, \quad U(t, 0) = \frac{\cosh[-3\cos t] + \sinh[-3\cos t] - 5}{-4\cosh[-3\cos t] + 6\sinh[-3\cos t]},$
and the initial condition
$U(-0.05, x) = \frac{\cosh[x - 3\cos(-0.05)] + \sinh[x - 3\cos(-0.05)] - 5}{-4\cosh[x - 3\cos(-0.05)] + 6\sinh[x - 3\cos(-0.05)]}.$
By these, the exact solution is as follows
$U(t, x) = \frac{\cosh[x - 3\cos t] + \sinh[x - 3\cos t] - 5}{-4\cosh[x - 3\cos t] + 6\sinh[x - 3\cos t]}.$
We solve the system (17) for this example. Table 2 shows the $L_2$ errors at $t = -0.05$, $-0.025$, $0$, $0.025$ and $0.05$ for the CSC method and the MMPDE methods [8]. Our numerical results are obtained with $N = 10 \times 10$ (or equivalently $\Delta x = 0.1$ and $\Delta t = 0.01$), while the results of the MMPDE methods use $\Delta x = \frac{1}{40}$ and $\Delta t = 10^{-4}$ (or equivalently $N = 1000 \times 40$). In Figure 5, Figure 6 and Figure 7, we illustrate the approximate solution, the absolute error and the $L_2$ error, respectively.
Example 3.
Consider the Burgers–Huxley Equation (3) with $α = 1$, $δ = 2$ and $γ = 0$. Therefore, it can be written as follows
$U_t + U^2 U_x - \beta U_{xx} = 0, \qquad (52)$
where $T 0 = 1$, $T 1 = 10$, $a = 0$ and $b = 1$. Also, consider the boundary conditions
$U(t, 0) = 0, \quad U(t, 1) = \frac{1/t}{1 + \sqrt{t/c_0}\, \exp\big(\frac{1}{4\beta t}\big)},$
and the initial condition
$U(1, x) = \frac{x}{1 + \sqrt{1/c_0}\, \exp\big(\frac{x^2}{4\beta}\big)},$
where $0 < c_0 < 1$. Hence, the exact solution is given by
$U(t, x) = \frac{x/t}{1 + \sqrt{t/c_0}\, \exp\big(\frac{x^2}{4\beta t}\big)}.$
We take $c_0 = 0.5$ and $\beta = 0.01$. Table 3 shows the $L_\infty$ errors obtained by the CSC method and the methods given in [9,12,15,16] for $t = 2$, $4$ and $6$ and $N = 9 \times 9$. In Figure 8 and Figure 9, we show the approximate solution and the absolute error, respectively. Also, in Figure 10, we represent the exact and approximate solutions for $t = 2$, $6$ and $10$. Moreover, Figure 11 shows the $L_\infty$ error.
Example 4.
Consider the Burgers Equation (52) for $x \in [0, 1.3]$ and
$U(t, 0) = 0, \quad U(t, 1.3) = \frac{1.3/t}{1 + \sqrt{t/c_0}\, \exp\big(\frac{1.69}{4\beta t}\big)}, \quad U(1, x) = \frac{x}{1 + \sqrt{1/c_0}\, \exp\big(\frac{x^2}{4\beta}\big)}.$
The numerical results of our method and of the methods in [12,16], together with the $L_\infty$ errors, are displayed in Table 4 for different values of $t$. In Figure 12 and Figure 13, the approximate solution and the absolute error are shown, respectively. Also, in Figure 14, we represent the exact and approximate solutions for $t = 2$, $6$ and $10$.
Example 5.
Consider the Burgers Equation (1), where $T 0 = 1$, $T 1 = 5$, $a = 0$, $b = 8$ and $ε = 0.5$. The initial condition for the current problem is
$U(1, x) = \frac{x}{1 + \exp\big(\frac{1}{4\varepsilon}\big(x^2 - \frac{1}{4}\big)\big)},$
and the boundary conditions are
$U(t, 0) = 0, \quad U(t, 8) = \frac{8/t}{1 + \big(t\, e^{1/(8\varepsilon)}\big)^{1/2} \exp\big(\frac{16}{\varepsilon t}\big)}.$
Here, we have the following analytical solution
$U ( t , x ) = x t 1 + ( t e x p ( 1 8 ε ) ) 1 2 e x p ( x 2 4 ε t ) .$
In Figure 15 and Figure 16, we show the approximate solution and the absolute error, respectively, and Figure 17 compares the exact solution with the numerical solution given by the proposed method. In Table 5, we compare the $L_\infty$ and $L_2$ errors computed by the present method and by the methods of [14]. We can observe that the results of the CSC method for $N = 30 \times 30$ are better than those of the method of Inan and Bahadir [14] for $N = 320 \times 40{,}000$.
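One can also confirm numerically that the analytical solution of this example satisfies the Burgers equation. Equation (1) itself is not shown in this excerpt, so the sketch below assumes the standard form $U_t + UU_x - \varepsilon U_{xx} = 0$; it evaluates a central finite-difference residual at a few sample points and is our illustration, not the paper's code:

```python
import numpy as np

eps = 0.5  # viscosity used in this example

# analytical solution of Example 5 as stated in the text
def u(t, x):
    return (x / t) / (1.0 + np.sqrt(t * np.exp(1.0 / (8.0 * eps)))
                      * np.exp(x * x / (4.0 * eps * t)))

# central-difference residual of U_t + U U_x - eps U_xx at sample points
h = 1e-4
residuals = []
for t, x in [(2.0, 1.0), (3.0, 4.0), (4.5, 6.0)]:
    ut  = (u(t + h, x) - u(t - h, x)) / (2.0 * h)
    ux  = (u(t, x + h) - u(t, x - h)) / (2.0 * h)
    uxx = (u(t, x + h) - 2.0 * u(t, x) + u(t, x - h)) / h**2
    residuals.append(abs(ut + u(t, x) * ux - eps * uxx))
print(max(residuals))
```

The residuals are at the level of the finite-difference truncation error, consistent with the function being an exact solution.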
Example 6.
By considering the two-dimensional Burgers Equation (41) with $T = 1$ and $\varepsilon = 0.1$, $0.2$, $0.5$ and 1, the initial condition is as follows [30,31]
$$U(x,y,0) = \frac{1}{1 + \exp\left(\frac{x+y}{2\varepsilon}\right)},$$
and the boundary conditions are
$$U(0,y,t) = \frac{1}{1 + \exp\left(\frac{y-t}{2\varepsilon}\right)}, \qquad U(1,y,t) = \frac{1}{1 + \exp\left(\frac{1+y-t}{2\varepsilon}\right)},$$
$$U(x,0,t) = \frac{1}{1 + \exp\left(\frac{x-t}{2\varepsilon}\right)}, \qquad U(x,1,t) = \frac{1}{1 + \exp\left(\frac{x+1-t}{2\varepsilon}\right)}.$$
Under these conditions, the exact solution is
$$U(x,y,t) = \frac{1}{1 + \exp\left(\frac{x+y-t}{2\varepsilon}\right)}.$$
We calculate approximate solutions and absolute errors for $\varepsilon = 0.1$, $0.2$, $0.5$ and 1 with $N = 10$. In Figure 18 and Figure 19, we observe the approximate solution and the absolute error for $\varepsilon = 0.1$, respectively, while Figure 20 and Figure 21 illustrate the numerical solution and the absolute error for $\varepsilon = 0.2$. The approximate solutions and absolute errors for $\varepsilon = 0.5$ and 1 are shown in Figure 22, Figure 23, Figure 24 and Figure 25, respectively. Moreover, the $L_\infty$ errors for the various $\varepsilon$ at $t = t_0$ are reported in Table 6.
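The stated exact solution can again be verified against the governing equation. Equation (41) is outside this excerpt, so the sketch below assumes the usual two-dimensional Burgers form $U_t + U(U_x + U_y) = \varepsilon(U_{xx} + U_{yy})$; the finite-difference residual check is ours, not the paper's:

```python
import numpy as np

eps = 0.5  # one of the viscosities considered in this example

# stated exact solution of the 2D problem
def u(x, y, t):
    return 1.0 / (1.0 + np.exp((x + y - t) / (2.0 * eps)))

# central-difference residual of U_t + U(U_x + U_y) - eps(U_xx + U_yy)
h = 1e-4
residuals = []
for x, y, t in [(0.3, 0.4, 0.5), (0.7, 0.2, 1.0)]:
    ut  = (u(x, y, t + h) - u(x, y, t - h)) / (2.0 * h)
    ux  = (u(x + h, y, t) - u(x - h, y, t)) / (2.0 * h)
    uy  = (u(x, y + h, t) - u(x, y - h, t)) / (2.0 * h)
    uxx = (u(x + h, y, t) - 2.0 * u(x, y, t) + u(x - h, y, t)) / h**2
    uyy = (u(x, y + h, t) - 2.0 * u(x, y, t) + u(x, y - h, t)) / h**2
    residuals.append(abs(ut + u(x, y, t) * (ux + uy) - eps * (uxx + uyy)))
print(max(residuals))
```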

## 7. Conclusions

In this article, we used the Chebyshev spectral collocation method to obtain numerical solutions of several types of one- and two-dimensional Burgers equations. We analyzed the convergence of the CSC method using the concept of modulus of continuity and compared the obtained approximate solutions with those of other methods. The results show that the CSC method attains very high accuracy and is more accurate than the other numerical methods considered. Extending these investigations to the three-dimensional case is left for a future article.

## Author Contributions

The authors contributed equally to this work.

## Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 11601240 and 11771214.

## Conflicts of Interest

The authors declare no conflict of interest.

## Abbreviations

The following abbreviations are used in this manuscript:
CSC	Chebyshev Spectral Collocation
CGL	Chebyshev–Gauss–Lobatto

## References

1. Eltayeb, H.; Bachar, I.; Kilicman, A. On conformable double Laplace transform and one dimensional fractional coupled Burgers equation. Symmetry 2019, 11, 417.
2. Zhang, X.; Zhang, Y. Some similarity solutions and numerical solutions to the time-fractional Burgers system. Symmetry 2019, 11, 37–50.
3. Malik, S.A.; Qureshi, I.M.; Amir, M.; Malik, A.N.; Haq, I. Numerical solution to generalized Burgers-Fisher equation using Exp-Function method hybridized with heuristic computation. PLoS ONE 2015, 10, e0121728.
4. Ray, S.S.; Gupta, A.K. On the solution of Burgers-Huxley and Huxley equation using wavelet collocation method. CMES Comput. Model. Eng. Sci. 2013, 91, 409–424.
5. Dhawan, S.; Kapoor, S.; Kumar, S.; Rawat, S. Contemporary review of techniques for the solution of nonlinear Burgers equation. J. Comput. Sci. 2012, 3, 405–419.
6. Berger, M.; Kohn, R.V. A rescaling algorithm for the numerical calculation of blowing-up solutions. Commun. Pure Appl. Math. 1988, 41, 841–863.
7. Budd, C.J.; Huang, W.; Russell, R.D. Moving mesh methods for problems with blow-up. SIAM J. Sci. Comput. 1996, 17, 305–327.
8. Soheili, A.R.; Kerayechian, A.; Davoodi, N. Adaptive numerical method for Burgers'-type nonlinear equations. Appl. Math. Comput. 2012, 219, 3486–3495.
9. Ramadan, M.A.; El-Danaf, T.S.; Abd Alaal, F.E.I. A numerical solution of the Burgers' equation using septic B-splines. Chaos Solitons Fract. 2005, 26, 795–804.
10. Ramadan, M.A.; El-Danaf, T.S. Numerical treatment for the modified Burgers equation. Math. Comput. Simul. 2005, 70, 90–98.
11. Haq, S.; Ul-Islam, S.; Uddin, M. A mesh-free method for the numerical solution of the KdV-Burgers' equation. Appl. Math. Model. 2009, 33, 3442–3449.
12. Oruc, O.; Bulut, F.; Esen, A. A Haar wavelet-finite difference hybrid method for the numerical solution of the modified Burgers' equation. J. Math. Chem. 2015, 53, 1592–1607.
13. Lepik, U. Numerical solution of evolution equations by the Haar wavelet method. Appl. Math. Comput. 2007, 185, 695–704.
14. Inan, B.; Bahadir, A.R. Numerical solution of the one-dimensional Burgers' equation: Implicit and fully implicit exponential finite difference methods. Pramana J. Phys. 2013, 81, 547–556.
15. Saka, B.; Dag, I. A numerical study of the Burgers' equation. J. Frankl. Inst. 2008, 345, 328–348.
16. Irk, D. Sextic B-spline collocation method for the modified Burgers' equation. Kybernetes 2009, 38, 1599–1620.
17. Demiray, H. A note on the travelling wave solution to the perturbed Burgers equation. Appl. Math. Model. 2002, 26, 37–40.
18. Hon, Y.C.; Mao, X.Z. An efficient numerical scheme for Burgers' equation. Appl. Math. Comput. 1998, 95, 37–50.
19. Schulze-Halberg, A. Burgers' equation with time-dependent coefficients and nonlinear forcing term: Linearization and exact solvability. Commun. Nonlinear Sci. Numer. Simul. 2015, 22, 1068–1083.
20. Seydaoglu, M.; Erdogan, U.; Ozis, T. Numerical solution of Burgers' equation with high order splitting methods. J. Comput. Appl. Math. 2016, 291, 410–421.
21. Guo, Y.; Shi, Y.; Li, Y. A fifth order finite volume weighted compact scheme for solving one-dimensional Burgers equation. Appl. Math. Comput. 2016, 281, 172–185.
22. Mukundan, V.; Awasthi, A. Efficient numerical techniques for Burgers' equation. Appl. Math. Comput. 2015, 262, 282–297.
23. Hammad, D.A.; El-Azab, M.S. 2N order compact finite difference scheme with collocation method for solving the generalized Burgers'-Huxley and Burgers'-Fisher equations. Appl. Math. Comput. 2015, 258, 296–311.
24. Arora, G.; Kumar Singh, B. Numerical solution of Burgers equation with modified cubic B-spline differential quadrature method. Appl. Math. Comput. 2013, 224, 166–177.
25. El-Wakil, S.A.; Abulwafa, E.M.; El-hanbaly, A.M.; El-Shewy, E.K.; Abd-El-Hamid, H.M. Self-similar solutions for some nonlinear evolution equations: KdV, mKdV and Burgers equations. J. Assoc. Arab Univ. Basic Appl. Sci. 2016, 19, 44–51.
26. Singh, B.K.; Arora, G.; Singh, M.K. A numerical scheme for the generalized Burgers'-Huxley equation. J. Egypt. Math. Soc. 2016, 24, 629–637.
27. Weinan, E. Convergence of spectral methods for Burgers' equation. SIAM J. Numer. Anal. 1992, 29, 1520–1541.
28. Xiao, D.; Fang, F.; Du, J.; Pain, C.C.; Navon, I.M.; Buchan, A.G.; ElSheikh, A.H.; Hu, G. Non-linear Petrov-Galerkin methods for reduced order modelling of the Navier-Stokes equations using a mixed finite element pair. Comput. Methods Appl. Mech. Eng. 2013, 255, 147–157.
29. Prodanov, D. Analytical and numerical treatments of conservative diffusions and the Burgers equation. Entropy 2018, 20, 492.
30. Duan, Y.; Liu, R. Lattice Boltzmann model for two-dimensional unsteady Burgers' equation. J. Comput. Appl. Math. 2007, 206, 432–439.
31. Radwan, S.F. Comparison of higher-order accurate schemes for solving the two-dimensional unsteady Burgers' equation. J. Comput. Appl. Math. 2005, 174, 383–397.
32. Elnagar, G.N.; Kazemi, M.A. Pseudo-spectral Chebyshev optimal control of constrained nonlinear dynamical systems. Comput. Optim. Appl. 1998, 11, 195–217.
33. Gong, Q.; Ross, I.M.; Fahroo, F. A Chebyshev pseudo-spectral method for nonlinear constrained optimal control problems. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC), held jointly with the 2009 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009.
34. Fahroo, F.; Ross, I.M. Direct trajectory optimization by a Chebyshev pseudo-spectral method. J. Guid. Control Dyn. 2002, 25, 160–166.
35. Noori Skandari, M.H.; Ghaznavi, M. Chebyshev pseudo-spectral method for Bratu's problem. Iran. J. Sci. Technol. Trans. Sci. 2017, 41, 913–921.
36. Ghaznavi, M.; Noori Skandari, M.H. An efficient pseudo-spectral method for nonsmooth dynamical systems. Iran. J. Sci. Technol. Trans. Sci. 2016, 42, 635–646.
37. Noori Skandari, M.H.; Kamyad, A.V.; Effati, S. Generalized Euler-Lagrange equation for nonsmooth calculus of variations. Nonlinear Dyn. 2014, 75, 85–100.
38. Ragozin, D.L. Polynomial approximation on compact manifolds and homogeneous spaces. Trans. Amer. Math. Soc. 1971, 162, 157–170.
Figure 1. The approximate solution for Example 1.
Figure 2. The absolute error $E(\cdot,\cdot)$ for Example 1.
Figure 3. Exact and approximate solutions for $t = -0.1$, $-0.05$ and $-0.03$.
Figure 4. $L_2$ errors for Example 1.
Figure 5. The approximate solution for Example 2.
Figure 6. The absolute error $E(\cdot,\cdot)$ for Example 2.
Figure 7. $L_2$ errors for Example 2.
Figure 8. The approximate solution for Example 3.
Figure 9. The absolute error $E(\cdot,\cdot)$ for Example 3.
Figure 10. Exact and approximate solutions for $t = 2$, 6 and 10.
Figure 11. $L_\infty$ errors for Example 3.
Figure 12. The approximate solution for Example 4.
Figure 13. The absolute error $E(\cdot,\cdot)$ for Example 4.
Figure 14. Exact and approximate solutions for $t = 2$, 6 and 10.
Figure 15. The approximate solution for Example 5.
Figure 16. The absolute error $E(\cdot,\cdot)$ for Example 5.
Figure 17. Exact and approximate solutions for Example 5.
Figure 18. The approximate solution for $\varepsilon = 0.1$ in Example 6.
Figure 19. The absolute error for $\varepsilon = 0.1$ in Example 6.
Figure 20. The approximate solution for $\varepsilon = 0.2$ in Example 6.
Figure 21. The absolute error for $\varepsilon = 0.2$ in Example 6.
Figure 22. The approximate solution for $\varepsilon = 0.5$ in Example 6.
Figure 23. The absolute error for $\varepsilon = 0.5$ in Example 6.
Figure 24. The approximate solution for $\varepsilon = 1$ in Example 6.
Figure 25. The absolute error for $\varepsilon = 1$ in Example 6.
Table 1. Comparison of the $L_2$ error for Example 1.

| Method | N | $t = -0.1$ | $t = -0.05$ | $t = -0.04$ | $t = -0.035$ | $t = -0.03$ |
|---|---|---|---|---|---|---|
| Presented method | $30 \times 30$ | $1.8293 \times 10^{-4}$ | $1.1920 \times 10^{-4}$ | $1.2691 \times 10^{-4}$ | $1.4053 \times 10^{-4}$ | $1.4187 \times 10^{-4}$ |
| Mesh for optimal M [8] | $200{,}000 \times 60$ | $2.1 \times 10^{-3}$ | $2.9 \times 10^{-3}$ | $3.4 \times 10^{-3}$ | $3.7 \times 10^{-3}$ | $4.2 \times 10^{-3}$ |
| Mesh for arc-length M [8] | $200{,}000 \times 60$ | $2.7 \times 10^{-3}$ | $8 \times 10^{-3}$ | $7.9 \times 10^{-3}$ | $7.6 \times 10^{-3}$ | $7.1 \times 10^{-3}$ |
| Mesh for curvature M [8] | $200{,}000 \times 60$ | $2.1 \times 10^{-3}$ | $2.4 \times 10^{-3}$ | $2.4 \times 10^{-3}$ | $2.4 \times 10^{-3}$ | $2.5 \times 10^{-3}$ |
Table 2. Comparison of the $L_2$ error for Example 2.

| Method | N | $t = -0.05$ | $t = -0.025$ | $t = 0$ | $t = 0.025$ | $t = 0.05$ |
|---|---|---|---|---|---|---|
| Presented method | $10 \times 10$ | $4.8044 \times 10^{-6}$ | $2.4602 \times 10^{-4}$ | $4.1079 \times 10^{-4}$ | $5.3422 \times 10^{-4}$ | $6.2951 \times 10^{-4}$ |
| Mesh for optimal M [8] | $1000 \times 40$ | $2.2 \times 10^{-3}$ | $4 \times 10^{-3}$ | $5.3 \times 10^{-3}$ | $3.5 \times 10^{-3}$ | $1.2 \times 10^{-3}$ |
| Mesh for arc-length M [8] | $1000 \times 40$ | $1.8 \times 10^{-3}$ | $4.1 \times 10^{-3}$ | $6.6 \times 10^{-3}$ | $6.8 \times 10^{-3}$ | $2.3 \times 10^{-3}$ |
| Mesh for curvature M [8] | $1000 \times 40$ | $2.2 \times 10^{-3}$ | $4 \times 10^{-3}$ | $5.2 \times 10^{-3}$ | $3.6 \times 10^{-3}$ | $9.6 \times 10^{-4}$ |
Table 3. Comparison of the $L_\infty$ error for Example 3.

| Method | N | $T = 2$ | $T = 6$ | $T = 10$ |
|---|---|---|---|---|
| Presented method | $9 \times 9$ | $5.5673 \times 10^{-4}$ | $4.4466 \times 10^{-4}$ | $3.0034 \times 10^{-4}$ |
| Haar wavelet method [12] | $16 \times 900$ | $7.5978 \times 10^{-4}$ | $4.6335 \times 10^{-4}$ | $1.16480 \times 10^{-3}$ |
| QBC method [10] | $200 \times 900$ | $1.21698 \times 10^{-3}$ | $7.2249 \times 10^{-4}$ | $1.28124 \times 10^{-3}$ |
| SBC method [9] | $50 \times 900$ | $1.70309 \times 10^{-3}$ | $7.6105 \times 10^{-4}$ | $1.80239 \times 10^{-3}$ |
| QBCA1 method [15] | $200 \times 900$ | $8.1680 \times 10^{-4}$ | $5.2579 \times 10^{-4}$ | $1.28125 \times 10^{-3}$ |
| QBCA2 method [15] | $200 \times 900$ | $8.2212 \times 10^{-4}$ | $5.2579 \times 10^{-4}$ | $1.28125 \times 10^{-3}$ |
| SBC1 method [16] | $200 \times 900$ | $8.2934 \times 10^{-4}$ | – | $1.28127 \times 10^{-3}$ |
| SBC2 method [16] | $200 \times 900$ | $8.2734 \times 10^{-4}$ | – | $1.28127 \times 10^{-3}$ |
Table 4. Comparison of the $L_\infty$ error for Example 4.

| Method | N | $T = 2$ | $T = 6$ | $T = 10$ |
|---|---|---|---|---|
| Presented method | $10 \times 10$ | $5.306 \times 10^{-4}$ | $4.294 \times 10^{-4}$ | $3.166 \times 10^{-4}$ |
| Haar wavelet method [12] | $16 \times 900$ | $7.2890 \times 10^{-4}$ | $4.5606 \times 10^{-4}$ | $3.2374 \times 10^{-4}$ |
| SBC1 method [16] | $260 \times 900$ | $8.2934 \times 10^{-4}$ | – | $3.2723 \times 10^{-4}$ |
| SBC2 method [16] | $260 \times 900$ | $8.2734 \times 10^{-4}$ | – | $3.2337 \times 10^{-4}$ |
Table 5. Comparison of the $L_\infty$ and $L_2$ errors for Example 5.

| Method | T | N | $L_2$ | $L_\infty$ |
|---|---|---|---|---|
| Presented method | $T = 1.5$ | $30 \times 30$ | $3.2025 \times 10^{-8}$ | $1.2611 \times 10^{-7}$ |
| I-EFD method [14] | $T = 1.5$ | $320 \times 40{,}000$ | $2.1 \times 10^{-5}$ | $1.8 \times 10^{-5}$ |
| FI-EFD method [14] | $T = 1.5$ | $320 \times 40{,}000$ | $2.2 \times 10^{-5}$ | $1.9 \times 10^{-5}$ |
| Presented method | $T = 3$ | $30 \times 30$ | $3.2026 \times 10^{-8}$ | $6.9546 \times 10^{-9}$ |
| I-EFD method [14] | $T = 3$ | $320 \times 40{,}000$ | $2.2 \times 10^{-5}$ | $3.8 \times 10^{-5}$ |
| FI-EFD method [14] | $T = 3$ | $320 \times 40{,}000$ | $2.3 \times 10^{-5}$ | $1.8 \times 10^{-5}$ |
| Presented method | $T = 4.5$ | $30 \times 30$ | $3.2028 \times 10^{-8}$ | $1.6022 \times 10^{-9}$ |
| I-EFD method [14] | $T = 4.5$ | $320 \times 40{,}000$ | $4.08 \times 10^{-4}$ | $7.43 \times 10^{-4}$ |
| FI-EFD method [14] | $T = 4.5$ | $320 \times 40{,}000$ | $4.08 \times 10^{-4}$ | $7.43 \times 10^{-4}$ |
Table 6. The $L_\infty$ error for various $\varepsilon$ in Example 6.

| The CSC Method | $\varepsilon = 0.1$ | $\varepsilon = 0.2$ | $\varepsilon = 0.5$ | $\varepsilon = 1$ |
|---|---|---|---|---|
| $L_\infty$ | $1.076 \times 10^{-4}$ | $3.421 \times 10^{-5}$ | $6.073 \times 10^{-5}$ | $7.333 \times 10^{-6}$ |

## Share and Cite

Huang, Y.; Skandari, M.H.N.; Mohammadizadeh, F.; Tehrani, H.A.; Georgiev, S.G.; Tohidi, E.; Shateyi, S. Space–Time Spectral Collocation Method for Solving Burgers Equations with the Convergence Analysis. Symmetry 2019, 11, 1439. https://doi.org/10.3390/sym11121439