Article

A Matrix Approach by Convolved Fermat Polynomials for Solving the Fractional Burgers’ Equation

by Waleed Mohamed Abd-Elhameed 1,*, Omar Mazen Alqubori 2, Naher Mohammed A. Alsafri 2, Amr Kamel Amin 3 and Ahmed Gamal Atta 4

1 Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt
2 Department of Mathematics and Statistics, College of Science, University of Jeddah, Jeddah 23831, Saudi Arabia
3 Department of Mathematics, Adham University College, Umm Al-Qura University, Makkah 28653, Saudi Arabia
4 Department of Mathematics, Faculty of Education, Ain Shams University, Roxy, Cairo 11341, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(7), 1135; https://doi.org/10.3390/math13071135
Submission received: 1 March 2025 / Revised: 26 March 2025 / Accepted: 26 March 2025 / Published: 30 March 2025

Abstract:
This article employs certain polynomials that generalize the standard Fermat polynomials, called convolved Fermat polynomials, to numerically solve the fractional Burgers’ equation. New theoretical results for these polynomials are developed and utilized, along with the collocation method, to find approximate solutions of the fractional Burgers’ equation. The basic idea behind the proposed numerical algorithm is to establish the operational matrices of both integer and fractional derivatives of the convolved Fermat polynomials, which help convert the equation, together with its underlying conditions, into an algebraic system of equations that can be treated numerically. A comprehensive study is performed to analyze the error of the proposed convolved Fermat expansion. Some numerical examples are presented to test our proposed numerical algorithm, and some comparisons are made. The results indicate that the proposed algorithm is applicable and accurate.

1. Introduction

Fractional differential equations (FDEs) have gained interest in science, engineering, and technology because they describe systems with complex and anomalous processes involving infinite correlations. They have applications across many fields of the applied sciences, so investigations of these equations have attracted much interest. Some applications of FDEs can be found in [1,2,3]. Since these equations generally cannot be solved analytically, it is important to resort to numerical analysis with different approaches. Many numerical approaches are employed to treat the different types of FDEs. Among these methods are the Adomian decomposition method [4,5], the variational iteration transform method [6], the Laplace transform method [7], the wavelet neural method [8], the Haar wavelets method [9], the generalized Taylor wavelets [10], the Adams–Bashforth–Moulton methods [11,12], the homotopy perturbation method [13], the shifted Gegenbauer–Gauss collocation method [14], the shifted Jacobi collocation method [15], and the localized hybrid kernel meshless method [16].
The classical Burgers’ equation is one of the most essential equations of mathematical physics; it describes a wide range of physical events. By employing fractional derivatives, it has been significantly extended to the fractional Burgers’ equation. The fractional equation becomes even more critical when modeling processes with anomalous diffusion or memory effects, because traditional integer-order differential equations cannot account for them. Understanding the physical meaning of the fractional Burgers’ equation is crucial. In [17,18], the authors highlighted the role of this equation in modeling subdiffusion convection processes relevant to various physical problems and in describing the propagation of weakly nonlinear acoustic waves through gas-filled pipes, respectively. Many numerical algorithms have been employed to solve the fractional Burgers’ equation. For example, in [19], the time-fractional Burgers’ equation was solved based on the Haar–Sinc spectral approach. Recently, the authors of [20] followed a compact difference scheme for the tempered fractional Burgers’ equation. A discontinuous Galerkin method was used to treat the time-fractional Burgers’ equation, and the homotopy method was applied in [21]. The authors of [22] solved the one- and two-dimensional time-fractional Burgers’ equations using Lucas polynomials. Some other contributions can be found in [23,24,25,26,27].
Modern engineering and physics applications necessitate a more comprehensive understanding of applied mathematics than previously required. Specifically, a solid grasp of the fundamental properties of special functions is essential. These functions are frequently used in various fields, including communication systems, electromagnetic theory, quantum mechanics, approximation theory, probability theory, electrical circuit theory, and heat conduction. For some applications of special functions, one can refer to [28,29].
Fibonacci and Lucas polynomials and their generalizations have been extensively studied in the mathematical literature due to their rich properties and diverse applications across various fields, such as combinatorics, graph theory, and numerical analysis. These polynomials are extensively used in numerical analysis to solve several differential equations (DEs). We give some contributions in this direction. In [30], the author used Fibonacci polynomials to treat FDEs. The authors of [31] treated numerically a modified epidemiological model of computer viruses using Fibonacci wavelets. Pell polynomials were used in [32] to solve stochastic FDEs. Another application for the Pell polynomials was given in [33]. Shifted Lucas polynomials were used in [34] to solve the time-fractional FitzHugh–Nagumo DEs. Other shifted Horadam polynomials were used to solve the nonlinear fifth-order KdV equations in [35]. In [36], the authors treated fractional Bagley–Torvik using Fibonacci wavelets. The authors of [37] handled the Lane–Emden type equation using combined Pell–Lucas polynomials. Vieta–Lucas polynomials were used in [38] to solve specific systems of FDEs using an operational approach.
Among the extensions of Fibonacci sequences are the convolved generalized Fibonacci polynomials, which were presented and examined in [39]. These polynomials extend several classical sequences, including Fibonacci and Pell polynomials and their generalized ones. In [40], the authors examined some features of the convolved generalized Fibonacci and Lucas polynomials. Some other formulas concerned with the convolved polynomials were developed in [41]. In [42], the authors introduced certain generalizations of convolved generalized Fibonacci and Lucas polynomials. In [43], the authors formulated novel formulas for convolved Pell polynomials. In [44], the authors developed some new formulas for convolved Fibonacci polynomials. This paper will introduce and use particular polynomials of the generalized ones introduced in [39], namely, convolved Fermat polynomials. These polynomials generalize the standard conventional Fermat polynomials.
Spectral methods are a class of numerical methods that approximate the solution of differential equations by expanding the solution in terms of global basis functions, which are typically special functions. In contrast to finite difference and finite element methods, spectral methods use global approximations. A key advantage of spectral methods is their exponential convergence for smooth solutions; that is, the error decays rapidly as the number of basis functions increases. This means that spectral methods are ideal for highly accurate models. There are three major categories of spectral methods. These are the Galerkin, tau, and collocation methods. With the spectral Galerkin method, the solution is written as a series of basis functions, and the residual is ensured to be orthogonal to these functions. In addition, the selected trial and test functions should coincide. In [45], the authors handled some partial DEs through the Galerkin method. In [46], the authors applied spectral Galerkin methods for some FDEs. The authors of [47] used the Galerkin method and generalized Chebyshev polynomials to treat some fractional delay pantograph DEs. The tau method is unlike the Galerkin method in that one is free to pick the trial and test functions. The authors of [48] utilized the tau method to treat the time-fractional cable problem. In [49], the authors treated systems of fractional-order integro-DEs using the tau method along with the monic Laguerre polynomials. Other integro-DEs were treated in [50] using a tau–Gegenbauer spectral method. In the collocation method, the residual should vanish at some discrete collocation points. This approach is advantageous due to its applicability to all types of DEs, so it is used extensively to treat many DEs. For example, the authors of [51] followed a collocation approach to certain stochastic FDEs. Another collocation method based on Chebyshev polynomials was used in [52] to solve some elliptic partial DEs.
In [53], the Laguerre spectral collocation method was used to solve the space-fractional diffusion equation. The authors of [54] followed a collocation method to treat FitzHugh–Nagumo DEs. Another collocation method was employed in [55] to treat certain variable-order FDEs.
This paper concentrates on the numerical solution of the time-fractional Burgers’ equation [56]:
$$\frac{\partial^{\beta}\chi(\sigma,t)}{\partial t^{\beta}}+\chi(\sigma,t)\,\frac{\partial\chi(\sigma,t)}{\partial\sigma}-\Psi\,\frac{\partial^{2}\chi(\sigma,t)}{\partial\sigma^{2}}=G(\sigma,t),\qquad 0<\beta<1,$$
with the following conditions:
$$\chi(\sigma,0)=\chi_1(\sigma),\qquad 0<\sigma\le 1,$$
$$\chi(0,t)=\chi_2(t),\qquad \chi(1,t)=\chi_3(t),\qquad 0<t\le 1,$$
where G ( σ , t ) represents the source term and Ψ represents the kinematic viscosity.
We comment here on the remark of Wang and He in  [57], in which they demonstrated that, to maintain consistency, both time and space are modeled fractionally—a principle termed the fractional spatio-temporal relation. This generalization will be a target for us in a forthcoming paper.
The following are the primary objectives of this paper:
  • Providing certain polynomials that generalize the well-known standard Fermat polynomials, called the convolved Fermat polynomials.
  • Establishing the analytic and inversion formulas of these polynomials.
  • Constructing the operational derivative matrices of the convolved Fermat polynomials for both integer and fractional derivatives.
  • Developing a collocation technique to handle the time-fractional Burgers’ equation.
  • Analyzing the convergence and error analysis of the convolved Fermat expansion.
  • Testing the accuracy of our numerical algorithm against other established techniques to demonstrate its effectiveness.
Furthermore, the advantages and original contributions of this work can be listed as follows:
  • The theoretical results concerning the convolved Fermat polynomials are new. We believe that the theoretical background developed for these polynomials in this paper may be utilized in other contributions within the scope of the numerical solutions of differential equations.
  • To our knowledge, employing such polynomials in numerical analysis is new. This motivates our study.
  • Highly accurate solutions can be obtained by employing the convolved Fermat polynomials as basis functions.
The rest of the paper is organized as follows. The next section overviews the convolved generalized polynomials and some of their particular polynomials. Some new formulas of the convolved Fermat polynomials, such as their power form representation and its inversion formula, are developed in Section 3. Section 4 analyzes the numerical algorithm designed to solve the fractional Burgers’ equation based on the application of the collocation method using the convolved Fermat polynomials as basis functions. The convergence of the convolved Fermat expansion is presented in Section 5 by stating and proving some lemmas and theorems. Some illustrative examples are presented in Section 6 accompanied by comparisons with some other algorithms to test our numerical algorithm. Finally, Section 7 presents the concluding findings.

2. Fundamentals and Essential Relations

This section is confined to introducing an account of some essential characteristics of fractional calculus, the generalized convolved polynomials, and some of their particular polynomials.

2.1. Caputo’s Fractional Derivative

Definition 1
([58]). In the Caputo sense, the fractional derivative D s β ξ ( s ) is defined as
$$D_s^{\beta}\xi(s)=\frac{1}{\Gamma(r-\beta)}\int_0^{s}(s-t)^{r-\beta-1}\,\xi^{(r)}(t)\,dt,\qquad \beta>0,\quad s>0,\quad r-1<\beta\le r,\quad r\in\mathbb{N}.$$
For $D_s^{\beta}$ with $r-1<\beta\le r$, $r\in\mathbb{N}$, the following identities are valid:
$$D_s^{\beta}C=0,\qquad C \text{ is a constant},$$
$$D_s^{\beta}s^{r}=\begin{cases}0,&\text{if } r\in\mathbb{N}_0 \text{ and } r<\lceil\beta\rceil,\\[4pt] \dfrac{r!}{\Gamma(r-\beta+1)}\,s^{\,r-\beta},&\text{if } r\in\mathbb{N}_0 \text{ and } r\ge\lceil\beta\rceil,\end{cases}$$
where $\mathbb{N}=\{1,2,\ldots\}$ and $\mathbb{N}_0=\{0,1,2,\ldots\}$, and $\lceil\beta\rceil$ denotes the ceiling of $\beta$.
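These identities translate directly into code. The following Python sketch (ours, not part of the paper; all function names are our own) implements the Caputo power rule for monomials and, by linearity, for arbitrary polynomials:

```python
import math

def caputo_power(r: int, beta: float, s: float) -> float:
    """Caputo derivative D_s^beta of s**r via the power rule:
    zero when r < ceil(beta); otherwise r!/Gamma(r-beta+1) * s**(r-beta)."""
    if r < math.ceil(beta):
        return 0.0
    return math.factorial(r) / math.gamma(r - beta + 1) * s ** (r - beta)

def caputo_poly(coeffs, beta, s):
    """Caputo derivative of sum_r coeffs[r]*s**r, using linearity of D^beta."""
    return sum(c * caputo_power(r, beta, s) for r, c in enumerate(coeffs))

# D^0.5 of a constant is 0; D^0.5 of s^2 at s=1 equals 2/Gamma(2.5)
print(caputo_power(0, 0.5, 1.0), caputo_power(2, 0.5, 1.0))
```

By linearity, this power rule is all that is needed later to differentiate the polynomial basis fractionally.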
Remark 1.
For some other definitions of fractional derivatives and their applications, one can refer to [59,60].

2.2. An Account on Fermat Polynomials

If we consider the first kind of the generalized Lucas polynomial sequence $\{\psi_i^{a,b}(t)\}_{i\ge 0}$ generated by the recursive formula [39]:
$$\psi_i^{a,b}(t)=a\,t\,\psi_{i-1}^{a,b}(t)+b\,\psi_{i-2}^{a,b}(t),\qquad \psi_0^{a,b}(t)=1,\quad \psi_1^{a,b}(t)=a\,t,$$
then the Fermat polynomials are the particular cases of $\psi_i^{a,b}(t)$ corresponding to $a=3$, $b=-2$. Thus, the sequence $\{F_k(t)\}_{k\ge 0}$ may be constructed using the following recursive formula:
$$F_k(t)=3t\,F_{k-1}(t)-2F_{k-2}(t).$$
These polynomials are explicitly expressed by the following formula [61]:
$$F_k(t)=\sum_{r=0}^{\lfloor k/2\rfloor}3^{\,k-2r}\,(-2)^{r}\binom{k-r}{r}\,t^{\,k-2r},$$
and their inversion formula is given by [61]
$$t^{k}=\frac{1}{3^{k}}\sum_{r=0}^{\lfloor k/2\rfloor}\frac{(-1)^{r}\,(k-2r+1)\,(k-r+2)_{r-1}\,(-2)^{r}}{r!}\,F_{k-2r}(t).$$
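As a quick sanity check (ours, not part of the paper), the recurrence and the explicit power-form representation above can be compared numerically; the initial values $F_0(t)=1$ and $F_1(t)=3t$ follow from the Lucas-type sequence with $a=3$:

```python
import math

def fermat(k, t):
    """Fermat polynomial F_k(t) via F_k = 3t*F_{k-1} - 2*F_{k-2}, F_0 = 1, F_1 = 3t."""
    f_prev, f = 1.0, 3.0 * t
    if k == 0:
        return f_prev
    for _ in range(k - 1):
        f_prev, f = f, 3.0 * t * f - 2.0 * f_prev
    return f

def fermat_explicit(k, t):
    """Power-form representation of F_k(t)."""
    return sum(3**(k - 2*r) * (-2)**r * math.comb(k - r, r) * t**(k - 2*r)
               for r in range(k // 2 + 1))

print(all(abs(fermat(k, 0.7) - fermat_explicit(k, 0.7)) < 1e-9 for k in range(8)))
```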

2.3. Convolved Generalized Fibonacci Polynomials

In [39], the authors developed the convolved generalized Fibonacci polynomials, denoted as $\mathrm{CGF}_j^{r,s,\mu}(\sigma)$, where $r=r(\sigma)$ and $s=s(\sigma)$ are polynomials with real coefficients, and $\mu$ is a complex number. The generating function for $\mathrm{CGF}_j^{r,s,\mu}(\sigma)$ is [39]
$$\left(1-r(\sigma)\,t-s(\sigma)\,t^{2}\right)^{-\mu}=\sum_{k=0}^{\infty}\mathrm{CGF}_k^{r,s,\mu}(\sigma)\,t^{k}.$$
Furthermore, they can be represented as
$$\mathrm{CGF}_k^{r,s,\mu}(\sigma)=\sum_{m=0}^{\lfloor k/2\rfloor}\binom{\mu+m-1}{m}\binom{\mu+k-m-1}{k-2m}\,r^{\,k-2m}(\sigma)\,s^{m}(\sigma),$$
and they can be generated with the aid of the following recursive formula [39]:
$$k\,\mathrm{CGF}_k^{r,s,\mu}(\sigma)-(\mu+k-1)\,r(\sigma)\,\mathrm{CGF}_{k-1}^{r,s,\mu}(\sigma)-(2\mu+k-2)\,s(\sigma)\,\mathrm{CGF}_{k-2}^{r,s,\mu}(\sigma)=0,\qquad k\ge 2,$$
with the following initial values:
$$\mathrm{CGF}_0^{r,s,\mu}(\sigma)=1,\qquad \mathrm{CGF}_1^{r,s,\mu}(\sigma)=\mu\,r(\sigma).$$
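The recurrence, the initial values, and the explicit representation above are easy to cross-check numerically. The sketch below is ours, not from the paper; `gbinom` is our helper for the generalized binomial coefficient with a real upper argument:

```python
def gbinom(a, j):
    """Generalized binomial coefficient C(a, j) for real a and integer j >= 0."""
    out = 1.0
    for i in range(j):
        out *= (a - i) / (i + 1)
    return out

def cgf_recurrence(k, r, s, mu):
    """CGF_k^{r,s,mu} from the three-term recurrence, CGF_0 = 1, CGF_1 = mu*r."""
    c0, c1 = 1.0, mu * r
    if k == 0:
        return c0
    for j in range(2, k + 1):
        c0, c1 = c1, ((mu + j - 1) * r * c1 + (2 * mu + j - 2) * s * c0) / j
    return c1

def cgf_explicit(k, r, s, mu):
    """Explicit double-binomial representation of CGF_k^{r,s,mu}."""
    return sum(gbinom(mu + j - 1, j) * gbinom(mu + k - j - 1, k - 2 * j)
               * r**(k - 2 * j) * s**j for j in range(k // 2 + 1))

print(all(abs(cgf_recurrence(k, 1.5, -2.0, 2.5) - cgf_explicit(k, 1.5, -2.0, 2.5)) < 1e-9
          for k in range(8)))
```

Here $r(\sigma)$ and $s(\sigma)$ are evaluated at a fixed point, so plain numbers stand in for the polynomial arguments.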
Remark 2.
Many celebrated sequences can be extracted as particular sequences of the convolved generalized Fibonacci polynomials. Among these sequences are the following sequences:
  • The convolved Fibonacci polynomials $\mathrm{CF}_k(\sigma)$, which were investigated theoretically in [44] and employed numerically in [62]. These polynomials are defined as
    $$\mathrm{CF}_k(\sigma)=\mathrm{CGF}_k^{\,\sigma,1,\mu}(\sigma).$$
  • The convolved Pell polynomials $\mathrm{CP}_k(\sigma)$, which were investigated theoretically in [43]. These polynomials are defined as
    $$\mathrm{CP}_k(\sigma)=\mathrm{CGF}_k^{\,2\sigma,1,\mu}(\sigma).$$

2.4. Convolved Fermat Polynomials

This paper introduces a sequence of polynomials that generalize the Fermat sequence of polynomials, called convolved Fermat polynomials. These polynomials are particular polynomials of the polynomials generated by (12). More precisely, they can be obtained from the polynomials $\mathrm{CGF}_k^{r,s,\mu}(\sigma)$ with the following choices:
$$r(\sigma)=3\sigma,\qquad s(\sigma)=-2,\qquad \mu=n,$$
where $n$ is a positive real constant.
We will denote these polynomials by $\phi_i^n(\sigma)$; from the initial values above, $\phi_0^n(\sigma)=1$ and $\phi_1^n(\sigma)=3n\sigma$. The recursive formula fulfilled by $\phi_i^n(\sigma)$ is
$$2(i+2n-2)\,\phi_{i-2}^{n}(\sigma)-3\sigma(i+n-1)\,\phi_{i-1}^{n}(\sigma)+i\,\phi_i^{n}(\sigma)=0,\qquad i\ge 2.$$
From (11), the analytic form of the convolved Fermat polynomials $\phi_i^n(\sigma)$ takes the following form:
$$\phi_i^n(\sigma)=\sum_{r=0}^{\lfloor i/2\rfloor}\frac{(-2)^{r}\,3^{\,i-2r}\,(n)_{i-r}}{r!\,(i-2r)!}\,\sigma^{\,i-2r}.$$
Remark 3.
It is useful to write the power form representation (14) in the following alternative form:
$$\phi_i^n(\sigma)=\sum_{k=0}^{i}\frac{3^{k}\,(-2)^{\frac{i-k}{2}}\,a_{i+k}\,(n)_{\frac{i+k}{2}}}{k!\left(\frac{i-k}{2}\right)!}\,\sigma^{k},$$
where
$$a_{r}=\begin{cases}1,&\text{if } r \text{ is even},\\ 0,&\text{otherwise}.\end{cases}$$
Remark 4.
The Fermat polynomials $F_k(\sigma)$ generated by (8) are particular polynomials of the convolved Fermat polynomials in the sense that
$$F_k(\sigma)=\phi_k^{1}(\sigma).$$
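A short numerical check (ours, not from the paper) of the recurrence (13), the analytic form (14), and Remark 4: the convolved Fermat polynomials with $n=1$ reduce to the standard Fermat polynomials.

```python
import math

def poch(x, m):
    """Pochhammer symbol (x)_m = x*(x+1)*...*(x+m-1)."""
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def phi(i, n, sigma):
    """Convolved Fermat polynomial from the recurrence
    i*phi_i = 3*sigma*(i+n-1)*phi_{i-1} - 2*(i+2n-2)*phi_{i-2}."""
    p0, p1 = 1.0, 3.0 * n * sigma
    if i == 0:
        return p0
    for j in range(2, i + 1):
        p0, p1 = p1, (3 * sigma * (j + n - 1) * p1 - 2 * (j + 2 * n - 2) * p0) / j
    return p1

def phi_analytic(i, n, sigma):
    """Analytic (power) form of phi_i^n."""
    return sum((-2)**r * 3**(i - 2*r) * poch(n, i - r)
               / (math.factorial(r) * math.factorial(i - 2*r)) * sigma**(i - 2*r)
               for r in range(i // 2 + 1))

def fermat(k, t):
    """Standard Fermat polynomials, F_0 = 1, F_1 = 3t."""
    f0, f1 = 1.0, 3.0 * t
    if k == 0:
        return f0
    for _ in range(k - 1):
        f0, f1 = f1, 3 * t * f1 - 2 * f0
    return f1

s = 0.6
print(all(abs(phi(i, 2.5, s) - phi_analytic(i, 2.5, s)) < 1e-9 for i in range(8)))
print(all(abs(phi(i, 1, s) - fermat(i, s)) < 1e-9 for i in range(8)))
```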
The next section develops some new formulas regarding the convolved Fermat polynomials that will be useful in the sequel.

3. Some New Formulas of the Convolved Fermat Polynomials

Our suggested numerical technique for solving the fractional Burgers’ equation relies heavily on some new formulas of the convolved Fermat polynomials, which are the focus of this section. In particular, we develop the following formulas:
  • The inversion formula of the convolved Fermat polynomials.
  • Explicit formulas for both the integer and fractional derivatives of the convolved Fermat polynomials.
  • The operational matrices of both the integer and fractional derivatives of the convolved Fermat polynomials.
The following theorem displays the inversion formula for the power Formula (14). First, one requires the lemma below.
Lemma 1.
Consider $p$ to be any non-negative integer. The following identity is valid:
$$\sum_{m=0}^{p}\frac{(-1)^{m}\,(i-2m+n)\,(i-m+n-p-1)!}{m!\,(i-m+n)!\,(i-2p)!\,(p-m)!}=\begin{cases}\dfrac{1}{i!},&p=0,\\[6pt] 0,&p>0.\end{cases}$$
Proof. 
It is clear that the identity holds for $p=0$, so to complete the proof we set
$$S_{p,i}=\sum_{m=0}^{p}\frac{(-1)^{m}\,(i-2m+n)\,(i-m+n-p-1)!}{m!\,(i-m+n)!\,(i-2p)!\,(p-m)!},\qquad p\ge 1,$$
and we will show that
$$S_{p,i}=0,\qquad p\ge 1.$$
Shifting the summation index, we can write $S_{p,i}$ as
$$S_{p,i}=\sum_{m=1}^{p+1}\frac{(-1)^{m+1}\,(i-2m+n+2)\,(i-m+n-p)!}{(m-1)!\,(i-m+n+1)!\,(i-2p)!\,(p-m+1)!}=\sum_{m=1}^{p}\frac{(-1)^{m+1}\,(i-2m+n+2)\,(i-m+n-p)!}{(m-1)!\,(i-m+n+1)!\,(i-2p)!\,(p-m+1)!}+\frac{(-1)^{p}\,(i+n-2p)!}{(i-2p)!\,(i+n-p)!\,p!}.$$
We write the last formula in the form
$$S_{p,i}=M_{p,i}+\frac{(-1)^{p}\,(i+n-2p)!}{(i-2p)!\,(i+n-p)!\,p!},$$
with
$$M_{p,i}=\sum_{m=1}^{p}\frac{(-1)^{m+1}\,(i-2m+n+2)\,(i-m+n-p)!}{(m-1)!\,(i-m+n+1)!\,(i-2p)!\,(p-m+1)!}.$$
Now, Zeilberger’s algorithm [63] aids in finding a closed form for $M_{p,i}$. The recurrence relation satisfied by $M_{p,i}$ is
$$(p+1)(i+n-2p-1)(i+n-2p)\,M_{p+1,i}+(i-2p-1)(i-2p)(i+n-p)\,M_{p,i}=0,$$
with the initial value
$$M_{1,i}=\frac{(i+n)\,(i+n-2)!}{(i-2)!\,(i+n)!}.$$
The exact solution of (19) is given by
$$M_{p,i}=\frac{(-1)^{p+1}\,(i+n-2p)!}{(i-2p)!\,(i+n-p)!\,p!}.$$
The closed form in (20), together with Formula (18), implies that
$$S_{p,i}=0,\qquad p\ge 1.$$
This ends the proof.    □
Theorem 1.
Let $i$ be any non-negative integer. The inversion formula of $\phi_i^n(\sigma)$ is given by
$$\sigma^{i}=\frac{i!}{3^{i}}\sum_{r=0}^{\lfloor i/2\rfloor}\frac{2^{r}}{r!\,(n)_{i-2r}\,(i+n-2r+1)_{r}}\,\phi_{i-2r}^{n}(\sigma).$$
Proof. 
To prove Formula (21), we consider
$$Z(\sigma)=\frac{i!}{3^{i}}\sum_{r=0}^{\lfloor i/2\rfloor}\frac{2^{r}}{r!\,(n)_{i-2r}\,(i+n-2r+1)_{r}}\,\phi_{i-2r}^{n}(\sigma),$$
and we will show that
$$Z(\sigma)=\sigma^{i}.$$
Now, based on the analytic form in (14), $Z(\sigma)$ can be written as
$$Z(\sigma)=\frac{i!}{3^{i}}\sum_{m=0}^{\lfloor i/2\rfloor}\frac{2^{m}}{m!\,(n)_{i-2m}\,(i-2m+n+1)_{m}}\sum_{r=0}^{\lfloor (i-2m)/2\rfloor}\frac{(-2)^{r}\,3^{\,i-2m-2r}\,(n)_{i-2m-r}}{(i-2m-2r)!\,r!}\,\sigma^{\,i-2m-2r},$$
which can be represented as
$$Z(\sigma)=\sum_{p=0}^{\lfloor i/2\rfloor}\left(\frac{2}{9}\right)^{p} i!\,\sigma^{\,i-2p}\sum_{m=0}^{p}\frac{(-1)^{m}\,(i-2m+n)\,(i-m+n-p-1)!}{m!\,(i-m+n)!\,(i-2p)!\,(p-m)!},$$
which is also equivalent to
$$Z(\sigma)=\sigma^{i}+\sum_{p=1}^{\lfloor i/2\rfloor}\left(\frac{2}{9}\right)^{p} i!\,\sigma^{\,i-2p}\sum_{m=0}^{p}\frac{(-1)^{m}\,(i-2m+n)\,(i-m+n-p-1)!}{m!\,(i-m+n)!\,(i-2p)!\,(p-m)!}.$$
It is clear now that the application of Lemma 1 leads to
$$Z(\sigma)=\sigma^{i},$$
and, accordingly, the proof is now complete. □
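Theorem 1 can also be verified numerically; the sketch below (ours, not from the paper) rebuilds $\sigma^i$ from the inversion formula and compares it with the monomial directly.

```python
import math

def poch(x, m):
    """Pochhammer symbol (x)_m."""
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def phi(i, n, sigma):
    """Convolved Fermat polynomial via the recurrence (13)."""
    p0, p1 = 1.0, 3.0 * n * sigma
    if i == 0:
        return p0
    for j in range(2, i + 1):
        p0, p1 = p1, (3 * sigma * (j + n - 1) * p1 - 2 * (j + 2 * n - 2) * p0) / j
    return p1

def power_from_phi(i, n, sigma):
    """sigma**i reconstructed from the inversion formula of Theorem 1."""
    return (math.factorial(i) / 3**i) * sum(
        2**r / (math.factorial(r) * poch(n, i - 2*r) * poch(i + n - 2*r + 1, r))
        * phi(i - 2*r, n, sigma) for r in range(i // 2 + 1))

n, s = 2.0, 0.4
print(all(abs(power_from_phi(i, n, s) - s**i) < 1e-9 for i in range(8)))
```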
Remark 5.
Equation (21) can be rewritten in another form as
$$\sigma^{p}=\sum_{L=0}^{p}H_{L,p}\,\phi_L^{n}(\sigma),$$
where
$$H_{r,i}=\frac{i!\,2^{\frac{i-r}{2}}\,a_{i+r}}{3^{i}\left(\frac{i-r}{2}\right)!\,(n)_{r}\,(n+r+1)_{\frac{i-r}{2}}}.$$
Theorem 2.
For any positive integers $q$ and $i$ with $i\ge q$, the $q$th derivative of $\phi_i^n(\sigma)$ can be expressed as
$$\frac{d^{q}\phi_i^n(\sigma)}{d\sigma^{q}}=\sum_{k=0}^{i-q}A_{i,k}^{q}\,\phi_k^n(\sigma),$$
where
$$A_{i,k}^{q}=3^{q}\,(-1)^{i-k-q}\,2^{\frac{1}{2}(i-k-q)}\,(k+n)\binom{\frac{1}{2}(i-k+q-2)}{\frac{1}{2}(i-k-q)}\left(\tfrac{1}{2}(i+k+2n-q+2)\right)_{q-1}a_{i-k-q},$$
and a k is defined as in (16).
Proof. 
Differentiating Formula (14) $q$ times with respect to $\sigma$ yields
$$D^{q}\phi_i^n(\sigma)=\sum_{j=0}^{\lfloor (i-q)/2\rfloor}\frac{(-2)^{j}\,3^{\,i-2j}\,(n)_{i-j}\,(i-2j-q+1)_{q}}{(i-2j)!\,j!}\,\sigma^{\,i-2j-q}.$$
The application of the inversion Formula (21) leads to the following formula:
$$D^{q}\phi_i^n(\sigma)=\frac{3^{q}}{(n-1)!}\sum_{j=0}^{\lfloor (i-q)/2\rfloor}\frac{(-2)^{j}\,(i-j+n-1)!}{j!}\sum_{r=0}^{\lfloor (i-2j-q)/2\rfloor}\frac{2^{r}}{r!\,(n)_{i-2j-q-2r}\,(i-2j+n-q-2r+1)_{r}}\,\phi_{i-q-2(j+r)}^{n}(\sigma),$$
which can be written alternatively as
$$D^{q}\phi_i^n(\sigma)=\sum_{L=0}^{\lfloor (i-q)/2\rfloor}(-2)^{L}\,(i-2L+n-q)\sum_{s=0}^{L}\frac{(-1)^{L-s}\,3^{q}\,(i+n-s-1)!}{s!\,(L-s)!\,(i-L+n-q-s)!}\,\phi_{i-q-2L}^{n}(\sigma),$$
which can be expressed again as
$$D^{q}\phi_i^n(\sigma)=3^{q}\,(i+n-1)!\sum_{L=0}^{\lfloor (i-q)/2\rfloor}\frac{(-1)^{L+1}\,(-i+2L-n+q)\,(-2)^{L}}{L!\,(i-L+n-q)!}\;{}_2F_1\!\left(\left.\begin{matrix}-L,\;-i+L-n+q\\ 1-i-n\end{matrix}\right|\,1\right)\phi_{i-q-2L}^{n}(\sigma).$$
Using the Chu–Vandermonde identity [64], the above ${}_2F_1(1)$ can be summed to give the following closed form:
$$ {}_2F_1\!\left(\left.\begin{matrix}-L,\;-i+L-n+q\\ 1-i-n\end{matrix}\right|\,1\right)=\frac{(i-L+n-1)!\,(L+q-1)!}{(i+n-1)!\,(q-1)!},$$
and, accordingly, Formula (32) turns into
$$D^{q}\phi_i^n(\sigma)=3^{q}\sum_{L=0}^{\lfloor (i-q)/2\rfloor}(-1)^{L}\,(i-2L+n-q)\,(-2)^{L}\binom{L+q-1}{L}\left(i-L+n-q+1\right)_{q-1}\phi_{i-q-2L}^{n}(\sigma).$$
The last formula can be written as in (27). This proves Theorem 2.    □
Corollary 1.
$\dfrac{d\phi_i^n(\sigma)}{d\sigma}$ has the form
$$\frac{d\phi_i^n(\sigma)}{d\sigma}=\sum_{k=0}^{i-1}\lambda_{k,i}\,\phi_k^n(\sigma),\qquad i\ge 1,$$
where
$$\lambda_{k,i}=3\,(-1)^{i-k+1}\,2^{\frac{1}{2}(i-k-1)}\,(k+n)\,a_{i-k-1}.$$
Proof. 
Formula (34) may be readily obtained by substituting q = 1 in Theorem 2.    □
Corollary 2.
$\dfrac{d^{2}\phi_i^n(\sigma)}{d\sigma^{2}}$ has the form
$$\frac{d^{2}\phi_i^n(\sigma)}{d\sigma^{2}}=\sum_{k=0}^{i-2}\gamma_{k,i}\,\phi_k^n(\sigma),\qquad i\ge 2,$$
where
$$\gamma_{k,i}=9\,(-1)^{i-k}\,2^{\frac{1}{2}(i-k-6)}\,(i-k)\,(k+n)\,(i+k+2n)\,a_{i-k}.$$
Proof. 
Formula (36) may be readily obtained by substituting q = 2 in Theorem 2.    □
Remark 6.
As immediate consequences of Corollaries 1 and 2, the operational matrices of derivatives of the convolved Fermat polynomials can be deduced. The following corollary presents this result.
Corollary 3.
If we define the vector $\boldsymbol{\phi}^n(\sigma)=\left[\phi_0^n(\sigma),\phi_1^n(\sigma),\ldots,\phi_M^n(\sigma)\right]^{T}$, then $\dfrac{d\boldsymbol{\phi}^n(\sigma)}{d\sigma}$ and $\dfrac{d^{2}\boldsymbol{\phi}^n(\sigma)}{d\sigma^{2}}$ can be expressed in the following forms:
$$\frac{d\boldsymbol{\phi}^n(\sigma)}{d\sigma}=\bar{\lambda}\,\boldsymbol{\phi}^n(\sigma),$$
$$\frac{d^{2}\boldsymbol{\phi}^n(\sigma)}{d\sigma^{2}}=\bar{\gamma}\,\boldsymbol{\phi}^n(\sigma),$$
where $\bar{\lambda}=(\bar{\lambda}_{k,i})$ and $\bar{\gamma}=(\bar{\gamma}_{k,i})$ are the operational matrices of derivatives of order $(M+1)\times(M+1)$ whose entries are, respectively, given as
$$\bar{\lambda}_{k,i}=\begin{cases}\lambda_{k,i},&\text{if } i>k,\\ 0,&\text{otherwise},\end{cases}\qquad \bar{\gamma}_{k,i}=\begin{cases}\gamma_{k,i},&\text{if } i>k+1,\\ 0,&\text{otherwise}.\end{cases}$$
For example, the matrices λ ¯ and γ ¯ take the following forms for M = 6 .
$$\bar{\lambda}=\begin{pmatrix} 0&0&0&0&0&0&0\\ 3n&0&0&0&0&0&0\\ 0&3(n+1)&0&0&0&0&0\\ 6n&0&3(n+2)&0&0&0&0\\ 0&6(n+1)&0&3(n+3)&0&0&0\\ 12n&0&6(n+2)&0&3(n+4)&0&0\\ 0&12(n+1)&0&6(n+3)&0&3(n+5)&0 \end{pmatrix},$$
$$\bar{\gamma}=\begin{pmatrix} 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 9n(n+1)&0&0&0&0&0&0\\ 0&9(n+1)(n+2)&0&0&0&0&0\\ 36n(n+2)&0&9(n+2)(n+3)&0&0&0&0\\ 0&36(n+1)(n+3)&0&9(n+3)(n+4)&0&0&0\\ 108n(n+3)&0&36(n+2)(n+4)&0&9(n+4)(n+5)&0&0 \end{pmatrix}.$$
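The entries of $\bar{\lambda}$ and $\bar{\gamma}$ can be generated programmatically from Corollaries 1 and 2 and checked against exact differentiation of the power form (14). The Python sketch below is ours, not part of the paper; since $a_{i-k-1}$ and $a_{i-k}$ select parities, the signs $(-1)^{i-k+1}$ and $(-1)^{i-k}$ equal one on every surviving entry and are dropped.

```python
import math

def poch(x, m):
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def phi_coeffs(i, n):
    """Power-basis coefficients of phi_i^n from the analytic form (14)."""
    c = [0.0] * (i + 1)
    for r in range(i // 2 + 1):
        c[i - 2*r] = (-2)**r * 3**(i - 2*r) * poch(n, i - r) \
                     / (math.factorial(r) * math.factorial(i - 2*r))
    return c

def diff_coeffs(c, q=1):
    """Exact q-fold differentiation in the power basis."""
    for _ in range(q):
        c = [k * c[k] for k in range(1, len(c))] or [0.0]
    return c

def lam(k, i, n):
    """Entries lambda_{k,i} of the first-derivative matrix (Corollary 1)."""
    return 3 * 2**((i - k - 1) // 2) * (k + n) if i > k and (i - k) % 2 == 1 else 0.0

def gam(k, i, n):
    """Entries gamma_{k,i} of the second-derivative matrix (Corollary 2)."""
    if i > k + 1 and (i - k) % 2 == 0:
        return 9 * 2.0**((i - k - 6) / 2) * (i - k) * (k + n) * (i + k + 2 * n)
    return 0.0

def poly_eval(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

n, M, x = 2.0, 6, 0.3
for i in range(M + 1):
    d1 = poly_eval(diff_coeffs(phi_coeffs(i, n), 1), x)
    d2 = poly_eval(diff_coeffs(phi_coeffs(i, n), 2), x)
    s1 = sum(lam(k, i, n) * poly_eval(phi_coeffs(k, n), x) for k in range(i))
    s2 = sum(gam(k, i, n) * poly_eval(phi_coeffs(k, n), x) for k in range(max(i - 1, 0)))
    assert abs(d1 - s1) < 1e-8 and abs(d2 - s2) < 1e-8
print("operational matrix entries verified for M = 6")
```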
Theorem 3.
For $0<\beta<1$, the fractional derivative of $\phi_j^n(t)$ can be represented as
$$D_t^{\beta}\phi_j^n(t)=t^{-\beta}\left(\sum_{p=0}^{j}\psi_{p,j}^{\beta}\,\phi_p^n(t)-\nu_j\right),$$
where
$$\psi_{p,j}^{\beta}=\sum_{L=p}^{j}\frac{L!\,B_{L,j}\,H_{p,L}}{\Gamma(L-\beta+1)},$$
and
$$\nu_j=\frac{(-1)^{\frac{j}{2}}\,2^{\frac{j}{2}}\,a_j\,(n)_{\frac{j}{2}}}{\left(\frac{j}{2}\right)!\,\Gamma(1-\beta)}.$$
Proof. 
Formula (15) allows one to write $D_t^{\beta}\phi_j^n(t)$ as
$$D_t^{\beta}\phi_j^n(t)=\sum_{p=1}^{j}\frac{p!\,B_{p,j}}{\Gamma(p-\beta+1)}\,t^{\,p-\beta},$$
where
$$B_{k,i}=\frac{3^{k}\,(-2)^{\frac{i-k}{2}}\,a_{i+k}\,(n)_{\frac{i+k}{2}}}{k!\left(\frac{i-k}{2}\right)!}.$$
Now, the inversion Formula (25) can be rewritten as
$$t^{p}=\sum_{L=0}^{p}H_{L,p}\,\phi_L^n(t),$$
where $H_{L,p}$ is defined in Equation (26). Therefore, after using Equation (48), we can write Equation (46) as
$$D_t^{\beta}\phi_j^n(t)=t^{-\beta}\sum_{p=1}^{j}\sum_{L=0}^{p}\frac{p!\,B_{p,j}\,H_{L,p}}{\Gamma(p-\beta+1)}\,\phi_L^n(t),$$
which can be transformed into
$$D_t^{\beta}\phi_j^n(t)=t^{-\beta}\left(\sum_{p=0}^{j}\psi_{p,j}^{\beta}\,\phi_p^n(t)-\nu_j\right),$$
where $\psi_{p,j}^{\beta}$ and $\nu_j$ are as defined in (44) and (45). □
Corollary 4.
The fractional derivative of $\boldsymbol{\phi}^n(t)$ can be written in matrix form as
$$D_t^{\beta}\boldsymbol{\phi}^n(t)=t^{-\beta}\left(\psi\,\boldsymbol{\phi}^n(t)-\nu\right),$$
where $\nu=[\nu_0,\nu_1,\ldots,\nu_M]^{T}$. Additionally, $\psi=(\psi_{p,j}^{\beta})$ is the operational matrix of the fractional derivative of order $(M+1)\times(M+1)$; the entries $\psi_{p,j}^{\beta}$ and $\nu_j$ are given, respectively, in (44) and (45).
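Theorem 3 and Corollary 4 can be validated by comparing the operational form against a direct term-by-term Caputo differentiation of the power form (15). The sketch below is ours, not from the paper; `B` and `H` implement the coefficients $B_{k,i}$ and $H_{r,i}$ defined in the proof above.

```python
import math

def poch(x, m):
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def phi_coeffs(j, n):
    """B[k] = coefficient of t**k in phi_j^n (the B_{k,j} of the proof)."""
    c = [0.0] * (j + 1)
    for r in range(j // 2 + 1):
        c[j - 2*r] = (-2)**r * 3**(j - 2*r) * poch(n, j - r) \
                     / (math.factorial(r) * math.factorial(j - 2*r))
    return c

def H(r, i, n):
    """Inversion coefficients: t**i = sum_r H(r, i) * phi_r^n(t)."""
    if (i - r) % 2 or r > i:
        return 0.0
    m = (i - r) // 2
    return math.factorial(i) * 2**m / (3**i * math.factorial(m) * poch(n, r) * poch(n + r + 1, m))

n, beta, j, t = 2.0, 0.6, 5, 0.7
B = phi_coeffs(j, n)

# left side: Caputo derivative applied term by term (power rule)
lhs = sum(B[p] * math.factorial(p) / math.gamma(p - beta + 1) * t**(p - beta)
          for p in range(1, j + 1))

# right side: the operational form of Theorem 3
psi = [sum(math.factorial(L) * B[L] * H(p, L, n) / math.gamma(L - beta + 1)
           for L in range(p, j + 1)) for p in range(j + 1)]
nu = B[0] / math.gamma(1 - beta)
phi_vals = [sum(c * t**k for k, c in enumerate(phi_coeffs(p, n))) for p in range(j + 1)]
rhs = t**(-beta) * (sum(psi[p] * phi_vals[p] for p in range(j + 1)) - nu)

print(abs(lhs - rhs) < 1e-9)
```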

4. Numerical Technique for the Time-Fractional Burgers’ Equation

This section focuses on presenting a numerical algorithm for solving the time-fractional Burgers’ Equation (1) governed by the initial and boundary conditions (2)–(3).
One may set
$$I_M(\Omega)=\operatorname{span}\left\{\phi_i^n(\sigma)\,\phi_j^n(t):\ 0\le i,j\le M\right\},$$
where $\Omega=(0,1)\times(0,1)$, and $\phi_i^n(\sigma)$ are the convolved Fermat polynomials given explicitly in (14).
Consequently, any function $\chi_M(\sigma,t)\in I_M(\Omega)$ can be represented as
$$\chi_M(\sigma,t)=\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\,\phi_i^n(\sigma)\,\phi_j^n(t)=\boldsymbol{\phi}^n(\sigma)^{T}\,C\,\boldsymbol{\phi}^n(t),$$
where $\boldsymbol{\phi}^n(\sigma)=[\phi_0^n(\sigma),\phi_1^n(\sigma),\ldots,\phi_M^n(\sigma)]^{T}$, $\boldsymbol{\phi}^n(t)=[\phi_0^n(t),\phi_1^n(t),\ldots,\phi_M^n(t)]^{T}$, and $C=(c_{ij})_{0\le i,j\le M}$ is the unknown coefficient matrix, with $(M+1)^2$ entries.
Now, the residual $R(\sigma,t)$ of Equation (1) may be expressed as
$$\begin{aligned} R(\sigma,t)&=\frac{\partial^{\beta}\chi_M(\sigma,t)}{\partial t^{\beta}}+\chi_M(\sigma,t)\,\frac{\partial\chi_M(\sigma,t)}{\partial\sigma}-\Psi\,\frac{\partial^{2}\chi_M(\sigma,t)}{\partial\sigma^{2}}-G(\sigma,t)\\ &=\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\,\phi_i^n(\sigma)\,\frac{\partial^{\beta}\phi_j^n(t)}{\partial t^{\beta}}+\sum_{i=0}^{M}\sum_{j=0}^{M}\sum_{m=0}^{M}\sum_{\nu=0}^{M}c_{ij}\,c_{m\nu}\,\phi_i^n(\sigma)\,\phi_j^n(t)\,\frac{\partial\phi_m^n(\sigma)}{\partial\sigma}\,\phi_{\nu}^n(t)\\ &\quad-\Psi\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\,\frac{\partial^{2}\phi_i^n(\sigma)}{\partial\sigma^{2}}\,\phi_j^n(t)-G(\sigma,t). \end{aligned}$$
Based on Corollaries 3 and 4, in conjunction with (54), we may express $R(\sigma,t)$ in (55) in matrix form as
$$R(\sigma,t)=\boldsymbol{\phi}^n(\sigma)^{T}\,C\,t^{-\beta}\left[\psi\,\boldsymbol{\phi}^n(t)-\nu\right]+\left[\boldsymbol{\phi}^n(\sigma)^{T}\,C\,\boldsymbol{\phi}^n(t)\right]\left[\left(\bar{\lambda}\,\boldsymbol{\phi}^n(\sigma)\right)^{T} C\,\boldsymbol{\phi}^n(t)\right]-\Psi\left(\bar{\gamma}\,\boldsymbol{\phi}^n(\sigma)\right)^{T} C\,\boldsymbol{\phi}^n(t)-G(\sigma,t).$$
The collocation method is applied in the following manner to obtain $c_{ij}$. Specifically, for certain nodes $(\sigma_i,t_j)$, the residual $R(\sigma,t)$ is forced to vanish. We have
$$R\left(\frac{i+1}{M+2},\frac{j+1}{M+2}\right)=0,\qquad 0\le i\le M-2,\quad 0\le j\le M-1.$$
Furthermore, the conditions in (2) and (3) lead to
$$\boldsymbol{\phi}^n\left(\frac{i+1}{M+2}\right)^{T} C\,\boldsymbol{\phi}^n(0)=\chi_1\left(\frac{i+1}{M+2}\right),\qquad i=0,1,\ldots,M,$$
$$\boldsymbol{\phi}^n(0)^{T}\,C\,\boldsymbol{\phi}^n\left(\frac{j+1}{M+2}\right)=\chi_2\left(\frac{j+1}{M+2}\right),\qquad j=0,1,\ldots,M-1,$$
$$\boldsymbol{\phi}^n(1)^{T}\,C\,\boldsymbol{\phi}^n\left(\frac{j+1}{M+2}\right)=\chi_3\left(\frac{j+1}{M+2}\right),\qquad j=0,1,\ldots,M-1.$$
The nonlinear system of Equations (57) and (58), which comprises $(M+1)^2$ equations in the unknown expansion coefficients $c_{ij}$, may be solved using Newton’s iterative technique.
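A small bookkeeping check (ours, not from the paper): the collocated residual equations and the initial/boundary conditions together supply exactly $(M+1)^2$ equations, matching the number of unknowns $c_{ij}$.

```python
def equation_count(M):
    """Count the equations of the collocation system for a given M."""
    residual = (M - 1) * M   # 0 <= i <= M-2 gives M-1 values; 0 <= j <= M-1 gives M values
    initial = M + 1          # chi(sigma_i, 0) = chi_1, i = 0..M
    boundary = 2 * M         # chi(0, t_j) = chi_2 and chi(1, t_j) = chi_3, j = 0..M-1
    return residual + initial + boundary

print(all(equation_count(M) == (M + 1)**2 for M in range(1, 10)))
```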
Remark 7.
Algorithm 1 lists all the steps required to obtain the proposed numerical solution.
Algorithm 1 Coding algorithm for the proposed technique
Input: $\beta$, $\Psi$, $\chi_1(\sigma)$, $\chi_2(t)$, $\chi_3(t)$, and $G(\sigma,t)$.
Step 1. Assume the spectral solution $\chi_M(\sigma,t)$ as in (54).
Step 2. Apply Corollaries 3 and 4, together with (54), to obtain the matrix form of $R(\sigma,t)$ as in (56).
Step 3. Apply the collocation method to obtain the nonlinear system of Equations (57) and (58).
Step 4. Use the FindRoot command with the initial guess $c_{ij}=10^{-i-j}$, $i,j=0,1,\ldots,M$, to solve the system (57) and (58) and obtain $c_{ij}$.
Output: $\chi_M(\sigma,t)$.

5. The Convergence and Error Analysis

Here, we examine the convergence of the convolved Fermat expansion. The following theorems and lemmas are considered.
  • Lemma 4 expresses an infinitely differentiable function $f(\sigma)$ in terms of $\phi_i^n(\sigma)$.
  • Lemma 5 gives an upper bound for $\phi_i^n(\sigma)$.
  • Theorem 4 gives an upper bound for the unknown expansion coefficients $c_i$.
  • Theorem 5 gives an upper bound for the truncation error $e_M(\sigma)$.
  • Theorem 6 gives an upper bound for the unknown double expansion coefficients $c_{ij}$.
  • Theorem 7 gives an upper bound for the truncation error $|Ø(\sigma,t)-Ø_M(\sigma,t)|$.
Lemma 2.
The following inequality holds [65]:
$$|I_i(\sigma)|\le \frac{\sigma^{i}\cosh(\sigma)}{2^{i}\,\Gamma(i+1)},\qquad \sigma>0,$$
where I i ( x ) is the modified Bessel function of the first kind.
Lemma 3.
The following inequality holds [66]:
$$|L_i(\sigma)|\le \frac{\sigma^{i}\sinh(\sigma)}{\sqrt{\pi}\,2^{i}\,\Gamma\!\left(i+\frac{3}{2}\right)},\qquad \sigma>0,$$
where L i ( x ) are the modified Struve functions.
Lemma 4.
For a function $f(\sigma)$ that is infinitely differentiable at the origin, one has
$$f(\sigma)=\sum_{i=0}^{\infty}\sum_{s=i}^{\infty}\frac{3^{-s}\,f^{(s)}(0)\,(n+i)\,\Gamma(n)\,2^{\frac{s-i}{2}}\,a_{i+s}}{\Gamma\!\left(\frac{1}{2}(s-i+2)\right)\Gamma\!\left(\frac{1}{2}(2n+i+s+2)\right)}\,\phi_i^n(\sigma).$$
Proof. 
Let $f(\sigma)$ have the following expansion:
$$f(\sigma)=\sum_{p=0}^{\infty}\frac{f^{(p)}(0)}{p!}\,\sigma^{p}.$$
In virtue of the inversion Formula (25), the last expansion can be turned into
$$f(\sigma)=\sum_{p=0}^{\infty}\sum_{L=0}^{p}\frac{f^{(p)}(0)\,H_{L,p}}{p!}\,\phi_L^n(\sigma).$$
Now, if we expand and rearrange the terms of the last equation, we obtain
$$f(\sigma)=\sum_{i=0}^{\infty}\sum_{s=i}^{\infty}\frac{f^{(s)}(0)\,H_{i,s}}{s!}\,\phi_i^n(\sigma),$$
which can be rewritten as
$$f(\sigma)=\sum_{i=0}^{\infty}\sum_{s=i}^{\infty}\frac{3^{-s}\,f^{(s)}(0)\,(n+i)\,\Gamma(n)\,2^{\frac{s-i}{2}}\,a_{i+s}}{\Gamma\!\left(\frac{1}{2}(s-i+2)\right)\Gamma\!\left(\frac{1}{2}(2n+i+s+2)\right)}\,\phi_i^n(\sigma).$$
The lemma is now proved. □
Lemma 5.
For any positive integer $i$, the following inequality is valid:
$$|\phi_i^n(\sigma)|\le (4n)^{i},\qquad \sigma\in[0,1].$$
Proof. 
We proceed by induction on $i$. Assume that (66) is valid for $(i-1)$ and $(i-2)$; that is,
$$|\phi_{i-1}^n(\sigma)|\le (4n)^{i-1}\qquad\text{and}\qquad |\phi_{i-2}^n(\sigma)|\le (4n)^{i-2}.$$
By virtue of the recursive Formula (13), together with the two inequalities in (67), we obtain
$$|\phi_i^n(\sigma)|=\left|\frac{3\sigma(i+n-1)}{i}\,\phi_{i-1}^n(\sigma)-\frac{2(i+2n-2)}{i}\,\phi_{i-2}^n(\sigma)\right|\le 3n\,(4n)^{i-1}+4n\,(4n)^{i-2}=(4n)^{i-1}(3n+1).$$
Noting the simple inequality
$$3n+1\le 4n,\qquad n\ge 1,$$
we obtain the estimation result (66). □
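Lemma 5 can be probed numerically; the sketch below (ours, not from the paper) samples $\sigma\in[0,1]$ for $n=1.5$ (the proof uses $n\ge 1$).

```python
def phi(i, n, sigma):
    """Convolved Fermat polynomial via the recurrence (13)."""
    p0, p1 = 1.0, 3.0 * n * sigma
    if i == 0:
        return p0
    for j in range(2, i + 1):
        p0, p1 = p1, (3 * sigma * (j + n - 1) * p1 - 2 * (j + 2 * n - 2) * p0) / j
    return p1

n = 1.5
ok = all(abs(phi(i, n, s / 50)) <= (4 * n)**i
         for i in range(12) for s in range(51))
print(ok)
```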
Theorem 4.
If $f(\sigma)$ is defined on $[0,1]$ with $|f^{(i)}(0)|\le K^{i}$, $i>0$, where $K$ is a positive constant, and $f(\sigma)=\sum_{i=0}^{\infty}c_i\,\phi_i^n(\sigma)$, we obtain
$$|c_i|\le \frac{3^{-i}\cosh\!\left(\frac{2\sqrt{2}\,K}{3}\right)\Gamma(n)\,K^{i}}{\Gamma(n+i)}.$$
Moreover, the series converges absolutely.
Proof. 
Based on Lemma 4, we can write
$$c_i=\sum_{s=i}^{\infty}\frac{3^{-s}\,f^{(s)}(0)\,(n+i)\,\Gamma(n)\,2^{\frac{s-i}{2}}\,a_{i+s}}{\Gamma\!\left(\frac{1}{2}(s-i+2)\right)\Gamma\!\left(\frac{1}{2}(2n+i+s+2)\right)}.$$
Now, the assumptions of the theorem enable us to write
$$|c_i|\le \sum_{s=i}^{\infty}\frac{3^{-s}\,2^{\frac{s-i}{2}}\,K^{s}}{\left(\frac{s-i}{2}\right)!\,(n)_i\,(n+i+1)_{\frac{s-i}{2}}}.$$
Some simplifications lead to the following formula:
$$|c_i|\le \frac{3^{n}\,2^{-\frac{1}{2}(n+i)}\,(n+i)\,\Gamma(n)}{K^{n}}\left[I_{n+i}\!\left(\frac{2\sqrt{2}\,K}{3}\right)+L_{n+i}\!\left(\frac{2\sqrt{2}\,K}{3}\right)\right],$$
where $I_i(x)$ and $L_i(x)$ are, respectively, the modified Bessel function of the first kind and the modified Struve function.
The application of Lemmas 2 and 3 enables us to write the previous estimate as
$$|c_i|\le \frac{3^{-i}\cosh\!\left(\frac{2\sqrt{2}\,K}{3}\right)\Gamma(n)\,K^{i}}{\Gamma(n+i)}.$$
This ends the proof. □
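For a concrete instance (ours, not part of the paper), take $f(\sigma)=e^{\sigma}$, so that $|f^{(i)}(0)|=K^{i}$ with $K=1$; the coefficients $c_i$ computed from Lemma 4 (series truncated at $s=60$, all terms positive) stay below the bound of Theorem 4.

```python
import math

def poch(x, m):
    out = 1.0
    for i in range(m):
        out *= x + i
    return out

def H(r, i, n):
    """Inversion coefficients: t**i = sum_r H(r, i) * phi_r^n(t)."""
    if (i - r) % 2 or r > i:
        return 0.0
    m = (i - r) // 2
    return math.factorial(i) * 2**m / (3**i * math.factorial(m) * poch(n, r) * poch(n + r + 1, m))

n, K = 2.0, 1.0  # f(sigma) = exp(sigma): f^(s)(0) = 1 for all s

def c(i, smax=60):
    """Coefficient c_i of Lemma 4 for f = exp, truncated at s = smax."""
    return sum(H(i, s, n) / math.factorial(s) for s in range(i, smax))

def bound(i):
    """Upper bound of Theorem 4."""
    return 3.0**(-i) * math.cosh(2 * math.sqrt(2) * K / 3) * math.gamma(n) * K**i / math.gamma(n + i)

print(all(abs(c(i)) <= bound(i) for i in range(8)))
```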
Theorem 5.
If $f(\sigma)$ satisfies the hypothesis of Theorem 4, and $e_M(\sigma)=\sum_{i=M+1}^{\infty}c_i\,\phi_i^n(\sigma)$, then the following error estimation is satisfied:
$$|e_M(\sigma)|<\left(\frac{4}{3}\right)^{M+1}\cosh\!\left(\frac{2\sqrt{2}\,K}{3}\right)\frac{e^{\frac{4Kn}{3}}\,n^{M+1}\,\Gamma(n)\,K^{M+1}}{\Gamma(n+M+1)}.$$
Proof. 
The definition of $e_M(\sigma)$ enables us to write
$$|e_M(\sigma)|=\left|\sum_{i=M+1}^{\infty}c_i\,\phi_i^n(\sigma)\right|\le \cosh\!\left(\frac{2\sqrt{2}\,K}{3}\right)\Gamma(n)\sum_{i=M+1}^{\infty}\frac{3^{-i}\,K^{i}\,(4n)^{i}}{\Gamma(n+i)}.$$
Since
$$\sum_{i=M+1}^{\infty}\frac{3^{-i}\,K^{i}\,(4n)^{i}}{\Gamma(n+i)}=\left(\frac{4Kn}{3}\right)^{1-n} e^{\frac{4Kn}{3}}\left[1-\frac{\Gamma\!\left(n+M,\frac{4Kn}{3}\right)}{\Gamma(n+M)}\right]<\left(\frac{4}{3}\right)^{M+1}\frac{e^{\frac{4Kn}{3}}\,n^{M+1}\,K^{M+1}}{\Gamma(n+M+1)},$$
where $\Gamma(\cdot,\cdot)$ denotes the upper incomplete gamma function [67], we have
$$|e_M(\sigma)|<\left(\frac{4}{3}\right)^{M+1}\cosh\!\left(\frac{2\sqrt{2}\,K}{3}\right)\frac{e^{\frac{4Kn}{3}}\,n^{M+1}\,\Gamma(n)\,K^{M+1}}{\Gamma(n+M+1)}.$$
This proves the theorem. □
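The key tail estimate in this proof, $\sum_{i=M+1}^{\infty}x^{i}/\Gamma(n+i)<x^{M+1}e^{x}/\Gamma(n+M+1)$ with $x=4Kn/3$, can be verified numerically by direct summation; a minimal sketch (the truncation length of 120 terms is an illustrative choice):

```python
import math

def tail(n, K, M, terms=120):
    # Direct summation of the series tail, truncated after `terms` terms.
    x = 4.0 * K * n / 3.0
    return sum(x ** i / math.gamma(n + i) for i in range(M + 1, M + 1 + terms))

for n in (1, 2, 3):
    for K in (0.5, 1.0, 2.0):
        for M in (2, 4, 8):
            x = 4.0 * K * n / 3.0
            # bound used in the proof of Theorem 5
            bound = x ** (M + 1) * math.exp(x) / math.gamma(n + M + 1)
            assert tail(n, K, M) < bound
```

The inequality follows term by term from $\Gamma(n+M+1+k)\ge\Gamma(n+M+1)\,k!$ for $n\ge 1$, which the direct summation confirms numerically.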
Theorem 6.
If a function Ø$(\sigma,t)=g_{1}(\sigma)\,g_{2}(t)=\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}c_{ij}\,\phi_{i}^{n}(\sigma)\,\phi_{j}^{n}(t)$, with $|g_{1}^{(i)}(0)|\le K_{1}^{i}$ and $|g_{2}^{(i)}(0)|\le K_{2}^{i}$, where $K_{1}$ and $K_{2}$ are positive constants, then one has the following estimation:
$$|c_{ij}|\le \frac{3^{-i-j}\cosh\left(2\sqrt{\tfrac{2K_{1}}{3}}\right)\cosh\left(2\sqrt{\tfrac{2K_{2}}{3}}\right)}{\Gamma^{2}(n)}\,\frac{K_{1}^{i}\,K_{2}^{j}}{\Gamma(n+i)\,\Gamma(n+j)}.$$
Proof. 
If Lemma 4 is applied along with the assumption that Ø$(\sigma,t)=g_{1}(\sigma)\,g_{2}(t)$, then we obtain
$$c_{ij}=\sum_{s=i}^{\infty}\sum_{r=j}^{\infty}\frac{3^{-s-r}\,g_{1}^{(s)}(0)\,g_{2}^{(r)}(0)\,(n+i)(n+j)\,\Gamma^{2}(n)\,2^{\frac{r+s-i-j}{2}}\,a_{i+s}\,a_{j+r}}{\Gamma\left(\tfrac{1}{2}(i+s+2)\right)\Gamma\left(\tfrac{1}{2}(2n+i+s+2)\right)\Gamma\left(\tfrac{1}{2}(j+r+2)\right)\Gamma\left(\tfrac{1}{2}(2n+j+r+2)\right)}.$$
Using the assumptions $|g_{1}^{(i)}(0)|\le K_{1}^{i}$ and $|g_{2}^{(i)}(0)|\le K_{2}^{i}$, one obtains
$$|c_{ij}|\le \sum_{s=i}^{\infty}\frac{3^{-s}\,K_{1}^{s}\,(n+i)\,\Gamma(n)\,2^{\frac{s-i}{2}}}{\Gamma\left(\tfrac{1}{2}(i+s+2)\right)\,\Gamma\left(\tfrac{1}{2}(2n+i+s+2)\right)}\times\sum_{r=j}^{\infty}\frac{3^{-r}\,K_{2}^{r}\,(n+j)\,\Gamma(n)\,2^{\frac{r-j}{2}}}{\Gamma\left(\tfrac{1}{2}(j+r+2)\right)\,\Gamma\left(\tfrac{1}{2}(2n+j+r+2)\right)}.$$
Now, performing similar steps as in the proof of Theorem 4, we obtain the desired result. □
Theorem 7.
If Ø$(\sigma,t)$ meets the hypothesis of Theorem 6, then the truncation error satisfies the following upper bound:
$$|\text{Ø}(\sigma,t)-\text{Ø}_{M}(\sigma,t)|<Z\left(\frac{4}{3}\right)^{n+M+2}n^{M+2}\,\frac{(K_{1}n)^{n}K_{1}^{M}+(K_{2}n)^{n}K_{2}^{M}}{\Gamma(n+M+1)},$$
where
$$Z=\frac{\cosh\left(2\sqrt{\tfrac{2K_{1}}{3}}\right)\cosh\left(2\sqrt{\tfrac{2K_{2}}{3}}\right)}{\Gamma^{2}(n)\,K_{1}K_{2}\,(K_{1}n)^{n}(K_{2}n)^{n}}\,e^{\frac{4}{3}n(K_{1}+K_{2})}.$$
Proof. 
From the definitions of Ø$(\sigma,t)$ and Ø$_{M}(\sigma,t)$, we can write
$$|\text{Ø}(\sigma,t)-\text{Ø}_{M}(\sigma,t)|=\left|\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}c_{ij}\,\phi_{i}^{n}(\sigma)\,\phi_{j}^{n}(t)-\sum_{i=0}^{M}\sum_{j=0}^{M}c_{ij}\,\phi_{i}^{n}(\sigma)\,\phi_{j}^{n}(t)\right|\le\left|\sum_{i=0}^{M}\sum_{j=M+1}^{\infty}c_{ij}\,\phi_{i}^{n}(\sigma)\,\phi_{j}^{n}(t)\right|+\left|\sum_{i=M+1}^{\infty}\sum_{j=0}^{\infty}c_{ij}\,\phi_{i}^{n}(\sigma)\,\phi_{j}^{n}(t)\right|.$$
If Theorem 6 and Lemma 5 are used along with the following inequalities:
$$\sum_{i=0}^{M}\frac{3^{-i}\,K_{1}^{i}\,(4n)^{i}}{\Gamma(i+n)}=\frac{\left(\frac{4K_{1}n}{3}\right)^{1-n}e^{\frac{4K_{1}n}{3}}}{\Gamma(n-1)\,\Gamma(n+M)}\left[\Gamma(n-1)\,\Gamma\left(n+M,\tfrac{4nK_{1}}{3}\right)-\Gamma\left(n-1,\tfrac{4nK_{1}}{3}\right)\Gamma(n+M)\right]<\left(\frac{4K_{1}n}{3}\right)^{1-n}e^{\frac{4K_{1}n}{3}},$$
$$\sum_{j=M+1}^{\infty}\frac{3^{-j}\,K_{2}^{j}\,(4n)^{j}}{\Gamma(j+n)}=\left(\frac{4K_{2}n}{3}\right)^{1-n}e^{\frac{4K_{2}n}{3}}\left(1-\frac{\Gamma\left(n+M,\tfrac{4nK_{2}}{3}\right)}{\Gamma(n+M)}\right)<\left(\frac{4}{3}\right)^{M+1}\frac{e^{\frac{4K_{2}n}{3}}\,n^{M+1}\,K_{2}^{M+1}}{\Gamma(n+M+1)},$$
$$\sum_{i=0}^{\infty}\frac{3^{-i}\,K_{1}^{i}\,(4n)^{i}}{\Gamma(i+n)}=\left(\frac{4K_{1}n}{3}\right)^{1-n}e^{\frac{4K_{1}n}{3}}\left(1-\frac{\Gamma\left(n-1,\tfrac{4nK_{1}}{3}\right)}{\Gamma(n-1)}\right)<\left(\frac{4K_{1}n}{3}\right)^{1-n}e^{\frac{4K_{1}n}{3}},$$
then we obtain the following estimation:
$$|\text{Ø}(\sigma,t)-\text{Ø}_{M}(\sigma,t)|<Z\left(\frac{4}{3}\right)^{n+M+2}n^{M+2}\,\frac{(K_{1}n)^{n}K_{1}^{M}+(K_{2}n)^{n}K_{2}^{M}}{\Gamma(n+M+1)},$$
where
$$Z=\frac{\cosh\left(2\sqrt{\tfrac{2K_{1}}{3}}\right)\cosh\left(2\sqrt{\tfrac{2K_{2}}{3}}\right)}{\Gamma^{2}(n)\,K_{1}K_{2}\,(K_{1}n)^{n}(K_{2}n)^{n}}\,e^{\frac{4}{3}n(K_{1}+K_{2})}.$$
This finalizes the proof. □

6. Illustrative Examples

In this section, our algorithm is tested on some illustrative examples to assess its performance and accuracy. In addition, we present comparisons with some other methods in the literature. All codes were written and executed in Mathematica 11 on an HP Z420 workstation with an Intel(R) Xeon(R) E5-1620 v2 CPU (3.70 GHz), 16 GB of DDR3 RAM, and 512 GB of storage.
Example 1
([56]). Consider the following fractional Burgers’ equation:
$$\frac{\partial^{\beta}\chi(\sigma,t)}{\partial t^{\beta}}+\chi(\sigma,t)\,\frac{\partial\chi(\sigma,t)}{\partial\sigma}-\frac{\partial^{2}\chi(\sigma,t)}{\partial\sigma^{2}}=\frac{2}{\Gamma(3-\beta)}\,t^{2-\beta}\cos(\pi\sigma)-\pi\,t^{4}\sin(\pi\sigma)\cos(\pi\sigma)+\pi^{2}\,t^{2}\cos(\pi\sigma),$$
controlled by
$$\chi(\sigma,0)=0,\quad 0<\sigma<1,\qquad \chi(0,t)=t^{2},\quad \chi(1,t)=-t^{2},\quad 0<t\le 1,$$
where χ ( σ , t ) = t 2 cos ( π σ ) is the exact solution to (90).
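As a quick consistency check between this exact solution and the source term of (90), both sides of the equation can be evaluated numerically, using the classical Caputo formula $\partial^{\beta}t^{2}/\partial t^{\beta}=\frac{2}{\Gamma(3-\beta)}t^{2-\beta}$. A minimal sketch (the sample points are arbitrary):

```python
import math

def lhs(s, t, beta):
    # chi = t^2 cos(pi*sigma); Caputo derivative in time of t^2 is
    # 2 t^(2-beta) / Gamma(3-beta)
    c = math.cos(math.pi * s)
    d_beta = 2.0 * t ** (2 - beta) / math.gamma(3 - beta) * c
    chi = t ** 2 * c
    chi_s = -math.pi * t ** 2 * math.sin(math.pi * s)
    chi_ss = -math.pi ** 2 * t ** 2 * c
    return d_beta + chi * chi_s - chi_ss

def rhs(s, t, beta):
    # source term of Example 1
    return (2.0 / math.gamma(3 - beta) * t ** (2 - beta) * math.cos(math.pi * s)
            - math.pi * t ** 4 * math.sin(math.pi * s) * math.cos(math.pi * s)
            + math.pi ** 2 * t ** 2 * math.cos(math.pi * s))

for s in (0.1, 0.5, 0.9):
    for t in (0.2, 0.7, 1.0):
        assert abs(lhs(s, t, 0.7) - rhs(s, t, 0.7)) < 1e-12
```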
Table 1 displays a comparison between our proposed algorithm at various n for M = 10 and the method in [56] in the sense of the L∞-error. Figure 1 shows the AE at different values of t (left) and the approximate solution at t = 1 (right) for β = 0.7, n = 9, and M = 10. Table 2 and Table 3 report the MAE and the L∞-error at different values of M. In addition, the CPU time (in seconds) for Table 2 is presented in Table 4.
Example 2
([56]). Consider the following fractional Burgers’ equation:
$$\frac{\partial^{\beta}\chi(\sigma,t)}{\partial t^{\beta}}+\chi(\sigma,t)\,\frac{\partial\chi(\sigma,t)}{\partial\sigma}-\frac{\partial^{2}\chi(\sigma,t)}{\partial\sigma^{2}}=\frac{2}{\Gamma(3-\beta)}\,e^{\sigma}\,t^{2-\beta}+t^{4}e^{2\sigma}-t^{2}e^{\sigma},$$
controlled by
$$\chi(\sigma,0)=0,\quad 0<\sigma<1,\qquad \chi(0,t)=t^{2},\quad \chi(1,t)=e\,t^{2},\quad 0<t\le 1,$$
where χ ( σ , t ) = t 2 e σ is the exact solution of this problem.
Table 5 reports the AE at β = 0.8 and M = 10, together with the CPU time (in seconds) for our proposed method. Figure 2 illustrates the Log10(error) at β = 0.9 and different values of n. Table 6 presents a comparison between our method at different values of n with M = 10 and the method in [56] in the sense of the L∞-error. Also, Table 7 presents a comparison of the L∞-error between our method at n = 5 and M = 10 and the method in [68]. Figure 3 shows the AE at different values of t (left) and the approximate solution at t = 1 (right) for β = 0.7, n = 10, and M = 10.
Remark 8.
Table 8 demonstrates the agreement between the theoretical and numerical results of Figure 2 when n = 3; it is obtained by setting K₁ = 0.1, K₂ = 0.2, and n = 3 in the bound of Theorem 7.
Remark 9.
The results of Table 6 demonstrate that the approximations corresponding to n = 1 (the Fermat case) are not always better than the approximations for other choices of n.
Example 3
([56,69]). Consider the following fractional Burgers’ equation:
$$\frac{\partial^{\beta}\chi(\sigma,t)}{\partial t^{\beta}}+\chi(\sigma,t)\,\frac{\partial\chi(\sigma,t)}{\partial\sigma}-2\,\frac{\partial^{2}\chi(\sigma,t)}{\partial\sigma^{2}}=\frac{2}{\Gamma(3-\beta)}\,t^{2-\beta}\sin(\pi\sigma)+\pi\,t^{4}\sin(\pi\sigma)\cos(\pi\sigma)+2\pi^{2}\,t^{2}\sin(\pi\sigma),$$
controlled by
$$\chi(\sigma,0)=0,\quad 0<\sigma<1,\qquad \chi(0,t)=\chi(1,t)=0,\quad 0<t\le 1,$$
where χ ( σ , t ) = t 2 sin ( π σ ) is the exact solution to (96).
Table 9 displays a comparison between our proposed algorithm at various n for M = 10 and the methods in [56,69] in the sense of the L∞-error. Also, Table 10 presents a comparison of the L∞-error between our method at n = 7 and M = 10 and the method in [68]. Figure 4 shows the AE at different values of t (left) and the approximate solution at t = 1 (right) for β = 0.6, n = 5, and M = 10. Table 11 reports the AE at β = 0.7 and M = 10, together with the CPU time (in seconds) for our proposed method. Figure 5 illustrates the Log10(error) at β = 0.8 and different values of n.
Example 4.
Consider the following fractional Burgers’ equation:
$$\frac{\partial^{\beta}\chi(\sigma,t)}{\partial t^{\beta}}+\chi(\sigma,t)\,\frac{\partial\chi(\sigma,t)}{\partial\sigma}-\frac{\partial^{2}\chi(\sigma,t)}{\partial\sigma^{2}}=0,$$
controlled by
$$\chi(\sigma,0)=0,\quad 0<\sigma<1,\qquad \chi(0,t)=t,\quad \chi(1,t)=0,\quad 0<t\le 1.$$
Since the exact solution is not available, we define the following absolute residual error norm:
$$\mathrm{ARE}=\max_{(\sigma,t)\in(0,1)\times(0,1)}\left|\frac{\partial^{\beta}\chi_{M}(\sigma,t)}{\partial t^{\beta}}+\chi_{M}(\sigma,t)\,\frac{\partial\chi_{M}(\sigma,t)}{\partial\sigma}-\frac{\partial^{2}\chi_{M}(\sigma,t)}{\partial\sigma^{2}}\right|.$$
Applying the presented method at n = 1, β = 0.2, and M = 10 yields Table 12, which illustrates the ARE at different values of t and the CPU time (in seconds). Figure 6 shows the ARE (left) and the approximate solution (right) at β = 0.2, n = 1, and M = 4.
Remark 10.
The approximate solution at β = 0.2 , n = 1 , and M = 4 is
$$\begin{aligned}\chi_{4}(\sigma,t)\approx{}&\left(-0.0313691\,\sigma^{4}+0.141414\,\sigma^{3}-0.262584\,\sigma^{2}+0.152539\,\sigma\right)t^{4}\\ &+\left(0.167817\,\sigma^{4}-0.577364\,\sigma^{3}+0.822897\,\sigma^{2}-0.41335\,\sigma\right)t^{3}\\ &+\left(-0.191013\,\sigma^{4}+0.806954\,\sigma^{3}-1.36999\,\sigma^{2}+0.754052\,\sigma\right)t^{2}\\ &+\left(0.0460916\,\sigma^{4}-0.362139\,\sigma^{3}+0.830823\,\sigma^{2}-1.51478\,\sigma+1\right)t,\end{aligned}$$
up to spurious terms with coefficients of magnitude at most $3\times 10^{-14}$ that arise from roundoff.
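As a consistency check, the dominant part of this expansion (the coefficient values as reported, with the $10^{-14}$-level roundoff terms dropped) can be evaluated against the initial and boundary data of Example 4. A minimal sketch:

```python
# Coefficients of chi_4. Keys: power of t; values: coefficients of
# sigma^4, sigma^3, sigma^2, sigma, 1 (highest power first), taken from the
# reported expansion with the O(1e-14) roundoff terms dropped.
C = {
    4: [-0.0313691, 0.141414, -0.262584, 0.152539, 0.0],
    3: [0.167817, -0.577364, 0.822897, -0.41335, 0.0],
    2: [-0.191013, 0.806954, -1.36999, 0.754052, 0.0],
    1: [0.0460916, -0.362139, 0.830823, -1.51478, 1.0],
}

def chi4(s, t):
    total = 0.0
    for p, row in C.items():
        poly = 0.0
        for c in row:          # Horner evaluation in sigma
            poly = poly * s + c
        total += poly * t ** p
    return total

# Initial and boundary data of Example 4: chi(sigma,0)=0, chi(0,t)=t, chi(1,t)=0
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(chi4(0.0, t) - t) < 1e-6
    assert abs(chi4(1.0, t)) < 1e-4
for s in (0.0, 0.3, 0.6, 1.0):
    assert abs(chi4(s, 0.0)) < 1e-12
```

The coefficients of each power of t sum to approximately zero, which is how the approximation enforces the condition χ(1, t) = 0.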

7. Concluding Remarks

A numerical algorithm for the fractional Burgers’ equation was presented in this paper. A set of polynomials that generalizes the standard Fermat polynomials was used as basis functions. Some theoretical results regarding these generalized polynomials were developed and used to design the proposed collocation algorithm. Some inequalities were also developed and used to investigate the errors of the convolved Fermat expansions. The numerical results demonstrated the applicability and accuracy of the proposed algorithm. Other convolved polynomials will be investigated in the near future from both theoretical and numerical points of view.

Author Contributions

Conceptualization, W.M.A.-E. and A.G.A.; Methodology, W.M.A.-E., O.M.A., N.M.A.A. and A.G.A.; Software, A.G.A.; Validation, W.M.A.-E., O.M.A., N.M.A.A., A.K.A. and A.G.A.; Formal analysis, W.M.A.-E. and A.G.A.; Investigation, W.M.A.-E., O.M.A., N.M.A.A., A.K.A. and A.G.A.; Writing—original draft, W.M.A.-E. and A.G.A.; Writing—review & editing, W.M.A.-E. and A.G.A.; Supervision, W.M.A.-E.; Funding acquisition, A.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by Umm Al-Qura University, Saudi Arabia, under grant number: 25UQU4331287GSSR02.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors extend their appreciation to Umm Al-Qura University, Saudi Arabia, for funding this research work through grant number: 25UQU4331287GSSR02.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tarasov, V.E. Fractional Dynamics: Applications of Fractional Calculus to Dynamics of Particles, Fields and Media; Springer: Berlin/Heidelberg, Germany, 2011.
  2. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204.
  3. Atanackovic, T.M.; Pilipovic, S.; Stankovic, B.; Zorica, D. Fractional Calculus with Applications in Mechanics: Wave Propagation, Impact and Variational Principles; John Wiley & Sons: Hoboken, NJ, USA, 2014.
  4. Afreen, A.; Raheem, A. Study of a nonlinear system of fractional differential equations with deviated arguments via Adomian decomposition method. Int. J. Appl. Comput. Math. 2022, 8, 269.
  5. Obeidat, N.A.; Rawashdeh, M.S.; Al Erjani, M.Q. A novel Adomian natural decomposition method with convergence analysis of nonlinear time-fractional differential equations. Int. J. Model. Simul. 2024.
  6. Jafari, H.; Jassim, H.K.; Ansari, A.; Nguyen, V.T. Local fractional variational iteration transform method: A tool for solving local fractional partial differential equations. Fractals 2024, 32, 2440022.
  7. Jose, S.; Parthiban, V. Finite-time synchronization of fractional order neural networks via sampled data control with time delay. J. Math. Comput. Sci. 2024, 35, 374–387.
  8. Tripura, T.; Chakraborty, S. Wavelet neural operator for solving parametric partial differential equations in computational mechanics problems. Comput. Methods Appl. Mech. Eng. 2023, 404, 115783.
  9. Zada, L.; Aziz, I. Numerical solution of fractional partial differential equations via Haar wavelet. Numer. Methods Partial Differ. Equ. 2022, 38, 222–242.
  10. Yuttanan, B.; Razzaghi, M.; Vo, T.N. A numerical method based on fractional-order generalized Taylor wavelets for solving distributed-order fractional partial differential equations. Appl. Numer. Math. 2021, 160, 349–367.
  11. Kumar, S.; Kumar, R.; Agarwal, R.P.; Samet, B. A study of fractional Lotka-Volterra population model using Haar wavelet and Adams-Bashforth-Moulton methods. Math. Methods Appl. Sci. 2020, 43, 5564–5578.
  12. Diethelm, K. An efficient parallel algorithm for the numerical solution of fractional differential equations. Frac. Calc. Appl. Anal. 2011, 14, 475–490.
  13. Farhood, A.K.; Mohammed, O.H. Homotopy perturbation method for solving time-fractional nonlinear variable-order delay partial differential equations. Partial Differ. Equ. Appl. Math. 2023, 7, 100513.
  14. Hafez, R.M.; Youssri, Y.H. Shifted Gegenbauer-Gauss collocation method for solving fractional neutral functional-differential equations with proportional delays. Kragujev. J. Math. 2022, 46, 981–996.
  15. Hafez, R.M.; Youssri, Y.H. Shifted Jacobi collocation scheme for multidimensional time-fractional order telegraph equation. Iran. J. Numer. Anal. Optim. 2020, 10, 195–223.
  16. Avazzadeh, Z.; Nikan, O.; Nguyen, A.T.; Nguyen, V.T. A localized hybrid kernel meshless technique for solving the fractional Rayleigh–Stokes problem for an edge in a viscoelastic fluid. Eng. Anal. Bound. Elem. 2023, 146, 695–705.
  17. Li, L.; Li, D. Exact solutions and numerical study of time fractional Burgers’ equations. Appl. Math. Lett. 2020, 100, 106011.
  18. Akram, T.; Abbas, M.; Riaz, M.B.; Ismail, A.I.; Ali, N.M. An efficient numerical technique for solving time fractional Burgers equation. Alex. Eng. J. 2020, 59, 2201–2220.
  19. Pirkhedri, A. Applying Haar-Sinc spectral method for solving time-fractional Burger equation. Math. Comput. Sci. 2024, 5, 43–54.
  20. Dwivedi, H.K.; Rajeev. A novel fast second order approach with high-order compact difference scheme and its analysis for the tempered fractional Burgers equation. Math. Comput. Simul. 2025, 227, 168–188.
  21. Singh, J.; Kumar, D.; Swroop, R. Numerical solution of time-and space-fractional coupled Burgers’ equations via homotopy algorithm. Alex. Eng. J. 2016, 55, 1753–1763.
  22. Ali, I.; Haq, S.; Aldosary, S.F.; Nisar, K.S.; Ahmad, F. Numerical solution of one-and two-dimensional time-fractional Burgers equation via Lucas polynomials coupled with finite difference method. Alex. Eng. J. 2022, 61, 6077–6087.
  23. Mittal, A.K. Spectrally accurate approximate solutions and convergence analysis of fractional Burgers’ equation. Arabian J. Math. 2020, 9, 633–644.
  24. Chawla, R.; Deswal, K.; Kumar, D.; Baleanu, D. Numerical simulation for generalized time-fractional Burgers’ equation with three distinct linearization schemes. J. Comput. Nonlinear Dynam. 2023, 18, 041001.
  25. Huang, Y.; Mohammadi Zadeh, F.; Hadi Noori Skandari, M.; Ahsani Tehrani, H.; Tohidi, E. Space–time Chebyshev spectral collocation method for nonlinear time-fractional Burgers equations based on efficient basis functions. Math. Methods Appl. Sci. 2021, 44, 4117–4136.
  26. Zongo, G.; Ousséni, S.; Barro, G. A numerical method to solve the viscosity problem of the Burgers equation. Adv. Differ. Equ. Control Process. 2024, 31, 153–164.
  27. Hamadou, B.; Wassiha, N.A.; Bassono, F.; Youssouf, M.; Moussa, B. Approximated solutions of the homogeneous linear fractional diffusion-convection-reaction equation. Adv. Differ. Equ. Control Process. 2024, 31, 257–274.
  28. Boyd, J.P. Chebyshev and Fourier Spectral Methods; Courier Corporation: Chelmsford, MA, USA, 2001.
  29. Shen, J.; Tang, T.; Wang, L.L. Spectral Methods: Algorithms, Analysis and Applications, 1st ed.; Springer: New York, NY, USA, 2011.
  30. Postavaru, O. An efficient numerical method based on Fibonacci polynomials to solve fractional differential equations. Math. Comput. Simul. 2023, 212, 406–422.
  31. Manohara, G.; Kumbinarasaiah, S. Numerical solution of a modified epidemiological model of computer viruses by using Fibonacci wavelets. J. Anal. 2024, 32, 529–554.
  32. Singh, P.K.; Ray, S.S. A numerical approach based on Pell polynomial for solving stochastic fractional differential equations. Numer. Algorithms 2024, 97, 1513–1534.
  33. Çerdik Yaslan, H. Pell polynomial solution of the fractional differential equations in the Caputo–Fabrizio sense. Indian J. Pure Appl. Math. 2024.
  34. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A collocation procedure for treating the time-fractional FitzHugh-Nagumo differential equation using shifted Lucas polynomials. Mathematics 2024, 12, 3672.
  35. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A collocation approach for the nonlinear fifth-order KdV equations using certain shifted Horadam polynomials. Mathematics 2025, 13, 300.
  36. Yadav, P.; Jahan, S.; Nisar, K.S. Solving fractional Bagley-Torvik equation by fractional order Fibonacci wavelet arising in fluid mechanics. Ain Shams Eng. J. 2024, 15, 102299.
  37. Zheng, Z.; Yuan, H.; He, J. A Physics-Informed Neural Network model combined Pell–Lucas polynomials for solving the Lane–Emden type equation. Eur. Phys. J. Plus 2024, 139, 223.
  38. Chaudhary, R.; Aeri, S.; Bala, A.; Kumar, R.; Baleanu, D. Solving system of fractional differential equations via Vieta-Lucas operational matrix method. Int. J. Appl. Comput. Math. 2024, 10, 14.
  39. Wang, W.; Wang, H. Some results on convolved (p,q)-Fibonacci polynomials. Integral Transform. Spec. Funct. 2015, 26, 340–356.
  40. Ramírez, J.L. On convolved generalized Fibonacci and Lucas Polynomials. Appl. Math. Comput. 2014, 229, 208–213.
  41. Şahin, A.; Ramirez, J.L. Determinantal and permanental representations of convolved Lucas polynomials. Appl. Math. Comput. 2016, 281, 314–322.
  42. Ye, X.; Zhang, Z. A common generalization of convolved generalized Fibonacci and Lucas polynomials and its applications. Appl. Math. Comput. 2017, 306, 31–37.
  43. Abd-Elhameed, W.M.; Napoli, A. New formulas of convolved Pell polynomials. AIMS Math. 2022, 9, 565–593.
  44. Abd-Elhameed, W.M.; Alqubori, O.M.; Napoli, A. On convolved Fibonacci polynomials. Mathematics 2024, 13, 22.
  45. Hafez, R.M.; Youssri, Y.H. Review on Jacobi-Galerkin spectral method for linear PDEs in applied mathematics. Contemp. Math. 2024, 5, 2051–2088.
  46. Zhang, X.; Wang, J.; Wu, Z.; Tang, Z.; Zeng, X. Spectral Galerkin methods for Riesz space-fractional convection-diffusion equations. Fractal Fract. 2024, 8, 431.
  47. Abd-Elhameed, W.M.; Alsuyuti, M.M. New spectral algorithm for fractional delay pantograph equation using certain orthogonal generalized Chebyshev polynomials. Commun. Nonlinear Sci. Numer. Simul. 2025, 141, 108479.
  48. Abd-Elhameed, W.M.; Alqubori, O.M.; Al-Harbi, A.K.; Alharbi, M.H.; Atta, A.G. Generalized third-kind Chebyshev tau approach for treating the time fractional cable problem. Elect. Res. Arch. 2024, 32, 6200–6224.
  49. Masoud, M. Numerical solution of systems of fractional order integro-differential equations with a Tau method based on monic Laguerre polynomials. J. Math. Anal. Model. 2022, 3, 1–13.
  50. Sadri, K.; Amilo, D.; Hosseini, K.; Hinçal, E.; Seadawy, A.R. A tau-Gegenbauer spectral approach for systems of fractional integrodifferential equations with the error analysis. AIMS Math. 2024, 9, 3850–3880.
  51. Banihashemi, S.; Jafari, H.; Babaei, A. A stable collocation approach to solve a neutral delay stochastic differential equation of fractional order. J. Comput. Appl. Math. 2022, 403, 113845.
  52. Wang, F.; Zhao, Q.; Chen, Z.; Fan, C.M. Localized Chebyshev collocation method for solving elliptic partial differential equations in arbitrary 2D domains. Appl. Math. Comput. 2021, 397, 125903.
  53. Ayalew, M.; Ayalew, M.; Aychluh, M. Numerical approximation of space-fractional diffusion equation using Laguerre spectral collocation method. Int. J. Math. Ind. 2025, 2450029.
  54. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A collocation procedure for the numerical treatment of the FitzHugh–Nagumo equation using a kind of Chebyshev polynomials. AIMS Math. 2025, 10, 1201–1223.
  55. Hafez, R.M.; Youssri, Y.H. Legendre-collocation spectral solver for variable-order fractional functional differential equations. Comput. Methods Differ. Equ. 2020, 8, 99–110.
  56. Shafiq, M.; Abbas, M.; Abdullah, F.A.; Majeed, A.; Abdeljawad, T.; Alqudah, M.A. Numerical solutions of time fractional Burgers’ equation involving Atangana–Baleanu derivative via cubic B-spline functions. Results Phys. 2022, 34, 105244.
  57. Wang, K.L.; He, C.H. A remark on Wang’s fractal variational principle. Fractals 2019, 27, 1950134.
  58. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Elsevier: Amsterdam, The Netherlands, 1998; Volume 198.
  59. Raza, N.; Raza, A.; Ullah, M.A.; Gómez-Aguilar, J. Modeling and investigating the spread of COVID-19 dynamics with Atangana-Baleanu fractional derivative: A numerical prospective. Phys. Scr. 2024, 99, 035255.
  60. Khan, M.A.; Atangana, A. Numerical Methods for Fractal-Fractional Differential Equations and Engineering: Simulations and Modeling; CRC Press: Boca Raton, FL, USA, 2023.
  61. Youssri, Y.H. A new operational matrix of Caputo fractional derivatives of Fermat polynomials: An application for solving the Bagley-Torvik equation. Adv. Differ. Equ. 2017, 2017, 73.
  62. Abd-Elhameed, W.M.; Al-Harbi, M.S.; Atta, A.G. New convolved Fibonacci collocation procedure for the Fitzhugh–Nagumo non-linear equation. Nonlinear Eng. 2024, 13, 20220332.
  63. Koepf, W. Hypergeometric Summation, 2nd ed.; Springer Universitext Series; Springer: Cham, Switzerland, 2014.
  64. Andrews, G.; Askey, R.; Roy, R. Special Functions; Cambridge University Press: Cambridge, UK, 1999; Volume 71.
  65. Luke, Y.L. Inequalities for generalized hypergeometric functions. J. Approx. Theory 1972, 5, 41–65.
  66. Gaunt, R.E. Bounds for modified Struve functions of the first kind and their ratios. J. Math. Anal. Appl. 2018, 468, 547–566.
  67. Jameson, G.J.O. The incomplete gamma functions. Math. Gaz. 2016, 100, 298–306.
  68. Ghafoor, A.; Fiaz, M.; Shah, K.; Abdeljawad, T. Analysis of nonlinear Burgers equation with time fractional Atangana-Baleanu-Caputo derivative. Heliyon 2024, 10, e33842.
  69. Yadav, S.; Pandey, R.K. Numerical approximation of fractional Burgers equation with Atangana–Baleanu derivative in Caputo sense. Chaos Solitons Fractals 2020, 133, 109630.
Figure 1. The AE (left) at different values of t and the approximate solution (right) at t = 1 of Example 1 at β = 0.7 , n = 9 , and M = 10 .
Figure 2. The Log10(error) of Example 2 at β = 0.9 and different values of n.
Figure 3. The AE (left) at different values of t and the approximate solution (right) at t = 1 of Example 2 at β = 0.7 , n = 10 , and M = 10 .
Figure 4. The AE (left) at different values of t and the approximate solution (right) at t = 1 of Example 3 at β = 0.6 , n = 5 , and M = 10 .
Figure 5. The Log10(error) of Example 3 at β = 0.8 and different values of n.
Figure 6. The ARE (left) and the approximate solution (right) of Example 4 at β = 0.2 , n = 1 , and M = 4 .
Table 1. Comparison of L -error of Example 1.
β | [56] (N = 2⁷, M = 2¹²) | [56] (M = 2⁷, N = 2¹¹) | Ours (n = 2, M = 10) | Ours (n = 5, M = 10) | Ours (n = 7, M = 10)
0.2 | 1.06427 × 10⁻⁵ | 9.66513 × 10⁻⁶ | 5.42251 × 10⁻⁷ | 5.42251 × 10⁻⁷ | 5.42101 × 10⁻⁷
0.3 | 1.06304 × 10⁻⁵ | 9.60898 × 10⁻⁶ | 5.41781 × 10⁻⁷ | 5.41638 × 10⁻⁷ | 5.41301 × 10⁻⁷
0.4 | 1.06126 × 10⁻⁵ | 9.53096 × 10⁻⁶ | 5.30199 × 10⁻⁷ | 5.30219 × 10⁻⁷ | 5.31348 × 10⁻⁷
0.5 | 1.05886 × 10⁻⁵ | 9.42186 × 10⁻⁶ | 5.2937 × 10⁻⁸ | 5.29367 × 10⁻⁷ | 5.29347 × 10⁻⁷
Table 2. Errors of Example 1 at β = 0.8 and n = 15 .
M | 2 | 4 | 6 | 8 | 10
MAE | 1.33771 × 10⁻¹ | 2.28344 × 10⁻² | 1.13447 × 10⁻³ | 2.84933 × 10⁻⁵ | 4.25414 × 10⁻⁷
L∞-error | 2.10514 × 10⁻¹ | 3.22965 × 10⁻² | 1.44796 × 10⁻³ | 3.57062 × 10⁻⁵ | 5.26989 × 10⁻⁷
Table 3. Errors of Example 1 at β = 0.9 and n = 7 .
M | 2 | 4 | 6 | 8 | 10
MAE | 1.33771 × 10⁻¹ | 2.27161 × 10⁻² | 1.13171 × 10⁻³ | 2.84198 × 10⁻⁵ | 4.24203 × 10⁻⁷
L∞-error | 2.10514 × 10⁻¹ | 3.21217 × 10⁻² | 1.44485 × 10⁻³ | 3.56291 × 10⁻⁵ | 5.2567 × 10⁻⁷
Table 4. Our CPU time used for Table 2.
M | 2 | 4 | 6 | 8 | 10
CPU time of MAE | 3.157 | 5.751 | 13.469 | 26.827 | 50.453
CPU time of L∞-error | 3.282 | 5.984 | 13.672 | 27.218 | 50.875
Table 5. The AE for Example 2 at β = 0.8 and M = 10 .
(σ, t) | n = 2 | CPU time | n = 4 | CPU time | n = 6 | CPU time
(0.1, 0.1) | 1.39127 × 10⁻¹³ | | 3.44585 × 10⁻¹⁴ | | 5.65832 × 10⁻¹⁴ |
(0.2, 0.2) | 1.26128 × 10⁻¹³ | | 1.39194 × 10⁻¹⁴ | | 3.02466 × 10⁻¹⁴ |
(0.3, 0.3) | 1.02154 × 10⁻¹³ | | 6.84175 × 10⁻¹⁵ | | 5.11952 × 10⁻¹⁴ |
(0.4, 0.4) | 4.57134 × 10⁻¹⁴ | | 3.36953 × 10⁻¹⁴ | | 4.39093 × 10⁻¹⁴ |
(0.5, 0.5) | 4.28546 × 10⁻¹⁴ | 48.642 | 3.61933 × 10⁻¹⁴ | 51.422 | 2.95874 × 10⁻¹⁴ | 53.516
(0.6, 0.6) | 2.34812 × 10⁻¹³ | | 7.54952 × 10⁻¹⁴ | | 2.93099 × 10⁻¹³ |
(0.7, 0.7) | 8.20677 × 10⁻¹³ | | 3.11862 × 10⁻¹³ | | 1.22746 × 10⁻¹² |
(0.8, 0.8) | 2.49534 × 10⁻¹² | | 8.51985 × 10⁻¹³ | | 4.75198 × 10⁻¹² |
(0.9, 0.9) | 6.3638 × 10⁻¹² | | 3.50675 × 10⁻¹² | | 1.75979 × 10⁻¹¹ |
Table 6. Comparison of L -error of Example 2.
β | [56] (N = 2⁷, M = 2¹¹) | [56] (M = 2⁷, N = 2⁴) | Ours (n = 1, M = 10) | Ours (n = 2, M = 10) | Ours (n = 3, M = 10)
0.2 | 5.69465 × 10⁻⁷ | 1.97043 × 10⁻⁵ | 1.17417 × 10⁻¹¹ | 1.52283 × 10⁻¹¹ | 2.39085 × 10⁻¹²
0.3 | 5.66088 × 10⁻⁷ | 1.99857 × 10⁻⁵ | 2.99682 × 10⁻¹² | 3.01143 × 10⁻¹¹ | 3.59137 × 10⁻¹²
0.4 | 5.61387 × 10⁻⁷ | 2.03716 × 10⁻⁵ | 2.41995 × 10⁻¹¹ | 5.67333 × 10⁻¹² | 2.21096 × 10⁻¹²
0.5 | 5.54967 × 10⁻⁷ | 2.09231 × 10⁻⁵ | 2.12224 × 10⁻¹¹ | 1.40504 × 10⁻¹¹ | 2.70304 × 10⁻¹²
Table 7. Comparison of L -error of Example 2.
β | [68] (N = 2⁷, M = 2⁷) | [68] (N = 2⁴) | Our method (M = 10, n = 5)
0.2 | 1.54090 × 10⁻⁸ | 4.2287 × 10⁻⁶ | 7.20079 × 10⁻¹²
0.3 | 1.38810 × 10⁻⁸ | 3.82638 × 10⁻⁶ | 1.00335 × 10⁻¹⁰
0.4 | 1.18634 × 10⁻⁸ | 3.29441 × 10⁻⁶ | 2.13661 × 10⁻¹¹
0.5 | 9.20033 × 10⁻⁹ | 2.58682 × 10⁻⁶ | 2.82427 × 10⁻¹¹
Table 8. Theoretical error of Example 2.
M | 2 | 4 | 6 | 8 | 10
Error in Theorem 7 | 3.733 × 10⁻¹ | 5.560 × 10⁻³ | 4.913 × 10⁻⁵ | 2.854 × 10⁻⁷ | 1.170 × 10⁻⁹
Table 9. Comparison of L -error of Example 3.
β | [69] (N = 2⁷, M = 2¹²) | [56] (N = 2⁷, M = 2¹²) | Ours (n = 1, M = 10) | Ours (n = 2, M = 10) | Ours (n = 3, M = 10)
0.2 | 5.47402 × 10⁻⁵ | 4.76511 × 10⁻⁵ | 8.87866 × 10⁻⁸ | 8.89375 × 10⁻⁸ | 8.86872 × 10⁻⁸
0.3 | 5.46367 × 10⁻⁵ | 4.75486 × 10⁻⁵ | 8.87477 × 10⁻⁸ | 8.83007 × 10⁻⁸ | 8.85437 × 10⁻⁸
0.4 | 5.44879 × 10⁻⁵ | 4.74015 × 10⁻⁵ | 8.93898 × 10⁻⁸ | 8.93463 × 10⁻⁸ | 8.79626 × 10⁻⁸
0.5 | 5.42862 × 10⁻⁵ | 4.72029 × 10⁻⁵ | 8.76302 × 10⁻⁸ | 8.78076 × 10⁻⁸ | 8.76954 × 10⁻⁸
Table 10. Comparison of L -error of Example 3.
β | [68] (N = 2⁷, M = 2¹²) | [68] (N = 2⁹) | Our method (M = 10, n = 7)
0.2 | 5.9687 × 10⁻⁶ | 5.64352 × 10⁻⁶ | 8.88667 × 10⁻⁸
0.3 | 5.95582 × 10⁻⁶ | 5.5615 × 10⁻⁶ | 8.86095 × 10⁻⁸
0.4 | 5.93727 × 10⁻⁶ | 5.4489 × 10⁻⁶ | 8.79734 × 10⁻⁸
0.5 | 5.91227 × 10⁻⁶ | 5.2927 × 10⁻⁶ | 8.76423 × 10⁻⁸
Table 11. The AE for Example 3 at β = 0.7 and M = 10 .
(σ, t) | n = 3 | CPU time | n = 5 | CPU time | n = 8 | CPU time
(0.1, 0.1) | 6.88724 × 10⁻¹⁰ | | 6.88726 × 10⁻¹⁰ | | 6.88622 × 10⁻¹⁰ |
(0.2, 0.2) | 2.5737 × 10⁻⁹ | | 2.57334 × 10⁻⁹ | | 2.57324 × 10⁻⁹ |
(0.3, 0.3) | 5.64325 × 10⁻⁹ | | 5.643 × 10⁻⁹ | | 5.64277 × 10⁻⁹ |
(0.4, 0.4) | 1.01399 × 10⁻⁸ | | 1.01403 × 10⁻⁸ | | 1.01399 × 10⁻⁸ |
(0.5, 0.5) | 1.64631 × 10⁻⁸ | 51.627 | 1.64642 × 10⁻⁸ | 52.204 | 1.6464 × 10⁻⁸ | 52.063
(0.6, 0.6) | 2.51697 × 10⁻⁸ | | 2.51707 × 10⁻⁸ | | 2.51715 × 10⁻⁸ |
(0.7, 0.7) | 3.69135 × 10⁻⁸ | | 3.69109 × 10⁻⁸ | | 3.6915 × 10⁻⁸ |
(0.8, 0.8) | 5.21996 × 10⁻⁸ | | 5.21855 × 10⁻⁸ | | 5.21974 × 10⁻⁸ |
(0.9, 0.9) | 7.01578 × 10⁻⁸ | | 7.01168 × 10⁻⁸ | | 7.01439 × 10⁻⁸ |
Table 12. The ARE for Example 4 at β = 0.2 and M = 10 .
σ | t = 0.3 | t = 0.6 | t = 0.8 | CPU time
0.1 | 1.9295 × 10⁻⁶ | 2.31228 × 10⁻⁷ | 2.90447 × 10⁻⁶ |
0.2 | 1.53756 × 10⁻⁶ | 1.86984 × 10⁻⁷ | 2.204 × 10⁻⁶ |
0.3 | 1.22152 × 10⁻⁶ | 1.43812 × 10⁻⁷ | 1.66797 × 10⁻⁶ |
0.4 | 9.61916 × 10⁻⁷ | 1.09957 × 10⁻⁷ | 1.25194 × 10⁻⁶ |
0.5 | 7.45597 × 10⁻⁷ | 8.29039 × 10⁻⁸ | 9.27122 × 10⁻⁷ | 83.014
0.6 | 5.61974 × 10⁻⁷ | 6.10056 × 10⁻⁸ | 6.70458 × 10⁻⁷ |
0.7 | 4.02449 × 10⁻⁷ | 4.29632 × 10⁻⁸ | 4.63476 × 10⁻⁷ |
0.8 | 2.59984 × 10⁻⁷ | 2.73363 × 10⁻⁸ | 2.9123 × 10⁻⁷ |
0.9 | 1.25138 × 10⁻⁷ | 2.37574 × 10⁻⁸ | 1.39256 × 10⁻⁷ |
