Article

Numerical Inverse Laplace Transform for Solving a Class of Fractional Differential Equations

1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal-148106, Punjab, India
2 Engineering School (DEIM), University of Tuscia, 01100 Viterbo, Italy
3 Ton Duc Thang University, HCMC 700000, Vietnam
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(4), 530; https://doi.org/10.3390/sym11040530
Submission received: 3 March 2019 / Revised: 7 April 2019 / Accepted: 10 April 2019 / Published: 12 April 2019
(This article belongs to the Special Issue Symmetry and Complexity 2019)

Abstract

This paper discusses the application of a numerical inverse Laplace transform method based on the Bernstein operational matrix to the solution of a class of fractional differential equations. By means of the Laplace transform, the fractional differential equations are first converted into a system of algebraic equations; the numerical inverse Laplace transform is then used to recover the unknown function by expanding it in a Bernstein series. The advantages and computational implications of the proposed technique are discussed and verified in several numerical examples by comparing the results with some existing methods. We have also combined our technique with the standard Laplace Adomian decomposition method for solving nonlinear fractional-order differential equations. The method is given with an error estimation and a convergence criterion that establish its validity.

1. Introduction

Over the years, researchers have been attracted to the study of scientific problems modelled by fractional differential equations due to their constant appearance in different disciplines of mathematical sciences and engineering, such as fluid mechanics, viscoelasticity, mathematical physics, mathematical biology, system identification, control theory, electrochemistry and signal processing [1,2,3,4,5,6]. Several analytical and numerical techniques have been developed in the literature to solve such equations. Among these methods, Li and Sun [7] derived the generalized block pulse operational matrix to find the solution of fractional differential equations in terms of block pulse functions. A truncated Legendre series together with the generalized Legendre operational matrix was used to solve fractional differential equations by Saadatmandi and Dehghan in [8], and they also presented a shifted Legendre tau method for finding the solution of fractional diffusion equations with variable coefficients in [9]. Doha et al. [10] used the shifted Jacobi operational matrix of fractional derivatives together with a spectral tau method for solving fractional differential equations. Kazem et al. [11] constructed orthogonal fractional-order Legendre functions based on Legendre polynomials to solve fractional differential equations. Mokhtary et al. [12] provided the operational tau method based on Muntz–Legendre polynomials for solving fractional differential equations. A Bernoulli wavelet operational matrix of fractional-order integration was derived to approximate the numerical solution of fractional differential equations in [13]. Fractional-order Lagrange polynomials were proposed to solve fractional differential equations in [14]. Albadarneh et al. [15] adopted the fractional finite difference method for solving linear and nonlinear fractional differential equations. Garrappa [16] provided a detailed survey of two methods, the product integration (PI) rules and Lubich's fractional linear multistep methods (FLMMs), for solving fractional differential equations. Dehghan et al. [17] adopted the homotopy analysis method to solve linear fractional partial differential equations, and a meshless approximation strategy based on radial basis functions was used for solving fractional partial differential equations in [18]. Haar wavelets were employed to obtain the solutions of boundary value problems for linear fractional partial differential equations by Rehman and Khan [19]. Li et al. [20] solved linear fractional partial differential equations based on the operational matrix of fractional Bernstein polynomials. Yang et al. [21] discussed the solution of fractional-order diffusion equations with the negative Prabhakar kernel by means of the Laplace transform and series solutions in terms of general Mittag–Leffler functions. In [22], a new factorization technique was adopted for nonlinear differential equations involving local fractional derivatives, and exact solutions were found for the nonlinear local fractional FitzHugh–Nagumo and Newell–Whitehead equations. Cesarano [23] proposed an operational method based on Hermite polynomials to solve fractional diffusive equations. Over the last few decades, the Laplace transform method has become popular and has been adopted by many researchers to solve differential and integral equations. It is then necessary to compute the inverse Laplace transform in order to recover the solution in its original domain.
There exist a number of analytical and numerical methods for inverting a Laplace transform; for details one can refer to [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41]. The Laplace transform method reduces a differential or integral equation to a system of algebraic equations by means of Heaviside's operational method. Further, Hsiao and Chen [42] extended the idea of Heaviside's operational method to the operational matrix of integration. In the literature, a few papers have reported the use of operational matrices for the inversion of the Laplace transform: Chen et al. [39] obtained the Walsh operational matrices and applied them to distributed-parameter systems, Wu et al. [40] adopted Haar wavelet operational matrices for the numerical inverse Laplace transform of certain functions, Aznam and Hussin [38] modified the Haar wavelet operational matrices by using generalized block pulse functions, Babolian and Shamloo [24] used operational matrices based on piecewise-constant block pulse functions to invert the Laplace transform and solved Volterra integral equations, Maleknejad and Nouri [25] improved the operational matrices based on block pulse functions, and Shamloo et al. [41] adopted this method to solve integral equations of the first kind. The main purpose of using an operational matrix is that it converts a differential or integral equation into a system of algebraic equations, which is simple and easy to solve.
Bernstein polynomials and their operational matrices have been used to solve many differential, integral and integro-differential equations [43,44,45,46,47,48,49], but they had not been combined with the Laplace transform for the purpose of inverting it. A Bernstein operational matrix of integration was developed to find the numerical inverse Laplace transform of certain functions in our paper [50]. The proposed method expresses the solution of an equation as a truncated Bernstein expansion and then, using the operational matrix of integration, obtains the numerical inverse Laplace transform. The operational matrix of integration of Bernstein polynomials is easily calculated using a single integration formula, in contrast to the Haar or block pulse functions, for which the order of the matrix must be taken very large, e.g., 8, 16 or 128; in our method, the desired accuracy is achieved using a matrix of order 6 or 7. In the present research, our aim is to find the numerical solutions of linear fractional ordinary and partial differential equations, nonlinear fractional differential equations and systems of fractional differential equations using the numerical inverse Laplace transform method based on the Bernstein operational matrix of integration developed in [50]. At first, for linear problems, we transform the fractional differential equations to a system of linear algebraic equations using the Laplace transform; the numerical approach for calculating the inverse Laplace transform is then used to retrieve the time-domain solution. We also extend our numerical approach to solve some nonlinear fractional differential equations together with a well-known iterative method, the Laplace Adomian decomposition method [51] (briefly explained in Section 4). A further advantage of the proposed method is that no fractional-order operational matrix of integration is needed for the numerical inversion when solving fractional-order differential equations.
The paper is organized as follows. In Section 2, some necessary definitions and preliminaries of fractional calculus and the Laplace transform are given. Section 3 presents the basics of Bernstein polynomials, the derivation of the operational matrix of integration and function approximation by Bernstein polynomials. In Section 4, the proposed method is explained and its application to a class of fractional differential equations is presented. Section 5 presents the error estimation and convergence analysis. In Section 6, illustrative examples are given to show the applicability of the method. Section 7 presents the conclusions.

2. Preliminaries

In this section, we give some basic definitions and properties of a Laplace transform [36] and fractional calculus as follows [1,2,4]:
Definition 1.
The Laplace transform of a continuous or piecewise continuous function f(t) on $[0, \infty)$ is defined as
$$\mathcal{L}(f(t)) = F(s) = \int_0^{\infty} e^{-st} f(t)\, dt,$$
where s is known as the Laplace variable.
Definition 2.
A real function $f(t)$, $t > 0$, is said to be in the space $C_\mu$, $\mu \in \mathbb{R}$, if there exists a real number $p > \mu$ and a function $f_1(t) \in C[0, \infty)$ such that $f(t) = t^p f_1(t)$. Moreover, if $f^{(n)} \in C_\mu$, then $f(t)$ is said to be in the space $C_\mu^n$, $n \in \mathbb{N}$.
Definition 3.
The Riemann–Liouville fractional integral of order $\alpha \ge 0$ of a function f(t) is defined as
$$J^{\alpha} f(t) = \begin{cases} \dfrac{1}{\Gamma(\alpha)} \displaystyle\int_0^t (t-\tau)^{\alpha-1} f(\tau)\, d\tau, & \alpha > 0, \\[2mm] f(t), & \alpha = 0, \end{cases}$$
where $\Gamma(\cdot)$ denotes the Gamma function.
Definition 4.
The Riemann–Liouville fractional derivative of order α > 0 for a function f ( t ) is defined as
$$D^{\alpha} f(t) = \frac{d^n}{dt^n}\, J^{\,n-\alpha} f(t), \quad n \in \mathbb{N},\ n-1 < \alpha \le n.$$
Definition 5.
The Caputo fractional derivative of order α > 0 is defined as
$$D^{\alpha} f(t) = \begin{cases} \dfrac{d^n f(t)}{dt^n}, & \alpha = n \in \mathbb{N}, \\[2mm] \dfrac{1}{\Gamma(n-\alpha)} \displaystyle\int_0^t \dfrac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, & 0 \le n-1 < \alpha < n, \end{cases}$$
where n is an integer, $t > 0$, and $f(t) \in C_{-1}^n$.
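As a concrete illustration of Definition 5 (our own worked example), for $f(t) = t^2$ and $0 < \alpha < 1$ (so that $n = 1$) one finds
$$D^{\alpha} t^2 = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{2\tau}{(t-\tau)^{\alpha}}\, d\tau = \frac{2}{\Gamma(3-\alpha)}\, t^{2-\alpha},$$
which reduces to the classical derivative $2t$ as $\alpha \to 1$.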
Definition 6.
The Laplace transform of a function f(x, t), $t \ge 0$, denoted by F(x, s), is defined by
$$\mathcal{L}(f(x,t)) = F(x,s) = \int_0^{\infty} e^{-st} f(x,t)\, dt,$$
where s is the transformed parameter.
Definition 7.
The Caputo fractional partial derivative operator of order α > 0 is defined as
$$\frac{\partial^{\alpha} f(x,t)}{\partial t^{\alpha}} = \begin{cases} \dfrac{\partial^n f(x,t)}{\partial t^n}, & \alpha = n \in \mathbb{N}, \\[2mm] \dfrac{1}{\Gamma(n-\alpha)} \displaystyle\int_0^t \dfrac{1}{(t-\tau)^{\alpha-n+1}}\, \dfrac{\partial^n f(x,\tau)}{\partial \tau^n}\, d\tau, & 0 \le n-1 < \alpha < n, \end{cases}$$
where n is an integer, t > 0 .
Property 1.
The Laplace transform of the Caputo fractional derivative $D^{\alpha} f(t)$ can be found as
$$\mathcal{L}(D^{\alpha} f(t)) = \frac{1}{s^{m-\alpha}}\left[ s^m \mathcal{L}(f(t)) - s^{m-1} f(0) - s^{m-2} f'(0) - \cdots - f^{(m-1)}(0)\right],$$
where $m - 1 < \alpha \le m$, $m \in \mathbb{N}$.
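Continuing the worked example above (again our own illustration), for $f(t) = t^2$ and $0 < \alpha \le 1$ (so that $m = 1$) Property 1 gives
$$\mathcal{L}(D^{\alpha} t^2) = \frac{1}{s^{1-\alpha}}\left( s \cdot \frac{2}{s^3} - 0\right) = \frac{2}{s^{3-\alpha}},$$
which is exactly the Laplace transform of $\frac{2}{\Gamma(3-\alpha)}\, t^{2-\alpha}$ computed in the previous example.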

3. Bernstein Polynomials and Function Approximation

To explain the operational matrices of Bernstein polynomials, we need to give some basic definitions and properties following [52].
Definition 8.
The Bernstein basis polynomials of degree n are defined on the interval [0, 1] by
$$b_{i,n}(t) = \binom{n}{i}\, t^i\, (1-t)^{n-i}, \quad i = 0, 1, \ldots, n.$$
Some useful results of Bernstein polynomials are as follows:
  • $b_{i,n}(t) = 0$ if $i < 0$ or $i > n$.
  • $b_{i,n}(t) \ge 0$ for $t \in [0,1]$.
  • The Bernstein polynomials form a partition of unity, i.e., $\sum_{i=0}^{n} b_{i,n}(t) = 1$.
A drawback of the Bernstein polynomials is that they are not orthogonal. To overcome this difficulty, the Bernstein polynomials of degree n are orthonormalized using the Gram–Schmidt orthonormalization procedure; the resulting polynomials are denoted by $B_{i,n}(t)$, $i = 0, 1, \ldots, n$.
We can now approximate functions in terms of Bernstein polynomials. A function $f(t) \in L^2[0,1]$ can be expressed in the orthonormal Bernstein polynomials as [47]
$$f(t) = \lim_{n \to \infty} \sum_{i=0}^{n} c_i B_{i,n}(t),$$
where $c_i = \langle f, B_{i,n} \rangle$ and $\langle \cdot\, , \cdot \rangle$ denotes the standard inner product.
If the infinite series is truncated at n = k , then the approximate solution can be expressed as
$$f_k(t) = \sum_{i=0}^{k} c_i B_{i,k}(t) = C^T B(t),$$
where $C = [c_0\ \ c_1\ \ c_2\ \cdots\ c_k]^T$ and $B(t) = [B_{0,k}(t)\ \ B_{1,k}(t)\ \ B_{2,k}(t)\ \cdots\ B_{k,k}(t)]^T$.
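To make the construction concrete, the following is a minimal Python/SymPy sketch (our own illustration, not the authors' MATLAB implementation) of the Gram–Schmidt orthonormalization of the Bernstein basis and of the truncated expansion $f_k(t) = C^T B(t)$; the degree k = 5 and the test function $f(t) = e^{-t}$ are arbitrary choices.

```python
import sympy as sp

t = sp.symbols('t')

def orthonormal_bernstein(k):
    """Gram-Schmidt orthonormalization of b_{i,k}(t) = C(k,i) t^i (1-t)^(k-i) on [0, 1]."""
    b = [sp.binomial(k, i) * t**i * (1 - t)**(k - i) for i in range(k + 1)]
    inner = lambda u, v: sp.integrate(u * v, (t, 0, 1))
    B = []
    for bi in b:
        v = bi - sum(inner(bi, Bj) * Bj for Bj in B)   # remove components along previous B_j
        B.append(sp.expand(v / sp.sqrt(inner(v, v))))  # normalize
    return B

k = 5
B = orthonormal_bernstein(k)
f = sp.exp(-t)
c = [sp.integrate(f * Bi, (t, 0, 1)) for Bi in B]      # c_i = <f, B_{i,k}>
f_k = sum(ci * Bi for ci, Bi in zip(c, B))             # truncated expansion f_k(t) = C^T B(t)

for tv in (0.1, 0.5, 0.9):                             # pointwise comparison with f(t)
    print(tv, float(f.subs(t, tv)), float(f_k.subs(t, tv)))
```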
Here, the operational matrix of integration of the orthonormal Bernstein polynomials is introduced; it relies on the integral property of the basis vector. Suppose $\phi(t) = [\phi_0(t), \phi_1(t), \ldots, \phi_k(t)]^T$ is a column vector whose components $\phi_0(t), \phi_1(t), \ldots, \phi_k(t)$ are basis functions orthogonal on some interval $[a,b]$; then the property states that
$$\underbrace{\int_a^t \cdots \int_a^t}_{m\ \text{times}} \phi(x)\,(dx)^m = A_{k+1}^m\, \phi(t),$$
where $A_{k+1}^m$ is the operational matrix of integration of $\phi(t)$, a constant matrix of order $(k+1)\times(k+1)$. Adopting this property for the vector $B(t)$, we get
$$\int_0^t B(x)\, dx = I_{k+1}\, B(t),$$
where $I_{k+1}$ is the operational matrix of integration of the Bernstein polynomials, defined through
$$\int_0^t B_{i,k}(x)\, dx = \alpha_i(t) = \sum_{j=0}^{k} a_{ij}\, B_{j,k}(t), \quad i = 0, 1, \ldots, k, \quad 0 \le t < 1.$$
Therefore
$$I_{k+1} = (a_{ij}) = \left( \langle \alpha_i, B_{j,k} \rangle \right), \quad i, j = 0, 1, \ldots, k.$$
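A minimal SymPy sketch of this construction (our own illustration; the degree k = 5 is an arbitrary choice) builds $I_{k+1}$ entrywise as $a_{ij} = \langle \alpha_i, B_{j,k} \rangle$ and checks how well $\int_0^t B(x)\,dx \approx I_{k+1} B(t)$ holds:

```python
import sympy as sp

t, x = sp.symbols('t x')

def orthonormal_bernstein(k):
    b = [sp.binomial(k, i) * t**i * (1 - t)**(k - i) for i in range(k + 1)]
    inner = lambda u, v: sp.integrate(u * v, (t, 0, 1))
    B = []
    for bi in b:
        v = bi - sum(inner(bi, Bj) * Bj for Bj in B)
        B.append(sp.expand(v / sp.sqrt(inner(v, v))))
    return B

k = 5
B = orthonormal_bernstein(k)

# alpha_i(t) = int_0^t B_{i,k}(x) dx, and a_ij = <alpha_i, B_{j,k}>
alpha = [sp.integrate(Bi.subs(t, x), (x, 0, t)) for Bi in B]
I_op = sp.Matrix(k + 1, k + 1, lambda i, j: sp.integrate(alpha[i] * B[j], (t, 0, 1)))

# check at t = 0.4: exact integrals vs. operational-matrix approximation
tv = sp.Rational(2, 5)
exact  = sp.Matrix([a.subs(t, tv) for a in alpha])
approx = I_op * sp.Matrix([Bi.subs(t, tv) for Bi in B])
print(sp.N((exact - approx).norm()))   # small but nonzero: degree-(k+1) terms are truncated
```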

4. The Method for Numerical Inverse Laplace Transform

In this section, we describe the algorithm proposed in [50] by considering the linear time-varying system
$$f'(t) + \alpha f(t) = u(t), \quad f(0) = 0,$$
where u(t) is the unit step function.
We first convert this differential equation into the integral equation
$$f(t) + \int_0^t \alpha f(x)\, dx = \int_0^t u(x)\, dx, \quad f(0) = 0.$$
Performing the Laplace transform on both sides of (11), we get
$$F(s) = \frac{1}{s(s+\alpha)}.$$
We can rewrite (12) as
$$F(s) = \frac{1}{s^2\left(1 + \dfrac{\alpha}{s}\right)} = \bar{F}\!\left(\frac{1}{s}\right).$$
Here, we use a standard result from Laplace transform theory [36]: if $\mathcal{L}(f(t)) = F(s)$, then
$$\mathcal{L}\left( \int_0^t f(x)\, dx \right) = \frac{1}{s}\, F(s).$$
In other words, integration in the time domain corresponds to multiplication by $1/s$ in the $s$-domain.
Three domains are therefore involved: by performing the Laplace transform we move from the time domain, $f(t)$, to the $s$-domain, $F(s)$ or $\bar{F}\!\left(\frac{1}{s}\right)$, and then to the matrix domain, where we define the functional $\tilde{F}(I_{k+1})$ (say) on the space of matrices.
Integration in the time domain of the inverse Laplace function corresponds to the definition of the functional $\tilde{F}$ on the space of Bernstein operational matrices. Therefore (13) becomes
$$\tilde{F}(I_{k+1}) = I_{k+1}^2\, (I + \alpha I_{k+1})^{-1}.$$
To solve the integral Equation (11), we approximate
$$\int_0^t f_k(x)\, dx = C^T I_{k+1} B(t).$$
Also,
$$\int_0^t u(x)\, dx = d^T I_{k+1} B(t),$$
where $d = [d_0\ \ d_1\ \ d_2\ \cdots\ d_k]^T$ is defined by
$$d_i = \langle u, B_{i,k} \rangle = \int_0^1 B_{i,k}(t)\, dt, \quad i = 0, 1, 2, \ldots, k.$$
Therefore, the integral Equation (11) becomes
$$C^T B(t) + \alpha\, C^T I_{k+1} B(t) = d^T I_{k+1} B(t),$$
so that
$$C^T = d^T I_{k+1}\, (I + \alpha I_{k+1})^{-1},$$
or, equivalently,
$$C^T = d^T I_{k+1}^{-1}\, \tilde{F}(I_{k+1}).$$
Thus the unknown vector $C^T$ is calculated, where $\tilde{F}(I_{k+1})$ can be taken from Equation (15). Consequently, the approximate solution $f_k(t)$ is obtained in terms of Bernstein polynomials by substituting the vector $C^T$ into (5).
All of these computations involve only simple algebraic operations on matrices; they were carried out in MATLAB R2014a on an Intel® Core™ i3 (2328M, 2nd generation) processor.
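For reference, the following is a minimal Python sketch of the same computation (our own re-implementation for illustration, not the original MATLAB code); the choices k = 6 and α = 2 are arbitrary, and the result is compared with the exact solution $f(t) = \left(1 - e^{-\alpha t}\right)/\alpha$ of (10).

```python
import numpy as np
import sympy as sp

t, x = sp.symbols('t x')
k, a = 6, 2.0   # degree of the Bernstein basis and the parameter alpha (arbitrary choices)

# orthonormal Bernstein basis on [0, 1] (Gram-Schmidt, as in Section 3)
raw = [sp.binomial(k, i) * t**i * (1 - t)**(k - i) for i in range(k + 1)]
inner = lambda u, v: sp.integrate(u * v, (t, 0, 1))
B = []
for bi in raw:
    v = bi - sum(inner(bi, Bj) * Bj for Bj in B)
    B.append(sp.expand(v / sp.sqrt(inner(v, v))))

# operational matrix of integration I_{k+1} and the coefficient vector d of the unit step
alph = [sp.integrate(Bi.subs(t, x), (x, 0, t)) for Bi in B]
Ik = np.array([[float(inner(ai, Bj)) for Bj in B] for ai in alph])
d = np.array([float(sp.integrate(Bi, (t, 0, 1))) for Bi in B])   # d_i = <u, B_{i,k}>

# unknown coefficient vector: C^T = d^T I_{k+1} (I + a I_{k+1})^{-1}
C = d @ Ik @ np.linalg.inv(np.eye(k + 1) + a * Ik)

# compare f_k(t) = C^T B(t) with the exact solution at a few points
Bfun = [sp.lambdify(t, Bi, 'numpy') for Bi in B]
for tv in (0.1, 0.5, 0.9):
    fk = sum(ci * Bf(tv) for ci, Bf in zip(C, Bfun))
    print(tv, fk, (1 - np.exp(-a * tv)) / a)
```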

4.1. Application to a Class of Fractional Differential Equations

Here, we demonstrate the practical usefulness of the proposed method by applying it to linear fractional differential equations, linear partial fractional differential equations and nonlinear fractional differential equations.

4.1.1. Application to Linear Fractional Differential Equations

Consider the linear ordinary fractional differential equation
$$D^{\alpha} f(t) = M(f(t)) + g(t),$$
with initial conditions $f^{(i)}(0) = f_i$, $i = 0, 1, \ldots, m-1$.
Here $D^{\alpha}$ denotes the Caputo fractional derivative and $M(f(t))$ represents a linear operator, which may also contain derivatives of order other than $\alpha$.
To solve this initial value problem, the Laplace transform is applied to Equation (20) and we obtain
$$\frac{1}{s^{m-\alpha}}\left[ s^m \mathcal{L}(f(t)) - s^{m-1}f(0) - s^{m-2}f'(0) - \cdots - f^{(m-1)}(0)\right] = \mathcal{L}\left[ M(f(t)) + g(t)\right] = G(s),$$
$$\mathcal{L}(f(t)) = \frac{1}{s^m}\left[ s^{m-\alpha}\, G(s) + s^{m-1} f(0) + s^{m-2} f'(0) + \cdots + f^{(m-1)}(0)\right].$$
Therefore $f(t) = \mathcal{L}^{-1}(H(s))$, where $H(s) = \frac{1}{s^m}\left[ s^{m-\alpha} G(s) + s^{m-1} f(0) + s^{m-2} f'(0) + \cdots + f^{(m-1)}(0)\right]$. Hence the solution $f(t)$ is obtained by computing the inverse Laplace transform of $H(s)$ by the procedure described above.
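As an illustration of how $H(s)$ is formed (a worked special case, anticipating Example 3 of Section 6), consider $D^{0.5} f(t) + f(t) = t^2 + \frac{2}{\Gamma(2.5)}\, t^{1.5}$ with $f(0) = 0$, so that $m = 1$ and $\alpha = 0.5$. Taking the Laplace transform gives
$$s^{0.5} F(s) + F(s) = \frac{2}{s^3} + \frac{2}{s^{2.5}} = \frac{2\left(1 + s^{0.5}\right)}{s^3},$$
so that $H(s) = F(s) = \dfrac{2}{s^3}$, whose numerical inversion by the Bernstein operational matrix recovers the exact solution $f(t) = t^2$.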

4.1.2. Application to Linear Partial Fractional Differential Equations

Consider the linear partial fractional differential equation of the form
$$\frac{\partial^{\alpha} f}{\partial t^{\alpha}} + A(x)\frac{\partial f}{\partial x} + B(x)\frac{\partial^2 f}{\partial x^2} + C(x)\, f = g(x,t),$$
with initial conditions $\dfrac{\partial^k f}{\partial t^k}(x,0) = h_k(x)$, $k = 0, 1, \ldots, m-1$.
Taking the Laplace transform of both sides of Equation (22), we have
$$\frac{1}{s^{m-\alpha}}\left[ s^m F(x,s) - s^{m-1} h_0(x) - s^{m-2} h_1(x) - \cdots - h_{m-1}(x)\right] + A(x)\frac{dF(x,s)}{dx} + B(x)\frac{d^2F(x,s)}{dx^2} + C(x)\, F(x,s) = G(x,s),$$
where $F(x,s)$ and $G(x,s)$ denote the Laplace transforms of $f(x,t)$ and $g(x,t)$, respectively. The above equation can be written as
$$B(x)\frac{d^2F(x,s)}{dx^2} + A(x)\frac{dF(x,s)}{dx} + \left( s^{\alpha} + C(x)\right) F(x,s) = G(x,s) + \frac{1}{s^{m-\alpha}}\left[ s^{m-1} h_0(x) + s^{m-2} h_1(x) + \cdots + h_{m-1}(x)\right].$$
Equation (23) is now a second-order ordinary differential equation in $F(x,s)$ with respect to the variable $x$, with $s$ kept as the Laplace parameter, and it can be solved by any classical method. The obtained solution $F(x,s)$ can then be inverted to $f(x,t)$ using our developed technique.
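For instance (a worked special case, anticipating Example 6 of Section 6), for the time-fractional wave equation $\frac{\partial^{\alpha} f}{\partial t^{\alpha}} = \frac{1}{2} x^2 \frac{\partial^2 f}{\partial x^2}$ with $\alpha = 1.5$ ($m = 2$), $f(x,0) = x$ and $f_t(x,0) = x^2$, the transformed equation reads
$$\frac{1}{2}\, x^2\, \frac{d^2 F(x,s)}{dx^2} - s^{1.5}\, F(x,s) = -s^{0.5}\, x - s^{-0.5}\, x^2,$$
an ordinary differential equation in $x$ with $s$ appearing only as a parameter.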

4.1.3. Application to Nonlinear Fractional Differential Equations

In order to solve nonlinear fractional-order differential equations with the standard Laplace Adomian decomposition method, we briefly recall that method here.
Consider the nonlinear fractional-order differential equation
$$D^{\alpha} f(t) + M(f(t)) + N(f(t)) = g(t), \quad m-1 < \alpha < m,$$
with initial conditions $f^{(i)}(0) = f_i$, $i = 0, 1, \ldots, m-1$.
Here $M(f(t))$ represents the linear operator, which may include derivatives of order other than $\alpha$, and $N(f(t))$ is the nonlinear operator acting on $f(t)$.
The standard Laplace Adomian decomposition method (LADM) begins by taking the Laplace transform of this nonlinear equation; using the properties of the Laplace transform, we have
$$\frac{1}{s^{m-\alpha}}\left[ s^m \mathcal{L}(f(t)) - s^{m-1}f(0) - s^{m-2}f'(0) - \cdots - f^{(m-1)}(0)\right] + \mathcal{L}(M(f(t))) + \mathcal{L}(N(f(t))) = \mathcal{L}(g(t)),$$
$$\mathcal{L}(f(t)) = \frac{a}{s^{\alpha}} - \frac{1}{s^{\alpha}}\mathcal{L}(M(f(t))) - \frac{1}{s^{\alpha}}\mathcal{L}(N(f(t))) + \frac{1}{s^{\alpha}}\mathcal{L}(g(t)),$$
where $a = \sum_{i=0}^{m-1} s^{\alpha - i - 1} f^{(i)}(0)$.
The method describes the solution as the series
$$f(t) = \sum_{n=0}^{\infty} f_n(t).$$
Therefore, the truncated series solution takes the form
$$f_m(t) = \sum_{n=0}^{m} f_n(t),$$
and the nonlinear term is decomposed as
$$N(f(t)) = \sum_{n=0}^{\infty} A_n,$$
where the $A_n$ are the Adomian polynomials, given by
$$A_n = \frac{1}{n!}\, \frac{d^n}{d\lambda^n}\left[ N\!\left( \sum_{i=0}^{n} \lambda^i f_i \right)\right]_{\lambda = 0}.$$
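For example (our own illustration, matching the nonlinearity of Example 7 in Section 6), for $N(f) = f^3$ the first few Adomian polynomials are
$$A_0 = f_0^3, \qquad A_1 = 3 f_0^2 f_1, \qquad A_2 = 3 f_0^2 f_2 + 3 f_0 f_1^2.$$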
Consequently, Equation (25) leads to the following recurrence relation:
$$f_0(t) = \mathcal{L}^{-1}\left[ \frac{a}{s^{\alpha}} - \frac{1}{s^{\alpha}}\mathcal{L}\{M(f(t))\} + \frac{1}{s^{\alpha}}\mathcal{L}\{g(t)\}\right],$$
$$f_1(t) = \mathcal{L}^{-1}\left[ -\frac{1}{s^{\alpha}}\mathcal{L}\{A_0(f_0(t))\}\right],$$
and, in general,
$$f_m(t) = \mathcal{L}^{-1}\left[ -\frac{1}{s^{\alpha}}\mathcal{L}\{A_{m-1}(f_0(t), f_1(t), \ldots, f_{m-1}(t))\}\right].$$
The new development here is that the terms $f_0(t), f_1(t), \ldots$ are obtained by computing the inverse Laplace transforms with our proposed technique, i.e., the Bernstein operational matrix, as described above.

5. Error Estimation and Convergence Analysis

5.1. Error Estimation via RK45 Method

We now investigate the error estimation of the proposed method, following [24]. The error function $e_k(t)$ of the truncated Bernstein expansion $f_k(t)$ is defined as
$$e_k(t) = f(t) - f_k(t).$$
From the following theorem the absolute error bound can be estimated:
Theorem 1.
Let f(t) be a function defined on [0, 1]; then an upper bound for the error of the truncated Bernstein expansion can be estimated.
Proof. 
Suppose that $f_k(t)$ and $f_{k+1}(t)$ are two consecutive approximate solutions of the given differential equation. If the error sequence is increasing or decreasing (monotone), we apply the triangle inequality
$$\Big|\, \| f(t) - f_{k_1}(t)\| - \| f(t) - f_{k_2}(t)\| \,\Big| \le \| f_{k_1}(t) - f_{k_2}(t)\|$$
to the approximate solutions at $k$ and $k+1$ and obtain
$$\Big|\, \| f(t) - f_{k+1}(t)\| - \| f(t) - f_k(t)\| \,\Big| = \beta\, \| f(t) - f_m(t)\| \le \| f_{k+1}(t) - f_k(t)\|, \quad 0 \le \beta < 1,$$
where
$$\| f(t) - f_m(t)\| = \max\left\{ \| f(t) - f_k(t)\|,\ \| f(t) - f_{k+1}(t)\| \right\}$$
and $\beta = \dfrac{\| f_{k+1}(t) - f_k(t)\|}{\| f(t) - f_m(t)\|}$.
Hence, the error can be bounded from above: if the error sequence is monotone, one of the absolute errors $\| e_k(t)\|$ or $\| e_{k+1}(t)\|$ is bounded by $\| f_{k+1}(t) - f_k(t)\|$. The computable error bound holds for $0 \le \beta < 1$; the solution diverges when the Bernstein series diverges, i.e., for $\beta > 1$. □

5.2. Convergence Analysis

In order to show the effectiveness of our method, a residual function of the linear time-varying system is considered in the Banach space for increasing values of k, to interpret the convergence of the Bernstein polynomial solution, as described in [47,53].
Let $f_k(t)$ be the approximate solution of (10); we write the residual function as
$$R_k(t) \equiv f_k'(t) + \alpha f_k(t) - u(t), \quad t \in [0,1].$$
The residual function of the Bernstein polynomial-based numerical solution can be expanded as a Taylor (power) series:
$$R_k(t) = r_0 + r_1 t + r_2 t^2 + \cdots + r_k t^k = \sum_{j=0}^{k} r_j t^j.$$
Now, we wish to prove that the residual function sequence is convergent in the Banach space and satisfies the condition $\| R_{k+1}\| \le \zeta_k \| R_k\|$ for $0 \le \zeta_k < 1$.
For the convergence, we prove that $\{ R_{k+1}(t)\}$, $k = 0, 1, \ldots$, is a Cauchy sequence.
Let us take
$$\| R_{k+1}(t)\| = \left\| \sum_{j=0}^{k+1} r_j t^j \right\| \le \sup\left\{ \left| \sum_{j=0}^{k+1} r_j t^j \right| : t \in [a,b]\right\} = | R_{k+1}(b)|.$$
Therefore, the given condition can be written as
$$| R_{k+1}(b)| \le \zeta_k\, | R_k(b)|,$$
and this can be shown as follows. Let us write $f_k$ and its derivative explicitly in terms of Bernstein polynomials:
$$f_k(t) = \sum_{i=0}^{k} f_i^k B_{i,k}(t),$$
$$f_k'(t) = \sum_{i=0}^{k} f_i^k B_{i,k}'(t).$$
If we define
$$m_k = \max_{i = 0, 1, \ldots, k} \left| f_i^k \right|,$$
we have
$$f_k'(t) \le m_k \sum_{i=0}^{k} B_{i,k}'(t).$$
Here we use the following property of the Bernstein polynomials,
$$B_{i,k}'(t) = k\left[ B_{i-1,k-1}(t) - B_{i,k-1}(t)\right],$$
so that
$$f_k'(t) < m_k \sum_{i=1}^{k-1} k\left[ B_{i-1,k-1}(t) - B_{i,k-1}(t)\right].$$
From this, we get
$$f_k'(t) < m_k\, k\left[ B_{0,k-1}(t) - B_{k-1,k-1}(t)\right],$$
$$f_k'(t) < m_k\, k\left[ (1-t)^{k-1} - t^{k-1}\right].$$
Since the function in brackets is decreasing on $[0,1]$, with $\max_{t\in[0,1]}\left[ (1-t)^{k-1} - t^{k-1}\right] = 1$, it follows that
$$f_k'(t) < k\, m_k.$$
Analogously, by using the properties of the Bernstein polynomials, we can also estimate
$$f_k(t) < m_k \sum_{i=0}^{k} B_{i,k}(t),$$
and since the Bernstein polynomials form a partition of unity, it follows that
$$f_k(t) < m_k.$$
Combining Equation (36) with the conditions in Equations (42) and (43), we get
$$\| R_k\| < m_k\, k + \alpha\, m_k - \| u\|,$$
and analogously
$$\| R_{k+1}\| < m_{k+1}\, k + \alpha\, m_{k+1} - \| u\|.$$
This can be written as
$$\| R_{k+1}\| < \frac{m_{k+1}}{m_k}\left( m_k\, k + \alpha\, m_k - \| u\|\right) + \left( \frac{m_{k+1}}{m_k}\| u\| - \| u\|\right),$$
$$\| R_{k+1}\| < \frac{m_{k+1}}{m_k}\, \| R_k\| + \left( \frac{m_{k+1}}{m_k}\| u\| - \| u\|\right).$$
Here
$$\lim_{k\to\infty} \frac{m_{k+1}}{m_k} = 1,$$
which implies
$$\lim_{k\to\infty}\left( \frac{m_{k+1}}{m_k}\| u\| - \| u\|\right) = 0,$$
so that $\| R_{k+1}\| < \zeta_k\, \| R_k\|$, with $\zeta_k = \dfrac{m_{k+1}}{m_k}$, $0 < \zeta_k < 1$, $\lim_{k\to\infty} \zeta_k = 1$.
Now, starting from the above inequality,
$$| R_{k+1}(b)| - | R_k(b)| \le | R_{k+1}(b) - R_k(b)| \le (\zeta_k - 1)\, | R_k(b)|.$$
Thus, iterating the inequality, we obtain
$$| R_{k+1}(b) - R_k(b)| \le (\zeta_k - 1)\, | R_k(b)| \le (\zeta_k - 1)(\zeta_{k-1} - 1)\, | R_{k-1}(b)| \le \cdots \le (\zeta_k - 1)(\zeta_{k-1} - 1)\cdots(\zeta_0 - 1)\, | R_0(b)|.$$
For $k, s \in \mathbb{N}$ with $k \ge s$, to prove that the sequence is a Cauchy sequence we consider
$$\begin{aligned} | R_k(b) - R_s(b)| &= | R_k(b) - R_{k-1}(b) + R_{k-1}(b) - R_{k-2}(b) + \cdots + R_{s+1}(b) - R_s(b)| \\ &\le | R_k(b) - R_{k-1}(b)| + | R_{k-1}(b) - R_{k-2}(b)| + \cdots + | R_{s+1}(b) - R_s(b)| \\ &\le \epsilon_{k-1}\, | R_0(b)| + \epsilon_{k-2}\, | R_0(b)| + \cdots + \epsilon_{s+1}\, | R_0(b)| \\ &= \left( \epsilon_{k-1} + \epsilon_{k-2} + \cdots + \epsilon_{s+1}\right) | R_0(b)| \le \eta\, | R_0(b)|, \end{aligned}$$
where $\epsilon_k = \prod_{i=0}^{k} (\zeta_i - 1)$ and $\eta = \epsilon_{k-1} + \epsilon_{k-2} + \cdots + \epsilon_{s+1}$. Hence, for $0 \le \eta < 1$, $| R_k(b) - R_s(b)| \to 0$ as $k, s \to \infty$, which proves that the residual function sequence is a Cauchy sequence and hence convergent. Therefore the approximate solution is convergent.

6. Illustrative Examples

Here, we present some examples to demonstrate the applicability of the presented technique. The relative errors and $L_\infty$-errors are plotted for different values of k, and the error estimation is discussed in each example.
Example 1.
Consider the following fractional differential equation [12]:
$$Df(t) + D^{0.25} f(t) + f(t) = t^{5/2} + \frac{5}{2}\, t^{3/2} + \frac{15\sqrt{\pi}}{8\, \Gamma(13/4)}\, t^{9/4}, \quad f(0) = 0,$$
with the exact solution $f(t) = t^2\sqrt{t}$.
The relative errors for different values of k are plotted in Figure 1, which shows that the method gives results in good agreement with the analytic solution. Using the error estimation method, we estimate the upper bound of the errors and calculate
$$\| e_6\| = 1.07\times 10^{-3}, \qquad \| e_7\| = 6.00\times 10^{-5}, \qquad \| f_6 - f_7\| = 1.07\times 10^{-3}.$$
Therefore, it is clear from the data that the error is bounded above by $\| f_6 - f_7\|$.
Example 2.
Consider the following fractional differential equation
$$D^{1.5} f(t) + 3 f(t) = 3 t^3 + \frac{4}{\Gamma(1.5)}\, t^{1.5}, \quad f(0) = f'(0) = 0,$$
with the exact solution $f(t) = t^3$.
The relative errors for different values of k are plotted in Figure 2. Using the error estimation method, we estimate the upper bound of the errors and calculate
$$\| e_6\| = 1.71\times 10^{-3}, \qquad \| e_7\| = 3.21\times 10^{-5}, \qquad \| f_6 - f_7\| = 1.72\times 10^{-3}.$$
Hence, in this example $\max\{\| e_6\|, \| e_7\|\} \le \| f_6 - f_7\|$, which justifies the result of the error analysis.
Example 3.
Consider the following fractional differential equation [15]:
$$D^{0.5} f(t) + f(t) = t^2 + \frac{2}{\Gamma(2.5)}\, t^{1.5}, \quad f(0) = 0,$$
with the exact solution $f(t) = t^2$.
In Figure 3, we plot the relative errors for k = 5 and k = 6. To show the efficiency, the absolute errors compared with the fractional finite difference method (FFDM) [15] are given in Table 1, which clearly shows that our method gives more accurate results. Using the error estimation method, we also estimate the upper bound of the errors and calculate $\| e_6\| = 8.17\times 10^{-4}$, $\| e_7\| = 2.97\times 10^{-5}$ and $\| f_6 - f_7\| = 8.17\times 10^{-4}$. Therefore $\max\{\| e_6\|, \| e_7\|\} \le \| f_6 - f_7\|$, in agreement with the estimated upper bound of the error.
Example 4.
Consider the following mathematical model, developed for a micro-electro-mechanical system (MEMS) instrument that has been designed to measure the viscosity of fluids encountered during oil exploration [54]:
$$D^2 f(t) + \beta\sqrt{\pi}\, D^{1.5} f(t) + f(t) = 0, \quad f(0) = 1,\ f'(0) = 0.$$
In Figure 4, we plot the relative errors for k = 5 and k = 6, taking β = 1. To show the efficiency, the absolute errors compared with the cubic spline method [54] are given in Table 2, which clearly shows that our method gives more accurate results than the method in [54]. Using the error estimation method, we also estimate the upper bound of the errors and calculate $\| e_6\| = 2.61\times 10^{-4}$, $\| e_7\| = 7.43\times 10^{-5}$ and $\| f_6 - f_7\| = 1.90\times 10^{-4}$. Therefore $\| e_7\| \le \| f_6 - f_7\|$, in agreement with the estimated upper bound of the error.
Example 5.
Consider the following system of fractional differential equations [14,55]:
$$D^{\alpha} f_1(t) = f_1(t) + f_2(t),$$
$$D^{\beta} f_2(t) = -f_1(t) + f_2(t),$$
subject to the conditions
$$f_1(0) = 0, \qquad f_2(0) = 1,$$
where the exact solution of the system at $\alpha = \beta = 1$ is $f_1(t) = e^t \sin t$ and $f_2(t) = e^t \cos t$.
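Indeed, at $\alpha = \beta = 1$ one checks directly (our own verification of the stated exact solution) that
$$\frac{d}{dt}\left( e^t \sin t\right) = e^t \sin t + e^t \cos t = f_1 + f_2, \qquad \frac{d}{dt}\left( e^t \cos t\right) = e^t \cos t - e^t \sin t = -f_1 + f_2.$$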
The relative errors for different values of k are plotted in Figure 5 and Figure 6. Using the error estimation method, we estimate the upper bound of the errors for $f_1(t)$ and calculate
$$\| e_6\| = 1.59\times 10^{-3}, \qquad \| e_7\| = 6.18\times 10^{-5}, \qquad \| f_6 - f_7\| = 1.59\times 10^{-3}.$$
We observe that the result $\max\{\| e_6\|, \| e_7\|\} \le \| f_6 - f_7\|$ is satisfied here. Similarly, for the function $f_2(t)$ the results are
$$\| e_6\| = 9.96\times 10^{-4}, \qquad \| e_7\| = 2.98\times 10^{-4}, \qquad \| f_6 - f_7\| = 1.08\times 10^{-3}.$$
The numbers clearly show that the error is bounded from above, i.e., $\max\{\| e_6\|, \| e_7\|\} \le \| f_6 - f_7\|$. To show the efficiency, the absolute errors compared with the variational iteration method (VIM) [55] are given in Table 3 and Table 4, which show that our method gives more accurate results than the method in [55].
Example 6.
Consider the following linear time-fractional wave equation [18]
$$\frac{\partial^{\alpha} f}{\partial t^{\alpha}} = \frac{1}{2}\, x^2\, \frac{\partial^2 f}{\partial x^2},$$
with $f(x,0) = x$ and $\dfrac{\partial f(x,0)}{\partial t} = x^2$.
The exact solution is not known.
Here, we first convert the fractional partial differential equation to an ordinary differential equation as described in Section 4. The solution obtained by the proposed method for α = 1.5 and k = 6 is presented in Table 5 and compared with VIM (Table 6) and with inverse multiquadric radial basis functions (IMQ-RBF, Table 7), which shows that the solution is in good agreement with VIM and IMQ-RBF [18].
Example 7.
Consider the nonlinear fractional differential equation
$$D^4 f(t) + D^{7/2} f(t) + f^3(t) = t^9, \quad f(0) = f'(0) = f''(0) = 0,\ f'''(0) = 6,$$
with the exact solution $f(t) = t^3$.
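Indeed (our own check of the stated exact solution), since $4 > 3$ and $7/2 > 3$, the Caputo derivatives of $t^3$ of orders $4$ and $7/2$ both vanish,
$$D^4 t^3 = 0, \qquad D^{7/2} t^3 = 0, \qquad (t^3)^3 = t^9,$$
so $f(t) = t^3$ satisfies the equation together with the initial conditions.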
The relative errors for different values of k are plotted in Figure 7, which shows that our method gives a solution very close to the analytic one. Using the error estimation method, we estimate the upper bound of the errors and calculate
$$\| e_6\| = 1.71\times 10^{-3}, \qquad \| e_7\| = 7.72\times 10^{-5}, \qquad \| f_6 - f_7\| = 1.72\times 10^{-3}.$$
Hence, we can see that $\| f_6 - f_7\|$ bounds the error $\| e_6\|$.
We also analyzed the $L_\infty$-error at different values of k, i.e., k = 5, 6, 7, 8, for the examples, which clearly shows that the best results are achieved at k = 6, as shown in Figure 8, Figure 9, Figure 10 and Figure 11.

7. Conclusions

Enormous efforts and advances have been devoted to obtaining numerical solutions of fractional differential equations. An operational matrix of orthonormal Bernstein polynomials was derived to find the inverse Laplace transform in [50]. Here, the practical use of our proposed method is discussed for finding the solutions of some fractional-order ordinary differential equations, including the mathematical model of a MEMS instrument, and partial differential equations (in particular the wave equation); the method converts the problem into a system of linear algebraic equations. The accuracy of the method is illustrated by comparing the solutions with some existing methods such as VIM, FFDM, the cubic spline method and IMQ-RBF. We have also combined our method with the Laplace Adomian decomposition method, which is advantageous for solving nonlinear fractional differential equations. Finally, we have analyzed the solution of each illustrative example at different values of k, i.e., at k = 5, 6, 7, 8, and the relative errors are shown with the help of graphs. It is also observed from the plots of the supremum norm error that the most accurate results are achieved at k = 6.

Author Contributions

The authors have contributed equally to this paper.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: San Diego, CA, USA, 2006. [Google Scholar]
  2. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations to Methods of Their Solution and Some of Their Applications; Academic Press: New York, NY, USA, 1999. [Google Scholar]
  3. Debnath, L.; Bhatta, D. Integral Transforms and their Applications, 2nd ed.; Chapman and Hall/CRC: Boca Raton, FL, USA, 2007. [Google Scholar]
  4. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; John Wiley and Sons, Inc.: New York, NY, USA, 1993. [Google Scholar]
  5. Oldham, K.B.; Spanier, J. The Fractional Calculus; Academic Press: New York, NY, USA, 1974. [Google Scholar]
  6. Xiao-Jun, Y. General Fractional Derivatives: Theory, Methods and Applications; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  7. Li, Y.; Sun, N. Numerical solution of fractional differential equations using the generalized block pulse operational matrix. Comput. Math. Appl. 2011, 62, 1046–1054. [Google Scholar] [CrossRef] [Green Version]
  8. Saadatmandi, A.; Dehghan, M. A new operational matrix for solving fractional-order differential equations. Comput. Math. Appl. 2010, 59, 1326–1336. [Google Scholar] [CrossRef] [Green Version]
  9. Saadatmandi, A.; Dehghan, M. A tau approach for solution of the space fractional diffusion equation. Comput. Math. Appl. 2011, 62, 1135–1142. [Google Scholar] [CrossRef] [Green Version]
  10. Doha, E.H.; Bhrawy, A.H.; Ezz-Eldien, S.S. A new Jacobi operational matrix: An application for solving fractional differential equations. Appl. Math. Model. 2012, 36, 4931–4943. [Google Scholar] [CrossRef]
  11. Kazem, S.; Abbasbandy, S.; Kumar, S. Fractional-order Legendre functions for solving fractional-order differential equations. Appl. Math. Model. 2013, 37, 5498–5510. [Google Scholar] [CrossRef]
  12. Mokhtary, P.; Ghoreishi, F.; Srivastava, H.M. The Muntz-Legendre Tau method for fractional differential equations. Appl. Math. Model. 2016, 40, 671–684. [Google Scholar] [CrossRef]
  13. Keshavarz, E.; Ordokhani, Y.; Razzaghi, M. Bernoulli wavelet operational matrix of fractional order integration and its applications in solving the fractional order differential equations. Appl. Math. Model. 2014, 38, 6038–6051. [Google Scholar] [CrossRef]
  14. Sabermahani, S.; Ordokhani, Y.; Yousefi, S.A. Numerical approach based on fractional-order Lagrange polynomials for solving a class of fractional differential equations. Comput. Appl. Math. 2017, 1, 1–23. [Google Scholar]
  15. Albadarneh, R.B.; Zerqat, M.; Batiha, I.M. Numerical solutions for linear and non-linear fractional differential equations. Int. J. Pure Appl. Math. 2016, 106, 859–871. [Google Scholar] [CrossRef]
  16. Garrappa, R. Numerical solution of fractional differential equations: A survey and a software tutorial. Mathematics 2018, 6, 16. [Google Scholar] [CrossRef]
  17. Dehghan, M.; Manafian, J.; Saadatmandi, A. The solution of the linear fractional partial differential equations using the homotopy analysis method. Z. Naturforsch. 2010, 65a, 935–949. [Google Scholar]
  18. Vanani, S.K.; Aminataei, A. On the numerical solution of fractional partial differential equations. Math. Comput. Appl. 2012, 17, 140–151. [Google Scholar] [CrossRef]
  19. Rehman, M.U.; Khan, R.A. Numerical solutions to initial and boundary value problems for linear fractional partial differential equations. Appl. Math. Model. 2013, 37, 5233–5244. [Google Scholar] [CrossRef]
  20. Li, W.; Bai, L.; Chen, Y.; Santos, S.D.; Li, B. Solution of linear fractional partial differential equations based on the operator matrix of fractional Bernstein polynomials and error correction. Inter. J. Innov. Comput. Inf. Control 2018, 14, 211–226. [Google Scholar]
  21. Xiao-Jun, Y.; Gao, F.; Ju, Y.; Zhou, H.W. Fundamental solutions of the general fractional-order diffusion equations. Math. Methods Appl. Sci. 2018, 41, 9312–9320. [Google Scholar]
  22. Xiao-Jun, Y.; Gao, F.; Srivastava, H.M. A new computational approach for solving nonlinear local fractional PDEs. J. Comput. Appl. Math. 2018, 339, 285–296. [Google Scholar]
  23. Cesarano, C. Generalized special functions in the description of fractional diffusive equations. Commun. Appl. Ind. Math. 2019, 10, 31–40. [Google Scholar] [CrossRef]
  24. Babolian, E.; Shamloo, A.S. Numerical solution of Volterra integral and integro-differential equations of convolution type by using operational matrices of piecewise constant orthogonal functions. J. Comput. Appl. Math 2008, 214, 495–508. [Google Scholar] [CrossRef] [Green Version]
  25. Maleknejad, K.; Nouri, M. A direct method to solve integral and integro-differential equations of convolution type by using improved operational matrix. Inter. J. Syst. Sci. 2012, 2012, 1–8. [Google Scholar] [CrossRef]
  26. Murli, A.; Rizzardi, M. Algorithm 682 Talbot’s method for the Laplace inversion problem. ACM Trans. Math. Softw. 1990, 16, 158–168. [Google Scholar] [CrossRef]
  27. Massouros, P.G.; Genin, G.M. Algebraic inversion of the Laplace transform. Comput. Math. Appl. 2005, 50, 179–185. [Google Scholar] [CrossRef] [Green Version]
  28. Lee, J.; Sheen, D. An accurate numerical inversion of Laplace transforms based on the location of their poles. Comput. Math. Appl. 2004, 48, 1415–1423. [Google Scholar] [CrossRef]
  29. Matsuura, T.; Saitoh, S. Real inversion formulas and numerical experiments of the Laplace transform by using the theory of reproducing kernels. Procedia Soc. Behav. Sci. 2010, 2, 111–119. [Google Scholar] [CrossRef] [Green Version]
  30. Hsiao, C.H. Numerical inversion of Laplace transform via wavelet in ordinary differential equations. Comput. Methods Diff. Equ. 2014, 2, 186–194. [Google Scholar]
  31. Iqbal, M. On comparison of spline regularization with exponential sampling method for Laplace transform inversion. Comput. Phys.Commun. 1995, 88, 43–50. [Google Scholar] [CrossRef]
  32. Cuomo, S.; D’Amore, L.; Murli, A.; Rizzardi, M. Computation of the inverse Laplace transform based on a collocation method which uses only real values. J. Comput. Appl. Math. 2007, 198, 98–115. [Google Scholar] [CrossRef] [Green Version]
  33. Dubner, H.; Abate, J. Numerical inversion of Laplace transforms by relating them to the finite Fourier cosine transform. J. Association Comput. Mach. 1968, 15, 115–123. [Google Scholar] [CrossRef]
  34. Durbin, F. Numerical inversion of Laplace transforms: an efficient improvement to Dubner and Abate’s method. Comput. J. 1974, 17, 371–376. [Google Scholar] [CrossRef]
  35. Davis, B.; Martin, B. Numerical inversion of Laplace transform: A survey and comparison of methods. J. Comput. Phys. 1979, 33, 1–32. [Google Scholar] [CrossRef]
  36. Cohen, A.M. Numerical Methods for Laplace Transform Inversion; Springer: New York, NY, USA, 2007. [Google Scholar]
  37. Sastre, J.; Defez, E.; Jódar, L. Application of Laguerre matrix polynomials to the numerical inversion of Laplace transforms of matrix functions. Appl. Math. Lett. 2011, 24, 1527–1532. [Google Scholar] [CrossRef] [Green Version]
  38. Aznam, S.M.; Hussin, A. Numerical method for inverse Laplace transform with Haar Wavelet operational matrix. Malays. J. Fund. Appl. Sci. 2012, 8, 182–188. [Google Scholar] [CrossRef]
  39. Chen, C.F.; Tsay, Y.T.; Wu, T.T. Walsh operational matrices for fractional calculus and their application to distributed parameter systems. J. Frankl. Inst. 1977, 503, 267–284. [Google Scholar] [CrossRef]
  40. Wu, J.L.; Chen, C.F.; Chen, C.F. Numerical inversion of Laplace transform using Haar wavelet operational matrices. IEEE Trans. Circuit Syst.-I: Fundam. Theory Appl. 2001, 48, 120–122. [Google Scholar]
  41. Shamloo, A.S.; Hosseingholizadeh, R.; Nouri, M. Numerical solution of nonlinear Volterra integral equations of the first kind with convolution kernel. World Appl. Program. 2014, 4, 172–180. [Google Scholar]
  42. Chen, C.F.; Hsiao, C.H. Haar wavelet method for solving lumped and distributed-parameter systems. IEEE Control Theory Appl. 1997, 144, 87–94. [Google Scholar] [CrossRef]
  43. Bhatti, M.I.; Bracken, P. Solutions of differential equations in a Bernstein polynomial basis. J. Comput. Appl. Math. 2007, 205, 272–280. [Google Scholar] [CrossRef] [Green Version]
  44. Maleknejad, K.; Basirat, B.; Hashemizadeh, E. A Bernstein operational matrix approach for solving a system of high order linear Volterra-Fredholm integro-differential equations. Math. Comput. Model. 2012, 55, 1363–1372. [Google Scholar] [CrossRef]
  45. Singh, A.K.; Singh, V.K.; Singh, O.P. The Bernstein operational matrix of integration. Appl. Math. Sci. 2009, 3, 2427–2436. [Google Scholar]
  46. Quain, W.; Riedel, M.D.; Rosenberg, I. Uniform approximation and Bernstein polynomial with coefficients in the unit interval. Eur. J. Comb. 2011, 32, 448–463. [Google Scholar]
  47. Bataineh, A.S. Bernstein polynomials method and its error analysis for solving nonlinear problems in the calculus of variations: convergence analysis via residual function. Filomat 2018, 32, 1379–1393. [Google Scholar] [CrossRef]
  48. Rostamy, D.; Alipour, M.; Jafari, H.; Baleanu, D. Solving multi-term orders fractional differential equations by operational matrices of BPs with convergence analysis. Roman. Rep. Phys. 2013, 65, 334–349. [Google Scholar]
  49. Alshbool, M.H.T.; Bataineh, A.S.; Hashim, I.; Isik, O.R. Solution of fractional-order differential equations based on the operational matrices of new fractional Bernstein functions. J. King Saud Univ. Sci. 2017, 29, 1–18. [Google Scholar] [CrossRef] [Green Version]
  50. Rani, D.; Mishra, V.; Cattani, C. Numerical inversion of Laplace transform based on Bernstein operational matrix. Math. Methods Appl. Sci. 2018, 41, 9231–9243. [Google Scholar] [CrossRef]
  51. Khuri, S.A. A Laplace decomposition algorithm applied to a class of nonlinear differential equation. J. Appl. Math. 2001, 1, 141–155. [Google Scholar] [CrossRef]
  52. Dattoli, G.; Lorenzutta, S.; Cesarano, C. Bernestein polynomials and operational methods. J. Comput. Anal. Appl. 2006, 8, 369–377. [Google Scholar]
  53. Kurkcu, O.K.; Aslan, E.; Sezera, M.E. A numerical method for solving some model problems arising in science and convergence analysis based on residual function. Appl. Numer. Math. 2017, 121, 134–148. [Google Scholar] [CrossRef]
  54. Zahra, W.K.; Elkholy, S.M. The use of cubic splines in the numerical solution of fractional differential equations. Int. J. Math. Math. Sci. 2012, 2012, 1–16. [Google Scholar] [CrossRef]
  55. Momani, S.; Odibatb, Z. Numerical approach to differential equations of fractional order. J. Comput. Appl. Math. 2007, 207, 96–110. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The relative errors for k = 5 (left) and k = 6 (right) in Example 1.
Figure 2. The relative errors for k = 5 (left) and k = 6 (right) in Example 2.
Figure 3. The relative errors for k = 5 (left) and k = 6 (right) in Example 3.
Figure 4. The relative errors for k = 5 (left) and k = 6 (right) in Example 4.
Figure 5. The relative errors for k = 5 (left) and k = 6 (right) for f 1 ( t ) in Example 5.
Figure 6. The relative errors for k = 5 (left) and k = 6 (right) for f 2 ( t ) in Example 5.
Figure 7. The relative errors for k = 5 (left) and k = 6 (right) in Example 7.
Figure 8. The $L_\infty$-error for Example 1 (left) and Example 2 (right) for different values of k.
Figure 9. The $L_\infty$-error for Example 3 (left) and Example 4 (right) for different values of k.
Figure 10. The $L_\infty$-error for $f_1(t)$ (left) and $f_2(t)$ (right) at different values of k for Example 5.
Figure 11. The $L_\infty$-error at different values of k for Example 7.
Table 1. Comparison of absolute error in Example 3.
t      Present Method at k = 5    Present Method at k = 6    FFDM [15]
0.1    8.17 × 10^-4               2.02 × 10^-7               1.16 × 10^-4
0.2    5.36 × 10^-4               3.09 × 10^-6               1.56 × 10^-4
0.3    3.64 × 10^-4               4.59 × 10^-6               1.81 × 10^-4
0.4    2.65 × 10^-4               4.87 × 10^-6               2.00 × 10^-4
0.5    2.13 × 10^-4               6.01 × 10^-6               2.15 × 10^-4
0.6    1.88 × 10^-4               9.61 × 10^-6               2.27 × 10^-4
0.7    1.77 × 10^-4               1.56 × 10^-5               2.37 × 10^-4
0.8    1.72 × 10^-4               2.23 × 10^-5               2.46 × 10^-4
0.9    1.68 × 10^-4               2.73 × 10^-5               2.54 × 10^-4
1      1.64 × 10^-4               2.97 × 10^-5               2.61 × 10^-4
Table 2. Comparison of Absolute error in Example 4.
t       Present Method at k = 6    Cubic Spline Method [54]
0.125   4.49 × 10^-5               1.24 × 10^-3
0.250   1.18 × 10^-6               5.12 × 10^-3
0.375   1.80 × 10^-5               1.39 × 10^-2
0.500   1.53 × 10^-5               2.61 × 10^-2
0.625   9.29 × 10^-6               4.04 × 10^-2
0.75    7.63 × 10^-6               5.58 × 10^-2
0.875   2.47 × 10^-5               7.15 × 10^-2
1       7.43 × 10^-5               8.72 × 10^-2
Table 3. Comparison of Absolute errors in f 1 ( t ) in Example 5.
t      Present Method at k = 5    Present Method at k = 6    VIM [55]
0.1    1.59 × 10^-3               5.99 × 10^-6               1.66 × 10^-4
0.2    9.84 × 10^-4               1.40 × 10^-5               1.32 × 10^-3
0.3    6.08 × 10^-4               1.32 × 10^-5               4.39 × 10^-3
0.4    3.80 × 10^-4               1.02 × 10^-5               1.02 × 10^-2
0.5    2.34 × 10^-4               1.53 × 10^-5               1.93 × 10^-2
0.6    1.22 × 10^-4               3.16 × 10^-5               3.21 × 10^-2
0.7    2.24 × 10^-5               5.15 × 10^-5               4.86 × 10^-2
0.8    7.21 × 10^-5               6.18 × 10^-5               6.81 × 10^-2
0.9    1.73 × 10^-4               5.60 × 10^-5               8.95 × 10^-2
1      3.29 × 10^-4               5.56 × 10^-5               1.11 × 10^-1
Table 4. Comparison of Absolute errors in f 2 ( t ) in Example 5.
t      Present Method at k = 5    Present Method at k = 6    VIM [55]
0.1    1.33 × 10^-4               4.81 × 10^-5               1.79 × 10^-4
0.2    2.89 × 10^-4               1.31 × 10^-5               1.54 × 10^-4
0.3    4.07 × 10^-4               1.74 × 10^-5               5.57 × 10^-3
0.4    5.09 × 10^-4               3.42 × 10^-7               1.41 × 10^-2
0.5    6.02 × 10^-4               7.26 × 10^-5               2.94 × 10^-2
0.6    6.85 × 10^-4               1.45 × 10^-4               5.40 × 10^-2
0.7    7.64 × 10^-4               1.25 × 10^-4               9.11 × 10^-2
0.8    8.44 × 10^-4               2.68 × 10^-5               1.44 × 10^-1
0.9    9.28 × 10^-4               1.53 × 10^-4               2.17 × 10^-1
1      9.96 × 10^-4               2.98 × 10^-4               3.13 × 10^-1
Table 5. Solution by proposed method for different values of x and t in Example 6.
x       t = 0     t = 0.06    t = 0.13    t = 0.29    t = 0.50
0.00    0.00      0.00        0.00        0.00        0.00
0.11    0.11      0.110729    0.111595    0.113677    0.116726
0.31    0.31      0.315791    0.322671    0.339207    0.363420
0.88    0.88      0.926665    0.982110    1.115365    1.310479
1.00    1.00      1.060260    1.131857    1.303932    1.555887
Table 6. Solution by variation iteration method (VIM) for different values of x and t in Example 6.
x       t = 0     t = 0.06    t = 0.13    t = 0.29    t = 0.50
0.00    0.00      0.00        0.00        0.00        0.00
0.11    0.11      0.110729    0.111595    0.113677    0.116726
0.31    0.31      0.315791    0.322670    0.339207    0.363419
0.88    0.88      0.926669    0.982101    1.115360    1.310469
1.00    1.00      1.060265    1.131845    1.303926    1.555874
Table 7. Solution by IMQ-RBF for different values of x and t in Example 6.
x       t = 0     t = 0.06    t = 0.13    t = 0.29    t = 0.50
0.00    0.00      0.00        0.00        0.00        0.00
0.11    0.11      0.110729    0.111595    0.113679    0.116710
0.31    0.31      0.315794    0.322675    0.339223    0.363298
0.88    0.88      0.926696    0.982140    1.115490    1.309495
1.00    1.00      1.060300    1.131896    1.304094    1.554617
