Article

Highly Accurate Numerical Method for Solving Fractional Differential Equations with Purely Integral Conditions

Department of Mathematics, Faculty of Technology and Education, Helwan University, Helwan 11281, Egypt
Fractal Fract. 2025, 9(7), 407; https://doi.org/10.3390/fractalfract9070407
Submission received: 11 May 2025 / Revised: 1 June 2025 / Accepted: 13 June 2025 / Published: 24 June 2025

Abstract

The main goal of this paper is to present a new numerical algorithm for solving two models of one-dimensional fractional partial differential equations (FPDEs) subject to initial conditions (ICs) and integral boundary conditions (IBCs). This paper constructs a basis of polynomials, built from modified shifted Chebyshev polynomials of the second kind (MSC2Ps), that satisfies the homogeneous IBCs; this basis is named IMSC2Ps. We also introduce two classes of MSC2Ps that satisfy the given ICs. We create operational matrices (OMs) for both ordinary derivatives (ODs) and Caputo fractional derivatives (CFDs) connected to these basis functions. By employing the spectral collocation method (SCM), we convert the FPDEs into a system of algebraic equations, which can be solved using any suitable numerical solver. We validate the efficacy of our approach through convergence and error analyses, supported by numerical examples that demonstrate the method’s accuracy and effectiveness. Comparisons with existing methodologies further illustrate the advantages of the proposed technique, showcasing its high accuracy in approximating solutions.

1. Introduction

Fractional partial differential equations (FPDEs) have emerged as powerful tools for modeling a wide range of phenomena in science and engineering, providing a more accurate representation of processes exhibiting nonlocal behavior and memory effects than their integer-order counterparts. FPDEs have many applications in diverse fields: for instance, fractional diffusion models to analyze thermal diffusion [1], fractional Fisher–Kolmogorov equations to simulate the propagation of a gene within a population [2], fractional cable equations in neuron–neuron communication [3], and fractional Burgers equations in shallow-water waves and waves in bubbly liquids [4]. The analytical solutions of FPDEs are often difficult to obtain, necessitating the development of robust and accurate numerical methods. In addition, these models are often accompanied by IBCs.
Differential equations (DEs) with IBCs model many problems in the applied sciences, including heat conduction [5,6], chemical engineering [7], thermo-elasticity [8], and plasma physics [9]. Researchers have explored various numerical approaches for such models with IBCs [10,11,12,13]. PDEs and FPDEs with IBCs are among the most important mathematical models arising in mathematical biology [14], physics [15], chemistry [7], and engineering applications [16], and many problems in modern physics and technology are formulated using them. IBCs are of great interest due to their applications in many fields: population dynamics, heat diffusion–advection, models of blood circulation, chemical engineering, and thermo-elasticity [17,18]. The existence and uniqueness of the solutions to these problems have been studied by many authors [19,20,21,22].
In recent years, numerical methods for solving various types of DEs with IBCs have been developed. The authors of [23] made a major contribution when they presented a finite difference scheme that used a Shishkin mesh to solve a nonlinear singularly perturbed problem with IBCs. In [10,11,12,24], researchers have explored various numerical approaches for a second-order forced Duffing equation with IBCs. Additionally, the necessity of finding solutions for FPDEs with IBCs has driven the development of numerical methods to solve them. The numerical study in [17] is based on the application of a combination of the finite difference method with a numerical integration method to obtain an approximate solution to the proposed problem. In [19], a Galerkin method based on least squares is considered.
The three most prevalent spectral approaches are the collocation, tau, and Galerkin methods. The optimal choice among these depends on the properties of the DE being studied and the boundary conditions (BCs) that are connected to it. These methodologies use OMs to formulate efficient algorithms that provide accurate numerical solutions for diverse DEs while reducing the computational effort required [25,26,27,28] and have been explored in various contexts [29,30,31,32].
This paper addresses the challenge of numerically solving two models of FPDEs subject to ICs and IBCs. To overcome these challenges, we introduce a novel spectral collocation method (SCM) based on MSC2Ps and IMSC2Ps. Below is a summary of this paper’s main contributions:
(i)
We construct a new class of basis polynomials, IMSC2Ps, and introduce two classes of basis polynomials, MSC2Ps, that satisfy the homogeneous form of the given ICs and IBCs.
(ii)
We discuss the establishment of OMs for ODs and CFDs of IMSC2Ps and MSC2Ps, respectively.
(iii)
We use the SCM along with the suggested OMs of the derivatives to develop a numerical method to solve the two models of FPDEs presented in Section 3.
To the best of our knowledge, a Galerkin OM built from basis functions that satisfy the given IBCs has not yet been reported in the literature. This gap largely drives our interest in such OMs. Another motivation is that the use of this form of OMs leads to high accuracy in the numerical solution of the FPDEs studied in this paper.
This paper is organized as follows. Section 2 presents essential preliminaries and fundamental definitions regarding CFDs. Section 3 presents two particular FPDE models that have been chosen to show how flexible and effective the new numerical method is. Next, we solve these models numerically using MSC2Ps- and IMSC2Ps-based SCMs. In Section 4, we discuss the Chebyshev polynomials of the second kind (C2Ps), SC2Ps, and MSC2Ps, focusing on how to create IMSC2Ps that satisfy the homogeneous form of IBCs related to the FPDEs examined in this paper. Section 5 is limited to developing a new OM for ODs of IMSC2Ps and two OMs for ODs and CFDs of MSC2Ps. The use of the SCM to solve the FPDEs studied is explained in Section 6 and Section 7. The convergence and error analyses are presented in Section 8. Section 9 presents numerical examples, along with comparisons to other approaches from the literature. Finally, Section 10 outlines the findings and future work.

2. Basic Definition of Caputo FDs

This section introduces key terminology and fundamental tools that lay the groundwork for our proposed methodology. These tools are essential for the effective treatment of the FPDEs under consideration.
Definition 1.
Caputo defined the fractional-order derivative as [33]
$${}_{0}^{c}D_{s}^{\alpha}u(s)=\begin{cases}\dfrac{1}{\Gamma(n-\alpha)}\displaystyle\int_{0}^{s}(s-\tau)^{n-1-\alpha}\,u^{(n)}(\tau)\,d\tau, & n-1<\alpha<n,\ n\in\mathbb{Z}^{+},\\[6pt] u^{(n)}(s), & \alpha=n.\end{cases}$$
This operator satisfies that [33]
$${}_{0}^{c}D_{s}^{\alpha}\bigl(\lambda_{1}h_{1}(s)+\lambda_{2}h_{2}(s)\bigr)=\lambda_{1}\,{}_{0}^{c}D_{s}^{\alpha}h_{1}(s)+\lambda_{2}\,{}_{0}^{c}D_{s}^{\alpha}h_{2}(s),$$
$${}_{0}^{c}D_{s}^{\alpha}(C)=0\quad (C\ \text{is a constant}),$$
$${}_{0}^{c}D_{s}^{\alpha}s^{k}=\begin{cases}0, & k<\lceil\alpha\rceil,\ k\in\mathbb{N}_{0},\\[4pt] \dfrac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\,s^{k-\alpha}, & k\ge\lceil\alpha\rceil.\end{cases}$$
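As a quick sanity check on the last property, the following short Mathematica fragment compares the power rule with the built-in CaputoD operator (available in Mathematica 13.1 and later, and listed among the functions used in Remark 8 below); the symbol names are illustrative only.

(* Caputo power rule: for k >= ceil(alpha), D^alpha s^k = Gamma[k+1]/Gamma[k+1-alpha] s^(k-alpha) *)
alpha = 1/2; kpow = 3;
lhs = CaputoD[s^kpow, {s, alpha}];                            (* built-in Caputo fractional derivative *)
rhs = (Gamma[kpow + 1]/Gamma[kpow + 1 - alpha]) s^(kpow - alpha);
Simplify[lhs - rhs]                                           (* 0 *)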

3. Formulation of FPDEs with IBCs

In this study, two algorithms are proposed to numerically solve the following two models of FPDEs with IBCs.
  • Model 1:
$$\mathcal{L}U(X,t)={}_{0}^{c}D_{t}^{\beta}U(X,t)-K_{1}(X,t)\,D_{XX}U(X,t)-K_{2}(X,t)\,D_{X}U(X,t)+K_{3}(X,t)\,U(X,t)=K_{4}(X,t),\qquad (X,t)\in I,$$
subject to the following ICs and purely IBCs:
$$U(X,0)=g_{1}(X),\quad X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=g_{2}(t),\qquad \int_{0}^{1}X\,U(X,t)\,dX=g_{3}(t),\quad t\in(0,1),$$
where $\beta\in(0,1)$, $I=(0,1)\times(0,1)$, $K_{i}$, $i=1,2,3,4$, and $g_{j}$, $j=1,2,3$, are known functions such that $K_{2}(X,t)=D_{X}K_{1}(X,t)$, and they satisfy the following compatibility conditions:
$$\int_{0}^{1}g_{1}(X)\,dX=g_{2}(0),\qquad \int_{0}^{1}X\,g_{1}(X)\,dX=g_{3}(0).$$
  • Model 2:
$$\mathcal{L}U(X,t)={}_{0}^{c}D_{t}^{\alpha}U(X,t)+H_{1}(X,t)\,D_{XX}U(X,t)+H_{2}(X,t)\,D_{X}U(X,t)+H_{3}(X,t)\,U(X,t)=H_{4}(X,t),\qquad (X,t)\in I,$$
subject to the following ICs and purely IBCs:
$$U(X,0)=h_{1}(X),\quad D_{t}U(X,0)=h_{2}(X),\quad X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=h_{3}(t),\qquad \int_{0}^{1}X\,U(X,t)\,dX=h_{4}(t),\quad t\in(0,1),$$
where $\alpha\in(1,2)$ and $H_{i}$, $h_{i}$, $i=1,2,3,4$, are known functions that satisfy the following compatibility conditions:
$$\int_{0}^{1}h_{1}(X)\,dX=h_{3}(0),\qquad \int_{0}^{1}X\,h_{1}(X)\,dX=h_{4}(0),\qquad \int_{0}^{1}h_{2}(X)\,dX=h_{3}'(0),\qquad \int_{0}^{1}X\,h_{2}(X)\,dX=h_{4}'(0).$$
The existence and uniqueness of the solutions of Model 1 and Model 2 are demonstrated, and two numerical approaches are proposed in [19] and [17], respectively.

4. An Overview of SC2Ps and MSC2Ps and Introduction to IMSC2Ps

The Chebyshev polynomials of the second kind, $U_{n}(z)$, can be defined using the recurrence [34]
$$U_{n+1}(z)=2z\,U_{n}(z)-U_{n-1}(z),\qquad z\in[-1,1],\qquad U_{0}(z)=1,\quad U_{1}(z)=2z,$$
and they satisfy the orthogonal relation
$$\int_{-1}^{1}\sqrt{1-z^{2}}\;U_{m}(z)\,U_{n}(z)\,dz=\begin{cases}\dfrac{\pi}{2}, & n=m,\\[4pt] 0, & n\ne m.\end{cases}$$
The SC2Ps, U n * ( z ) , are defined as
$$U_{n}^{*}(z)=U_{n}(2z-1),\qquad z\in[0,1],$$
and their orthogonality relation is
$$\int_{0}^{1}\sqrt{z(1-z)}\;U_{m}^{*}(z)\,U_{n}^{*}(z)\,dz=\begin{cases}\dfrac{\pi}{8}, & n=m,\\[4pt] 0, & n\ne m.\end{cases}$$
These polynomials can be expressed analytically as
$$U_{n}^{*}(z)=\sum_{k=0}^{n}\frac{(-1)^{n-k}\,2^{2k}\,(k+n+1)!}{(2k+1)!\,(n-k)!}\,z^{k},\qquad z\in[0,1].$$
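The analytic expression above is easy to check against the built-in ChebyshevU function; the following minimal Mathematica sketch (the name UStar is illustrative) confirms that it coincides with $U_{n}(2z-1)$ for the first few degrees.

(* Shifted Chebyshev polynomials of the second kind from the analytic sum *)
UStar[n_, z_] := Sum[(-1)^(n - k) 2^(2 k) (k + n + 1)!/((2 k + 1)! (n - k)!) z^k, {k, 0, n}];
(* Difference with the definition U*_n(z) = U_n(2 z - 1); every entry expands to 0 *)
Table[Expand[UStar[n, z] - ChebyshevU[n, 2 z - 1]], {n, 0, 6}]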
This section introduces three classes of polynomials. Two of them are referred to as MSC2Ps and will be symbolized by φ n ( z ) and ϕ n ( z ) , and the third class is referred to as IMSC2Ps and will be symbolized by ψ n ( z ) . These polynomials are chosen to satisfy the corresponding homogeneous IBC forms of (5) and (8). To achieve this aim, we choose the first and second classes to be in the form
$$\varphi_{n}(z)=z\,U_{n}^{*}(z),\qquad \phi_{n}(z)=z^{2}\,U_{n}^{*}(z);$$
consequently, they satisfy the ICs
$$\phi_{k}(0)=\phi_{k}^{(1)}(0)=0,\quad k=0,1,2,\ldots,\qquad \varphi_{k}(0)=0,\quad k=0,1,2,\ldots.$$
Meanwhile, the third class has the form
$$\psi_{k}(z)=\bigl(z^{2}+A_{k}\,z+B_{k}\bigr)\,U_{k}^{*}(z),\qquad k=0,1,2,\ldots,$$
where the two constants A k , B k are determined such that ψ k ( z ) satisfies the conditions
$$\int_{0}^{1}\psi_{k}(z)\,dz=0,\qquad \int_{0}^{1}z\,\psi_{k}(z)\,dz=0.$$
To do this, substituting ψ k ( z ) into (16) leads to the system
$$I_{1,k}\,A_{k}+I_{0,k}\,B_{k}=-I_{2,k},\qquad I_{2,k}\,A_{k}+I_{1,k}\,B_{k}=-I_{3,k},$$
where
$$I_{i,k}=\int_{0}^{1}z^{i}\,U_{k}^{*}(z)\,dz=\frac{(-1)^{k}(k+1)}{i+1}\;{}_{3}F_{2}\!\left(\begin{matrix}-k,\ k+2,\ i+1\\ \tfrac{3}{2},\ i+2\end{matrix}\,\middle|\,1\right),\qquad i=0,1,2,3.$$
The system (17) can be solved using the Mathematica code found in Appendix A, and the use of some simplification tools for algebraic expressions in Mathematica leads to obtaining
$$A_{k}=-1,\qquad B_{k}=\frac{1}{2\left[(d_{k}+2)^{2}-(k+1)^{2}\right]},$$
where d 2 k = 0 , d 2 k + 1 = 1 , k = 0 , 1 , 2 , thanks to Mathematica [35].
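Independently of the closed form above, the constants $A_{k}$ and $B_{k}$ can be generated directly from conditions (16); the following Mathematica sketch (the name coeffs and the symbols a, b are illustrative) does so for small k and can be used to verify the reported values.

(* Solve the two integral conditions (16) for A_k and B_k *)
coeffs[k_] := First@Solve[
    {Integrate[(z^2 + a z + b) ChebyshevU[k, 2 z - 1], {z, 0, 1}] == 0,
     Integrate[z (z^2 + a z + b) ChebyshevU[k, 2 z - 1], {z, 0, 1}] == 0}, {a, b}];
Table[coeffs[k], {k, 0, 4}]   (* k = 0 gives {a -> -1, b -> 1/6}; a = -1 for every k *)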
Note 1.
Here, it is important to recall that the generalized hypergeometric function is defined as [36]
$${}_{p}F_{q}\!\left(\begin{matrix}a_{1},a_{2},\ldots,a_{p}\\ b_{1},b_{2},\ldots,b_{q}\end{matrix}\,\middle|\,z\right)=\sum_{k=0}^{\infty}\frac{(a_{1})_{k}\cdots(a_{p})_{k}}{(b_{1})_{k}\cdots(b_{q})_{k}}\,\frac{z^{k}}{k!},$$
where no $b_{j}$, $1\le j\le q$, is zero or a negative integer.

5. OMs for the Derivatives of $\varphi_{n}(z)$, $\phi_{n}(z)$, and $\psi_{n}(z)$

Three OMs are introduced in this section: the first and second OMs for FDs of the vectors
φ N ( z ) = [ φ 0 ( z ) , φ 1 ( z ) , , φ N ( z ) ] T ,
and
Φ N ( z ) = [ ϕ 0 ( z ) , ϕ 1 ( z ) , , ϕ N ( z ) ] T ,
and the third OM for ODs of the vector
Ψ N ( z ) = [ ψ 0 ( z ) , ψ 1 ( z ) , , ψ N ( z ) ] T .

5.1. OMs for FDs of $\varphi_{n}(z)$ and $\phi_{n}(z)$

The two OMs of the FDs of φ N ( Z ) and Φ N ( Z ) are the first primary desired results, which are provided in Corollaries 1 and 2. In view of the relation between SC2Ps U i ( Z ) and Jacobi polynomials, P i ( α , β ) ( Z ) ,
$$U_{i}(Z)=\frac{(i+1)!\,\Gamma(3/2)}{\Gamma(i+3/2)}\,P_{i}^{(1/2,1/2)}(Z),$$
these two corollaries are direct consequences of Theorem 4.2 in [27].
Corollary 1.
${}_{0}^{c}D_{Z}^{\mu}\varphi_{i}(Z)$, $i\ge 0$, has the following expression:
$${}_{0}^{c}D_{Z}^{\mu}\varphi_{i}(Z)=Z^{-\mu}\sum_{j=0}^{i}\Theta_{i,j}^{(1)}(\mu)\,\varphi_{j}(Z),$$
which leads to
$${}_{0}^{c}D_{Z}^{\mu}\varphi_{N}(Z)=Z^{-\mu}\,\mathbf{D}^{\mu}\,\varphi_{N}(Z),$$
where $\mathbf{D}^{\mu}=\bigl(d_{i,j}^{(1)}(\mu)\bigr)$ is a matrix of order $(N+1)\times(N+1)$ with elements defined as follows:
$$d_{i,j}^{(1)}(\mu)=\begin{cases}\Theta_{i,j}^{(1)}(\mu), & i\ge j,\\ 0, & \text{otherwise},\end{cases}$$
and
$$\Theta_{i,j}^{(1)}(\mu)=\frac{(-1)^{i-j}\,\Gamma(j+2)\,\Gamma(i+j+2)}{(i-j)!\,\Gamma(2j+2)\,\Gamma(j-\mu+2)}\;{}_{3}F_{2}\!\left(\begin{matrix}j-i,\ j+2,\ i+j+2\\ 2j+3,\ j-\mu+2\end{matrix}\,\middle|\,1\right).$$
Corollary 2.
${}_{0}^{c}D_{Z}^{\mu}\phi_{i}(Z)$, $i\ge 0$, has the following expression:
$${}_{0}^{c}D_{Z}^{\mu}\phi_{i}(Z)=Z^{-\mu}\sum_{j=0}^{i}\Theta_{i,j}^{(2)}(\mu)\,\phi_{j}(Z),$$
which leads to
$${}_{0}^{c}D_{Z}^{\mu}\Phi_{N}(Z)=Z^{-\mu}\,\mathbf{D}^{\mu}\,\Phi_{N}(Z),$$
where $\mathbf{D}^{\mu}=\bigl(d_{i,j}^{(2)}(\mu)\bigr)$ is a matrix of order $(N+1)\times(N+1)$ with elements defined as follows:
$$d_{i,j}^{(2)}(\mu)=\begin{cases}\Theta_{i,j}^{(2)}(\mu), & i\ge j,\\ 0, & \text{otherwise},\end{cases}$$
and
$$\Theta_{i,j}^{(2)}(\mu)=\frac{(-1)^{i-j}\,(j+2)!\,\Gamma(i+j+2)}{(i-j)!\,\Gamma(2j+2)\,\Gamma(j-\mu+3)}\;{}_{3}F_{2}\!\left(\begin{matrix}j-i,\ j+3,\ i+j+2\\ 2j+3,\ j-\mu+3\end{matrix}\,\middle|\,1\right).$$
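For readers who wish to assemble these matrices explicitly, the entries of Corollary 1 can be evaluated with the built-in HypergeometricPFQ function; a minimal Mathematica sketch (the names theta1 and Dmu are illustrative, and the matrix of Corollary 2 is built analogously) is

(* Entries Theta^(1)_{i,j}(mu) from Corollary 1 *)
theta1[i_, j_, mu_] :=
  (-1)^(i - j) Gamma[j + 2] Gamma[i + j + 2]/((i - j)! Gamma[2 j + 2] Gamma[j - mu + 2]) *
   HypergeometricPFQ[{j - i, j + 2, i + j + 2}, {2 j + 3, j - mu + 2}, 1];
(* Lower-triangular operational matrix of order (N+1) x (N+1) *)
Dmu[n_, mu_] := Table[If[i >= j, theta1[i, j, mu], 0], {i, 0, n}, {j, 0, n}];
Dmu[4, 1/2] // Simplify // MatrixForm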

5.2. OM for ODs of ψ n ( z )

In this section, we construct the OM for the ODs of the IMSC2Ps $\psi_{n}(z)$, $n=0,1,2,\ldots$. To do this, we need to express $D\psi_{n}(z)$ in terms of the $\psi_{j}(z)$ themselves. First, note that
  • $D\psi_{0}(z)=2z-1$;
  • $D\psi_{1}(z)=12\,\psi_{0}(z)+\tfrac{2}{5}$;
  • $D\psi_{2}(z)=16\,\psi_{1}(z)+\tfrac{9}{5}-\tfrac{18}{5}\,z$;
  • $D\psi_{3}(z)=\tfrac{324}{7}\,\psi_{0}(z)+20\,\psi_{2}(z)-\tfrac{4}{7}$.
We introduce a novel Galerkin OM of the derivatives by stating and proving the following theorem.
Theorem 1.
For all $n\ge 0$, we have
$$D\psi_{n}(z)=\sum_{j=0}^{n-1}a_{j}^{(n)}\,\psi_{j}(z)+\vartheta_{n}(z),\qquad \vartheta_{n}(z)=e_{1}^{(n)}\,z+e_{0}^{(n)},$$
where $a_{0}^{(n)},a_{1}^{(n)},\ldots,a_{n-1}^{(n)}$ satisfy the system
$$G_{n}\,a_{n}=B_{n},$$
where $a_{n}=[a_{0}^{(n)},a_{1}^{(n)},\ldots,a_{n-1}^{(n)}]^{T}$, $G_{n}=\bigl(g_{i,j}^{(n)}\bigr)_{0\le i,j\le n-1}$, and $B_{n}=[b_{0}^{(n)},b_{1}^{(n)},\ldots,b_{n-1}^{(n)}]^{T}$. The elements of $G_{n}$ and $B_{n}$ have the forms
$$g_{i,j}^{(n)}=\begin{cases}\psi_{n-j-1}^{(n-i+1)}(0), & i\ge j,\\ 0, & \text{otherwise},\end{cases}\qquad b_{i}^{(n)}=\psi_{n}^{(n-i+2)}(0).$$
Meanwhile, $e_{0}^{(n)}$ and $e_{1}^{(n)}$ are determined by
$$e_{0}^{(n)}=\psi_{n}^{(1)}(0)-\sum_{j=0}^{n-1}a_{j}^{(n)}\,\psi_{j}(0),\qquad e_{1}^{(n)}=\psi_{n}^{(2)}(0)-\sum_{j=0}^{n-1}a_{j}^{(n)}\,\psi_{j}^{(1)}(0).$$
Proof. 
The coefficients e 0 ( n ) and e 1 ( n ) may be easily shown to have the form (32). So, the expansion (30) can be written in the form
$$D\psi_{n}(x)-\psi_{n}^{(1)}(0)-\psi_{n}^{(2)}(0)\,x=\sum_{j=0}^{n-1}a_{j}^{(n)}\Bigl[\psi_{j}(x)-\psi_{j}(0)-\psi_{j}^{(1)}(0)\,x\Bigr],\qquad n=1,2,\ldots.$$
By applying the Maclaurin series formulas for the two polynomials $\psi_{j}(x)$ and $D\psi_{n}(x)$, which have degrees $(j+2)$ and $(n+1)$, respectively, we can rewrite Equation (33) as
$$\sum_{r=2}^{n+1}\frac{\psi_{n}^{(r+1)}(0)}{r!}\,x^{r}=\sum_{j=0}^{n-1}a_{j}^{(n)}\sum_{r=2}^{j+2}\frac{\psi_{j}^{(r)}(0)}{r!}\,x^{r}=\sum_{r=2}^{n+1}\Biggl[\sum_{j=r}^{n+1}\frac{\psi_{j-2}^{(r)}(0)}{r!}\,a_{j-2}^{(n)}\Biggr]x^{r},\qquad n=1,2,\ldots.$$
This leads to the following triangular system of $n$ equations in the unknowns $a_{0}^{(n)},a_{1}^{(n)},\ldots,a_{n-1}^{(n)}$:
$$\sum_{j=r}^{n+1}\psi_{j-2}^{(r)}(0)\,a_{j-2}^{(n)}=\psi_{n}^{(r+1)}(0),\qquad r=n+1,n,\ldots,2,$$
which takes the matrix form (31), and this completes the proof of Theorem 1.    □
The OM for ODs of Ψ N ( z ) is given as a result of Theorem 1.
Corollary 3.
The $m$th derivative of the vector $\Psi_{N}(z)$ has the form
$$\frac{d^{m}\Psi_{N}(z)}{dz^{m}}=H^{m}\,\Psi_{N}(z)+\Upsilon^{(m)}(z),$$
with $\Upsilon^{(m)}(z)=\sum_{k=0}^{m-1}H^{k}\,\vartheta^{(m-k-1)}(z)$, where $\vartheta(z)=\bigl[\vartheta_{0}(z),\vartheta_{1}(z),\ldots,\vartheta_{N}(z)\bigr]^{T}$ and $H=\bigl(h_{i,j}\bigr)_{0\le i,j\le N}$,
$$h_{i,j}=\begin{cases}a_{j}^{(i)}, & i>j,\\ 0, & \text{otherwise}.\end{cases}$$
Remark 1.
The proposed IMSC2Ps have the special values
$$\psi_{k}^{(q)}(0)=B_{k}\,U_{k}^{*(q)}(0)+q\,A_{k}\,U_{k}^{*(q-1)}(0)+q(q-1)\,U_{k}^{*(q-2)}(0),\qquad 1\le q\le k+2,$$
where
$$U_{k}^{*(q)}(0)=\frac{(-1)^{k-q}\,2^{2q}\,(q+k+1)!\,q!}{(2q+1)!\,(k-q)!}.$$
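The connection coefficients of Theorem 1 (and hence the matrix H of Corollary 3) can also be generated without assembling the triangular system (31) explicitly, by imposing the expansion (30) as a polynomial identity. The following Mathematica sketch does this with SolveAlways; the names psi, conn, a, e0, e1 are illustrative, and the IMSC2Ps are built directly from conditions (16).

(* IMSC2P psi_k(x): the quadratic factor is fixed by the two conditions (16) *)
psi[k_, x_] := Module[{a, b, sol},
   sol = First@Solve[
      {Integrate[(t^2 + a t + b) ChebyshevU[k, 2 t - 1], {t, 0, 1}] == 0,
       Integrate[t (t^2 + a t + b) ChebyshevU[k, 2 t - 1], {t, 0, 1}] == 0}, {a, b}];
   ((x^2 + a x + b) /. sol) ChebyshevU[k, 2 x - 1]];
(* Coefficients in D psi_n = Sum_j a_j psi_j + e1 z + e0, found as a polynomial identity in z *)
conn[n_] := First@SolveAlways[
    D[psi[n, z], z] == Sum[a[j] psi[j, z], {j, 0, n - 1}] + e1 z + e0, z];
conn[3]   (* reproduces D psi_3 = (324/7) psi_0 + 20 psi_2 - 4/7 *)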

6. Numerical Handling for Model 1 (4) and (5)

In this section, we utilize the OMs derived in Theorem 1 and Corollary 1 to get numerical solutions for Model 1 (4) and (5).

6.1. Homogeneous Form of Initial and Purely Integral Conditions (5)

Suppose that the conditions (5) are homogeneous, i.e., $g_{1}(X)=g_{2}(t)=g_{3}(t)=0$, and consider
$$U(X,t)\approx U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,\psi_{i}(X)\,\varphi_{j}(t)=\Psi_{N}(X)\,A\,\varphi_{N}(t),$$
where
$$A=\begin{pmatrix}c_{0,0} & c_{0,1} & \cdots & c_{0,N}\\ \vdots & \vdots & \ddots & \vdots\\ c_{N,0} & c_{N,1} & \cdots & c_{N,N}\end{pmatrix}_{(N+1)\times(N+1)}$$
is the unknown matrix.
The derivatives of U ( X , t ) that appeared in Equation (4) may be approximated in the following way:
$${}_{0}^{c}D_{t}^{\beta}U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,\psi_{i}(X)\;{}_{0}^{c}D_{t}^{\beta}\varphi_{j}(t),\qquad D_{X}U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,D_{X}\psi_{i}(X)\,\varphi_{j}(t),\qquad D_{XX}U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,D_{XX}\psi_{i}(X)\,\varphi_{j}(t),$$
which can be written using Theorem 1 and Corollary 1 as follows:
$${}_{0}^{c}D_{t}^{\beta}U_{N}(X,t)=t^{-\beta}\,\Psi_{N}(X)\,A\,\mathbf{D}^{\beta}\varphi_{N}(t),\qquad D_{X}U_{N}(X,t)=\bigl[H\,\Psi_{N}(X)+\Upsilon^{(1)}(X)\bigr]A\,\varphi_{N}(t),\qquad D_{XX}U_{N}(X,t)=\bigl[H^{2}\,\Psi_{N}(X)+\Upsilon^{(2)}(X)\bigr]A\,\varphi_{N}(t).$$
In this method, using the approximations (38) and (40) allows one to write the residual of Equation (4) in the form
$$R_{N}(X,t)=t^{-\beta}\,\Psi_{N}(X)\,A\,\mathbf{D}^{\beta}\varphi_{N}(t)-K_{1}(X,t)\bigl[H^{2}\,\Psi_{N}(X)+\Upsilon^{(2)}(X)\bigr]A\,\varphi_{N}(t)-K_{2}(X,t)\bigl[H\,\Psi_{N}(X)+\Upsilon^{(1)}(X)\bigr]A\,\varphi_{N}(t)+K_{3}(X,t)\,\Psi_{N}(X)\,A\,\varphi_{N}(t)-K_{4}(X,t).$$
To obtain $U_{N}(X,t)$, a spectral approach, called ISC2COMM, is suggested in the current section, in which we apply the SCM using the OMs given in Section 5 for the derivatives of the MSC2Ps and IMSC2Ps. The collocation points $X_{i}$, $t_{j}$ $(0\le i,j\le N)$ are chosen to be either the zeros of $U_{N+1}^{*}(X)$ and $U_{N+1}^{*}(t)$, respectively, or, alternatively, the uniform points $X_{i}=\frac{i+1}{N+2}$ and $t_{j}=\frac{j+1}{N+2}$, $i,j=0,1,\ldots,N$, such that
$$R_{N}(X_{i},t_{j})=0,\qquad i,j=0,1,\ldots,N,$$
which constitutes $(N+1)^{2}$ equations in the unknowns $c_{i,j}$ $(i,j=0,1,\ldots,N)$. This system can be solved using Newton's iterative method to obtain $U_{N}(X,t)$.
Remark 2.
In our approach, to obtain the proposed numerical solution, we utilize the SCM within the framework of our proposed method, ISC2COMM. The collocation points are chosen based on two possible strategies.
1.
The zeros of U N + 1 * ( X ) and U N + 1 * ( t ) : This approach typically provides optimal convergence properties for spectral methods. The zeros are advantageous as they cluster towards the boundaries of the interval, resulting in better accuracy near the endpoints.
2.
Uniform grid points: This approach provides a straightforward implementation when the distribution of points does not need to be optimized.
This dual approach allows for flexibility in the choice of collocation points, enabling us to balance computational efficiency and accuracy based on the specific characteristics of the problem at hand.
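Both choices of collocation points are straightforward to generate in Mathematica; a minimal sketch (the names nn, chebPts, unifPts are illustrative) is

nn = 5;
(* Choice 1: the zeros of U*_{N+1}(t) = ChebyshevU[N+1, 2 t - 1] on (0, 1) *)
chebPts = t /. NSolve[ChebyshevU[nn + 1, 2 t - 1] == 0 && 0 < t < 1, t];
(* Choice 2: the uniform interior points t_i = (i + 1)/(N + 2), i = 0, ..., N *)
unifPts = Table[(i + 1)/(nn + 2), {i, 0, nn}];
{chebPts, unifPts}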

6.2. Nonhomogeneous Form of Initial and Purely Integral Conditions (5)

Converting Equation (4) and the non-homogeneous conditions (5) into an equivalent form with homogeneous conditions is a critical step in creating ISC2COMM. The following conversion is used to accomplish this change:
U ( X , t ) = V ( X , t ) + T ( X , t ) ,
where
$$T(X,t)=g_{1}(X)+\bigl(g_{2}(t)-g_{2}(0)\bigr)\bigl(1-18X^{2}+12X\bigr)+12\bigl(g_{3}(t)-g_{3}(0)\bigr)\bigl(3X^{2}-2X\bigr).$$
This transformation leads to obtaining the following updated model:
$$\mathcal{L}V=K_{4}(X,t)-\mathcal{L}T(X,t),\qquad V(X,0)=0,\ X\in(0,1),\qquad \int_{0}^{1}V(X,t)\,dX=0,\qquad \int_{0}^{1}X\,V(X,t)\,dX=0,\qquad t\in(0,1).$$
So, using ISC2COMM to solve (45) gives us the numerical solution for (4) and (5) in the form
U N ( X , t ) = V N ( X , t ) + T ( X , t ) .
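The fact that T(X,t) in (44) reproduces the conditions (5) rests on the moments of the two auxiliary polynomials multiplying $g_{2}(t)-g_{2}(0)$ and $g_{3}(t)-g_{3}(0)$, combined with the compatibility conditions (6). A one-line Mathematica check (p1, p2 are illustrative names):

p1 = 1 - 18 X^2 + 12 X;  p2 = 12 (3 X^2 - 2 X);
{Integrate[p1, {X, 0, 1}], Integrate[X p1, {X, 0, 1}],
 Integrate[p2, {X, 0, 1}], Integrate[X p2, {X, 0, 1}]}
(* {1, 0, 0, 1}, so Int_0^1 T dX = g2(t) and Int_0^1 X T dX = g3(t), while T(X, 0) = g1(X) *)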

7. Numerical Handling for Model 2 (7) and (8)

In this section, we utilize the OMs derived in Theorem 1 and Corollary 2 to get numerical solutions for Model 2 (7) and (8).

7.1. Homogeneous Form of Initial and Purely Integral Conditions (8)

Suppose that the conditions (8) are homogeneous, i.e., h 1 ( X ) = h 2 ( X ) = h 3 ( t ) = h 4 ( t ) = 0 , so consider
$$U(X,t)\approx U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,\psi_{i}(X)\,\phi_{j}(t)=\Psi_{N}(X)\,A\,\Phi_{N}(t).$$
The derivatives of U ( X , t ) that appeared in Equation (7) may be approximated in the following way:
$${}_{0}^{c}D_{t}^{\alpha}U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,\psi_{i}(X)\;{}_{0}^{c}D_{t}^{\alpha}\phi_{j}(t),\qquad D_{X}U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,D_{X}\psi_{i}(X)\,\phi_{j}(t),\qquad D_{XX}U_{N}(X,t)=\sum_{i,j=0}^{N}c_{i,j}\,D_{XX}\psi_{i}(X)\,\phi_{j}(t),$$
which can be written using Theorem 1 and Corollary 2 as follows:
$${}_{0}^{c}D_{t}^{\alpha}U_{N}(X,t)=t^{-\alpha}\,\Psi_{N}(X)\,A\,\mathbf{D}^{\alpha}\Phi_{N}(t),\qquad D_{X}U_{N}(X,t)=\bigl[H\,\Psi_{N}(X)+\Upsilon^{(1)}(X)\bigr]A\,\Phi_{N}(t),\qquad D_{XX}U_{N}(X,t)=\bigl[H^{2}\,\Psi_{N}(X)+\Upsilon^{(2)}(X)\bigr]A\,\Phi_{N}(t).$$
In this method, using the approximations (47) and (49) allows one to write the residual of Equation (7) in the form
$$R_{N}(X,t)=t^{-\alpha}\,\Psi_{N}(X)\,A\,\mathbf{D}^{\alpha}\Phi_{N}(t)+H_{1}(X,t)\bigl[H^{2}\,\Psi_{N}(X)+\Upsilon^{(2)}(X)\bigr]A\,\Phi_{N}(t)+H_{2}(X,t)\bigl[H\,\Psi_{N}(X)+\Upsilon^{(1)}(X)\bigr]A\,\Phi_{N}(t)+H_{3}(X,t)\,\Psi_{N}(X)\,A\,\Phi_{N}(t)-H_{4}(X,t).$$
Using the same spectral method, ISC2COMM, mentioned in Section 6, U N ( X , t ) can be found by solving the system
$$R_{N}(X_{i},t_{j})=0,\qquad i,j=0,1,\ldots,N,$$
that consists of ( N + 1 ) 2 equations in the unknowns c i , j ( i , j = 0 , 1 , , N ) using Newton’s iterative method.

7.2. Nonhomogeneous Form of Initial and Purely Integral Conditions (8)

Converting Equation (7) and the non-homogeneous conditions (8) using the transformation
U ( X , t ) = V ( X , t ) + T ( X , t ) ,
where
$$T(X,t)=h_{1}(X)+h_{2}(X)\,t+2(2-3X)\bigl[h_{3}(t)-h_{3}(0)-h_{3}'(0)\,t\bigr]+6(2X-1)\bigl[h_{4}(t)-h_{4}(0)-h_{4}'(0)\,t\bigr],$$
leads to the following updated system:
$$\mathcal{L}V=H_{4}(X,t)-\mathcal{L}T(X,t),\qquad V(X,0)=0,\quad D_{t}V(X,0)=0,\ X\in(0,1),\qquad \int_{0}^{1}V(X,t)\,dX=0,\qquad \int_{0}^{1}X\,V(X,t)\,dX=0,\qquad t\in(0,1).$$
So, using ISC2COMM to solve (54) gives the numerical solution for (7) and (8) in the form
U N ( X , t ) = V N ( X , t ) + T ( X , t ) .
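An analogous moment check applies to the transformation (53) used for Model 2 (q1, q2 are illustrative names):

q1 = 2 (2 - 3 X);  q2 = 6 (2 X - 1);
{Integrate[q1, {X, 0, 1}], Integrate[X q1, {X, 0, 1}],
 Integrate[q2, {X, 0, 1}], Integrate[X q2, {X, 0, 1}]}
(* {1, 0, 0, 1}; together with the compatibility conditions (9), T satisfies all four conditions in (8) *)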
Remark 3.
All presented OMs throughout Section 5 have a triangular structure; this yields several computational benefits [37].
1.
The assembly of the linear system requires at most $O(N^{2})$ operations, leveraging the sparsity and triangular structure of the OMs.
2.
The solution of the system is efficiently carried out via forward substitution, with an overall computational cost that is lower than those of dense system solvers.
Remark 4.
We acknowledge the importance of the computational cost and scalability in our method, especially for large values of N . The time complexity associated with solving the algebraic systems obtained using collocation points is approximately O ( N 2 ) . Additionally, the memory usage may scale as O ( N 2 ) , which can become significant for large systems. To enhance the scalability, we recommend employing sparse matrix techniques and parallel computing [38].

8. Convergence and Error Estimates for ISC2COMM

Here, we examine the convergence and error estimates of ISC2COMM. Consider the space
$$S_{N}=\begin{cases}\operatorname{Span}\{\psi_{i}(X)\,\varphi_{j}(t):\ i,j=0,1,\ldots,N\}, & \text{for Model 1},\\[4pt] \operatorname{Span}\{\psi_{i}(X)\,\phi_{j}(t):\ i,j=0,1,\ldots,N\}, & \text{for Model 2}.\end{cases}$$
Additionally, for Model 1 and Model 2, the errors between the exact solutions $U(X,t)$ and their approximations $U_{N}(X,t)$ are defined by
$$E_{1,N}(X,t)=U(X,t)-U_{N}(X,t)\quad\text{(Model 1)},\qquad E_{2,N}(X,t)=U(X,t)-U_{N}(X,t)\quad\text{(Model 2)}.$$
In this study, the accuracy of the numerical scheme is investigated by estimating the error in the $L^{\infty}$ norm:
$$\|E_{j,N}\|_{\infty}=\max_{(X,t)\in I}\,|E_{j,N}(X,t)|,\qquad j=1,2,\qquad I=[0,1]\times[0,1].$$
When the solutions satisfy the provided ICs and IBCs for Model 1 and Model 2, the following Theorem 2 examines the error estimates. The proof of this theorem is similar to the proofs of the theorems presented in the research papers [39] (Theorem 7.1), [40] (Theorem 1), [25] (Theorem 6.1) and [41] (Theorem 5), confirming that the error converges to zero by increasing N .
Theorem 2.
Suppose that $U_{N}(X,t)$, given by (38) for Model 1 and by (47) for Model 2, is the best possible approximation (BPA) out of $S_{N}$ to the corresponding exact solution $U(X,t)$. Then,
$$\|E_{j,N}\|_{\infty}\lesssim\frac{N+2}{(N+1)!\,2^{N+1}},\qquad j=1,2,$$
where the symbol $\lesssim$ means that a generic constant $d$ exists such that $\|E_{j,N}\|_{\infty}\le d\,\frac{N+2}{(N+1)!\,2^{N+1}}$.
Proof. 
Let $\upsilon_{N}(X,t)$ be the interpolating polynomial for $U(X,t)$ at the points $(X_{i},t_{j})$, $i,j=0,1,\ldots,N$, where $X_{i}$ $(0\le i\le N)$ and $t_{j}$ $(0\le j\le N)$ are the roots of $U_{N+1}^{*}(X)$ and $U_{N+1}^{*}(t)$, respectively. Then, the function $U(X,t)$ can be written as [42]
$$U(X,t)=\upsilon_{N}(X,t)+\frac{\partial^{N+1}U(\eta_{X},t)}{\partial X^{N+1}}\,\frac{Q_{1}(X)}{(N+1)!}+\frac{\partial^{N+1}U(X,\eta_{t})}{\partial t^{N+1}}\,\frac{Q_{2}(t)}{(N+1)!}-\frac{\partial^{2N+2}U(\eta'_{X},\eta'_{t})}{\partial X^{N+1}\,\partial t^{N+1}}\,\frac{Q_{1}(X)\,Q_{2}(t)}{\bigl((N+1)!\bigr)^{2}},$$
where $Q_{1}(X)=\prod_{i=0}^{N}(X-X_{i})$, $Q_{2}(t)=\prod_{j=0}^{N}(t-t_{j})$, and $\eta_{X},\eta'_{X},\eta_{t},\eta'_{t}\in[0,1]$.
This means that
$$\|U-\upsilon_{N}\|_{\infty}\le K_{1}\,\frac{\|U_{N+1}^{*}\|_{\infty}}{(N+1)!\,2^{N+1}}+K_{2}\,\frac{\|U_{N+1}^{*}\|_{\infty}}{(N+1)!\,2^{N+1}}+K_{3}\,\frac{\|U_{N+1}^{*}\|_{\infty}^{2}}{\bigl((N+1)!\bigr)^{2}\,2^{2(N+1)}},$$
where
$$K_{1}=\max_{(X,t)\in I}\Bigl|\frac{\partial^{N+1}U(X,t)}{\partial X^{N+1}}\Bigr|,\qquad K_{2}=\max_{(X,t)\in I}\Bigl|\frac{\partial^{N+1}U(X,t)}{\partial t^{N+1}}\Bigr|,\qquad K_{3}=\max_{(X,t)\in I}\Bigl|\frac{\partial^{2N+2}U(X,t)}{\partial X^{N+1}\,\partial t^{N+1}}\Bigr|.$$
Using [43]
$$\|U_{N+1}^{*}\|_{\infty}\le N+2,$$
the inequality (60) takes the form
$$\|U-\upsilon_{N}\|_{\infty}\le K_{1}\,\frac{N+2}{(N+1)!\,2^{N+1}}+K_{2}\,\frac{N+2}{(N+1)!\,2^{N+1}}+K_{3}\,\frac{(N+2)^{2}}{\bigl((N+1)!\bigr)^{2}\,2^{2(N+1)}}\lesssim\frac{N+2}{(N+1)!\,2^{N+1}}.$$
Since $U_{N}(X,t)\in S_{N}$ represents the BPA to $U(X,t)$,
$$\|U-U_{N}\|_{\infty}\le\|U-h\|_{\infty},\qquad \forall\, h\in S_{N}.$$
Therefore,
$$\|E_{1,N}\|_{\infty}=\|U-U_{N}\|_{\infty}\le\|U-\upsilon_{N}\|_{\infty}\lesssim\frac{N+2}{(N+1)!\,2^{N+1}}.$$
Similarly, we can prove that
$$\|E_{2,N}\|_{\infty}=\|U-U_{N}\|_{\infty}\lesssim\frac{N+2}{(N+1)!\,2^{N+1}}.$$
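To give a feel for how quickly the right-hand side of (58) decays (the generic constant d is problem-dependent and is ignored here), a few representative values are
$$\frac{N+2}{(N+1)!\,2^{N+1}}\approx 1.6\times10^{-3}\ (N=4),\qquad 3.0\times10^{-9}\ (N=9),\qquad 3.7\times10^{-16}\ (N=14),$$
which matches the spectral, faster-than-algebraic decay of the maximum errors observed in the tables of Section 9.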
The theorem presented below illustrates the stability of the errors, specifically regarding error propagation calculations.
Theorem 3.
Given any two successive approximations $U_{N}$ and $U_{N+1}$ of the solutions of Model 1 and Model 2, respectively, the results are
$$|U_{N+1}-U_{N}|\lesssim\frac{N+2}{(N+1)!\,2^{N+1}}\quad\text{(Model 1)},$$
and
$$|U_{N+1}-U_{N}|\lesssim\frac{N+2}{(N+1)!\,2^{N+1}}\quad\text{(Model 2)}.$$
Proof. 
We have
$$|U_{N+1}-U_{N}|=|U_{N+1}-U+U-U_{N}|\le|U-U_{N+1}|+|U-U_{N}|\le\|E_{1,N+1}\|_{\infty}+\|E_{1,N}\|_{\infty},$$
and, similarly,
$$|U_{N+1}-U_{N}|\le\|E_{2,N+1}\|_{\infty}+\|E_{2,N}\|_{\infty}.$$
By considering (58), we can obtain (67) and (68).    □

9. Numerical Simulations

In this section, six test problems are presented to show that the proposed method, ISC2COMM, provides accurate solutions, as shown in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12. The comparisons between ISC2COMM and the other methods in [17,19] are shown in Table 2, Table 3, Table 5, Table 6, Table 8 and Table 10; they confirm that ISC2COMM gives more accurate results than these methods. Furthermore, as can be seen in Figure 1, Figure 2, Figure 3a, Figure 4, Figure 5, Figure 6a, Figure 7, Figure 8, Figure 9a, Figure 10, Figure 11, Figure 12a and Figure 13a,b, the exact and approximate solutions of Problems 1–6 are in close agreement. The absolute and log errors in Figure 3b, Figure 6b, Figure 9b and Figure 12b illustrate the convergence and stability of the solutions of the given problems when applying ISC2COMM. Our technique also performs efficiently in terms of the CPU times (measured in seconds) given in Table 1, Table 4, Table 7, Table 9, Table 11 and Table 12.
Problem 1.
Consider the FPDEs [19]
$${}_{0}^{c}D_{t}^{1/2}U(X,t)-4Xt\,D_{XX}U(X,t)-4t\,D_{X}U(X,t)-4t\,U(X,t)=K(X,t),\qquad (X,t)\in I,$$
subject to the following ICs and purely integral conditions:
$$U(X,0)=0,\ X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=-\frac{1}{2}\sin\Bigl(\frac{\pi t}{2}\Bigr),\qquad \int_{0}^{1}X\,U(X,t)\,dX=-\frac{1}{6}\sin\Bigl(\frac{\pi t}{2}\Bigr),\quad t\in(0,1),$$
where K ( X , t ) is generated such that the exact solution is U ( X , t ) = ( X 1 ) sin ( π t / 2 ) .
Table 1. The obtained errors of Problem 1.
N: 2 | 4 | 6 | 8 | 10 | 12 | 14
‖E_{1,N}‖_∞: 3.3 × 10⁻³ | 1.2 × 10⁻⁵ | 1.0 × 10⁻⁷ | 4.4 × 10⁻⁹ | 1.4 × 10⁻¹¹ | 1.8 × 10⁻¹³ | 3.2 × 10⁻¹⁶
CPU time (s): 0.231 | 0.412 | 0.451 | 0.521 | 0.631 | 0.932 | 1.165
Table 2. Comparison between ISC2COMM and [19] for Problem 1.
Method: ISC2COMM (N = 14) | [19] (N = 10)
Maximum absolute error: 3.2 × 10⁻¹⁶ | 1.0 × 10⁻¹⁰
Table 3. Comparison of AEs obtained by ISC2COMM for Problem 1.
t | ISC2COMM (N = 14)
0.1 | 1.4 × 10⁻²⁰
0.2 | 2.6 × 10⁻²⁰
0.3 | 1.5 × 10⁻²⁰
0.4 | 2.2 × 10⁻¹⁹
0.5 | 4.1 × 10⁻¹⁸
0.6 | 4.9 × 10⁻¹⁸
0.7 | 3.5 × 10⁻¹⁸
0.8 | 1.2 × 10⁻¹⁸
0.9 | 1.5 × 10⁻¹⁸
Problem 2.
Consider the FPDEs [19]
$${}_{0}^{c}D_{t}^{1/2}U(X,t)-X\,D_{XX}U(X,t)-D_{X}U(X,t)-\cos(\pi t/2)\,U(X,t)=K(X,t),\qquad (X,t)\in I,$$
subject to the following ICs and purely integral conditions:
$$U(X,0)=\sin\Bigl(\frac{24\pi X+\pi}{6}\Bigr),\ X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=0,\qquad \int_{0}^{1}X\,U(X,t)\,dX=-\frac{\sqrt{3}}{8\pi}\cos\Bigl(\frac{\pi t}{2}\Bigr)+\frac{1}{8\pi}\sin\Bigl(\frac{\pi t}{2}\Bigr),\quad t\in(0,1),$$
where K ( X , t ) is generated such that the exact solution is U ( X , t ) = sin ( ( 24 π X + 3 π t + π ) / 6 ) .
Table 4. The obtained errors of Problem 2.
N: 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15
‖E_{1,N}‖_∞: 4.3 × 10⁻² | 7.1 × 10⁻⁴ | 4.4 × 10⁻⁶ | 2.9 × 10⁻⁸ | 1.2 × 10⁻¹⁰ | 3.9 × 10⁻¹² | 1.0 × 10⁻¹⁴ | 1.1 × 10⁻¹⁷
CPU time (s): 0.131 | 0.372 | 0.421 | 0.523 | 0.631 | 0.682 | 0.955 | 1.197
Table 5. Comparison between ISC2COMM and [19] for Problem 2.
Method: ISC2COMM (N = 15) | [19] (N = 26)
Maximum absolute error: 1.1 × 10⁻¹⁷ | 1.0 × 10⁻⁵
Table 6. Comparison of AEs obtained by ISC2COMM and the approach in [17] for Problem 2.
t | ISC2COMM (N = 15)
0.1 | 2.4 × 10⁻²⁰
0.2 | 4.0 × 10⁻²⁰
0.3 | 2.5 × 10⁻²⁰
0.4 | 7.1 × 10⁻²⁰
0.5 | 5.1 × 10⁻²⁰
0.6 | 1.1 × 10⁻¹⁹
0.7 | 4.2 × 10⁻¹⁹
0.8 | 1.3 × 10⁻¹⁹
0.9 | 4.3 × 10⁻¹⁹
Problem 3.
Consider the FPDEs [17]
$${}_{0}^{c}D_{t}^{3/2}U(X,t)-(X+t)\,D_{XX}U(X,t)+(X+t)\,D_{X}U(X,t)+2\,U(X,t)=H(X,t),\qquad (X,t)\in I,$$
subject to the following ICs and purely integral conditions:
$$U(X,0)=0,\quad D_{t}U(X,0)=0,\ X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=(e-1)\,t^{3/2},\qquad \int_{0}^{1}X\,U(X,t)\,dX=t^{3/2},\quad t\in(0,1),$$
where $H(X,t)=\bigl(\tfrac{3\sqrt{\pi}}{4}+2t\sqrt{t}\bigr)e^{X}$ and the exact solution is $U(X,t)=t^{3/2}e^{X}$.
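The source term quoted above can be regenerated mechanically by substituting the exact solution into the left-hand side of the equation; a hedged Mathematica sketch (uExact and hSource are illustrative names; CaputoD requires Mathematica 13.1 or later) is

uExact[X_, t_] := t^(3/2) Exp[X];
hSource[X_, t_] =
  CaputoD[uExact[X, t], {t, 3/2}] - (X + t) D[uExact[X, t], {X, 2}] +
    (X + t) D[uExact[X, t], X] + 2 uExact[X, t] // Simplify
(* should return E^X (3 Sqrt[Pi]/4 + 2 t^(3/2)), i.e. the H(X, t) stated above *)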
Table 7. The obtained errors of Problem 3.
N: 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15
‖E_{2,N}‖_∞: 2.4 × 10⁻³ | 1.2 × 10⁻⁵ | 1.0 × 10⁻⁷ | 5.5 × 10⁻⁹ | 1.4 × 10⁻¹¹ | 1.7 × 10⁻¹³ | 1.2 × 10⁻¹⁵ | 1.0 × 10⁻¹⁷
CPU time (s): 0.133 | 0.380 | 0.434 | 0.541 | 0.662 | 0.702 | 0.878 | 1.229
Table 8. Comparison of AEs obtained by ISC2COMM and the approach in [17] for Problem 3.
t | ISC2COMM (N = 15) | [17] (h = 0.1, h_t = 10⁻⁵)
0.1 | 1.1 × 10⁻¹⁹ | 5.80 × 10⁻⁸
0.2 | 1.0 × 10⁻¹⁹ | 5.45 × 10⁻⁸
0.3 | 2.7 × 10⁻¹⁹ | 5.06 × 10⁻⁸
0.4 | 9.0 × 10⁻²⁰ | 4.63 × 10⁻⁸
0.5 | 3.1 × 10⁻¹⁹ | 4.16 × 10⁻⁸
0.6 | 9.0 × 10⁻²⁰ | 3.63 × 10⁻⁸
0.7 | 4.2 × 10⁻¹⁹ | 3.05 × 10⁻⁸
0.8 | 1.4 × 10⁻¹⁹ | 2.41 × 10⁻⁸
0.9 | 2.3 × 10⁻¹⁹ | 1.69 × 10⁻⁸
Problem 4.
Consider the FPDEs [17]
$${}_{0}^{c}D_{t}^{3/2}U(X,t)-(X^{2}+t)\,D_{XX}U(X,t)+(X-t)\,D_{X}U(X,t)+(X+2t)\,U(X,t)=H(X,t),\qquad (X,t)\in I,$$
subject to the following ICs and purely integral conditions:
$$U(X,0)=e^{X},\quad D_{t}U(X,0)=2e^{X},\ X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=(e-1)(t+1)^{2},\qquad \int_{0}^{1}X\,U(X,t)\,dX=(t+1)^{2},\quad t\in(0,1),$$
where H ( X , t ) is chosen such that the exact solution is U ( X , t ) = ( t + 1 ) 2 e X .
Table 9. The obtained errors of Problem 4.
N: 1 | 3 | 5 | 7 | 9 | 11 | 13
‖E_{2,N}‖_∞: 2.4 × 10⁻³ | 1.2 × 10⁻⁵ | 1.0 × 10⁻⁷ | 5.5 × 10⁻⁹ | 1.4 × 10⁻¹¹ | 1.7 × 10⁻¹³ | 8.0 × 10⁻¹⁶
CPU time (s): 0.333 | 0.552 | 0.581 | 0.622 | 0.682 | 0.714 | 0.893
Table 10. Comparison of AEs obtained by ISC2COMM and the approach in [17] for Problem 4.
t | ISC2COMM (N = 13) | [17] (h = 0.1, h_t = 10⁻⁷)
0.1 | 4.1 × 10⁻¹⁹ | 5.80 × 10⁻⁸
0.2 | 5.3 × 10⁻¹⁹ | 5.45 × 10⁻⁸
0.3 | 1.7 × 10⁻¹⁹ | 5.06 × 10⁻⁸
0.4 | 4.2 × 10⁻¹⁸ | 4.63 × 10⁻⁸
0.5 | 3.1 × 10⁻¹⁸ | 4.16 × 10⁻⁸
0.6 | 3.3 × 10⁻¹⁸ | 3.63 × 10⁻⁸
0.7 | 2.5 × 10⁻¹⁸ | 3.05 × 10⁻⁸
0.8 | 1.2 × 10⁻¹⁸ | 2.41 × 10⁻⁸
0.9 | 2.3 × 10⁻¹⁸ | 1.69 × 10⁻⁸
Problem 5.
Consider the FPDEs
$${}_{0}^{c}D_{t}^{1/2}U(X,t)-X^{2}\,D_{XX}U(X,t)-2X\,D_{X}U(X,t)+\frac{3}{4}\,U(X,t)=\sqrt{X},\qquad (X,t)\in I,$$
subject to the following ICs and purely integral conditions:
$$U(X,0)=0,\ X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=\frac{4}{3\sqrt{\pi}}\,t^{1/2},\qquad \int_{0}^{1}X\,U(X,t)\,dX=\frac{4}{5\sqrt{\pi}}\,t^{1/2},\quad t\in(0,1),$$
where the exact solution is $U(X,t)=\frac{2}{\sqrt{\pi}}\,t^{1/2}X^{1/2}$.
Table 11. The obtained errors of Problem 5.
N: 2 | 4 | 6 | 8 | 10 | 12 | 14
‖E_{1,N}‖_∞: 1.4 × 10⁻² | 1.8 × 10⁻⁴ | 1.1 × 10⁻⁶ | 3.5 × 10⁻⁸ | 2.4 × 10⁻¹⁰ | 3.8 × 10⁻¹² | 4.6 × 10⁻¹⁴
CPU time (s): 0.242 | 0.431 | 0.460 | 0.530 | 0.642 | 0.943 | 1.245
Problem 6.
Consider the FPDEs
$${}_{0}^{c}D_{t}^{3/2}U(X,t)-tX^{2}\,D_{XX}U(X,t)-2t^{2}X\,D_{X}U(X,t)+X\,U(X,t)=H(X,t),\qquad (X,t)\in I,$$
subject to the following ICs and purely integral conditions:
$$U(X,0)=0,\ X\in(0,1),\qquad \int_{0}^{1}U(X,t)\,dX=\frac{2}{5}\,t^{3/2},\qquad \int_{0}^{1}X\,U(X,t)\,dX=\frac{2}{7}\,t^{3/2},\quad t\in(0,1),$$
where H ( X , t ) is chosen such that the exact solution is U ( X , t ) = t 3 / 2 X 3 / 2 .
Table 12. The obtained errors of Problem 6.
N: 2 | 4 | 6 | 8 | 10 | 12 | 14
‖E_{2,N}‖_∞: 3.1 × 10⁻² | 2.7 × 10⁻⁴ | 4.1 × 10⁻⁶ | 1.5 × 10⁻⁸ | 4.4 × 10⁻¹⁰ | 5.8 × 10⁻¹² | 1.6 × 10⁻¹⁴
CPU time (s): 0.251 | 0.471 | 0.490 | 0.530 | 0.681 | 0.983 | 1.278
Remark 5.
The numerical comparisons for Problems 1–4 are made with [17,19] because, to the best of our knowledge, these two papers are the only ones in which Model 1 and Model 2 have been discussed.
Remark 6.
In view of the presented CPU times, our approach has efficient performance, and the calculations show that the memory consumption is excellent. For example, the CPU time for N = 13 is 23.1% larger than that for N = 11, and it requires a 22% increase in RAM consumption compared to the N = 11 calculation. The numerical examples and comparisons provided in our paper highlight the superior accuracy and efficiency of our algorithm, solidifying its potential in solving the given models effectively. When we sought to compare the resource usage of our method with that of the methods described in [17,19], we found that these papers do not report CPU time or memory usage. However, based on our analysis, our approach demonstrates better performance compared to the referenced methods.
Remark 7.
The computations were performed using Mathematica 13.3 on a computer system equipped with an Intel(R) Core(TM) i9-10850 CPU operating at 3.60 GHz, featuring 10 cores and 20 logical processors. The algorithmic steps for solving Model 1 and Model 2 using ISC2COMM are expressed as in Algorithms 1 and 2, respectively.
Remark 8.
For interested readers, we utilized several built-in functions in Mathematica 13.3 for our numerical implementation of the provided algorithms. Below is a summary of the tools used, along with concise information about each function.
  • Array: For creating and manipulating arrays, which are used to hold coefficients and operational matrices throughout the computations.
  • NSolve: For finding numerical solutions to nonlinear algebraic equations; it is utilized to compute the zeros of $U_{N+1}^{*}(t)$.
  • FindRoot: For solving equations by finding roots; it is essential in handling the nonlinear aspects of our system, using a zero initial approximation.
  • ChebyshevU: For generating MSC2Ps and IMSC2Ps, which serve as basis functions that provide the foundation for approximating the solution in our collocation method.
  • D: To compute ordinary derivatives to determine the defined residuals.
  • CaputoD: To compute Caputo fractional derivatives to determine the defined residuals.
  • Table: For generating lists and arrays of values based on specified formulas, particularly for collocation points and other parameterized data.
Algorithm 1 ISC2COMM Algorithm to Solve Model 1 (4) and (5)
Step 1.Given β , N and K j , j = 1 , 2 , 3 , 4 .
Step 2.Define the bases ψ i ( X ) and φ j ( t ) and the matrices A , H , Υ N ( m ) ( X ) , Ψ N ( X ) φ N ( t ) , and compute the elements of matrices   H 2 , Υ N ( 1 ) ( X ) , Υ N ( 2 ) ( X ) and D β .
Step 3.Evaluate the matrices:
1. $t^{-\beta}\,\Psi_{N}(X)\,A\,\mathbf{D}^{\beta}\varphi_{N}(t)$,
2. $\bigl[H\,\Psi_{N}(X)+\Upsilon^{(1)}(X)\bigr]A\,\varphi_{N}(t)$,
3. $\bigl[H^{2}\,\Psi_{N}(X)+\Upsilon^{(2)}(X)\bigr]A\,\varphi_{N}(t)$.
Step 4.Define R N ( X , t ) as in Equation (41).
Step 5.List R N ( X i , t j ) = 0 , i , j = 0 , 1 , , N , defined in Equation (42).
Step 6.Use Mathematica’s built-in numerical solver to obtain the solution to the system of equations in [Output 5].
Step 7.Evaluate U N ( X , t ) defined in Equation (38) (in the case of homogeneous conditions).
Step 8.Evaluate T ( X , t ) and U N ( X , t ) defined in Equation (46) (in the case of Nonhomogeneous conditions).
Algorithm 2 ISC2COMM Algorithm to Solve Model 2 (7) and (8)
Step 1.Given α , N and H j , j = 1 , 2 , 3 , 4 .
Step 2.Define the basis ψ i ( X ) and ϕ j ( t ) and the matrices A , H , Υ N ( m ) ( X ) , Ψ N ( X ) Φ N ( t ) , and compute the elements of matrices   H 2 , Υ N ( 1 ) ( X ) , Υ N ( 2 ) ( X ) and D α .
Step 3.Evaluate the matrices:
1. $t^{-\alpha}\,\Psi_{N}(X)\,A\,\mathbf{D}^{\alpha}\Phi_{N}(t)$,
2. $\bigl[H\,\Psi_{N}(X)+\Upsilon^{(1)}(X)\bigr]A\,\Phi_{N}(t)$,
3. $\bigl[H^{2}\,\Psi_{N}(X)+\Upsilon^{(2)}(X)\bigr]A\,\Phi_{N}(t)$.
Step 4.Define R N ( X , t ) as in Equation (50).
Step 5.List R N ( X i , t j ) = 0 , i , j = 0 , 1 , , N , defined in Equation (51).
Step 6.Use Mathematica’s built-in numerical solver to obtain the solution to the system of equations in [Output 5].
Step 7.Evaluate U N ( X , t ) defined in Equation (47) (in the case of homogeneous conditions).
Step 8.Evaluate T ( X , t ) and U N ( X , t ) defined in Equation (55) (in the case of Nonhomogeneous conditions).
Figure 1. The exact and approximate solutions using N = 14 for Problem 1. (a) U(X,t). (b) U_14(X,t).
Figure 2. Error function and its heatmap graphs using N = 14 for Problem 1. (a) E_{1,14}(X,t). (b) Heatmap graph of E_{1,14}(X,t). The colors in the heatmap indicate the magnitude of the error between 0 and 3 × 10⁻¹⁶.
Figure 3. Error results for Problem 1. (a) E_{1,14}(X,t), t = 0.1, 0.3, 0.5, 0.7. (b) Graph of log10(‖E_{1,N}‖_∞) against N.
Figure 4. The exact and approximate solution using N = 15 for Problem 2. (a) U(X,t). (b) U_15(X,t).
Figure 5. Error function and its heatmap graphs using N = 15 for Problem 2. (a) E_{1,15}(X,t). (b) Heatmap graph of E_{1,15}(X,t). The colors in the heatmap indicate the magnitude of the error between 0 and 1.5 × 10⁻¹⁷.
Figure 6. Error results for Problem 2. (a) E_{1,15}(X,t), t = 0.2, 0.4, 0.6, 0.8. (b) Graph of log10(‖E_{1,N}‖_∞) against N.
Figure 7. The exact and approximate solutions using N = 15 for Problem 3. (a) U(X,t). (b) U_15(X,t).
Figure 8. Error function and its heatmap graphs using N = 15 for Problem 3. (a) E_{2,15}(X,t). (b) Heatmap graph of E_{2,15}(X,t). The colors in the heatmap indicate the magnitude of the error between 0 and 1 × 10⁻¹⁷.
Figure 9. Error results for Problem 3. (a) E_{2,15}(X,t), t = 0.2, 0.4, 0.6, 0.8. (b) Graph of log10(‖E_{2,N}‖_∞) against N.
Figure 10. The exact and approximate solutions using N = 13 for Problem 4. (a) U(X,t). (b) U_13(X,t).
Figure 11. Error function and its heatmap graphs using N = 13 for Problem 4. (a) E_{2,13}(X,t). (b) Heatmap graph of E_{2,13}(X,t). The colors in the heatmap indicate the magnitude of the error between 0 and 8 × 10⁻¹⁶.
Figure 12. Error results for Problem 4. (a) E_{2,13}(X,t), t = 0.1, 0.3, 0.5, 0.7. (b) Graph of log10(‖E_{2,N}‖_∞) against N.
Figure 13. Error functions using N = 14 for (a) Problem 5 and (b) Problem 6. (a) E_{1,14}(X,t). (b) E_{2,14}(X,t).

10. Conclusions

We have established new bases of polynomials that meet the given homogeneous initial conditions and integral boundary conditions. Utilizing these polynomials with the spectral collocation method enables one to obtain accurate approximations of the specified FPDEs. The suggested approach leads to accurate and efficient solutions. This study’s approach has significant potential for enhancement and might be used to address a broader spectrum of fractional partial differential equations with diverse integral boundary conditions.

Funding

No funding was received to assist with the preparation of this manuscript.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No data are associated with this research.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Figure A1. Mathematica code for solving the system (17).

References

1. Nigmatullin, R. The realization of the generalized transfer equation in a medium with fractal geometry. Phys. Status Solidi B 1986, 133, 425–430.
2. Chen, P.; Ma, W.; Tao, S.; Zhang, K. Blowup and global existence of mild solutions for fractional extended Fisher–Kolmogorov equations. Int. J. Nonlinear Sci. Numer. Simul. 2021, 22, 641–656.
3. Langlands, T.A.M.; Henry, B.I.; Wearne, S.L. Fractional cable equation models for anomalous electrodiffusion in nerve cells: Finite domain solutions. SIAM J. Appl. Math. 2011, 71, 1168–1203.
4. Biler, P.; Funaki, T.; Woyczynski, W. Fractal Burgers equations. J. Diff. Equ. 1998, 148, 9–46.
5. Cahlon, B.; Kulkarni, D.; Shi, P. Stepwise stability for the heat equation with a nonlocal constraint. SIAM J. Numer. Anal. 1995, 32, 571–593.
6. Cannon, J.R. The solution of the heat equation subject to the specification of energy. Q. Appl. Math. 1963, 21, 155–160.
7. Choi, Y.; Chan, K. A parabolic equation with nonlocal boundary conditions arising from electrochemistry. Nonlinear Anal. 1992, 18, 317–331.
8. Shi, P. Weak solution to an evolution problem with a nonlocal constraint. SIAM J. Math. Anal. 1993, 24, 46–58.
9. Samarski, A. Some problems in the modern theory of differential equation. Differ. Uraven 1980, 16, 1221–1228.
10. Du, J.; Cui, M. Solving the forced Duffing equation with integral boundary conditions in the reproducing kernel space. Int. J. Comput. Math. 2010, 87, 2088–2100.
11. Geng, F.; Cui, M. New method based on the HPM and RKHSM for solving forced Duffing equations with integral boundary conditions. J. Comput. Appl. Math. 2009, 233, 165–172.
12. Doostdar, M.; Kazemi, M.; Vahidi, A. A numerical method for solving the Duffing equation involving both integral and non-integral forcing terms with separated and integral boundary conditions. Comput. Methods Differ. Equ. 2023, 11, 241–253.
13. Chen, Z.; Jiang, W.; Du, H. A new reproducing kernel method for Duffing equations. Int. J. Comput. Math. 2021, 98, 2341–2354.
14. Bahuguna, D.; Abbas, S.; Dabas, J. Partial functional differential equation with an integral condition and applications to population dynamics. Nonlinear Anal. 2008, 69, 2623–2635.
15. Kamynin, L.I. A boundary-value problem in the theory of heat conduction with non-classical boundary conditions. Zh. Vychisl. Mat. Fiz. 1964, 4, 1006–1024.
16. Balaji, S.; Hariharan, G. An efficient operational matrix method for the numerical solutions of the fractional Bagley–Torvik equation using wavelets. J. Math. Chem. 2019, 57, 1885–1901.
17. Brahimi, S.; Merad, A.; Kılıçman, A. Theoretical and Numerical Aspect of Fractional Differential Equations with Purely Integral Conditions. Mathematics 2021, 9, 1987.
18. Day, W.A. A decreasing property of solutions of parabolic equations with applications to thermoelasticity. Quart. Appl. Math. 1983, 40, 468–475.
19. Martin-Vaquero, J.; Merad, A. Existence, uniqueness and numerical solution of a fractional PDE with integral conditions. Nonlinear Anal. Model. Control. 2019, 24, 368–386.
20. Anguraj, A.; Karthikeyan, P. Existence of solutions for fractional semilinear evolution boundary value problem. Commun. Appl. Anal. 2010, 14, 505.
21. Benchohra, M.; Graef, J.; Hamani, S. Existence results for boundary value problems with non-linear fractional differential equations. Appl. Anal. 2008, 87, 851–863.
22. Daftardar-Gejji, V.; Jafari, H. Boundary value problems for fractional diffusion-wave equation. Aust. J. Math. Anal. Appl. 2006, 3, 8.
23. Amiraliyev, G.; Amiraliyeva, I.; Kudu, M. A numerical treatment for singularly perturbed differential equations with integral boundary condition. Appl. Math. Comput. 2007, 185, 574–582.
24. Mashayekhi, S.; Ordokhani, Y.; Razzaghi, M. A hybrid functions approach for the Duffing equation. Phys. Scr. 2013, 88, 025002.
25. Ahmed, H. New generalized Jacobi–Galerkin operational matrices of derivatives: An algorithm for solving the time-fractional coupled KdV equations. Bound. Value Probl. 2024, 2024, 144.
26. Ahmed, H. Enhanced shifted Jacobi operational matrices of integrals: Spectral algorithm for solving some types of ordinary and fractional differential equations. Bound. Value Probl. 2024, 2024, 75.
27. Ahmed, H. Enhanced shifted Jacobi operational matrices of derivatives: Spectral algorithm for solving multiterm variable-order fractional differential equations. Bound. Value Probl. 2023, 2023, 108.
28. Ahmed, H. New Generalized Jacobi Polynomial Galerkin Operational Matrices of Derivatives: An Algorithm for Solving Boundary Value Problems. Fractal Fract. 2024, 8, 199.
29. Ahmed, H. Highly accurate method for boundary value problems with Robin boundary conditions. J. Nonlinear Math. Phys. 2023, 30, 1239–1263.
30. Napoli, A.; Abd-Elhameed, W.M. A new collocation algorithm for solving even-order boundary value problems via a novel matrix method. Mediterr. J. Math. 2017, 14, 170.
31. Abd-Elhameed, W.; Youssri, Y. Spectral solutions for fractional differential equations via a novel Lucas operational matrix of fractional derivatives. Rom. J. Phys. 2016, 61, 795–813.
32. Loh, J.R.; Phang, C. Numerical solution of Fredholm fractional integro-differential equation with Right-sided Caputo’s derivative using Bernoulli polynomials operational matrix of fractional derivative. Mediterr. J. Math. 2019, 16, 28.
33. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999.
34. Abd-Elhameed, W.; Ahmed, H.; Youssri, Y. A new generalized Jacobi Galerkin operational matrix of derivatives: Two algorithms for solving fourth-order boundary value problems. Adv. Differ. Equ. 2016, 2016, 1–16.
35. Wolfram Research, Inc. Mathematica, Version 13.3.1; Wolfram Research, Inc.: Champaign, IL, USA, 2023.
36. Luke, Y. Special Functions and Their Approximations; Academic Press: New York, NY, USA, 1969.
37. Lakshmikantham, V.; Sen, S.K. Computational Error and Complexity in Science and Engineering; Elsevier: Melbourne, FL, USA, 2005.
38. Kirk, D.; Wen-Mei, W. Programming Massively Parallel Processors: A Hands-On Approach; Morgan Kaufmann: Burlington, MA, USA, 2017.
39. Ahmed, H.M.; Hafez, R.; Abd-Elhameed, W. A computational strategy for nonlinear time-fractional generalized Kawahara equation using new eighth-kind Chebyshev operational matrices. Phys. Scr. 2024, 99, 045250.
40. Abd-Elhameed, W.; Ahmed, H.; Zaky, M.; Hafez, R. A new shifted generalized Chebyshev approach for multi-dimensional sinh-Gordon equation. Phys. Scr. 2024, 99, 095269.
41. Ahmed, H. New Generalized Jacobi Galerkin operational matrices of derivatives: An algorithm for solving multi-term variable-order time-fractional diffusion-wave equations. Fractal Fract. 2024, 8, 68.
42. Narumi, S. Some formulas in the theory of interpolation of many independent variables. Tohoku Math. J. 1920, 18, 309–321.
43. Mason, J.; Handscomb, D. Chebyshev Polynomials; Chapman and Hall/CRC: Boca Raton, FL, USA, 2003.
