Article

A Modified Collocation Technique for Addressing the Time-Fractional FitzHugh–Nagumo Differential Equation with Shifted Legendre Polynomials

by S. S. Alzahrani 1,*, Abeer A. Alanazi 1 and Ahmed Gamal Atta 2,*
1 Department of Mathematics, College of Science, Taibah University, Madinah P.O. Box 344, Saudi Arabia
2 Department of Mathematics, Faculty of Education, Ain Shams University, Roxy, Cairo 11341, Egypt
* Authors to whom correspondence should be addressed.
Symmetry 2025, 17(9), 1468; https://doi.org/10.3390/sym17091468
Submission received: 27 July 2025 / Revised: 15 August 2025 / Accepted: 2 September 2025 / Published: 5 September 2025
(This article belongs to the Section Mathematics)

Abstract

This study is devoted to solving the nonlinear inhomogeneous time-fractional FitzHugh–Nagumo differential problem (TFFNDP) using a modified spectral collocation method. The proposed algorithm uses non-symmetric orthogonal polynomials, namely shifted Legendre polynomials (SLPs). The orthogonality property of SLPs and certain related relations facilitate the acquisition of accurate spectral approximations. Comprehensive convergence and error studies are conducted to validate the accuracy of the suggested shifted Legendre expansion. Several numerical examples are presented to demonstrate the method's effectiveness and accuracy. The proposed scheme is benchmarked against known analytical solutions and compared with other algorithms to confirm its applicability and efficiency.

1. Introduction

Spectral methods are important numerical approaches employed to solve many types of differential equations [1,2,3]. The philosophy behind these methods is that the numerical solution is represented in terms of suitable special functions, which are often special polynomials. These methods present several advantages over other numerical techniques; see [4,5]. In particular, they produce accurate approximations that converge rapidly. There are three primary spectral approaches, namely the Galerkin, tau, and collocation methods, each with different advantages and applications. The selection of basis functions in the tau and collocation methods is less restrictive than that in the Galerkin approach; see [6,7,8]. References [9,10,11,12] demonstrate that the Galerkin and Petrov–Galerkin approaches are effective for various problems. For more studies, see [13,14,15,16,17,18].
Orthogonal polynomials [19,20] are crucial in many fields, such as physics, engineering, and mathematics. They can be classified as either symmetric or non-symmetric, and they possess properties that make them particularly useful in many situations. Applications of particular polynomials have been demonstrated in several publications; see [21,22,23]. In particular, SLPs are essential in various disciplines, including control theory, signal processing, and cryptography, due to their distinctive characteristics, and they are widely used in numerical analysis and approximation theory. The authors of [24] employed SLPs in a numerical method to address fractional-order multi-point boundary value problems. Also, the authors of [25] employed the differential quadrature approach utilizing SLPs and generalized Laguerre polynomials to solve the Benjamin–Bona–Mahony equation.
The FitzHugh–Nagumo equation system was derived by both FitzHugh [26] and Nagumo et al. [27]. Its simplicity and ability to capture key dynamics make it a valuable tool in studying nonlinear dynamical systems and analyzing biological phenomena related to excitation and signal propagation. The TFFNDP , which combines diffusion and nonlinearity, has attracted the interest of many scientists in the fields of nuclear reactor theory, neurophysiology, branching Brownian motion, flame propagation, and logistic population growth [28]. The purpose of this study is to accurately solve the following nonlinear inhomogeneous TFFNDP :
$$D_t^{\alpha}u = u_{\eta\eta} + u(u-\varepsilon)(1-u) + f(\eta,t), \qquad 0<\alpha\le 1,$$
constrained with
$$u(\eta,0) = u_0(\eta), \qquad 0<\eta\le 1,$$
$$u(0,t) = u_1(t), \qquad u(1,t) = u_2(t), \qquad 0<t\le 1,$$
where u is a function of both η and t, i.e., u = u(η, t); f(η, t) is the source function; and ε is a real number. Various numerical and analytical approaches have been used to solve the TFFNDP. For example, a Petrov–Galerkin finite element technique was applied to solve a space TFFNDP in [29]. A collocation method was developed in [30] to treat the TFFNDP using a kind of Chebyshev polynomial. The authors of [31] offered a spectral technique for finding solutions to the TFFNDP using shifted Lucas polynomials. Although this method demonstrated good accuracy, it required more retained modes than the presented method. The authors of [32] presented a numerical technique using approximations of radial basis functions to solve this problem. Finally, the authors of [33] applied the fractional reduced differential transform method to this problem. The disadvantage of the methods presented in [32,33] is that they offer lower accuracy than the proposed method.
The following points are the main contributions and novelties of this work:
  • Presenting two sets of basis functions in terms of SLPs ;
  • Deriving some new identities that are crucial for the efficient computation of the collocation matrices;
  • Using the collocation method to treat the TFFNDP ;
  • Offering a thorough examination of the convergence and error, tailored to the suggested basis functions;
  • Investigating our numerical algorithm by presenting some supporting numerical examples;
  • We expect that the scheme utilized could be used to solve other problems [34] using the collocation method.
Also, some advantages of this work are
  • Choosing the suggested basis functions results in faster convergence and improved stability;
  • The procedure needs fewer computations to reach the desired precision.
This paper is structured as follows: Section 2 presents the main concepts and essential formulas. Section 3 introduces a collocation spectral method for the nonlinear inhomogeneous TFFNDP . A study on the convergence of the shifted Legendre expansion is presented in Section 4. Five examples are presented in Section 5. The conclusions are presented in Section 6.

2. Essential Principles and Formulas

This section presents some fundamentals of fractional calculus related to Caputo’s fractional derivative (FD). Furthermore, we will introduce key principles and significant formulas for the SLPs , which will be crucial for subsequent discussions.

2.1. A Review of Caputo’s FD

Definition 1. 
Caputo’s FD is defined as [35]
$$D_\zeta^{\alpha}\psi(\zeta) = \frac{1}{\Gamma(r-\alpha)}\int_0^{\zeta}(\zeta-t)^{r-\alpha-1}\,\psi^{(r)}(t)\,dt, \qquad \alpha>0,\ \zeta>0,\ r-1\le\alpha<r,\ r\in\mathbb{N}.$$
In addition, we have
$$D_\zeta^{\alpha}\zeta^{\tau} = \begin{cases} 0, & \text{if } \tau\in\mathbb{N}_0 \text{ and } \tau<\lceil\alpha\rceil,\\[4pt] \dfrac{\tau!}{\Gamma(\tau+1-\alpha)}\,\zeta^{\tau-\alpha}, & \text{if } \tau\in\mathbb{N}_0 \text{ and } \tau\ge\lceil\alpha\rceil, \end{cases}$$
where $\lceil\alpha\rceil$ denotes the ceiling function, $\mathbb{N}_0=\{0,1,2,\dots\}$, and $\mathbb{N}=\{1,2,\dots\}$.
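The power rule above is all that is needed to differentiate the polynomial basis functions used later. The following minimal Python sketch evaluates it directly; the helper name caputo_power and the sample values are ours, not from the paper.

```python
# Caputo fractional derivative of a monomial t^tau via the power rule above.
import math

def caputo_power(tau: int, alpha: float, t: float) -> float:
    """D_t^alpha t^tau for integer tau >= 0 and alpha > 0 (Caputo sense)."""
    if tau < math.ceil(alpha):          # tau < ceil(alpha): the derivative vanishes
        return 0.0
    return math.factorial(tau) / math.gamma(tau + 1 - alpha) * t ** (tau - alpha)

# Example: D_t^{1/2} t^2 = (2 / Gamma(5/2)) t^{3/2}
print(caputo_power(2, 0.5, 0.8))        # ~1.0766
```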

2.2. A Basic Overview of Legendre Polynomials and Their Shifted Equivalents

The Legendre polynomial of degree s, $L_s^{(-1,1)}(\eta)$, can be defined in numerous forms over the range [−1, 1], including via the following recurrence relation [36,37]:
$$L_0^{(-1,1)}(\eta)=1,\qquad L_1^{(-1,1)}(\eta)=\eta,\qquad (s+1)\,L_{s+1}^{(-1,1)}(\eta)=(2s+1)\,\eta\,L_s^{(-1,1)}(\eta)-s\,L_{s-1}^{(-1,1)}(\eta).$$
A significant characteristic of the Legendre polynomials is their orthogonality on the interval [ 1 ,   1 ] regarding the L 2 inner product:
$$\int_{-1}^{1} L_m^{(-1,1)}(\eta)\,L_s^{(-1,1)}(\eta)\,d\eta = \frac{2}{2s+1}\,\delta_{ms},$$
where $\delta_{ms}$ is the renowned Kronecker delta function.
  • The SLPs L m ( a , b ) ( η ) are defined on [ a , b ] :
$$L_m^{(a,b)}(\eta) = L_m^{(-1,1)}\!\left(\frac{2\eta-b-a}{b-a}\right),$$
and they are orthogonal on [ a ,   b ] in the following manner:
$$\int_a^b L_m^{(a,b)}(\eta)\,L_s^{(a,b)}(\eta)\,d\eta = \frac{b-a}{2s+1}\,\delta_{ms}.$$
The power form of L i ( 0 , 1 ) ( η ) is
$$L_i^{(0,1)}(\eta) = \sum_{m=0}^{i} \frac{(-1)^{i+m}\,(i+m)!}{(i-m)!\,(m!)^2}\,\eta^{m},$$
and its inversion formula is
$$\eta^{r} = \sum_{m=0}^{r} \frac{(-1)^{2m}\,(2m+1)\,\bigl(\Gamma(r+1)\bigr)^{2}}{\Gamma(r-m+1)\,\Gamma(r+m+2)}\,L_m^{(0,1)}(\eta).$$
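As a quick illustration, the following Python sketch (all helper names are ours) evaluates $L_i^{(0,1)}$ through the recurrence after mapping [0, 1] onto [−1, 1], and cross-checks the result against the power form above:

```python
# Shifted Legendre polynomials on [0, 1]: recurrence vs. power form.
import math

def shifted_legendre(i: int, x: float) -> float:
    z = 2.0 * x - 1.0                     # map [0, 1] onto [-1, 1]
    if i == 0:
        return 1.0
    p_prev, p = 1.0, z                    # L_0 and L_1
    for s in range(1, i):
        p_prev, p = p, ((2 * s + 1) * z * p - s * p_prev) / (s + 1)
    return p

def shifted_legendre_power(i: int, x: float) -> float:
    return sum((-1) ** (i + m) * math.factorial(i + m)
               / (math.factorial(i - m) * math.factorial(m) ** 2) * x ** m
               for m in range(i + 1))

print(shifted_legendre(4, 0.37), shifted_legendre_power(4, 0.37))   # identical values
```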
Theorem 1 
([36]). Assume $\phi_i(\eta)=(\eta-a)(\eta-b)\,L_i^{(a,b)}(\eta)$. Then, for all $i\ge 1$, the following holds:
$$D\,\phi_i(\eta) = \frac{2}{b-a}\sum_{\substack{j=0\\ (i+j)\,\mathrm{odd}}}^{i-1} (2j+1)\bigl(1+2H_i-2H_j\bigr)\,\phi_j(\eta) - \delta_i(\eta),$$
where $\delta_i(\eta)$ is provided by
$$\delta_i(\eta) = \begin{cases} a+b-2\eta, & \text{if } i \text{ even},\\ a-b, & \text{if } i \text{ odd}, \end{cases}$$
and $H_m$ is the mth harmonic number, defined as
$$H_m = \sum_{n=1}^{m} \frac{1}{n}.$$

2.3. Trial Functions

We choose the trial functions to be
$$\theta_i(\eta) = \eta(1-\eta)\,L_i^{(0,1)}(\eta), \qquad \vartheta_j(t) = t\,L_j^{(0,1)}(t).$$
Based on the orthogonality relation in (7), we acquire the following relations:
$$\int_0^1 \theta_m(\eta)\,\theta_s(\eta)\,\omega_1(\eta)\,d\eta = \frac{1}{2s+1}\,\delta_{ms},$$
and
$$\int_0^1 \vartheta_m(t)\,\vartheta_s(t)\,\omega_2(t)\,dt = \frac{1}{2s+1}\,\delta_{ms},$$
where $\omega_1(\eta)=\dfrac{1}{\eta^{2}(1-\eta)^{2}}$ and $\omega_2(t)=\dfrac{1}{t^{2}}$.
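The weighted orthogonality above is easy to confirm numerically. The sketch below reuses shifted_legendre from the previous sketch; the index pairs are arbitrary choices of ours.

```python
# Numerical check of the weighted orthogonality of the trial functions theta_i.
from scipy.integrate import quad

def theta(i: int, x: float) -> float:
    return x * (1.0 - x) * shifted_legendre(i, x)

def w1(x: float) -> float:
    return 1.0 / (x ** 2 * (1.0 - x) ** 2)

# The weight cancels the eta(1-eta) factors analytically, so the integrand is smooth.
for m, s in [(2, 2), (3, 3), (2, 5)]:
    val, _ = quad(lambda x: theta(m, x) * theta(s, x) * w1(x), 0.0, 1.0)
    print(m, s, round(val, 10))        # 1/(2s+1) when m == s, otherwise 0
```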
Corollary 1. 
For $i\ge 1$, one has
$$\frac{d\theta_i(\eta)}{d\eta} = 2\sum_{\substack{j=0\\ (i+j)\,\mathrm{odd}}}^{i-1} (2j+1)\bigl(1+2H_i-2H_j\bigr)\,\theta_j(\eta) + \mu_i(\eta),$$
where
$$\mu_i(\eta) = \begin{cases} 1-2\eta, & \text{if } i \text{ even},\\ -1, & \text{if } i \text{ odd}. \end{cases}$$
Proof. 
The proof of this corollary can be readily derived by substituting a = 0 and b = 1 into Theorem 1. □
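As a sanity check, the following Python sketch compares the right-hand side of Corollary 1 with a central finite difference of θ_i; the helpers theta and shifted_legendre come from the earlier sketches, and the index and evaluation point are arbitrary choices of ours.

```python
# Finite-difference check of Corollary 1 for a single index i.
def harmonic(m: int) -> float:
    """m-th harmonic number H_m."""
    return sum(1.0 / n for n in range(1, m + 1))

def dtheta_corollary(i: int, x: float) -> float:
    s = sum((2 * j + 1) * (1 + 2 * harmonic(i) - 2 * harmonic(j)) * theta(j, x)
            for j in range(i) if (i + j) % 2 == 1)
    mu = (1.0 - 2.0 * x) if i % 2 == 0 else -1.0
    return 2.0 * s + mu

i, x, h = 5, 0.43, 1e-6
fd = (theta(i, x + h) - theta(i, x - h)) / (2.0 * h)   # central difference
print(fd, dtheta_corollary(i, x))                      # the two values agree closely
```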
Remark 1. 
The first and second derivatives of the vector $\boldsymbol{\theta}(\eta)$ are provided by
$$\frac{d\boldsymbol{\theta}(\eta)}{d\eta} = \mathbf{A}\,\boldsymbol{\theta}(\eta) + \boldsymbol{\mu},$$
$$\frac{d^{2}\boldsymbol{\theta}(\eta)}{d\eta^{2}} = \mathbf{A}^{2}\,\boldsymbol{\theta}(\eta) + \mathbf{A}\,\boldsymbol{\mu} + \boldsymbol{\mu}',$$
where $\mathbf{A}=(a_{ij})_{0\le i,j\le K}$ is a matrix of order $(K+1)^2$, whose nonzero elements can be expressly specified as
$$a_{ij} = \begin{cases} 2\,(2j+1)\bigl(1+2H_i-2H_j\bigr), & (i+j)\ \mathrm{odd},\ i>j,\\ 0, & \text{otherwise}, \end{cases}$$
and also
$$\boldsymbol{\mu} = \bigl[\mu_0,\mu_1,\dots,\mu_K\bigr]^{T},$$
and
$$\boldsymbol{\mu}' = \bigl[\mu'_0,\mu'_1,\dots,\mu'_K\bigr]^{T}, \qquad \mu'_i = \begin{cases} -2, & i\ \text{even},\\ 0, & i\ \text{odd}. \end{cases}$$
For example, the matrices $\mathbf{A}$, $\boldsymbol{\mu}$, and $\boldsymbol{\mu}'$ take the following form for $K=6$:
$$\mathbf{A} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 6 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 12 & 0 & 0 & 0 & 0 & 0\\ \frac{28}{3} & 0 & \frac{50}{3} & 0 & 0 & 0 & 0\\ 0 & 19 & 0 & 21 & 0 & 0 & 0\\ \frac{167}{15} & 0 & \frac{77}{3} & 0 & \frac{126}{5} & 0 & 0\\ 0 & \frac{117}{5} & 0 & \frac{469}{15} & 0 & \frac{88}{3} & 0 \end{pmatrix},$$
and
$$\boldsymbol{\mu} = \begin{pmatrix} 1-2\eta\\ -1\\ 1-2\eta\\ -1\\ 1-2\eta\\ -1\\ 1-2\eta \end{pmatrix}, \qquad \boldsymbol{\mu}' = \begin{pmatrix} -2\\ 0\\ -2\\ 0\\ -2\\ 0\\ -2 \end{pmatrix}.$$
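The entries of A can be generated directly from the closed form above. The short sketch below (exact rational arithmetic via Python's fractions module; helper names are ours) reproduces, for instance, the entries 28/3, 50/3, and 469/15 displayed for K = 6.

```python
# Build the (K+1) x (K+1) matrix A of Remark 1 with exact rational entries.
from fractions import Fraction

def harmonic_frac(m: int) -> Fraction:
    return sum((Fraction(1, n) for n in range(1, m + 1)), Fraction(0))

K = 6
A = [[Fraction(0)] * (K + 1) for _ in range(K + 1)]
for i in range(K + 1):
    for j in range(i):
        if (i + j) % 2 == 1:
            A[i][j] = 2 * (2 * j + 1) * (1 + 2 * harmonic_frac(i) - 2 * harmonic_frac(j))

print(A[3][0], A[3][2], A[6][3])   # 28/3 50/3 469/15
```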
Theorem 2. 
The subsequent formula is valid for $\alpha\in(0,1)$:
$$D_t^{\alpha}\vartheta_s(t) = \sum_{\ell=0}^{K} Q_{\ell,s}\,\vartheta_\ell(t) + \lambda_s(t),$$
where
$$Q_{\ell,s} = \sum_{\tau=0}^{s-1} \frac{(\tau+2)\,(2\ell+1)\,(-1)^{s+\tau+2\ell+1}\,(s+\tau+1)!\,\Gamma(\tau-\alpha+2)}{(\tau-\alpha+2)\,(\tau+1)!\,\Gamma(s-\tau)\,\Gamma(\tau-\ell-\alpha+2)\,\Gamma(\tau+\ell-\alpha+3)},$$
and
$$\lambda_s(t) = \frac{(-1)^{s}}{\Gamma(2-\alpha)}\,t^{1-\alpha}.$$
Proof. 
Based on (8), along with the definition of ϑ j ( t ) , one obtains
$$\vartheta_j(t) = \sum_{\tau=0}^{j} \frac{(-1)^{j+\tau}\,(j+\tau)!}{(\tau!)^{2}\,(j-\tau)!}\,t^{\tau+1}.$$
Using Caputo's FD from (5) and the last relation, we can write
$$D_t^{\alpha}\vartheta_j(t) = \sum_{\tau=0}^{j} \frac{(\tau+1)!\,(-1)^{j+\tau}\,(j+\tau)!}{(\tau!)^{2}\,(j-\tau)!\,\Gamma(\tau-\alpha+2)}\,t^{\tau-\alpha+1} = \sum_{\tau=1}^{j} \frac{(\tau+1)!\,(-1)^{j+\tau}\,(j+\tau)!}{(\tau!)^{2}\,(j-\tau)!\,\Gamma(\tau-\alpha+2)}\,t^{\tau-\alpha+1} + \frac{(-1)^{j}\,t^{1-\alpha}}{\Gamma(2-\alpha)}.$$
Now, based on the inversion formula of $L_m^{(0,1)}(\eta)$ defined in (9) and for a sufficiently large positive number $K$, we can approximate $t^{\tau-\alpha+1}$ in the form
$$t^{\tau-\alpha+1} \approx \sum_{m=0}^{K} \frac{(-1)^{2m}\,(2m+1)\,\bigl(\Gamma(\tau-\alpha+1)\bigr)^{2}}{\Gamma(\tau-m-\alpha+1)\,\Gamma(\tau+m-\alpha+2)}\,\vartheta_m(t).$$
Inserting the previous relation into (27) leads to the following approximation:
$$D_t^{\alpha}\vartheta_j(t) = \sum_{\tau=1}^{j} \frac{(\tau+1)!\,(-1)^{j+\tau}\,(j+\tau)!}{(\tau!)^{2}\,(j-\tau)!\,\Gamma(\tau-\alpha+2)} \sum_{m=0}^{K} \frac{(-1)^{2m}\,(2m+1)\,\bigl(\Gamma(\tau-\alpha+1)\bigr)^{2}}{\Gamma(\tau-m-\alpha+1)\,\Gamma(\tau+m-\alpha+2)}\,\vartheta_m(t) + \frac{(-1)^{j}\,t^{1-\alpha}}{\Gamma(2-\alpha)}.$$
The previous equation can be reformulated by rearranging the elements as
$$D_t^{\alpha}\vartheta_s(t) = \sum_{\ell=0}^{K} Q_{\ell,s}\,\vartheta_\ell(t) + \lambda_s(t),$$
where
$$Q_{\ell,s} = \sum_{\tau=0}^{s-1} \frac{(\tau+2)\,(2\ell+1)\,(-1)^{s+\tau+2\ell+1}\,(s+\tau+1)!\,\Gamma(\tau-\alpha+2)}{(\tau-\alpha+2)\,(\tau+1)!\,\Gamma(s-\tau)\,\Gamma(\tau-\ell-\alpha+2)\,\Gamma(\tau+\ell-\alpha+3)},$$
and
$$\lambda_s(t) = \frac{(-1)^{s}}{\Gamma(2-\alpha)}\,t^{1-\alpha}.$$
This concludes the proof of the theorem. □
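To make Theorem 2 concrete, the sketch below evaluates Q_{ℓ,s} from (24) and compares the truncated expansion (23) with the exact Caputo derivative of ϑ_s, obtained term by term from the power rule. The helper shifted_legendre comes from an earlier sketch, and the sample values of α, s, K, and t are our own.

```python
# Numerical check of Theorem 2 (all helper names are ours).
import math
from math import gamma

def vartheta(j: int, t: float) -> float:
    return t * shifted_legendre(j, t)

def caputo_vartheta_exact(s: int, alpha: float, t: float) -> float:
    # D_t^alpha of vartheta_s(t) = sum_tau a_tau t^{tau+1}, term by term.
    out = 0.0
    for tau in range(s + 1):
        a = (-1) ** (s + tau) * math.factorial(s + tau) / (
            math.factorial(s - tau) * math.factorial(tau) ** 2)
        out += a * gamma(tau + 2) / gamma(tau + 2 - alpha) * t ** (tau + 1 - alpha)
    return out

def Q(l: int, s: int, alpha: float) -> float:
    total = 0.0
    for tau in range(s):                              # tau = 0, ..., s-1
        total += ((tau + 2) * (2 * l + 1) * (-1) ** (s + tau + 2 * l + 1)
                  * math.factorial(s + tau + 1) * gamma(tau - alpha + 2)
                  ) / ((tau - alpha + 2) * math.factorial(tau + 1) * gamma(s - tau)
                       * gamma(tau - l - alpha + 2) * gamma(tau + l - alpha + 3))
    return total

alpha, s, K, t = 0.5, 3, 12, 0.6
approx = sum(Q(l, s, alpha) * vartheta(l, t) for l in range(K + 1))
approx += (-1) ** s * t ** (1 - alpha) / gamma(2 - alpha)      # lambda_s(t)
print(approx, caputo_vartheta_exact(s, alpha, t))              # close agreement; the gap is the truncation error
```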
Remark 2. 
The fractional derivative of the vector ϑ ( t ) is given by
$$D_t^{\alpha}\boldsymbol{\vartheta}(t) \approx \mathbf{B}\,\boldsymbol{\vartheta}(t) + \boldsymbol{\lambda},$$
where $\mathbf{B}=(Q_{m,j})_{0\le m,j\le K}$ is a matrix of order $(K+1)^2$ whose nonzero elements are given in (24), and
$$\boldsymbol{\lambda} = \bigl[\lambda_0,\lambda_1,\dots,\lambda_K\bigr]^{T}.$$
For example, for $K=6$ and $\alpha=0.5$, the first row of $\mathbf{B}$ is identically zero, its second row is
$$\left(\frac{32}{9\sqrt{\pi}},\ \frac{32}{15\sqrt{\pi}},\ -\frac{32}{63\sqrt{\pi}},\ \frac{32}{135\sqrt{\pi}},\ -\frac{32}{231\sqrt{\pi}},\ \frac{32}{351\sqrt{\pi}},\ -\frac{32}{495\sqrt{\pi}}\right),$$
and the remaining entries are rational multiples of $1/\sqrt{\pi}$ obtained directly from (24), while
$$\boldsymbol{\lambda} = \frac{2\sqrt{t}}{\sqrt{\pi}}\,\bigl[1,\,-1,\,1,\,-1,\,1,\,-1,\,1\bigr]^{T}.$$

3. The Collocation Method for the Nonlinear Inhomogeneous Time-Fractional FitzHugh–Nagumo Differential Problem

This part analyzes a numerical algorithm for solving the nonlinear inhomogeneous equation TFFNDP (1) subject to conditions (2) and (3).
To proceed with the proposed collocation technique, we define the following transformation:
$$\psi(\eta,t) := u(\eta,t) + \hat{u}(\eta,t),$$
where
$$\hat{u}(\eta,t) = -\Bigl[(1-\eta)\,u(0,t) + \eta\,u(1,t) + (\eta-1)\,u(0,0) - \eta\,u(1,0) + u(\eta,0)\Bigr].$$
Then, Equation (37) converts the nonlinear inhomogeneous TFFNDP (1) governed by (2) and (3) into the adapted equation that follows:
$$D_t^{\alpha}\psi = \psi_{\eta\eta} + \psi(\psi-\varepsilon)(1-\psi) + g(\eta,t), \qquad 0<\alpha\le 1,$$
subject to homogeneous initial and boundary conditions (HIBCs)
$$\psi(\eta,0)=0,\quad 0<\eta\le 1, \qquad \psi(0,t)=\psi(1,t)=0,\quad 0<t\le 1,$$
where
$$g(\eta,t) = f(\eta,t) + D_t^{\alpha}\hat{u} - \hat{u}_{\eta\eta} - \hat{u}(\hat{u}-\varepsilon)(1-\hat{u}).$$
Thus, we can solve the modified Equation (39) governed by (40) instead of solving (1) governed by (2) and (3).
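The transformation is easy to verify numerically. The sketch below uses the closed-form solution quoted in Example 1 below as u(η, t); any sufficiently smooth u with the same boundary data would do, and the function names are ours.

```python
# Check that the transformation above yields homogeneous initial/boundary values.
import numpy as np

def u_exact(eta, t):
    return 0.5 * np.tanh(0.25 * (np.sqrt(2.0) * eta - t)) + 0.5

def u_hat(eta, t):
    return -((1 - eta) * u_exact(0.0, t) + eta * u_exact(1.0, t)
             + (eta - 1) * u_exact(0.0, 0.0) - eta * u_exact(1.0, 0.0)
             + u_exact(eta, 0.0))

def psi(eta, t):
    return u_exact(eta, t) + u_hat(eta, t)

eta = np.linspace(0.0, 1.0, 5)
print(psi(eta, 0.0))                 # = 0 for all eta   (psi(eta, 0) = 0)
print(psi(0.0, 0.7), psi(1.0, 0.7))  # = 0               (psi(0, t) = psi(1, t) = 0)
```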
Remark 3. 
Using a modified collocation method with the HIBCs leads to fewer computations to achieve the desired precision and to better numerical stability.
Now, assume the following space functions:
$$L^{2}_{\omega(\eta,t)}(\Omega) = \mathrm{span}\{\theta_i(\eta)\,\vartheta_j(t) : i,j = 0,1,\dots\},$$
$$\Delta_{\omega(\eta,t)}(\Omega) = \mathrm{span}\{\psi\in L^{2}_{\omega(\eta,t)}(\Omega) : \psi(\eta,0) = \psi(0,t) = \psi(1,t) = 0\},$$
where $\omega(\eta,t)=\omega_1(\eta)\,\omega_2(t)$, and $\Omega=(0,1)\times(0,1)$. Then, any $\psi\in\Delta_{\omega(\eta,t)}(\Omega)$ can be approximated as
$$\psi \approx \psi_K = \sum_{i=0}^{K}\sum_{j=0}^{K} c_{ij}\,\theta_i(\eta)\,\vartheta_j(t) = \boldsymbol{\theta}(\eta)^{T}\,\mathbf{C}\,\boldsymbol{\vartheta}(t),$$
where $\boldsymbol{\theta}(\eta)=[\theta_0(\eta),\theta_1(\eta),\dots,\theta_K(\eta)]^{T}$, $\boldsymbol{\vartheta}(t)=[\vartheta_0(t),\vartheta_1(t),\dots,\vartheta_K(t)]^{T}$, and $\mathbf{C}=(c_{ij})_{0\le i,j\le K}$ is the unknown matrix whose order is $(K+1)^2$.
  • The residual R ( η , t ) of Equation (39) is expressed as
$$R(\eta,t) = D_t^{\alpha}\psi_K - \frac{\partial^{2}\psi_K}{\partial\eta^{2}} - \psi_K(\psi_K-\varepsilon)(1-\psi_K) - g(\eta,t).$$
Utilizing Remarks 1 and 2 alongside the expansion (44), we can express $R(\eta,t)$, as defined in (45), in the subsequent matrix form:
$$R(\eta,t) = \boldsymbol{\theta}(\eta)^{T}\mathbf{C}\bigl[\mathbf{B}\,\boldsymbol{\vartheta}(t)+\boldsymbol{\lambda}\bigr] - \bigl[\mathbf{A}^{2}\boldsymbol{\theta}(\eta)+\mathbf{A}\boldsymbol{\mu}+\boldsymbol{\mu}'\bigr]^{T}\mathbf{C}\,\boldsymbol{\vartheta}(t) - \boldsymbol{\theta}(\eta)^{T}\mathbf{C}\,\boldsymbol{\vartheta}(t)\,\bigl[\boldsymbol{\theta}(\eta)^{T}\mathbf{C}\,\boldsymbol{\vartheta}(t)-\varepsilon\bigr]\bigl[1-\boldsymbol{\theta}(\eta)^{T}\mathbf{C}\,\boldsymbol{\vartheta}(t)\bigr] - g(\eta,t).$$
The application of the spectral collocation method at certain collocation points $(\eta_i,t_j)$ produces
$$R(\eta_i,t_j) = 0, \qquad i,j = 2,3,\dots,K+2,$$
where $\{(\eta_i,t_j): i,j=2,3,\dots,K+2\}$ are the roots of $\theta_{K+2}(\eta)$ and $\vartheta_{K+2}(t)$, respectively. We may employ Newton's iterative method to numerically address the nonlinear system of Equation (47), which has a dimension of $(K+1)^2$ in the unknowns $c_{ij}$.
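For completeness, the following Python sketch assembles and solves the collocation system. It reuses theta, vartheta, harmonic, and Q from the earlier sketches, takes the source term g_fun and the parameter eps as user inputs, and replaces Newton's method with SciPy's fsolve. The particular selection of K + 1 interior collocation roots and the zero initial guess are simplifications of ours, so convergence for a specific problem is not guaranteed.

```python
# Sketch of the modified collocation solve for the transformed problem (39)-(40).
import numpy as np
from math import gamma
from numpy.polynomial import legendre
from scipy.optimize import fsolve

def solve_tffndp(K, alpha, eps, g_fun):
    n = K + 1
    # Interior roots of L_{K+2}^{(0,1)}: roots of the Legendre polynomial P_{K+2},
    # shifted from [-1, 1] to (0, 1); keep the first K + 1 of them.
    z = legendre.legroots([0.0] * (K + 2) + [1.0])
    pts = np.sort(0.5 * (z + 1.0))[:n]
    eta_pts, t_pts = pts, pts

    # Operational matrices of Remarks 1 and 2.
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            if (i + j) % 2 == 1:
                A[i, j] = 2 * (2 * j + 1) * (1 + 2 * harmonic(i) - 2 * harmonic(j))
    QM = np.array([[Q(l, s, alpha) for l in range(n)] for s in range(n)])  # QM[s, l] = Q_{l,s}

    def residual(cvec):
        C = cvec.reshape(n, n)
        R = np.empty((n, n))
        for a, x in enumerate(eta_pts):
            th = np.array([theta(i, x) for i in range(n)])
            mu = np.array([1.0 - 2.0 * x if i % 2 == 0 else -1.0 for i in range(n)])
            mup = np.array([-2.0 if i % 2 == 0 else 0.0 for i in range(n)])
            th2 = A @ (A @ th) + A @ mu + mup               # second eta-derivative (Remark 1)
            for b, t in enumerate(t_pts):
                vt = np.array([vartheta(j, t) for j in range(n)])
                lam = np.array([(-1) ** s * t ** (1 - alpha) / gamma(2 - alpha)
                                for s in range(n)])
                dvt = QM @ vt + lam                          # Caputo time derivative (Theorem 2)
                psi = th @ C @ vt
                R[a, b] = th @ C @ dvt - th2 @ C @ vt \
                          - psi * (psi - eps) * (1.0 - psi) - g_fun(x, t)
        return R.ravel()

    c = fsolve(residual, np.zeros(n * n))
    return c.reshape(n, n)        # coefficient matrix C; psi_K = theta^T C vartheta
```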
Remark 4. 
The term g(η, t), which may include Caputo's FD, is treated as known and evaluated at the collocation points (η_i, t_j) using its analytical form; any fractional derivatives it contains are computed numerically using Mathematica before it is substituted into the nonlinear system.

4. The Error Analysis

This section focuses on the error analysis of the double expansion utilized for approximation.
Lemma 1. 
The following inequalities are satisfied in the interval [0, 1]:
$$|\theta_i(\eta)| \le \tfrac{1}{4}, \qquad i\ge 0,$$
$$|\vartheta_i(t)| \le 1, \qquad i\ge 0.$$
Proof. 
To prove the first part, Equation (11) enables one to write
$$|\theta_i(\eta)| = |\eta(1-\eta)\,L_i^{(0,1)}(\eta)| \le \tfrac{1}{4}\,|L_i^{(0,1)}(\eta)|.$$
Using the following inequality [38],
$$|L_i^{(0,1)}(\eta)| \le 1, \qquad i\ge 0,$$
the first part can be obtained.
  • To prove the second part, Equation (11) along with the previous inequality enables us to write
$$|\vartheta_i(t)| = |t\,L_i^{(0,1)}(t)| \le 1.$$
This completes the proof of the lemma. □
Theorem 3. 
Assume that a function $\psi = \eta\,t\,(1-\eta)\,g_1(\eta)\,g_2(t) \in \Delta_{\omega(\eta,t)}(\Omega)$, with $g_1(\eta)$ and $g_2(t)$ having bounded fourth-order derivatives, admits the expansion $\psi = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} c_{ij}\,\theta_i(\eta)\,\vartheta_j(t)$. Then, the series in (44) converges to ψ. Additionally, the expansion coefficients in (44) satisfy the following bound:
$$|c_{ij}| \lesssim \frac{1}{i^{3}\,j^{3}}, \qquad i,j \ge 4,$$
where $r_1\lesssim r_2$ indicates that $r_1\le n\,r_2$ for a generic constant n.
Proof. 
In light of the orthogonality of θ i ( η ) and ϑ j ( t ) , it is possible to obtain the following:
$$c_{ij} = (2i+1)(2j+1)\int_0^1\!\!\int_0^1 \psi(\eta,t)\,\theta_i(\eta)\,\vartheta_j(t)\,\omega(\eta,t)\,d\eta\,dt.$$
The last equation can be rewritten, after using the assumption $\psi = \eta\,t\,(1-\eta)\,g_1(\eta)\,g_2(t)$, as
$$c_{ij} = (2i+1)(2j+1)\int_0^1\!\!\int_0^1 g_1(\eta)\,g_2(t)\,L_i^{(0,1)}(\eta)\,L_j^{(0,1)}(t)\,d\eta\,dt,$$
which can be rewritten as
$$c_{ij} = \Lambda_1 \times \Lambda_2,$$
where
$$\Lambda_1 = (2i+1)\int_0^1 g_1(\eta)\,L_i^{(0,1)}(\eta)\,d\eta,$$
and
$$\Lambda_2 = (2j+1)\int_0^1 g_2(t)\,L_j^{(0,1)}(t)\,dt.$$
Integrating $\Lambda_1$ and $\Lambda_2$ by parts four times, we obtain
$$\Lambda_1 = (2i+1)\int_0^1 g_1^{(4)}(\eta)\,I_1^{(4)}(\eta)\,d\eta,$$
$$\Lambda_2 = (2j+1)\int_0^1 g_2^{(4)}(t)\,I_2^{(4)}(t)\,dt,$$
where
$$I_1^{(4)}(\eta) = \frac{1}{2^{8}}\sum_{m=0}^{4}\binom{4}{m}\frac{(-1)^{m}\,\bigl(i-2m+\tfrac{9}{2}\bigr)\,\Gamma\bigl(i-m+\tfrac{1}{2}\bigr)}{\Gamma\bigl(i-m+\tfrac{11}{2}\bigr)}\,L_{i+4-2m}^{(0,1)}(\eta),$$
and
$$I_2^{(4)}(t) = \frac{1}{2^{8}}\sum_{m=0}^{4}\binom{4}{m}\frac{(-1)^{m}\,\bigl(j-2m+\tfrac{9}{2}\bigr)\,\Gamma\bigl(j-m+\tfrac{1}{2}\bigr)}{\Gamma\bigl(j-m+\tfrac{11}{2}\bigr)}\,L_{j+4-2m}^{(0,1)}(t).$$
Now, following steps similar to those in Theorem 4 of Ref. [37], we acquire the desired estimation
$$|c_{ij}| \lesssim \frac{1}{i^{3}\,j^{3}}, \qquad i,j \ge 4. \qquad\square$$
Remark 5. 
If we follow the same principles as those for Theorem 3, we will have the following inequalities:
$$|c_{0j}| \lesssim \frac{1}{j^{3}}, \quad |c_{1j}| \lesssim \frac{1}{j^{3}}, \quad |c_{2j}| \lesssim \frac{1}{j^{3}}, \quad |c_{3j}| \lesssim \frac{1}{j^{3}}, \qquad j>3,$$
and
$$|c_{i0}| \lesssim \frac{1}{i^{3}}, \quad |c_{i1}| \lesssim \frac{1}{i^{3}}, \quad |c_{i2}| \lesssim \frac{1}{i^{3}}, \quad |c_{i3}| \lesssim \frac{1}{i^{3}}, \qquad i>3.$$
Theorem 4. 
The truncation error can be approximated as
$$|\psi - \psi_K| \lesssim \frac{1}{K^{2}}.$$
Proof. 
First, we can write
$$|\psi-\psi_K| \le \left|\sum_{j=K+1}^{\infty}\bigl(c_{0j}\theta_0(\eta)+c_{1j}\theta_1(\eta)+c_{2j}\theta_2(\eta)+c_{3j}\theta_3(\eta)\bigr)\vartheta_j(t)\right| + \left|\sum_{i=K+1}^{\infty}\bigl(c_{i0}\vartheta_0(t)+c_{i1}\vartheta_1(t)+c_{i2}\vartheta_2(t)+c_{i3}\vartheta_3(t)\bigr)\theta_i(\eta)\right| + \left|\sum_{i=4}^{K}\sum_{j=K+1}^{\infty} c_{ij}\,\theta_i(\eta)\,\vartheta_j(t)\right| + \left|\sum_{i=K+1}^{\infty}\sum_{j=4}^{\infty} c_{ij}\,\theta_i(\eta)\,\vartheta_j(t)\right|.$$
With the aid of Lemma 1, Theorem 3, and Equations (64) and (65), the last inequality becomes
$$|\psi-\psi_K| \lesssim \sum_{i=K+1}^{\infty}\sum_{j=4}^{\infty}\frac{1}{i^{3} j^{3}} + \sum_{j=K+1}^{\infty}\frac{1}{j^{3}} + \sum_{i=4}^{K}\sum_{j=K+1}^{\infty}\frac{1}{i^{3} j^{3}} + \sum_{i=K+1}^{\infty}\frac{1}{i^{3}}.$$
Now, noting the inequality
$$\frac{1}{i^{3}} < \frac{1}{i(i^{2}-1)}, \qquad i>1,$$
and using the partial-fraction decomposition $\frac{1}{i(i^{2}-1)}=\frac{1}{2}\bigl[\frac{1}{(i-1)i}-\frac{1}{i(i+1)}\bigr]$, whose sums telescope, we acquire
$$\sum_{i=K+1}^{\infty}\frac{1}{i(i^{2}-1)} = \frac{1}{2K^{2}+2K} < \frac{1}{K^{2}},\ K>0, \qquad \sum_{i=4}^{K}\frac{1}{i(i^{2}-1)} = \frac{(K-3)(K+4)}{24K(K+1)} < \frac{1}{24},\ K>0, \qquad \sum_{i=4}^{\infty}\frac{1}{i(i^{2}-1)} = \frac{1}{24}.$$
Inserting Equation (69) into Equation (68), we obtain
$$|\psi-\psi_K| \lesssim \frac{1}{K^{2}}. \qquad\square$$
Theorem 5 (Stability).
Under the assumptions of Theorem 3, we have
$$|\psi_{K+1}-\psi_K| \lesssim \frac{1}{K^{2}}.$$
Proof. 
Applying the following inequality and Theorem 4 yields the desired result
$$|\psi_{K+1}-\psi_K| \le |\psi-\psi_K| + |\psi-\psi_{K+1}| \lesssim \frac{1}{K^{2}}.$$
This concludes the proof of this theorem. □

5. Illustrative Examples

In this section, we present some numerical examples to demonstrate the accuracy and efficiency of the suggested numerical algorithm by using the absolute error (AE) and the L∞ error:
$$\mathrm{AE} = |u_{\mathrm{exact}} - u_{\mathrm{approximate}}|,$$
$$L_{\infty}\ \mathrm{error} = \max_{(\eta,t)\in\Omega} \mathrm{AE}.$$
All codes were written and debugged using Mathematica 11 on an HP Z420 Workstation with an Intel(R) Xeon(R) E5-1620 v2 CPU (3.70 GHz), 16 GB of DDR3 RAM, and 512 GB of storage.
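A direct way to evaluate the two error measures above on a uniform grid is sketched below; the function name, grid size, and callable interfaces are ours.

```python
# AE on a grid and the resulting L_inf error, for callable exact/approximate solutions.
import numpy as np

def l_inf_error(u_exact, u_approx, m=101):
    eta, t = np.meshgrid(np.linspace(0.0, 1.0, m), np.linspace(0.0, 1.0, m))
    ae = np.abs(u_exact(eta, t) - u_approx(eta, t))     # pointwise absolute error
    return ae.max()                                     # maximum over Omega
```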
Example 1 
([32,33]). Consider the TFFNDP
$$D_t^{\alpha}u = u_{\eta\eta} + u(u-\varepsilon)(1-u) + f(\eta,t), \qquad 0<\alpha\le 1,$$
constrained with
$$u(\eta,0) = \frac{1}{2}\tanh\!\left(\frac{\eta}{2\sqrt{2}}\right)+\frac{1}{2},\quad 0<\eta\le 1, \qquad u(0,t) = \frac{1}{2}-\frac{1}{2}\tanh\!\left(\frac{t}{4}\right), \qquad u(1,t) = \frac{1}{2}\tanh\!\left(\frac{1}{4}\bigl(\sqrt{2}-t\bigr)\right)+\frac{1}{2},\quad 0<t\le 1,$$
where $f(\eta,t)$ is selected such that the exact solution of this problem is $u = \frac{1}{2}\tanh\!\left(\frac{1}{4}\bigl(\sqrt{2}\,\eta-t\bigr)\right)+\frac{1}{2}$.
Table 1 compares the L∞ error of our technique at K = 7 with that of the method described in [32] at various α values when ε = 1. The absolute errors (AEs) at α = 0.65 and ε = 0.1 are depicted in Figure 1. Table 2 presents a comparison of the AEs between our method and the method in [33] at α = 1 and ε = 1. The AEs at varying values of ε are presented in Table 3 for α = 0.65 and K = 7. Figure 2 illustrates the stability |ψ_{K+1} − ψ_K| at η = t and α = 0.5. Finally, Figure 3 illustrates the Log10(maximum AE) at different values of K when α = 0.5 and ε = 1. These findings show that the results of this method are extremely close to the exact solution.
Example 2 
([31,32,33]). Consider the TFFNDP
$$D_t^{\alpha}u = u_{\eta\eta} + u(u-\varepsilon)(1-u) + f(\eta,t), \qquad 0<\alpha\le 1,$$
constrained with
$$u(\eta,0) = \frac{1}{e^{-\frac{\eta}{\sqrt{2}}}+1},\quad 0<\eta\le 1, \qquad u(0,t) = \frac{1}{e^{\frac{(1-2\varepsilon)t}{2}}+1}, \qquad u(1,t) = \frac{1}{e^{\frac{(1-2\varepsilon)t}{2}-\frac{1}{\sqrt{2}}}+1},\quad 0<t\le 1,$$
where $f(\eta,t)$ is selected such that the exact solution of (76) is $u = \dfrac{1}{1+e^{\frac{(1-2\varepsilon)t}{2}-\frac{\eta}{\sqrt{2}}}}$.
In Table 4 and Table 5, we compare the AEs of our technique with those in [31,32,33] at different α and ε values. Table 6 also shows how the proposed technique and the method in [32] compare in terms of the L∞ error at different α values. When α = 0.7 and ε = 0.45, Figure 4 illustrates the AE at different values of K. Figure 5 illustrates the stability |ψ_{K+1} − ψ_K| at η = t and α = 0.7. Finally, Figure 6 illustrates the Log10(maximum AE) at different values of K when α = 0.7 and ε = 1. These findings show that this method's results are very close to the exact solution.
Example 3 
([32]). Consider the TFFNDP
$$D_t^{\alpha}u = u_{\eta\eta} + u(u-\varepsilon)(1-u) + f(\eta,t), \qquad 0<\alpha\le 1,$$
constrained with
$$u(\eta,0) = \frac{1}{2}\tanh\!\left(\frac{\eta}{2\sqrt{2}}\right)+\frac{1}{2},\quad 0<\eta\le 1, \qquad u(0,t) = \frac{1}{2}\tanh\!\left(\frac{t}{4}\right)+\frac{1}{2}, \qquad u(1,t) = \frac{1}{2}\tanh\!\left(\frac{1}{4}\bigl(t+\sqrt{2}\bigr)\right)+\frac{1}{2},\quad 0<t\le 1,$$
where $f(\eta,t)$ is selected such that the exact solution of this problem is $u = \frac{1}{2}\tanh\!\left(\frac{1}{4}\bigl(t+\sqrt{2}\,\eta\bigr)\right)+\frac{1}{2}$.
Table 7 compares the L∞ error for our technique at K = 7 with that of [32] at different α values when ε = 0. Figure 7 shows the AE (left) and approximate solution (AS) (right) at α = 0.3 and ε = 0.5 for K = 7. Table 8 shows the AE at α = 0.8 and different values of ε at K = 7. Table 9 presents the AE at different values of α and ε when K = 7.
Example 4. 
Consider the TFFNDP
$$D_t^{\alpha}u = u_{\eta\eta} + u(u-\varepsilon)(1-u) + t^{2}(1-t)^{2}\,\eta^{2}(1-\eta)^{2}, \qquad 0<\alpha\le 1,$$
constrained with
$$u(\eta,0) = 0,\quad 0<\eta\le 1, \qquad u(0,t) = u(1,t) = 0,\quad 0<t\le 1.$$
Since the exact solution is not available, let us define the following absolute residual error norm:
$$\mathrm{ARE} = \max_{(\eta,t)\in(0,1)\times(0,1)} \left| D_t^{\alpha}u_K - \frac{\partial^{2}u_K}{\partial\eta^{2}} - u_K(u_K-\varepsilon)(1-u_K) - t^{2}(1-t)^{2}\,\eta^{2}(1-\eta)^{2} \right|,$$
and apply the presented method at α = 0.9 and ϵ = 0.01 at different values of η = t to obtain Table 10, which illustrates the ARE at various values of K . Figure 8 illustrates the ARE and the approximate solution at α = 0.9 , ϵ = 0.01 , and K = 14 .
Remark 6. 
The approximate solution at α = 0.9 , ϵ = 0.01 , and K = 3 is
$$u_3(\eta,t) = \eta\,t\,\Bigl[\bigl(-0.00169533\,\eta^{4} + 0.00332928\,\eta^{3} - 0.000865217\,\eta^{2} + 0.0000997735\,\eta\bigr) + \bigl(-0.0929458\,\eta^{4} + 0.161879\,\eta^{3} - 0.00258051\,\eta^{2} + 0.00255686\,\eta - 0.0689095\bigr)t^{3} + \bigl(0.0423435\,\eta^{4} - 0.0335923\,\eta^{3} - 0.0669551\,\eta^{2} - 0.000925701\,\eta + 0.0591296\bigr)t^{2} + \bigl(0.0560843\,\eta^{4} - 0.134855\,\eta^{3} + 0.0647677\,\eta^{2} - 0.00181725\,\eta + 0.0158202\bigr)t - 0.000868501\Bigr].$$
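A quick way to see that the printed u_3 respects the homogeneous conditions of Example 4 is to evaluate it on the boundary; the sketch below simply transcribes the coefficients above.

```python
# u_3 as printed above; it vanishes at eta = 0, eta = 1, and t = 0 (up to the
# rounding of the displayed digits).
def u3(eta, t):
    bracket = (
        -0.00169533 * eta**4 + 0.00332928 * eta**3 - 0.000865217 * eta**2 + 0.0000997735 * eta
        + (-0.0929458 * eta**4 + 0.161879 * eta**3 - 0.00258051 * eta**2
           + 0.00255686 * eta - 0.0689095) * t**3
        + (0.0423435 * eta**4 - 0.0335923 * eta**3 - 0.0669551 * eta**2
           - 0.000925701 * eta + 0.0591296) * t**2
        + (0.0560843 * eta**4 - 0.134855 * eta**3 + 0.0647677 * eta**2
           - 0.00181725 * eta + 0.0158202) * t
        - 0.000868501
    )
    return eta * t * bracket

print(u3(0.0, 0.5), u3(1.0, 0.5), u3(0.3, 0.0))   # all ~0
```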
Example 5. 
Consider the TFFNDP
$$D_t^{\alpha}u = u_{\eta\eta} + u(u-\varepsilon)(1-u) + f(\eta,t), \qquad 0<\alpha\le 1,$$
constrained with
$$u(\eta,0) = 0,\quad 0<\eta\le 1, \qquad u(0,t) = t^{5-\alpha}, \qquad u(1,t) = t^{5-\alpha}\,e,\quad 0<t\le 1,$$
where $f(\eta,t)$ is selected such that the exact solution of this problem is $u = e^{\eta^{2}}\,t^{5-\alpha}$ at ε = 1.
Table 11 presents the AE at α = 0.5 and K = 9 . In addition, Figure 9 illustrates the AE at different values of α when K = 9 .
Remark 7. 
The source term f ( η , t ) that contains Caputo’s FD of the exact solution u for Examples 1, 2, 3, and 5 is evaluated numerically using Mathematica 11.

6. Concluding Remarks

This paper presents and analyzes an accurate solver employing a modified collocation technique for solving the nonlinear inhomogeneous TFFNDP. Specific theoretical findings related to SLPs were significant in applying our numerical method to resolve this problem. We designed the proposed numerical method by using the integer-order derivative and fractional-order operational matrices of SLPs. Numerous numerical findings and comparisons were presented to validate the proposed methodology. As future work, we expect that the presented approach can be extended to treat other fractional-order PDE models [39,40].

Author Contributions

Conceptualization, A.A.A. and A.G.A.; Methodology, S.S.A., A.A.A. and A.G.A.; Software, A.G.A.; Validation, S.S.A., A.A.A. and A.G.A.; Formal analysis, S.S.A. and A.G.A.; Investigation, A.G.A.; Resources, A.A.A. and A.G.A.; Data curation, S.S.A., A.A.A. and A.G.A.; Writing—original draft, S.S.A. and A.G.A.; Writing—review & editing, S.S.A., A.A.A. and A.G.A.; Visualization, S.S.A., A.A.A. and A.G.A.; Supervision, A.G.A.; Project administration, A.G.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no funding for this study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bernardi, C.; Maday, Y. Spectral Methods. Handb. Numer. Anal. 1997, 5, 209–485.
  2. Shen, J.; Tang, T.; Wang, L.L. Spectral Methods: Algorithms, Analysis and Applications; Springer: Berlin/Heidelberg, Germany, 2011; Volume 41.
  3. Orszag, S.A. Spectral Methods for Problems in Complex Geometrics. In Numerical Methods for Partial Differential Equations; Elsevier: Amsterdam, The Netherlands, 1979; pp. 273–305.
  4. Abdelhakem, M.; Abdelhamied, D.; El-Kady, M.; Youssri, Y.H. Two Modified Shifted Chebyshev–Galerkin Operational Matrix Methods for Even-Order Partial Boundary Value Problems. Bound. Value Probl. 2025, 2025, 34.
  5. Brahim, M.S.T.; Youssri, Y.H.; Alburaikan, A.; Khalifa, H.; Radwn, T. A Refined Galerkin Approach for Solving Higher-Order Differential Equations via Bernoulli Polynomials. Fractals 2025, 2540183.
  6. Zaky, M.A.; Alharbi, W.G.; Alzubaidi, M.M.; Matoog, R.T. A Legendre Tau Approach for High-Order Pantograph Volterra–Fredholm Integro-Differential Equations. AIMS Math. 2025, 10, 7067–7085.
  7. Alaa-Eldeen, T.; Alzabeedy, G.M.; Albalawi, W.; Nisar, K.S.; Abdel-Aty, A.H.; El-Kady, M.; Abdelhakem, M. Spectral Tau Explicit Form for Approximating Solutions to Real-Life IBVPs Using Chebyshev Derivatives. Int. J. Geom. Methods Mod. Phys. 2025, 22, 2450324.
  8. Abd-Elhameed, W.M.; Alqubori, O.M.; Amin, A.K.; Atta, A.G. Numerical Solutions for Nonlinear Ordinary and Fractional Duffing Equations Using Combined Fibonacci–Lucas Polynomials. Axioms 2025, 14, 314.
  9. Luo, M.; Xu, D.; Pan, X. Sinc–Galerkin method and a higher-order method for a 1D and 2D time-fractional diffusion equations. Bound. Value Probl. 2024, 2024, 106.
  10. Ramadan, M.; Samy, H.; Hanafy, I.; Adel, W. Petrov–Galerkin finite element method for solving the time-fractional Rosenau–Hyman equation. J. Umm Al-Qura Univ. Appl. Sci. 2025, 1–14.
  11. Li, Z.; He, G.; Yi, L. Postprocessing techniques of the C0- and C1-continuous Petrov–Galerkin methods for second-order Volterra integro-differential equations. J. Appl. Math. Comput. 2025, 1–38.
  12. Talaei, Y.; Zaky, M.; Hendy, A. A fractional spectral Galerkin method for Fuzzy Volterra integral equations with weakly singular kernels: Regularity, convergence, and applications. Fuzzy Sets Syst. 2025, 518, 109488.
  13. Yassin, N.M.; Atta, A.G.; Aly, E.H. Numerical Solutions for Nonlinear Ordinary and Fractional Newell–Whitehead–Segel Equation Using Shifted Schröder Polynomials. Bound. Value Probl. 2025, 2025, 57.
  14. Doha, E.; Abd-Elhameed, W.; Youssri, Y. Second kind Chebyshev operational matrix algorithm for solving differential equations of Lane–Emden type. New Astron. 2013, 23, 113–117.
  15. Taema, M.; Dagher, M.; Youssri, Y. Spectral collocation method via Fermat polynomials for Fredholm–Volterra integral equations with singular kernels and fractional differential equations. J. Math. 2025, 14, 481–492.
  16. Yüzbaşı, Ş. Fractional Bell collocation method for solving linear fractional integro-differential equations. Math. Sci. 2024, 18, 29–40.
  17. Roop, J. A randomized neural network based Petrov–Galerkin method for approximating the solution of fractional order boundary value problems. Results Appl. Math. 2024, 23, 100493.
  18. Pulch, R.; Singh, A. Stochastic Galerkin method for linear fractional differential equations. Int. J. Uncertain. Quantif. 2025, 15, 21–36.
  19. Suetin, P. Orthogonal Polynomials in Two Variables; Routledge: Oxfordshire, UK, 2022.
  20. Abd-Elhameed, W.M.; Doha, E.H.; Ahmed, H.M. Linearization Formulae for Certain Jacobi Polynomials. Ramanujan J. 2016, 39, 155–168.
  21. Boyd, J.P. Chebyshev and Fourier Spectral Methods; Courier Corp.: North Chelmsford, MA, USA, 2001.
  22. Hesthaven, J.S.; Gottlieb, D.I.; Gottlieb, S. Spectral Methods for Time-Dependent Problems; Cambridge Univ. Press: Cambridge, UK, 2007; Volume 21.
  23. Ismail, M.E.H.; Koelink, E. Theory and Applications of Special Functions; Springer: Berlin/Heidelberg, Germany, 2005.
  24. Sakar, M.G.; Saldır, O.; Ata, A. Numerical Solution of Fractional Order Multi-Point Boundary Value Problems Using Reproducing Kernel Method with Shifted Legendre Polynomials. Z. Angew. Math. Phys. 2025, 76, 141.
  25. Vana, R.; Karunaka, P. Numerical Solutions of the Benjamin–Bona–Mahony Equation Using the Differential Quadrature Method with Shifted Legendre and Generalized Laguerre Polynomials. Indian J. Pure Appl. Math. 2025, 1–14.
  26. FitzHugh, R. Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1961, 1, 445–466.
  27. Nagumo, J.; Arimoto, S.; Yoshizawa, S. An active pulse transmission line simulating nerve axon. Proc. IRE 1962, 50, 2061–2070.
  28. Aronson, D.G.; Weinberger, H.F. Multidimensional nonlinear diffusion arising in population genetics. Adv. Math. 1978, 30, 33–76.
  29. Onyeoghane, J.N.; Njoseh, I.N.; Igabari, J.N. A Petrov–Galerkin Finite Element Method for the Space Time Fractional FitzHugh–Nagumo Equation. Sci. Afr. 2025, 28, e02623.
  30. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A Collocation Procedure for the Numerical Treatment of FitzHugh–Nagumo Equation Using a Kind of Chebyshev Polynomials. AIMS Math. 2025, 10, 1201–1223.
  31. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A Collocation Procedure for Treating the Time-Fractional FitzHugh–Nagumo Differential Equation Using Shifted Lucas Polynomials. Mathematics 2024, 12, 3672.
  32. Alam, M.; Haq, S.; Ali, I.; Ebadi, M.J.; Salahshour, S. Radial Basis Functions Approximation Method for Time-Fractional FitzHugh–Nagumo Equation. Fractal Fract. 2023, 7, 882.
  33. Patel, H.S.; Patel, T. Applications of Fractional Reduced Differential Transform Method for Solving the Generalized Fractional-Order FitzHugh–Nagumo Equation. Int. J. Appl. Comput. Math. 2021, 7, 188.
  34. Zhao, Y.L.; Gu, X.M.; Ostermann, A. A Preconditioning Technique for an All-at-Once System from Volterra Subdiffusion Equations with Graded Time Steps. J. Sci. Comput. 2021, 88, 11.
  35. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Elsevier: Amsterdam, The Netherlands, 1998; Volume 198.
  36. Abd-Elhameed, W.M.; Youssri, Y.H.; Doha, E.H. A Novel Operational Matrix Method Based on Shifted Legendre Polynomials for Solving Second-Order Boundary Value Problems Involving Singular, Singularly Perturbed and Bratu-Type Equations. Math. Sci. 2015, 9, 93–102.
  37. Napoli, A.; Abd-Elhameed, W.M. Numerical Solution of Eighth-Order Boundary Value Problems by Using Legendre Polynomials. Int. J. Comput. Methods 2018, 15, 1750083.
  38. Hussaini, M.Y.; Zang, T.A. Spectral Methods in Fluid Dynamics. Annu. Rev. Fluid Mech. 1987, 19, 339–367.
  39. Gu, X.M.; Sun, H.W.; Zhao, Y.L.; Zheng, X. An Implicit Difference Scheme for Time-Fractional Diffusion Equations with a Time-Invariant Type Variable Order. Appl. Math. Lett. 2021, 120, 107270.
  40. Gu, X.M.; Huang, T.Z.; Zhao, Y.L.; Lyu, P.; Carpentieri, B. A Fast Implicit Difference Scheme for Solving the Generalized Time–Space Fractional Diffusion Equations with Variable Coefficients. Numer. Methods Partial Differ. Equ. 2021, 37, 1136–1162.
Figure 1. The AE of Example 1 at α = 0.65 and ε = 0.1.
Figure 2. Stability |ψ_{K+1} − ψ_K| at η = t for Example 1.
Figure 3. The Log10(maximum AE) at α = 0.5 of Example 1.
Figure 4. The AE of Example 2 at α = 0.7, ε = 0.45.
Figure 5. Stability |ψ_{K+1} − ψ_K| at η = t for Example 2.
Figure 6. The Log10(maximum AE) at α = 0.7 of Example 2.
Figure 7. The AE (left) and AS (right) of Example 3 at α = 0.3, ε = 0.5, when K = 7.
Figure 8. The ARE and the approximate solution of Example 4 at α = 0.9, ϵ = 0.01, and K = 14.
Figure 9. The AE of Example 5 at different values of α.
Table 1. L∞ errors of Example 1.
t | Method in [32]: α = 0.25, α = 0.5, α = 0.75, α = 1 | Presented technique at K = 7: α = 0.25, α = 0.5, α = 0.75, α = 1
0.2 6.166 × 10 4 3.14 × 10 4 1.28 × 10 4 3.64 × 10 5 2.41 × 10 10 3.23047 × 10 10 3.98 × 10 10 4.66 × 10 10
0.4 2.99 × 10 4 2.30 × 10 4 1.41 × 10 4 1.57 × 10 4 2.071 × 10 10 1.87 × 10 10 1.52 × 10 10 1.41 × 10 10
0.6 4.323 × 10 5 7.29 × 10 5 5.034 × 10 5 1.59 × 10 4 1.57 × 10 10 1.49 × 10 10 1.58 × 10 10 2.72 × 10 10
0.8 9.45 × 10 5 1.61 × 10 5 4.743 × 10 5 6.20 × 10 5 6.03 × 10 10 1.11 × 10 9 1.81 × 10 9 2.64 × 10 9
1 7.90 × 10 6 8.04 × 10 6 2.664 × 10 6 6.82 × 10 6 5.76 × 10 9 1.07 × 10 8 1.92 × 10 8 3.35 × 10 8
Table 2. Comparison of AEs for Example 1 at α = 1.
η | t | Method in [33] | Proposed method at K = 7
0.10.2 5.26 × 10 8 8.90 × 10 11
0.4 5.11 × 10 7 2.47 × 10 11
0.6 9.06 × 10 7 4.48 × 10 11
0.8 2.40 × 10 6 5.87 × 10 10
0.30.2 1.95 × 10 7 2.61 × 10 10
0.4 2.80 × 10 6 7.71 × 10 11
0.6 1.26 × 10 5 1.43 × 10 10
0.8 3.45 × 10 5 1.65 × 10 9
0.50.2 3.21 × 10 7 4.02 × 10 10
0.4 4.86 × 10 6 1.30 × 10 10
0.6 2.32 × 10 5 2.49 × 10 10
0.8 6.86 × 10 5 2.41 × 10 9
0.70.2 4.22 × 10 7 4.66 × 10 10
0.4 6.54 × 10 6 1.14 × 10 10
0.6 3.20 × 10 5 2.27 × 10 10
0.8 9.72 × 10 5 2.64 × 10 9
0.90.2 4.9 × 10 7 3.70 × 10 10
0.4 7.72 × 10 6 1.20 × 10 10
0.6 3.83 × 10 5 1.78 × 10 10
0.8 1.18 × 10 4 1.67 × 10 9
Table 3. The AE of Example 1 at α = 0.65.
η = t ε = 0.3 CPU Time ε = 0.6 CPU Time ε = 0.9 CPU Time
0.1 4.97520 × 10 11 4.79537 × 10 11 4.62389 × 10 11
0.2 9.34854 × 10 11 8.96371 × 10 11 8.60031 × 10 11
0.3 1.09946 × 10 10 1.0548 × 10 10 1.01279 × 10 10
0.4 1.21079 × 10 10 1.16213 × 10 10 1.1163 × 10 10
0.5 8.43561 × 10 11 24.875 8.06663 × 10 11 25.827 7.72103 × 10 11 26.189
0.6 1.26159 × 10 11 1.32163 × 10 11 1.37233 × 10 11
0.7 4.20313 × 10 10 4.14684 × 10 10 4.09208 × 10 10
0.8 1.26053 × 10 9 1.24131 × 10 9 1.22274 × 10 9
0.9 1.71685 × 10 9 1.68629 × 10 9 1.65686 × 10 9
Table 4. Comparison of AEs of Example 2 at α = 1, ε = 1.
(η, t) | Method in [33] | Method in [32] | Method in [31] | Proposed method at K = 11 | Our CPU time
(0.001, 0.001) 1.5 × 10 3 1.0 × 10 9 4.1 × 10 10 2.4 × 10 13 33.891
(0.002, 0.002) 3.0 × 10 3 8.7 × 10 11 5.2 × 10 10 3.1 × 10 14
(0.003, 0.003) 4.5 × 10 3 1.3 × 10 9 5.5 × 10 10 2.9 × 10 12
(0.004, 0.004) 6.0 × 10 3 3.3 × 10 9 5.3 × 10 10 1.0 × 10 11
(0.005, 0.005) 7.5 × 10 3 4.5 × 10 9 4.9 × 10 10 2.5 × 10 11
(0.006, 0.006) 9.1 × 10 3 5.8 × 10 9 4.4 × 10 10 4.5 × 10 11
(0.007, 0.007) 1.0 × 10 2 1.1 × 10 8 3.9 × 10 10 7.1 × 10 11
(0.008, 0.008) 1.2 × 10 2 1.2 × 10 8 3.4 × 10 10 1.0 × 10 10
(0.009, 0.009) 1.3 × 10 2 6.5 × 10 9 3.0 × 10 10 1.3 × 10 10
(0.01, 0.01) 1.5 × 10 2 1.4 × 10 9 2.7 × 10 10 1.5 × 10 10
Table 5. Comparison of AEs of Example 2 at α = 0.5, ε = 0.45.
(η, t) | Method in [33] | Method in [32] | Method in [31] | Proposed method at K = 7 | Our CPU time
(0.001, 0.001) 2.8 × 10 2 8.7 × 10 10 5.7 × 10 13 1.0 × 10 13 37.139
(0.002, 0.002) 4.1 × 10 2 4.3 × 10 10 7.6 × 10 13 4.1 × 10 13
(0.003, 0.003) 5.3 × 10 2 1.4 × 10 8 1.2 × 10 12 9.0 × 10 13
(0.004, 0.004) 6.2 × 10 2 6.6 × 10 8 1.8 × 10 12 1.5 × 10 12
(0.005, 0.005) 6.9 × 10 2 1.8 × 10 9 2.6 × 10 12 2.3 × 10 12
(0.006, 0.006) 8.0 × 10 2 4.1 × 10 9 3.6 × 10 12 3.2 × 10 12
(0.007, 0.007) 8.7 × 10 2 2.0 × 10 9 4.6 × 10 12 4.2 × 10 12
(0.008, 0.008) 9.4 × 10 2 3.5 × 10 10 5.8 × 10 12 5.3 × 10 12
(0.009, 0.009) 1.0 × 10 2 7.4 × 10 10 7.1 × 10 12 6.5 × 10 12
(0.01, 0.01) 1.1 × 10 2 1.0 × 10 9 8.4 × 10 12 7.7 × 10 12
Table 6. L∞ errors of Example 2.
t | Method in [32]: (α = 1, ε = 1), (α = 0.5, ε = 0.45) | Proposed method: (α = 1, ε = 1, K = 11), (α = 0.5, ε = 0.45, K = 7)
0.02 2.00 × 10 8 6.58 × 10 10 3.10 × 10 9 3.23 × 10 11
0.04 1.23 × 10 8 4.18 × 10 10 2.31 × 10 9 5.12 × 10 11
0.06 9.86 × 10 8 4.33 × 10 10 1.66 × 10 9 6.26 × 10 11
0.08 2.40 × 10 9 5.09 × 10 10 1.31 × 10 9 7.00 × 10 11
0.11 1.26 × 10 9 2.45 × 10 10 9.66 × 10 10 7.60 × 10 11
Table 7. L∞ errors of Example 3.
t | Method in [32]: α = 0.25, α = 0.5, α = 0.75, α = 1 | Proposed technique at K = 7: α = 0.25, α = 0.5, α = 0.75, α = 1
0.2 9.50 × 10 4 6.34 × 10 4 2.56 × 10 4 8.71 × 10 6 1.56 × 10 9 2.61 × 10 9 4.20 × 10 9 6.468 × 10 9
0.4 7.42 × 10 4 7.79 × 10 4 4.51 × 10 4 3.39 × 10 5 7.31 × 10 9 1.14 × 10 8 1.73 × 10 8 2.55 × 10 8
0.6 5.77 × 10 4 5.47 × 10 4 4.56 × 10 4 6.51 × 10 5 2.66 × 10 8 4.04 × 10 8 5.97 × 10 8 8.58 × 10 8
0.8 3.26 × 10 4 6.66 × 10 5 1.64 × 10 4 2.61 × 10 4 8.37 × 10 8 1.23 × 10 7 1.78 × 10 7 2.51 × 10 7
1 1.17 × 10 6 3.61 × 10 6 4.25 × 10 6 3.06 × 10 6 2.34 × 10 7 3.34 × 10 7 4.70 × 10 7 6.55 × 10 7
Table 8. The AE of Example 3 at α = 0.8.
η = t ε = 0.3 CPU Time ε = 0.6 CPU Time ε = 0.9 CPU Time
0.1 1.33026 × 10 10 1.29237 × 10 10 1.25581 × 10 10
0.2 7.95432 × 10 10 7.7033 × 10 10 7.46275 × 10 10
0.3 2.78714 × 10 9 2.70253 × 10 9 2.62156 × 10 9
0.4 8.16272 × 10 9 7.938 × 10 9 7.72269 × 10 9
0.5 2.12029 × 10 8 83.03 2.06932 × 10 8 90.893 2.0204 × 10 8 84.86
0.6 5.00448 × 10 8 4.90351 × 10 8 4.80639 × 10 8
0.7 1.05611 × 10 7 1.03882 × 10 7 1.02214 × 10 7
0.8 1.88777 × 10 7 1.86351 × 10 7 1.84007 × 10 7
0.9 2.41631 × 10 7 2.39296 × 10 7 2.37035 × 10 7
Table 9. The AE of Example 3 at K = 7.
η = t α = 0.01 , ε = 0.01 CPU Time α = 0.02 , ε = 0.02 CPU Time α = 0.03 , ε = 0.001 CPU Time
0.1 3.67187 × 10 11 3.75785 × 10 11 3.86181 × 10 11
0.2 2.34813 × 10 11 2.39402 × 10 10 2.45066 × 10 10
0.3 8.62635 × 10 10 8.78073 × 10 10 8.97186 × 10 10
0.4 2.52802 × 10 9 2.57116 × 10 9 2.62409 × 10 9
0.5 6.49957 × 10 9 26.094 6.60822 × 10 9 25.968 6.73914 × 10 9 26.047
0.6 1.5112 × 10 8 1.53617 × 10 8 1.56557 × 10 8
0.7 3.16022 × 10 8 3.21185 × 10 8 3.27112 × 10 8
0.8 5.66429 × 10 8 5.75556 × 10 8 5.85777 × 10 8
0.9 7.33268 × 10 8 7.44858 × 10 8 7.57545 × 10 8
Table 10. The ARE for Example 4 at α = 0.9 and ϵ = 0.01.
η = t K = 7 K = 9 K = 11 K = 14
0.1 3.58511 × 10 7 5.8678 × 10 7 1.09412 × 10 8 7.62746 × 10 8
0.2 1.30324 × 10 7 8.73372 × 10 7 3.20856 × 10 7 4.43158 × 10 8
0.3 1.98591 × 10 8 9.20873 × 10 7 3.31491 × 10 7 1.11666 × 10 7
0.4 1.98839 × 10 6 2.48025 × 10 7 1.3223 × 10 7 9.93006 × 10 8
0.5 7.45931 × 10 7 5.6552 × 10 6 1.9082 × 10 7 2.12828 × 10 8
0.6 1.09483 × 10 5 2.21417 × 10 6 3.76167 × 10 7 1.56756 × 10 7
0.7 1.35793 × 10 5 6.06985 × 10 6 1.45322 × 10 6 4.49382 × 10 7
0.8 4.74879 × 10 6 9.235 × 10 6 2.70249 × 10 6 4.10856 × 10 7
0.9 2.6111 × 10 5 1.55784 × 10 5 2.41674 × 10 7 1.93747 × 10 6
Table 11. The AE for Example 5 at α = 0.5 and K = 9.
η | t = 0.2 | t = 0.4 | t = 0.6 | t = 0.8
0.1 1.67673 × 10 7 1.44428 × 10 7 2.92782 × 10 7 6.62384 × 10 7
0.2 2.72581 × 10 7 2.26141 × 10 7 4.74903 × 10 7 1.03788 × 10 6
0.3 3.21423 × 10 7 2.56773 × 10 7 5.58722 × 10 7 1.17977 × 10 6
0.4 3.23482 × 10 7 2.48188 × 10 7 5.60807 × 10 7 1.14174 × 10 6
0.5 2.89911 × 10 7 2.12231 × 10 7 5.01144 × 10 7 9.7832 × 10 7
0.6 2.32888 × 10 7 1.60574 × 10 7 4.01155 × 10 7 7.42428 × 10 7
0.7 1.64676 × 10 7 1.04229 × 10 7 2.82315 × 10 6 4.84053 × 10 7
0.8 9.67461 × 10 8 5.30397 × 10 8 1.65031 × 10 6 2.49294 × 10 7
0.9 3.91504 × 10 8 1.5833 × 10 8 6.63186 × 10 8 7.69010 × 10 8