Article

Efficient Spectral Galerkin and Collocation Approaches Using Telephone Polynomials for Solving Some Models of Differential Equations with Convergence Analysis

by Ramy Mahmoud Hafez 1, Hany Mostafa Ahmed 2, Omar Mazen Alqubori 3, Amr Kamel Amin 4 and Waleed Mohamed Abd-Elhameed 5,*
1 Department of Mathematics, Faculty of Education, Matrouh University, Cairo 51511, Egypt
2 Department of Mathematics, Faculty of Technology and Education, Helwan University, Cairo 11281, Egypt
3 Department of Mathematics and Statistics, College of Science, University of Jeddah, Jeddah 23831, Saudi Arabia
4 Department of Mathematics, Adham University College, Umm Al-Qura University, Makkah 28653, Saudi Arabia
5 Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(6), 918; https://doi.org/10.3390/math13060918
Submission received: 9 February 2025 / Revised: 3 March 2025 / Accepted: 6 March 2025 / Published: 10 March 2025
(This article belongs to the Special Issue Numerical Methods Applied to Mathematical Problems)

Abstract: This study presents Galerkin and collocation algorithms based on Telephone polynomials (TelPs) for effectively solving high-order linear and non-linear ordinary differential equations (ODEs) and ODE systems, including those with homogeneous and nonhomogeneous initial conditions (ICs). The suggested approach also handles partial differential equations (PDEs), emphasizing hyperbolic PDEs. The primary contribution is to use suitable combinations of the TelPs, which significantly streamlines the numerical implementation. A comprehensive study has been conducted on the convergence of the utilized telephone expansions. Compared to the current spectral approaches, the proposed algorithms exhibit greater accuracy and convergence, as demonstrated by several illustrative examples that prove their applicability and efficiency.

1. Introduction

Polynomials and special functions are crucial in various fields because of their several uses in applied sciences; see, for instance, ref. [1,2,3]. Many theoretical publications have been devoted to the different polynomial sequences. The authors of [4] explored several polynomial sequences and offered a few practical uses for their findings. In ref. [5], the authors investigated different polynomial sequences. In ref. [6], degenerated polynomials of the Apostol type were introduced. The authors of [7] examined Sheffer polynomial sequences. Several articles also examined certain generalized polynomials. For example, the authors of [8] developed several other generalized polynomials. A significant subject in numerical analysis and the solution of various differential equations (DEs) is the utilization of distinct sequences. A great deal of effort has been focused on this area. For example, the authors of [9,10] used specific polynomials to solve some DEs, which are generalizations of the first and second kinds of Chebyshev polynomials. The generalized Bessel polynomials and the generalized Jacobi polynomials were used to address multi-order fractional DEs in [11,12]. The authors of ref. [13] employed a set of generalized Chebyshev polynomials to solve the telegraph DEs.
High-order DEs are central to numerous applications in science and engineering, including structural mechanics, fluid dynamics, and electromagnetic theory. Over the past few decades, significant advancements have been made in developing efficient numerical methods to address these complex problems. Among the most notable techniques are spectral methods, which have demonstrated superior convergence properties compared to traditional finite difference and finite element methods, especially for problems with smooth solutions [14,15,16].
Recent advancements in machine learning have shown promising applications in solving ODEs and PDEs. Various studies have successfully implemented machine learning techniques, providing innovative solutions and insights into complex dynamics. For instance, the work by Qiu et al. [17] explored novel machine learning frameworks for ODEs, while Zhang et al. [18] demonstrated effective methodologies for PDEs. Additionally, the study by Cuomo et al. [19] highlighted the evolving role of machine learning in enhancing computational efficiency and accuracy in this domain. These contributions collectively illustrate the dynamic development of machine learning approaches in addressing differential equations and underscore the potential for further exploration in our research.
In addition to the emerging techniques in machine learning and operational calculus, it is crucial to recognize traditional numerical methods that have been extensively employed in the approximation of differential equations. For instance, Runge–Kutta methods provide a robust framework for solving ordinary differential equations, offering varying degrees of accuracy depending on the specific formulation used. Similarly, differential transformation methods have gained attention for their efficiency and ease of implementation in obtaining numerical solutions. These classical approaches have been explored in literature, as highlighted in recent studies (e.g., [19,20]), and will be discussed further to provide a comprehensive overview of the landscape of numerical techniques applicable to differential equations.
Heat transfer, wave propagation, fluid dynamics, and many more areas of physics and engineering extensively use PDEs. In addition, biology uses them to model population dynamics and disease transmission, while economics uses them in option pricing. PDEs are also vital in medical imaging. The geosciences utilize them for weather prediction and groundwater movement simulation, while the material sciences employ them for chemical reaction and diffusion studies. For some applications of different PDEs, see [21,22]. Several numerical techniques exist for solving different PDEs. For example, the authors of [23] followed a variational quantum algorithm for handling PDEs. In [24], some studies of an inverse source problem for the linear parabolic equation were performed.
Spectral methods use global basis functions, such as orthogonal polynomials or trigonometric functions, to approximate the solution to DEs. These methods can be classified into various types (see, for example, [25,26,27,28]). The main spectral methods are collocation, tau, and Galerkin. The collocation method relies on choosing suitable nodes in the domain and, after that, satisfying the equations at these points. Many authors use the collocation method because it is easy to implement and can handle various DEs. For example, the authors of [29] applied a wavelet collocation approach to treat the Benjamin–Bona–Mahony equation. Another collocation approach using third-kind Chebyshev polynomials was followed in [30] to treat the nonlinear fractional Duffing equation. In [31], the authors applied a collocation procedure to specific high-order DEs. The collocation method treated inverse heat equations in [32]. Some other contributions regarding collocation methods can be found in [33,34,35]. The Galerkin method and its variants are also among the essential spectral methods. The main principle of applying this method is to choose basis functions that meet the underlying conditions. For example, the authors of [36] applied a Jacobi Galerkin algorithm for two-dimensional time-dependent PDEs. Some other contributions regarding the Galerkin method can be found in [37,38,39,40]. The tau method can be applied without restrictions on the choice of basis functions. In [41], the authors followed a tau approach for certain Maxwell’s equations. Another tau approach was followed in [42] to solve the two-dimensional ocean acoustic propagation in an undulating seabed environment. A tau–Gegenbauer spectral approach was followed for treating certain systems of fractional integro DEs in [43]. Some other contributions can be found in [44,45,46].
Table 1 summarizes the key publications related to Galerkin’s methods for solving DEs, including the types of equations addressed, the specific Galerkin approach used, and notable findings.
In 1800, Heinrich Rothe introduced the Telephone numbers (TelNs) while counting the involutions of a set of n elements. In a telephone system in which each subscriber may be connected to at most one other subscriber, the sequence of TelNs can alternatively be seen as the number of feasible ways to make connections between the n subscribers. In graph theory, the number of matchings in a complete graph on n vertices is another interpretation of the TelNs. In this paper, we will develop a generalization of the TelNs to build a sequence of polynomials, namely the TelPs. We will employ these polynomials to solve some ODEs and PDEs.
In this study, we propose a spectral Galerkin approach for solving some models of DEs. We utilize the TelPs as basis functions. We will propose suitable basis functions for handling some models of DEs. The convergence analysis for the used expansions will be discussed.
The structure of this paper is organized as follows: Section 2 gives an overview of the mathematical properties of TelPs and develops an inversion formula, which is essential for analyzing the error analysis of the used expansion. Section 3 describes the Galerkin formulation for the high-order linear IVPs, while Section 4 proposes a collocation algorithm for treating the non-linear IVPs. Section 5 extends the method to systems of DEs, while Section 6 discusses the application to hyperbolic PDEs. In Section 7, a comprehensive investigation of the convergence and error analysis is performed. Numerical results presented in Section 8 validate the efficiency and accuracy of the presented methods, with concluding remarks and future research directions in Section 9.

2. An Overview of TelNs and Their Corresponding Polynomials

In mathematics, the involution numbers, sometimes called TelNs, are a series of integers that count the possible connections between n individuals through direct telephone calls. These numbers can be generated using the following recursive formula [53]:
$$T_n = T_{n-1} + (n-1)\,T_{n-2}, \qquad T_0 = T_1 = 1.$$
These numbers can be represented explicitly as
$$T_n = \sum_{\ell=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^{\ell}\,\ell!\,(n-2\ell)!}.$$
Now, we define a sequence of polynomials $T_n(z)$ that generalize the numbers $T_n$. We define $T_n(z)$ of degree $n$ as follows:
$$T_n(z) = \sum_{\ell=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^{\ell}\,\ell!\,(n-2\ell)!}\, z^{n-2\ell}.$$
From Formula (2), it can be verified that the recurrence relation given by
$$T_n(z) = z\,T_{n-1}(z) + (n-1)\,T_{n-2}(z), \qquad T_0(z) = 1, \quad T_1(z) = z,$$
is satisfied by the TelPs expressed in (2). Now, we give the inversion formula of Formula (2) in the following theorem.
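The following sketch (illustrative, not part of the paper) builds the coefficient list of $T_n(z)$ from the recurrence (3) and checks it against the explicit sum (2); the function names are ours.

```python
from math import factorial

def telp_coeffs(n):
    """Coefficients of T_n(z) in ascending powers of z, generated by the recurrence (3)."""
    prev, curr = [1], [0, 1]                      # T_0(z) = 1, T_1(z) = z
    if n == 0:
        return prev
    for k in range(2, n + 1):
        shifted = [0] + curr                      # z * T_{k-1}(z)
        lower = [(k - 1) * c for c in prev]       # (k - 1) * T_{k-2}(z)
        curr, prev = [a + (lower[i] if i < len(lower) else 0)
                      for i, a in enumerate(shifted)], curr
    return curr

def telp_coeff_explicit(n, power):
    """Coefficient of z^power in T_n(z) read off the explicit formula (2)."""
    if power < 0 or power > n or (n - power) % 2:
        return 0
    ell = (n - power) // 2
    return factorial(n) // (2**ell * factorial(ell) * factorial(power))

for n in range(8):
    assert telp_coeffs(n) == [telp_coeff_explicit(n, p) for p in range(n + 1)]
print(telp_coeffs(4))   # [3, 0, 6, 0, 1]: T_4(z) = z^4 + 6 z^2 + 3, so T_4(1) = 10 = T_4
```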
Theorem 1.
For every non-negative integer $i$, the following formula holds:
$$z^{i} = \sum_{r=0}^{\lfloor i/2 \rfloor} \frac{(-1)^{r}}{2^{r}}\, \frac{(1 + i - 2r)_{2r}}{r!}\, T_{i-2r}(z),$$
where $(A)_r$ is the Pochhammer symbol defined as $(A)_r = \frac{\Gamma(A+r)}{\Gamma(A)}$.
Proof. 
We will proceed by induction. The formula holds for i = 0 . Assume that Formula (4) is valid, and we have to prove the validity of the following identity:
$$z^{i+1} = \sum_{r=0}^{\lfloor (i+1)/2 \rfloor} \frac{(-1)^{r}}{2^{r}}\, \frac{(2 + i - 2r)_{2r}}{r!}\, T_{i-2r+1}(z).$$
Now, making use of the inductive hypothesis, we can write
$$z^{i+1} = \sum_{r=0}^{\lfloor i/2 \rfloor} \frac{(-1)^{r}}{2^{r}}\, \frac{(1 + i - 2r)_{2r}}{r!}\, z\,T_{i-2r}(z).$$
Making use of the recurrence relation (3) in the following form:
$$z\,T_n(z) = T_{n+1}(z) - n\,T_{n-1}(z),$$
the following formula can be obtained:
$$z^{i+1} = \sum_{r=0}^{\lfloor i/2 \rfloor} \frac{(-1)^{r}}{2^{r}}\, \frac{(1 + i - 2r)_{2r}}{r!}\, \bigl( T_{i-2r+1}(z) - (i - 2r)\,T_{i-2r-1}(z) \bigr).$$
Some algebraic computations lead to Formula (5). □
Remark 1.
The inversion Formula (4) can be written in the following alternative form:
$$z^{i} = \sum_{k=0}^{i} d_{i,k}\, T_k(z),$$
where
$$d_{i,k} = \begin{cases} \dfrac{i!\,\left( -\tfrac{1}{2} \right)^{\frac{i-k}{2}}}{k!\,\left( \tfrac{i-k}{2} \right)!}, & (i-k)\ \text{even}, \\[2ex] 0, & \text{otherwise}. \end{cases}$$
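As a quick illustrative check (ours, not the authors'), the expansion (9)–(10) can be verified numerically: expanding $z^i$ in the TelP basis with the coefficients $d_{i,k}$, reconstructed above with the $(-1/2)$ sign, reproduces the monomial.

```python
import numpy as np
from math import factorial

def telp_value(n, z):
    """Evaluate T_n(z) by the recurrence (3)."""
    if n == 0:
        return np.ones_like(z)
    prev, curr = np.ones_like(z), z.copy()
    for k in range(2, n + 1):
        prev, curr = curr, z * curr + (k - 1) * prev
    return curr

def d(i, k):
    """Connection coefficient d_{i,k} of (10) (sign convention as reconstructed above)."""
    if (i - k) % 2:
        return 0.0
    r = (i - k) // 2
    return factorial(i) * (-0.5) ** r / (factorial(k) * factorial(r))

z = np.linspace(0.0, 1.0, 7)
for i in range(6):
    recon = sum(d(i, k) * telp_value(k, z) for k in range(i + 1))
    assert np.allclose(recon, z**i)
print("inversion formula verified for i = 0,...,5")
```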

3. Treatment of High-Order Initial Value Problems

In this section, we focus on applying the Telephone Galerkin method (TGM) to solve linear higher-order ODEs, both with homogeneous and non-homogeneous ICs.

3.1. Homogeneous ICs

We are interested in handling the following linear higher-order ODEs:
$$U^{(m)}(z) + \sum_{i=0}^{m-1} \lambda_i\, U^{(i)}(z) = f(z), \qquad z \in [0,1],$$
controlled by the following homogeneous ICs:
$$U^{(i)}(0) = 0, \qquad i = 0, 1, \ldots, m-1,$$
where $\lambda_0, \lambda_1, \ldots, \lambda_{m-1}$ are constants. If we define the following spaces:
$$S_N = \operatorname{span}\{ T_0(z), T_1(z), T_2(z), \ldots, T_N(z) \}, \qquad \Phi_N = \{ \phi(z) \in S_N : \phi^{(i)}(0) = 0, \ i = 0, 1, \ldots, m-1 \},$$
then the TGM approximation to Equations (11) and (12) is to find $U_N(z) \in \Phi_N$ such that
$$\bigl( U_N^{(m)}(z), \phi_j(z) \bigr) + \sum_{i=0}^{m-1} \lambda_i\, \bigl( U_N^{(i)}(z), \phi_j(z) \bigr) = \bigl( f(z), \phi_j(z) \bigr), \qquad 0 \le j \le N,$$
where the inner product in the space $L^2(0,1)$ is $\bigl( U(z), V(z) \bigr) = \int_0^1 U(z)\, V(z)\, dz$. To meet the homogeneous ICs, we choose suitable basis functions in the form
$$\phi_i(z) = z^{m}\, T_i(z), \qquad z \in [0,1].$$
Now, if we consider the following approximate solution $U_N(z)$ for (11) and (12):
$$U_N(z) \approx \sum_{i=0}^{N} c_i\, \phi_i(z), \qquad z \in [0,1],$$
then the Galerkin formulation for (11) and (12) is given by
$$\sum_{i=0}^{N} c_i \left[ \bigl( \phi_i^{(m)}(z), \phi_j(z) \bigr) + \sum_{l=0}^{m-1} \lambda_l\, \bigl( \phi_i^{(l)}(z), \phi_j(z) \bigr) \right] = \bigl( f(z), \phi_j(z) \bigr).$$
Let us denote
$$A = \bigl( a_{ij} \bigr)_{0 \le i,j \le N}, \quad a_{ij} = \bigl( \phi_i^{(m)}(z), \phi_j(z) \bigr), \qquad B^{l} = \bigl( b_{ij}^{l} \bigr)_{0 \le i,j \le N}, \quad b_{ij}^{l} = \bigl( \phi_i^{(l)}(z), \phi_j(z) \bigr), \quad 0 \le l \le m-1,$$
$$F = (f_0, f_1, \ldots, f_N)^{T}, \qquad f_j = \bigl( f(z), \phi_j(z) \bigr).$$
Then, (17) can be written equivalently in the following form:
$$\left( A + \sum_{l=0}^{m-1} \lambda_l\, B^{l} \right) C = F,$$
in which the vector of unknowns is $C = (c_0, c_1, \ldots, c_N)^{T}$. The following theorem presents the nonzero elements of the matrices $A$ and $B^{l}$ $(0 \le l \le m-1)$.
Theorem 2.
Consider the basis functions $\phi_i(z)$ in (15). Setting $a_{ij} = \bigl( \phi_i^{(m)}(z), \phi_j(z) \bigr)$ and $b_{ij}^{l} = \bigl( \phi_i^{(l)}(z), \phi_j(z) \bigr)$, $0 \le l \le m-1$, the nonzero elements of the two matrices $A$ and $B^{l}$ $(0 \le l \le m-1)$ can be written, respectively, in the following explicit forms:
$$a_{ij} = \sum_{k=0}^{\lfloor i/2 \rfloor} \sum_{s=0}^{\lfloor j/2 \rfloor} \frac{i!\, j!\, (i - 2k + m)!}{2^{k+s}\, k!\, s!\, \bigl( (i-2k)! \bigr)^{2}\, (j-2s)!\, (i + j - 2k - 2s + m + 1)},$$
$$b_{ij}^{l} = \sum_{k=0}^{\lfloor i/2 \rfloor} \sum_{s=0}^{\lfloor j/2 \rfloor} \frac{i!\, j!\, (i - 2k + m)!}{2^{k+s}\, k!\, s!\, (i-2k)!\, (j-2s)!\, (i - 2k + m - l)!\, (i + j - 2k - 2s + 2m - l + 1)}.$$
Proof. 
The basis functions ϕ i ( z ) are chosen such that the ICs (12) are satisfied. Now, we prove (21). Using (15), we have
$$\phi_i(z) = z^{m} \sum_{k=0}^{\lfloor i/2 \rfloor} \frac{i!}{2^{k}\, k!\, (i-2k)!}\, z^{i-2k} = \sum_{k=0}^{\lfloor i/2 \rfloor} \frac{i!}{2^{k}\, k!\, (i-2k)!}\, z^{i-2k+m}.$$
By differentiating both sides of (22) $l$ times with respect to $z$, we obtain
$$\phi_i^{(l)}(z) = \sum_{k=0}^{\lfloor i/2 \rfloor} \frac{i!\, (i-2k+m)!}{2^{k}\, k!\, (i-2k)!\, (i-2k+m-l)!}\, z^{i-2k+m-l},$$
and making use of the scalar inner product enables one to obtain
$$b_{ij}^{l} = \sum_{k=0}^{\lfloor i/2 \rfloor} \sum_{s=0}^{\lfloor j/2 \rfloor} \frac{i!\, j!\, (i-2k+m)!}{2^{k+s}\, k!\, s!\, (i-2k)!\, (j-2s)!\, (i-2k+m-l)!} \int_0^1 z^{\,i+j-2k-2s+2m-l}\, dz,$$
which leads to the following formula:
$$b_{ij}^{l} = \sum_{k=0}^{\lfloor i/2 \rfloor} \sum_{s=0}^{\lfloor j/2 \rfloor} \frac{i!\, j!\, (i-2k+m)!}{2^{k+s}\, k!\, s!\, (i-2k)!\, (i-2k+m-l)!\, (j-2s)!\, (i + j - 2k - 2s + 2m - l + 1)},$$
which proves (21).
If we replace l in (21) with m, then we obtain (20). □
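As a concrete illustration of Section 3.1 (our sketch, not the authors' code), the following Python snippet solves $U'' + U = -z$ on $[0,1]$ with $U(0) = U'(0) = 0$, whose exact solution is $U(z) = \sin z - z$. For brevity the inner products are evaluated by Gauss–Legendre quadrature rather than the closed forms of Theorem 2.

```python
import numpy as np
from numpy.polynomial import Polynomial as P
from numpy.polynomial.legendre import leggauss

N, m = 10, 2
lam = [1.0, 0.0]                                  # lambda_0 = 1, lambda_1 = 0: U'' + U = f
x, w = leggauss(60)
z, w = 0.5 * (x + 1.0), 0.5 * w                   # Gauss-Legendre rule mapped to [0,1]

# TelPs via the recurrence (3), then the basis phi_i(z) = z^m T_i(z) of (15).
T = [P([1.0]), P([0.0, 1.0])]
for k in range(2, N + 1):
    T.append(P([0.0, 1.0]) * T[k - 1] + (k - 1) * T[k - 2])
phi = [P([0.0] * m + [1.0]) * T[i] for i in range(N + 1)]

f = lambda t: -t
A = np.zeros((N + 1, N + 1))
F = np.zeros(N + 1)
for j in range(N + 1):
    F[j] = np.sum(w * f(z) * phi[j](z))
    for i in range(N + 1):
        op = phi[i].deriv(m) + sum(lam[l] * (phi[i] if l == 0 else phi[i].deriv(l))
                                   for l in range(m))
        A[j, i] = np.sum(w * op(z) * phi[j](z))
c = np.linalg.solve(A, F)
U_N = sum(c[i] * phi[i](z) for i in range(N + 1))
print("max nodal error:", np.max(np.abs(U_N - (np.sin(z) - z))))
```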

3.2. Treatment of Nonhomogeneous ICs

In this section, we show how to deal with Equation (11) governed by the following non-homogeneous ICs:
$$U^{(i)}(0) = \kappa_i, \qquad i = 0, 1, \ldots, m-1,$$
where $\kappa_i$ $(i = 0, 1, \ldots, m-1)$ are real constants. In such a case, we make use of the transformation
$$V(z) = U(z) - \sum_{r=0}^{m-1} \frac{\kappa_r}{r!}\, z^{r}.$$
It follows that Equation (11), governed by the above non-homogeneous ICs, turns into the following equation:
$$V^{(m)}(z) + \sum_{i=0}^{m-1} \lambda_i\, V^{(i)}(z) = f^{*}(z), \qquad z \in [0,1],$$
governed by the following homogeneous conditions:
$$V^{(i)}(0) = 0, \qquad i = 0, 1, \ldots, m-1,$$
where
$$f^{*}(z) = f(z) - \sum_{i=0}^{m-1} \sum_{r=i}^{m-1} \frac{\lambda_i\, \kappa_r}{(r-i)!}\, z^{r-i}.$$
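The following short symbolic sketch (with illustrative values for $m$, $\lambda_i$, $\kappa_i$, and $f$ that are our own choices) carries out this shift and produces the modified right-hand side $f^{*}(z)$.

```python
import sympy as sp

z = sp.symbols('z')
m = 2
lam = [2, -3]                                  # lambda_0, lambda_1 (illustrative)
kappa = [1, 4]                                 # U(0) = 1, U'(0) = 4 (illustrative)
f = sp.exp(z)                                  # original right-hand side (illustrative)

# Subtract the polynomial built from the kappa_r so that V = U - shift has homogeneous ICs.
shift = sum(kappa[r] / sp.factorial(r) * z**r for r in range(m))
f_star = sp.expand(f - sum(lam[i] * sp.diff(shift, z, i) for i in range(m)))
print(f_star)                                  # exp(z) - 8*z + 10 for the values above
```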

4. Treatment of the High-Order Non-Linear IVPs

This section is confined to the numerical treatment of the high-order non-linear IVPs. The typical collocation method, together with the operational matrix of derivatives, will be utilized.

4.1. The Operational Matrix of Derivatives of the TelPs

Here, we establish the operational matrix of derivatives of the TelPs that will be used to solve the non-linear IVPs. The following theorem will be the key to developing the operational matrix.
Theorem 3.
The first derivative of T j ( z ) is given by
$$D\,T_j(z) = j\, T_{j-1}(z), \qquad j \ge 1.$$
Proof. 
Based on the power form representation in (2), we can write
$$D\,T_j(z) = j! \sum_{r=0}^{\lfloor (j-1)/2 \rfloor} \frac{(j - 2r)}{2^{r}\, (j-2r)!\, r!}\, z^{\,j-2r-1}.$$
If we insert the inversion Formula (4) into (31), then the following formula can be obtained:
$$D\,T_j(z) = j! \sum_{r=0}^{\lfloor (j-1)/2 \rfloor} \frac{(j - 2r)}{2^{r}\, (j-2r)!\, r!} \sum_{s=0}^{\lfloor (j-1)/2 \rfloor - r} \frac{(-1)^{s}}{2^{s}}\, \frac{\bigl( j - 2(r+s) \bigr)_{2s}}{s!}\, T_{j-2r-2s-1}(z),$$
which can be written alternatively as
$$D\,T_j(z) = j! \sum_{p=0}^{\lfloor (j-1)/2 \rfloor} \frac{1}{2^{p}\, (j-2p-1)!} \sum_{r=0}^{p} \frac{(-1)^{p-r}}{(p-r)!\, r!}\, T_{j-2p-1}(z).$$
Using the following identity:
$$\sum_{r=0}^{p} \frac{(-1)^{p-r}}{(p-r)!\, r!} = \begin{cases} 1, & p = 0, \\ 0, & p > 0, \end{cases}$$
Formula (33) reduces to the following simplified formula:
$$D\,T_j(z) = j\, T_{j-1}(z), \qquad j \ge 1.$$
This ends the proof. □
If we consider the following vector:
$$\Psi(z) = \bigl[ T_0(z), T_1(z), \ldots, T_N(z) \bigr]^{T},$$
then it is easy to see that
$$\frac{d\Psi}{dz} = S\, \Psi,$$
where $S$ is the operational matrix of derivatives with the following entries:
$$s_{ij} = \begin{cases} j + 1, & j = i - 1, \\ 0, & \text{otherwise}. \end{cases}$$
For example, the matrix $S$ for $N = 5$ is
$$S = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 5 & 0 \end{pmatrix}.$$
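A quick numerical check (illustrative) that this matrix reproduces $D\,T_j(z) = j\,T_{j-1}(z)$:

```python
import numpy as np

N = 5
S = np.zeros((N + 1, N + 1))
for i in range(1, N + 1):
    S[i, i - 1] = i                               # the only nonzero entries: s_{i,i-1} = i

def telp(n, t):
    """Evaluate T_n at the points t via the recurrence (3)."""
    prev, curr = np.ones_like(t), t.copy()
    if n == 0:
        return prev
    for k in range(2, n + 1):
        prev, curr = curr, t * curr + (k - 1) * prev
    return curr

z = np.linspace(0.0, 1.0, 5)
Psi = np.array([telp(n, z) for n in range(N + 1)])                           # T_0,...,T_N at the nodes
dPsi = np.array([n * telp(n - 1, z) if n else 0 * z for n in range(N + 1)])  # exact derivatives
assert np.allclose(S @ Psi, dPsi)
print("dPsi/dz = S Psi verified at the sample points")
```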

4.2. Treatment of the High-Order Non-Linear IVPs

We aim to solve the following nonlinear IVPs numerically:
$$u^{(m)}(z) = F\bigl( z, u(z), u'(z), \ldots, u^{(m-1)}(z) \bigr), \qquad z \in (0,1), \quad m \ge 1,$$
governed by the following ICs:
$$u^{(\ell)}(0) = 0, \qquad 0 \le \ell \le m-1.$$
Now, assume a polynomial approximation in terms of the TelPs as
$$u(z) \approx u_N(z) = \sum_{i=0}^{N} c_i\, T_i(z),$$
which can be expressed as
$$u_N(z) = \sum_{k=0}^{N} c_k\, T_k(z) = C^{T}\, \Psi(z),$$
where
$$C^{T} = [c_0, c_1, \ldots, c_N],$$
and $\Psi(z)$ is defined in (34).
Now, using the operational matrix $S$, the $\ell$th derivative of $u_N(z)$ can be expressed in the following form:
$$D^{\ell} u_N(z) = C^{T} S^{\ell}\, \Psi(z),$$
and accordingly, the residual of Equation (36) has the following form:
$$R(z) = C^{T} S^{m}\, \Psi(z) - F\bigl( z, C^{T} \Psi(z), C^{T} S\, \Psi(z), \ldots, C^{T} S^{m-1}\, \Psi(z) \bigr).$$
Now, to apply the collocation method to solve Equation (36), we enforce the residual to be zero at suitably selected points $z_i$. We choose them from among the $(N+1)$ distinct zeros of the shifted Legendre polynomial $P^{*}_{N+1}(z)$ on $[0,1]$. In such a case, we have
$$C^{T} S^{m}\, \Psi(z_i) = F\bigl( z_i, C^{T} \Psi(z_i), C^{T} S\, \Psi(z_i), \ldots, C^{T} S^{m-1}\, \Psi(z_i) \bigr), \qquad i = 0, 1, \ldots, N - m.$$
In addition, the ICs in (37) lead to the following $m$ equations:
$$C^{T} S^{\ell}\, \Psi(0) = 0, \qquad \ell = 0, 1, \ldots, m-1.$$
Merging the $(N+1)$ equations in (43) and (44) yields a non-linear algebraic system that can be treated using any suitable numerical algorithm, such as Newton's iterative method, and hence the solution $u_N(z)$ can be obtained.
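A compact collocation sketch (ours, for illustration only) of this procedure: solve $u'' = u^{2} + 6z - z^{6}$ on $(0,1)$ with $u(0) = u'(0) = 0$, whose exact solution $u(z) = z^{3}$ should be recovered up to round-off once $N \ge 3$; SciPy's `fsolve` stands in for Newton's method.

```python
import numpy as np
from numpy.polynomial import Polynomial as P
from numpy.polynomial.legendre import Legendre
from scipy.optimize import fsolve

N, m = 6, 2
T = [P([1.0]), P([0.0, 1.0])]
for k in range(2, N + 1):
    T.append(P([0.0, 1.0]) * T[k - 1] + (k - 1) * T[k - 2])

S = np.zeros((N + 1, N + 1))
for i in range(1, N + 1):
    S[i, i - 1] = i                                            # operational matrix of derivatives

def Psi(t):
    return np.array([Tk(t) for Tk in T])                       # [T_0(t), ..., T_N(t)]^T

# Collocation nodes: zeros of the shifted Legendre polynomial on [0,1]; keep N+1-m of them.
nodes = 0.5 * (Legendre.basis(N + 1).roots() + 1.0)[: N + 1 - m]

def residuals(c):
    res = []
    for zi in nodes:
        psi = Psi(zi)
        u = c @ psi
        d2u = c @ np.linalg.matrix_power(S, m) @ psi
        res.append(d2u - (u**2 + 6 * zi - zi**6))              # equation residual at the node
    for l in range(m):                                         # the m initial conditions
        res.append(c @ np.linalg.matrix_power(S, l) @ Psi(0.0))
    return np.array(res)

c = fsolve(residuals, np.zeros(N + 1))
zz = np.linspace(0.0, 1.0, 11)
u_N = np.array([c @ Psi(t) for t in zz])
print("max error:", np.max(np.abs(u_N - zz**3)))
```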

5. Numerical Treatment of a System of Initial Value Problems

In this section, we present another application of the TGM to solve a system of higher-order DEs with homogeneous ICs. Consider the following system of higher-order DEs:
$$\sum_{i=0}^{m} \sum_{l=1}^{s} \lambda_{r,l}^{\,i}\, U_{l}^{(i)}(z) = f_r(z), \qquad r = 1, 2, \ldots, k, \quad z \in [0,1],$$
controlled by the following homogeneous ICs:
$$U_{l}^{(i)}(0) = 0, \qquad i = 0, 1, \ldots, m-1, \quad l = 1, 2, \ldots, s.$$
Similarly to Section 3.1, we have to find $U_{l,N}(z) \in \Phi_N$, $l = 1, 2, \ldots, s$, such that
$$\sum_{i=0}^{m} \sum_{l=1}^{s} \lambda_{r,l}^{\,i}\, \bigl( U_{l,N}^{(i)}(z), \phi_j(z) \bigr) = \bigl( f_r(z), \phi_j(z) \bigr), \qquad 0 \le j \le N.$$
Now, if we approximate $U_{l,N}(z) \in \Phi_N$, $l = 1, 2, \ldots, s$, as
$$U_{l,N}(z) \approx \sum_{h=0}^{N} c_{l,h}\, \phi_h(z), \qquad z \in [0,1], \quad l = 1, 2, \ldots, s,$$
where $\phi_h(z) \in \Phi_N$ is defined as in (15), then the application of the Galerkin method leads to
$$\sum_{i=0}^{m} \sum_{l=1}^{s} \sum_{h=0}^{N} \lambda_{r,l}^{\,i}\, c_{l,h}\, \bigl( \phi_h^{(i)}(z), \phi_j(z) \bigr) = \bigl( f_r(z), \phi_j(z) \bigr), \qquad r = 1, 2, \ldots, k.$$
Then, (49) is equivalent to the system
$$\sum_{i=0}^{m} \sum_{l=1}^{s} \lambda_{r,l}^{\,i}\, B^{i}\, C_l = F_r, \qquad r = 1, 2, \ldots, k,$$
where $B^{i} = \bigl( (\phi_h^{(i)}(z), \phi_j(z)) \bigr)_{0 \le j,h \le N}$ (so that $B^{m}$ coincides with the matrix $A$ of Section 3.1),
$$C_l = (c_{l,0}, c_{l,1}, \ldots, c_{l,N})^{T},$$
and
$$F_r = (f_0^{\,r}, f_1^{\,r}, \ldots, f_N^{\,r})^{T}, \qquad f_j^{\,r} = \bigl( f_r(z), \phi_j(z) \bigr).$$
One can use any appropriate numerical solver to solve the resulting algebraic system of $s(N+1)$ equations in the unknowns $c_{l,h}$, where $1 \le l \le s$, $0 \le h \le N$.
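As a small worked illustration (our own toy example, not from the paper), the block system above can be assembled and solved directly; here $s = 2$, $m = 1$, the system is $U_1' - U_2 = 2\cos z - 1$, $U_2' + U_1 = 2\sin z$ with $U_1(0) = U_2(0) = 0$, and the exact solution is $U_1 = \sin z$, $U_2 = 1 - \cos z$.

```python
import numpy as np
from numpy.polynomial import Polynomial as P
from numpy.polynomial.legendre import leggauss

N = 8
x, w = leggauss(50)
z, w = 0.5 * (x + 1.0), 0.5 * w

T = [P([1.0]), P([0.0, 1.0])]
for k in range(2, N + 1):
    T.append(P([0.0, 1.0]) * T[k - 1] + (k - 1) * T[k - 2])
phi = [P([0.0, 1.0]) * T[h] for h in range(N + 1)]            # phi_h(z) = z T_h(z), m = 1

def gram(deriv):
    """Matrix of inner products (phi_h^{(deriv)}, phi_j) computed by quadrature."""
    G = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        for h in range(N + 1):
            trial = phi[h] if deriv == 0 else phi[h].deriv(deriv)
            G[j, h] = np.sum(w * trial(z) * phi[j](z))
    return G

B0, B1 = gram(0), gram(1)
M = np.block([[B1, -B0],                                      # equation 1 tested against phi_j
              [B0,  B1]])                                     # equation 2 tested against phi_j
F = np.concatenate([[np.sum(w * (2 * np.cos(z) - 1) * phi[j](z)) for j in range(N + 1)],
                    [np.sum(w * 2 * np.sin(z) * phi[j](z)) for j in range(N + 1)]])
coeffs = np.linalg.solve(M, F)
c1 = coeffs[: N + 1]
U1 = sum(c1[h] * phi[h](z) for h in range(N + 1))
print("max error in U1:", np.max(np.abs(U1 - np.sin(z))))
```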

6. Numerical Treatment of Hyperbolic PDEs of First-Order

In this section, we focus on applying the TGM to solve one-dimensional first-order hyperbolic PDEs governed by homogeneous and non-homogeneous ICs.

6.1. The Problem Governed by Homogeneous ICs

Consider the following one-dimensional hyperbolic PDE [54]:
$$\frac{\partial U(z,t)}{\partial t} - \tau_1\, \frac{\partial U(z,t)}{\partial z} - \tau_2\, U(z,t) = S(z,t), \qquad (z,t) \in (0,1) \times (0,1),$$
subject to the following conditions:
$$U(z,0) = 0, \qquad z \in (0,1),$$
$$U(0,t) = 0, \qquad t \in (0,1),$$
where $\tau_1$ and $\tau_2$ are constants, and $S(z,t)$ is the source function. We choose appropriate basis functions to apply the Galerkin method to (51)–(53). The TGM for (51) is to find $U_N \in W_N$ such that
$$\bigl( \partial_t U_N, v \bigr) - \tau_1\, \bigl( \partial_z U_N, v \bigr) - \tau_2\, \bigl( U_N, v \bigr) = \bigl( S, v \bigr), \qquad \forall\, v \in W_N,$$
where
$$U(z,t) \approx U_N(z,t) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} c_{ij}\, \xi_i(z)\, \xi_j(t),$$
$$W_N = \operatorname{span}\{ \xi_i(z)\, \xi_j(t) : i, j = 0, 1, \ldots, N-1 \}.$$
Now, we choose combinations of TelPs to act as basis functions. More precisely, we choose the two following sets of basis functions:
$$\xi_i(z) = z\, T_i(z),$$
$$\xi_j(t) = t\, T_j(t).$$
These choices allow us to convert (54) into
$$\bigl( \partial_t U_N,\, \xi_i(z)\, \xi_j(t) \bigr) - \tau_1\, \bigl( \partial_z U_N,\, \xi_i(z)\, \xi_j(t) \bigr) - \tau_2\, \bigl( U_N,\, \xi_i(z)\, \xi_j(t) \bigr) = \bigl( S(z,t),\, \xi_i(z)\, \xi_j(t) \bigr), \qquad i, j = 0, 1, \ldots, N-1.$$
Let us denote
$$s_{ij} = \bigl( S(z,t),\, \xi_i(z)\, \xi_j(t) \bigr) = \int_0^1 \int_0^1 S(z,t)\, \xi_i(z)\, \xi_j(t)\, dt\, dz,$$
and
$$S = \bigl( s_{ij} \bigr), \qquad i, j = 0, 1, \ldots, N-1.$$
The proposed approximation $U_N(z,t)$ can be written in the form
$$U_N(z,t) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} c_{ij}\, \xi_i(z)\, \xi_j(t) = \Omega_{N-1}^{T}(t)\, C\, \Omega_{N-1}(z),$$
and C is represented as
$$C = \begin{pmatrix} c_{00} & c_{01} & \cdots & c_{0,N-1} \\ c_{10} & c_{11} & \cdots & c_{1,N-1} \\ \vdots & \vdots & \ddots & \vdots \\ c_{N-1,0} & c_{N-1,1} & \cdots & c_{N-1,N-1} \end{pmatrix},$$
where
$$\Omega_{N-1}(z) = \bigl[ \xi_0(z), \xi_1(z), \ldots, \xi_{N-1}(z) \bigr]^{T}.$$
In tensor product notation, Equation (59) can be represented as
$$\bigl( \hat{B} \otimes \hat{A} - \tau_1\, \hat{A} \otimes \hat{B} - \tau_2\, \hat{B} \otimes \hat{B} \bigr)\, C = S,$$
where $S$ can be represented in the form
$$S = \bigl[ s_{00}, s_{10}, \ldots, s_{N-1,0},\ s_{01}, s_{11}, \ldots, s_{N-1,1},\ \ldots,\ s_{0,N-1}, s_{1,N-1}, \ldots, s_{N-1,N-1} \bigr]^{T},$$
and A ^ and B ^ are N × N matrices,
$$\hat{A} = \bigl( \hat{a}_{ij} \bigr)_{0 \le i,j \le N-1}, \qquad \hat{B} = \bigl( \hat{b}_{ij} \bigr)_{0 \le i,j \le N-1},$$
whose elements $\hat{a}_{ij}$ and $\hat{b}_{ij}$ are obtained, respectively, from
$$\hat{a}_{ij} = \bigl( \partial_z\, \xi_j(z),\, \xi_i(z) \bigr), \qquad \hat{b}_{ij} = \bigl( \xi_j(z),\, \xi_i(z) \bigr),$$
where the nonzero elements of the matrices $\hat{A}$ and $\hat{B}$ are given explicitly by
$$\hat{a}_{ij} = \sum_{k=0}^{\lfloor i/2 \rfloor} \sum_{s=0}^{\lfloor j/2 \rfloor} \frac{i!\, j!\, (i - 2k + 1)!}{2^{k+s}\, k!\, s!\, \bigl( (i-2k)! \bigr)^{2}\, (j-2s)!\, (i + j - 2k - 2s + 2)},$$
$$\hat{b}_{ij} = \sum_{k=0}^{\lfloor i/2 \rfloor} \sum_{s=0}^{\lfloor j/2 \rfloor} \frac{i!\, j!\, (i - 2k + 1)!}{2^{k+s}\, k!\, s!\, (i-2k)!\, (j-2s)!\, (i - 2k + 1)!\, (i + j - 2k - 2s + 3)}.$$
Remark 2.
The Kronecker product $\hat{A} \otimes \hat{B}$ of the $n \times m$ matrix $\hat{A}$ and the $p \times q$ matrix $\hat{B}$,
$$\hat{A} = \begin{pmatrix} \hat{a}_{11} & \hat{a}_{12} & \cdots & \hat{a}_{1m} \\ \hat{a}_{21} & \hat{a}_{22} & \cdots & \hat{a}_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{a}_{n1} & \hat{a}_{n2} & \cdots & \hat{a}_{nm} \end{pmatrix}, \qquad \hat{B} = \begin{pmatrix} \hat{b}_{11} & \hat{b}_{12} & \cdots & \hat{b}_{1q} \\ \hat{b}_{21} & \hat{b}_{22} & \cdots & \hat{b}_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{b}_{p1} & \hat{b}_{p2} & \cdots & \hat{b}_{pq} \end{pmatrix},$$
is the $np \times mq$ matrix having the following block form:
$$\hat{A} \otimes \hat{B} = \begin{pmatrix} \hat{a}_{11} \hat{B} & \hat{a}_{12} \hat{B} & \cdots & \hat{a}_{1m} \hat{B} \\ \hat{a}_{21} \hat{B} & \hat{a}_{22} \hat{B} & \cdots & \hat{a}_{2m} \hat{B} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{a}_{n1} \hat{B} & \hat{a}_{n2} \hat{B} & \cdots & \hat{a}_{nm} \hat{B} \end{pmatrix}.$$
At last, the differential equation governed by its underlying conditions is converted into a linear system of algebraic equations concerning the unknown expansion coefficients, which can be efficiently solved utilizing the Gaussian elimination method.
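An illustrative end-to-end sketch (ours, not the authors' code) of this tensor-product Galerkin procedure with $\tau_1 = \tau_2 = 1$ and the manufactured solution $U = z\,t\,e^{z+t}$ (so the source below is computed from that choice); for clarity the system is assembled entry by entry rather than through the Kronecker form.

```python
import numpy as np
from numpy.polynomial import Polynomial as P
from numpy.polynomial.legendre import leggauss

N = 6                                                      # basis indices 0..N-1 in each variable
x, w = leggauss(40)
g, w = 0.5 * (x + 1.0), 0.5 * w                            # one-dimensional quadrature on [0,1]

T = [P([1.0]), P([0.0, 1.0])]
for k in range(2, N):
    T.append(P([0.0, 1.0]) * T[k - 1] + (k - 1) * T[k - 2])
xi = [P([0.0, 1.0]) * T[i] for i in range(N)]              # xi_i(.) = (.) T_i(.)

# One-dimensional building blocks: mass matrix and "derivative against test" matrix.
Bm = np.array([[np.sum(w * xi[a](g) * xi[b](g)) for b in range(N)] for a in range(N)])
Am = np.array([[np.sum(w * xi[a](g) * xi[b].deriv()(g)) for b in range(N)] for a in range(N)])

tau1 = tau2 = 1.0
Z, Tt = np.meshgrid(g, g, indexing="ij")                   # tensor quadrature grid (z, t)
Src = (Z - Tt - Z * Tt) * np.exp(Z + Tt)                   # source for U = z t e^{z+t}

M = np.zeros((N * N, N * N))
F = np.zeros(N * N)
for p in range(N):                                         # test function xi_p(z) xi_q(t)
    for q in range(N):
        row = p * N + q
        F[row] = np.einsum("i,j,ij,i,j->", w, w, Src, xi[p](g), xi[q](g))
        for i in range(N):                                 # trial function xi_i(z) xi_j(t)
            for j in range(N):
                M[row, i * N + j] = (Bm[p, i] * Am[q, j]
                                     - tau1 * Am[p, i] * Bm[q, j]
                                     - tau2 * Bm[p, i] * Bm[q, j])
C = np.linalg.solve(M, F).reshape(N, N)
Xi = np.array([xi[i](g) for i in range(N)])
U_N = np.einsum("ij,ik,jl->kl", C, Xi, Xi)
print("max error on the grid:", np.max(np.abs(U_N - Z * Tt * np.exp(Z + Tt))))
```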

6.2. The Problem Governed by the Nonhomogeneous Conditions

In this subsection, we examine the first-order one-dimensional hyperbolic PDEs (51) with non-homogeneous initial and boundary conditions:
$$U(z,0) = H_1(z), \qquad z \in (0,1),$$
$$U(0,t) = H_2(t), \qquad t \in (0,1),$$
where H 1 ( z ) and H 2 ( t ) are given functions.
We make use of the following transformation:
$$V(z,t) = U(z,t) - U_{\varepsilon}(z,t),$$
where $U_{\varepsilon}(z,t)$ is an arbitrary (known) function that satisfies the original non-homogeneous initial and boundary conditions (64) and (65), so that $V(z,t)$ satisfies the following homogeneous conditions:
$$V(z,0) = 0, \qquad z \in (0,1),$$
$$V(0,t) = 0, \qquad t \in (0,1).$$
This means that (51) governed by (64) and (65) is equivalent to the following modified equation:
$$\frac{\partial V(z,t)}{\partial t} - \tau_1\, \frac{\partial V(z,t)}{\partial z} - \tau_2\, V(z,t) = S^{*}(z,t), \qquad (z,t) \in (0,1) \times (0,1),$$
with
$$S^{*}(z,t) = S(z,t) - \frac{\partial U_{\varepsilon}(z,t)}{\partial t} + \tau_1\, \frac{\partial U_{\varepsilon}(z,t)}{\partial z} + \tau_2\, U_{\varepsilon}(z,t),$$
under the homogeneous conditions (66) and (67).
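One concrete choice of $U_{\varepsilon}$, used here purely for illustration (the text only requires that it match the nonhomogeneous data), is $U_{\varepsilon}(z,t) = H_1(z) + H_2(t) - H_1(0)$, which satisfies (64) and (65) whenever the compatibility condition $H_1(0) = H_2(0)$ holds. The modified source then follows symbolically; the data below are those of Example 5.

```python
import sympy as sp

z, t = sp.symbols('z t')
tau1, tau2 = -1, -1                            # the values corresponding to Example 5
H1, H2 = sp.cos(z), sp.cos(t)                  # nonhomogeneous data of Example 5
S = -2 * sp.sin(z + t) + sp.cos(z + t)         # source of Example 5

# An assumed lifting function satisfying U_eps(z,0) = H1(z) and U_eps(0,t) = H2(t).
U_eps = H1 + H2 - H1.subs(z, 0)
S_star = sp.simplify(S - sp.diff(U_eps, t) + tau1 * sp.diff(U_eps, z) + tau2 * U_eps)
print(U_eps)
print(S_star)
```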

7. Investigation of the Convergence and Error Analysis

This section focuses on specific models discussed in Section 3, Section 4, Section 5 and Section 6. We will analyze the convergence of the proposed expansions employed for the numerical treatment of these models. This ensures that the numerical solutions converge to the exact solution as the discretization parameters approach their limits. Additionally, we will investigate convergence and error analysis. Through rigorous analysis, we aim to establish error bounds and conditions for convergence that affirm the efficiency of our methods in solving the models under consideration. The following lemma is required to continue with our analysis.
Lemma 1.
The polynomials $T_n(z)$ satisfy the following inequality:
$$| T_n(z) | \le \frac{6\, n!\, \left( \tfrac{5}{8} \right)^{\lfloor n/2 \rfloor}}{\lfloor n/2 \rfloor !}, \qquad z \in [0,1].$$
Proof. 
In view of (2), we have
$$| T_n(z) | \le \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^{k}\, k!\, (n-2k)!}.$$
For $n = 2p$, Formula (70) takes the form:
$$| T_{2p}(z) | \le (2p)! \sum_{k=0}^{p} \frac{1}{2^{k}\, k!\, (2p-2k)!} = (2p)! \sum_{k=0}^{p} \frac{1}{2^{p-k}\, (p-k)!\, (2k)!}.$$
Using the simple identity $(2k)! = 2^{2k}\, k!\, \left( \tfrac{1}{2} \right)_k$, we can write
$$| T_{2p}(z) | \le (2p)! \sum_{k=0}^{p} \frac{1}{2^{p+k}\, (p-k)!\, \left( \tfrac{1}{2} \right)_k\, k!}.$$
In addition, it is not difficult to show that $\left( \tfrac{1}{2} \right)_k \ge \tfrac{1}{6}\, 2^{k}$, and accordingly, (72) takes the form
$$| T_{2p}(z) | \le 6\, (2p)!\, S_p,$$
where
$$S_p = \sum_{k=0}^{p} \frac{1}{2^{p+2k}\, (p-k)!\, k!}.$$
By using computer algebra, especially Zeilberger's algorithm (see [55]), it can be shown that $S_p$ satisfies the following first-order recurrence relation:
$$8\, (p+1)\, S_{p+1} - 5\, S_p = 0, \qquad S_0 = 1.$$
The exact solution to this recurrence relation has the form
$$S_p = \frac{\left( \tfrac{5}{8} \right)^{p}}{p!}.$$
Then, we obtain
$$| T_{2p}(z) | \le \frac{6\, (2p)!\, \left( \tfrac{5}{8} \right)^{p}}{p!}.$$
Similarly, for $n = 2p+1$, we can prove that
$$| T_{2p+1}(z) | \le \frac{6\, (2p+1)!\, \left( \tfrac{5}{8} \right)^{p}}{p!}.$$
Formulas (76) and (77) lead to (69) and the proof of Lemma 1 is complete. □
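A numerical sanity check (illustrative) of the two ingredients of the lemma: the closed form $S_p = (5/8)^p/p!$ of the recurrence above, and the resulting bound on $|T_n(z)|$ over $[0,1]$.

```python
import numpy as np
from math import factorial

def S(p):
    return sum(1.0 / (2**(p + 2 * k) * factorial(p - k) * factorial(k)) for k in range(p + 1))

for p in range(10):
    assert abs(S(p) - (5 / 8) ** p / factorial(p)) < 1e-12     # closed form of the recurrence

def telp(n, t):
    prev, curr = np.ones_like(t), t.copy()
    if n == 0:
        return prev
    for k in range(2, n + 1):
        prev, curr = curr, t * curr + (k - 1) * prev
    return curr

z = np.linspace(0.0, 1.0, 200)
for n in range(12):
    bound = 6 * factorial(n) * (5 / 8) ** (n // 2) / factorial(n // 2)
    assert np.max(np.abs(telp(n, z))) <= bound                 # the bound of Lemma 1 on [0,1]
print("Lemma 1 bound verified for n = 0,...,11")
```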
The following corollary is a direct consequence of Lemma 1.
Corollary 1.
The polynomials $\phi_n(z)$ satisfy the following inequality:
$$| \phi_n(z) | \le \frac{6\, n!\, \left( \tfrac{5}{8} \right)^{\lfloor n/2 \rfloor}}{\lfloor n/2 \rfloor !}, \qquad z \in [0,1].$$

7.1. The Model of Initial Value Problem

If the error obtained by solving (11) and (12) numerically is defined by
$$e_N(z) = U(z) - U_N(z),$$
then the maximum absolute error (MAE) of the proposed technique is estimated as
$$\| e_N \| = \max_{0 \le z \le 1} | e_N(z) |,$$
while the differences between the function $U_j(z)$ and its estimated value $U_{j,N}(z)$ for the system (45) and (46) are defined by
$$e_{j,N}(z) = U_j(z) - U_{j,N}(z), \qquad j = 1, 2, \ldots, k,$$
and consequently, the MAEs of the suggested method are estimated by the following:
$$\| e_{j,N} \| = \max_{0 \le z \le 1} | e_{j,N}(z) |.$$
In order to evaluate the precision of the suggested method, we define the following error:
$$E_N = \max\{ \| e_{1,N} \|, \ldots, \| e_{k,N} \| \}.$$
Theorem 4.
Assume that $U(z) = z^{m} u(z)$, where $u(z)$ is an infinitely differentiable function at the origin with $\left| \frac{d^{i} u(0)}{dz^{i}} \right| < M$ for all $i$. Then, it has the following expansion:
$$U(z) = \sum_{i=0}^{\infty} U_i\, \phi_i(z),$$
where
$$U_i = \sum_{p=0}^{\infty} \frac{d^{\,i+2p} u(0)}{dz^{\,i+2p}}\, \frac{\left( -\tfrac{1}{2} \right)^{p}}{i!\, p!}.$$
These expansion coefficients satisfy the following inequality:
$$| U_i | < M\, e^{1/2}\, \frac{1}{i!}, \qquad i \ge 0,$$
and the series in (84) converges absolutely.
Proof. 
First, we expand $U(z)$ as
$$U(z) = z^{m} \sum_{i=0}^{\infty} u_i\, z^{i}, \qquad u_i = \frac{1}{i!}\, \frac{d^{i} u(0)}{dz^{i}}.$$
Inserting the inversion Formula (9) into (87) enables one to write
$$U(z) = z^{m} \sum_{i=0}^{\infty} u_i \sum_{k=0}^{i} d_{i,k}\, T_k(z).$$
Expanding the right-hand side of (88) and rearranging similar terms, the following expansion is obtained:
$$U(z) = \sum_{i=0}^{\infty} U_i\, \phi_i(z),$$
where
$$U_i = \sum_{p=0}^{\infty} u_{p+i}\, d_{p+i,i}.$$
In view of (9), we can obtain (85). The inequality (86) is a direct consequence of (85). Now, direct use of (86) together with the application of Lemma 1 yields
$$\sum_{i=0}^{\infty} | U_i |\, | \phi_i(z) | \le 6\, M\, e^{1/2} \sum_{i=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !}.$$
Since
$$\sum_{i=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} = \sum_{\substack{i=0 \\ i\ \mathrm{even}}}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} + \sum_{\substack{i=0 \\ i\ \mathrm{odd}}}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} = 2 \sum_{i=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{i}}{i!} = 2\, e^{5/8},$$
it follows that
$$\sum_{i=0}^{\infty} | U_i |\, | \phi_i(z) | \le 12\, M\, e^{9/8},$$
so, the series converges absolutely. □
Theorem 5.
If $U(z)$ satisfies the hypothesis of Theorem 4, and if
$$U_N(z) = \sum_{i=0}^{N} U_i\, \phi_i(z),$$
then the following estimate is satisfied:
$$\| U - U_N \| \le 6\, M\, e^{9/2}\, \left( \tfrac{1}{2} \right)^{3N+3}, \qquad N \ge 1.$$
So
$$\| U - U_N \| \lesssim \frac{1}{2^{3N+3}},$$
where $\lesssim$ means that a generic constant $d$ exists such that $\| U - U_N \| \le d\, \left( \tfrac{1}{2} \right)^{3N+3}$.
Proof. 
The truncation error may be written as follows:
$$\| U - U_N \| = \left\| \sum_{i=N+1}^{\infty} U_i\, \phi_i(z) \right\| \le \sum_{i=N+1}^{\infty} | U_i |\, | \phi_i(z) | \le 6\, M\, e^{1/2} \sum_{i=N+1}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !}.$$
Moreover, we have
$$\sum_{i=N+1}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} \le \sum_{i=N+1}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{i}}{i!} \le \left( \tfrac{1}{8} \right)^{N+1} \sum_{i=N+1}^{\infty} \frac{5^{i}}{i!} \le \left( \tfrac{1}{8} \right)^{N+1} e^{5}.$$
Then,
$$\| U - U_N \| \le 6\, M\, e^{9/2}\, \left( \tfrac{1}{8} \right)^{N+1} = 6\, M\, e^{9/2}\, \left( \tfrac{1}{2} \right)^{3N+3},$$
which completes the proof of the theorem. □
Corollary 2.
Suppose that $U_N(z)$ has the form (16) and represents the best possible approximation of $U(z)$ out of $\Phi_N$. Then, the following estimate for the error $\| e_N \|$ is valid:
$$\| e_N \| \le 6\, M\, e^{9/2}\, \left( \tfrac{1}{2} \right)^{3N+3}, \qquad N \ge 1.$$
Proof. 
Since $U_N(z) \in \Phi_N$ represents the best possible approximation of $U(z)$, we have the following:
$$\| U(z) - U_N(z) \| \le \| U(z) - h(z) \|, \qquad \forall\, h \in \Phi_N.$$
In view of (95) and (101),
$$\| e_N \| = \| U(z) - U_N(z) \| \le \left\| U(z) - \sum_{i=0}^{N} U_i\, \phi_i(z) \right\| \le 6\, M\, e^{9/2}\, \left( \tfrac{1}{2} \right)^{3N+3},$$
and this leads to (100), which completes the proof of the corollary. □
Remark 3.
Theorems 4 and 5 can be extended to the error analysis of the proposed technique for solving the model defined by the system (45) and (46). The most important steps would be as follows:
 (i)
In order to derive the bound on the expansion coefficients $c_{l,h}$, it is necessary to impose assumptions matching those of Theorem 4 on each $U_l(z)$.
 (ii)
Error bounds for $\| e_{j,N} \|$, $j = 1, 2, \ldots, k$, are derived exactly as in Theorem 5; the resulting expressions are identical to the one for $\| e_N \|$.
 (iii)
The bound on the quantity $E_N$ defined in (83) then follows, analogously to (100), as
$$E_N \le 6\, M\, e^{9/2}\, \left( \tfrac{1}{2} \right)^{3N+3}.$$

7.2. Treatment of First-Order Hyperbolic PDEs

The error of the numerical scheme is assessed through
$$E_N = \| U - U_N \| = \max_{(z,t) \in I} | U(z,t) - U_N(z,t) |,$$
where $I = (0,1) \times (0,1)$.
To proceed in our analysis, the following two theorems are needed in this subsection:
Theorem 6.
Let $U(z,t) = z\,t\, u(z,t)$, where $u(z,t)$ is an infinitely differentiable function at the origin with $\left| \frac{\partial^{\,i+j} u(0,0)}{\partial z^{i}\, \partial t^{j}} \right| < M$. Then, it has the following expansion:
$$U(z,t) = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} U_{i,j}\, \xi_i(z)\, \xi_j(t),$$
where
$$U_{i,j} = \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} \frac{\partial^{\,i+j+2(p+q)} u(0,0)}{\partial z^{\,i+2p}\, \partial t^{\,j+2q}}\, \frac{\left( -\tfrac{1}{2} \right)^{p+q}}{i!\, j!\, p!\, q!}.$$
These expansion coefficients satisfy the following inequality:
$$| U_{i,j} | < M\, e\, \frac{1}{i!\, j!}, \qquad i, j \ge 0,$$
and the series in (105) converges absolutely.
Proof. 
First, we expand $U(z,t)$ as
$$U(z,t) = z\,t \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} u_{i,j}\, z^{i}\, t^{j}, \qquad u_{i,j} = \frac{1}{i!\, j!}\, \frac{\partial^{\,i+j} u(0,0)}{\partial z^{i}\, \partial t^{j}}.$$
This expansion can be written in the form:
$$U(z,t) = z\,t \sum_{i=0}^{\infty} b_i(t)\, z^{i},$$
where
$$b_i(t) = \sum_{j=0}^{\infty} u_{i,j}\, t^{j}.$$
Inserting the inversion Formula (9) into (109) enables one to write
$$U(z,t) = t \sum_{i=0}^{\infty} b_i(t) \sum_{k=0}^{i} d_{i,k}\, \xi_k(z).$$
Expanding the right-hand side of (111) and rearranging similar terms, the following expansion is obtained:
$$U(z,t) = t \sum_{i=0}^{\infty} \sum_{p=0}^{\infty} b_{p+i}(t)\, d_{p+i,i}\, \xi_i(z).$$
Now,
$$\sum_{p=0}^{\infty} b_{p+i}(t)\, d_{p+i,i} = \sum_{p=0}^{\infty} \sum_{j=0}^{\infty} u_{p+i,j}\, t^{j}\, d_{p+i,i} = \sum_{j=0}^{\infty} \left( \sum_{p=0}^{\infty} u_{p+i,j}\, d_{p+i,i} \right) t^{j}.$$
Inserting the inversion Formula (9) into (113) and following the same procedure enables one to write
$$\sum_{p=0}^{\infty} b_{p+i}(t)\, d_{p+i,i} = \sum_{j=0}^{\infty} \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} u_{p+i,\,j+q}\, d_{p+i,i}\, d_{q+j,j}\, T_j(t).$$
Substituting (114) into (112) immediately gives
$$U(z,t) = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} U_{i,j}\, \xi_i(z)\, \xi_j(t),$$
where
$$U_{i,j} = \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} u_{p+i,\,j+q}\, d_{p+i,i}\, d_{q+j,j}.$$
In view of (10), we can obtain (106). The inequality (107) is a direct consequence of (106). Now, direct use of (107) together with the application of Lemma 1 yields
$$\sum_{i,j=0}^{\infty} | U_{i,j} |\, | \xi_i(z) |\, | \xi_j(t) | \le 36\, M\, e \sum_{i,j=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor} \left( \tfrac{5}{8} \right)^{\lfloor j/2 \rfloor}}{\lfloor i/2 \rfloor !\, \lfloor j/2 \rfloor !}.$$
Since
$$\sum_{i=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} = \sum_{\substack{i=0 \\ i\ \mathrm{even}}}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} + \sum_{\substack{i=0 \\ i\ \mathrm{odd}}}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} = 2 \sum_{i=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{i}}{i!} = 2\, e^{5/8},$$
it follows that
$$\sum_{i,j=0}^{\infty} | U_{i,j} |\, | \xi_i(z) |\, | \xi_j(t) | \le 36\, M\, e\, \bigl( 2\, e^{5/8} \bigr)^{2} = 144\, M\, e^{9/4},$$
so the series converges absolutely. □
Theorem 7.
If $U(z,t)$ satisfies the hypothesis of Theorem 6, and if
$$U_N(z,t) = \sum_{i=0}^{N} \sum_{j=0}^{N} U_{i,j}\, \xi_i(z)\, \xi_j(t),$$
then the following estimate is satisfied:
$$\| U - U_N \| \lesssim \frac{1}{2^{3N+3}}.$$
Proof. 
The truncation error may be written as follows:
$$\begin{aligned} \| U - U_N \| &= \left\| \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} U_{i,j}\, \xi_i(z)\, \xi_j(t) - \sum_{i=0}^{N} \sum_{j=0}^{N} U_{i,j}\, \xi_i(z)\, \xi_j(t) \right\| \\ &\le \sum_{i=0}^{N} \sum_{j=N+1}^{\infty} | U_{i,j} |\, | \xi_i(z) |\, | \xi_j(t) | + \sum_{i=N+1}^{\infty} \sum_{j=0}^{\infty} | U_{i,j} |\, | \xi_i(z) |\, | \xi_j(t) | \\ &\le 36\, M\, e \sum_{i=0}^{N} \sum_{j=N+1}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor} \left( \tfrac{5}{8} \right)^{\lfloor j/2 \rfloor}}{\lfloor i/2 \rfloor !\, \lfloor j/2 \rfloor !} + 36\, M\, e \sum_{i=N+1}^{\infty} \sum_{j=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor} \left( \tfrac{5}{8} \right)^{\lfloor j/2 \rfloor}}{\lfloor i/2 \rfloor !\, \lfloor j/2 \rfloor !}. \end{aligned}$$
We have
$$\sum_{i=0}^{N} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} \le \sum_{i=0}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor i/2 \rfloor}}{\lfloor i/2 \rfloor !} = 2\, e^{5/8},$$
and also
$$\sum_{j=N+1}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{\lfloor j/2 \rfloor}}{\lfloor j/2 \rfloor !} \le \sum_{j=N+1}^{\infty} \frac{\left( \tfrac{5}{8} \right)^{j}}{j!} \le \left( \tfrac{1}{8} \right)^{N+1} \sum_{j=N+1}^{\infty} \frac{5^{j}}{j!} \le \left( \tfrac{1}{8} \right)^{N+1} e^{5}.$$
Then
$$\| U - U_N \| \le 36\, M\, e\, \left( \tfrac{1}{8} \right)^{N+1} e^{5}\, \bigl( 2\, e^{5/8} \bigr) + 36\, M\, e\, \left( \tfrac{1}{8} \right)^{N+1} e^{5}\, \bigl( 2\, e^{5/8} \bigr) \lesssim \left( \tfrac{1}{8} \right)^{N+1},$$
which completes the proof of the theorem. □
Corollary 3.
Suppose that $U_N(z,t)$ has the form (60) and represents the best possible approximation of $U(z,t)$ out of $W_N$. Then, the following estimate for the error $E_N$ is valid:
$$E_N \lesssim \left( \tfrac{1}{2} \right)^{3N+3}, \qquad N \ge 1.$$
Proof. 
Since $U_N(z,t) \in W_N$ represents the best possible approximation of $U(z,t)$, we have the following:
$$E_N = \| U - U_N \| \le \| U - h \|, \qquad \forall\, h \in W_N.$$
In view of (121) and (127),
$$E_N \le \left\| U - \sum_{i=0}^{N} \sum_{j=0}^{N} U_{i,j}\, \xi_i(z)\, \xi_j(t) \right\| \lesssim \left( \tfrac{1}{2} \right)^{3N+3},$$
which completes the proof of the corollary. □
The stability of error, or the process of estimating the propagation of error, is the focus of the subsequent theorem.
Theorem 8.
For any two successive approximations of $U(z,t)$, we obtain the following:
$$| U_{N+1} - U_N | \lesssim \left( \tfrac{1}{2} \right)^{3N+3}.$$
Proof. 
We have
$$| U_{N+1} - U_N | = | U_{N+1} - U + U - U_N | \le | U - U_{N+1} | + | U - U_N | \le E_{N+1} + E_N.$$
By considering (126), we can obtain (129). □

8. Numerical Results

Several test examples are presented in this section to demonstrate the effectiveness of the proposed algorithms. Comparisons between the results obtained using the proposed method and those from other methods show that the proposed methods are highly effective and convenient. The following examples are considered.
Example 1.
Consider the linear eighth-order problem [56]
$$U^{(8)}(z) - U(z) = f(z), \qquad f(z) = -8\, e^{z}, \quad z \in [0,1],$$
subject to the conditions
$$U(0) = 1, \quad U^{(1)}(0) = 0, \quad U^{(2)}(0) = -1, \quad U^{(3)}(0) = -2, \quad U^{(4)}(0) = -3, \quad U^{(5)}(0) = -4, \quad U^{(6)}(0) = -5, \quad U^{(7)}(0) = -6,$$
and the analytical solution is $U(z) = (1 - z)\, e^{z}$.
Table 2 in Example 1 displays three sets of absolute error values using the proposed methods, with different numbers of points N (16, 20, and 24). The absolute errors decrease significantly as N increases, indicating that the method becomes more accurate with a higher number of points. This suggests the method is effective in reducing error and improving numerical precision.
  • At smaller values of z (e.g., z = 0.1 ), the errors are generally lower.
  • At larger values of z (e.g., z = 0.9 ), the errors increase, indicating potential challenges in achieving high accuracy in these regions.
Figure 1 illustrates the corresponding absolute error function (left) and the comparison between the approximate and exact solutions (right) for Example 1 at N = 24 . The plot on the left demonstrates that the absolute error remains very small and increases gradually at N = 24 , confirming the method’s reliability and precision in solving the given problem. The plot on the right shows an excellent agreement between the approximate and exact solutions, indicating the high accuracy of the method.
Example 2.
Consider the linear second-order problem (see [57])
$$U^{(2)}(z) - 2\, U^{(1)}(z) + 2\, U(z) = f(z), \qquad z \in [0,1],$$
with two different sets of initial conditions and corresponding exact solutions:
  • First case:
    Initial conditions:
    $U(0) = 0, \quad U^{(1)}(0) = 1.$
    Exact solution:
    $U(z) = z\, e^{z}.$
  • Second case:
    Initial conditions:
    $U(0) = 0, \quad U^{(1)}(0) = 0.$
    Exact solution:
    $U(z) = z^{13/2} \sin\bigl( \pi z^{4/3} \bigr).$
In each case, $f(z)$ is selected accordingly to satisfy the differential equation. Table 3 presents a comparative analysis of two computational methods, the Romanovski–Jacobi tau method (R-JTM) [57] and the proposed TGM, for solving numerical Example 2. The table details each method's absolute errors and CPU time (s) at varying numbers of discretization points $N$. The results demonstrate the superior accuracy of the TGM, achieving significantly smaller absolute errors compared to R-JTM [57]. For instance, at $N = 15$, TGM exhibits an absolute error of $8.03 \times 10^{-17}$, while R-JTM at $N = 20$ shows $2.96 \times 10^{-13}$. Furthermore, TGM is more computationally efficient, requiring only 4.499 s at $N = 15$, in contrast to R-JTM's 12.907 s at $N = 20$. The table highlights that TGM offers a better balance between accuracy and efficiency, establishing it as a more favorable choice for solving similar problems with solutions of the form $U(z) = z e^{z}$. Notably, while CPU time increases with $N$ in both methods, the rate of increase is substantially lower for TGM. Figure 2 compares the exact and approximate solutions of the second-order differential equation in Example 2. In the left plot, the solution $U(z) = z e^{z}$ is shown for $N = 24$, exhibiting a smooth, monotonically increasing behavior, with excellent agreement between the numerical and analytical solutions. In the right plot, the solution $U(z) = z^{13/2} \sin( \pi z^{4/3} )$ is presented for $N = 10$, displaying a more oscillatory pattern. The first solution demonstrates exponential behavior, while the second shows strong oscillatory effects. This figure highlights that the same equation yields different solutions depending on the initial conditions and the corresponding right-hand sides.
Remark 4.
In both accuracy and computational efficiency, the TGM approach beats the R-JTM method [57]. TGM is the better method for this problem as it requires considerably less CPU time and yields much less absolute errors.
Example 3.
The isothermal gas spheres equation [57] is given by the following equation:
$$U^{(2)}(z) + \frac{2}{z}\, U^{(1)}(z) + e^{U(z)} = 0, \qquad z \in [0,1],$$
with the following initial conditions:
$$U(0) = 0, \qquad U^{(1)}(0) = 0.$$
An approximate series solution to this equation, as obtained by Wazwaz [58], Liao [59], and Singh et al. [60], is given by the following formula:
$$U(z) \approx -\frac{z^{2}}{6} + \frac{z^{4}}{5 \cdot 4!} - \frac{8\, z^{6}}{21 \cdot 6!} + \frac{122\, z^{8}}{81 \cdot 8!} - \frac{61.67\, z^{10}}{495 \cdot 10!} + \cdots.$$
Table 4 presents the approximate values of u ( z ) for the standard Lane–Emden equation with N = 8 , obtained using TGM. These results are compared with those derived from the Jacobi Rational Pseudospectral Method (JRPM) [61], the variable weight fuzzy marginal linearization (VWFML) method [62], and the Adomian Decomposition Method (ADM) solution [58].
Example 4.
We consider the following linear ODE system [63],
$$U_1^{(1)}(z) + U_2^{(1)}(z) + U_1(z) + U_2(z) = 1, \qquad U_2^{(1)}(z) - 2\, U_1(z) - U_2(z) = 0, \qquad 0 \le z \le 1,$$
with the ICs
$$U_1(0) = 0 \quad \text{and} \quad U_2(0) = 1.$$
The exact solutions of this problem are $U_1(z) = e^{-z} - 1$ and $U_2(z) = 2 - e^{-z}$.
Table 5 and Table 6 provide a comparison of the absolute errors e 1 , N and e 2 , N for the functions U 1 ( z ) and U 2 ( z ) , solved using different methods. Here are some observations on the tables:
  • They show a clear difference in solution accuracy between the three methods used (the differential transformation method (DTM) in [64], the Bessel collocation method (BCM) in [63], and the present method). The present method achieves significantly higher accuracy, with much smaller error values at the same points $z_i$.
  • In Table 5, it is evident that the errors produced by the present method are much smaller compared to those of the DTM and BCM methods, especially as N increases from 6 to 10.
  • In Table 6, the present method again demonstrates notably higher precision than the DTM and BCM methods across all points $z_i$.
Also, Figure 3 and Figure 4 are plotted to compare the analytic solution with the approximate solution at N = 12 for Example 4.
Example 5.
We consider the following hyperbolic equation of first-order of the form [33]
$$\frac{\partial U(z,t)}{\partial t} = -\frac{\partial U(z,t)}{\partial z} - U(z,t) + S(z,t), \qquad z \in (0,1), \quad t \in (0,1),$$
subject to the initial and boundary conditions
$$U(z,0) = \cos(z), \quad z \in (0,1), \qquad U(0,t) = \cos(t), \quad t \in (0,1),$$
where
$$S(z,t) = -2 \sin(z+t) + \cos(z+t).$$
The exact solution is given by
$$U(z,t) = \cos(z+t).$$
Table 7 compares the absolute errors at various points in the ( z , t ) domain for Example 5, with N = 6 . The errors are shown for two previous methods (Legendre wavelets (LW) and Chebyshev wavelets (CW) from Singh et al. [33]) and the present method.
  • It is notable that the errors in the present method are significantly lower compared to the LW and CW methods. For instance, at the point $(z,t) = (0.1, 0.1)$, the error in the present method is $2.95 \times 10^{-8}$, while the errors in the other methods are $9.12 \times 10^{-5}$ and $4.53 \times 10^{-4}$, respectively.
  • This result highlights the efficiency of the present method in substantially reducing the error compared to the other methods, reflecting higher accuracy.
Table 8 compares the $L_{\infty}$ errors for Example 5 at different values of $N$.
  • It is clear that the present method shows reduced errors as the values of $N$ increase, with the error decreasing significantly for larger $N$. For instance, at $N = 6$, the error in the present method is $1.58 \times 10^{-7}$, while the errors in the other methods are $9.90 \times 10^{-5}$ and $4.53 \times 10^{-4}$ for the LW and CW methods, respectively.
  • The present method demonstrates excellent performance as $N$ increases, achieving very high accuracy, such as $2.30 \times 10^{-11}$ at $N = 9$, indicating the method's strength in convergence and accuracy.
Also, Figure 5 compares the curves of analytical and approximation solutions for Example 5, where N = 9 , at t = 0.1 , 0.3 , 0.5 , 0.7 , 0.9 and z = 0.1 , 0.3 , 0.5 , 0.7 , 0.9 . Finally, Figure 6 illustrates the space-time graphs of the absolute error functions for Example 5 at various choices of N, specifically N = 3 , N = 5 , N = 7 , and N = 9 . The graphs depict the distribution of errors over time and space, reflecting the accuracy of the proposed method in solving the problem. It is evident that increasing the values of N results in a significant reduction in errors, confirming the method’s effectiveness in terms of accuracy and convergence.

9. Conclusions

This paper introduces and utilizes a new set of polynomials known as TelPs, which are considered generalizations of TelNs. We showed that the application of spectral methods, and in particular the Galerkin and collocation methods, is efficient for solving different DEs. We treated numerically some types of DEs using TelPs as basis functions. We conducted a thorough investigation into the convergence analysis of the used expansion. To the best of our knowledge, these kinds of polynomials have not been used to solve DEs before. We plan to employ these polynomials in other types of DEs shortly. In conclusion, our research has significant implications for various real-world applications. For instance, the methodologies developed can be effectively applied to heat transfer problems, enabling more efficient thermal management in engineering systems. Additionally, our findings contribute to the growing field of machine learning-based PDE solvers, which have the potential to revolutionize how complex DEs are approached in disciplines such as fluid dynamics, materials science, and environmental modeling. By bridging the gap between theoretical advancements and practical implementations, we hope to inspire further exploration and application of these techniques in diverse fields.

Author Contributions

Conceptualization, R.M.H. and W.M.A.-E.; Methodology, R.M.H. and W.M.A.-E.; Software, R.M.H.; Validation, R.M.H., H.M.A., O.M.A., A.K.A. and W.M.A.-E.; Formal analysis, R.M.H. and H.M.A.; Investigation, H.M.A., O.M.A. and W.M.A.-E.; Writing—original draft, R.M.H., H.M.A. and W.M.A.-E.; Visualization, R.M.H. and W.M.A.-E.; Supervision, W.M.A.-E.; Funding acquisition, A.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by Umm Al-Qura University, Saudi Arabia, under grant number: 25UQU4331287GSSR01.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors extend their appreciation to Umm Al-Qura University, Saudi Arabia, for funding this research work through grant number: 25UQU4331287GSSR01.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mason, J.C.; Handscomb, D.C. Chebyshev Polynomials, 1st ed.; Chapman and Hall/CRC: New York, NY, USA, 2002. [Google Scholar]
  2. Marcellán, F. Orthogonal Polynomials and Special Functions: Computation and Applications, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  3. Lebedev, N.N. Special Functions & Their Applications; Dover Publ.: New York, NY, USA, 1972. [Google Scholar]
  4. Costabile, F.A.; Gualtieri, M.I.; Napoli, A. Polynomial sequences: Elementary basic methods and application hints. A survey. Rev. R. Acad. Cienc. Exactas Fis. Nat. Ser. A Math. 2019, 113, 3829–3862. [Google Scholar] [CrossRef]
  5. Costabile, F.A.; Gualtieri, M.I.; Napoli, A.; Altomare, M. Odd and even Lidstone-type polynomial sequences. Part 1: Basic topics. Adv. Differ. Equ. 2018, 2018, 299. [Google Scholar] [CrossRef]
  6. Cesarano, C.; Ramírez, W.; Díaz, S.; Shamaoon, A.; Khan, W.A. On Apostol-type Hermite degenerated polynomials. Mathematics 2023, 11, 1914. [Google Scholar] [CrossRef]
  7. Costabile, F.A.; Gualtieri, M.I.; Napoli, A. Towards the centenary of Sheffer polynomial sequences: Old and recent results. Mathematics 2022, 10, 4435. [Google Scholar] [CrossRef]
  8. Ramírez, W.; Cesarano, C. Some new classes of degenerated generalized Apostol-Bernoulli, Apostol-Euler and Apostol-Genocchi polynomials. Carpath. Math. Publ. 2022, 14, 354–363. [Google Scholar] [CrossRef]
  9. Abd-Elhameed, W.M.; Ahmed, H.M.; Zaky, M.A.; Hafez, R.M. A new shifted generalized Chebyshev approach for multi-dimensional sinh-Gordon equation. Phys. Scr. 2024, 99, 095269. [Google Scholar] [CrossRef]
  10. Ahmed, H.M.; Hafez, R.M.; Abd-Elhameed, W.M. A computational strategy for nonlinear time-fractional generalized Kawahara equation using new eighth-kind Chebyshev operational matrices. Phys. Scr. 2024, 99, 045250. [Google Scholar] [CrossRef]
  11. Izadi, M.; Cattani, C. Generalized Bessel polynomial for multi-order fractional differential equations. Symmetry 2020, 12, 1260. [Google Scholar] [CrossRef]
  12. El-Sayed, A.M.A.; El-Mesiry, A.E.M.; El-Saka, H.A.A. Numerical solution for multi-term fractional (arbitrary) orders differential equations. Comput. Appl. Math. 2004, 23, 33–54. [Google Scholar] [CrossRef]
  13. Abd-Elhameed, W.M.; Hafez, R.M.; Napoli, A.; Atta, A.G. A new generalized Chebyshev matrix algorithm for solving second-order and telegraph partial differential equations. Algorithms 2024, 18, 2. [Google Scholar] [CrossRef]
  14. Boyd, J.P. Chebyshev and Fourier Spectral Methods; Dover Publications: Mineola, NY, USA, 2001. [Google Scholar]
  15. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods: Evolution to Complex Geometries and Applications to Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  16. Shen, J.; Tang, T.; Wang, L.L. Spectral Methods: Algorithms, Analysis and Applications, 1st ed.; Springer: New York, NY, USA, 2011. [Google Scholar]
  17. Qiu, L.; Wang, Y.; Gu, Y.; Qin, Q.H.; Wang, F. Adaptive physics-informed neural networks for dynamic coupled thermo-mechanical problems in large-size-ratio functionally graded materials. Appl. Math. Model. 2025, 140, 115906. [Google Scholar] [CrossRef]
  18. Zhang, B.; Wu, G.; Gu, Y.; Wang, X.; Wang, F. Multi-domain physics-informed neural network for solving forward and inverse problems of steady-state heat conduction in multilayer media. Phys. Fluids 2022, 34, 2442–2461. [Google Scholar] [CrossRef]
  19. Cuomo, S.; Di Cola, V.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific machine learning through physics–informed neural networks: Where we are and what’s next. J. Sci. Comput. 2022, 92, 88. [Google Scholar] [CrossRef]
  20. Dangal, T.; Chen, C.S.; Lin, J. Polynomial particular solutions for solving elliptic partial differential equations. Comput. Math. Appl. 2017, 73, 60–70. [Google Scholar] [CrossRef]
  21. Shearer, M.; Levy, R. Partial Differential Equations: An Introduction to Theory and Applications; Princeton University Press: Princeton, NJ, USA, 2015. [Google Scholar]
  22. Selvadurai, A.P.S. Partial Differential Equations in Mechanics 2: The Biharmonic Equation, Poisson’s Equation; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  23. Sarma, A.; Watts, T.W.; Moosa, M.; Liu, Y.; McMahon, P.L. Quantum variational solving of nonlinear and multidimensional partial differential equations. Phys. Rev. A 2024, 109, 062616. [Google Scholar] [CrossRef]
  24. Lin, G.; Zhang, Z.; Zhang, Z. Theoretical and numerical studies of inverse source problem for the linear parabolic equation with sparse boundary measurements. Inverse Probl. 2022, 38, 125007. [Google Scholar] [CrossRef]
  25. Trefethen, L.N. Spectral Methods in MATLAB; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  26. Hesthaven, J.; Gottlieb, S.; Gottlieb, D. Spectral Methods for Time-Dependent Problems; Cambridge University Press: Cambridge, UK, 2007; Volume 21. [Google Scholar]
  27. Karniadakis, G.E.; Sherwin, S.J. Spectral/hp Element Methods for Computational Fluid Dynamics, 2nd ed.; Numerical Mathematics and Scientific Computation; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  28. Gottlieb, D.; Orszag, S.A. Numerical Analysis of Spectral Methods: Theory and Applications; SIAM: Philadelphia, PA, USA, 1977. [Google Scholar]
  29. Mulimani, M.; Srinivasa, K. A novel approach for Benjamin-Bona-Mahony equation via ultraspherical wavelets collocation method. Int. J. Math. Comput. Eng. 2024, 2, 39–52. [Google Scholar] [CrossRef]
  30. Youssri, Y.H.; Atta, A.G.; Moustafa, M.O.; Abu Waar, Z.Y. Explicit collocation algorithm for the nonlinear fractional Duffing equation via third-kind Chebyshev polynomials. Iran. J. Numer. Anal. Optim. 2025; in press. [Google Scholar]
  31. Costabile, F.; Napoli, A. Collocation for High-Order Differential Equations with Lidstone Boundary Conditions. J. Appl. Math. 2012, 2012, 120792. [Google Scholar] [CrossRef]
  32. Abdelkawy, M.A.; Amin, A.Z.M.; Babatin, M.M.; Alnahdi, A.S.; Zaky, M.A.; Hafez, R.M. Jacobi spectral collocation technique for time-fractional inverse heat equations. Fractal Fract. 2021, 5, 115. [Google Scholar] [CrossRef]
  33. Singh, S.; Patel, V.K.; Singh, V.K. Application of Wavelet Collocation Method for Hyperbolic Partial Differential Equations via Matrices. Appl. Math. Comput. 2018, 320, 407–424. [Google Scholar] [CrossRef]
  34. Jiang, W.; Gao, X. Review of Collocation Methods and Applications in Solving Science and Engineering Problems. CMES-Comput. Model. Eng. Sci. 2024, 140, 41–76. [Google Scholar] [CrossRef]
  35. Aourir, E.; Izem, N.; Dastjerdi, H.L. A computational approach for solving third kind VIEs by collocation method based on radial basis functions. J. Comput. Appl. Math. 2024, 440, 115636. [Google Scholar] [CrossRef]
  36. Hafez, R.M.; Youssri, Y.H. Fully Jacobi–Galerkin algorithm for two-dimensional time-dependent PDEs arising in physics. Int. J. Mod. Phys. C 2024, 35, 2450034. [Google Scholar] [CrossRef]
37. Youssri, Y.H.; Atta, A.G. Modal spectral Tchebyshev Petrov–Galerkin stratagem for the time-fractional nonlinear Burgers’ equation. Iran. J. Numer. Anal. Optim. 2024, 14, 172–199.
38. Abd-Elhameed, W.M.; Alsuyuti, M.M. New spectral algorithm for fractional delay pantograph equation using certain orthogonal generalized Chebyshev polynomials. Commun. Nonlinear Sci. Numer. Simul. 2024, 141, 108479.
39. Alsuyuti, M.M.; Doha, E.H.; Ezz-Eldien, S.S.; Youssef, I.K. Spectral Galerkin schemes for a class of multi-order fractional pantograph equations. J. Comput. Appl. Math. 2021, 384, 113157.
40. Ahmed, H.M. New Generalized Jacobi Galerkin operational matrices of derivatives: An algorithm for solving multi-term variable-order time-fractional diffusion-wave equations. Fractal Fract. 2024, 8, 68.
41. Niu, C.; Ma, H. An operator splitting Legendre-tau spectral method for Maxwell’s equations with nonlinear conductivity in two dimensions. J. Comput. Appl. Math. 2024, 437, 115499.
42. Ma, X.; Wang, Y.; Zhou, X.; Xu, G.; Gao, D. A Chebyshev tau matrix method to directly solve two-dimensional ocean acoustic propagation in undulating seabed environment. Phys. Fluids 2024, 36, 096601.
43. Sadri, K.; Amilo, D.; Hosseini, K.; Hinçal, E.; Seadawy, A.R. A tau-Gegenbauer spectral approach for systems of fractional integrodifferential equations with the error analysis. AIMS Math. 2024, 9, 3850–3880.
44. Pour-Mahmoud, J.; Rahimi-Ardabili, M.Y.; Shahmorad, S. Numerical solution of the system of Fredholm integro-differential equations by the Tau method. Appl. Math. Comput. 2005, 168, 465–478.
45. Ahmed, H.F.; Hashem, W.A. A fully spectral tau method for a class of linear and nonlinear variable-order time-fractional partial differential equations in multi-dimensions. Math. Comput. Simul. 2023, 214, 388–408.
46. Yang, X.; Jiang, X.; Zhang, H. A time–space spectral tau method for the time fractional cable equation and its inverse problem. Appl. Numer. Math. 2018, 130, 95–111.
47. Galerkin, B. Series occurring in various questions concerning the elastic equilibrium of rods and plates. Eng. Bull. (Vestn. Inzhenerov) 1915, 19, 897–908.
48. Strang, G.; Fix, G.J. An Analysis of the Finite Element Method; Prentice-Hall, Inc.: Englewood Cliffs, NJ, USA, 1973; 318 p.
49. Ciarlet, P.G.; Oden, J. The finite element method for elliptic problems. J. Appl. Mech. 1978, 45, 968.
50. Reddy, J.N. An Introduction to the Finite Element Method, 3rd ed.; McGraw-Hill Education: New York, NY, USA, 2005.
51. Hughes, T.J.R. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis; Dover Publications: Mineola, NY, USA, 2000.
52. Quarteroni, A.; Valli, A. Domain Decomposition Methods for Partial Differential Equations; Oxford University Press: Oxford, UK, 1999.
53. Catarino, P.; Morais, E.; Campos, H. A Note on k-Telephone and Incomplete k-Telephone Numbers. In Proceedings of the International Conference on Mathematics and Its Applications in Science and Engineering, Madrid, Spain, 12–14 July 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 225–238.
54. Doha, E.H.; Hafez, R.M.; Youssri, Y.H. Shifted Jacobi Spectral-Galerkin Method for Solving Hyperbolic Partial Differential Equations. Comput. Math. Appl. 2019, 78, 889–904.
55. Koepf, W. Hypergeometric Summation, 2nd ed.; Universitext Series; Springer: Berlin/Heidelberg, Germany, 2014.
56. Haq, S.; Idrees, M.; Islam, S. Application of Optimal Homotopy Asymptotic Method to Eighth Order Initial and Boundary Value Problems. Int. J. Appl. Math. Comput. Sci. 2010, 2, 73–80.
57. Youssri, Y.H.; Zaky, M.A.; Hafez, R.M. Romanovski-Jacobi spectral schemes for high-order differential equations. Appl. Numer. Math. 2024, 198, 148–159.
58. Wazwaz, A.M. A new algorithm for solving differential equations of Lane–Emden type. Appl. Math. Comput. 2001, 118, 287–310.
59. Liao, S. A new analytic algorithm of Lane–Emden type equations. Appl. Math. Comput. 2003, 142, 1–16.
60. Singh, O.P.; Pandey, R.K.; Singh, V.K. An analytic algorithm of Lane–Emden type equations arising in astrophysics using modified homotopy analysis method. Comput. Phys. Commun. 2009, 180, 1116–1124.
61. Doha, E.H.; Bhrawy, A.H.; Hafez, R.M.; Van Gorder, R.A. A Jacobi rational pseudospectral method for Lane–Emden initial value problems arising in astrophysics on a semi-infinite interval. Comput. Appl. Math. 2014, 33, 607–619.
62. Wang, D.G.; Song, W.Y.; Shi, P.; Karimi, H.R. Approximate analytic and numerical solutions to Lane–Emden equation via fuzzy modeling method. Math. Probl. Eng. 2012, 2012, 259494.
63. Yüzbaşı, S. An efficient algorithm for solving multi-pantograph equation systems. Comput. Math. Appl. 2012, 64, 589–603.
64. Abdel-Halim Hassan, I.H. Application to differential transformation method for solving systems of differential equations. Appl. Math. Model. 2008, 32, 2552–2559.
Figure 1. Graphs of absolute error, approximate and exact solutions at N = 24 for Example 1.
Figure 2. Graphs of exact and approximate solutions for Example 2.
Figure 3. Graph of exact solution and approximate solution U_1(z) at N = 12 for Example 4.
Figure 4. Graph of exact solution and approximate solution U_2(z) at N = 12 for Example 4.
Figure 5. Comparison of the curves of exact U(z, t) and numerical U_N(z, t) at t = 0.1, 0.3, 0.5, 0.7, 0.9 (left) and z = 0.1, 0.3, 0.5, 0.7, 0.9 (right) for Example 5, where N = 9.
Figure 6. The space-time graphs of the absolute error functions for Example 5 at various choices of N.
Table 1. Summary of Key Literature on Galerkin’s Approaches to Solving Differential Equations.

Reference | Methodology | Key Findings
Galerkin [47] | Original formulation of the Galerkin method | Introduced the concept of weighted residuals for approximating solutions to differential equations.
Strang and Fix [48] | Variational methods | Discussed the application of Galerkin methods in finite element analysis, providing a comprehensive framework for numerical solutions.
Ciarlet [49] | Finite Element Methods | Established foundational principles for Galerkin methods in Sobolev spaces and their applications in PDEs.
Reddy [50] | Finite Element Methods | Explored advanced Galerkin approaches for nonlinear problems, emphasizing the versatility of the method in engineering applications.
Hughes [51] | Galerkin methods in fluid dynamics | Investigated the application of Galerkin methods in computational fluid dynamics, highlighting stability and convergence.
Quarteroni and Valli [52] | Numerical Approximation of PDEs | Provided a detailed analysis of Galerkin methods for various types of PDEs, with a focus on error estimation.
Table 2. The absolute errors for Example 1 at different values of z and N.

z | TGM, N = 16 | TGM, N = 20 | TGM, N = 24
0.1 | 1.1452 × 10^{-10} | 1.7763 × 10^{-15} | 1.1102 × 10^{-16}
0.2 | 2.4774 × 10^{-8} | 3.3195 × 10^{-13} | 0.0000
0.3 | 5.4599 × 10^{-7} | 6.8604 × 10^{-12} | 1.1102 × 10^{-16}
0.4 | 4.7633 × 10^{-6} | 5.7179 × 10^{-11} | 7.7715 × 10^{-16}
0.5 | 2.5134 × 10^{-5} | 2.9184 × 10^{-10} | 5.6621 × 10^{-15}
0.6 | 9.6789 × 10^{-5} | 1.0961 × 10^{-9} | 2.8754 × 10^{-14}
0.7 | 3.0048 × 10^{-4} | 3.3377 × 10^{-9} | 1.1390 × 10^{-13}
0.8 | 7.9773 × 10^{-4} | 8.7262 × 10^{-9} | 3.7525 × 10^{-13}
0.9 | 1.8807 × 10^{-3} | 2.0319 × 10^{-8} | 1.0758 × 10^{-12}
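
For readability, the error quantities tabulated in Tables 2–8 can be stated explicitly. The brief formulation below is added for reference and assumes, as in the examples, that U denotes the exact solution, U_N the TelP-based approximate solution, and that z and t range over [0, 1]:

\[
e_N(z) = \bigl|U(z) - U_N(z)\bigr|,
\qquad
\|e_N\|_{\infty} = \max_{(z,t)\in[0,1]^2} \bigl|U(z,t) - U_N(z,t)\bigr|,
\]

with e_{1,N} and e_{2,N} in Tables 5 and 6 denoting the corresponding absolute errors of the solution components U_1 and U_2.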
Table 3. The absolute errors and CPU time (s) for Example 2 at different values of N.

N | R-JTM [57] | CPU Time (s) | N | TGM | CPU Time (s)
8 | 5.11 × 10^{-3} | 11.845 | 7 | 1.25 × 10^{-4} | 1.062
12 | 3.15 × 10^{-8} | 12.016 | 10 | 8.11 × 10^{-9} | 2.078
16 | 1.19 × 10^{-11} | 12.375 | 13 | 1.70 × 10^{-13} | 3.125
20 | 2.96 × 10^{-13} | 12.907 | 15 | 8.03 × 10^{-17} | 4.499
Table 4. Approximate solutions for Lane–Emden Equation (131) for Example 3.

z | TGM (Our Method) | JRPM [61] | VWFM Method [62] | LADM Solution [58]
0.1 | −0.0016658 | −0.0016677 | −0.0014253 | −0.0016658
0.2 | −0.0066534 | −0.0066443 | −0.0062547 | −0.0066534
0.3 | −0.014933 | −0.0149455 | −0.014411 | −0.014933
0.4 | −0.026456 | −0.0264421 | −0.025868 | −0.026456
0.5 | −0.041154 | −0.0411573 | −0.04051 | −0.041154
0.6 | −0.058944 | −0.058964 | −0.058256 | −0.058945
0.7 | −0.079726 | −0.0797169 | −0.079011 | −0.079728
0.8 | −0.10339 | −0.103359 | −0.10264 | −0.10339
0.9 | −0.12979 | −0.129795 | −0.12904 | −0.12981
1.0 | −0.15883 | −0.158859 | −0.15806 | −0.15886
Table 5. Comparison of the absolute errors e_{1,N} for U_1(z) of Equation (132).

x_i | DTM [64], e_{1,6} | BCM [63], e_{1,6} | TGM, e_{1,6} | TGM, e_{1,10}
0.0 | 0 | 0 | 0 | 0
0.2 | 1.5575 × 10^{-2} | 2.8460 × 10^{-8} | 3.8359 × 10^{-8} | 1.2216 × 10^{-10}
0.4 | 5.1209 × 10^{-2} | 1.7820 × 10^{-8} | 5.9136 × 10^{-8} | 5.3899 × 10^{-11}
0.6 | 1.0150 × 10^{-1} | 1.2668 × 10^{-8} | 2.9357 × 10^{-8} | 8.0393 × 10^{-10}
0.8 | 1.6630 × 10^{-1} | 3.3538 × 10^{-8} | 3.6346 × 10^{-8} | 1.1373 × 10^{-9}
1.0 | 2.3351 × 10^{-1} | 6.8657 × 10^{-7} | 2.5086 × 10^{-8} | 1.8155 × 10^{-9}
Table 6. Comparison of the absolute errors e_{2,N} for U_2(z) of Equation (132).

x_i | DTM [64], e_{2,6} | BCM [63], e_{2,6} | TGM, e_{2,6} | TGM, e_{2,10}
0.0 | 0 | 0 | 0 | 0
0.2 | 2.3262 × 10^{-3} | 2.8460 × 10^{-8} | 3.8360 × 10^{-8} | 2.9631 × 10^{-10}
0.4 | 1.6867 × 10^{-2} | 1.7820 × 10^{-8} | 5.9137 × 10^{-8} | 4.4937 × 10^{-10}
0.6 | 5.4499 × 10^{-2} | 1.2668 × 10^{-8} | 2.9358 × 10^{-8} | 6.2494 × 10^{-10}
0.8 | 1.3194 × 10^{-1} | 3.3538 × 10^{-8} | 3.6346 × 10^{-8} | 6.4809 × 10^{-10}
1.0 | 2.8038 × 10^{-1} | 6.8657 × 10^{-7} | 2.5086 × 10^{-8} | 6.8342 × 10^{-10}
Table 7. Comparison of the absolute errors for Example 5 at N = 6.

(z, t) | LW [33] | CW [33] | TGM
(0.1, 0.1) | 9.12 × 10^{-5} | 4.53 × 10^{-4} | 2.95 × 10^{-8}
(0.2, 0.2) | 3.36 × 10^{-5} | 2.07 × 10^{-4} | 4.32 × 10^{-8}
(0.3, 0.3) | 1.86 × 10^{-5} | 6.92 × 10^{-5} | 1.63 × 10^{-8}
(0.4, 0.4) | 2.42 × 10^{-5} | 1.39 × 10^{-4} | 8.50 × 10^{-8}
(0.5, 0.5) | 8.30 × 10^{-5} | 2.64 × 10^{-5} | 6.31 × 10^{-8}
(0.6, 0.6) | 1.38 × 10^{-4} | 7.62 × 10^{-5} | 3.05 × 10^{-8}
(0.7, 0.7) | 9.90 × 10^{-5} | 2.10 × 10^{-5} | 3.15 × 10^{-8}
(0.8, 0.8) | 1.20 × 10^{-6} | 2.01 × 10^{-5} | 1.95 × 10^{-8}
(0.9, 0.9) | 9.00 × 10^{-5} | 3.00 × 10^{-4} | 8.00 × 10^{-9}
Table 8. Comparison of the L_∞ errors for Example 5 at different values of N.

N | LW [33] | CW [33] | TGM
2 | 4.86 × 10^{-2} | 5.94 × 10^{-2} | 8.02 × 10^{-3}
4 | 2.78 × 10^{-4} | 6.00 × 10^{-4} | 6.49 × 10^{-5}
6 | 9.90 × 10^{-5} | 4.53 × 10^{-4} | 1.58 × 10^{-7}
7 | — | — | 1.74 × 10^{-8}
8 | — | — | 2.39 × 10^{-10}
9 | — | — | 2.30 × 10^{-11}
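
As an illustration of how entries such as those in Tables 7 and 8 are typically produced, the minimal sketch below evaluates pointwise absolute errors and the L_∞ error of a numerical approximation on a uniform grid. It is only a hedged, self-contained example: the functions u_exact and u_approx are hypothetical placeholders standing in for the exact solution of Example 5 and the TelP-based solver output; they are not the authors' implementation.

```python
import numpy as np

# Hypothetical "exact" solution U(z, t); a placeholder, not the solution of Example 5.
def u_exact(z, t):
    return np.exp(-t) * np.sin(np.pi * z)

# Hypothetical "numerical" solution U_N(z, t); here the exact solution is perturbed
# slightly to mimic the output of a spectral approximation (illustration only).
def u_approx(z, t, eps=1e-8):
    return u_exact(z, t) + eps * z * t

# Pointwise absolute errors |U - U_N| at selected points, in the spirit of Table 7.
for s in np.arange(0.1, 1.0, 0.1):
    err = abs(u_exact(s, s) - u_approx(s, s))
    print(f"(z, t) = ({s:.1f}, {s:.1f}):  |U - U_N| = {err:.2e}")

# Maximum (L-infinity) error over a uniform grid on [0, 1] x [0, 1], as reported in Table 8.
zz, tt = np.meshgrid(np.linspace(0.0, 1.0, 201), np.linspace(0.0, 1.0, 201))
err_inf = np.max(np.abs(u_exact(zz, tt) - u_approx(zz, tt)))
print(f"L_inf error ≈ {err_inf:.2e}")
```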