Article

A Spectral Approach to Solve High-Order Ordinary Differential Equations: Improved Operational Matrices for Exponential Jacobi Functions

by
Hany Mostafa Ahmed
Department of Mathematics, Faculty of Technology and Education, Helwan University, Cairo 11281, Egypt
Mathematics 2025, 13(19), 3154; https://doi.org/10.3390/math13193154
Submission received: 7 August 2025 / Revised: 23 September 2025 / Accepted: 24 September 2025 / Published: 2 October 2025
(This article belongs to the Special Issue Computational Methods for Numerical Linear Algebra)

Abstract

This paper presents a novel numerical approach to handling ordinary differential equations (ODEs) with initial conditions (ICs) by introducing generalized exponential Jacobi functions (GEJFs). These GEJFs satisfy the associated ICs. A crucial part of this approach is the use of the spectral collocation method (SCM) and the construction of operational matrices (OMs) for the ordinary derivatives (ODs) of GEJFs, which lead to efficient and accurate computations. The convergence and error analysis of the suggested algorithm is proved. We present numerical examples to demonstrate the applicability of the approach.

1. Introduction

In the realm of mathematical modeling, initial value problems (IVPs) play a pivotal role in representing a wide array of physical, biological, and engineering phenomena. The numerical solutions (NUMSs) of IVPs are essential for obtaining approximate solutions (APPSs) when analytical methods prove intractable. Different numerical methods have been developed to obtain APPSs for ODEs. One common approach is to turn the differential equation (DE) into a system of algebraic equations. Foundational texts such as Shen et al. [1] give a full picture of the powerful spectral method paradigms. The literature presents a wide array of basis functions for this purpose, encompassing both non-orthogonal and orthogonal polynomials.
Among the non-orthogonal polynomials, Bernstein polynomials have been effectively employed for high-order ordinary differential equations (ODEs) [2,3], whereas alternative methods have utilized bases such as Bernoulli polynomials [4] and cubic Hermite splines [5]. Investigations have examined additional non-classical polynomials, including generalized Fibonacci and Vieta–Fibonacci polynomials, to address fractional and singular issues [6,7].
Orthogonal polynomials, however, are far more common because of the accuracy of the approximations they yield. The Jacobi polynomials (JPs) family is a flexible and unifying basis for many spectral methods. JPs and Romanovski–Jacobi polynomials, in their broader applications, have been employed to develop resilient Galerkin and matrix methods for a range of complex DEs [8,9,10]. The strength of this family also comes from its important special cases. For example, Gegenbauer polynomials have been effectively utilized in time-fractional cable problems [11]. Chebyshev polynomials underpin many numerical methods, such as solving singular Lane–Emden equations [12], constructing Petrov–Galerkin schemes for beam equations [13], and building hybrid methods for nonlinear PDEs [14]. These ideas have been extended to specialized bases, such as airfoil polynomials for complex fluid models [15]. Beyond the Jacobi family, similar methods have used Chebyshev wavelets [16] and radial basis functions [17]. These various methods have made it possible to accurately solve other types of DEs, such as the Lane–Emden type [18,19] and the first Painlevé equation [20].
Orthogonal JPs possess numerous advantageous properties that render them highly effective in the NUMSs of various types of ODEs, particularly through spectral methods. The primary attributes of JPs are orthogonality, exponential accuracy, and the incorporation of two parameters that provide versatility in setting up the APPSs. The usefulness of these characteristics is evident in the development of various computational techniques. The orthogonality of JPs is essential for the development of Jacobi–Galerkin methods, which provide a strong framework for solving complex DEs [21]. Another useful technique builds well-behaved OMs for derivatives by exploiting the properties of JPs. This approach streamlines the process of converting a DE into an algebraic system, as demonstrated in [22,23].
Orthogonal functions (OFs) have garnered significant interest in addressing different types of DEs. The primary attribute of this strategy is its ability to reduce these problems to the handling of a system of algebraic equations, significantly simplifying the problem. For example, researchers have created fractional-order Legendre functions [24] and shifted fractional-order Jacobi functions to effectively address systems of FDEs [25]. Another significant advancement was the proposal of Exponential Jacobi Polynomials (EJPs) for high-order ODEs to solve problems on semi-infinite intervals [26]. These were later expanded to develop new algorithms for partial DEs in unbounded domains [27,28].
This work aims to expand the use of novel OFs derived from EJPs, named GEJFs, that satisfy the given ICs, in order to develop an SCM capable of addressing the following IVPs:
$$D^{n} Y(\check{t}) = F\big(\check{t},\, Y(\check{t}),\, D Y(\check{t}),\, D^{2} Y(\check{t}),\, \ldots,\, D^{n-1} Y(\check{t})\big), \quad 0 < \check{t} < \infty, \tag{1}$$
subject to the ICs
$$Y^{(j)}(0) = \beta_{j}, \quad j = 0, 1, \ldots, n-1. \tag{2}$$
In this work, we introduce a novel methodology to solve Equations (1) and (2) by establishing new Galerkin OMs for ODs based on the basis vectors of GEJFs. To the best of our knowledge, this strategy, which utilizes the OM of the suggested basis vector presented here, has never been used in the literature for addressing ODEs. This novel technique enables the effective handling and attainment of NUMSs for this class of ODEs.
This article is structured as follows: Section 2 delineates particular characteristics of the JPs and EJFs. Section 3 focuses on the development of innovative OMs for the ODs of GEJFs. In Section 4, we analyze the application of freshly generated OMs in conjunction with the SCM as a numerical approach to solve (1) and (2). The evaluation of the error estimate for the NUMS obtained from this innovative method is elaborated in Section 5. Section 6 provides numerical examples and comparisons with alternative approaches from the literature to illustrate the effectiveness of the proposed strategy. In Section 7, we summarize the key findings and draw implications from our research.

2. Jacobi Polynomials and Exponential Jacobi Functions

This section presents some elementary properties of JPs and EJFs. Additionally, we propose and characterize a new type of OF, called GEJFs.

2.1. Analytical Framework: JP Basis Functions

The JPs, P n ( ς 1 , ς 2 ) ( x ) , ς 1 , ς 2 > 1 , satisfy the orthogonality relation [29]
1 1 w ς 1 , ς 2 ( x ) P n ( ς 1 , ς 2 ) ( x ) P m ( ς 1 , ς 2 ) ( x ) d x = 0 , m n , h n ( ς 1 , ς 2 ) , m = n ,
where w ς 1 , ς 2 ( x ) = ( 1 x ) ς 1 ( 1 + x ) ς 2 , and h n ( ς 1 , ς 2 ) = 2 λ Γ ( n + ς 1 + 1 ) Γ ( n + ς 2 + 1 ) n ! ( 2 n + λ ) Γ ( n + λ ) , λ = ς 1 + ς 2 + 1 .
The analytical form of $P_{i}^{(\varsigma_1,\varsigma_2)}(x)$ can be written as follows ([30], p. 436):
$$P_{i}^{(\varsigma_1,\varsigma_2)}(x) = \sum_{k=0}^{i} c_{k}^{(i)}\, (1-x)^{k}, \tag{4}$$
where
$$c_{k}^{(i)} = \frac{(-1)^{k}\,(\varsigma_1+1)_{i}\,(i+\lambda)_{k}}{2^{k}\,(i-k)!\,k!\,(\varsigma_1+1)_{k}}.$$
Alternatively, the expression for $(1-x)^{k}$ in terms of $P_{r}^{(\varsigma_1,\varsigma_2)}(x)$ has the following form ([30], p. 441):
$$(1-x)^{k} = \sum_{r=0}^{k} b_{r}^{(k)}\, P_{r}^{(\varsigma_1,\varsigma_2)}(x), \tag{5}$$
where
$$b_{r}^{(k)} = \frac{(-1)^{r}\,2^{k}\,k!\,(\varsigma_1+1)_{k}\,(2r+\lambda)}{(k-r)!\,(\varsigma_1+1)_{r}\,(r+\lambda)_{k+1}}.$$
Additionally, using the following relation ([30], p. 437):
$$P_{r}^{(\varsigma_1,\varsigma_2)}(-x) = (-1)^{r}\, P_{r}^{(\varsigma_2,\varsigma_1)}(x), \tag{6}$$
it is easy to see that
$$(1+x)^{k} = \sum_{r=0}^{k} \bar{b}_{r}^{(k)}\, P_{r}^{(\varsigma_1,\varsigma_2)}(x), \tag{7}$$
where
$$\bar{b}_{r}^{(k)} = \frac{2^{k}\,k!\,(\varsigma_2+1)_{k}\,(2r+\lambda)}{(k-r)!\,(\varsigma_2+1)_{r}\,(r+\lambda)_{k+1}}.$$
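The properties above can be checked numerically. The following minimal Python sketch (not part of the original paper; the function names are illustrative) evaluates $P_n^{(\varsigma_1,\varsigma_2)}(x)$ by the classical three-term recurrence and verifies the endpoint value $P_n^{(\varsigma_1,\varsigma_2)}(1) = (\varsigma_1+1)_n/n!$ together with the reflection relation between $P_r^{(\varsigma_1,\varsigma_2)}(-x)$ and $P_r^{(\varsigma_2,\varsigma_1)}(x)$:

```python
from math import factorial

def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^{(a,b)}(x) via the classical
    three-term recurrence (Szego, Orthogonal Polynomials, Ch. IV)."""
    if n == 0:
        return 1.0
    p_prev = 1.0
    p = 0.5 * ((a + b + 2.0) * x + (a - b))  # P_1^{(a,b)}(x)
    for m in range(2, n + 1):
        c = 2.0 * m + a + b
        a1 = 2.0 * m * (m + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 2.0) * (c - 1.0) * c
        a4 = 2.0 * (m + a - 1.0) * (m + b - 1.0) * c
        p_prev, p = p, ((a2 + a3 * x) * p - a4 * p_prev) / a1
    return p

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)."""
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

a, b, n = 1.0, 0.5, 3
# Endpoint values: P_n(1) = (a+1)_n / n!,  P_n(-1) = (-1)^n (b+1)_n / n!
assert abs(jacobi(n, a, b, 1.0) - poch(a + 1.0, n) / factorial(n)) < 1e-12
assert abs(jacobi(n, a, b, -1.0) - (-1) ** n * poch(b + 1.0, n) / factorial(n)) < 1e-12
# Reflection relation: P_r^{(a,b)}(-x) = (-1)^r P_r^{(b,a)}(x)
x = 0.3
assert abs(jacobi(n, a, b, -x) - (-1) ** n * jacobi(n, b, a, x)) < 1e-12
```

For $\varsigma_1 = \varsigma_2 = 0$ the recurrence reduces to the Legendre case, which gives an easy independent cross-check.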

2.2. Introducing EJFs and GEJFs

The so-called EJFs are defined by
$$J_{n}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = P_{n}^{(\varsigma_1,\varsigma_2)}\big(1 - 2e^{-\check{t}/\aleph}\big), \quad \check{t} \in [0, \infty), \tag{8}$$
and have the following analytical form [26]:
$$J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \sum_{k=0}^{i} \hat{c}_{k}^{(i)}\, e^{-\check{t}k/\aleph}, \qquad \hat{c}_{k}^{(i)} = 2^{k}\, c_{k}^{(i)}. \tag{9}$$
These functions satisfy the following orthogonality relation [26]:
$$\int_{0}^{\infty} w_{e}^{\varsigma_1,\varsigma_2}(\check{t})\, J_{n}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})\, J_{m}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})\, d\check{t} = \begin{cases} \aleph\, 2^{-\lambda}\, h_{n}^{(\varsigma_1,\varsigma_2)}, & m = n, \\ 0, & m \neq n, \end{cases}$$
where $w_{e}^{\varsigma_1,\varsigma_2}(\check{t}) = e^{-\check{t}(\varsigma_1+1)/\aleph}\,\big(1 - e^{-\check{t}/\aleph}\big)^{\varsigma_2}$. In addition, they satisfy the following recurrence relation [26]:
$$e^{-\check{t}/\aleph}\, J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \alpha_{1,i}\, J_{i+1}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \alpha_{2,i}\, J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \alpha_{3,i}\, J_{i-1}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}), \tag{10}$$
where
$$\alpha_{1,i} = -\frac{(i+1)(i+\lambda)}{(2i+\lambda)(2i+\lambda+1)}, \qquad \alpha_{2,i} = \frac{(\varsigma_1+1)(\lambda-1) + 2i^{2} + 2i\lambda}{(2i+\lambda-1)(2i+\lambda+1)}, \qquad \alpha_{3,i} = -\frac{(\varsigma_1+i)(\varsigma_2+i)}{(2i+\lambda-1)(2i+\lambda)}. \tag{11}$$
Additionally, $J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$ has the following special value ([26], p. 437):
$$J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(0) = P_{i}^{(\varsigma_1,\varsigma_2)}(-1) = \frac{(-1)^{i}\,(\varsigma_2+1)_{i}}{i!}. \tag{12}$$
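As an illustration (not from the paper; names are hypothetical), the exponential substitution can be coded directly on top of any Jacobi evaluator: $\check{t} = 0$ maps to $x = -1$, which yields the special value above, while $\check{t} \to \infty$ maps to $x \to 1$:

```python
import math

def jacobi(n, a, b, x):
    # Three-term recurrence for P_n^{(a,b)}(x).
    if n == 0:
        return 1.0
    p_prev, p = 1.0, 0.5 * ((a + b + 2.0) * x + (a - b))
    for m in range(2, n + 1):
        c = 2.0 * m + a + b
        a1 = 2.0 * m * (m + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 2.0) * (c - 1.0) * c
        a4 = 2.0 * (m + a - 1.0) * (m + b - 1.0) * c
        p_prev, p = p, ((a2 + a3 * x) * p - a4 * p_prev) / a1
    return p

def ejf(i, a, b, aleph, t):
    """Exponential Jacobi function J_i^{(a,b,aleph)}(t) on [0, infinity)."""
    return jacobi(i, a, b, 1.0 - 2.0 * math.exp(-t / aleph))

a, b, aleph = 1.0, 0.5, 2.0
# t = 0 gives P_i(-1) = (-1)^i (b+1)_i / i!; for i = 3 this is -2.1875.
assert abs(ejf(3, a, b, aleph, 0.0) + 2.1875) < 1e-12
# t -> infinity gives P_i(1) = (a+1)_i / i!, which equals 4 for i = 3.
assert abs(ejf(3, a, b, aleph, 100.0) - 4.0) < 1e-9
```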
The following lemma is needed throughout the paper.
Lemma 1. 
The function $J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$ can be written in the following form:
$$J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \sum_{k=0}^{i} B_{k}^{(i)}\, \big(1 - e^{-\check{t}/\aleph}\big)^{k}, \qquad B_{k}^{(i)} = \frac{(-i-\varsigma_2)_{i-k}\,(i+\lambda)_{k}}{k!\,(i-k)!}. \tag{13}$$
Proof. 
Using Formula (9),
$$J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \sum_{k=0}^{i} (-1)^{k}\,\hat{c}_{k}^{(i)}\, \Big(\big(1 - e^{-\check{t}/\aleph}\big) - 1\Big)^{k} = \sum_{k=0}^{i} \hat{c}_{k}^{(i)} \sum_{r=0}^{k} (-1)^{r} \binom{k}{r}\, \big(1 - e^{-\check{t}/\aleph}\big)^{r}. \tag{14}$$
By expanding and collecting similar terms, and after some manipulation, Equation (14) takes the form (13).    □
The following inversion formulae given in Lemma 2 are needed throughout the paper.
Lemma 2. 
The expressions $e^{-\check{t}k/\aleph}$ and $\big(1 - e^{-\check{t}/\aleph}\big)^{k}$, $k = 0, 1, 2, \ldots$, have the following expansions:
$$e^{-\check{t}k/\aleph} = \sum_{r=0}^{k} d_{r}^{(k)}\, J_{r}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}), \qquad d_{r}^{(k)} = 2^{-k}\, b_{r}^{(k)}, \tag{15}$$
and
$$\big(1 - e^{-\check{t}/\aleph}\big)^{k} = \sum_{r=0}^{k} \hat{d}_{r}^{(k)}\, J_{r}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}), \qquad \hat{d}_{r}^{(k)} = 2^{-k}\, \bar{b}_{r}^{(k)}. \tag{16}$$
Proof. 
The proof is a direct consequence of Formulas (5) and (7), obtained by replacing $x$ with $1 - 2e^{-\check{t}/\aleph}$.    □
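The inversion coefficients of Lemma 2 can be sanity-checked numerically. The sketch below (illustrative, not from the paper) implements the coefficient $b_r^{(k)}$ as reconstructed above and verifies the first expansion of Lemma 2, i.e., $e^{-\check{t}k/\aleph} = \sum_r 2^{-k} b_r^{(k)} J_r^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$, at a sample point:

```python
import math

def jacobi(n, a, b, x):
    # Three-term recurrence for P_n^{(a,b)}(x).
    if n == 0:
        return 1.0
    p_prev, p = 1.0, 0.5 * ((a + b + 2.0) * x + (a - b))
    for m in range(2, n + 1):
        c = 2.0 * m + a + b
        a1 = 2.0 * m * (m + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 2.0) * (c - 1.0) * c
        a4 = 2.0 * (m + a - 1.0) * (m + b - 1.0) * c
        p_prev, p = p, ((a2 + a3 * x) * p - a4 * p_prev) / a1
    return p

def poch(a, n):
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

def b_coeff(r, k, a, b_par, lam):
    # b_r^{(k)} from the expansion (1-x)^k = sum_r b_r^{(k)} P_r^{(a,b)}(x)
    num = (-1.0) ** r * 2.0 ** k * math.factorial(k) * poch(a + 1.0, k) * (2.0 * r + lam)
    den = math.factorial(k - r) * poch(a + 1.0, r) * poch(r + lam, k + 1)
    return num / den

a, b_par, aleph = 1.0, 0.5, 1.3
lam = a + b_par + 1.0
k, t = 2, 0.7
x = 1.0 - 2.0 * math.exp(-t / aleph)
lhs = math.exp(-t * k / aleph)          # equals ((1-x)/2)^k under the substitution
rhs = sum(2.0 ** (-k) * b_coeff(r, k, a, b_par, lam) * jacobi(r, a, b_par, x)
          for r in range(k + 1))
assert abs(lhs - rhs) < 1e-12
```

In the Legendre case ($\varsigma_1 = \varsigma_2 = 0$, $\lambda = 1$) the coefficients reproduce the familiar expansion $(1-x)^2 = \tfrac{4}{3}P_0 - 2P_1 + \tfrac{2}{3}P_2$, which is a convenient hand check.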
The proof of Corollary 1 is a direct consequence of (10) and (16).
Corollary 1. 
$$e^{-\check{t}/\aleph}\,\big(1 - e^{-\check{t}/\aleph}\big)^{k} = \sum_{r=0}^{k+1} M_{r}^{(k)}\, J_{r}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}), \tag{17}$$
where
$$M_{r}^{(k)} = \alpha_{1,r-1}\, \hat{d}_{r-1}^{(k)} + \alpha_{2,r}\, \hat{d}_{r}^{(k)} + \alpha_{3,r+1}\, \hat{d}_{r+1}^{(k)}, \qquad \hat{d}_{-1}^{(k)} = \hat{d}_{k+1}^{(k)} = 0. \tag{18}$$
New GEJFs $\big\{\phi_{n,j}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})\big\}_{j \ge 0}$ are introduced,
$$\phi_{n,j}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \big(1 - e^{-\check{t}/\aleph}\big)^{n}\, J_{j}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}), \tag{19}$$
which satisfy $D^{q}\phi_{n,j}^{(\varsigma_1,\varsigma_2,\aleph)}(0) = 0$, $q = 0, 1, \ldots, n-1$, and the orthogonality relation
$$\int_{0}^{\infty} \hat{w}_{e,n}^{\varsigma_1,\varsigma_2}(\check{t})\, \phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})\, \phi_{n,j}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})\, d\check{t} = \begin{cases} \aleph\, 2^{-\lambda}\, h_{i}^{(\varsigma_1,\varsigma_2)}, & i = j, \\ 0, & i \neq j, \end{cases}$$
where $\hat{w}_{e,n}^{\varsigma_1,\varsigma_2}(\check{t}) = \big(1 - e^{-\check{t}/\aleph}\big)^{-2n}\, w_{e}^{\varsigma_1,\varsigma_2}(\check{t})$.
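Both stated properties of the GEJFs can be checked numerically. Under the substitution $s = e^{-\check{t}/\aleph}$ the weighted inner product reduces to $\aleph \int_0^1 s^{\varsigma_1}(1-s)^{\varsigma_2} P_i(1-2s)\,P_j(1-2s)\,ds$, which the sketch below (illustrative, not from the paper) approximates with a simple midpoint rule; it also checks that $\phi_{n,j}$ vanishes at $\check{t} = 0$:

```python
import math

def jacobi(n, a, b, x):
    # Three-term recurrence for P_n^{(a,b)}(x).
    if n == 0:
        return 1.0
    p_prev, p = 1.0, 0.5 * ((a + b + 2.0) * x + (a - b))
    for m in range(2, n + 1):
        c = 2.0 * m + a + b
        a1 = 2.0 * m * (m + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 2.0) * (c - 1.0) * c
        a4 = 2.0 * (m + a - 1.0) * (m + b - 1.0) * c
        p_prev, p = p, ((a2 + a3 * x) * p - a4 * p_prev) / a1
    return p

def gejf(nn, j, a, b, aleph, t):
    """Generalized exponential Jacobi function phi_{n,j}^{(a,b,aleph)}(t)."""
    u = 1.0 - math.exp(-t / aleph)
    return u ** nn * jacobi(j, a, b, 1.0 - 2.0 * math.exp(-t / aleph))

a, b, aleph, nn = 1.0, 0.5, 1.5, 2
lam = a + b + 1.0

# phi_{n,j}(0) = 0 by construction
assert gejf(nn, 3, a, b, aleph, 0.0) == 0.0

def inner(i, j, npts=20000):
    # Midpoint rule for aleph * int_0^1 s^a (1-s)^b P_i(1-2s) P_j(1-2s) ds
    h = 1.0 / npts
    total = 0.0
    for m in range(npts):
        s = (m + 0.5) * h
        w = s ** a * (1.0 - s) ** b
        total += w * jacobi(i, a, b, 1.0 - 2.0 * s) * jacobi(j, a, b, 1.0 - 2.0 * s)
    return aleph * total * h

g = math.gamma
diag = aleph * g(1 + a + 1) * g(1 + b + 1) / (math.factorial(1) * (2 + lam) * g(1 + lam))
assert abs(inner(1, 2)) < 1e-4          # off-diagonal entries vanish
assert abs(inner(1, 1) - diag) < 1e-3   # diagonal equals aleph * 2^{-lambda} * h_1
```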
Remark 1. 
Since
$$\phi_{0,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}),$$
$\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$ is a generalization of $J_{i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$.

3. OM for ODs of $\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$

In this section, the main theorem is presented, which provides the OM for ODs of the vector
$$\Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \big[\phi_{n,0}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}),\, \phi_{n,1}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}),\, \ldots,\, \phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})\big]^{T}. \tag{20}$$
Theorem 1. 
The first derivative of $\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$, for all $i \ge 0$, can be written in the form
$$D\,\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \sum_{j=0}^{i} \Theta_{i,j}^{\varsigma_1,\varsigma_2}(n,\aleph)\, \phi_{n,j}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + Q_{n,i}(\check{t}), \tag{21}$$
where $Q_{n,i}(\check{t}) = \dfrac{n}{\aleph}\, B_{0}^{(i)} \displaystyle\sum_{r=0}^{n} M_{r}^{(n-1)}\, J_{r}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$, and
$$\Theta_{i,j}^{\varsigma_1,\varsigma_2}(n,\aleph) = \frac{1}{\aleph} \sum_{k=0}^{i-j} \delta_{k+j}\, (k+j+n)\, M_{j}^{(k+j-1)}\, B_{k+j}^{(i)}, \tag{22}$$
where $\delta_{0} = 0$ and $\delta_{j} = 1$ for $j \ge 1$.
Proof. 
In view of (13), we obtain
$$D\,\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \frac{1}{\aleph}\, e^{-\check{t}/\aleph} \sum_{k=0}^{i} B_{k}^{(i)}\,(k+n)\,\big(1-e^{-\check{t}/\aleph}\big)^{k+n-1} = \frac{1}{\aleph}\, e^{-\check{t}/\aleph} \sum_{k=0}^{i-1} B_{k+1}^{(i)}\,(k+n+1)\,\big(1-e^{-\check{t}/\aleph}\big)^{k+n} + \frac{n}{\aleph}\, B_{0}^{(i)}\, e^{-\check{t}/\aleph}\, \big(1-e^{-\check{t}/\aleph}\big)^{n-1}. \tag{23}$$
Using Corollary 1, we get
$$D\,\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = \frac{1}{\aleph}\, \big(1-e^{-\check{t}/\aleph}\big)^{n} \sum_{k=0}^{i-1} B_{k+1}^{(i)}\,(k+n+1) \sum_{r=0}^{k+1} M_{r}^{(k)}\, J_{r}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \frac{n}{\aleph}\, B_{0}^{(i)} \sum_{r=0}^{n} M_{r}^{(n-1)}\, J_{r}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}). \tag{24}$$
Then, expanding and collecting similar terms leads to (21).    □
We have now reached the main result of this section, namely the OM of the vector (20), which is a direct consequence of Theorem 1:
Corollary 2. 
The $m$th derivative of the vector $\Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$ has the form
$$\frac{d^{m}\Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})}{d\check{t}^{m}} = G_{n}^{m}\, \Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \Upsilon_{n,\ell}^{(m)}(\check{t}), \tag{25}$$
with $\Upsilon_{n,\ell}^{(m)}(\check{t}) = \sum_{k=0}^{m-1} G_{n}^{k}\, Q_{n,\ell}^{(m-k-1)}(\check{t})$, where $Q_{n,\ell}(\check{t}) = \big[Q_{n,0}(\check{t}),\, Q_{n,1}(\check{t}),\, \ldots,\, Q_{n,\ell}(\check{t})\big]^{T}$ and $G_{n} = \big(g_{i,j}^{(n)}\big)_{0 \le i,j \le \ell}$ is a matrix of order $(\ell+1) \times (\ell+1)$, which can be expressed explicitly as
$$G_{n} = \begin{pmatrix}
\Theta_{0,0}^{\varsigma_1,\varsigma_2}(n,\aleph) & 0 & \cdots & \cdots & 0 \\
\Theta_{1,0}^{\varsigma_1,\varsigma_2}(n,\aleph) & \Theta_{1,1}^{\varsigma_1,\varsigma_2}(n,\aleph) & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & & \vdots \\
\Theta_{i,0}^{\varsigma_1,\varsigma_2}(n,\aleph) & \cdots & \Theta_{i,i}^{\varsigma_1,\varsigma_2}(n,\aleph) & 0 & 0 \\
\Theta_{\ell,0}^{\varsigma_1,\varsigma_2}(n,\aleph) & \cdots & \cdots & \cdots & \Theta_{\ell,\ell}^{\varsigma_1,\varsigma_2}(n,\aleph)
\end{pmatrix}, \tag{26}$$
where
$$g_{i,j}^{(n)} = \begin{cases} \Theta_{i,j}^{\varsigma_1,\varsigma_2}(n,\aleph), & i \ge j, \\ 0, & \text{otherwise}. \end{cases}$$

4. Numerical Handling for IVP (1) and (2)

Here, we utilize the OM obtained in Corollary 2 to derive NUMSs for (1) and (2).

4.1. Homogeneous ICs

Suppose we have homogeneous initial conditions (HICs), i.e., $\beta_j = 0$, $j = 0, 1, 2, \ldots, n-1$. We propose the APPS to $Y(\check{t})$ as
$$Y(\check{t}) \approx Y_{\ell}(\check{t}) = \sum_{i=0}^{\ell} c_{i}\, \phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) = A^{T}\, \Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}), \tag{27}$$
where $A = [c_0, c_1, \ldots, c_{\ell}]^{T}$, and the unknown constants $c_i$, $i = 0, 1, \ldots, \ell$, are to be determined.
Corollary 2 enables us to obtain
$$D^{m} Y_{\ell}(\check{t}) = A^{T}\Big(G_{n}^{m}\, \Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \Upsilon_{n,\ell}^{(m)}(\check{t})\Big), \quad m = 1, 2, \ldots, n-1. \tag{28}$$
In this method, using the approximations (27) and (28) allows one to write the residual of Equation (1) as
$$R_{n,\ell}(\check{t}) = A^{T}\Big(G_{n}^{n}\, \Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \Upsilon_{n,\ell}^{(n)}(\check{t})\Big) - F\Big(\check{t},\; A^{T}\Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}),\; A^{T}\big(G_{n}\Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \Upsilon_{n,\ell}^{(1)}(\check{t})\big),\; \ldots,\; A^{T}\big(G_{n}^{n-1}\Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \Upsilon_{n,\ell}^{(n-1)}(\check{t})\big)\Big). \tag{29}$$
We suggest a spectral approach, referred to as GEJFCOPMM, using the SCM together with the previously computed OMs. The collocation points for GEJFCOPMM are the $(\ell+1)$ zeros of the form
$$\check{t}_{\ell,j} = -\aleph\, \ln\Big(\frac{1 - x_{\ell,j}}{2}\Big), \quad j = 0, 1, \ldots, \ell, \tag{30}$$
where $P_{\ell+1}^{(\varsigma_1,\varsigma_2)}(x_{\ell,j}) = J_{\ell+1}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}_{\ell,j}) = 0$, such that
$$R_{n,\ell}(\check{t}_{\ell,i}) = 0, \quad i = 0, 1, \ldots, \ell. \tag{31}$$
By solving (31) using Newton's iterative method, we obtain the unknown constants $c_i$, $i = 0, 1, \ldots, \ell$, and the desired NUMS (27) is computed.
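The collocation grid can be generated from the zeros of $P_{\ell+1}^{(\varsigma_1,\varsigma_2)}$; the paper uses Mathematica's NSolve for this step, while the sketch below (illustrative, not the author's implementation) brackets the sign changes of the recurrence-evaluated polynomial and bisects, then maps each zero $x_{\ell,j} \in (-1,1)$ to $\check{t}_{\ell,j} = -\aleph \ln\big((1-x_{\ell,j})/2\big) \in (0,\infty)$:

```python
import math

def jacobi(n, a, b, x):
    # Three-term recurrence for P_n^{(a,b)}(x).
    if n == 0:
        return 1.0
    p_prev, p = 1.0, 0.5 * ((a + b + 2.0) * x + (a - b))
    for m in range(2, n + 1):
        c = 2.0 * m + a + b
        a1 = 2.0 * m * (m + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 2.0) * (c - 1.0) * c
        a4 = 2.0 * (m + a - 1.0) * (m + b - 1.0) * c
        p_prev, p = p, ((a2 + a3 * x) * p - a4 * p_prev) / a1
    return p

def jacobi_zeros(n, a, b, grid=4000):
    """All n zeros of P_n^{(a,b)} in (-1, 1) via bracketing plus bisection."""
    xs = [-1.0 + 2.0 * k / grid for k in range(grid + 1)]
    roots = []
    for lo, hi in zip(xs, xs[1:]):
        flo, fhi = jacobi(n, a, b, lo), jacobi(n, a, b, hi)
        if flo * fhi < 0.0:
            for _ in range(100):
                mid = 0.5 * (lo + hi)
                if flo * jacobi(n, a, b, mid) <= 0.0:
                    hi = mid
                else:
                    lo, flo = mid, jacobi(n, a, b, mid)
            roots.append(0.5 * (lo + hi))
    return roots

a, b, aleph, ell = 1.0, 0.5, 1.0, 5
xz = jacobi_zeros(ell + 1, a, b)
tz = [-aleph * math.log((1.0 - x) / 2.0) for x in xz]

assert len(tz) == ell + 1                 # (ell + 1) collocation points
assert all(t > 0.0 for t in tz)           # all points map into (0, infinity)
assert all(abs(jacobi(ell + 1, a, b, x)) < 1e-10 for x in xz)
```

Note that the map is monotone, so the ordering of the $x$-zeros is preserved on the semi-infinite grid.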

4.2. Nonhomogeneous ICs

Here, GEJFCOPMM is developed to solve (1) and (2) by transforming them into an equivalent form with HICs. This transformation has the form
$$\bar{Y}(\check{t}) = Y(\check{t}) - q_{n}(\check{t}), \qquad q_{n}(\check{t}) = \sum_{i=0}^{n-1} \frac{\beta_{i}}{i!}\, \check{t}^{i}, \tag{32}$$
which leads to
$$D^{n}\bar{Y}(\check{t}) = -D^{n}q_{n}(\check{t}) + F\big(\check{t},\, \bar{Y}(\check{t}) + q_{n}(\check{t}),\, D(\bar{Y}(\check{t}) + q_{n}(\check{t})),\, D^{2}(\bar{Y}(\check{t}) + q_{n}(\check{t})),\, \ldots,\, D^{n-1}(\bar{Y}(\check{t}) + q_{n}(\check{t}))\big), \quad 0 \le \check{t} < \infty, \tag{33}$$
where
$$\bar{Y}^{(m)}(0) = 0, \quad m = 0, 1, \ldots, n-1. \tag{34}$$
Then,
$$Y(\check{t}) = \bar{Y}(\check{t}) + q_{n}(\check{t}). \tag{35}$$
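The shift above is simply the Taylor polynomial of the ICs. The minimal sketch below (not from the paper; example values of $\beta_j$ are hypothetical) builds $q_n$ from given $\beta_j$ and confirms that $q_n^{(j)}(0) = \beta_j$, so that $\bar{Y}^{(j)}(0) = Y^{(j)}(0) - q_n^{(j)}(0) = 0$:

```python
from math import factorial

def q_coeffs(betas):
    """Monomial coefficients of q_n(t) = sum_i beta_i t^i / i!."""
    return [beta / factorial(i) for i, beta in enumerate(betas)]

def deriv_at_zero(coeffs, j):
    """j-th derivative at t = 0 of a polynomial given by its coefficients."""
    return factorial(j) * coeffs[j] if j < len(coeffs) else 0.0

betas = [1.0, -2.0, 0.5]          # example ICs: Y(0), Y'(0), Y''(0)
q = q_coeffs(betas)
for j, beta in enumerate(betas):
    assert deriv_at_zero(q, j) == beta   # q_n^{(j)}(0) = beta_j
assert deriv_at_zero(q, 3) == 0.0        # deg q_n = n - 1, so D^n q_n vanishes
```

The last assertion also explains why the term $D^n q_n$ in the transformed equation drops out: $q_n$ has degree $n-1$.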
Remark 2. 
The algorithm presented here is used to solve multiple numerical examples in Section 6. The computations were performed using Mathematica 13.3 on a computer system equipped with an Intel(R) Core(TM) i9-10850 CPU operating at 3.60 GHz, featuring 10 cores and 20 logical processors. The algorithmic steps for solving the ODEs using GEJFCOPMM are expressed in Algorithm 1.
Algorithm 1 GEJFCOPMM algorithm
Step 1. Given $\varsigma_1$, $\varsigma_2$, $\aleph$, $\ell$, $n$, and $\beta_j$, $j = 0, \ldots, n-1$.
Step 2. Define the basis $\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$ and the vectors $A$, $\Phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$, and compute the elements of the $(\ell+1) \times (\ell+1)$ matrix $G_{n}$.
Step 3. Evaluate $A^{T}G_{n}^{m}$ and $\Upsilon_{n,\ell}^{(m)}(\check{t})$, $m = 1, \ldots, n$.
Step 4. Define $R_{n,\ell}(\check{t})$ as in Equation (29).
Step 5. List $R_{n,\ell}(\check{t}_{\ell,i}) = 0$, $i = 0, 1, \ldots, \ell$, as defined in Equation (31).
Step 6. Use Mathematica's built-in numerical solver to obtain the solution of the system of equations from Step 5.
Step 7. Evaluate $Y_{\ell}(\check{t})$ defined in Equation (27) (in the case of homogeneous ICs).
Step 8. Evaluate $q_{n}(\check{t})$ and $Y_{\ell}(\check{t})$ defined in Equation (35) (in the case of nonhomogeneous ICs).
Remark 3. 
For interested readers, we utilized several built-in functions in Mathematica 13.3 for our numerical implementation of the provided algorithms. Below is a summary of the tools used, along with concise information about each function.
  • Array: For creating and manipulating arrays, which are used to hold coefficients and OMs throughout the computations.
  • NSolve: For finding NUMSs to nonlinear algebraic equations; it is utilized to compute the zeros of $P_{\ell+1}^{(\varsigma_1,\varsigma_2)}(x)$.
  • FindRoot: For solving equations by finding roots; it is essential in handling the nonlinear aspects of our system, using a zero initial approximation.
  • JacobiP: For generating $\phi_{n,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$, which serve as basis functions that provide the foundation for approximating the solution in our collocation method.
  • D: To compute ordinary derivatives to determine the defined residuals.
  • Table: For generating lists and arrays of values based on specified formulas, particularly for collocation points and other parameterized data.

5. Convergence and Error Analysis (CEA)

In this part, we present the CEA of GEJFCOPMM. In this respect, we consider the space
$$S_{n,\ell} = \mathrm{Span}\big\{\phi_{n,0}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}),\, \phi_{n,1}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}),\, \ldots,\, \phi_{n,\ell}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})\big\}.$$
Additionally, the absolute error (AE) between $Y(\check{t})$ and its approximation $Y_{\ell}(\check{t})$ is defined by
$$E_{\ell}(\check{t}) = Y(\check{t}) - Y_{\ell}(\check{t}). \tag{36}$$
In this paper, the error obtained by GEJFCOPMM is analyzed using the weighted $L^{2}$ norm
$$\|E_{\ell}\|_{2} = \|Y - Y_{\ell}\|_{2} = \left(\int_{0}^{\infty} \hat{w}_{e,n}^{\varsigma_1,\varsigma_2}(\check{t})\, |Y(\check{t}) - Y_{\ell}(\check{t})|^{2}\, d\check{t}\right)^{1/2}, \tag{37}$$
and the $L^{\infty}$ (MAE) norm
$$\|E_{\ell}\|_{\infty} = \|Y - Y_{\ell}\|_{\infty} = \max_{0 \le \check{t} \le L} |Y(\check{t}) - Y_{\ell}(\check{t})|. \tag{38}$$
The following lemma is needed to prove Theorem 2:
Lemma 3. 
Consider the zeros $x_{\ell,k}$, $\check{t}_{\ell,k}$, $k = 0, 1, \ldots, \ell$; then, there exists a constant $M$ such that
$$|\check{t} - \check{t}_{\ell,k}| < \aleph M\, |x - x_{\ell,k}|. \tag{39}$$
Proof. 
In view of (30), we have
$$|\check{t} - \check{t}_{\ell,k}| = \Big| -\aleph\,\ln\Big(\frac{1-x}{2}\Big) + \aleph\,\ln\Big(\frac{1-x_{\ell,k}}{2}\Big) \Big| = \aleph\,\Big|\ln\Big(\frac{1-x}{1-x_{\ell,k}}\Big)\Big|. \tag{40}$$
Consider the function $f(x) = \ln\big(\frac{1-x}{1-x_{\ell,k}}\big)$, $x \in (-1, 1)$. Using the mean value theorem, we get
$$f(x) = f'(\zeta_{x})\,(x - x_{\ell,k}), \quad \zeta_{x} \in (x_{\ell,k}, x); \tag{41}$$
then,
$$|f(x)| \le M\, |x - x_{\ell,k}|, \qquad M = \max_{-1 \le x \le 1} |f'(x)|. \tag{42}$$
Applying the inequality (42) to (40), we obtain
$$|\check{t} - \check{t}_{\ell,k}| \le \aleph M\, |x - x_{\ell,k}|, \tag{43}$$
which completes the proof of the lemma. □
Theorem 2. 
Assume that $Y(\check{t}) = (1 - e^{-\check{t}/\aleph})^{n}\, u(\check{t})$, and that $Y_{\ell}(\check{t})$ has the form (27) and represents the best possible approximation (BPA) to $Y(\check{t})$ out of $S_{n,\ell}$. Then, there is a constant $K$ such that
$$\|E_{\ell}\|_{\infty} \le K\, 2^{\lambda-1} \Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}, \tag{44}$$
and
$$\|E_{\ell}\|_{2} \le K\, 2^{\lambda-1} \sqrt{\frac{\aleph\,\Gamma(\varsigma_1+1)\,\Gamma(\varsigma_2+1)}{\Gamma(\lambda+1)}}\; \Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}, \tag{45}$$
where $q = \max\{\varsigma_1, \varsigma_2, -1/2\} < \ell+1$, $K = (\aleph M)^{\ell+1} \max_{\check{t} \in [0,\infty)} \big|\frac{d^{\ell+1}u(\eta_{\check{t}})}{d\check{t}^{\ell+1}}\big|$, $\eta_{\check{t}} \in [0, \infty)$, and $M$ is the constant given by Lemma 3.
Proof. 
Using Theorem 3.3 in ([31], p. 109), $u(\check{t})$ takes the form
$$u(\check{t}) = u_{\ell}(\check{t}) + \frac{1}{(\ell+1)!}\, \frac{d^{\ell+1}u(\eta_{\check{t}})}{d\check{t}^{\ell+1}} \prod_{k=0}^{\ell} (\check{t} - \check{t}_{k}), \quad \eta_{\check{t}} \in [0, \infty), \tag{46}$$
where $u_{\ell}(\check{t})$ (which has the form of Equation (3.1) in ([31], p. 108)) is the interpolating polynomial for $u(\check{t})$ at the points $\check{t}_{k}$, $k = 0, 1, \ldots, \ell$, that satisfy $J_{\ell+1}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}_{k}) = 0$, with $\ell > q - 1$. In view of Lemma 3, we have
$$|u(\check{t}) - u_{\ell}(\check{t})| \le \frac{K}{(\ell+1)!}\, \Big|\prod_{k=0}^{\ell} (x - x_{k})\Big|. \tag{47}$$
Then, we get
$$\|u - u_{\ell}\|_{\infty} \le \frac{K}{(\ell+1)!}\, \Big\|\prod_{k=0}^{\ell} (x - x_{k})\Big\|_{\infty} = \frac{K}{(\ell+1)!\; c_{\ell+1}^{\ell+1}}\, \big\|P_{\ell+1}^{(\varsigma_1,\varsigma_2)}\big\|_{\infty}, \tag{48}$$
where $c_{\ell+1}^{\ell+1} = \frac{\Gamma(2\ell+\lambda+2)}{2^{\ell+1}\,(\ell+1)!\,\Gamma(\ell+\lambda+1)}$ is the leading coefficient of $P_{\ell+1}^{(\varsigma_1,\varsigma_2)}(x)$. In view of ([29], Formula (7.32.2)),
$$\big\|P_{\ell+1}^{(\varsigma_1,\varsigma_2)}\big\|_{\infty} = O\big((\ell+1)^{q}\big), \tag{49}$$
we get
$$\|u - u_{\ell}\|_{\infty} \le K\, \frac{2^{\ell+1}\,\Gamma(\ell+\lambda+1)\,(\ell+1)^{q}}{\Gamma(2\ell+\lambda+2)}. \tag{50}$$
By using [32],
$$\Gamma(m+\lambda) = O\big(m^{\lambda-1}\, m!\big), \qquad (2m)! = \frac{1}{\sqrt{\pi}}\, 4^{m}\, m!\, \Gamma(m+1/2), \qquad m! = O\Big(\sqrt{2\pi m}\, \Big(\frac{m}{e}\Big)^{m}\Big), \tag{51}$$
the inequality (50) takes the form
$$\|u - u_{\ell}\|_{\infty} \le K\, 2^{\lambda-1} \Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}. \tag{52}$$
Now, consider $Y^{*}(\check{t}) = (1 - e^{-\check{t}/\aleph})^{n}\, u_{\ell}(\check{t})$, so that $Y(\check{t}) - Y^{*}(\check{t}) = (1 - e^{-\check{t}/\aleph})^{n}\,\big(u(\check{t}) - u_{\ell}(\check{t})\big)$ and
$$\|Y - Y^{*}\|_{\infty} \le \|u - u_{\ell}\|_{\infty} \le K\, 2^{\lambda-1} \Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}. \tag{53}$$
Since the APPS $Y_{\ell}(\check{t}) \in S_{n,\ell}$ represents the BPA to $Y(\check{t})$, we have
$$\|Y - Y_{\ell}\|_{\infty} \le \|Y - h\|_{\infty}, \quad \forall h \in S_{n,\ell}, \tag{54}$$
and
$$\|Y - Y_{\ell}\|_{2} \le \|Y - h\|_{2}, \quad \forall h \in S_{n,\ell}. \tag{55}$$
So,
$$\|Y - Y_{\ell}\|_{\infty} \le \|Y - Y^{*}\|_{\infty} \le K\, 2^{\lambda-1} \Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}, \tag{56}$$
and
$$\|Y - Y_{\ell}\|_{2}^{2} \le \|Y - Y^{*}\|_{2}^{2} = \int_{0}^{\infty} \hat{w}_{e,n}^{\varsigma_1,\varsigma_2}(\check{t})\, |Y(\check{t}) - Y^{*}(\check{t})|^{2}\, d\check{t} = \int_{0}^{\infty} w_{e}^{\varsigma_1,\varsigma_2}(\check{t})\, |u(\check{t}) - u_{\ell}(\check{t})|^{2}\, d\check{t}. \tag{57}$$
Using inequality (52), together with $\int_{0}^{\infty} w_{e}^{\varsigma_1,\varsigma_2}(\check{t})\, d\check{t} = \frac{\aleph\,\Gamma(\varsigma_1+1)\,\Gamma(\varsigma_2+1)}{\Gamma(\lambda+1)}$, leads to
$$\|Y - Y_{\ell}\|_{2} \le K\, 2^{\lambda-1} \sqrt{\frac{\aleph\,\Gamma(\varsigma_1+1)\,\Gamma(\varsigma_2+1)}{\Gamma(\lambda+1)}}\; \Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}. \tag{58} \;\; \square$$
The following corollary demonstrates that the obtained errors decay rapidly.
Corollary 3. 
For all $\ell > q - 1$, we have
$$\|E_{\ell}\|_{\infty} = O\Big(\Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}\Big), \tag{59}$$
and
$$\|E_{\ell}\|_{2} = O\Big(\Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}\Big). \tag{60}$$
The next theorem establishes the stability of the error, i.e., it provides an estimate for the error propagation.
Theorem 3. 
For any two successive approximations of $Y(\check{t})$, we get
$$\|Y_{\ell+1} - Y_{\ell}\|_{\infty} \lesssim O\Big(\Big(\frac{e}{2}\Big)^{\ell+1} (\ell+1)^{q-\ell-1}\Big), \quad \ell > q - 1, \tag{61}$$
where $\lesssim$ means that a generic constant $d$ exists such that $\|Y_{\ell+1} - Y_{\ell}\|_{\infty} \le d\, \big(\frac{e}{2}\big)^{\ell+1} (\ell+1)^{q-\ell-1}$.
Proof. 
We have
$$\|Y_{\ell+1} - Y_{\ell}\|_{\infty} = \|Y_{\ell+1} - Y + Y - Y_{\ell}\|_{\infty} \le \|Y - Y_{\ell+1}\|_{\infty} + \|Y - Y_{\ell}\|_{\infty} = \|E_{\ell+1}\|_{\infty} + \|E_{\ell}\|_{\infty}. \tag{62}$$
By considering (59), we can obtain (61). □

6. Numerical Simulations

In this part, numerical examples are given to demonstrate the high efficiency of GEJFCOPMM. For these examples, Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 present the computed errors of the obtained NUMSs $Y_{\ell}(\check{t})$, together with comparisons between GEJFCOPMM and other techniques from [17,26,33,34,35,36,37]. These tables show excellent computational results and confirm that GEJFCOPMM gives more accurate results than the other techniques. Additionally, Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 show that the exact solutions and APPSs in the presented examples are in excellent agreement.
Problem 1. 
Consider the nonlinear Emden–Fowler equation
$$D^{2}Y(\check{t}) + \frac{6}{\check{t}}\, DY(\check{t}) + 14\, Y(\check{t}) + 4\, Y(\check{t}) \ln Y(\check{t}) = 0, \quad \check{t} > 0, \qquad Y(0) = 1, \quad Y'(0) = 0.$$
The exact solution is $Y(\check{t}) = e^{-\check{t}^{2}}$. The application of GEJFCOPMM gives the proposed solution in the form
$$Y_{\ell}(\check{t}) = \sum_{i=0}^{\ell} c_{i}\, \phi_{2,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + 1.$$
Figure 1 shows $Y(\check{t})$, $Y_{17}(\check{t})$, and $E_{17}(\check{t})$ using $\varsigma_1 = 1$, $\varsigma_2 = 0.5$, $\aleph = 1$, and $\ell = 17$, which leads to an accuracy of $7.8 \times 10^{-11}$. Table 1 shows that the application of GEJFCOPMM using various values of $\varsigma_1$ and $\varsigma_2$ with $\aleph = 1$ leads to an accuracy of $10^{-10}$. Table 2 also shows a comparison of the AE between our method at $\ell = 12$ and the EJOM method at $\ell = 20$, as reported in [26]. Notably, the results demonstrate that GEJFCOPMM achieves significantly better error performance than the EJOM method, confirming its superiority.
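As an independent check (not from the paper), the exact solution $Y(\check{t}) = e^{-\check{t}^2}$ can be substituted into the Emden–Fowler equation using its analytic derivatives $Y' = -2\check{t}\,Y$ and $Y'' = (4\check{t}^2 - 2)\,Y$; the residual vanishes identically:

```python
import math

def residual(t):
    """Residual of D^2 Y + (6/t) DY + 14 Y + 4 Y ln Y at Y = exp(-t^2)."""
    Y = math.exp(-t * t)
    dY = -2.0 * t * Y
    d2Y = (4.0 * t * t - 2.0) * Y
    return d2Y + (6.0 / t) * dY + 14.0 * Y + 4.0 * Y * math.log(Y)

for t in (0.3, 1.0, 2.5):
    assert abs(residual(t)) < 1e-12
```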
Problem 2. 
Consider the DE
$$D^{2}Y(\check{t}) + \frac{8}{\check{t}}\, DY(\check{t}) + Y^{2}(\check{t}) = \operatorname{erf}(\check{t})^{2} - \frac{4\, e^{-\check{t}^{2}}\, (\check{t}^{2} - 4)}{\sqrt{\pi}\, \check{t}}, \quad \check{t} > 0, \qquad Y(0) = 0, \quad Y'(0) = \frac{2}{\sqrt{\pi}},$$
where the exact solution is $Y(\check{t}) = \frac{2}{\sqrt{\pi}} \int_{0}^{\check{t}} e^{-x^{2}}\, dx = \operatorname{erf}(\check{t})$. The application of GEJFCOPMM gives the proposed solution in the form
$$Y_{\ell}(\check{t}) = \sum_{i=0}^{\ell} c_{i}\, \phi_{2,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + \frac{2\check{t}}{\sqrt{\pi}}.$$
Table 3 shows that the results for different values of $\varsigma_1$, $\varsigma_2$, $\aleph$, and $\ell$ have an accuracy that reaches $5 \times 10^{-11}$ at $\ell = 20$. Additionally, Figure 2 shows $Y(\check{t})$, $Y_{20}(\check{t})$, and $E_{20}(\check{t})$ using $\varsigma_1 = 2$, $\varsigma_2 = 0.5$, $\aleph = 1$, which leads to an accuracy of $5.42 \times 10^{-11}$. Additionally, a comparison between Figure 4 in [26] (using $\varsigma_1 = 0.5$, $\varsigma_2 = 0.5$, $\aleph = 1$, and $\ell = 24$) and Figure 3 (using $\varsigma_1 = 0.5$, $\varsigma_2 = 0.5$, $\aleph = 1$, and $\ell = 20$) shows that the AE obtained by applying our method provides better accuracy than that of [26] while reducing the computational effort. Table 4 also shows a comparison of the AE between our method using $\ell = 20$ and the EJOM method [26] using $\ell = 24$. Notably, the results demonstrate that GEJFCOPMM achieves significantly better error performance than the EJOM method, confirming its superiority.
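Here too the exact solution can be verified directly (an illustrative check, not from the paper): with $Y = \operatorname{erf}(\check{t})$, $Y' = \frac{2}{\sqrt{\pi}} e^{-\check{t}^2}$ and $Y'' = -2\check{t}\,Y'$, the left-hand side reproduces the stated right-hand side exactly:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def residual(t):
    """Residual of Problem 2 at the exact solution Y = erf(t)."""
    Y = math.erf(t)
    dY = 2.0 / SQRT_PI * math.exp(-t * t)
    d2Y = -2.0 * t * dY
    rhs = math.erf(t) ** 2 - 4.0 * math.exp(-t * t) * (t * t - 4.0) / (SQRT_PI * t)
    return d2Y + (8.0 / t) * dY + Y * Y - rhs

for t in (0.5, 1.3, 3.0):
    assert abs(residual(t)) < 1e-12
```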
Problem 3. 
Consider the DE
$$D^{3}Y(\check{t}) + \frac{6}{\check{t}}\, D^{2}Y(\check{t}) + \frac{6}{\check{t}^{2}}\, DY(\check{t}) = 6\, \big(\check{t}^{6} + 2\check{t}^{3} + 10\big)\, e^{-3Y(\check{t})}, \quad \check{t} \in [0, 1], \qquad Y(0) = 0, \quad Y^{(1)}(0) = 0, \quad Y^{(2)}(0) = 0,$$
where the exact solution is $Y(\check{t}) = \ln(\check{t}^{3} + 1)$. The application of GEJFCOPMM gives the proposed solution in the form
$$Y_{\ell}(\check{t}) = \sum_{i=0}^{\ell} c_{i}\, \phi_{3,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$$
and leads to the numerical results summarized in Table 5; comparisons between GEJFCOPMM and other methods, SC3COMM and SC3CM [36], VIM [33], QBSM [34], and BSCM [35], are presented in Table 6. Additionally, Figure 4a shows the rapid convergence of the APPS to the exact solution. Furthermore, Figure 4b shows the log-errors, which demonstrate that the obtained solutions are stable and convergent.
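For this third-order problem as well, the exact solution can be substituted directly (an illustrative check, not from the paper): with $Y = \ln(\check{t}^3+1)$ the derivatives are rational functions of $\check{t}$, and the residual vanishes:

```python
import math

def residual(t):
    """Residual of Problem 3 at the exact solution Y = ln(t^3 + 1)."""
    p = t ** 3 + 1.0
    Y = math.log(p)
    dY = 3.0 * t ** 2 / p
    d2Y = (6.0 * t - 3.0 * t ** 4) / p ** 2
    d3Y = (6.0 * t ** 6 - 42.0 * t ** 3 + 6.0) / p ** 3
    rhs = 6.0 * (t ** 6 + 2.0 * t ** 3 + 10.0) * math.exp(-3.0 * Y)
    return d3Y + (6.0 / t) * d2Y + (6.0 / t ** 2) * dY - rhs

for t in (0.4, 0.9):
    assert abs(residual(t)) < 1e-10
```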
Problem 4. 
Consider the DE
$$D^{5}Y(\check{t}) + Y(\check{t})\, D^{4}Y(\check{t}) = g(\check{t}), \quad \check{t} \in [0, 1], \qquad Y^{(i)}(0) = 0, \quad i = 0, 1, 2, 3, 4,$$
where $g(\check{t})$ is selected such that $Y(\check{t}) = \check{t}^{5} \cos \check{t}$. The application of GEJFCOPMM gives the proposed solution in the form
$$Y_{\ell}(\check{t}) = \sum_{i=0}^{\ell} c_{i}\, \phi_{5,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t})$$
and provides Table 7, which illustrates that varying the parameters $\varsigma_1$ and $\varsigma_2$ significantly influences the accuracy and convergence of the solutions, which is essential for understanding the behavior of the solution being studied. Furthermore, Table 8 provides a comparison between the SCJ method [37] and the GEJFCOPMM approach, confirming the superiority of GEJFCOPMM. Figure 5 illustrates the relationship between the NUMSs and their corresponding errors for the specified values of the parameter $\aleph$. By depicting both the solutions and their errors, the figure enables a thorough analysis of the performance of the employed numerical method. Additionally, observing how the solutions converge as $\aleph$ varies provides instructive information about the stability and reliability of the results.
Problem 5. 
Consider the DE
$$D^{2}Y(\check{t}) + \frac{2}{\check{t}}\, DY(\check{t}) + \sin\big(Y(\check{t})\big) = 0, \quad \check{t} \in [0, 2], \qquad Y(0) = 1, \quad Y^{(1)}(0) = 0,$$
where an explicit exact solution is not available; so, the following error norm is used to check the accuracy in this case:
$$E_{\ell} = \max_{\check{t} \in [0, 2]} R_{\ell}(\check{t}),$$
where $R_{\ell}(\check{t})$ is the absolute residual error defined as follows:
$$R_{\ell}(\check{t}) = \Big| D^{2}Y_{\ell}(\check{t}) + \frac{2}{\check{t}}\, DY_{\ell}(\check{t}) + \sin\big(Y_{\ell}(\check{t})\big) \Big|.$$
The application of GEJFCOPMM gives the proposed solution in the form
$$Y_{\ell}(\check{t}) = \sum_{i=0}^{\ell} c_{i}\, \phi_{2,i}^{(\varsigma_1,\varsigma_2,\aleph)}(\check{t}) + 1$$
and provides Table 9, which demonstrates the high efficiency of GEJFCOPMM. Additionally, Table 10 compares the RBF-DQ method [17] with the GEJFCOPMM approach, confirming the superiority of the latter. Figure 6 presents the NUMSs $Y_{3}(\check{t})$ and $Y_{7}(\check{t})$, along with the residual errors $R_{20}(\check{t})$ and $R_{22}(\check{t})$. By illustrating these solutions and errors, the figure facilitates a comprehensive analysis of the performance of GEJFCOPMM.
Remark 4. 
In view of the presented CPU times (in seconds), our approach performs efficiently. The calculations show that the memory consumption was excellent. For example, Table 7 shows that the calculation with $\ell = 8$ is 25% slower than that with $\ell = 5$ and, moreover, requires 21% more RAM than the $\ell = 5$ calculation. The numerical examples and comparisons provided in our paper highlight the superior accuracy and efficiency of our algorithm, solidifying its potential for solving ODEs effectively. In contrast, the methods described in [17,26,33,34,35,36,37] did not provide CPU time or memory usage data. However, our analysis suggests that our approach performs better than these referenced methods.

7. Conclusions

In this work, we have introduced GEJFs that satisfy HICs. Moreover, by utilizing the computed OMs together with the SCM, GEJFCOPMM is established. GEJFCOPMM gives high-accuracy NUMSs with high efficiency. For future work, we have identified several potential research avenues. First, we intend to extend the GEJFCOPMM framework to address boundary value problems, which bring with them special difficulties and opportunities for further development. Additionally, we believe that the theoretical findings presented in this paper can be adapted to handle other types of DEs, including PDEs and systems of DEs. Furthermore, exploring the application of GEJFs in real-world scenarios, such as in engineering and physics, could provide valuable insights and validate the robustness of our approach. We also encourage further investigation into the enhancement of OMs for various function classes to broaden the applicability of our method.

Funding

No funding was received to assist with the preparation of this manuscript.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. For further inquiries, please contact the corresponding author(s).

Conflicts of Interest

The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Shen, J.; Tang, T.; Wang, L. Spectral Methods: Algorithms, Analysis and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011; Volume 41. [Google Scholar]
  2. Ahmed, H. Numerical solutions of high-order differential equations with polynomial coefficients using a Bernstein polynomial basis. Mediterr. J. Math. 2023, 20, 303. [Google Scholar] [CrossRef]
  3. Ahmed, H. Solutions of 2nd-order linear differential equations subject to Dirichlet boundary conditions in a Bernstein polynomial basis. J. Egypt. Math. Soc. 2014, 22, 227–237. [Google Scholar] [CrossRef]
  4. Brahim, M.S.T.; Youssri, Y.H.; Alburaikan, A.; Khalifa, H.; Radwn, T. A Refined Galerkin Approach for Solving Higher-Order Differential Equations via Bernoulli Polynomials. Fractals 2025, 2540183, 14. [Google Scholar] [CrossRef]
  5. Ganaie, I.A.; Arora, S.; Kukreja, V. Cubic Hermite collocation method for solving boundary value problems with Dirichlet, Neumann, and Robin conditions. Int. J. Eng. Math. 2014, 365209. [Google Scholar] [CrossRef]
  6. Atta, A.; Moatimid, G.; Youssri, Y. Generalized Fibonacci operational collocation approach for fractional initial value problems. Int. J. Appl. Comput. Math. 2019, 5, 9. [Google Scholar] [CrossRef]
  7. Izadi, M.; Roul, P. A new approach based on shifted Vieta-Fibonacci-quasilinearization technique and its convergence analysis for nonlinear third-order Emden–Fowler equation with multi-singularity. Commun. Nonlinear Sci. Numer. 2023, 117, 106912. [Google Scholar] [CrossRef]
  8. Youssri, Y.; Zaky, M.; Hafez, R. Romanovski-Jacobi spectral schemes for high-order differential equations. Appl. Numer. Math. 2024, 198, 148–159. [Google Scholar] [CrossRef]
  9. Abd-Elhameed, W.; Ahmed, H.; Youssri, Y. A new generalized Jacobi Galerkin operational matrix of derivatives: Two algorithms for solving fourth-order boundary value problems. Adv. Differ. Equ. 2016, 2016, 22. [Google Scholar] [CrossRef]
  10. Hafez, R.; Boulaaras, S.; Khalifa, H.A.E.W. Matrix Algorithms Based on Jacobi and Romanovski-Jacobi Polynomials for Solving the FitzHugh-Nagumo Nonlinear Equation. Rev. Int. Metod. Numer. Para Calc. Diseno Ing. 2025, 41, 1–19. [Google Scholar] [CrossRef]
  11. Atta, A. Two spectral Gegenbauer methods for solving linear and nonlinear time fractional Cable problems. Int. J. Mod. Phys. C. 2024, 35, 2450070. [Google Scholar] [CrossRef]
  12. Ahmed, H. Numerical solutions for singular Lane-Emden equations using shifted Chebyshev polynomials of the first kind. Contemp. Math. 2023, 4, 132–149. [Google Scholar] [CrossRef]
  13. Youssri, Y.H.; Abd-Elhameed, W.M.; Elmasry, A.A.; Atta, A.G. An efficient Petrov–Galerkin scheme for the Euler–Bernoulli beam equation via second-kind Chebyshev polynomials. Fractal Fract. 2025, 9, 78. [Google Scholar] [CrossRef]
  14. Izadi, M.; Yüzbaşı, Ş.; Baleanu, D. A Taylor–Chebyshev approximation technique to solve the 1D and 2D nonlinear Burgers equations. Math. Sci. 2022, 16, 459–471. [Google Scholar] [CrossRef]
  15. Srivastava, H.M.; Izadi, M. Generalized shifted airfoil polynomials of the second kind to solve a class of singular electrohydrodynamic fluid model of fractional order. Fractal Fract. 2023, 7, 94. [Google Scholar] [CrossRef]
  16. Malmir, I. Novel Chebyshev wavelets algorithms for optimal control and analysis of general linear delay models. Appl. Math. Model. 2019, 69, 621–647. [Google Scholar] [CrossRef]
  17. Parand, K.; Hashemi, S. RBF-DQ method for solving non-linear differential equations of Lane-Emden type. Ain Shams Eng. J. 2018, 9, 615–629. [Google Scholar] [CrossRef]
  18. Abd-Elhameed, W. New Galerkin operational matrix of derivatives for solving Lane-Emden singular-type equations. Eur. Phys. J. Plus 2015, 130, 52. [Google Scholar] [CrossRef]
  19. Abdelkawy, M.; Sabir, Z. Solution of new nonlinear second order singular perturbed Lane-Emden equation by the numerical spectral collocation method. Num. Com. Meth. Sci. Eng. 2020, 2, 11–19. [Google Scholar]
  20. Izadi, M. An approximation technique for first Painlevé equation. TWMS J. Appl. Eng. Math. 2021, 11, 739–750. [Google Scholar]
  21. Ashry, H.; Abd-Elhameed, W.; Moatimid, G.; Youssri, Y. Robust Shifted Jacobi-Galerkin Method for Solving Linear Hyperbolic Telegraph Type Equation. Palest. J. Math. 2022, 11, 504–518. [Google Scholar]
  22. El-Sayed, A.; Baleanu, D.; Agarwal, P. A novel Jacobi operational matrix for numerical solution of multi-term variable-order fractional differential equations. J. Taibah Univ. Sci. 2020, 14, 963–974. [Google Scholar] [CrossRef]
  23. Ahmed, H.M. Enhanced shifted Jacobi operational matrices of derivatives: Spectral algorithm for solving multiterm variable-order fractional differential equations. Bound. Value Probl. 2023, 2023, 108. [Google Scholar] [CrossRef]
  24. Kazem, S.; Abbasbandy, S.; Kumar, S. Fractional-order Legendre functions for solving fractional-order differential equations. Appl. Math. Model. 2013, 37, 5498–5510. [Google Scholar] [CrossRef]
  25. Bhrawy, A.H.; Zaky, M.A. Shifted fractional-order Jacobi orthogonal functions: Application to a system of fractional differential equations. Appl. Math. Model 2016, 40, 832–845. [Google Scholar] [CrossRef]
  26. Bhrawy, A.; Hafez, R.; Alzaidy, J. A new exponential Jacobi pseudospectral method for solving high-order ordinary differential equations. Adv. Differ. Equ. 2015, 2015, 152. [Google Scholar] [CrossRef]
  27. Hammad, M.; Hafez, R.M.; Youssri, Y.H.; Doha, E.H. Exponential Jacobi-Galerkin method and its applications to multidimensional problems in unbounded domains. Appl. Numer. Math. 2020, 157, 88–109. [Google Scholar] [CrossRef]
  28. Youssri, Y.H.; Hafez, R.M. Exponential Jacobi spectral method for hyperbolic partial differential equations. Math. Sci. 2019, 13, 347–354. [Google Scholar] [CrossRef]
  29. Szegö, G. Orthogonal Polynomials, 4th ed.; American Mathematical Soc.: Providence, RI, USA, 1975; Volume XXIII. [Google Scholar]
  30. Luke, Y. Mathematical Functions and Their Approximations; Academic Press: London, UK, 1975. [Google Scholar]
  31. Burden, R.; Faires, J.; Burden, A. Numerical Analysis; Cengage Learning: Andover, MA, USA, 2015. [Google Scholar]
  32. Jeffrey, A.; Dai, H. Handbook of Mathematical Formulas and Integrals, 4th ed.; Elsevier: Amsterdam, The Netherlands, 2008. [Google Scholar]
  33. Wazwaz, A. Solving two Emden-Fowler type equations of third order by the variational iteration method. Appl. Math. Inf. Sci. 2015, 9, 2429. [Google Scholar]
  34. Mishra, H.; Saini, S. Quartic B–Spline Method for Solving a Singular Singularly Perturbed Third-Order Boundary Value Problems. Am. J. Numer. Anal. 2015, 3, 18–24. [Google Scholar]
  35. Iqbal, M.; Abbas, M.; Wasim, I. New cubic B-spline approximation for solving third order Emden–Flower type equations. Appl. Math. Comput. 2018, 331, 319–333. [Google Scholar] [CrossRef]
  36. Abd-Elhameed, W.; Al-Harbi, M.S.; Amin, A.K.; Ahmed, H. Spectral treatment of high-order Emden–Fowler equations based on modified Chebyshev polynomials. Axioms 2023, 12, 99. [Google Scholar] [CrossRef]
  37. Bhrawy, A.; Alghamdi, M. Numerical solutions of odd order linear and nonlinear initial value problems using a shifted Jacobi spectral approximations. Abstr. Appl. Anal. 2012, 2012, 364360. [Google Scholar] [CrossRef]
Figure 1. Numerical results using = 17, ς1 = 1, ς2 = 1/2, = 1 for Problem 1.
Figure 2. Numerical results using = 20, ς1 = 2, ς2 = 1/2, = 1 for Problem 2.
Figure 3. AE E20(t) using ς1 = 1/2, ς2 = 1/2, = 1 for Problem 2.
Figure 4. Error plots using various expansion orders and ς1 = 2, ς2 = 0, = 1 for Problem 3.
Figure 5. Solutions and errors obtained using = 13, 14 and ς1 = 2, ς2 = 0, = 2 for Problem 4.
Figure 6. APPSs and residual errors obtained using ς1 = 1, ς2 = 0, = 2 for Problem 5.
Table 1. MAEs for Problem 1 using different values of ς1 and ς2, with = 1.

| ς1 | ς2 | Errors | = 2 | = 5 | = 8 | = 11 | = 14 | = 17 |
|------|------|----------|------|------|------|------|------|------|
| 0 | 0 | E∞ | 1.31 × 10^−2 | 2.12 × 10^−3 | 4.25 × 10^−5 | 5.15 × 10^−7 | 1.05 × 10^−9 | 6.83 × 10^−10 |
| | | E2 | 1.11 × 10^−2 | 1.22 × 10^−3 | 4.15 × 10^−5 | 4.15 × 10^−7 | 1.01 × 10^−9 | 6.81 × 10^−10 |
| | | CPU time | 0.231 | 0.442 | 0.622 | 0.831 | 0.886 | 0.922 |
| 1 | 1 | E∞ | 1.21 × 10^−2 | 2.22 × 10^−3 | 4.35 × 10^−5 | 5.31 × 10^−7 | 1.20 × 10^−9 | 6.85 × 10^−10 |
| | | E2 | 2.17 × 10^−2 | 2.12 × 10^−3 | 4.25 × 10^−5 | 5.25 × 10^−7 | 1.10 × 10^−9 | 6.82 × 10^−10 |
| | | CPU time | 0.223 | 0.434 | 0.625 | 0.840 | 0.871 | 0.901 |
| −1/2 | 1/2 | E∞ | 2.22 × 10^−2 | 1.32 × 10^−3 | 3.15 × 10^−5 | 4.11 × 10^−7 | 1.23 × 10^−9 | 6.80 × 10^−10 |
| | | E2 | 1.13 × 10^−2 | 1.12 × 10^−3 | 2.15 × 10^−5 | 4.01 × 10^−7 | 1.21 × 10^−9 | 6.79 × 10^−10 |
| | | CPU time | 0.227 | 0.417 | 0.630 | 0.829 | 0.885 | 0.919 |
| 1/2 | −1/2 | E∞ | 2.23 × 10^−2 | 4.12 × 10^−3 | 3.85 × 10^−5 | 4.81 × 10^−7 | 2.70 × 10^−9 | 6.78 × 10^−10 |
| | | E2 | 2.21 × 10^−2 | 4.02 × 10^−3 | 3.75 × 10^−5 | 4.77 × 10^−7 | 2.68 × 10^−9 | 6.77 × 10^−10 |
| | | CPU time | 0.253 | 0.425 | 0.651 | 0.834 | 0.891 | 0.981 |
Table 2. Comparison of AE between the two methods EJOM [26] and GEJFCOPMM for Problem 1.

| ť | Analytical | GEJFCOPMM (= 12, = 1), ς1 = ς2 = 0.5 | GEJFCOPMM, ς1 = ς2 = 0 | GEJFCOPMM, ς1 = ς2 = −0.5 | EJOM [26] (= 20, = 1), ς1 = ς2 = 0.5 | EJOM [26], ς1 = ς2 = 0 | EJOM [26], ς1 = ς2 = −0.5 |
|-----|------------|------|------|------|------|------|------|
| 0.0 | 1.00000000 | 1.00000000 | 1.00000000 | 1.00000000 | 1.00000000 | 1.00000000 | 1.00000000 |
| 0.1 | 0.99004983 | 0.99004982 | 0.99004984 | 0.99004981 | 0.99004988 | 0.99004994 | 0.99004993 |
| 0.2 | 0.96078943 | 0.96078944 | 0.96078942 | 0.96078942 | 0.96078946 | 0.96078942 | 0.96078934 |
| 0.3 | 0.91393118 | 0.91393117 | 0.91393116 | 0.91393117 | 0.91393112 | 0.91393112 | 0.91393120 |
| 0.4 | 0.85214378 | 0.85214377 | 0.85214376 | 0.85214377 | 0.85214382 | 0.85214381 | 0.85214376 |
| 0.5 | 0.77880078 | 0.77880077 | 0.77880076 | 0.77880079 | 0.77880085 | 0.77880096 | 0.77880099 |
| 0.6 | 0.69767632 | 0.69767633 | 0.69767634 | 0.69767631 | 0.69767618 | 0.69767594 | 0.69767596 |
| 0.7 | 0.61262639 | 0.61262638 | 0.61262637 | 0.61262636 | 0.61262644 | 0.61262662 | 0.61262661 |
| 0.8 | 0.527292424 | 0.527292425 | 0.527292426 | 0.527292424 | 0.52729252 | 0.52729272 | 0.52729269 |
| 0.9 | 0.44485806 | 0.44485805 | 0.44485803 | 0.44485802 | 0.44485807 | 0.44485772 | 0.44485769 |
| 1.0 | 0.36787944 | 0.36787946 | 0.36787947 | 0.36787948 | 0.36787939 | 0.36787915 | 0.36787921 |
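As a hedged illustration of how the E∞ and E2 columns of the error tables might be computed, the sketch below evaluates both discrete norms on a uniform grid. This is not the paper's GEJF code: `u_N` is a hypothetical polynomial least-squares surrogate standing in for the truncated GEJF expansion, `error_measures` is an assumed helper name, and E2 here is one common discrete choice (root-mean-square of pointwise errors); the paper's precise norm definition may differ. The exact solution exp(−t²) is used because it reproduces the Analytical column of Table 2.

```python
import numpy as np

def error_measures(u_exact, u_approx, ts):
    """Return (E_inf, E_2): maximum absolute error and a discrete
    L2-type error (RMS of pointwise errors) on the grid ts."""
    e = np.abs(u_exact(ts) - u_approx(ts))
    return float(e.max()), float(np.sqrt(np.mean(e**2)))

# Analytical solution matching the "Analytical" column of Table 2.
u = lambda t: np.exp(-t**2)
ts = np.linspace(0.0, 1.0, 101)

# Hypothetical approximant: a degree-8 least-squares polynomial fit,
# standing in for the truncated GEJF expansion u_N.
coeffs = np.polyfit(ts, u(ts), 8)
u_N = lambda t: np.polyval(coeffs, t)

E_inf, E_2 = error_measures(u, u_N, ts)
print(E_inf, E_2)  # E_2 <= E_inf by construction of the two norms
```

Sweeping the surrogate's degree in place of the expansion order reproduces the qualitative decay pattern the tables report: errors shrink rapidly as the order grows.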
Table 3. Errors obtained for Problem 2 using different values of ς1 and ς2, with = 1.

| ς1 | ς2 | Errors | = 2 | = 6 | = 10 | = 14 | = 18 | = 20 |
|------|------|----------|------|------|------|------|------|------|
| 0 | 0 | E∞ | 2.11 × 10^−2 | 1.32 × 10^−4 | 2.35 × 10^−6 | 4.25 × 10^−9 | 1.10 × 10^−10 | 5.10 × 10^−11 |
| | | E2 | 1.01 × 10^−2 | 1.12 × 10^−4 | 2.15 × 10^−6 | 3.15 × 10^−9 | 1.01 × 10^−10 | 4.12 × 10^−11 |
| | | CPU time | 0.253 | 0.443 | 0.810 | 0.879 | 0.936 | 0.996 |
| 1 | 1 | E∞ | 2.35 × 10^−2 | 3.31 × 10^−4 | 4.15 × 10^−6 | 4.35 × 10^−9 | 4.15 × 10^−10 | 6.11 × 10^−11 |
| | | E2 | 2.31 × 10^−2 | 2.52 × 10^−4 | 3.15 × 10^−6 | 4.25 × 10^−9 | 1.25 × 10^−10 | 5.15 × 10^−11 |
| | | CPU time | 0.249 | 0.457 | 0.821 | 0.888 | 0.945 | 0.995 |
| −1/2 | 1/2 | E∞ | 4.01 × 10^−2 | 4.12 × 10^−4 | 3.25 × 10^−6 | 5.75 × 10^−9 | 5.15 × 10^−10 | 6.86 × 10^−11 |
| | | E2 | 3.10 × 10^−2 | 4.02 × 10^−4 | 2.75 × 10^−6 | 5.65 × 10^−9 | 2.15 × 10^−10 | 6.85 × 10^−11 |
| | | CPU time | 0.255 | 0.456 | 0.836 | 0.899 | 0.949 | 0.998 |
| 1/2 | −1/2 | E∞ | 4.21 × 10^−2 | 4.31 × 10^−4 | 5.45 × 10^−6 | 4.65 × 10^−9 | 5.80 × 10^−10 | 7.15 × 10^−11 |
| | | E2 | 1.61 × 10^−2 | 1.72 × 10^−4 | 2.85 × 10^−6 | 6.15 × 10^−9 | 5.13 × 10^−10 | 6.92 × 10^−11 |
| | | CPU time | 0.259 | 0.450 | 0.844 | 0.897 | 0.956 | 0.995 |
Table 4. Comparison of AE between the two methods EJOM [26] and GEJFCOPMM for Problem 2.

| ť | GEJFCOPMM (= 20, = 1), ς1 = ς2 = 0.5 | GEJFCOPMM, ς1 = ς2 = 0 | GEJFCOPMM, ς1 = ς2 = −0.5 | EJOM [26] (= 24, = 1), ς1 = ς2 = 0.5 | EJOM [26], ς1 = ς2 = 0 | EJOM [26], ς1 = ς2 = −0.5 |
|------|------|------|------|------|------|------|
| 0.0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1.0 | 4.780 × 10^−14 | 3.890 × 10^−14 | 2.317 × 10^−14 | 3.007 × 10^−8 | 2.107 × 10^−8 | 2.707 × 10^−8 |
| 2.0 | 2.308 × 10^−14 | 4.217 × 10^−14 | 4.357 × 10^−14 | 2.353 × 10^−8 | 4.976 × 10^−8 | 5.067 × 10^−8 |
| 3.0 | 5.481 × 10^−14 | 5.652 × 10^−14 | 4.982 × 10^−14 | 3.180 × 10^−8 | 5.733 × 10^−8 | 5.028 × 10^−8 |
| 4.0 | 1.117 × 10^−14 | 1.223 × 10^−14 | 1.121 × 10^−14 | 9.386 × 10^−8 | 2.368 × 10^−8 | 6.252 × 10^−9 |
| 5.0 | 1.112 × 10^−14 | 2.227 × 10^−14 | 2.311 × 10^−14 | 9.514 × 10^−9 | 6.890 × 10^−9 | 1.667 × 10^−8 |
| 6.0 | 3.127 × 10^−14 | 3.007 × 10^−14 | 3.018 × 10^−14 | 1.007 × 10^−9 | 4.581 × 10^−9 | 3.294 × 10^−9 |
| 7.0 | 1.112 × 10^−14 | 1.717 × 10^−14 | 1.126 × 10^−14 | 6.591 × 10^−10 | 1.593 × 10^−10 | 9.366 × 10^−9 |
| 8.0 | 5.521 × 10^−14 | 5.423 × 10^−14 | 5.326 × 10^−14 | 1.070 × 10^−9 | 3.543 × 10^−9 | 1.079 × 10^−8 |
| 9.0 | 4.960 × 10^−11 | 4.892 × 10^−11 | 4.563 × 10^−11 | 1.375 × 10^−9 | 5.259 × 10^−9 | 1.136 × 10^−8 |
| 10.0 | 2.107 × 10^−10 | 2.107 × 10^−10 | 2.107 × 10^−10 | 6.492 × 10^−10 | 5.976 × 10^−9 | 1.153 × 10^−8 |
Table 5. Errors obtained for Problem 3 using different values of ς1 and ς2, with = 1.

| ς1 | ς2 | Errors | = 2 | = 5 | = 8 | = 11 | = 14 | = 17 |
|------|------|----------|------|------|------|------|------|------|
| 0 | 0 | E∞ | 1.01 × 10^−3 | 2.12 × 10^−6 | 3.15 × 10^−9 | 4.35 × 10^−12 | 2.10 × 10^−15 | 4.30 × 10^−18 |
| | | E2 | 1.00 × 10^−3 | 2.05 × 10^−6 | 3.11 × 10^−9 | 4.25 × 10^−12 | 2.09 × 10^−15 | 4.20 × 10^−18 |
| | | CPU time | 0.248 | 0.456 | 0.666 | 0.871 | 0.889 | 0.949 |
| 1 | 1 | E∞ | 2.31 × 10^−3 | 1.32 × 10^−6 | 3.45 × 10^−9 | 4.55 × 10^−12 | 2.16 × 10^−15 | 4.80 × 10^−18 |
| | | E2 | 2.29 × 10^−3 | 1.30 × 10^−6 | 3.41 × 10^−9 | 4.52 × 10^−12 | 2.12 × 10^−15 | 4.78 × 10^−18 |
| | | CPU time | 0.251 | 0.451 | 0.659 | 0.881 | 0.901 | 0.962 |
| −1/2 | 1/2 | E∞ | 3.11 × 10^−3 | 1.32 × 10^−6 | 3.35 × 10^−9 | 4.45 × 10^−12 | 2.60 × 10^−15 | 4.70 × 10^−18 |
| | | E2 | 3.08 × 10^−3 | 1.27 × 10^−6 | 3.31 × 10^−9 | 4.41 × 10^−12 | 2.58 × 10^−15 | 4.68 × 10^−18 |
| | | CPU time | 0.249 | 0.458 | 0.655 | 0.879 | 0.896 | 0.958 |
| 1/2 | −1/2 | E∞ | 4.11 × 10^−3 | 2.72 × 10^−6 | 2.75 × 10^−9 | 5.15 × 10^−12 | 3.11 × 10^−15 | 4.70 × 10^−18 |
| | | E2 | 4.05 × 10^−3 | 2.52 × 10^−6 | 2.61 × 10^−9 | 5.01 × 10^−12 | 3.06 × 10^−15 | 4.61 × 10^−18 |
| | | CPU time | 0.259 | 0.461 | 0.661 | 0.899 | 0.906 | 0.971 |
Table 6. Comparison between the methods [33,34,35,36] and GEJFCOPMM for Problem 3.

| GEJFCOPMM (= 17, ς1 = 2, ς2 = 0, = 1) | SC3COMM [36] (= 22) | SC3CM [36] (= 22) | VIM [33] | QBSM [34] | BSCM [35] |
|------|------|------|------|------|------|
| 1.14 × 10^−19 | 4.542 × 10^−15 | 2.123 × 10^−14 | 1.10 × 10^−1 | 1.29 × 10^−5 | 4.08 × 10^−7 |
Table 7. Errors obtained for Problem 4 using different values of ς1 and ς2, with = 2.

| ς1 | ς2 | Errors | = 2 | = 5 | = 8 | = 11 | = 14 | = 17 |
|------|------|----------|------|------|------|------|------|------|
| 0 | 0 | E∞ | 1.21 × 10^−2 | 3.22 × 10^−5 | 3.75 × 10^−8 | 5.15 × 10^−11 | 3.15 × 10^−14 | 4.60 × 10^−17 |
| | | E2 | 1.10 × 10^−2 | 3.15 × 10^−5 | 3.71 × 10^−8 | 5.05 × 10^−11 | 3.09 × 10^−14 | 4.55 × 10^−17 |
| | | CPU time | 0.352 | 0.552 | 0.699 | 0.901 | 0.981 | 1.062 |
| 1 | 1 | E∞ | 1.23 × 10^−3 | 2.42 × 10^−6 | 2.55 × 10^−9 | 3.45 × 10^−12 | 1.26 × 10^−15 | 4.84 × 10^−18 |
| | | E2 | 1.20 × 10^−3 | 2.40 × 10^−6 | 1.99 × 10^−9 | 3.33 × 10^−12 | 1.22 × 10^−15 | 4.52 × 10^−18 |
| | | CPU time | 0.356 | 0.558 | 0.701 | 0.922 | 0.991 | 1.052 |
| −1/2 | 1/2 | E∞ | 4.21 × 10^−3 | 3.12 × 10^−6 | 2.21 × 10^−9 | 5.15 × 10^−12 | 4.61 × 10^−15 | 3.52 × 10^−18 |
| | | E2 | 4.17 × 10^−3 | 3.07 × 10^−6 | 2.11 × 10^−9 | 5.01 × 10^−12 | 4.58 × 10^−15 | 3.44 × 10^−18 |
| | | CPU time | 0.359 | 0.560 | 0.711 | 0.937 | 0.992 | 1.072 |
| 1/2 | −1/2 | E∞ | 3.15 × 10^−2 | 3.71 × 10^−5 | 1.76 × 10^−8 | 4.35 × 10^−11 | 4.15 × 10^−14 | 5.72 × 10^−17 |
| | | E2 | 3.13 × 10^−2 | 3.69 × 10^−5 | 1.62 × 10^−8 | 4.20 × 10^−11 | 4.11 × 10^−14 | 5.63 × 10^−17 |
| | | CPU time | 0.360 | 0.581 | 0.711 | 0.944 | 0.999 | 1.079 |
Table 8. Comparison between the methods SJC [37] and GEJFCOPMM for Problem 4.

| ť | GEJFCOPMM AE (= 17, ς1 = 2, ς2 = 0, = 2) | SJC [37] AE (= 18) |
|-----|------|------|
| 0.0 | 0.0000 | 1.991 × 10^−18 |
| 0.1 | 1.12 × 10^−20 | 3.037 × 10^−17 |
| 0.2 | 5.31 × 10^−20 | 4.353 × 10^−17 |
| 0.3 | 6.71 × 10^−20 | 1.908 × 10^−17 |
| 0.4 | 4.17 × 10^−20 | 6.418 × 10^−17 |
| 0.5 | 3.21 × 10^−20 | 5.204 × 10^−18 |
| 0.6 | 2.07 × 10^−20 | 8.326 × 10^−17 |
| 0.7 | 1.34 × 10^−19 | 1.110 × 10^−16 |
| 0.8 | 2.66 × 10^−19 | 1.110 × 10^−16 |
| 0.9 | 2.43 × 10^−19 | 1.110 × 10^−16 |
| 1.0 | 1.13 × 10^−19 | 1.110 × 10^−16 |
Table 9. Errors obtained for Problem 5 using different values of ς1 and ς2, with = 2.

| ς1 | ς2 | Errors | = 2 | = 6 | = 10 | = 14 | = 18 | = 22 |
|------|------|----------|------|------|------|------|------|------|
| 0 | 0 | E∞ | 1.32 × 10^−2 | 2.44 × 10^−4 | 4.51 × 10^−6 | 5.32 × 10^−9 | 6.98 × 10^−13 | 7.11 × 10^−15 |
| | | CPU time | 0.263 | 0.451 | 0.824 | 0.889 | 0.946 | 1.057 |
| 1 | 1 | E∞ | 3.45 × 10^−2 | 3.15 × 10^−4 | 2.19 × 10^−6 | 2.65 × 10^−9 | 4.77 × 10^−13 | 6.58 × 10^−15 |
| | | CPU time | 0.268 | 0.453 | 0.834 | 0.899 | 0.986 | 1.110 |
| −1/2 | 1/2 | E∞ | 1.11 × 10^−2 | 2.32 × 10^−4 | 2.55 × 10^−6 | 4.15 × 10^−9 | 5.25 × 10^−13 | 7.06 × 10^−15 |
| | | CPU time | 0.275 | 0.483 | 0.874 | 0.902 | 0.990 | 1.115 |
| 1/2 | −1/2 | E∞ | 4.21 × 10^−2 | 4.31 × 10^−4 | 5.45 × 10^−6 | 4.65 × 10^−9 | 5.80 × 10^−10 | 7.15 × 10^−11 |
| | | CPU time | 0.271 | 0.489 | 0.881 | 0.932 | 0.995 | 1.120 |
Table 10. Comparison between residual errors obtained by the methods RBF-DQ [17] and GEJFCOPMM for Problem 5.

| ť | GEJFCOPMM residual error (= 22, ς1 = 1.5, ς2 = 1, = 2) | RBF-DQ [17] residual error (= 30) |
|-----|------|------|
| 0.0 | 0.0000 | 0.0000 |
| 0.1 | 1.22 × 10^−21 | 7.40 × 10^−16 |
| 0.2 | 2.21 × 10^−21 | 1.49 × 10^−15 |
| 0.5 | 5.31 × 10^−20 | 4.36 × 10^−15 |
| 1.0 | 4.42 × 10^−20 | 1.30 × 10^−17 |
| 1.5 | 3.72 × 10^−19 | 3.29 × 10^−13 |
| 2.0 | 4.52 × 10^−18 | 1.51 × 10^−13 |