Article

A Refined Spectral Galerkin Approach Leveraging Romanovski–Jacobi Polynomials for Differential Equations

1
Department of Mathematics, Faculty of Education, Matrouh University, Marsa Matrouh 51511, Egypt
2
Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11623, Saudi Arabia
3
Department of Mathematics, Faculty of Technology and Education, Helwan University, Cairo 11281, Egypt
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1461; https://doi.org/10.3390/math13091461
Submission received: 3 April 2025 / Revised: 26 April 2025 / Accepted: 27 April 2025 / Published: 29 April 2025

Abstract
This study explores the application of Romanovski–Jacobi polynomials (RJPs) in spectral Galerkin methods (SGMs) for solving differential equations (DEs). It uses a suitable class of modified RJPs as basis functions that satisfy the given homogeneous initial conditions (ICs). We derive spectral Galerkin schemes based on modified RJP expansions to solve three models: high-order ordinary differential equations (ODEs) and partial differential equations (PDEs) of first and second order with ICs. We provide theoretical assurance of the treatment's efficacy through convergence and error analyses. The method achieves enhanced accuracy, spectral convergence, and computational efficiency. Numerical experiments demonstrate the robustness of this approach in addressing complex physical and engineering problems, highlighting its potential as a powerful tool for obtaining accurate numerical solutions for various types of DEs. The findings are compared with those of preceding studies, verifying that our treatment is more effective and precise than its competitors.

1. Introduction

DEs defined over the bounded interval [ 0 , 1 ] play a pivotal role in modeling numerous phenomena across science and engineering disciplines, including heat conduction in constrained environments [1,2], wave propagation within limited domains [3,4], fluid flow in narrow channels, and financial systems with bounded parameters [5]. Despite the finite nature of the domain, these problems often demand careful analytical and numerical treatment, particularly near the domain boundaries, to ensure they remain well-posed and to maintain the integrity of the solution. Standard numerical strategies such as finite difference methods [6,7], finite element techniques [8,9], and spectral approaches [10,11] can encounter challenges in capturing sharp gradients or singular behaviors close to the endpoints. Moreover, achieving high levels of precision typically requires dense discretization or sophisticated basis functions, which in turn increases the computational burden and demands robust algorithms to guarantee stability and convergence.
Spectral approaches, known for their high accuracy and exponential convergence, have emerged as powerful tools for solving PDEs. Recent advancements in numerical techniques have focused on overcoming the challenges associated with complex operators. Hafez et al. [12] developed spectral techniques based on RJPs for solving multitype differential equations. Nataj and He [13] successfully employed Anderson acceleration to improve the convergence of space–time spectral methods for nonlinear time-dependent PDEs. Yan et al. [14] proposed a mesh-free spectral method for solving elliptic PDEs on unknown manifolds using point cloud data. Ahmed [15] introduced a novel algorithm using new Galerkin operational matrices of integrals of generalized shifted JPs for solving ordinary and fractional DEs with high accuracy and efficiency. Additionally, Ahmed [16] introduced a spectral approach with Bernstein polynomials as basis functions to solve ODEs and PDEs.
SGMs offer a powerful alternative for solving DEs. These methods approximate the solution using a series expansion of basis functions that are specifically designed to satisfy the associated initial or boundary conditions. By carefully selecting these basis functions, these methods can achieve exponential convergence rates, providing highly accurate solutions with relatively few degrees of freedom.
In recent years, RJPs have gained attention from several researchers [11,12] owing to their advantages. These orthogonal polynomials extend classical JPs to nonclassical weight functions on unbounded domains and form finite orthogonal systems. They are useful for solving boundary value problems, especially those involving complex geometries or higher-order DEs, and they enable efficient numerical methods for complex DEs.
This paper focuses on the development and analysis of the spectral Romanovski–Jacobi Galerkin method (RJGM) for solving ODEs and PDEs with ICs. The RJGM handles problems with high-order derivatives efficiently and offers flexibility in treating both ODEs and PDEs subject to ICs.
The layout of this paper is as follows: Section 2 provides a concise overview of RJPs, detailing their definition, orthogonality properties, and associated quadrature rules. Section 3 introduces RJGM for solving higher-order ODEs with both homogeneous and nonhomogeneous conditions. Building on this, Section 4 extends the RJGM to systems of initial value problems. Section 5 then applies the method to PDEs, specifically addressing first-order and second-order hyperbolic equations. Section 6 outlines techniques for handling nonhomogeneous boundary conditions through transformation methods. Section 7 presents a rigorous convergence and error analysis, including theoretical bounds and stability estimates for the proposed approach. Section 8 includes numerical experiments and case studies that compare the proposed methodology to current approaches, demonstrating its accuracy and resilience. Section 9 summarizes the study’s principal results, discusses their importance to future research, and explores potential applications in science and engineering.

2. A Short Overview of RJPs

This section presents the basis functions utilized in the proposed spectral algorithm. First, an overview of key concepts related to RJPs is provided. These polynomials form a unique class of finite orthogonal basis functions. This class was first formulated in [17] and has been widely investigated for its effectiveness in the numerical solution of DEs [11,18]. They are denoted by $R_{\hat{l}}^{μ,ν}(ϰ)$, $\hat{l} = 0, 1, \ldots, M$, and can be written explicitly as (see [12,19])
$$R_{\hat{l}}^{μ,ν}(ϰ) := \sum_{l=0}^{\hat{l}} λ_{l}^{μ,ν}\, ϰ^{l}, \tag{1}$$
where
$$λ_{l}^{μ,ν} = \frac{(-1)^{l}\, Γ(-μ-ν-\hat{l})\, Γ(μ+\hat{l}+1)}{l!\, Γ(-μ-ν-\hat{l}-l)\, Γ(μ+l+1)\, Γ(\hat{l}-l+1)}. \tag{2}$$
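To make the definition concrete, the following minimal Python sketch (not part of the original paper) evaluates $R_{\hat{l}}^{μ,ν}$ directly from the series (1) and (2); the placement of the minus signs inside the Gamma arguments follows the reconstruction above and should be checked against [12,19].

```python
# Illustrative sketch: evaluate R_lhat^{mu,nu}(x) from Equations (1)-(2).
# Sign conventions inside the Gamma arguments are an assumption.
import math

def rj_coeff(l, lhat, mu, nu):
    """Coefficient lambda_l^{mu,nu} of x^l in R_lhat^{mu,nu}; see Equation (2)."""
    return ((-1) ** l
            * math.gamma(-mu - nu - lhat) * math.gamma(mu + lhat + 1)
            / (math.factorial(l) * math.gamma(-mu - nu - lhat - l)
               * math.gamma(mu + l + 1) * math.gamma(lhat - l + 1)))

def rj_poly(x, lhat, mu, nu):
    """R_lhat^{mu,nu}(x) as the finite power series of Equation (1)."""
    return sum(rj_coeff(l, lhat, mu, nu) * x ** l for l in range(lhat + 1))

# First few polynomials for mu = 2, nu = -50 (nu < -2M - mu - 1 then allows M up to 23).
for lhat in range(4):
    print(lhat, rj_poly(0.3, lhat, 2.0, -50.0))
```

As a quick consistency check, the formula gives $R_0^{μ,ν} = 1$ and $R_1^{μ,ν}(ϰ) = (μ+1) + (μ+ν+2)ϰ$.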
A key property of RJPs is their orthogonality with respect to the weight function $w_R^{μ,ν}(ϰ) = ϰ^{μ}(1+ϰ)^{ν}$. This requires the conditions $μ > -1$ and $ν < -2M-μ-1$, where $M > 0$ is a specified integer. Specifically, we have the orthogonality relation
$$\int_{0}^{∞} R_{\hat{l}}^{μ,ν}(ϰ)\, R_{\hat{s}}^{μ,ν}(ϰ)\, w_R^{μ,ν}(ϰ)\, dϰ = \begin{cases} \hbar_{\hat{l}}, & \hat{l} = \hat{s},\\[2pt] 0, & \text{otherwise}, \end{cases} \tag{3}$$
where $\hbar_{\hat{l}} = \dfrac{\hat{l}!\, Γ(μ+\hat{l}+1)\, Γ(-μ-ν-\hat{l})}{(-μ-ν-2\hat{l}-1)\, Γ(-ν-\hat{l})}$. We equip the weighted space $L^{2}_{w_R^{μ,ν}}(0,∞)$ with the inner product and norm
$$(f,g)_{w_R^{μ,ν}} = \int_{0}^{∞} f(ϰ)\, g(ϰ)\, w_R^{μ,ν}(ϰ)\, dϰ, \qquad \|g\|_{w_R^{μ,ν}} = (g,g)^{1/2}_{w_R^{μ,ν}}.$$
The RJPs form a complete $L^{2}_{w_R^{μ,ν}}$-orthogonal system, with
$$\|R_k^{μ,ν}\|^{2}_{w_R^{μ,ν}} = \hbar_k.$$
Theorem 1
(Romanovski–Jacobi–Gauss quadrature). The Romanovski–Jacobi–Gauss nodes $\{ϰ_j\}_{j=0}^{N}$ are the zeros of $R_{N+1}^{μ,ν}(ϰ)$, and the corresponding weights are given by
$$ϖ_j^{μ,ν} = \frac{G_N^{μ,ν}}{R_N^{μ,ν}(ϰ_j)\, D_ϰ R_{N+1}^{μ,ν}(ϰ_j)} = \frac{\tilde{G}_N^{μ,ν}}{ϰ_j (1+ϰ_j) \big[ D_ϰ R_{N+1}^{μ,ν}(ϰ_j) \big]^{2}},$$
where
$$G_N^{μ,ν} = \frac{(2N+μ+ν+2)\, Γ(-N-μ-ν-1)\, Γ(N+μ+1)}{(N+1)!\, Γ(-N-ν)}, \qquad \tilde{G}_N^{μ,ν} = \frac{Γ(-N-μ-ν-1)\, Γ(N+μ+2)}{(N+1)!\, Γ(-N-ν-1)}.$$
(For the proof, see [10].)
For $N > 0$ and $ϕ \in S_{2N+1}(0,∞)$, the space of polynomials of degree at most $2N+1$, the quadrature rule is exact:
$$\int_{0}^{∞} ϰ^{μ} (1+ϰ)^{ν}\, ϕ(ϰ)\, dϰ = \sum_{j=0}^{N} ϖ_j^{μ,ν}\, ϕ(ϰ_j). \tag{4}$$

3. Initial Value Problems of Higher Order

RJGM is applied in this section to address linear higher-order ODEs, including cases with homogeneous and nonhomogeneous ICs.

3.1. Homogeneous Initial Conditions

Consider the following linear higher-order ODE [12]:
$$\frac{d^{m} Y}{dϰ^{m}} + \sum_{i=0}^{m-1} η_i \frac{d^{i} Y}{dϰ^{i}} = f(ϰ), \qquad ϰ \in [0,1], \tag{5}$$
subject to the homogeneous ICs
$$\left. \frac{d^{i} Y}{dϰ^{i}} \right|_{ϰ=0} = 0, \qquad i = 0, 1, \ldots, m-1, \tag{6}$$
where $η_0, η_1, \ldots, η_{m-1}$ are constants. Let us define the spaces
$$ξ_N = \mathrm{span}\{R_0^{μ,ν}(ϰ), R_1^{μ,ν}(ϰ), R_2^{μ,ν}(ϰ), \ldots, R_N^{μ,ν}(ϰ)\}, \qquad Φ_N = \{φ(ϰ) \in ξ_N : φ^{(i)}(0) = 0,\ i = 0, 1, \ldots, m-1\};$$
then the RJGM approximation of Equations (5) and (6) is to find $Y_N(ϰ) \in Φ_N$ such that
$$\big(Y_N^{(m)}, φ_j\big)_{w_R^{μ,ν}} + \sum_{i=0}^{m-1} η_i \big(Y_N^{(i)}, φ_j\big)_{w_R^{μ,ν}} = \big(f, φ_j\big)_{w_R^{μ,ν}}, \qquad 0 \le j \le N. \tag{7}$$
We construct suitable basis functions that satisfy the homogeneous initial conditions, defined as
$$φ_i(ϰ) = ϰ^{m} R_i^{μ,ν}(ϰ), \qquad ϰ \in [0,1]. \tag{8}$$
We now seek the approximate solution $Y_N(ϰ)$ of (5) and (6) in the form
$$Y_N(ϰ) = \sum_{i=0}^{N} ρ_i\, φ_i(ϰ), \qquad ϰ \in [0,1]. \tag{9}$$
Substituting (9) into (7), we obtain
$$\sum_{i=0}^{N} ρ_i \left[ \big(φ_i^{(m)}, φ_j\big)_{w_R^{μ,ν}} + \sum_{l=0}^{m-1} η_l \big(φ_i^{(l)}, φ_j\big)_{w_R^{μ,ν}} \right] = \big(f, φ_j\big)_{w_R^{μ,ν}}. \tag{10}$$
We introduce the notation
$$C = (c_{ij})_{0 \le i,j \le N}, \quad c_{ij} = \big(φ_i^{(m)}, φ_j\big)_{w_R^{μ,ν}}; \qquad D^{l} = (d_{ij}^{l})_{0 \le i,j \le N}, \quad d_{ij}^{l} = \big(φ_i^{(l)}, φ_j\big)_{w_R^{μ,ν}}, \quad 0 \le l \le m-1;$$
$$F = (f_0, f_1, \ldots, f_N)^{T}, \qquad f_j = \big(f, φ_j\big)_{w_R^{μ,ν}}.$$
Consequently, (10) is equivalent to the system
$$\left( C + \sum_{l=0}^{m-1} η_l D^{l} \right) P = F. \tag{11}$$
Here $P = (ρ_0, ρ_1, \ldots, ρ_N)^{T}$ is the vector of unknowns, and the nonzero entries of $C$ and $D^{l}$ ($0 \le l \le m-1$) are given explicitly in the next theorem.
Theorem 2. 
If the basis functions $φ_i(ϰ)$ are selected as in (8), and $c_{ij} = \big(φ_i^{(m)}, φ_j\big)_{w_R^{μ,ν}}$ and $d_{ij}^{l} = \big(φ_i^{(l)}, φ_j\big)_{w_R^{μ,ν}}$, $0 \le l \le m-1$, then
$$Φ_N = \mathrm{span}\{φ_0, φ_1, \ldots, φ_N\},$$
and the nonzero entries of the matrices $C$ and $D^{l}$ ($0 \le l \le m-1$) are given explicitly by
$$c_{ij} = \sum_{k=0}^{i}\sum_{s=0}^{j}(-1)^{k+s}\binom{i+μ}{i-k}\binom{j+μ}{j-s}\binom{-i-μ-ν-1}{k}\binom{-j-μ-ν-1}{s}\,\frac{(k+m)!\; Γ(k+m+s+μ+1)\; Γ(-k-m-s-μ-ν-1)}{k!\; Γ(-ν)}, \tag{12}$$
$$d_{ij}^{l} = \sum_{k=0}^{i}\sum_{s=0}^{j}(-1)^{k+s}\binom{i+μ}{i-k}\binom{j+μ}{j-s}\binom{-i-μ-ν-1}{k}\binom{-j-μ-ν-1}{s}\,\frac{(k+m)!\; Γ(k-l+2m+s+μ+1)\; Γ(l-k-2m-s-μ-ν-1)}{(k-l+m)!\; Γ(-ν)}. \tag{13}$$
Proof. 
The $φ_i(ϰ)$ are selected to satisfy the ICs (6). We now prove (13). Utilizing (8) together with (1) and (2), we obtain
$$φ_i(ϰ) = \sum_{k=0}^{i} \frac{(-1)^{k}\, Γ(-μ-ν-i)\, Γ(μ+i+1)}{k!\, Γ(-μ-ν-i-k)\, Γ(μ+k+1)\, Γ(i-k+1)}\, ϰ^{k+m}. \tag{14}$$
Differentiating (14) $l$ times with respect to ϰ gives
$$φ_i^{(l)}(ϰ) = \sum_{k=0}^{i} \frac{(-1)^{k}\, (k+m)!\, Γ(-μ-ν-i)\, Γ(μ+i+1)}{k!\, (k+m-l)!\, Γ(-μ-ν-i-k)\, Γ(μ+k+1)\, Γ(i-k+1)}\, ϰ^{k+m-l}.$$
Employing the inner product, one derives
$$d_{ij}^{l} = \sum_{k=0}^{i}\sum_{s=0}^{j} \frac{(-1)^{k+s}\,(k+m)!\, Γ(-μ-ν-i)\, Γ(-μ-ν-j)\, Γ(μ+i+1)\, Γ(μ+j+1)}{k!\, s!\, (k+m-l)!\, Γ(-μ-ν-i-k)\, Γ(-μ-ν-j-s)\, Γ(μ+k+1)\, Γ(μ+s+1)\, Γ(i-k+1)\, Γ(j-s+1)} \int_{0}^{∞} ϰ^{k+s+μ+2m-l}\, (1+ϰ)^{ν}\, dϰ.$$
After performing the necessary algebraic computations, $D^{l}$ is determined as given in (13). This completes the proof of (13). By substituting $m$ for $l$ in (13), we obtain (12).
Finally, the linear algebraic system (11) is solved for the unknown coefficients $ρ_i$ ($i = 0, 1, \ldots, N$) using Newton's iterative scheme.    □
Note 1. 
The discretization of the proposed Romanovski–Jacobi Galerkin method leads to a structured linear system of the form in Equation (11), whose system matrix comprises two components with distinct structural roles, arising from the Galerkin formulation of the differential operator and the enforcement of the ICs. The matrix $C$ is upper triangular, resulting directly from the incorporation of the ICs within the spectral framework; its triangular form reflects the inherent directionality of these conditions and enables fast, numerically stable resolution through backward substitution. The matrix $D^{l}$, generated from the Galerkin projection of the differential operator onto the span of the basis functions $\{ϰ^{m} R_j^{μ,ν}(ϰ)\}_{j=0}^{N}$, exhibits a structure that depends on the order $l$ of the derivative in the equation:
  • For $l = 0$, $D^{0}$ is a banded matrix, capturing localized interactions between the basis functions caused by lower-order differential terms.
  • For $l > 0$, the matrix $D^{l}$ becomes upper triangular, owing to the dominance of higher-order derivatives and the hierarchical recurrence structure of the RJPs.
    In many practical problems, particularly those involving higher-order derivatives, both $C$ and $D^{l}$ are upper triangular, and hence the entire system matrix inherits this triangular structure.
    This feature yields several computational benefits:
  • The assembly of the linear system requires at most $O(N^{2})$ operations, leveraging the sparsity and triangular structure.
  • The solution of the system is carried out efficiently via backward substitution (illustrated in the sketch following this note), with an overall computational cost below that of dense system solvers [20].
    Such a structured formulation, combined with the spectral convergence of the Romanovski–Jacobi basis and the precision of Gauss-type quadrature, ensures that the proposed method is both highly accurate and computationally efficient for solving DEs on finite intervals with ICs.
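The following small sketch (illustrative only, with a synthetic well-conditioned matrix standing in for the Galerkin matrix) shows the $O(N^{2})$ backward-substitution solve referred to above, using SciPy's triangular solver.

```python
# Illustration of the cost argument in Note 1: once the system matrix of (11)
# is upper triangular, the solve reduces to backward substitution in O(N^2).
# The matrix and right-hand side here are synthetic stand-ins.
import numpy as np
from scipy.linalg import solve_triangular

N = 8
A = np.triu(np.random.rand(N + 1, N + 1)) + np.eye(N + 1)  # upper triangular, well conditioned
F = np.random.rand(N + 1)

rho = solve_triangular(A, F, lower=False)   # backward substitution
print(np.allclose(A @ rho, F))              # True
```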
Remark 1. 
Consider the nonlinear higher-order ODE
$$Y^{(m)}(ϰ) + F\big(Y(ϰ), Y^{(1)}(ϰ), \ldots, Y^{(m-1)}(ϰ)\big) = f(ϰ), \qquad ϰ \in [0,1], \tag{15}$$
with ICs (6). To solve the IVP (15) and (6) numerically, we propose the numerical solution $Y_N(ϰ)$ in the form (9). The residual of (15) then takes the form
$$R_N(ϰ) = Y_N^{(m)}(ϰ) + F\big(Y_N(ϰ), Y_N^{(1)}(ϰ), \ldots, Y_N^{(m-1)}(ϰ)\big) - f(ϰ),$$
where
$$Y_N^{(l)}(ϰ) = \sum_{i=0}^{N} ρ_i\, φ_i^{(l)}(ϰ), \qquad l = 1, 2, \ldots, m.$$
We collocate $R_N(ϰ)$ at the Romanovski–Jacobi nodes $ϰ_r$, $r = 0, 1, \ldots, N$; these points serve as the basis for performing the spectral approximation in our proposed numerical approach. We thus require
$$R_N(ϰ_r) = 0, \qquad r = 0, 1, \ldots, N. \tag{16}$$
By solving the nonlinear algebraic equations (16) using an appropriate solver, the unknown coefficients $ρ_i$ ($i = 0, 1, \ldots, N$) can be determined; these coefficients yield the desired numerical solution (9). A minimal computational sketch of this workflow follows. Example 4 shows the application of this proposed method.
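The sketch below mirrors this collocation workflow with SciPy's fsolve as the "appropriate solver". It deliberately uses generic stand-ins, a monomial basis $ϰ^{i+m}$ and Chebyshev-type nodes in place of $ϰ^{m} R_i^{μ,ν}$ and the Romanovski–Jacobi zeros, together with a manufactured nonlinear IVP, so only the structure, not the basis, matches the method.

```python
# Collocation treatment of a nonlinear IVP, following the structure of Remark 1.
# Basis and nodes are illustrative stand-ins, not the Romanovski-Jacobi ones.
import numpy as np
from scipy.optimize import fsolve

m, N = 2, 8                                  # ODE order, truncation
nodes = 0.5 * (1 - np.cos(np.pi * (np.arange(N + 1) + 0.5) / (N + 1)))  # in (0, 1)

def YN(rho, x, deriv=0):
    """Y_N^(deriv)(x) for the stand-in basis phi_i(x) = x^(i+m)."""
    out = 0.0
    for i, r in enumerate(rho):
        p = i + m
        coef = np.prod([p - k for k in range(deriv)])  # p(p-1)...(p-deriv+1)
        out += r * coef * x ** (p - deriv)
    return out

def residual(rho):
    # Manufactured nonlinear IVP: Y'' + Y^2 = f, with f chosen so Y(x) = x^3 is exact
    # (homogeneous ICs Y(0) = Y'(0) = 0 hold automatically for this basis).
    f = lambda x: 6 * x + x ** 6
    return [YN(rho, x, 2) + YN(rho, x) ** 2 - f(x) for x in nodes]

rho = fsolve(residual, np.zeros(N + 1))
print(abs(YN(rho, 0.7) - 0.7 ** 3))          # ~1e-12
```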

3.2. Nonhomogeneous Initial Conditions

This subsection focuses on Equation (5) subject to the nonhomogeneous ICs
$$Y^{(i)}(0) = θ_i, \qquad i = 0, 1, \ldots, m-1,$$
where the $θ_i$ ($i = 0, 1, \ldots, m-1$) are constants. Applying the transformation
$$V(ϰ) = Y(ϰ) - \sum_{s=0}^{m-1} \frac{θ_s}{s!}\, ϰ^{s}, \tag{17}$$
we obtain
$$V^{(m)}(ϰ) + \sum_{i=0}^{m-1} η_i\, V^{(i)}(ϰ) = \tilde{f}(ϰ), \qquad ϰ \in [0,1],$$
subject to
$$V^{(i)}(0) = 0, \qquad i = 0, 1, \ldots, m-1,$$
where
$$\tilde{f}(ϰ) = f(ϰ) - \sum_{i=0}^{m-1} \sum_{s=i}^{m-1} \frac{η_i\, θ_s}{(s-i)!}\, ϰ^{s-i}.$$
We are now ready to employ the previous RJGM.
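The transformation (17) is purely symbolic and easy to automate. The sketch below applies it to illustrative data $θ_0 = -2/5$, $θ_1 = -3/5$, $η_0 = 2$, $η_1 = -2$ (the setting of Example 1); the variable names are mine.

```python
# Homogenization step of Section 3.2: subtract the Taylor polynomial of the ICs
# and form the shifted right-hand side f_tilde.  Data are illustrative.
import sympy as sp

x = sp.symbols('x')
m = 2
theta = [sp.Rational(-2, 5), sp.Rational(-3, 5)]   # theta_0 = Y(0), theta_1 = Y'(0)
eta = [2, -2]                                      # eta_0, eta_1 in Equation (5)
f = sp.exp(2 * x) * sp.sin(x)                      # original right-hand side

taylor = sum(theta[s] * x ** s / sp.factorial(s) for s in range(m))
# V = Y - taylor satisfies V'' + eta_1 V' + eta_0 V = f_tilde with homogeneous ICs:
f_tilde = sp.expand(f - sum(eta[i] * sp.diff(taylor, x, i) for i in range(m)))
print(f_tilde)
```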

4. Numerical Approach for Solving a System of IVPs

This section presents a further application of the RJGM: solving a system of higher-order DEs under homogeneous ICs. Consider the following:
$$\sum_{i=0}^{m} \sum_{l=1}^{s} η_{r,l}^{i}\, Y_l^{(i)}(ϰ) = f_r(ϰ), \qquad r = 1, 2, \ldots, k, \quad ϰ \in [0,1], \tag{18}$$
governed by the homogeneous ICs
$$Y_l^{(i)}(0) = 0, \qquad i = 0, 1, \ldots, m-1, \quad l = 1, 2, \ldots, s. \tag{19}$$
Following the approach in Section 3.1, we aim to determine $Y_{l,N}(ϰ) \in Φ_N$ for $l = 1, 2, \ldots, s$ satisfying
$$\sum_{i=0}^{m} \sum_{l=1}^{s} η_{r,l}^{i} \big(Y_{l,N}^{(i)}, φ_j\big)_{w_R^{μ,ν}} = \big(f_r, φ_j\big)_{w_R^{μ,ν}}, \qquad 0 \le j \le N. \tag{20}$$
Approximate $Y_{l,N}(ϰ) \in Φ_N$ for $l = 1, 2, \ldots, s$ as
$$Y_{l,N}(ϰ) = \sum_{h=0}^{N} e_{l,h}\, φ_h(ϰ), \qquad ϰ \in [0,1], \quad l = 1, 2, \ldots, s.$$
Given that $φ_h(ϰ) \in Φ_N$ is defined as in (8), the variational formulation (20) is equivalent to
$$\sum_{i=0}^{m} \sum_{l=1}^{s} \sum_{h=0}^{N} η_{r,l}^{i}\, e_{l,h} \big(φ_h^{(i)}, φ_j\big)_{w_R^{μ,ν}} = \big(f_r, φ_j\big)_{w_R^{μ,ν}}, \qquad r = 1, 2, \ldots, k. \tag{21}$$
Thus, Equation (21) can be expressed as the system
$$\sum_{i=0}^{m} \sum_{l=1}^{s} η_{r,l}^{i}\, C^{i} E_l = F_r, \qquad r = 1, 2, \ldots, k, \tag{22}$$
where $C^{i} = \big( (φ_h^{(i)}, φ_j)_{w_R^{μ,ν}} \big)_{0 \le h,j \le N}$,
$$E_l = (e_{l,0}, e_{l,1}, \ldots, e_{l,N})^{T}, \qquad F_r = (f_0^{r}, f_1^{r}, \ldots, f_N^{r})^{T}, \quad f_j^{r} = \big(f_r, φ_j\big)_{w_R^{μ,ν}}.$$
The linear system (22), consisting of $s(N+1)$ algebraic equations in the unknowns $e_{l,h}$ (for $1 \le l \le s$ and $0 \le h \le N$), can be solved using any applicable numerical solver.

5. Partial Differential Equations

In this section, the RJGM is used to solve two types of PDEs: first-order and second-order hyperbolic DEs.

5.1. First-Order Hyperbolic PDEs

Here, we concentrate on employing the RJGM to solve one-dimensional first-order hyperbolic PDEs:
$$\frac{∂Y(ϰ,τ)}{∂τ} - σ_1 \frac{∂Y(ϰ,τ)}{∂ϰ} - σ_2\, Y(ϰ,τ) = F(ϰ,τ), \qquad (ϰ,τ) \in [0,1] \times [0,1], \tag{23}$$
subject to homogeneous ICs in both time and space,
$$Y(ϰ,0) = 0, \qquad ϰ \in [0,1], \tag{24}$$
$$Y(0,τ) = 0, \qquad τ \in [0,1], \tag{25}$$
where $σ_1$ and $σ_2$ are constants and $F(ϰ,τ)$ is a given function. With suitably selected basis functions, the resulting RJGM system remains sparse and can be solved efficiently. The semi-discrete RJGM for Equation (23) involves finding $Y_N \in W_N$, where
$$Y(ϰ,τ) \approx Y_N(ϰ,τ) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} a_{ij}\, S_i(ϰ)\, S_j(τ), \tag{26}$$
$$W_N = \mathrm{Span}\{S_i(ϰ)S_j(τ) : i, j = 0, 1, \ldots, N-1\}.$$
We utilize compact combinations of RJPs as basis functions, aiming to reduce the bandwidth of the coefficient matrix associated with Equation (23). The basis functions $S_i(ϰ)$ and $S_j(τ)$ are taken as
$$S_i(ϰ) = ϰ\, R_i^{μ,ν}(ϰ), \qquad S_j(τ) = τ\, R_j^{μ,ν}(τ).$$
It is now clear that (23) is equivalent to
$$\Big(\frac{∂Y_N}{∂τ},\, S_i(ϰ)S_j(τ)\Big)_{w_R^{μ,ν}} - σ_1 \Big(\frac{∂Y_N}{∂ϰ},\, S_i(ϰ)S_j(τ)\Big)_{w_R^{μ,ν}} - σ_2 \big(Y_N,\, S_i(ϰ)S_j(τ)\big)_{w_R^{μ,ν}} = \big(F,\, S_i(ϰ)S_j(τ)\big)_{w_R^{μ,ν}}, \qquad i, j = 0, 1, \ldots, N-1. \tag{27}$$
Let us denote
$$f_{ij} = \big(F,\, S_i(ϰ)S_j(τ)\big)_{w_R^{μ,ν}} = \int_{0}^{∞}\!\!\int_{0}^{∞} w_R^{μ,ν}(ϰ)\, w_R^{μ,ν}(τ)\, F(ϰ,τ)\, S_i(ϰ)\, S_j(τ)\, dτ\, dϰ;$$
by utilizing (4), one can express
$$f_{ij} = \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} F(ϰ_n, τ_m)\, ϖ_n^{μ,ν} S_i(ϰ_n)\, ϖ_m^{μ,ν} S_j(τ_m),$$
and we set $F = (f_{ij})$, $i, j = 0, 1, \ldots, N-1$. Thus, Equation (27) is given by the matrix equation
$$\big(\hat{D} \otimes \hat{C} - σ_1\, \hat{C} \otimes \hat{D} - σ_2\, \hat{D} \otimes \hat{D}\big)\, A = F, \tag{28}$$
where
$$\hat{c}_{ij} = \Big(\frac{∂S_j(ϰ)}{∂ϰ},\, S_i(ϰ)\Big)_{w_R^{μ,ν}} = \Big(\frac{∂S_j(τ)}{∂τ},\, S_i(τ)\Big)_{w_R^{μ,ν}}, \qquad \hat{d}_{ij} = \big(S_j(ϰ),\, S_i(ϰ)\big)_{w_R^{μ,ν}} = \big(S_j(τ),\, S_i(τ)\big)_{w_R^{μ,ν}}.$$
The nonzero entries of $\hat{C}$ and $\hat{D}$ are given explicitly by
$$\hat{c}_{ij} = \sum_{k=0}^{i}\sum_{s=0}^{j}(-1)^{k+s}\binom{i+μ}{i-k}\binom{j+μ}{j-s}\binom{-i-μ-ν-1}{k}\binom{-j-μ-ν-1}{s}\,(k+1)\,\frac{Γ(k+s+μ+2)\; Γ(-k-s-μ-ν-2)}{Γ(-ν)}, \tag{29}$$
$$\hat{d}_{ij} = \sum_{k=0}^{i}\sum_{s=0}^{j}(-1)^{k+s}\binom{i+μ}{i-k}\binom{j+μ}{j-s}\binom{-i-μ-ν-1}{k}\binom{-j-μ-ν-1}{s}\,\frac{Γ(k+s+μ+3)\; Γ(-k-s-μ-ν-3)}{Γ(-ν)}. \tag{30}$$
To further illustrate the numerical properties of the matrices $\hat{C}$ and $\hat{D}$ formulated in Equations (29) and (30), we present explicit numerical values for a specific size. The following computed values correspond to $N = 6$ with $μ = 2$, $ν = -40$, and provide insight into the structure and behavior of the matrices as their size increases.
For $N = 6$, the matrices are given as follows (entry magnitudes as in the source):
$$\hat{C}_{N=6} = \begin{pmatrix}
\frac{1}{329004} & \frac{61}{3838380} & \frac{1}{82251} & \frac{1}{575757} & \frac{1}{3262623} & \frac{1}{15380937} \\
\frac{1}{295260} & \frac{79}{1673140} & \frac{1663}{13803405} & \frac{1}{14763} & \frac{1}{83657} & \frac{1}{394383} \\
0 & \frac{8}{242165} & \frac{509}{1992672} & \frac{12589}{25435872} & \frac{1}{4403} & \frac{1}{20757} \\
0 & 0 & \frac{5}{32736} & \frac{41}{44640} & \frac{2441}{1604715} & \frac{1}{1683} \\
0 & 0 & 0 & \frac{16}{31465} & \frac{183}{69020} & \frac{257}{65076} \\
0 & 0 & 0 & 0 & \frac{25}{17748} & \frac{1757}{262548}
\end{pmatrix},$$
$$\hat{D}_{N=6} = \begin{pmatrix}
\frac{1}{2878785} & \frac{1}{1254855} & \frac{1}{2179485} & 0 & 0 & 0 \\
\frac{1}{1254855} & \frac{1}{242165} & \frac{1}{157080} & \frac{1}{324632} & 0 & 0 \\
\frac{1}{2179485} & \frac{1}{157080} & \frac{9}{405790} & \frac{1}{34782} & \frac{1}{79112} & 0 \\
0 & \frac{1}{324632} & \frac{1}{34782} & \frac{5}{59334} & \frac{5}{50344} & \frac{1}{24273} \\
0 & 0 & \frac{1}{79112} & \frac{5}{50344} & \frac{5}{18879} & \frac{1}{3393} \\
0 & 0 & 0 & \frac{1}{24273} & \frac{1}{3393} & \frac{7}{9425}
\end{pmatrix}.$$
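Once $\hat{C}$ and $\hat{D}$ are available, the Galerkin system (28) is a single linear solve. The following NumPy sketch uses small synthetic triangular stand-ins for $\hat{C}$ and $\hat{D}$ (with $σ_1 = σ_2 = -1$) purely to illustrate the Kronecker assembly and the row-major vectorization of the coefficient matrix; it is not the authors' implementation.

```python
# Assembly and solution of the matrix equation (28) with synthetic stand-ins.
import numpy as np

N = 4
rng = np.random.default_rng(0)
C_hat = np.triu(rng.random((N, N))) + np.eye(N)   # stand-in for (29)
D_hat = np.triu(rng.random((N, N))) + np.eye(N)   # stand-in for (30)
sigma1, sigma2 = -1.0, -1.0
F = rng.random((N, N))                            # stand-in for the f_ij matrix

A_sys = (np.kron(D_hat, C_hat)
         - sigma1 * np.kron(C_hat, D_hat)
         - sigma2 * np.kron(D_hat, D_hat))
a = np.linalg.solve(A_sys, F.ravel())             # vec of the coefficient matrix A
A_coef = a.reshape(N, N)
print(np.allclose(A_sys @ A_coef.ravel(), F.ravel()))   # True
```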
Note 2. 
The Kronecker product $V \otimes W$ of the $n \times m$ matrix $V$ and the $p \times q$ matrix $W$,
$$V = \begin{pmatrix} v_{11} & v_{12} & \cdots & v_{1m} \\ v_{21} & v_{22} & \cdots & v_{2m} \\ \vdots & \vdots & & \vdots \\ v_{n1} & v_{n2} & \cdots & v_{nm} \end{pmatrix}, \qquad W = \begin{pmatrix} w_{11} & w_{12} & \cdots & w_{1q} \\ w_{21} & w_{22} & \cdots & w_{2q} \\ \vdots & \vdots & & \vdots \\ w_{p1} & w_{p2} & \cdots & w_{pq} \end{pmatrix},$$
is the $np \times mq$ matrix having the block form
$$V \otimes W = \begin{pmatrix} v_{11}W & v_{12}W & \cdots & v_{1m}W \\ v_{21}W & v_{22}W & \cdots & v_{2m}W \\ \vdots & \vdots & & \vdots \\ v_{n1}W & v_{n2}W & \cdots & v_{nm}W \end{pmatrix}.$$
Furthermore, it is important to realize the following (see the numerical check after this note):
  • If $V$ and $W$ are lower (or upper) triangular matrices, then their Kronecker product $V \otimes W$ is also lower (or upper) triangular.
  • If $V$ and $W$ are band matrices, then their Kronecker product $V \otimes W$ is a band matrix whose bandwidth depends on the bandwidths of $V$ and $W$.
    Ultimately, the differential equation, along with the specified initial and boundary conditions, is transformed into a system of linear algebraic equations in the unknown expansion coefficients. This system is then solved efficiently using the Gaussian elimination method.
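The two structural facts above are easy to confirm numerically; a small NumPy check (illustrative only) follows.

```python
# Quick numerical check of the structural facts in Note 2.
import numpy as np

V = np.triu(np.arange(1.0, 10.0).reshape(3, 3))    # upper triangular
W = np.triu(np.arange(1.0, 17.0).reshape(4, 4))    # upper triangular
K = np.kron(V, W)
print(np.allclose(K, np.triu(K)))                  # True: product stays upper triangular

B = np.eye(4) + np.diag([2.0] * 3, 1) + np.diag([3.0] * 3, -1)  # tridiagonal (banded)
KB = np.kron(B, B)                                 # banded; bandwidth grows with the blocks
print(np.count_nonzero(KB), KB.shape)
```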

5.2. Second-Order PDEs

This part focuses on second-order PDEs, as described in [21]:
$$ξ_1 \frac{∂^{2} Y}{∂ϰ^{2}} + ξ_2 \frac{∂^{2} Y}{∂τ∂ϰ} + ξ_3 \frac{∂^{2} Y}{∂τ^{2}} + ξ_4 \frac{∂Y}{∂ϰ} + ξ_5 \frac{∂Y}{∂τ} + ξ_6\, Y = H(ϰ,τ), \qquad (ϰ,τ) \in [0,1] \times [0,1], \tag{31}$$
with
$$Y(ϰ,0) = \frac{∂Y(ϰ,0)}{∂τ} = 0, \quad ϰ \in [0,1]; \qquad Y(0,τ) = \frac{∂Y(0,τ)}{∂ϰ} = 0, \quad τ \in [0,1], \tag{32}$$
where $ξ_1, ξ_2, ξ_3, ξ_4, ξ_5$, and $ξ_6$ are constants and $H(ϰ,τ)$ is a given function. We seek $Y_N \in U_N$, where
$$Y(ϰ,τ) \approx Y_N(ϰ,τ) = \sum_{i=0}^{N-2} \sum_{j=0}^{N-2} e_{ij}\, L_i(ϰ)\, L_j(τ), \tag{33}$$
$$U_N = \mathrm{Span}\{L_i(ϰ)L_j(τ) : i, j = 0, 1, \ldots, N-2\}.$$
The basis functions $L_i(ϰ)$ and $L_j(τ)$ are taken as
$$L_i(ϰ) = ϰ^{2} R_i^{μ,ν}(ϰ), \qquad L_j(τ) = τ^{2} R_j^{μ,ν}(τ);$$
then we have to find $Y_N(ϰ,τ) \in U_N$ such that
$$ξ_1 \Big(\frac{∂^{2} Y_N}{∂ϰ^{2}},\, L_iL_j\Big)_{w_R^{μ,ν}} + ξ_2 \Big(\frac{∂^{2} Y_N}{∂τ∂ϰ},\, L_iL_j\Big)_{w_R^{μ,ν}} + ξ_3 \Big(\frac{∂^{2} Y_N}{∂τ^{2}},\, L_iL_j\Big)_{w_R^{μ,ν}} + ξ_4 \Big(\frac{∂Y_N}{∂ϰ},\, L_iL_j\Big)_{w_R^{μ,ν}} + ξ_5 \Big(\frac{∂Y_N}{∂τ},\, L_iL_j\Big)_{w_R^{μ,ν}} + ξ_6 \big(Y_N,\, L_iL_j\big)_{w_R^{μ,ν}} = \big(H,\, L_iL_j\big)_{w_R^{μ,ν}}, \qquad i, j = 0, 1, \ldots, N-2. \tag{34}$$
Equation (34) can now be expressed as the matrix equation
$$\Big( ξ_1 (L \otimes Q) + ξ_2 (P \otimes P) + ξ_3 (Q \otimes L) + ξ_4 (P \otimes Q) + ξ_5 (Q \otimes P) + ξ_6 (Q \otimes Q) \Big) E = H, \tag{35}$$
where $L$, $Q$, and $P$ are the $(N-1) \times (N-1)$ matrices
$$L = (\hat{l}_{ij})_{0 \le i,j \le N-2}, \qquad Q = (\hat{q}_{ij})_{0 \le i,j \le N-2}, \qquad P = (\hat{p}_{ij})_{0 \le i,j \le N-2},$$
with
$$\hat{l}_{ij} = \Big(\frac{∂^{2} L_j(ϰ)}{∂ϰ^{2}},\, L_i(ϰ)\Big)_{w_R^{μ,ν}}, \qquad \hat{p}_{ij} = \Big(\frac{∂L_j(ϰ)}{∂ϰ},\, L_i(ϰ)\Big)_{w_R^{μ,ν}}, \qquad \hat{q}_{ij} = \big(L_j(ϰ),\, L_i(ϰ)\big)_{w_R^{μ,ν}},$$
where identical expressions hold in the τ variable. The nonzero entries of $L$, $Q$, and $P$ are explicitly
$$\hat{l}_{ij} = \sum_{k=0}^{i}\sum_{s=0}^{j}(-1)^{k+s}\binom{i+μ}{i-k}\binom{j+μ}{j-s}\binom{-i-μ-ν-1}{k}\binom{-j-μ-ν-1}{s}\,(k+2)(k+1)\,\frac{Γ(k+s+μ+3)\; Γ(-k-s-μ-ν-3)}{Γ(-ν)},$$
$$\hat{p}_{ij} = \sum_{k=0}^{i}\sum_{s=0}^{j}(-1)^{k+s}\binom{i+μ}{i-k}\binom{j+μ}{j-s}\binom{-i-μ-ν-1}{k}\binom{-j-μ-ν-1}{s}\,(k+2)\,\frac{Γ(k+s+μ+4)\; Γ(-k-s-μ-ν-4)}{Γ(-ν)},$$
$$\hat{q}_{ij} = \sum_{k=0}^{i}\sum_{s=0}^{j}(-1)^{k+s}\binom{i+μ}{i-k}\binom{j+μ}{j-s}\binom{-i-μ-ν-1}{k}\binom{-j-μ-ν-1}{s}\,\frac{Γ(k+s+μ+5)\; Γ(-k-s-μ-ν-5)}{Γ(-ν)}.$$

6. Handling of Nonhomogeneous Conditions

In this work, we introduce a method for converting problems involving nonhomogeneous ICs in both time and space into equivalent problems with homogeneous ICs. For demonstration purposes, the approach is applied to the first-order hyperbolic PDE (23); nonhomogeneous ICs for the second-order PDE (31) can be addressed using the same methodology. Suppose the solution Y of Equation (23) is subject to the nonhomogeneous conditions
$$Y(ϰ,0) = λ_{+}(ϰ), \qquad Y(0,τ) = λ_{-}(τ),$$
where $λ_{+}(ϰ)$ and $λ_{-}(τ)$ are given functions. To proceed, consider the transformation
$$U(ϰ,τ) = Y(ϰ,τ) - Y_ϵ(ϰ,τ), \tag{36}$$
where $Y_ϵ(ϰ,τ)$ is an auxiliary function constructed to satisfy the original nonhomogeneous ICs in both time and space, so that U satisfies the modified problem
$$\frac{∂U(ϰ,τ)}{∂τ} - σ_1 \frac{∂U(ϰ,τ)}{∂ϰ} - σ_2\, U(ϰ,τ) = \tilde{F}(ϰ,τ), \qquad (ϰ,τ) \in [0,1] \times [0,1],$$
under the homogeneous ICs
$$U(ϰ,0) = U(0,τ) = 0,$$
where
$$\tilde{F}(ϰ,τ) = F(ϰ,τ) - \frac{∂Y_ϵ(ϰ,τ)}{∂τ} + σ_1 \frac{∂Y_ϵ(ϰ,τ)}{∂ϰ} + σ_2\, Y_ϵ(ϰ,τ).$$
Remark 2. 
Recent studies have demonstrated the effectiveness of Romanovski–Jacobi polynomials (RJPs) in solving DEs. In [12], an operational matrix-based collocation method was applied to higher-order ODEs and second-order PDEs. Meanwhile, ref. [19] proposed a spectral collocation algorithm for time-fractional telegraph equations, utilizing a hybrid basis of shifted Jacobi and Romanovski–Jacobi polynomials. This approach offered enhanced flexibility and accuracy, confirming the strength of RJPs in tackling complex differential systems.
Remark 3. 
The presented algorithm, RJGM, employs the general basis $ϰ^{m} R_i^{μ,ν}(ϰ)$ and computes the necessary nonzero entries of the matrices required for implementing the Galerkin scheme, as detailed in Theorem 2. We then utilize these computational tools to demonstrate the flexibility of the RJGM in obtaining numerical solutions for DEs of various orders, highlighting the robustness of this algorithm.

7. Convergence and Error Analysis

In this section, we present a comprehensive study of the convergence of the suggested expansions. The three spaces $Φ_N$, $W_N$, and $U_N$, defined in Section 3 and Section 5, are our primary objects of interest. The following two lemmas are needed.
Lemma 1. 
For all $i = 0, 1, \ldots, M$, with $ν < -2M-μ-1$ and $μ > -1$, the following estimate holds for $R_i^{μ,ν}(ϰ)$:
$$\big|R_i^{μ,ν}(ϰ)\big| \le \frac{Γ(-ν)}{Γ(-ν-i)}, \qquad ϰ \in [0,1]. \tag{37}$$
Proof. 
From (1), we have
$$\big|R_{\hat{l}}^{μ,ν}(ϰ)\big| \le \sum_{l=0}^{\hat{l}} \left| \frac{Γ(-μ-ν-\hat{l})\, Γ(μ+\hat{l}+1)}{l!\, Γ(-μ-ν-\hat{l}-l)\, Γ(μ+l+1)\, Γ(\hat{l}-l+1)} \right|, \qquad ϰ \in [0,1]. \tag{38}$$
The inequality (38), after some manipulation, can be written in the form
$$\big|R_{\hat{l}}^{μ,ν}(ϰ)\big| \le \frac{Γ(\hat{l}+μ+1)}{Γ(μ+1)\, Γ(\hat{l}+1)}\; {}_2F_1\big(-\hat{l},\, \hat{l}+μ+ν+1;\, μ+1;\, 1\big). \tag{39}$$
The application of the Chu–Vandermonde identity leads us to write (39) in the form of (37), and the proof is completed.    □
The following lemma is a direct result of Theorem 5.4 in [10].
Lemma 2. 
For all $i \ge 0$, with $ν < -2M-μ-1$ and $μ > -1$, the following inversion formula holds for $R_i^{μ,ν}(ϰ)$:
$$ϰ^{m} = \sum_{n=0}^{m} d_{n,m}^{μ,ν}\, R_n^{μ,ν}(ϰ), \tag{40}$$
where
$$d_{n,m}^{μ,ν} = (-1)^{n}\, n!\, \binom{m}{n}\, \frac{(-μ-ν-2n-1)\, Γ(m+μ+1)\, Γ(-m-n-μ-ν-1)}{Γ(n+μ+1)\, Γ(-n-μ-ν)}.$$
Note 3. 
The RJPs $R_i^{μ,ν}(ϰ)$, $i = 0, 1, \ldots, M$, are defined under the conditions $ν < -2M-μ-1$, $μ > -1$, $M > 0$. In view of these conditions, throughout the current section one should note that $-ν > 2M+1+μ > 2M > 0$ and $-μ-ν > 2M+1$.

7.1. The Convergence and Error Analysis for IVPs (5), (6), (18), and (19)

In this part, we analyze the convergence and error estimates of the proposed techniques for the model given by IVPs (5) and (6) and the system given by (18) and (19). If the error obtained by solving (5) and (6) numerically is defined by
$$e_N(ϰ) = Y(ϰ) - Y_N(ϰ),$$
then the maximum absolute error (MAE) of the suggested method is estimated by
$$\|e_N\| = \max_{0 \le ϰ \le 1} |e_N(ϰ)|. \tag{41}$$
On the other hand, with respect to the system given by (18) and (19), the differences between the function $Y_j(ϰ)$ and its estimated value $Y_{j,N}(ϰ)$ are denoted by
$$e_{j,N}(ϰ) = Y_j(ϰ) - Y_{j,N}(ϰ), \qquad j = 1, 2, \ldots, k,$$
and the MAEs of the proposed method are determined using
$$\|e_{j,N}\| = \max_{0 \le ϰ \le 1} |e_{j,N}(ϰ)|.$$
To assess the accuracy of the proposed method, we introduce the following error definition:
$$E_N = \max\{\|e_{1,N}\|, \ldots, \|e_{k,N}\|\}. \tag{42}$$
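In practice, these error measures are evaluated by sampling the exact and computed solutions on a dense grid; the short sketch below, with stand-in functions, illustrates the computation of (41).

```python
# Evaluating the maximum absolute error (41) on a dense grid.
# y_exact / y_approx are illustrative stand-ins.
import numpy as np

grid = np.linspace(0.0, 1.0, 2001)
y_exact = lambda x: x * np.exp(x)                            # e.g. Example 2
y_approx = lambda x: x * np.exp(x) + 1e-9 * np.sin(40 * x)   # stand-in for Y_N
mae = np.max(np.abs(y_exact(grid) - y_approx(grid)))
print(f"{mae:.3e}")
```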
Theorem 3. 
A function $y(ϰ) \in L^{2}_{w_R^{μ,ν}}(0,∞)$, with $|y^{(n)}(ϰ)| \le L$, can be represented as
$$y(ϰ) = \sum_{i=0}^{∞} y_i\, R_i^{μ,ν}(ϰ), \tag{43}$$
and the series converges uniformly to y(ϰ). In addition, the expansion coefficients $y_i$ satisfy the inequality
$$|y_i| < L \left( \frac{μ}{-μ-ν} \right)^{n} \frac{n!\; 2^{i}}{i!}\, \frac{Γ(-ν-i)}{Γ(-ν)}, \qquad i \ge n, \quad 0 < μ < -\frac{ν}{2}. \tag{44}$$
Proof. 
Using the orthogonality relation (3) leads to
$$y_i = \frac{1}{\hbar_i} \int_{0}^{∞} w_R^{μ,ν}(ϰ)\, y(ϰ)\, R_i^{μ,ν}(ϰ)\, dϰ.$$
The function y(ϰ) can be written as
$$y(ϰ) = \sum_{k=0}^{n-1} \frac{y^{(k)}(0)}{k!}\, ϰ^{k} + \frac{y^{(n)}(ζ_ϰ)}{n!}\, ϰ^{n}, \qquad ζ_ϰ \in (0, ϰ). \tag{45}$$
By substituting (45) and applying the orthogonality relation, while accounting for the condition $i \ge n$, we obtain
$$\int_{0}^{∞} w_R^{μ,ν}(ϰ)\, ϰ^{k}\, R_i^{μ,ν}(ϰ)\, dϰ = 0, \qquad k = 0, 1, \ldots, n-1,$$
and $|y^{(n)}(ϰ)| \le L$, enabling one to obtain
$$|y_i| < \frac{L}{n!\, \hbar_i} \int_{0}^{∞} w_R^{μ,ν}(ϰ)\, ϰ^{n}\, \big|R_i^{μ,ν}(ϰ)\big|\, dϰ. \tag{46}$$
Now, using Formula (1) and
$$\int_{0}^{∞} w_R^{μ,ν}(ϰ)\, ϰ^{k+n}\, dϰ = \frac{Γ(k+n+μ+1)\, Γ(-k-n-μ-ν-1)}{Γ(-ν)}, \qquad k+n+μ+ν < -1, \quad k+n+μ > -1,$$
enables one to write relation (46) in the form
$$|y_i| < \frac{L}{n!\, \hbar_i} \sum_{k=0}^{i} |λ_k^{μ,ν}|\, \frac{Γ(k+n+μ+1)\, Γ(-k-n-μ-ν-1)}{Γ(-ν)}. \tag{47}$$
Substituting (2) into (47) and using the asymptotic result ([22], p. 233)
$$Γ(aϰ + b) \simeq \sqrt{2π}\, e^{-aϰ}\, (aϰ)^{aϰ+b-1/2}, \qquad ϰ \gg 0, \quad a > 0, \tag{48}$$
we arrive at (44) through straightforward algebraic manipulations, thereby concluding the proof of the theorem.    □
Theorem 4. 
Let y(ϰ) satisfy the assumptions of Theorem 3, and let
$$y_N(ϰ) = \sum_{i=0}^{N-1} y_i\, R_i^{μ,ν}(ϰ); \tag{49}$$
then the following inequality is valid:
$$\|y - y_N\| = \max_{ϰ \in [0,1]} |y(ϰ) - y_N(ϰ)| < UB_N(n, μ, ν),$$
where
$$UB_N(n, μ, ν) = L \left( \frac{μ}{-μ-ν} \right)^{n} \frac{n!}{\sqrt{2π}}\; \frac{1}{N^{1/2}}\; \frac{2e}{N-2e} \left( \frac{2e}{N} \right)^{N-1},$$
for $0 < μ < -\frac{ν}{2}$ and $N > 2e$.
Proof. 
Following Theorem 3 and Lemma 1, we obtain
$$\|y - y_N\| \le \max_{ϰ \in [0,1]} \sum_{i=N}^{∞} |y_i|\, \big|R_i^{μ,ν}(ϰ)\big| \le L \left( \frac{μ}{-μ-ν} \right)^{n} n! \sum_{i=N}^{∞} \frac{2^{i}}{i!}.$$
Using
$$i! \simeq \sqrt{2π i}\, \left( \frac{i}{e} \right)^{i},$$
we obtain
$$\|y - y_N\| \le L \left( \frac{μ}{-μ-ν} \right)^{n} \frac{n!}{\sqrt{2π}} \sum_{i=N}^{∞} i^{-1/2} \left( \frac{2e}{i} \right)^{i} < L \left( \frac{μ}{-μ-ν} \right)^{n} \frac{n!}{\sqrt{2π}}\; \frac{1}{N^{1/2}}\; R_N,$$
where $R_N = \sum_{i=N}^{∞} \left( \frac{2e}{N} \right)^{i}$, which is convergent for $N > 2e$, so that
$$R_N = \frac{2e}{N-2e} \left( \frac{2e}{N} \right)^{N-1},$$
and Theorem 4 is now proved.    □
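To visualize the super-geometric decay predicted by Theorem 4, one can evaluate $UB_N$ directly. The sketch below assumes $L = 1$ and follows the reconstructed formula above, so the constants should be checked against the source.

```python
# Evaluating the theoretical bound UB_N(n, mu, nu) of Theorem 4 (assuming L = 1).
import math

def UB(N, n, mu, nu, L=1.0):
    e = math.e
    return (L * (mu / (-mu - nu)) ** n * math.factorial(n) / math.sqrt(2 * math.pi)
            / math.sqrt(N) * (2 * e / (N - 2 * e)) * (2 * e / N) ** (N - 1))

for N in (8, 12, 16, 20):                 # valid for N > 2e ~ 5.44
    print(N, UB(N, n=2, mu=1.0, nu=-50.0))
```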
Theorem 5. 
Assume that $Y(ϰ) = ϰ^{m} y(ϰ)$, where y(ϰ) satisfies the hypotheses of Theorem 3, and let $Y_N(ϰ)$ have the expression (9) and be the best possible approximation (BPA) to Y(ϰ) out of $Φ_N$. Then
$$\|e_N\| < UB_N(n, μ, ν),$$
for $0 < μ < -\frac{ν}{2}$ and $N > 2e$.
Proof. 
Let $y_N(ϰ)$ have the form (49); then Theorem 4 leads to
$$\|y - y_N\| < UB_N(n, μ, ν).$$
Now consider the approximation $Y(ϰ) \approx \tilde{Y}_N(ϰ) = ϰ^{m} y_N(ϰ)$; then, since $|ϰ^{m}| \le 1$ on $[0,1]$,
$$\|Y - \tilde{Y}_N\| = \max_{ϰ \in [0,1]} |Y(ϰ) - \tilde{Y}_N(ϰ)| \le \max_{ϰ \in [0,1]} |y(ϰ) - y_N(ϰ)| = \|y - y_N\| < UB_N(n, μ, ν).$$
Since $Y_N(ϰ) \in Φ_N$ is the BPA to Y(ϰ), we have
$$\|Y - Y_N\| \le \|Y - h\|, \qquad \forall h \in Φ_N;$$
therefore,
$$\|Y - Y_N\| \le \|Y - \tilde{Y}_N\| < UB_N(n, μ, ν).$$
   □
The following theorem further emphasizes error stability by estimating error propagation.
Theorem 6. 
For any two successive approximations of Y(ϰ) for the problem (5) and (6), we obtain
$$\|Y_{N+1} - Y_N\| \lesssim UB_N(n, μ, ν). \tag{50}$$
Proof. 
We have
$$\|Y_{N+1} - Y_N\| = \|Y_{N+1} - Y + Y - Y_N\| \le \|Y - Y_{N+1}\| + \|Y - Y_N\| \le \|e_{N+1}\| + \|e_N\|.$$
By Theorems 4 and 5, the successive approximations $Y_N(ϰ)$ for the problem (5) and (6) satisfy (50).    □
Remark 4. 
Theorems 3 and 5 can be extended to the error analysis of the proposed method for the system of differential Equations (18) and (19). The key steps would involve:
(i) 
Establishing the corresponding hypotheses and assumptions of Theorem 3 to obtain the bound on the coefficients $y_{i,j}$ in the expansions
$$Y_{j,N}(ϰ) = \sum_{i=0}^{N-1} y_{i,j}\, R_i^{μ,ν}(ϰ), \qquad j = 1, 2, \ldots, k.$$
(ii) 
Deriving the error-bound expressions for $\|e_{j,N}\|$, $j = 1, 2, \ldots, k$, analogous to $\|e_N\|$ in Theorem 5.
(iii) 
Obtaining the bound on $E_N$ defined in (42):
$$E_N < UB_N(n, μ, ν).$$

7.2. The Convergence and Error Analysis for PDEs (23)–(25) and (31)–(32)

Here, we examine the proposed method's convergence and error estimates for the PDE problems. The two spaces $W_N$ and $U_N$, defined in Section 5.1 and Section 5.2, are our primary objects of interest. The error function $E_N(ϰ,τ)$ is defined by
$$E_N(ϰ,τ) = Y(ϰ,τ) - Y_N(ϰ,τ). \tag{51}$$
We analyze the numerical scheme's error through the $L^{∞}$ norm,
$$\|E_N\| = \|Y - Y_N\| = \max_{(ϰ,τ) \in I} |Y(ϰ,τ) - Y_N(ϰ,τ)|, \qquad I = [0,1] \times [0,1]. \tag{52}$$
Now we examine the convergence and error analysis of the suggested expansions (26) and (33). The following two theorems are necessary to continue with our investigation.
Theorem 7. 
Let $y(ϰ,τ)$ be an infinitely differentiable function at the origin, with
$$\left| \frac{∂^{i+j} y}{∂ϰ^{i} ∂τ^{j}}(0,0) \right| < λ \left( \frac{μ}{-2ν} \right)^{i+j}, \qquad 0 < μ < -\frac{ν}{2}.$$
Then it has the expansion
$$y(ϰ,τ) = \sum_{i=0}^{∞}\sum_{j=0}^{∞} Y_{i,j}\, R_i^{μ,ν}(ϰ)\, R_j^{μ,ν}(τ), \tag{53}$$
where
$$Y_{i,j} = \sum_{p=0}^{∞}\sum_{q=0}^{∞} y_{i+p,\,j+q}\, d_{i,\,p+i}^{μ,ν}\, d_{j,\,q+j}^{μ,ν}, \qquad y_{i,j} = \frac{1}{i!\, j!}\, \frac{∂^{i+j} y}{∂ϰ^{i} ∂τ^{j}}(0,0). \tag{54}$$
The expansion coefficients satisfy
$$|Y_{i,j}| \lesssim λ \left( \frac{μ}{(-2ν)(-μ-ν)} \right)^{i+j} e^{\frac{μ}{-μ-ν}}\, e^{\frac{μ}{-2ν}}, \qquad i, j \ge 0, \tag{55}$$
where ≲ indicates inequality up to a generic positive constant. The series in Equation (53) converges uniformly to $y(ϰ,τ)$.
Proof. 
First, we expand $y(ϰ,τ)$ as
$$y(ϰ,τ) = \sum_{i=0}^{∞}\sum_{j=0}^{∞} y_{i,j}\, ϰ^{i} τ^{j}.$$
This expansion can be written in the form
$$y(ϰ,τ) = \sum_{i=0}^{∞} b_i(τ)\, ϰ^{i}, \tag{56}$$
where
$$b_i(τ) = \sum_{j=0}^{∞} y_{i,j}\, τ^{j}.$$
Inserting (40) into (56) enables one to write
$$y(ϰ,τ) = \sum_{i=0}^{∞} b_i(τ) \sum_{p=0}^{i} d_{p,i}^{μ,ν}\, R_p^{μ,ν}(ϰ). \tag{57}$$
By expanding the right-hand side of Equation (57) and grouping like terms, we derive the expression
$$y(ϰ,τ) = \sum_{i=0}^{∞} \sum_{p=0}^{∞} b_{p+i}(τ)\, d_{i,\,p+i}^{μ,ν}\, R_i^{μ,ν}(ϰ). \tag{58}$$
Now, we have
$$\sum_{p=0}^{∞} b_{p+i}(τ)\, d_{i,\,p+i}^{μ,ν} = \sum_{p=0}^{∞} \sum_{j=0}^{∞} y_{p+i,\,j}\, τ^{j}\, d_{i,\,p+i}^{μ,ν} = \sum_{j=0}^{∞} \left( \sum_{p=0}^{∞} y_{p+i,\,j}\, d_{i,\,p+i}^{μ,ν} \right) τ^{j}. \tag{59}$$
Applying Equation (40) to Equation (59) in the same manner allows one to write
$$\sum_{p=0}^{∞} b_{p+i}(τ)\, d_{i,\,p+i}^{μ,ν} = \sum_{j=0}^{∞} \sum_{p=0}^{∞} \sum_{q=0}^{∞} y_{p+i,\,q+j}\, d_{i,\,p+i}^{μ,ν}\, d_{j,\,q+j}^{μ,ν}\, R_j^{μ,ν}(τ). \tag{60}$$
Substituting Equation (60) into Equation (58) immediately proves Equation (53) with the coefficients (54). This concludes the first part of the proof of the theorem.
Now, we need to prove Equation (55). In view of Equation (54), we have
$$|Y_{i,j}| \le λ \sum_{p=0}^{∞}\sum_{q=0}^{∞} |y_{i+p,\,j+q}|\, |d_{i,\,p+i}^{μ,ν}|\, |d_{j,\,q+j}^{μ,ν}|, \qquad i, j \ge 0.$$
Using (48) enables us to obtain
$$|d_{n,m}^{μ,ν}| \lesssim \frac{m!}{(m-n)!} \left( \frac{μ}{-μ-ν} \right)^{m} μ^{-n}, \qquad 0 < μ < -\frac{ν}{2},$$
which leads to
$$|Y_{i,j}| \lesssim λ \left( \frac{μ}{-2ν} \right)^{i+j} \left( \frac{1}{-μ-ν} \right)^{i+j} \sum_{p=0}^{∞} \frac{1}{p!} \left( \frac{μ}{-μ-ν} \right)^{p} \sum_{q=0}^{∞} \frac{1}{q!} \left( \frac{μ}{-2ν} \right)^{q},$$
which takes the form
$$|Y_{i,j}| \lesssim λ \left( \frac{μ}{(-2ν)(-μ-ν)} \right)^{i+j} e^{\frac{μ}{-μ-ν}}\, e^{\frac{μ}{-2ν}}.$$
This proves the bound (55). We now prove the second part of the theorem.
In view of Lemma 1, we obtain
$$\sum_{i=0}^{∞}\sum_{j=0}^{∞} |Y_{i,j}|\, |R_i^{μ,ν}(ϰ)|\, |R_j^{μ,ν}(τ)| \lesssim λ\, e^{\frac{μ}{-μ-ν}} e^{\frac{μ}{-2ν}} \sum_{i=0}^{∞}\sum_{j=0}^{∞} \left( \frac{μ}{(-2ν)(-μ-ν)} \right)^{i+j} \frac{Γ(-ν)}{Γ(-ν-i)}\, \frac{Γ(-ν)}{Γ(-ν-j)}. \tag{61}$$
Then, using (48), the inequality (61) takes the form
$$\sum_{i=0}^{∞}\sum_{j=0}^{∞} |Y_{i,j}|\, |R_i^{μ,ν}(ϰ)|\, |R_j^{μ,ν}(τ)| \lesssim λ\, e^{\frac{μ}{-μ-ν}} e^{\frac{μ}{-2ν}} \sum_{i=0}^{∞}\sum_{j=0}^{∞} \left( \frac{μ}{2(-μ-ν)} \right)^{i+j}. \tag{62}$$
The condition $0 < μ < -\frac{ν}{2}$ gives $\frac{μ}{-μ-ν} < 1$; then we have
$$\sum_{i=0}^{∞}\sum_{j=0}^{∞} |Y_{i,j}|\, |R_i^{μ,ν}(ϰ)|\, |R_j^{μ,ν}(τ)| \lesssim λ \left( \frac{2(μ+ν)}{3μ+2ν} \right)^{2} e^{\frac{μ}{-μ-ν}}\, e^{\frac{μ}{-2ν}}.$$
This shows that the series in Equation (53) converges uniformly to $y(ϰ,τ)$.    □
Theorem 8. 
If $y(ϰ,τ)$ satisfies the hypotheses of Theorem 7, and if
$$y_N(ϰ,τ) = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} Y_{i,j}\, R_i^{μ,ν}(ϰ)\, R_j^{μ,ν}(τ), \tag{63}$$
then
$$\|y - y_N\| \lesssim \frac{1}{2^{N}}.$$
Proof. 
The truncation error may be written as
$$\|y - y_N\| = \left\| \sum_{i=0}^{∞}\sum_{j=0}^{∞} Y_{i,j}\, R_i^{μ,ν}(ϰ)\, R_j^{μ,ν}(τ) - \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} Y_{i,j}\, R_i^{μ,ν}(ϰ)\, R_j^{μ,ν}(τ) \right\| \le \sum_{i=0}^{N-1}\sum_{j=N}^{∞} |Y_{i,j}|\, |R_i^{μ,ν}(ϰ)|\, |R_j^{μ,ν}(τ)| + \sum_{i=N}^{∞}\sum_{j=0}^{∞} |Y_{i,j}|\, |R_i^{μ,ν}(ϰ)|\, |R_j^{μ,ν}(τ)|. \tag{64}$$
Applying Lemma 1 and Theorem 7 to (64), and then using (48) as in the proof of Theorem 7, gives
$$\|y - y_N\| \lesssim λ\, e^{\frac{μ}{-μ-ν}} e^{\frac{μ}{-2ν}} \left[ \sum_{i=0}^{N-1}\sum_{j=N}^{∞} \left( \frac{μ}{2(-μ-ν)} \right)^{i+j} + \sum_{i=N}^{∞}\sum_{j=0}^{∞} \left( \frac{μ}{2(-μ-ν)} \right)^{i+j} \right] \lesssim \frac{1}{2^{N}},$$
since $\frac{μ}{2(-μ-ν)} < \frac{1}{2}$ by the condition $0 < μ < -\frac{ν}{2}$. Then we obtain
$$\|y - y_N\| \lesssim \frac{1}{2^{N}},$$
and the theorem's proof is complete.    □
Theorem 9. 
Assume $Y(ϰ,τ) = ϰτ\, y(ϰ,τ)$, where $y(ϰ,τ)$ meets the assumptions of Theorem 7, and let $Y_N(ϰ,τ)$, given by (26), be the best possible approximation (BPA) to $Y(ϰ,τ)$ out of $W_N$. Then
$$\|E_N\| \lesssim \frac{1}{2^{N}}.$$
Proof. 
Let $y_N(ϰ,τ)$ have the form (63); then Theorem 8 leads to
$$\|y - y_N\| \lesssim \frac{1}{2^{N}}.$$
Attention is now directed to the approximation $Y(ϰ,τ) \approx \tilde{Y}_N(ϰ,τ) = ϰτ\, y_N(ϰ,τ)$; then, since $|ϰτ| \le 1$ on $I$,
$$\|Y - \tilde{Y}_N\| \le \|y - y_N\| \lesssim \frac{1}{2^{N}}.$$
Since $Y_N(ϰ,τ) \in W_N$ represents the BPA to $Y(ϰ,τ)$,
$$\|Y - Y_N\| \le \|Y - h\|, \qquad \forall h \in W_N;$$
therefore,
$$\|Y - Y_N\| \le \|Y - \tilde{Y}_N\| \lesssim \frac{1}{2^{N}}.$$
   □
Theorem 10. 
Assume that $Y(ϰ,τ) = ϰ^{2}τ^{2}\, y(ϰ,τ)$, where $y(ϰ,τ)$ satisfies the hypotheses of Theorem 7, and let $Y_N(ϰ,τ)$ have the expression (33) and represent the BPA to $Y(ϰ,τ)$ out of $U_N$. Then
$$\|E_N\| \lesssim \frac{1}{2^{N-1}}. \tag{65}$$
Proof. 
A proof similar to that of Theorem 9 establishes (65).    □
Error stability is further emphasized in the following theorem through an estimation of error propagation.
Theorem 11. 
For any two successive approximations of $Y(ϰ,τ)$ for each of the two problems (23)–(25) and (31)–(32), we obtain
$$\|Y_{N+1} - Y_N\| = O\!\left( \frac{1}{2^{N}} \right). \tag{66}$$
Proof. 
We have
$$\|Y_{N+1} - Y_N\| = \|Y_{N+1} - Y + Y - Y_N\| \le \|Y - Y_{N+1}\| + \|Y - Y_N\| \le \|E_{N+1}\| + \|E_N\|.$$
By Theorems 8 and 9, the successive approximations $Y_N(ϰ,τ)$ for the two problems (23)–(25) and (31)–(32) satisfy (66).    □
Remark 5. 
We acknowledge that convergence analysis, error estimates, and the order of error are well-established topics in the numerical analysis literature. However, our work specifically applies these techniques to the context of RJPs within SGMs for solving DEs. Furthermore, we provide comparative results that underscore the effectiveness of our method relative to existing studies. We believe these contributions add value to the discourse in numerical analysis and demonstrate the potential of RJPs in improving numerical solutions for DEs.

8. Numerical Results

In this part, we use the suggested approach to solve two types of DEs. Numerical examples are provided to demonstrate the accuracy and efficiency of the approach. The results are compared with exact solutions and other numerical methods, showcasing the robustness and reliability of the proposed technique. Tables and figures are included to illustrate the convergence rates, error analysis, and computational performance, further validating the method’s effectiveness for both ODEs and PDEs.
Example 1. 
Examine the next problem (see [23]):
$$Y^{(2)}(ϰ) - 2Y^{(1)}(ϰ) + 2Y(ϰ) = e^{2ϰ} \sin(ϰ), \qquad ϰ \in [0,1], \qquad Y(0) = -0.4, \quad Y^{(1)}(0) = -0.6, \tag{67}$$
whose analytical solution is $Y(ϰ) = 0.2\, e^{2ϰ} \big(\sin(ϰ) - 2\cos(ϰ)\big)$.
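As a quick independent check (not part of the proposed algorithm), SymPy confirms that the stated analytical solution satisfies the ODE and the ICs:

```python
# Verify the exact solution of Example 1 symbolically.
import sympy as sp

x = sp.symbols('x')
Y = sp.Rational(1, 5) * sp.exp(2 * x) * (sp.sin(x) - 2 * sp.cos(x))
residual = sp.diff(Y, x, 2) - 2 * sp.diff(Y, x) + 2 * Y - sp.exp(2 * x) * sp.sin(x)
print(sp.simplify(residual))                       # 0
print(Y.subs(x, 0), sp.diff(Y, x).subs(x, 0))      # -2/5 and -3/5, i.e. -0.4 and -0.6
```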
Table 1 presents an absolute error comparison for Example 1 between the Differential Transform Method (DTM) and our proposed RJGM at N = 18 and ν = −65 with varying values of μ (μ = 1, 2, 3). The results clearly demonstrate that our method achieves significantly lower absolute errors than the DTM across all evaluated points (from 0.1 to 1.0). For instance, at ϰ = 0.1, our method with μ = 1 yields an error of 4.85 × 10⁻¹⁷, substantially smaller than the DTM's error of 2.10 × 10⁻⁶. This trend persists throughout Table 1, with our method consistently outperforming the DTM in accuracy, particularly for higher values of μ. The findings demonstrate both the accuracy and the efficiency of the suggested technique for the problem in Example 1, showcasing its superiority over the traditional DTM approach. Figure 1 compares the exact and approximate solutions for Example 1 at N = 18, ν = −65, and μ = 4, obtained using the RJGM. The graph depicts the high accuracy of the approximate solution, demonstrating the precision and reliability of the RJGM in solving the problem in Example 1.
Example 2. 
Consider the following problem (see [11]):
$$Y^{(2)}(ϰ) - 2Y^{(1)}(ϰ) + 2Y(ϰ) = ϰ e^{ϰ}, \qquad ϰ \in [0,1], \qquad Y(0) = 0, \quad Y^{(1)}(0) = 1,$$
whose analytical solution is $Y(ϰ) = ϰ e^{ϰ}$.
In Example 2, Table 2 compares the maximum absolute error between the Romanovski–Jacobi tau method (RJTM) and RJGM, which is our proposed method. RJGM demonstrates a clear advantage, particularly with larger N values, indicating its superior accuracy in approximating solutions. This advantage highlights the effectiveness of the Galerkin method (RJGM) over the tau method (RJTM). Table 2 also illustrates the sensitivity of both methods to variations in μ and ν values. In Example 2, Table 3 compares the computational time for the RJTM method and RJGM, which is our proposed method. It is evident that RJGM requires less computational time than RJTM, indicating its higher efficiency. This efficiency makes RJGM the preferable choice when fast and accurate solutions are required. Table 3 also demonstrates the influence of μ and ν values on the computational time for both methods.
Example 3. 
Examine the next problem [24]:
$$Y_1^{(1)} + Y_2^{(1)} + Y_1 + Y_2 = 1, \qquad Y_2^{(1)} - 2Y_1 - Y_2 = 0, \qquad Y_1(0) = 0, \quad Y_2(0) = 1,$$
whose exact solutions are $Y_1(ϰ) = e^{-ϰ} - 1$ and $Y_2(ϰ) = 2 - e^{-ϰ}$, $0 \le ϰ \le 1$.
Table 4 and Table 5 compare the absolute errors $E_1$ and $E_2$ for $Y_1(ϰ)$ and $Y_2(ϰ)$ of the system in Example 3 using the differential transformation method (DTM) [25], the Bessel collocation method (BCM) [24], and our proposed method. The results show that our method achieves significantly lower errors than both DTM and BCM across all $ϰ_i$ values, demonstrating its superior accuracy and reliability.
Example 4. 
We examine the nonlinear Lane–Emden-type equation [26]:
$$Y^{(2)}(ϰ) + \frac{2}{ϰ}\, Y^{(1)}(ϰ) = -4\big(2e^{Y} + e^{Y/2}\big), \qquad ϰ \in [0,1], \qquad Y(0) = 0, \quad Y^{(1)}(0) = 0, \tag{68}$$
whose analytical solution is $Y(ϰ) = -2\ln(1 + ϰ^{2})$.
Table 6 provides a comparative analysis of the absolute errors obtained by various methods for solving Example 4. The results clearly highlight the superior performance of the proposed Romanovski–Jacobi collocation method (RJCM) over Laguerre wavelets (LW) [27], Hermite wavelets (HW) [28], and the wavelet collocation (WC) method [26]. Notably, RJCM achieves remarkably low errors, reaching the order of 10⁻⁸, indicating exceptional numerical stability and precision. This high accuracy stems from the effective use of the orthogonal Romanovski–Jacobi basis and its suitability for representing the solution behavior. These findings confirm the robustness and efficiency of RJCM in solving such differential problems.
Example 5. 
We examine the following [29,30]:
$$\frac{∂Y(ϰ,τ)}{∂τ} + \frac{∂Y(ϰ,τ)}{∂ϰ} + Y(ϰ,τ) = -2\sin(ϰ+τ) + \cos(ϰ+τ), \qquad ϰ \in [0,1], \quad τ \in [0,1], \tag{69}$$
accompanied by ICs for the time and spatial variables,
$$Y(ϰ,0) = \cos(ϰ), \quad ϰ \in [0,1]; \qquad Y(0,τ) = \cos(τ), \quad τ \in [0,1].$$
The exact solution is
$$Y(ϰ,τ) = \cos(ϰ+τ).$$
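A similar SymPy check (illustrative only) confirms the exact solution of Example 5 against the equation and its initial/boundary data:

```python
# Verify the exact solution of Example 5 symbolically.
import sympy as sp

x, t = sp.symbols('x t')
Y = sp.cos(x + t)
residual = sp.diff(Y, t) + sp.diff(Y, x) + Y - (-2 * sp.sin(x + t) + sp.cos(x + t))
print(sp.simplify(residual))          # 0
print(Y.subs(t, 0), Y.subs(x, 0))     # cos(x), cos(t)
```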
Table 7 presents a comparison of the absolute errors for Example 5 using different methods, including the wavelet collocation method (WCM) [30], the Bernstein operational matrix (BOM) [29], and the proposed Romanovski–Jacobi Galerkin method (RJGM). The proposed RJGM was tested for two values of μ (μ = 1 and μ = 3). From Table 7, it is evident that the RJGM provides significantly higher accuracy compared to the other methods. For instance, at the point (0.1, 0.1), the absolute error of the RJGM was 4.336 × 10⁻¹⁹ for both μ = 1 and μ = 3, while the errors for the other methods were much larger, such as 9.120 × 10⁻⁵ for Legendre wavelet collocation (LWC) and 4.530 × 10⁻⁴ for Chebyshev wavelet collocation (CWC). This indicates that the RJGM is more effective in reducing numerical errors and achieving high precision, regardless of the value of μ. Additionally, at other points such as (0.2, 0.2) and (0.3, 0.3), the absolute errors of the RJGM were of the order of 10⁻¹⁸ or less for both μ = 1 and μ = 3, further confirming its superior accuracy compared to traditional methods. These results reinforce the effectiveness of the RJGM in solving first-order PDEs. In addition, Table 8 illustrates the maximum absolute errors for Example 5 using the RJGM at ν = −50 for varying values of N and μ. The results demonstrate the rapid convergence and high precision of the RJGM, with errors decreasing to the order of 10⁻¹⁷ or lower as N increases. The method's performance is consistent across different values of μ, with μ = 2 yielding the smallest error at N = 20. These findings underscore the effectiveness of the RJGM in achieving high accuracy and numerical efficiency for solving first-order PDEs.
Moreover, Figure 2 displays space–time graphs illustrating the absolute error functions for Example 5 with different values of N, specifically N = 10, 12, 14, and 16, using parameters μ = 4 and ν = −50. These graphs show how the error is distributed across both the spatial and temporal domains, providing insight into the accuracy of the proposed method. A clear trend emerges: as N increases, the errors diminish considerably, reinforcing the method's reliability in terms of accuracy and convergence. Additionally, Figure 3 shows the space–time graphs of the approximate solution (left) and its absolute error function (right) for Example 5, with μ = 4, ν = −65, and N = 24. The approximate solution exhibits smooth and precise behavior, while the corresponding absolute error remains low, further substantiating the accuracy of the proposed approach. These findings affirm the efficiency of the method in handling complex DEs.
Example 6 
([21]). We investigate the following:
$$\frac{∂^{2} Y(ϰ,τ)}{∂ϰ^{2}} - 3\frac{∂^{2} Y(ϰ,τ)}{∂ϰ∂τ} + \frac{∂^{2} Y(ϰ,τ)}{∂τ^{2}} = -3e^{τ}\cos(ϰ), \qquad ϰ \in [0,1], \quad τ \in [0,1], \tag{70}$$
formulated under the constraints of ICs in time and space,
$$Y(ϰ,0) = \sin(ϰ), \quad \frac{∂Y(ϰ,0)}{∂τ} = \sin(ϰ), \qquad Y(0,τ) = 0, \quad \frac{∂Y(0,τ)}{∂ϰ} = e^{τ}.$$
The exact solution is represented as follows:
$$Y(ϰ,τ) = e^{τ} \sin(ϰ).$$
Table 9 compares the absolute errors for Example 6 between the proposed RJGM at N = 24 and ν = −55 and the Bernoulli Matrix Method (BMM) [21]. The results demonstrate that the RJGM achieves significantly lower errors, often reducing them to zero or near-zero values across various μ (1, 2, 3) and points (ϰ, τ). This highlights the superior accuracy and reliability of the proposed method in solving DEs. Figure 4 presents the space–time graphs of the approximate solution (left) and its absolute error function (right) for Example 6 with parameters μ = 1, ν = −65, and N = 28. The left graph illustrates the smooth and accurate behavior of the approximate solution, while the right graph demonstrates the minimal absolute errors, further confirming the precision of the proposed method.
Remark 6. 
To demonstrate the use of the proposed schemes, Algorithms 1–4 outline the steps for solving problems (67), (68), (69), and (70) of Examples 1, 4, 5, and 6, respectively. The Mathematica application, version 9, was used to perform the necessary computations.
Algorithm 1 RJGM algorithm for Example 1
Step 1. Input N, m, f, and η_i (i = 1, 2, …, m).
Step 2. Define the polynomials R_i^{μ,ν}(ϰ), the basis φ_i(ϰ), and the vector P, and compute the elements of the (N+1) × (N+1) matrices C, F, and D^l (l = 1, 2, …, m).
Step 3. Assemble the equation system as defined in (11).
Step 4. Use Mathematica's built-in numerical solver to solve the system obtained in Step 3.
Step 5. Evaluate Y_N(ϰ) defined in (9).   (In the case of homogeneous ICs.)
Step 6. Evaluate Y_N(ϰ) using Equation (17).   (In the case of nonhomogeneous ICs.)
Step 7. Evaluate ‖e_N‖ defined in (41).
Algorithm 2 RJGM algorithm for Example 4
Step 1. Input N, m, f, and η_i (i = 1, 2, …, m).
Step 2. Define the polynomials R_i^{μ,ν}(ϰ), the basis φ_i(ϰ), and the vector P, and compute the elements of the (N+1) × (N+1) matrices C, F, and D^l (l = 1, 2, …, m).
Step 3. Form the residual R_N(ϰ) defined in Remark 1.
Step 4. List R_N(ϰ_r) = 0, r = 0, 1, …, N, as defined in (16).
Step 5. Use Mathematica's built-in numerical solver to solve the system obtained in Step 4.
Step 6. Evaluate Y_N(ϰ) defined in (9).   (In the case of homogeneous ICs.)
Step 7. Evaluate Y_N(ϰ) using Equation (17).   (In the case of nonhomogeneous ICs.)
Step 8. Evaluate ‖e_N‖ defined in (41).
Algorithm 3 RJGM algorithm for Example 5
Step 1. Input N, σ_1, σ_2, and F.
Step 2. Define the polynomials R_i^{μ,ν}(ϰ), the basis S_i(ϰ), and the vector A, and compute the elements of the N × N matrices Ĉ and D̂.
Step 3. Compute D̂ ⊗ Ĉ, Ĉ ⊗ D̂, and D̂ ⊗ D̂.
Step 4. Assemble the equation system as defined in (28).
Step 5. Use Mathematica's built-in numerical solver to solve the system obtained in Step 4.
Step 6. Evaluate Y_N(ϰ,τ) defined in (26).
Step 7. Evaluate E_N(ϰ,τ) and ‖E_N‖ defined in (51) and (52), respectively.
Algorithm 4 RJGM algorithm for Example 6
Step 1. Input N, H, and ξ_r (r = 1, 2, …, 6).
Step 2. Define the polynomials R_i^{μ,ν}(ϰ), the basis L_i(ϰ), and the vector E, and compute the elements of the (N−1) × (N−1) matrices P, Q, and L.
Step 3. Compute L ⊗ Q, P ⊗ P, Q ⊗ L, P ⊗ Q, Q ⊗ P, and Q ⊗ Q.
Step 4. Assemble the equation system as defined in (35).
Step 5. Use Mathematica's built-in numerical solver to solve the system obtained in Step 4.
Step 6. Evaluate Y_N(ϰ,τ) defined in (33).   (In the case of homogeneous ICs.)
Step 7. Evaluate Y_N(ϰ,τ) using Equation (36).   (In the case of nonhomogeneous ICs.)
Step 8. Evaluate E_N(ϰ,τ) and ‖E_N‖ defined in (51) and (52), respectively.

9. Conclusions

This study introduced an innovative spectral Galerkin approach using RJPs to efficiently solve various types of DEs with ICs. The proposed method exploited the orthogonality and completeness of RJPs to construct trial functions that inherently satisfied the ICs, leading to a simplified numerical implementation and enhanced computational stability. Through various numerical examples, the method demonstrated remarkable accuracy and convergence, significantly outperforming existing spectral techniques in terms of error reduction and computational efficiency. The results validated the robustness of the approach and highlighted its potential applicability to a wide range of engineering and scientific problems involving both ODEs and PDEs. In conclusion, the findings confirmed that the RJPs-based SGM serves as a powerful tool for obtaining accurate numerical solutions for these equations. Future research could explore extending this framework to more complex DE models, further enhancing its applicability and effectiveness.

Author Contributions

Conceptualization, R.M.H. and M.A.A.; Methodology, R.M.H. and M.A.A.; Software, R.M.H.; Validation, R.M.H., M.A.A. and H.M.A.; Formal analysis, R.M.H. and H.M.A.; Investigation, M.A.A. and H.M.A.; Writing—original draft, R.M.H., M.A.A. and H.M.A.; Visualization, R.M.H. and M.A.A.; Supervision, H.M.A.; Funding acquisition, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2502).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Antonov, D.V.; Shchepakina, E.A.; Sobolev, V.A.; Starinskaya, E.M.; Terekhov, V.V.; Strizhak, P.A.; Sazhin, S.S. A new solution to a weakly non-linear heat conduction equation in a spherical droplet: Basic idea and applications. Int. J. Heat Mass Transf. 2024, 219, 124880. [Google Scholar] [CrossRef]
  2. Beybalaev, V.D.; Aliverdiev, A.A.; Hristov, J. Transient Heat Conduction in a Semi-Infinite Domain with a Memory Effect: Analytical Solutions with a Robin Boundary Condition. Fractal Fract. 2023, 7, 770. [Google Scholar] [CrossRef]
  3. Ali, K.K.; Sucu, D.Y.; Karakoc, S.B.G. Two effective methods for solution of the Gardner–Kawahara equation arising in wave propagation. Math. Comput. Simul. 2024, 220, 192–203. [Google Scholar] [CrossRef]
  4. Gao, Q.; Yan, B.; Zhang, Y. An accurate method for dispersion characteristics of surface waves in layered anisotropic semi-infinite spaces. Comput. Struct. 2023, 276, 106956. [Google Scholar] [CrossRef]
  5. Daum, S.; Werner, R. A novel feasible discretization method for linear semi-infinite programming applied to basket option pricing. Optimization 2011, 60, 1379–1398. [Google Scholar] [CrossRef]
  6. Fazio, R. Free Boundary Formulation for Boundary Value Problems on Semi-Infinite Intervals: An up to Date Review. arXiv 2020, arXiv:2011.07723. [Google Scholar]
  7. Fazio, R.; Jannelli, A. Finite difference schemes on quasi-uniform grids for BVPs on infinite intervals. J. Comput. Appl. Math. 2014, 269, 14–23. [Google Scholar] [CrossRef]
  8. Givoli, D.; Neta, B.; Patlashenko, I. Finite element analysis of time-dependent semi-infinite wave-guides with high-order boundary treatment. Int. J. Numer. Methods Eng. 2003, 58, 1955–1983. [Google Scholar] [CrossRef]
  9. Huang, W.; Yang, J.; Sladek, J.; Sladek, V.; Wen, P. Semi-infinite structure analysis with bimodular materials with infinite element. Materials 2022, 15, 641. [Google Scholar] [CrossRef]
  10. Abo-Gabal, H.; Zaky, M.A.; Hafez, R.M.; Doha, E.H. On Romanovski–Jacobi polynomials and their related approximation results. Numer. Methods Partial Differ. Equ. 2020, 36, 1982–2017. [Google Scholar] [CrossRef]
  11. Youssri, Y.H.; Zaky, M.A.; Hafez, R.M. Romanovski-Jacobi spectral schemes for high-order differential equations. Appl. Numer. Math. 2024, 198, 148–159. [Google Scholar] [CrossRef]
  12. Hafez, R.; Haiour, M.; Tantawy, S.; Alburaikan, A.; Khalifa, H. A Comprehensive Study on Solving Multitype Differential Equations Using Romanovski-Jacobi Matrix Methods. Fractals 2025. [Google Scholar] [CrossRef]
  13. Nataj, S.; He, Y. Anderson acceleration for nonlinear PDEs discretized by space–time spectral methods. Comput. Math. Appl. 2024, 167, 199–206. [Google Scholar] [CrossRef]
  14. Yan, Q.; Jiang, S.W.; Harlim, J. Spectral methods for solving elliptic PDEs on unknown manifolds. J. Comput. Phys. 2023, 486, 112132. [Google Scholar] [CrossRef]
  15. Ahmed, H.M. Enhanced shifted Jacobi operational matrices of integrals: Spectral algorithm for solving some types of ordinary and fractional differential equations. Bound. Value Probl. 2024, 2024, 75. [Google Scholar] [CrossRef]
  16. Ahmed, H.M. Solutions of 2nd-order linear differential equations subject to Dirichlet boundary conditions in a Bernstein polynomial basis. J. Egypt. Math. Soc. 2014, 22, 227–237. [Google Scholar] [CrossRef]
  17. Masjed-Jamei, M. Three finite classes of hypergeometric orthogonal polynomials and their application in functions approximation. Integr. Transf. Spec. Funct. 2002, 13, 169–190. [Google Scholar] [CrossRef]
  18. Izadi, M.; Veeresha, P.; Adel, W. The fractional-order marriage–divorce mathematical model: Numerical investigations and dynamical analysis. Eur. Phys. J. Plus 2024, 139, 205. [Google Scholar] [CrossRef]
  19. Abdelkawy, M.A.; Izadi, M.; Adel, W. Robust and accurate numerical framework for multi-dimensional fractional-order telegraph equations using Jacobi/Jacobi-Romanovski spectral technique. Bound. Value Probl. 2024, 2024, 131. [Google Scholar] [CrossRef]
  20. Lakshmikantham, V.; Sen, S.K. Computational Error and Complexity in Science and Engineering; Elsevier: Melbourne, FL, USA, 2005. [Google Scholar]
  21. Toutounian, F.; Tohidi, E. A new Bernoulli matrix method for solving second order linear partial differential equations with the convergence analysis. Appl. Math. Comput. 2013, 223, 298–310. [Google Scholar] [CrossRef]
  22. Jeffrey, A.; Dai, H.H. Handbook of Mathematical Formulas and Integrals; Elsevier: London, UK, 2008. [Google Scholar]
  23. Hassan, I.H.A.H. Differential transformation technique for solving higher-order initial value problems. Appl. Math. Comput. 2004, 154, 299–311. [Google Scholar]
  24. Yuzbasi, S. An efficient algorithm for solving multi-pantograph equation systems. Comput. Math. Appl. 2012, 64, 589–603. [Google Scholar] [CrossRef]
  25. Hassan, I.H.A.H. Application to differential transformation method for solving systems of differential equations. Appl. Math. Model 2008, 32, 2552–2559. [Google Scholar] [CrossRef]
  26. Gireesha, B.J.; Gowtham, K.J. Efficient hypergeometric wavelet approach for solving lane-emden equations. J. Comput. Sci. 2024, 82, 102392. [Google Scholar] [CrossRef]
  27. Zhou, F.; Xu, X. Numerical solutions for the linear and nonlinear singular boundary value problems using Laguerre wavelets. Adv. Differ. Equ. 2016, 2016, 17. [Google Scholar] [CrossRef]
  28. Shiralashetti, S.C.; Kumbinarasaiah, S. Hermite wavelets operational matrix of integration for the numerical solution of nonlinear singular initial value problems. Alex. Eng. J. 2018, 57, 2591–2600. [Google Scholar] [CrossRef]
  29. Tohidi, E.; Toutounian, F. Convergence analysis of bernoulli matrix approach for onedimensional matrix hyperbolic equations of the first order. Comput. Math. Appl. 2014, 68, 1–12. [Google Scholar] [CrossRef]
  30. Singh, S.; Patel, V.K.; Singh, V.K. Application of wavelet collocation method for hyperbolic partial differential equations via matrices. Appl. Math. Comput. 2018, 320, 407–424. [Google Scholar] [CrossRef]
Figure 1. Exact versus computed values for Example 1 at N = 18, ν = 65, and μ = 4.
Figure 2. Absolute error functions for Example 5 at various choices of N with μ = 4 and ν = 50.
Figure 3. Approximate solution (left) and its absolute error function (right) for Example 5 with μ = 4, ν = 65, and N = 24.
Figure 4. The space–time graphs of the approximate solution (left) and its absolute error function (right) for Example 6 with μ = 1, ν = 65, and N = 28.
Table 1. Absolute error comparison between DTM [23] and RJGM (N = 18, ν = 65) for Example 1.

ϰ   | DTM [23]      | RJGM, μ = 1    | RJGM, μ = 2    | RJGM, μ = 3
0.1 | 2.10 × 10^-6  | 4.85 × 10^-17  | 2.42 × 10^-17  | 1.78 × 10^-16
0.2 | 3.70 × 10^-6  | 1.04 × 10^-17  | 3.12 × 10^-17  | 4.89 × 10^-16
0.3 | 7.00 × 10^-6  | 2.77 × 10^-17  | 6.93 × 10^-17  | 8.11 × 10^-16
0.4 | 4.40 × 10^-6  | 9.71 × 10^-17  | 1.38 × 10^-16  | 1.16 × 10^-15
0.5 | 9.20 × 10^-6  | 4.27 × 10^-15  | 3.05 × 10^-16  | 1.85 × 10^-15
0.6 | 4.60 × 10^-6  | 2.24 × 10^-14  | 5.55 × 10^-17  | 2.22 × 10^-15
0.7 | 6.90 × 10^-6  | 9.21 × 10^-15  | 2.37 × 10^-14  | 8.04 × 10^-15
0.8 | 3.70 × 10^-6  | 8.86 × 10^-13  | 2.95 × 10^-14  | 1.23 × 10^-14
0.9 | 2.01 × 10^-6  | 7.92 × 10^-12  | 4.38 × 10^-14  | 6.10 × 10^-13
1.0 | 3.44 × 10^-6  | 3.97 × 10^-11  | 6.78 × 10^-13  | 6.77 × 10^-13
Table 2. Comparison of the maximum absolute error for different values of μ, ν, and N for Example 2.

N  | RJTM [11], μ = 2, ν = 50 | RJGM, μ = 2, ν = 50 | RJTM [11], μ = 3, ν = 65 | RJGM, μ = 3, ν = 65 | RJTM [11], μ = 4, ν = 75 | RJGM, μ = 4, ν = 75
8  | 5.11 × 10^-3  | 1.21 × 10^-5   | 6.92 × 10^-3  | 7.29 × 10^-5   | 7.48 × 10^-3  | 1.24 × 10^-4
12 | 3.15 × 10^-8  | 9.36 × 10^-10  | 1.50 × 10^-6  | 1.68 × 10^-10  | 2.41 × 10^-6  | 3.42 × 10^-10
16 | 1.19 × 10^-11 | 3.89 × 10^-13  | 9.80 × 10^-12 | 4.71 × 10^-15  | 1.82 × 10^-11 | 1.13 × 10^-15
20 | 2.96 × 10^-13 | 4.68 × 10^-14  | 5.32 × 10^-15 | 5.10 × 10^-19  | 7.10 × 10^-15 | 3.00 × 10^-20
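The near-linear decrease of the error exponent with N in Table 2 is the signature of spectral convergence. As a minimal illustration (not the authors' code), the following Python sketch fits a straight line to log10 of the RJGM errors tabulated above for μ = 3, ν = 65; the fitted slope estimates how many decimal digits of accuracy each added basis function contributes.

```python
import numpy as np

# Illustrative sketch only: data taken from the mu = 3, nu = 65 RJGM column
# of Table 2. A near-linear trend of log10(E) in N indicates exponential
# (spectral) convergence, E(N) ~ C * 10**(r * N) with r < 0.
N = np.array([8.0, 12.0, 16.0, 20.0])
E = np.array([7.29e-5, 1.68e-10, 4.71e-15, 5.10e-19])

r, log10C = np.polyfit(N, np.log10(E), 1)  # least-squares line in semi-log scale
print(f"E(N) ~ 10**({log10C:.2f}) * 10**({r:.2f} * N)")
# r is about -1.18 here: roughly one extra correct digit per added mode.
```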
Table 3. Comparison of the computational time (CPU time in seconds) for different values of μ, ν, and N for Example 2.

N  | RJTM [11], μ = 2, ν = 50 | RJGM, μ = 2, ν = 50 | RJTM [11], μ = 3, ν = 65 | RJGM, μ = 3, ν = 65 | RJTM [11], μ = 4, ν = 75 | RJGM, μ = 4, ν = 75
8  | 11.845 | 7.187 | 20.190 | 12.640 | 30.344 | 17.811
12 | 12.016 | 7.483 | 20.752 | 13.268 | 30.516 | 17.920
16 | 12.375 | 7.905 | 20.752 | 13.547 | 30.844 | 18.955
20 | 12.907 | 8.953 | 21.252 | 14.408 | 31.281 | 20.906
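For readers reproducing timings such as those in Table 3, a minimal sketch of a wall-clock benchmark is given below. It is not the authors' benchmarking code, and `solve_rjgm` is a hypothetical stand-in for an RJGM solver taking the truncation N and the parameters μ and ν.

```python
import time

# Illustrative sketch: best-of-n wall-clock timing of a single solver call.
# `solve_rjgm` is a hypothetical placeholder, not part of the paper.
def best_time(solve, n_runs=5, **kwargs):
    """Return the best-of-n wall-clock time in seconds for solve(**kwargs)."""
    best = float("inf")
    for _ in range(n_runs):
        t0 = time.perf_counter()
        solve(**kwargs)                      # one full solve
        best = min(best, time.perf_counter() - t0)
    return best

# Hypothetical usage:
# cpu = best_time(solve_rjgm, N=8, mu=2, nu=50)
```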
Table 4. Absolute error E_1 comparison for Y_1(ϰ) of Equation (3) at N = 12, ν = 65, and μ = 4.

ϰ_i | DTM [25]       | BCM [24]       | Present Method
0.2 | 1.5575 × 10^-2 | 2.8460 × 10^-8 | 1.6088 × 10^-17
0.4 | 5.1209 × 10^-2 | 1.7820 × 10^-8 | 4.2019 × 10^-17
0.6 | 1.0150 × 10^-1 | 1.2668 × 10^-8 | 1.1546 × 10^-15
0.8 | 1.6630 × 10^-1 | 3.3538 × 10^-8 | 2.6537 × 10^-14
1.0 | 2.3351 × 10^-1 | 6.8657 × 10^-7 | 6.1027 × 10^-13
Table 5. Absolute error E_2 comparison for Y_2(ϰ) of Equation (3) at N = 12, ν = 65, and μ = 4.

ϰ_i | DTM [25]       | BCM [24]       | Present Method
0.2 | 2.3262 × 10^-3 | 2.8460 × 10^-8 | 9.0142 × 10^-17
0.4 | 1.6867 × 10^-2 | 1.7820 × 10^-8 | 5.5165 × 10^-17
0.6 | 5.4499 × 10^-2 | 1.2668 × 10^-8 | 9.4298 × 10^-16
0.8 | 1.3194 × 10^-1 | 3.3538 × 10^-8 | 2.7109 × 10^-14
1.0 | 2.8038 × 10^-1 | 6.8657 × 10^-7 | 6.5028 × 10^-13
Table 6. Absolute error comparison for Example 4: LW [27], HW [28], and WC [26] versus RJGM at N = 13.

ϰ   | LW [27]          | HW [28]         | WC [26]       | RJGM, μ = 2, ν = 65 | RJGM, μ = 5, ν = 70
0.0 | 0                | –               | –             | 3.37 × 10^-18       | 1.19 × 10^-17
0.2 | 6.99131 × 10^-7  | 1.2130 × 10^-5  | 3.73 × 10^-7  | 9.84 × 10^-13       | 1.03 × 10^-11
0.4 | 1.06726 × 10^-6  | 7.8887 × 10^-6  | 5.06 × 10^-8  | 7.26 × 10^-11       | 2.01 × 10^-11
0.6 | 8.97718 × 10^-6  | 1.0604 × 10^-7  | 1.86 × 10^-7  | 4.80 × 10^-9        | 5.09 × 10^-10
0.8 | 1.83900 × 10^-5  | 4.2378 × 10^-6  | 2.23 × 10^-7  | 1.34 × 10^-7        | 3.54 × 10^-7
Table 7. Absolute error comparison for Example 5: WCM [30] (LWC and CWC variants) and BOM [29] versus RJGM at N = 20, ν = 50.

(ϰ, τ)     | LWC [30]       | CWC [30]       | BOM [29]       | RJGM, μ = 1     | RJGM, μ = 3
(0.1, 0.1) | 9.120 × 10^-5  | 4.530 × 10^-4  | 1.170 × 10^-13 | 4.336 × 10^-19  | 4.336 × 10^-19
(0.2, 0.2) | 3.360 × 10^-5  | 2.070 × 10^-4  | 1.317 × 10^-12 | 0               | 1.734 × 10^-18
(0.3, 0.3) | 1.860 × 10^-5  | 6.920 × 10^-5  | 2.026 × 10^-12 | 0               | 6.938 × 10^-18
(0.4, 0.4) | 2.420 × 10^-5  | 1.390 × 10^-4  | 1.947 × 10^-12 | 1.387 × 10^-17  | 0
(0.5, 0.5) | 8.300 × 10^-5  | 2.640 × 10^-5  | 1.126 × 10^-12 | 0               | 0
(0.6, 0.6) | 1.380 × 10^-4  | 7.620 × 10^-5  | 7.800 × 10^-14 | 5.551 × 10^-17  | 0
(0.7, 0.7) | 9.900 × 10^-5  | 2.100 × 10^-5  | 1.174 × 10^-12 | 1.665 × 10^-16  | 0
(0.8, 0.8) | 1.200 × 10^-6  | 2.010 × 10^-5  | 1.662 × 10^-12 | 1.110 × 10^-16  | 1.110 × 10^-16
(0.9, 0.9) | 9.000 × 10^-5  | 3.000 × 10^-4  | 1.166 × 10^-12 | 1.110 × 10^-16  | 2.220 × 10^-16
Table 8. The maximum absolute errors at ν = 50 for Example 5.

N  | μ = 1           | μ = 2           | μ = 3
11 | 1.9466 × 10^-3  | 1.6765 × 10^-3  | 4.0894 × 10^-6
14 | 5.7427 × 10^-9  | 1.3882 × 10^-9  | 6.2780 × 10^-11
17 | 1.1158 × 10^-14 | 6.4918 × 10^-16 | 1.4340 × 10^-16
20 | 2.4843 × 10^-17 | 1.3429 × 10^-18 | 1.6878 × 10^-17
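The entries of Table 8 are maximum absolute errors sampled over the space–time domain. As a minimal sketch of this metric (assuming a uniform sampling grid; the grid actually used is not specified here), the Python fragment below evaluates it for two callables, where `u_approx` and `u_exact` are hypothetical stand-ins for the RJGM approximation and the known exact solution of Example 5.

```python
import numpy as np

# Illustrative sketch of the maximum-absolute-error metric behind Table 8.
# `u_approx` and `u_exact` are hypothetical callables, not the paper's code.
def max_abs_error(u_approx, u_exact, n_pts=201):
    x = np.linspace(0.0, 1.0, n_pts)        # spatial grid on [0, 1]
    t = np.linspace(0.0, 1.0, n_pts)        # temporal grid on [0, 1]
    X, T = np.meshgrid(x, t)
    return np.max(np.abs(u_approx(X, T) - u_exact(X, T)))

# Sanity check with identical stand-ins (error is exactly zero):
# f = lambda x, t: np.sin(np.pi * x) * np.exp(-t)
# assert max_abs_error(f, f) == 0.0
```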
Table 9. Absolute error comparison for Example 6: BMM [21] versus RJGM at N = 24, ν = 55.

(ϰ, τ)     | BMM [21]       | RJGM, μ = 1     | RJGM, μ = 2     | RJGM, μ = 3
(0, 0)     | 8.200 × 10^-17 | 0               | 0               | 0
(0.1, 0.1) | 7.510 × 10^-16 | 9.048 × 10^-18  | 9.048 × 10^-18  | 9.048 × 10^-18
(0.2, 0.2) | 9.070 × 10^-16 | 1.154 × 10^-17  | 1.160 × 10^-17  | 1.165 × 10^-17
(0.3, 0.3) | 2.640 × 10^-16 | 1.647 × 10^-17  | 1.647 × 10^-17  | 1.647 × 10^-17
(0.4, 0.4) | 7.700 × 10^-16 | 7.459 × 10^-17  | 7.372 × 10^-17  | 7.459 × 10^-17
(0.5, 0.5) | 1.503 × 10^-15 | 1.040 × 10^-17  | 1.040 × 10^-17  | 1.040 × 10^-17
(0.6, 0.6) | 1.451 × 10^-15 | 1.387 × 10^-17  | 1.387 × 10^-17  | 1.387 × 10^-17
(0.7, 0.7) | 5.150 × 10^-16 | 1.387 × 10^-17  | 2.775 × 10^-17  | 0
(0.8, 0.8) | 4.800 × 10^-16 | 0               | 0               | 0
(0.9, 0.9) | 1.023 × 10^-15 | 5.551 × 10^-17  | 0               | 8.326 × 10^-17
(1.0, 1.0) | 7.690 × 10^-16 | 1.434 × 10^-19  | 2.991 × 10^-19  | 1.239 × 10^-19