Article

A Collocation Approach for the Nonlinear Fifth-Order KdV Equations Using Certain Shifted Horadam Polynomials

by
Waleed Mohamed Abd-Elhameed
1,*,
Omar Mazen Alqubori
2 and
Ahmed Gamal Atta
3
1
Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt
2
Department of Mathematics and Statistics, College of Science, University of Jeddah, Jeddah 23218, Saudi Arabia
3
Department of Mathematics, Faculty of Education, Ain Shams University, Roxy, Cairo 11341, Egypt
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(2), 300; https://doi.org/10.3390/math13020300
Submission received: 9 December 2024 / Revised: 12 January 2025 / Accepted: 14 January 2025 / Published: 17 January 2025
(This article belongs to the Special Issue Exact Solutions and Numerical Solutions of Differential Equations)

Abstract:
This paper proposes a numerical algorithm for the nonlinear fifth-order Korteweg–de Vries (KdV) equations. This class of equations is known for its significance in modeling various complex wave phenomena in physics and engineering. The approximate solutions are expressed in terms of certain shifted Horadam polynomials. A theoretical background for these polynomials is first introduced. The derivatives of these polynomials and their operational matrices of derivatives are then established and used, via the typical collocation method, to transform the nonlinear fifth-order KdV equation, governed by its underlying conditions, into a system of nonlinear algebraic equations whose solution yields the approximate solutions. This paper also includes a rigorous convergence analysis of the proposed shifted Horadam expansion. To validate the proposed method, we present several numerical examples illustrating its accuracy and effectiveness.

1. Introduction

Nonlinear differential equations (DEs) are vital since they can represent complicated real-world phenomena in numerous branches of science and engineering. Complex systems can be better understood by studying nonlinear DEs, which, in contrast to linear ones, can display a wide range of behaviors, including multistability, chaos, and bifurcations. Many models in different disciplines, such as electrodynamics, neuroscience, epidemiology, mechanical engineering, fluid dynamics, and economics, can be described by nonlinear DEs; see, for example, [1,2,3]. Since most of these models lack exact solutions, numerical methods are crucial for treating these nonlinear DEs. For example, the authors of [4] followed a numerical approach for treating the Black–Scholes model. The authors of [5] used a collocation approach to solve the FitzHugh–Nagumo nonlinear DEs in neuroscience. Another numerical approach was followed in [6] to solve the nonlinear equations of Emden–Fowler models. Nonlinear thermal diffusion problems were handled in [7]. In [8], a numerical scheme for solving a stochastic nonlinear advection-diffusion dynamical model was presented. In [9], the authors employed Petrov–Galerkin methods for treating some linear and nonlinear partial DEs. The authors of [10] used a specific difference scheme for some nonlinear fractional DEs. The authors of [11] followed a collocation procedure for treating the nonlinear fractional FitzHugh–Nagumo equation.
An important nonlinear partial differential equation that describes the motion of solitons (individual waves) in shallow water and other systems is the Korteweg–de Vries (KdV) equation. Over the years, the KdV equation has undergone several revisions to account for non-local interactions, dissipation effects, higher-order terms, and various physical phenomena. Many scientific fields have used these modifications; nonlinear optics, fluid dynamics, and plasma physics are just a few examples. A few notable variations of the KdV equation, together with the scientific domains that have used them, are the standard KdV equation, the modified KdV equation, the generalized KdV equation, the KdV–Burgers equation, and the KdV–Kawahara equation. Furthermore, we mention three specific KdV-type problems and their applications:
  • The Caudrey–Dodd–Gibbon problem. This problem has applications in shallow water waves, nonlinear optics, and plasma physics; see [12].
  • The Sawada–Kotera problem has applications in hydrodynamics, elasticity, plasma physics, and soliton theory; see [13].
  • The Kaup-Kuperschmidt problem has applications in fluid mechanics, biological wave propagation, plasma physics, and quantum field theory; see [14].
Numerous contributions have focused on their handling due to the significance of the various KdV-type equations. For example, in [15], analytical and numerical solutions for the fifth-order KdV equation were presented. In [16], some hyperelliptic solutions of certain modified KdV equations were proposed. A numerical study for the stochastic KdV equation was presented in [17]. To treat the generalized Kawahara equation, an operational matrix approach was proposed in [18]. The authors of [19] followed a finite difference approach to handle the fractional KdV equation. Two algorithms were presented in [20] to treat the nonlinear time-fractional Lax’s KdV equation. Another numerical approach was given in [21] for approximating the modified KdV equation. A computational approach was used to handle a higher-order KdV equation in [22]. The time-fractional KdV equation was investigated numerically in [23]. In [24], the method of lines was proposed to solve the KdV equation. A Bernstein polynomial basis was employed in [25] to treat the KdV-type equations.
Special functions are fundamental in the scientific, mathematical, and engineering fields. For examples of the usage of these polynomials in signal processing, quantum mechanics, and physics, one can consult [26,27]. These functions have the potential to solve several types of DEs. For example, the authors of [28] numerically treated the fractional Rayleigh-Stokes problem using certain orthogonal combinations of Chebyshev polynomials. The shifted Fibonacci polynomials were utilized in [29] to treat the fractional Burgers equation. In [30], the authors used Vieta–Fibonacci polynomials to treat certain two-dimensional problems. Other two-dimensional FDEs were handled using Vieta Lucas polynomials in [31]. The authors of [32] used Changhee polynomials to treat a high-dimensional chaotic Lorenz system. In [33], some FDEs were treated using shifted Chebyshev polynomials.
Horadam sequences, named after the mathematician Alwyn Horadam, who initially developed them in the 1960s, generalize several well-known polynomials, such as Fibonacci, Lucas, Pell, and Pell Lucas polynomials. Many authors investigated Horadam sequences of polynomials. For example, the authors in [34] investigated some generalized Horadam polynomials and numbers. Some identities regarding Horadam sequences were developed in [35]. Some subclasses of bi-univalent functions associated with the Horadam polynomials were given in [36]. In [37], some characterizations of periodic generalized Horadam sequences were introduced. An application to specific Horadam sequences in coding theory was presented in [38].
Spectral methods are becoming essential in the applied sciences; see, for instance [39,40], for some of their applications in fields like engineering and fluid dynamics. These methods involve approximating differential and integral equation solutions by expansions of various special functions. The three spectral techniques most frequently employed are the collocation, tau, and Galerkin methods. The type of differential equation and the boundary conditions it governs determine which spectral method is suitable. The three spectral approaches use different trial and test functions. The Galerkin approach selects all basis function members to satisfy the underlying conditions imposed by a specific differential equation, treating the test and trial functions as equivalent. (For a few references, see [41,42,43].) The tau method is easier than the Galerkin method in application since there are no restrictions on selecting the trial and test functions; see, for example, [44,45,46]. The collocation method is the most popular spectral method because it works well with nonlinear DEs and can be used with all kinds of DEs, no matter what the underlying conditions are; see, for example, [47,48,49,50].
We comment here that the motivations for our work are as follows:
  • KdV-type equations are among the most important problems encountered in applied sciences, which motivates us to investigate them using a new approach.
  • Several spectral approaches have been followed to solve KdV-type equations with various orthogonal polynomials as basis functions. The basis functions used in this article are a family of polynomials that are not orthogonal. We hope this article will motivate the application of these polynomials to other problems in the applied sciences.
  • To the best of our knowledge, the specific Horadam sequence of polynomials used in this paper was not previously used in numerical analysis, which provides a compelling reason to introduce and utilize them.
Furthermore, the work’s novelty is due to the following points:
  • We have developed novel simplified formulas for the new sequence of polynomials, including their high-order derivatives and operational matrices of derivatives.
  • This paper presents a new comprehensive study on the convergence analysis of the used Horadam expansion.
The main objectives of this paper can be listed in the following items:
(a)
Introducing a class of shifted Horadam polynomials and developing new essential formulas concerned with them.
(b)
Developing operational matrices of derivatives of the introduced shifted polynomials.
(c)
Analyzing a collocation procedure for solving the nonlinear fifth-order KdV equations.
(d)
Investigating the convergence analysis of the proposed Horadam expansion.
(e)
Verifying our numerical algorithm by presenting some illustrative examples.
This paper is structured as follows: Section 2 gives an overview of Horadam polynomials, their representation, and some particular polynomials of them. Section 3 introduces certain shifted Horadam polynomials and develops some theoretical formulas that will be used to design our numerical algorithm. Section 4 presents a collocation approach for treating the nonlinear fifth-order KdV-type equations. Section 5 discusses the convergence and error analysis of the proposed expansion in more detail. Section 6 presents some illustrative examples and comparisons. Finally, some discussions are given in Section 7.

2. An Overview of Horadam Polynomials and Some Particular Polynomials

Horadam presented a set of generalized polynomials in his seminal work [51]. These polynomials may be generated using the following recursive formula:
W_j(x) = p(x)\,W_{j-1}(x) + q(x)\,W_{j-2}(x), \quad j \ge 2, \qquad W_0(x) = 0, \quad W_1(x) = 1.
The polynomials W j ( x ) can be written in the following Binet’s form:
W_j(x) = \frac{\left(p(x)+\sqrt{p^{2}(x)+4q(x)}\right)^{j} - \left(p(x)-\sqrt{p^{2}(x)+4q(x)}\right)^{j}}{2^{j}\,\sqrt{p^{2}(x)+4q(x)}}, \quad j \ge 0.
The above sequence of polynomials generalizes some well-known polynomials, such as Fibonacci, Pell, Lucas, and Pell–Lucas polynomials.
The standard Fibonacci polynomials can be generated with the following recursive formula:
F_j(x) = x\,F_{j-1}(x) + F_{j-2}(x), \quad j \ge 2, \qquad F_0(x) = 0, \quad F_1(x) = 1.
The standard Fibonacci polynomials, which are special ones of Horadam polynomials, have several extensions. The generalized Fibonacci polynomials are one example of such a generalization; they are derived using the following recursive formula:
F_k^{a,b}(x) = a\,x\,F_{k-1}^{a,b}(x) + b\,F_{k-2}^{a,b}(x), \quad k \ge 2, \qquad F_0^{a,b}(x) = 1, \quad F_1^{a,b}(x) = a\,x.
It is worth noting here that for every k, F k a , b ( x ) is of degree k. These polynomials involve many celebrated sequences, such as Fibonacci, Pell, Fermat, and Chebyshev polynomials of the second kind. More precisely, we have the following expressions:
F_{i+1}(x) = F_i^{1,1}(x), \qquad P_{i+1}(x) = F_i^{2,1}(x),
\mathcal{F}_{i+1}(x) = F_i^{3,-2}(x), \qquad U_i(x) = F_i^{2,-1}(x),
where F_i, P_i, \mathcal{F}_i, and U_i denote, respectively, the Fibonacci, Pell, Fermat, and second-kind Chebyshev polynomials.
Recently, the authors of [29] have developed some new formulas for the shifted Fibonacci polynomials, defined as
F_i^{*}(x) = F_i(2x-1).
In addition, they used these polynomials to solve the fractional Burgers' equation. This paper will introduce specific polynomials of the shifted generalized Fibonacci polynomials, defined as
\theta_m(x) = F_m^{2,-1}(2x-1).
Note that for every m 0 , θ m ( x ) is of degree m.
The following formula is used to generate these polynomials:
\theta_k(x) = 2(2x-1)\,\theta_{k-1}(x) - \theta_{k-2}(x), \quad k \ge 2, \qquad \theta_0(x) = 1, \quad \theta_1(x) = 2(2x-1).
The following section introduces fundamental formulas concerning the introduced polynomials θ m ( x ) .
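The recurrence above is straightforward to implement; the following minimal sketch (assuming only the recurrence and initial values stated above) builds θ_m(x) symbolically with SymPy and confirms that each θ_m has degree m:

```python
import sympy as sp

x = sp.symbols('x')

def theta(m):
    """theta_m(x) via theta_k = 2(2x-1)theta_{k-1} - theta_{k-2},
    with theta_0 = 1 and theta_1 = 2(2x-1)."""
    t_prev, t_curr = sp.Integer(1), 2*(2*x - 1)
    if m == 0:
        return t_prev
    for _ in range(2, m + 1):
        t_prev, t_curr = t_curr, sp.expand(2*(2*x - 1)*t_curr - t_prev)
    return t_curr

print(theta(2))                 # 16*x**2 - 16*x + 3
print(sp.degree(theta(6), x))   # 6
```

The printed degree matches the remark following (7) that θ_m(x) is of degree m.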

3. Some New Formulas Concerned with the Introduced Shifted Polynomials

We will develop new formulas for the specific shifted Horadam polynomials defined in (7). The following two lemmas will present the power form representation and inversion formula for these polynomials, which are pivotal in this paper. Next, we will establish new derivative expressions for these polynomials and their operational matrices of derivatives.

3.1. Analytic Form and Its Inversion Formula

Theorem 1.
Let m be a non-negative integer. The power form representation of θ m ( x ) is given by
\theta_m(x) = \frac{(-1)^{m+1}}{2\sqrt{\pi}}\sum_{r=0}^{m}\frac{(1+2m-r)!\,\Gamma\!\left(r-m-\frac{1}{2}\right)}{(m-r)!\,r!}\,x^{m-r}.
Proof. 
We will proceed by induction. Assume the validity of (9) for every j with j < m; that is, assume
\theta_j(x) = \sum_{r=0}^{j} A_{r,j}\,x^{j-r}, \quad j < m,
where
A_{r,j} = \frac{(-1)^{j+1}\,(1+2j-r)!\,\Gamma\!\left(r-j-\frac{1}{2}\right)}{2\sqrt{\pi}\,(j-r)!\,r!}.
To complete the proof, we have to show that (9) itself holds.
Starting from the recurrence relation of θ m ( x ) , we have
\theta_m(x) = 2(2x-1)\,\theta_{m-1}(x) - \theta_{m-2}(x).
Making use of (10), we can write
\theta_m(x) = 2(2x-1)\sum_{r=0}^{m-1}A_{r,m-1}\,x^{m-r-1} - \sum_{r=0}^{m-2}A_{r,m-2}\,x^{m-r-2}.
The last formula can be written as
\theta_m(x) = 4\sum_{r=0}^{m-1}A_{r,m-1}\,x^{m-r} - 2\sum_{r=0}^{m-1}A_{r,m-1}\,x^{m-r-1} - \sum_{r=0}^{m-2}A_{r,m-2}\,x^{m-r-2},
which has the form
\theta_m(x) = 4\sum_{r=0}^{m-1}A_{r,m-1}\,x^{m-r} - 2\sum_{r=0}^{m}A_{r-1,m-1}\,x^{m-r} - \sum_{r=0}^{m}A_{r-2,m-2}\,x^{m-r} = \sum_{r=0}^{m}\left(4A_{r,m-1} - 2A_{r-1,m-1} - A_{r-2,m-2}\right)x^{m-r},
with the convention that A_{r,j} = 0 whenever r < 0 or r > j.
If we note the identity:
4\,A_{r,m-1} - 2\,A_{r-1,m-1} - A_{r-2,m-2} = A_{r,m},
then, Formula (9) can be proved. □
Theorem 2.
Consider a non-negative integer m. The following inversion formula is valid:
x^{m} = \sum_{r=0}^{m}\frac{2^{1-2m}\,(1+m-r)\,(1+2m)!}{(2+2m-r)!\,r!}\,\theta_{m-r}(x).
Proof. 
We prove Formula (14) by induction. The formula holds for m = 0 . Assume the validity of (14), and we have to show the following formula:
x^{m+1} = \sum_{r=0}^{m+1}\frac{2^{-1-2m}\,(2+m-r)\,(3+2m)!}{(4+2m-r)!\,r!}\,\theta_{m-r+1}(x).
Now, if we multiply Formula (14) by x, and make use of the recursive formula (8) in the following form:
x\,\theta_m(x) = \frac{1}{4}\left(2\,\theta_m(x) + \theta_{m+1}(x) + \theta_{m-1}(x)\right),
then the following formula can be obtained:
x^{m+1} = \frac{1}{4}\sum_{r=0}^{m}\frac{2^{1-2m}\,(1+m-r)\,(1+2m)!}{(2+2m-r)!\,r!}\left(2\,\theta_{m-r}(x) + \theta_{m-r+1}(x) + \theta_{m-r-1}(x)\right),
which can be turned after some algebraic computations into the following form:
x^{m+1} = \sum_{r=0}^{m+1}\frac{2^{-1-2m}\,(2+m-r)\,(3+2m)!}{(4+2m-r)!\,r!}\,\theta_{m-r+1}(x).
This completes the proof. □
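As a quick sanity check on Theorem 2, the following sketch (the helper `theta` re-implements recurrence (8), and `inv_coeff` encodes the coefficients of the inversion formula) verifies symbolically that the expansion reproduces x^m:

```python
import sympy as sp
from math import factorial

x = sp.symbols('x')

def theta(m):
    # recurrence (8): theta_k = 2(2x-1)theta_{k-1} - theta_{k-2}
    t_prev, t_curr = sp.Integer(1), 2*(2*x - 1)
    if m == 0:
        return t_prev
    for _ in range(2, m + 1):
        t_prev, t_curr = t_curr, sp.expand(2*(2*x - 1)*t_curr - t_prev)
    return t_curr

def inv_coeff(r, m):
    # coefficient of theta_{m-r} in the expansion of x^m (Theorem 2);
    # note 2^(1-2m) = 2/4^m
    return sp.Rational(2, 4**m) * (1 + m - r) * factorial(1 + 2*m) \
           / (factorial(2 + 2*m - r) * factorial(r))

for m in range(7):
    lhs = sum(inv_coeff(r, m) * theta(m - r) for r in range(m + 1))
    assert sp.expand(lhs - x**m) == 0
print("inversion formula verified for m = 0,...,6")
```

The loop confirms the formula exactly (in rational arithmetic) for the first several powers.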

3.2. Derivative Expressions and Operational Matrices of Derivatives of θ m ( x )

Based on Theorems 1 and 2, an expression for the high-order derivatives of θ m ( x ) in terms of their original polynomials can be deduced. The following theorem exhibits this expression.
Theorem 3.
Consider two positive integers m, p with m \ge p. We have
D^{p}\theta_m(x) = \sum_{k=0}^{m-p} U_{k,m,p}\,\theta_k(x),
where
U_{k,m,p} = v_{k,m,p}\,2^{2p}\,(k+1)\,\frac{\left(\frac{4+m+k-p}{2}\right)_{p-1}\,(p)_{\frac{m-k-p}{2}}}{\left(\frac{m-k-p}{2}\right)!},
with (z)_n = \frac{\Gamma(z+n)}{\Gamma(z)} denoting the Pochhammer symbol, and
v_{k,m,p} = \begin{cases}1, & (m-k-p)\ \text{even},\\ 0, & \text{otherwise}.\end{cases}
Proof. 
The analytic form of θ m ( x ) in (9) enables one to write D p θ m ( x ) as
D^{p}\theta_m(x) = \frac{(-1)^{m+1}}{2\sqrt{\pi}}\sum_{r=0}^{m-p}\frac{(1+2m-r)!\,\Gamma\!\left(r-m-\frac{1}{2}\right)(1+m-p-r)_p}{(m-r)!\,r!}\,x^{m-r-p}.
The inversion formula in (14) converts the above formula into the following one:
D^{p}\theta_m(x) = \frac{2}{\pi}\sum_{r=0}^{m-p}\frac{(2m-r+1)!\,\Gamma\!\left(\frac{3}{2}+m-p-r\right)\Gamma\!\left(r-m-\frac{1}{2}\right)}{r!}\times\sum_{n=0}^{m-r-p}\frac{(-1)^{m+r+n+1}\,(1+m-p-r-n)}{n!\,(2m-2p-2r-n+2)!}\,\theta_{m-r-p-n}(x).
The last formula can be rearranged to be written in a more convenient form:
D^{p}\theta_m(x) = \frac{2}{\pi}\sum_{\ell=0}^{m-p}(-1)^{m+\ell+1}\,(1+m-p-\ell)\times\sum_{r=0}^{\ell}\frac{(2m-r+1)!\,\Gamma\!\left(\frac{3}{2}+m-p-r\right)\Gamma\!\left(r-m-\frac{1}{2}\right)}{r!\,(\ell-r)!\,(2m-2p-r-\ell+2)!}\,\theta_{m-p-\ell}(x).
Now, to obtain a simplified formula for the derivatives D^{p}\theta_m(x), m \ge p, we use symbolic algebra to find a closed form for the second sum appearing on the right-hand side of (22). For this purpose, we set
S_{\ell,m,p} = \sum_{r=0}^{\ell}\frac{(2m-r+1)!\,\Gamma\!\left(\frac{3}{2}+m-p-r\right)\Gamma\!\left(r-m-\frac{1}{2}\right)}{r!\,(\ell-r)!\,(2m-2p-r-\ell+2)!},
and use Zeilberger's algorithm [52] to show that S_{\ell,m,p} satisfies the following recurrence:
(2-\ell-2p)\,(4+2m-2p-\ell)\,S_{\ell-2,m,p} + \ell\,(2+2m-\ell)\,S_{\ell,m,p} = 0,
with the following initial values:
S_{0,m,p} = \frac{4^{-1-m+p}\,\sqrt{\pi}\,(2m+1)!\,\Gamma\!\left(-m-\frac{1}{2}\right)}{(m-p+1)!}, \qquad S_{1,m,p} = 0,
which can be solved to obtain
S_{\ell,m,p} = \begin{cases}\dfrac{(-1)^{m+1}\,2^{2p-1}\,\pi\,\left(2+m-p-\frac{\ell}{2}\right)_{p-1}\,(p)_{\ell/2}}{\left(\frac{\ell}{2}\right)!}, & \ell\ \text{even},\\[2mm] 0, & \ell\ \text{odd},\end{cases}
and accordingly, Formula (22) reduces into the following one:
D^{p}\theta_m(x) = 2^{2p}\sum_{k=0}^{\left\lfloor\frac{m-p}{2}\right\rfloor}\frac{(1-2k+m-p)\left(\frac{4-2k+2m-2p}{2}\right)_{p-1}\,(p)_{k}}{k!}\,\theta_{m-p-2k}(x).
The last formula can be written in the following alternative form:
D^{p}\theta_m(x) = 2^{2p}\sum_{k=0}^{m-p}v_{k,m,p}\,\frac{(k+1)\left(\frac{4+m+k-p}{2}\right)_{p-1}\,(p)_{\frac{m-k-p}{2}}}{\left(\frac{m-k-p}{2}\right)!}\,\theta_k(x),
with
v_{k,m,p} = \begin{cases}1, & (m-k-p)\ \text{even},\\ 0, & \text{otherwise}.\end{cases}
This finalizes the proof of Theorem 3. □
The following four corollaries exhibit the first-, second-, third-, and fifth-order derivatives of the polynomials θ m ( x ) . They are all consequences of Theorem 3.
Corollary 1.
The first derivative of θ m ( x ) can be expressed in the following form:
\frac{d\theta_m(x)}{dx} = \sum_{k=0}^{m-1}\lambda_{k,m}^{1}\,\theta_k(x), \quad m \ge 1,
where
\lambda_{k,m}^{1} = 4\,(k+1)\,v_{k,m,1}.
Corollary 2.
The second derivative of θ m ( x ) can be expressed in the following form:
\frac{d^{2}\theta_m(x)}{dx^{2}} = \sum_{k=0}^{m-2}\lambda_{k,m}^{2}\,\theta_k(x), \quad m \ge 2,
where
\lambda_{k,m}^{2} = 4\,(k+1)\,(m-k)\,(k+m+2)\,v_{k,m,2}.
Corollary 3.
The third derivative of θ m ( x ) can be expressed in the following form:
\frac{d^{3}\theta_m(x)}{dx^{3}} = \sum_{k=0}^{m-3}\lambda_{k,m}^{3}\,\theta_k(x), \quad m \ge 3,
where
\lambda_{k,m}^{3} = 2\,(k+1)\,(m-k-1)\,(m-k+1)\,(k+m+1)\,(k+m+3)\,v_{k,m,3}.
Corollary 4.
The fifth derivative of θ m ( x ) can be expressed in the following form:
\frac{d^{5}\theta_m(x)}{dx^{5}} = \sum_{k=0}^{m-5}\lambda_{k,m}^{4}\,\theta_k(x), \quad m \ge 5,
where
\lambda_{k,m}^{4} = \frac{8\,(k+1)\,(k+m-1)\,(k+m+1)\,(k+m+3)\,(k+m+5)\,\Gamma\!\left(\frac{m-k+5}{2}\right)}{3\,\Gamma\!\left(\frac{m-k-3}{2}\right)}\,v_{k,m,5}.
Proof. 
The proofs of Corollaries 1–4 follow immediately by setting p = 1, 2, 3, 5, respectively, in Theorem 3. □
The following corollary presents the operational matrices of the integer derivatives of the polynomials θ m ( x ) , which can be deduced from the above four corollaries.
Corollary 5.
If we consider the following vector:
\theta(x) = \left[\theta_0(x), \theta_1(x), \ldots, \theta_N(x)\right]^{T},
then, the first-, second-, third-, and fifth-order derivatives of the vector θ ( x ) can be written in the following matrix forms:
\frac{d\theta(x)}{dx} = A\,\theta(x), \quad \frac{d^{2}\theta(x)}{dx^{2}} = B\,\theta(x), \quad \frac{d^{3}\theta(x)}{dx^{3}} = F\,\theta(x), \quad \frac{d^{5}\theta(x)}{dx^{5}} = G\,\theta(x),
where A = \left(\lambda_{k,m}^{1}\right), B = \left(\lambda_{k,m}^{2}\right), F = \left(\lambda_{k,m}^{3}\right), and G = \left(\lambda_{k,m}^{4}\right) are the operational matrices of derivatives, each of order (N+1)\times(N+1).
Proof. 
The expressions in (35) are direct consequences of Corollaries 1–4. □
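To illustrate Corollary 5, the sketch below assembles the first-derivative operational matrix A from Corollary 1 (λ¹_{k,m} = 4(k+1) when m−k is odd and k < m, zero otherwise) and checks dθ/dx = Aθ symbolically for a small N; the variable names are illustrative, not taken from the paper:

```python
import sympy as sp

x = sp.symbols('x')
N = 6

# theta_0, ..., theta_N from recurrence (8)
th = [sp.Integer(1), 2*(2*x - 1)]
for k in range(2, N + 1):
    th.append(sp.expand(2*(2*x - 1)*th[-1] - th[-2]))

# A[m][k] = lambda^1_{k,m} = 4(k+1) if (m-k) is odd, else 0
A = [[4*(k + 1) if k < m and (m - k) % 2 == 1 else 0
      for k in range(N + 1)] for m in range(N + 1)]

for m in range(N + 1):
    residual = sp.diff(th[m], x) - sum(A[m][k]*th[k] for k in range(N + 1))
    assert sp.expand(residual) == 0
print("d theta/dx = A theta verified up to N =", N)
```

The matrices B, F, and G can be assembled in the same way from Corollaries 2–4.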

4. A Collocation Approach for the Nonlinear Fifth-Order KdV-Type Partial DEs

Consider the following nonlinear fifth-order KdV-type partial differential equation [53,54]:
\frac{\partial\eta(x,t)}{\partial t} + a\,\eta^{2}(x,t)\,\frac{\partial\eta(x,t)}{\partial x} + b\,\frac{\partial\eta(x,t)}{\partial x}\,\frac{\partial^{2}\eta(x,t)}{\partial x^{2}} + d\,\eta(x,t)\,\frac{\partial^{3}\eta(x,t)}{\partial x^{3}} + \frac{\partial^{5}\eta(x,t)}{\partial x^{5}} = 0, \quad 0 \le x, t \le 1,
governed by the following initial and boundary conditions:
\eta(x,0) = f(x),
\eta(0,t) = g_0(t), \qquad \eta(1,t) = g_1(t),
\frac{\partial\eta(0,t)}{\partial x} = g_2(t), \qquad \frac{\partial\eta(1,t)}{\partial x} = g_3(t), \qquad \frac{\partial^{2}\eta(0,t)}{\partial x^{2}} = g_4(t),
where a , b , d are arbitrary constants.
Now, consider the following space:
Z_N = \mathrm{span}\left\{\theta_m(x)\,\theta_n(t) : 0 \le m, n \le N\right\}.
Consequently, it can be assumed that any function η N = η N ( x , t ) Z N can be represented as
\eta_N = \sum_{m=0}^{N}\sum_{n=0}^{N}\hat{\eta}_{mn}\,\theta_m(x)\,\theta_n(t) = \theta(x)^{T}\,\hat{\eta}\,\theta(t),
where \theta(x) is the vector defined in (34), and \hat{\eta} = (\hat{\eta}_{mn})_{0 \le m,n \le N} is the matrix of unknowns, of order (N+1)\times(N+1).
Now, we can write the residual R N ( x , t ) of Equation (36) as
R_N(x,t) = \frac{\partial\eta_N}{\partial t} + a\,\eta_N^{2}\,\frac{\partial\eta_N}{\partial x} + b\,\frac{\partial\eta_N}{\partial x}\,\frac{\partial^{2}\eta_N}{\partial x^{2}} + d\,\eta_N\,\frac{\partial^{3}\eta_N}{\partial x^{3}} + \frac{\partial^{5}\eta_N}{\partial x^{5}}.
Thanks to Corollary 5 along with the expansion (41), the following expressions for the terms \frac{\partial\eta_N}{\partial t}, \eta_N^{2}\frac{\partial\eta_N}{\partial x}, \frac{\partial\eta_N}{\partial x}\frac{\partial^{2}\eta_N}{\partial x^{2}}, \eta_N\frac{\partial^{3}\eta_N}{\partial x^{3}}, and \frac{\partial^{5}\eta_N}{\partial x^{5}} can be obtained:
\frac{\partial\eta_N}{\partial t} = \theta(x)^{T}\,\hat{\eta}\,(A\,\theta(t)),
\eta_N^{2}\,\frac{\partial\eta_N}{\partial x} = \left[\theta(x)^{T}\hat{\eta}\,\theta(t)\right]^{2}\left[(A\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right],
\frac{\partial\eta_N}{\partial x}\,\frac{\partial^{2}\eta_N}{\partial x^{2}} = \left[(A\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right]\left[(B\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right],
\eta_N\,\frac{\partial^{3}\eta_N}{\partial x^{3}} = \left[\theta(x)^{T}\hat{\eta}\,\theta(t)\right]\left[(F\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right],
\frac{\partial^{5}\eta_N}{\partial x^{5}} = (G\,\theta(x))^{T}\,\hat{\eta}\,\theta(t).
By virtue of the expressions (43)–(47), the residual R N ( x , t ) can be written in the following form:
R_N(x,t) = \theta(x)^{T}\hat{\eta}\,(A\,\theta(t)) + a\left[\theta(x)^{T}\hat{\eta}\,\theta(t)\right]^{2}\left[(A\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right] + b\left[(A\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right]\left[(B\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right] + d\left[\theta(x)^{T}\hat{\eta}\,\theta(t)\right]\left[(F\,\theta(x))^{T}\hat{\eta}\,\theta(t)\right] + (G\,\theta(x))^{T}\hat{\eta}\,\theta(t).
Now, to obtain the expansion coefficients \hat{\eta}_{mn}, we apply the spectral collocation method by forcing the residual R_N(x,t) to vanish at the collocation points \left(\frac{m+1}{N+2}, \frac{n+1}{N+2}\right), as follows:
R_N\!\left(\frac{m+1}{N+2}, \frac{n+1}{N+2}\right) = 0, \quad 0 \le m \le N-5, \quad 0 \le n \le N-1.
Moreover, the initial and boundary conditions (37)–(39) imply the following equations:
\theta\!\left(\tfrac{m+1}{N+2}\right)^{T}\hat{\eta}\,\theta(0) = f\!\left(\tfrac{m+1}{N+2}\right), \quad 0 \le m \le N,
\theta(0)^{T}\,\hat{\eta}\,\theta\!\left(\tfrac{n+1}{N+2}\right) = g_0\!\left(\tfrac{n+1}{N+2}\right), \quad 0 \le n \le N-1,
\theta(1)^{T}\,\hat{\eta}\,\theta\!\left(\tfrac{n+1}{N+2}\right) = g_1\!\left(\tfrac{n+1}{N+2}\right), \quad 0 \le n \le N-1,
(A\,\theta(0))^{T}\,\hat{\eta}\,\theta\!\left(\tfrac{n+1}{N+2}\right) = g_2\!\left(\tfrac{n+1}{N+2}\right), \quad 0 \le n \le N-1,
(A\,\theta(1))^{T}\,\hat{\eta}\,\theta\!\left(\tfrac{n+1}{N+2}\right) = g_3\!\left(\tfrac{n+1}{N+2}\right), \quad 0 \le n \le N-1,
(B\,\theta(0))^{T}\,\hat{\eta}\,\theta\!\left(\tfrac{n+1}{N+2}\right) = g_4\!\left(\tfrac{n+1}{N+2}\right), \quad 0 \le n \le N-1.
Equations (49)–(55) constitute a system of (N+1)^2 nonlinear algebraic equations in the entries of \hat{\eta}, which may be solved with a numerical solver, such as Newton's iterative method; the approximate solution given by (41) then follows.
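The bookkeeping behind this system is easy to check: the collocation conditions (49) contribute (N−4)·N equations, the initial condition (50) contributes N+1, and the five boundary conditions (51)–(55) contribute 5N, for a total of (N+1)². A small sketch (N = 12 is the value used in the examples below):

```python
N = 12

interior = (N - 4) * N   # Eq. (49): 0 <= m <= N-5, 0 <= n <= N-1
initial = N + 1          # Eq. (50): initial condition at t = 0
boundary = 5 * N         # Eqs. (51)-(55): five boundary conditions

total = interior + initial + boundary
print(total, (N + 1) ** 2)   # 169 169

# the uniform collocation nodes used in (49)-(55)
nodes = [(m + 1) / (N + 2) for m in range(N + 1)]
```

So the number of equations always matches the number of unknown entries of the coefficient matrix.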

5. The Convergence and Error Analysis

This section provides a detailed convergence analysis of the proposed approximate expansion. To begin this study, specific inequalities are necessary.
Lemma 1.
The following inequality holds [55]:
\left|I_n(x)\right| \le \frac{x^{n}\,\cosh(x)}{2^{n}\,\Gamma(n+1)}, \quad x > 0,
where I n ( x ) is the modified Bessel function of order n of the first kind.
Lemma 2.
Let g(x) be infinitely differentiable at the origin. Then g(x) can be expanded in terms of the polynomials \theta_n(x) as
g(x) = \sum_{n=0}^{\infty}\left(\sum_{s=n}^{\infty}\frac{4\,g^{(s)}(0)\,(n+1)\,\Gamma\!\left(s+\frac{3}{2}\right)}{\sqrt{\pi}\,(s-n)!\,(n+s+2)!}\right)\theta_n(x).
Proof. 
Consider the following expansion for g ( x ) :
g(x) = \sum_{n=0}^{\infty}\frac{g^{(n)}(0)}{n!}\,x^{n}.
As a result of the inversion formula (14), the previous expansion transforms into the following form:
g(x) = \sum_{n=0}^{\infty}\sum_{r=0}^{n}\frac{g^{(n)}(0)\,2^{1-2n}\,(2n+1)!\,(r+1)}{n!\,(n-r)!\,(2+n+r)!}\,\theta_r(x).
Now, expanding the right-hand side of the last equation and rearranging the similar terms, the following expansion can be obtained:
g(x) = \sum_{n=0}^{\infty}\left(\sum_{s=n}^{\infty}\frac{4\,g^{(s)}(0)\,(n+1)\,\Gamma\!\left(s+\frac{3}{2}\right)}{\sqrt{\pi}\,(s-n)!\,(n+s+2)!}\right)\theta_n(x).
This completes the proof of this lemma. □
Lemma 3.
Consider any non-negative integer m. The following inequality holds for θ m ( x ) :
\left|\theta_m(x)\right| \le m+1, \quad x \in (0,1).
Proof. 
Setting x = 1 in the analytic form (9) for \theta_m(x) gives
\theta_m(1) = \frac{(-1)^{m+1}}{2\sqrt{\pi}}\,S_m, \qquad S_m = \sum_{r=0}^{m}\frac{(1+2m-r)!\,\Gamma\!\left(r-m-\frac{1}{2}\right)}{(m-r)!\,r!}.
Now, we use symbolic algebra to find a closed formula for S_m. Zeilberger's algorithm [52] aids in demonstrating that the following first-order recurrence is satisfied by S_m:
(m+1)\,S_{m+1} + (m+2)\,S_m = 0, \qquad S_0 = -2\sqrt{\pi},
which can be immediately solved to give
S_m = 2\,(-1)^{m+1}\,\sqrt{\pi}\,(m+1),
and therefore \theta_m(1) = m+1. Moreover, the extreme values of \theta_m(x) on [0,1] occur at the endpoints, where \left|\theta_m(0)\right| = \left|\theta_m(1)\right| = m+1, and hence
\left|\theta_m(x)\right| \le m+1, \quad x \in (0,1).
This proves Lemma 3. □
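Lemma 3 is easy to probe numerically; the following sketch evaluates θ_m on a fine grid of (0,1) via recurrence (8) and checks the bound m+1 (a numerical illustration, not a proof):

```python
import numpy as np

def theta_values(m, xs):
    """Evaluate theta_m at the points xs using recurrence (8)."""
    t_prev = np.ones_like(xs)
    t_curr = 2 * (2 * xs - 1)
    if m == 0:
        return t_prev
    for _ in range(2, m + 1):
        t_prev, t_curr = t_curr, 2 * (2 * xs - 1) * t_curr - t_prev
    return t_curr

xs = np.linspace(0.0, 1.0, 2001)
for m in range(12):
    assert np.max(np.abs(theta_values(m, xs))) <= m + 1 + 1e-9
print("max |theta_m| <= m + 1 confirmed for m = 0,...,11")
```

The small tolerance only guards against floating-point rounding at the endpoints.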
Theorem 4.
If g(x) is defined on [0,1] with \left|g^{(i)}(0)\right| \le \mu^{i} for all i \ge 0, where \mu is a positive constant, and g(x) = \sum_{n=0}^{\infty}\hat{u}_n\,\theta_n(x), then the expansion coefficients satisfy
\left|\hat{u}_n\right| \le \frac{\left(e^{\mu}+1\right)\mu^{n}}{2^{2n+1}\,n!}.
Moreover, the series converges absolutely.
Proof. 
Based on Lemma 2 and the assumptions of the theorem, we can write
\left|\hat{u}_n\right| = \left|\sum_{s=n}^{\infty}\frac{4\,g^{(s)}(0)\,(n+1)\,\Gamma\!\left(s+\frac{3}{2}\right)}{\sqrt{\pi}\,(s-n)!\,(n+s+2)!}\right| \le \sum_{s=n}^{\infty}\frac{4\,\mu^{s}\,(n+1)\,\Gamma\!\left(s+\frac{3}{2}\right)}{\sqrt{\pi}\,(s-n)!\,(n+s+2)!} = \frac{4\,e^{\mu/2}\,(n+1)\,I_{n+1}\!\left(\frac{\mu}{2}\right)}{\mu}.
The application of Lemma 1 enables us to write the previous inequality as
\left|\hat{u}_n\right| \le \frac{4\,e^{\mu/2}\,(n+1)\cosh\!\left(\frac{\mu}{2}\right)}{\mu}\cdot\frac{\left(\frac{\mu}{2}\right)^{n+1}}{2^{n+1}\,(n+1)!},
which can be rewritten after simplifying the right-hand side of the last inequality as
\left|\hat{u}_n\right| \le \frac{\left(e^{\mu}+1\right)\mu^{n}}{2^{2n+1}\,n!}.
We now show the second part of the theorem. Since we have
\sum_{n=0}^{\infty}\left|\hat{u}_n\,\theta_n(x)\right| \le \sum_{n=0}^{\infty}\frac{\left(e^{\mu}+1\right)(n+1)\,\mu^{n}}{2^{2n+1}\,n!} = \frac{1}{8}\,e^{\mu/4}\left(e^{\mu}+1\right)(\mu+4),
so the series converges absolutely. □
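The series identity \sum_{n\ge 0}(n+1)z^n/n! = e^{z}(1+z) with z = \mu/4, which underlies the closed-form bound \frac{1}{8}e^{\mu/4}(e^{\mu}+1)(\mu+4) above, can be confirmed numerically (a quick sketch with an illustrative value of \mu):

```python
import math

mu = 2.0          # illustrative value
z = mu / 4
series = sum((n + 1) * z**n / math.factorial(n) for n in range(60))
closed = math.exp(z) * (1 + z)
assert abs(series - closed) < 1e-12

# multiplying by (e^mu + 1)/2 gives (1/8) e^{mu/4} (e^mu + 1)(mu + 4)
bound = (math.exp(mu) + 1) / 2 * closed
assert abs(bound - math.exp(z) * (math.exp(mu) + 1) * (mu + 4) / 8) < 1e-9
```

The truncation at 60 terms is far beyond machine precision for moderate \mu.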
Theorem 5.
If g(x) satisfies the hypotheses of Theorem 4, and e_N(x) = \sum_{n=N+1}^{\infty}\hat{u}_n\,\theta_n(x), then the following error estimate is satisfied:
\left|e_N(x)\right| < \frac{\left(e^{\mu}+1\right)\left(e^{\mu/4}(\mu+4)+4\right)\mu^{N+1}}{2^{2N+5}\,N!}.
Proof. 
The definition of e N ( x ) enables us to write
\left|e_N(x)\right| = \left|\sum_{n=N+1}^{\infty}\hat{u}_n\,\theta_n(x)\right| \le \sum_{n=N+1}^{\infty}\frac{\left(e^{\mu}+1\right)(n+1)\,\mu^{n}}{2^{2n+1}\,n!} = \frac{e^{\mu}+1}{2^{2N+3}}\cdot\frac{\mu^{N+1} + e^{\mu/4}\,(\mu+4)\,4^{N}\left(N! - \Gamma\!\left(N+1,\frac{\mu}{4}\right)\right)}{N!} < \frac{\left(e^{\mu}+1\right)\left(e^{\mu/4}(\mu+4)+4\right)\mu^{N+1}}{2^{2N+5}\,N!},
where \Gamma(\cdot,\cdot) denotes the upper incomplete gamma function [56]. □
Theorem 6.
Let \eta(x,t) = \chi_1(x)\,\chi_2(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\hat{\eta}_{ij}\,\theta_i(x)\,\theta_j(t), with \left|\chi_1^{(i)}(0)\right| \le \lambda_1^{i} and \left|\chi_2^{(i)}(0)\right| \le \lambda_2^{i}, where \lambda_1 and \lambda_2 are positive constants. One has
\left|\hat{\eta}_{ij}\right| \le \frac{\left(e^{\lambda_1}+1\right)\left(e^{\lambda_2}+1\right)\lambda_1^{i}\,\lambda_2^{j}}{2^{2(i+j+1)}\,i!\,j!}.
Moreover, the series converges absolutely.
Proof. 
If we apply Lemma 2 and use the assumption η ( x , t ) = χ 1 ( x ) χ 2 ( t ) , then we can write
\hat{\eta}_{ij} = \left(\sum_{p=i}^{\infty}\frac{4\,\chi_1^{(p)}(0)\,(i+1)\,\Gamma\!\left(p+\frac{3}{2}\right)}{\sqrt{\pi}\,(p-i)!\,(i+p+2)!}\right)\left(\sum_{q=j}^{\infty}\frac{4\,\chi_2^{(q)}(0)\,(j+1)\,\Gamma\!\left(q+\frac{3}{2}\right)}{\sqrt{\pi}\,(q-j)!\,(j+q+2)!}\right).
Using the assumptions \left|\chi_1^{(i)}(0)\right| \le \lambda_1^{i} and \left|\chi_2^{(i)}(0)\right| \le \lambda_2^{i}, we obtain
\left|\hat{\eta}_{ij}\right| \le \left(\sum_{p=i}^{\infty}\frac{4\,\lambda_1^{p}\,(i+1)\,\Gamma\!\left(p+\frac{3}{2}\right)}{\sqrt{\pi}\,(p-i)!\,(i+p+2)!}\right)\times\left(\sum_{q=j}^{\infty}\frac{4\,\lambda_2^{q}\,(j+1)\,\Gamma\!\left(q+\frac{3}{2}\right)}{\sqrt{\pi}\,(q-j)!\,(j+q+2)!}\right).
We obtain the desired result by performing similar steps as in the proof of Theorem 4. □
Theorem 7.
If \eta = \eta(x,t) satisfies the hypotheses of Theorem 6, then we have the following upper estimate on the truncation error:
\left|E_N\right| = \left|\eta - \eta_N\right| < e^{\lambda_1/4}\left(e^{\lambda_1}+1\right)(\lambda_1+4)\,\frac{\left(e^{\lambda_2}+1\right)\left(e^{\lambda_2/4}(\lambda_2+4)+4\right)\lambda_2^{N+1}}{2^{2N+5}\,N!} + e^{\lambda_2/4}\left(e^{\lambda_2}+1\right)(\lambda_2+4)\,\frac{\left(e^{\lambda_1}+1\right)\left(e^{\lambda_1/4}(\lambda_1+4)+4\right)\lambda_1^{N+1}}{2^{2N+5}\,N!}.
Proof. 
From the definitions of \eta and \eta_N, we obtain
\left|E_N\right| = \left|\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\hat{\eta}_{ij}\,\theta_i(x)\,\theta_j(t) - \sum_{i=0}^{N}\sum_{j=0}^{N}\hat{\eta}_{ij}\,\theta_i(x)\,\theta_j(t)\right| \le \left|\sum_{i=0}^{N}\sum_{j=N+1}^{\infty}\hat{\eta}_{ij}\,\theta_i(x)\,\theta_j(t)\right| + \left|\sum_{i=N+1}^{\infty}\sum_{j=0}^{\infty}\hat{\eta}_{ij}\,\theta_i(x)\,\theta_j(t)\right|.
If Theorem 6 and Lemma 3 are applied, then the following inequalities can be obtained (stated for \lambda_1; the analogous bounds hold with \lambda_2):
\sum_{i=0}^{N}\frac{\left(e^{\lambda_1}+1\right)(i+1)\,\lambda_1^{i}}{2^{2i+1}\,i!} < \frac{1}{8}\,e^{\lambda_1/4}\left(e^{\lambda_1}+1\right)(\lambda_1+4) < e^{\lambda_1/4}\left(e^{\lambda_1}+1\right)(\lambda_1+4),
\sum_{i=N+1}^{\infty}\frac{\left(e^{\lambda_1}+1\right)(i+1)\,\lambda_1^{i}}{2^{2i+1}\,i!} = \frac{e^{\lambda_1}+1}{2^{2N+3}}\cdot\frac{\lambda_1^{N+1} + e^{\lambda_1/4}(\lambda_1+4)\,4^{N}\left(N! - \Gamma\!\left(N+1,\frac{\lambda_1}{4}\right)\right)}{N!} < \frac{\left(e^{\lambda_1}+1\right)\left(e^{\lambda_1/4}(\lambda_1+4)+4\right)\lambda_1^{N+1}}{2^{2N+5}\,N!},
\sum_{i=0}^{\infty}\frac{\left(e^{\lambda_1}+1\right)(i+1)\,\lambda_1^{i}}{2^{2i+1}\,i!} = \frac{1}{8}\,e^{\lambda_1/4}\left(e^{\lambda_1}+1\right)(\lambda_1+4) < e^{\lambda_1/4}\left(e^{\lambda_1}+1\right)(\lambda_1+4),
and accordingly, we obtain
\left|\eta - \eta_N\right| < e^{\lambda_1/4}\left(e^{\lambda_1}+1\right)(\lambda_1+4)\,\frac{\left(e^{\lambda_2}+1\right)\left(e^{\lambda_2/4}(\lambda_2+4)+4\right)\lambda_2^{N+1}}{2^{2N+5}\,N!} + e^{\lambda_2/4}\left(e^{\lambda_2}+1\right)(\lambda_2+4)\,\frac{\left(e^{\lambda_1}+1\right)\left(e^{\lambda_1/4}(\lambda_1+4)+4\right)\lambda_1^{N+1}}{2^{2N+5}\,N!}.
This completes the proof of this theorem. □

6. Illustrative Examples

In this section, we present numerical examples to validate and demonstrate the applicability and accuracy of our proposed numerical algorithm. We also present comparisons with some other methods. Now, if we consider the successive maximum errors E_N and E_{N+1}, then the order of convergence for the given method can be calculated as [57]
\text{Order} = \frac{\log\left(\frac{E_{N+1}}{E_N}\right)}{\log\left(\frac{N+1}{N}\right)}.
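As a hypothetical illustration of how this order is computed: if the maximum errors decayed like E_N ∝ N^{−3}, formula (81) would return −3 (the values below are made up for illustration, not taken from the tables):

```python
import math

def convergence_order(E_N, E_N1, N):
    """Order of convergence from successive errors, Eq. (81)."""
    return math.log(E_N1 / E_N) / math.log((N + 1) / N)

N = 8
E_N, E_N1 = N**-3.0, (N + 1)**-3.0   # hypothetical algebraic decay
print(round(convergence_order(E_N, E_N1, N), 6))   # -3.0
```

For spectral methods, the magnitude of this order typically grows with N, reflecting faster-than-algebraic error decay.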
Example 1
([53,54]). Consider the following Lax equation of order five:
\frac{\partial\eta}{\partial t} + 30\,\eta^{2}\frac{\partial\eta}{\partial x} + 20\,\frac{\partial\eta}{\partial x}\frac{\partial^{2}\eta}{\partial x^{2}} + 10\,\eta\,\frac{\partial^{3}\eta}{\partial x^{3}} + \frac{\partial^{5}\eta}{\partial x^{5}} = 0, \quad 0 \le x, t \le 1,
governed by
\eta(x,0) = 2k^{2}\left(2 - 3\tanh^{2}(kx_{0} - kx)\right),
\eta(0,t) = 2k^{2}\left(2 - 3\tanh^{2}\!\left(kx_{0} + 56k^{5}t\right)\right),
\eta(1,t) = 2k^{2}\left(2 - 3\tanh^{2}\!\left(k - kx_{0} - 56k^{5}t\right)\right),
\frac{\partial\eta(0,t)}{\partial x} = 12k^{3}\tanh\!\left(kx_{0} + 56k^{5}t\right)\mathrm{sech}^{2}\!\left(kx_{0} + 56k^{5}t\right),
\frac{\partial\eta(1,t)}{\partial x} = -12k^{3}\tanh\!\left(k - kx_{0} - 56k^{5}t\right)\mathrm{sech}^{2}\!\left(k - kx_{0} - 56k^{5}t\right),
\frac{\partial^{2}\eta(0,t)}{\partial x^{2}} = 12k^{4}\left(2\tanh^{2}\!\left(kx_{0} + 56k^{5}t\right) - \mathrm{sech}^{2}\!\left(kx_{0} + 56k^{5}t\right)\right)\mathrm{sech}^{2}\!\left(kx_{0} + 56k^{5}t\right),
where the analytic solution of this problem is
\eta(x,t) = 2k^{2}\left(2 - 3\tanh^{2}\!\left(kx - 56k^{5}t - kx_{0}\right)\right).
Table 1 presents a comparison of the L_\infty error between our method at N = 12 and the method in [53] when k = 0.01 and x_0 = 0. Table 2 reports the CPU times (in seconds) for the results in Table 1. Moreover, Table 3 shows the absolute errors (AEs) at different values of t when k = 0.01 and x_0 = 0. Table 4 shows the maximum AEs and the order of convergence, calculated by (81), at different values of N. Figure 1 illustrates the AEs (left) and the approximate solution (right) at N = 12 when k = 0.01 and x_0 = 0.
Remark 1
(Stability). We note that our proposed method is stable in the sense that \eta_{N+1} - \eta_N becomes sufficiently small for sufficiently large values of N; see [58]. To confirm this for Problem (82), Figure 2 shows that our method remains stable along the line x = t.
Remark 2
(Consistency). To show the consistency of our numerical method, in the sense that |R_N(x,t)| is small for sufficiently large values of N, Figure 3 plots the absolute residual |R_N(x,t)| at t = 0.5 for Problem (82). This figure shows that the absolute residual at t = 0.5 becomes sufficiently small as N increases.
Example 2
([53,54]). Consider the following Sawada–Kotera equation of order five:
\frac{\partial\eta}{\partial t} + 45\,\eta^{2}\frac{\partial\eta}{\partial x} + 15\,\frac{\partial\eta}{\partial x}\frac{\partial^{2}\eta}{\partial x^{2}} + 15\,\eta\,\frac{\partial^{3}\eta}{\partial x^{3}} + \frac{\partial^{5}\eta}{\partial x^{5}} = 0, \quad 0 \le x, t \le 1,
governed by
\eta(x,0) = 2k^{2}\,\mathrm{sech}^{2}(kx_{0} - kx),
\eta(0,t) = 2k^{2}\,\mathrm{sech}^{2}\!\left(kx_{0} + 16k^{5}t\right),
\eta(1,t) = 2k^{2}\,\mathrm{sech}^{2}\!\left(k - kx_{0} - 16k^{5}t\right),
\frac{\partial\eta(0,t)}{\partial x} = 4k^{3}\tanh\!\left(kx_{0} + 16k^{5}t\right)\mathrm{sech}^{2}\!\left(kx_{0} + 16k^{5}t\right),
\frac{\partial\eta(1,t)}{\partial x} = -4k^{3}\tanh\!\left(k - kx_{0} - 16k^{5}t\right)\mathrm{sech}^{2}\!\left(k - kx_{0} - 16k^{5}t\right),
\frac{\partial^{2}\eta(0,t)}{\partial x^{2}} = 4k^{4}\left(\cosh\!\left(2(kx_{0} + 16k^{5}t)\right) - 2\right)\mathrm{sech}^{4}\!\left(kx_{0} + 16k^{5}t\right),
where the analytic solution of this problem is
η(x,t) = 2k² sech²(kx − 16k⁵t − kx₀).
Table 5 presents a comparison of the L∞ error between our method at N = 12 and the method in [53] for k = 0.2 and x₀ = 0. Table 6 reports the corresponding CPU times in seconds. Moreover, Table 7 lists the AEs at different values of t for k = 0.2 and x₀ = 0. Figure 4 illustrates the AEs (left) and the approximate solution (right) at N = 12 for k = 0.2 and x₀ = 0.
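Because the solitary wave of Example 2 is available in closed form, the example can be checked independently of the solver: substituting the stated solution into the Sawada–Kotera equation must annihilate the residual. A sympy sketch of this verification, evaluated at exact rational sample points:

```python
import sympy as sp

x, t, k, x0 = sp.symbols('x t k x_0', real=True)

# Exact solution of Example 2.
eta = 2*k**2*sp.sech(k*x - 16*k**5*t - k*x0)**2

# Residual of the fifth-order Sawada-Kotera equation.
res = (sp.diff(eta, t) + 45*eta**2*sp.diff(eta, x)
       + 15*sp.diff(eta, x)*sp.diff(eta, x, 2)
       + 15*eta*sp.diff(eta, x, 3)
       + sp.diff(eta, x, 5))

vals = {x: sp.Rational(1, 3), t: sp.Rational(2, 5), k: sp.Rational(1, 5), x0: 0}
print(abs(res.subs(vals).evalf(50)))  # indistinguishable from zero
```

The residual is identically zero for the wave speed 16k⁴, so the printed magnitude sits at the working precision of the evaluation.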
Example 3
([53,54]). Consider the following Caudrey–Dodd–Gibbon equation of order five:
∂η/∂t + 180η² ∂η/∂x + 30 (∂η/∂x)(∂²η/∂x²) + 30η ∂³η/∂x³ + ∂⁵η/∂x⁵ = 0,  0 ≤ x, t ≤ 1,
governed by
η(x,0) = k² e^{kx}/(e^{kx} + 1)²,
η(0,t) = k² e^{k⁵t}/(e^{k⁵t} + 1)²,
η(1,t) = k² e^{k⁵t+k}/(e^{k⁵t} + e^{k})²,
∂η(0,t)/∂x = k³ e^{k⁵t}(e^{k⁵t} − 1)/(e^{k⁵t} + 1)³,
∂η(1,t)/∂x = k³ e^{k⁵t+k}(e^{k⁵t} − e^{k})/(e^{k⁵t} + e^{k})³,
∂²η(0,t)/∂x² = k⁴ e^{k⁵t}(e^{2k⁵t} − 4e^{k⁵t} + 1)/(e^{k⁵t} + 1)⁴,
where the analytic solution of this problem is
η(x,t) = k² e^{k(x − k⁴t)}/(e^{k(x − k⁴t)} + 1)².
Table 8 presents a comparison of the L∞ error between our method at N = 12 and the method in [53] for k = 0.01. Table 9 lists the AEs at different values of t for k = 0.01. Figure 5 illustrates the AEs (left) and the approximate solution (right) at N = 12 for k = 0.01.
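The exact solution of Example 3 can be verified in the same way: substituting it into the Caudrey–Dodd–Gibbon equation must annihilate the residual. A sympy sketch, again evaluated at exact rational sample points:

```python
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)

E = sp.exp(k*(x - k**4*t))
eta = k**2*E/(E + 1)**2  # exact solution of Example 3

# Residual of the fifth-order Caudrey-Dodd-Gibbon equation.
res = (sp.diff(eta, t) + 180*eta**2*sp.diff(eta, x)
       + 30*sp.diff(eta, x)*sp.diff(eta, x, 2)
       + 30*eta*sp.diff(eta, x, 3)
       + sp.diff(eta, x, 5))

vals = {x: sp.Rational(1, 2), t: sp.Rational(3, 7), k: sp.Rational(1, 5)}
print(abs(res.subs(vals).evalf(50)))  # indistinguishable from zero
```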
Example 4
([53,54]). Consider the following fifth-order Kaup–Kupershmidt equation:
∂η/∂t + 20η² ∂η/∂x + 25 (∂η/∂x)(∂²η/∂x²) + 10η ∂³η/∂x³ + ∂⁵η/∂x⁵ = 0,  0 ≤ x, t ≤ 1,
governed by
η(x,0) = 24k² e^{kx}(4e^{kx} + e^{2kx} + 16)/(16e^{kx} + e^{2kx} + 16)²,
η(0,t) = 24k² e^{k⁵t}(4e^{k⁵t} + 16e^{2k⁵t} + 1)/(16e^{k⁵t} + 16e^{2k⁵t} + 1)²,
η(1,t) = 24k² e^{k⁵t+k}(16e^{2k⁵t} + 4e^{k⁵t+k} + e^{2k})/(16e^{2k⁵t} + 16e^{k⁵t+k} + e^{2k})²,
∂η(0,t)/∂x = 24k³ e^{k⁵t}(4e^{k⁵t} − 1)³(4e^{k⁵t} + 1)/(16e^{k⁵t} + 16e^{2k⁵t} + 1)³,
∂η(1,t)/∂x = −24k³ e^{k⁵t+k}(e^{k} − 4e^{k⁵t})³(4e^{k⁵t} + e^{k})/(16e^{2k⁵t} + 16e^{k⁵t+k} + e^{2k})³,
where the analytic solution of this problem is
η(x,t) = 24k² e^{k(x−k⁴t)}(4e^{k(x−k⁴t)} + e^{2k(x−k⁴t)} + 16)/(16e^{k(x−k⁴t)} + e^{2k(x−k⁴t)} + 16)².
Table 10 presents a comparison of the L∞ error between our method at N = 12 and the method in [53] for k = 0.01. Table 11 lists the AEs at different values of t for k = 0.01. Figure 6 illustrates the AEs (left) and the approximate solution (right) at N = 12 for k = 0.01.
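One of the stated boundary expressions of this example — the x-derivative at x = 0 — can be cross-checked against the closed-form solution. A sympy sketch at exact rational sample values:

```python
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)

E = sp.exp(k*(x - k**4*t))
eta = 24*k**2*E*(E**2 + 4*E + 16)/(E**2 + 16*E + 16)**2  # exact solution

F = sp.exp(k**5*t)
# Stated expression for d(eta)/dx at x = 0.
claimed = 24*k**3*F*(4*F - 1)**3*(4*F + 1)/(16*F**2 + 16*F + 1)**3

d = sp.diff(eta, x).subs(x, 0) - claimed
vals = {t: sp.Rational(2, 5), k: sp.Rational(1, 4)}
print(abs(d.subs(vals).evalf(50)))  # indistinguishable from zero
```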
Example 5
([53,54]). Consider the following fifth-order Ito equation:
∂η/∂t + 2η² ∂η/∂x + 6 (∂η/∂x)(∂²η/∂x²) + 3η ∂³η/∂x³ + ∂⁵η/∂x⁵ = 0,  0 ≤ x, t ≤ 1,
governed by
η(x,0) = 10k²(2 − 3 tanh²(k(x₀ + x))),
η(0,t) = 10k²(2 − 3 tanh²(kx₀ − 96k⁵t)),
η(1,t) = 10k²(2 − 3 tanh²(kx₀ − 96k⁵t + k)),
∂η(0,t)/∂x = −60k³ tanh(kx₀ − 96k⁵t) sech²(kx₀ − 96k⁵t),
∂η(1,t)/∂x = −60k³ tanh(kx₀ − 96k⁵t + k) sech²(kx₀ − 96k⁵t + k),
∂²η(0,t)/∂x² = 60k⁴ (cosh(2(kx₀ − 96k⁵t)) − 2) sech⁴(kx₀ − 96k⁵t),
where the analytic solution of this problem is
η(x,t) = 20k² − 30k² tanh²(kx − 96k⁵t + kx₀).
Table 12 presents a comparison of the L∞ error between our method at N = 12 and the method in [53] for k = 0.12 and x₀ = 0. Table 13 lists the AEs at different values of t for k = 0.12 and x₀ = 0. Figure 7 illustrates the AEs (left) and the approximate solution (right) at N = 10 for k = 0.12 and x₀ = 0. Table 14 shows the maximum AEs and the order of convergence, computed by (81), at different values of N.
Remark 3.
Figure 8 confirms that the method remains stable along the line x = t for higher values of N. Finally, Figure 9 verifies that |R_N(x,t)| along x = t is sufficiently small for sufficiently large values of N, which demonstrates the consistency of the presented method.
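As for the earlier examples, the closed-form solution of Example 5 can be substituted into the fifth-order Ito equation to confirm that the residual vanishes. A sympy sketch, with k = 3/25 = 0.12 as in the experiments:

```python
import sympy as sp

x, t, k, x0 = sp.symbols('x t k x_0', real=True)

# Exact solution of Example 5.
eta = 20*k**2 - 30*k**2*sp.tanh(k*x - 96*k**5*t + k*x0)**2

# Residual of the fifth-order Ito equation.
res = (sp.diff(eta, t) + 2*eta**2*sp.diff(eta, x)
       + 6*sp.diff(eta, x)*sp.diff(eta, x, 2)
       + 3*eta*sp.diff(eta, x, 3)
       + sp.diff(eta, x, 5))

vals = {x: sp.Rational(1, 2), t: sp.Rational(3, 7), k: sp.Rational(3, 25), x0: 0}
print(abs(res.subs(vals).evalf(50)))  # indistinguishable from zero
```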
Example 6.
Consider the following Sawada–Kotera equation of order five:
∂η/∂t + 45η² ∂η/∂x + 15 (∂η/∂x)(∂²η/∂x²) + 15η ∂³η/∂x³ + ∂⁵η/∂x⁵ = 0,  0 ≤ x, t ≤ 1,
governed by
η(x,0) = η(0,t) = η(1,t) = 0,
∂η(0,t)/∂x = ∂η(1,t)/∂x = ∂²η(0,t)/∂x² = 0.
Since the exact solution is not available, we define the following absolute residual error norm:
RE = max_{(x,t)∈[0,1]²} |∂η_N/∂t + 45η_N² ∂η_N/∂x + 15 (∂η_N/∂x)(∂²η_N/∂x²) + 15η_N ∂³η_N/∂x³ + ∂⁵η_N/∂x⁵|,
We then apply the presented method at N = 5; Table 15 reports the resulting values of RE.
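In practice, RE is evaluated by differentiating the computed expansion symbolically and maximizing the residual over a grid. A minimal sketch, with a hypothetical polynomial standing in for the computed η_N (the actual expansion comes from the collocation solver):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# Hypothetical stand-in for the computed expansion eta_N (illustration only).
eta_N = x**2*(1 - x)*t

# Sawada-Kotera residual applied to eta_N.
res = (sp.diff(eta_N, t) + 45*eta_N**2*sp.diff(eta_N, x)
       + 15*sp.diff(eta_N, x)*sp.diff(eta_N, x, 2)
       + 15*eta_N*sp.diff(eta_N, x, 3)
       + sp.diff(eta_N, x, 5))

R = sp.lambdify((x, t), res, 'math')
grid = [i / 10 for i in range(11)]
RE = max(abs(R(xi, ti)) for xi in grid for ti in grid)
print(RE)
```

For the true collocation solution, the same computation produces the very small RE values reported in Table 15; the crude stand-in above, of course, yields a residual of order one or larger.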

7. Conclusions

This study developed and analyzed a numerical algorithm for treating fifth-order KdV-type equations, producing highly accurate results. The main idea was to introduce new shifted Horadam polynomials to act as basis functions. We established several basic formulas for these polynomials in order to design the proposed numerical method. In addition, specific formulas and inequalities allowed an in-depth investigation of the convergence of the shifted Horadam approximate solutions. We also presented numerical examples that confirm the method's applicability and usefulness in tackling complicated nonlinear problems in mathematical physics and related domains. To the best of our knowledge, this is the first time these polynomials have been used in the scope of numerical solutions of DEs. We plan to employ these polynomials to treat other types of DEs in the applied sciences.

Author Contributions

Conceptualization, W.M.A.-E. and A.G.A.; Methodology, W.M.A.-E., O.M.A. and A.G.A.; Software, A.G.A.; Validation, W.M.A.-E., O.M.A. and A.G.A.; Formal analysis, W.M.A.-E. and A.G.A.; Investigation, W.M.A.-E., O.M.A. and A.G.A.; Writing—original draft, A.G.A. and W.M.A.-E.; Writing—review and editing, W.M.A.-E., A.G.A. and O.M.A.; Visualization, A.G.A. and W.M.A.-E.; Supervision, W.M.A.-E.; Funding acquisition, W.M.A.-E. and O.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Almatrafi, M.B.; Alharbi, A. New soliton wave solutions to a nonlinear equation arising in plasma physics. CMES-Comput. Model. Eng. Sci. 2023, 137, 827–841. [Google Scholar] [CrossRef]
  2. Parand, K.; Aghaei, A.A.; Kiani, S.; Zadeh, T.I.; Khosravi, Z. A neural network approach for solving nonlinear differential equations of Lane–Emden type. Eng. Comput. 2024, 40, 953–969. [Google Scholar] [CrossRef]
  3. Oruç, Ö. A new algorithm based on Lucas polynomials for approximate solution of 1D and 2D nonlinear generalized Benjamin–Bona–Mahony–Burgers equation. Comput. Math. Appl. 2017, 74, 3042–3057. [Google Scholar] [CrossRef]
  4. Bhowmik, S.K.; Khan, J.A. High-Accurate Numerical Schemes for Black–Scholes Models with Sensitivity Analysis. J. Math. 2022, 2022, 4488082. [Google Scholar] [CrossRef]
  5. Abd-Elhameed, W.M.; Al-Harbi, M.S.; Atta, A.G. New convolved Fibonacci collocation procedure for the Fitzhugh-Nagumo non-linear equation. Nonlinear Eng. 2024, 13, 20220332. [Google Scholar] [CrossRef]
  6. Mahdy, A.M.S. A numerical method for solving the nonlinear equations of Emden-Fowler models. J. Ocean Eng. Sci. 2022. [Google Scholar] [CrossRef]
  7. Majchrzak, E.; Mochnacki, B. The BEM application for numerical solution of non-steady and nonlinear thermal diffusion problems. Comput. Assist. Methods Eng. Sci. 2023, 3, 327–346. [Google Scholar]
  8. Yasin, M.W.; Iqbal, M.S.; Seadawy, A.R.; Baber, M.Z.; Younis, M.; Rizvi, S.T.R. Numerical scheme and analytical solutions to the stochastic nonlinear advection diffusion dynamical model. Int. J. Nonlinear Sci. Numer. Simul. 2023, 24, 467–487. [Google Scholar] [CrossRef]
  9. Shang, Y.; Wang, F.; Sun, J. Randomized neural network with Petrov–Galerkin methods for solving linear and nonlinear partial differential equations. Commun. Nonlinear Sci. Numer. Simul. 2023, 127, 107518. [Google Scholar] [CrossRef]
  10. Jiang, X.; Wang, J.; Wang, W.; Zhang, H. A predictor–corrector compact difference scheme for a nonlinear fractional differential equation. Fractal Fract. 2023, 7, 521. [Google Scholar] [CrossRef]
  11. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A collocation procedure for treating the time-fractional FitzHugh–Nagumo differential equation using shifted Lucas polynomials. Mathematics 2024, 12, 3672. [Google Scholar] [CrossRef]
  12. Sadaf, M.; Arshed, S.; Akram, G.; Ahmad, M.; Abualnaja, K.M. Solitary dynamics of the Caudrey–Dodd–Gibbon equation using unified method. Opt. Quantum Electron. 2025, 57, 21. [Google Scholar] [CrossRef]
  13. Oad, A.; Arshad, M.; Shoaib, M.; Lu, D.; Li, X. Novel soliton solutions of two-mode Sawada-Kotera equation and its applications. IEEE Access 2021, 9, 127368–127381. [Google Scholar] [CrossRef]
  14. Alharthi, M.S. Extracting solitary solutions of the nonlinear Kaup–Kupershmidt (KK) equation by analytical method. Open Phys. 2023, 21, 20230134. [Google Scholar] [CrossRef]
  15. Attia, R.A.M.; Xia, Y.; Zhang, X.; Khater, M.M.A. Analytical and numerical investigation of soliton wave solutions in the fifth-order KdV equation within the KdV-KP framework. Results Phys. 2023, 51, 106646. [Google Scholar] [CrossRef]
  16. Matsutani, S. On real hyperelliptic solutions of focusing modified KdV equation. Math. Phys. Anal. Geom. 2024, 27, 19. [Google Scholar] [CrossRef]
  17. D’Ambrosio, R.; Di Giovacchino, S. Numerical conservation issues for the stochastic Korteweg–de Vries equation. J. Comput. Appl. Math. 2023, 424, 114967. [Google Scholar] [CrossRef]
  18. Ahmed, H.M.; Hafez, R.M.; Abd-Elhameed, W.M. A computational strategy for nonlinear time-fractional generalized Kawahara equation using new eighth-kind Chebyshev operational matrices. Phys. Scr. 2024, 99, 045250. [Google Scholar] [CrossRef]
  19. Dwivedi, M.; Sarkar, T. Fully discrete finite difference schemes for the fractional Korteweg-de Vries equation. J. Sci. Comput. 2024, 101, 30. [Google Scholar] [CrossRef]
  20. Mishra, N.K.; AlBaidani, M.M.; Khan, A.; Ganie, A.H. Two novel computational techniques for solving nonlinear time-fractional Lax’s Korteweg-de Vries equation. Axioms 2023, 12, 400. [Google Scholar] [CrossRef]
  21. Ahmad, F.; Rehman, S.U.; Zara, A. A new approach for the numerical approximation of modified Korteweg–de Vries equation. Math. Comput. Simul. 2023, 203, 189–206. [Google Scholar] [CrossRef]
  22. Haq, S.; Arifeen, S.U.; Noreen, A. An efficient computational technique for higher order KdV equation arising in shallow water waves. Appl. Numer. Math. 2023, 189, 53–65. [Google Scholar] [CrossRef]
  23. Cao, H.; Cheng, X.; Zhang, Q. Numerical simulation methods and analysis for the dynamics of the time-fractional KdV equation. Phys. D 2024, 460, 134050. [Google Scholar] [CrossRef]
  24. Alshareef, A.; Bakodah, H.O. Non-central m-point formula in method of lines for solving the Korteweg-de Vries (KdV) equation. J. Umm Al-Qura Univ. Appl. Sci. 2024, 1–11. [Google Scholar] [CrossRef]
  25. Ahmed, H.M. Numerical solutions of Korteweg-de Vries and Korteweg-de Vries-Burger’s equations in a Bernstein polynomial basis. Mediterr. J. Math. 2019, 16, 102. [Google Scholar] [CrossRef]
  26. Nikiforov, A.F.; Uvarov, V.B. Special Functions of Mathematical Physics; Springer: Berlin/Heidelberg, Germany, 1988; Volume 205. [Google Scholar]
  27. Shen, J.; Tang, T.; Wang, L.L. Spectral Methods: Algorithms, Analysis and Applications; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2011; Volume 41. [Google Scholar]
  28. Abd-Elhameed, W.M.; Al-Sady, A.M.; Alqubori, O.M.; Atta, A.G. Numerical treatment of the fractional Rayleigh-Stokes problem using some orthogonal combinations of Chebyshev polynomials. AIMS Math. 2024, 9, 25457–25481. [Google Scholar] [CrossRef]
  29. Alharbi, M.H.; Abu Sunayh, A.F.; Atta, A.G.; Abd-Elhameed, W.M. Novel Approach by Shifted Fibonacci Polynomials for Solving the Fractional Burgers Equation. Fractal Fract. 2024, 8, 427. [Google Scholar] [CrossRef]
  30. Sharma, R.; Rajeev. An operational matrix approach to solve a 2D variable-order reaction advection diffusion equation with Vieta–Fibonacci polynomials. Spec. Top. Rev. Porous Media Int. J. 2023, 14, 79–96. [Google Scholar] [CrossRef]
  31. Sharma, R.; Rajeev. A numerical approach to solve 2D fractional RADE of variable-order with Vieta–Lucas polynomials. Chin. J. Phys. 2023, 86, 433–446. [Google Scholar] [CrossRef]
  32. Adel, M.; Khader, M.M.; Algelany, S. High-dimensional chaotic Lorenz system: Numerical treatment using Changhee polynomials of the Appell type. Fractal Fract. 2023, 7, 398. [Google Scholar] [CrossRef]
  33. El-Sayed, A.A.; Agarwal, P. Spectral treatment for the fractional-order wave equation using shifted Chebyshev orthogonal polynomials. J. Comput. Appl. Math. 2023, 424, 114933. [Google Scholar] [CrossRef]
  34. Djordjevic, S.S.; Djordjevic, G.B. Generalized Horadam polynomials and numbers. An. Ştiinţ. Univ. Ovidius Constanţa Ser. Mat. 2018, 26, 91–101. [Google Scholar] [CrossRef]
  35. Keskin, R.; Siar, Z. Some new identities concerning the Horadam sequence and its companion sequence. Commun. Korean Math. Soc. 2019, 34, 1–16. [Google Scholar]
  36. Srivastava, H.M.; Altınkaya, Ş.; Yalçın, S. Certain subclasses of bi-univalent functions associated with the Horadam polynomials. Iran. J. Sci. Technol. Trans. A Sci. 2019, 43, 1873–1879. [Google Scholar] [CrossRef]
  37. Bagdasar, O.D.; Larcombe, P.J. On the characterization of periodic generalized Horadam sequences. J. Differ. Equ. Appl. 2014, 20, 1069–1090. [Google Scholar] [CrossRef]
  38. Srividhya, G.; Rani, E.K. A new application of generalized k-Horadam sequence in coding theory. J. Algebr. Stat. 2022, 13, 93–98. [Google Scholar]
  39. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods in Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 1988. [Google Scholar]
  40. Hesthaven, J.S.; Gottlieb, S.; Gottlieb, D. Spectral Methods for Time-Dependent Problems; Cambridge University Press: Cambridge, UK, 2007; Volume 21. [Google Scholar]
  41. Alsuyuti, M.M.; Doha, E.H.; Ezz-Eldien, S.S. Galerkin operational approach for multi-dimensions fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 2022, 114, 106608. [Google Scholar] [CrossRef]
  42. Rezazadeh, A.; Darehmiraki, M. A fast Galerkin-spectral method based on discrete Legendre polynomials for solving parabolic differential equation. Comput. Appl. Math. 2024, 43, 315. [Google Scholar] [CrossRef]
  43. Abd-Elhameed, W.M.; Alsuyuti, M.M. Numerical treatment of multi-term fractional differential equations via new kind of generalized Chebyshev polynomials. Fractal Fract. 2023, 7, 74. [Google Scholar] [CrossRef]
  44. Atta, A.G.; Abd-Elhameed, W.M.; Moatimid, G.M.; Youssri, Y.H. Advanced shifted sixth-kind Chebyshev tau approach for solving linear one-dimensional hyperbolic telegraph type problem. Math. Sci. 2023, 17, 415–429. [Google Scholar] [CrossRef]
  45. Ahmed, H.F.; Hashem, W.A. Novel and accurate Gegenbauer spectral tau algorithms for distributed order nonlinear time-fractional telegraph models in multi-dimensions. Commun. Nonlinear Sci. Numer. Simul. 2023, 118, 107062. [Google Scholar] [CrossRef]
  46. Zaky, M.A. An improved tau method for the multi-dimensional fractional Rayleigh-Stokes problem for a heated generalized second grade fluid. Comput. Math. Appl. 2018, 75, 2243–2258. [Google Scholar] [CrossRef]
  47. Abd-Elhameed, W.M.; Youssri, Y.H.; Amin, A.K.; Atta, A.G. Eighth-kind Chebyshev polynomials collocation algorithm for the nonlinear time-fractional generalized Kawahara equation. Fractal Fract. 2023, 7, 652. [Google Scholar] [CrossRef]
  48. Amin, A.Z.; Lopes, A.M.; Hashim, I. A space-time spectral collocation method for solving the variable-order fractional Fokker-Planck equation. J. Appl. Anal. Comput. 2023, 13, 969–985. [Google Scholar] [CrossRef]
  49. Abdelkawy, M.A.; Lopes, A.M.; Babatin, M.M. Shifted fractional Jacobi collocation method for solving fractional functional differential equations of variable order. Chaos Solitons Fract. 2020, 134, 109721. [Google Scholar] [CrossRef]
  50. Sadri, K.; Hosseini, K.; Baleanu, D.; Salahshour, S.; Park, C. Designing a matrix collocation method for fractional delay integro-differential equations with weakly singular kernels based on Vieta–Fibonacci polynomials. Fractal Fract. 2021, 6, 2. [Google Scholar] [CrossRef]
  51. Horadam, A.F. Extension of a synthesis for a class of polynomial sequences. Fibonacci Q. 1996, 34, 68–74. [Google Scholar] [CrossRef]
  52. Koepf, W. Hypergeometric Summation, 2nd ed.; Springer Universitext Series; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  53. Saleem, S.; Hussain, M.Z. Numerical solution of nonlinear fifth-order KdV-type partial differential equations via Haar wavelet. Int. J. Appl. Comput. Math. 2020, 6, 164. [Google Scholar] [CrossRef]
  54. Bakodah, H.O. Modified Adomian decomposition method for the generalized fifth order KdV equations. Am. J. Comput. Math. 2013, 3, 53–58. [Google Scholar] [CrossRef]
  55. Luke, Y.L. Inequalities for generalized hypergeometric functions. J. Approx. Theory 1972, 5, 41–65. [Google Scholar] [CrossRef]
  56. Jameson, G.J.O. The incomplete gamma functions. Math. Gaz. 2016, 100, 298–306. [Google Scholar] [CrossRef]
  57. Fakhari, H.; Mohebbi, A. Galerkin spectral and finite difference methods for the solution of fourth-order time fractional partial integro-differential equation with a weakly singular kernel. J. Appl. Math. Comput. 2024, 70, 5063–5080. [Google Scholar] [CrossRef]
  58. Yassin, N.M.; Aly, E.H.; Atta, A.G. Novel approach by shifted Schröder polynomials for solving the fractional Bagley-Torvik equation. Phys. Scr. 2024, 100, 015242. [Google Scholar] [CrossRef]
Figure 1. The AEs and the approximate solution for Example 1.
Figure 2. Stability |η_{N+1} − η_N| for Example 1.
Figure 3. The absolute residual |R_N(x,t)| at t = 0.5 for Example 1.
Figure 4. The AEs and the approximate solution for Example 2.
Figure 5. The AEs and the approximate solution for Example 3.
Figure 6. The AEs and the approximate solution for Example 4.
Figure 7. The AEs and the approximate solution for Example 5.
Figure 8. Stability |η_{N+1} − η_N| for Example 5.
Figure 9. The absolute residual |R_N(x,t)| at x = t for Example 5.
Table 1. The L∞ error of Example 1.

| 2M̂ | Method in [53], Δt = 1/10 | Method in [53], Δt = 1/100 | Method in [53], Δt = 1/1000 | Our method at N = 12 |
|---|---|---|---|---|
| 2 | 1.0635 × 10⁻⁷ | 8.5821 × 10⁻⁸ | 1.0503 × 10⁻⁷ | 1.06572 × 10⁻¹⁵ |
| 4 | 2.8529 × 10⁻⁷ | 2.0122 × 10⁻⁸ | 6.3916 × 10⁻⁹ | |
| 8 | 2.9616 × 10⁻⁷ | 3.0049 × 10⁻⁸ | 3.4426 × 10⁻⁹ | |
| 16 | 3.0042 × 10⁻⁷ | 3.0238 × 10⁻⁸ | 3.2241 × 10⁻⁹ | |
| 32 | 3.0009 × 10⁻⁷ | 3.0057 × 10⁻⁸ | 3.0645 × 10⁻⁹ | |
| 64 | 3.0045 × 10⁻⁷ | 3.0055 × 10⁻⁸ | 3.0198 × 10⁻⁹ | |
Table 2. CPU time (in seconds) for the results of Table 1.

| 2M̂ | Method in [53], Δt = 1/10 | Method in [53], Δt = 1/100 | Method in [53], Δt = 1/1000 | Our method at N = 12 |
|---|---|---|---|---|
| 2 | 0.148614 | 0.117720 | 1.000305 | 1105.16 |
| 4 | 0.058584 | 0.117451 | 0.968710 | |
| 8 | 0.081574 | 0.41234 | 1.078842 | |
| 16 | 0.163095 | 0.182400 | 1.269367 | |
| 32 | 0.035235 | 0.190048 | 1.602233 | |
| 64 | 0.040135 | 0.232144 | 1.893176 | |
Table 3. The AEs of Example 1.

| x | t = 0.2 | t = 0.4 | t = 0.6 | t = 0.8 |
|---|---|---|---|---|
| 0.1 | 1.0842 × 10⁻¹⁹ | 2.71051 × 10⁻¹⁹ | 1.6263 × 10⁻¹⁹ | 9.21572 × 10⁻¹⁹ |
| 0.2 | 1.79209 × 10⁻²⁵ | 2.71051 × 10⁻¹⁹ | 2.71051 × 10⁻¹⁹ | 7.04731 × 10⁻¹⁹ |
| 0.3 | 2.71051 × 10⁻¹⁹ | 2.1684 × 10⁻¹⁹ | 4.33681 × 10⁻¹⁹ | 4.11997 × 10⁻¹⁸ |
| 0.4 | 5.96311 × 10⁻¹⁹ | 1.6263 × 10⁻¹⁹ | 7.58942 × 10⁻¹⁹ | 9.59519 × 10⁻¹⁸ |
| 0.5 | 1.0842 × 10⁻¹⁸ | 2.71051 × 10⁻¹⁹ | 1.13841 × 10⁻¹⁸ | 1.62088 × 10⁻¹⁷ |
| 0.6 | 1.51788 × 10⁻¹⁸ | 1.6263 × 10⁻¹⁹ | 1.46367 × 10⁻¹⁸ | 2.22261 × 10⁻¹⁷ |
| 0.7 | 1.68051 × 10⁻¹⁸ | 1.0842 × 10⁻¹⁹ | 1.57209 × 10⁻¹⁸ | 2.54788 × 10⁻¹⁷ |
| 0.8 | 1.51788 × 10⁻¹⁸ | 1.6263 × 10⁻¹⁹ | 1.40946 × 10⁻¹⁸ | 2.29309 × 10⁻¹⁷ |
| 0.9 | 9.21572 × 10⁻¹⁹ | 2.71051 × 10⁻¹⁹ | 6.50521 × 10⁻¹⁹ | 1.21431 × 10⁻¹⁷ |
Table 4. The maximum AEs and order of convergence for Example 1.

| N | Error | Order |
|---|---|---|
| 4 | 3.48827 × 10⁻⁷ | – |
| 5 | 5.97282 × 10⁻⁸ | 0.96358 |
| 6 | 1.05960 × 10⁻⁸ | 0.99163 |
| 7 | 8.55756 × 10⁻¹⁰ | 1.04696 |
| 8 | 9.73067 × 10⁻¹¹ | 1.03323 |
| 9 | 5.90808 × 10⁻¹² | 1.06141 |
| 10 | 4.43248 × 10⁻¹² | 0.96484 |
| 11 | 7.07445 × 10⁻¹² | 0.94307 |
| 12 | 2.18521 × 10⁻¹⁵ | 1.26877 |
Table 5. The L∞ error of Example 2.

| 2M̂ | Method in [53], Δt = 1/10 | Method in [53], Δt = 1/100 | Method in [53], Δt = 1/1000 | Our method at N = 12 |
|---|---|---|---|---|
| 2 | 1.1108 × 10⁻⁷ | 1.1618 × 10⁻⁷ | 1.1669 × 10⁻⁷ | 1.11577 × 10⁻¹⁴ |
| 4 | 1.6982 × 10⁻⁸ | 2.3948 × 10⁻⁸ | 2.4639 × 10⁻⁸ | |
| 8 | 2.6394 × 10⁻⁹ | 4.3489 × 10⁻⁹ | 5.0420 × 10⁻⁹ | |
| 16 | 6.6177 × 10⁻⁹ | 4.8615 × 10⁻¹⁰ | 1.1714 × 10⁻⁹ | |
| 32 | 7.5484 × 10⁻⁹ | 4.7326 × 10⁻¹⁰ | 2.3286 × 10⁻¹⁰ | |
| 64 | 7.7933 × 10⁻⁹ | 7.0392 × 10⁻¹⁰ | 5.9370 × 10⁻¹² | |
Table 6. CPU time (in seconds) for the results of Table 5.

| 2M̂ | Method in [53], Δt = 1/10 | Method in [53], Δt = 1/100 | Method in [53], Δt = 1/1000 | Our method at N = 12 |
|---|---|---|---|---|
| 2 | 0.086222 | 0.178449 | 1.073422 | 1111.03 |
| 4 | 0.090816 | 0.122136 | 1.106796 | |
| 8 | 0.038014 | 0.157665 | 1.095270 | |
| 16 | 0.038951 | 0.143982 | 1.266360 | |
| 32 | 0.041361 | 0.179976 | 1.616171 | |
| 64 | 0.047167 | 0.227436 | 1.924651 | |
Table 7. The AEs of Example 2.

| x | t = 0.1 | t = 0.3 | t = 0.6 | t = 0.9 |
|---|---|---|---|---|
| 0.1 | 2.77564 × 10⁻¹⁷ | 1.11023 × 10⁻¹⁶ | 1.11023 × 10⁻¹⁶ | 9.312 × 10⁻¹⁵ |
| 0.2 | 8.04913 × 10⁻¹⁶ | 8.46546 × 10⁻¹⁶ | 8.18791 × 10⁻¹⁶ | 8.28504 × 10⁻¹⁵ |
| 0.3 | 2.70617 × 10⁻¹⁵ | 2.47025 × 10⁻¹⁵ | 2.45637 × 10⁻¹⁵ | 5.41234 × 10⁻¹⁵ |
| 0.4 | 5.49561 × 10⁻¹⁵ | 4.96826 × 10⁻¹⁵ | 4.96826 × 10⁻¹⁵ | 1.06861 × 10⁻¹⁵ |
| 0.5 | 8.86792 × 10⁻¹⁵ | 7.93811 × 10⁻¹⁵ | 7.95199 × 10⁻¹⁵ | 4.28825 × 10⁻¹⁵ |
| 0.6 | 1.19488 × 10⁻¹⁴ | 1.06443 × 10⁻¹⁴ | 1.0672 × 10⁻¹⁴ | 9.68671 × 10⁻¹⁵ |
| 0.7 | 1.34476 × 10⁻¹⁴ | 1.1921 × 10⁻¹⁴ | 1.19904 × 10⁻¹⁴ | 1.37529 × 10⁻¹⁴ |
| 0.8 | 1.18655 × 10⁻¹⁴ | 1.05333 × 10⁻¹⁴ | 1.06026 × 10⁻¹⁴ | 1.47382 × 10⁻¹⁴ |
| 0.9 | 5.99521 × 10⁻¹⁵ | 5.32908 × 10⁻¹⁵ | 5.37071 × 10⁻¹⁵ | 1.11577 × 10⁻¹⁴ |
Table 8. The L∞ error of Example 3.

| 2M̂ | Method in [53], Δt = 1/10 | Method in [53], Δt = 1/100 | Method in [53], Δt = 1/1000 | Our method at N = 12 |
|---|---|---|---|---|
| 2 | 1.0354 × 10⁻¹⁰ | 1.0418 × 10⁻¹⁰ | 1.0424 × 10⁻¹⁰ | 5.20417 × 10⁻¹⁸ |
| 4 | 2.2802 × 10⁻¹¹ | 2.3648 × 10⁻¹¹ | 2.3732 × 10⁻¹¹ | |
| 8 | 3.1943 × 10⁻¹² | 4.0357 × 10⁻¹² | 4.1198 × 10⁻¹² | |
| 16 | 4.7979 × 10⁻¹⁴ | 8.2657 × 10⁻¹³ | 9.1186 × 10⁻¹³ | |
| 32 | 7.2377 × 10⁻¹³ | 1.2889 × 10⁻¹³ | 2.1392 × 10⁻¹³ | |
| 64 | 8.9292 × 10⁻¹³ | 3.9850 × 10⁻¹⁴ | 4.5942 × 10⁻¹⁴ | |
Table 9. The AEs of Example 3.

| x | t = 0.2 | t = 0.4 | t = 0.6 | t = 0.8 |
|---|---|---|---|---|
| 0.1 | 1.69407 × 10⁻²⁰ | 3.38813 × 10⁻²¹ | 1.35525 × 10⁻²⁰ | 6.77626 × 10⁻²¹ |
| 0.2 | 6.77626 × 10⁻²¹ | 6.77626 × 10⁻²¹ | 3.0091 × 10⁻²⁵ | 6.77626 × 10⁻²¹ |
| 0.3 | 6.77626 × 10⁻²¹ | 1.35525 × 10⁻²⁰ | 8.7983 × 10⁻²⁵ | 1.01644 × 10⁻²⁰ |
| 0.4 | 1.69407 × 10⁻²⁰ | 3.38813 × 10⁻²¹ | 1.35525 × 10⁻²⁰ | 6.77626 × 10⁻²¹ |
| 0.5 | 2.03288 × 10⁻²⁰ | 6.77626 × 10⁻²¹ | 3.38813 × 10⁻²¹ | 2.03288 × 10⁻²⁰ |
| 0.6 | 1.35525 × 10⁻²⁰ | 1.35525 × 10⁻²⁰ | 1.35525 × 10⁻²⁰ | 2.37169 × 10⁻²⁰ |
| 0.7 | 2.37169 × 10⁻²⁰ | 6.77626 × 10⁻²¹ | 1.01644 × 10⁻²⁰ | 1.01644 × 10⁻²⁰ |
| 0.8 | 1.69407 × 10⁻²⁰ | 3.7324 × 10⁻²⁴ | 1.69407 × 10⁻²⁰ | 1.69407 × 10⁻²⁰ |
| 0.9 | 2.71051 × 10⁻²⁰ | 6.77626 × 10⁻²¹ | 2.03288 × 10⁻²⁰ | 1.69407 × 10⁻²⁰ |
Table 10. The L∞ error of Example 4.

| 2M̂ | Method in [53], Δt = 1/1000 | Our method at N = 12 |
|---|---|---|
| 2 | 2.9045 × 10⁻¹⁰ | |
| 4 | 4.5383 × 10⁻¹⁰ | |
| 8 | 5.0035 × 10⁻¹⁰ | 3.90177 × 10⁻¹⁷ |
| 16 | 5.1233 × 10⁻¹⁰ | |
| 32 | 5.1535 × 10⁻¹⁰ | |
| 64 | 5.1611 × 10⁻¹⁰ | |
Table 11. The AEs of Example 4.

| x | t = 0.15 | t = 0.4 | t = 0.65 | t = 0.9 |
|---|---|---|---|---|
| 0.1 | 4.40951 × 10⁻²⁶ | 4.74338 × 10⁻²⁰ | 1.35525 × 10⁻²⁰ | 4.29412 × 10⁻¹⁷ |
| 0.2 | 1.35525 × 10⁻²⁰ | 4.74338 × 10⁻²⁰ | 3.38813 × 10⁻²⁰ | 4.23652 × 10⁻¹⁷ |
| 0.3 | 2.03288 × 10⁻²⁰ | 6.77626 × 10⁻²¹ | 3.38813 × 10⁻²⁰ | 3.99867 × 10⁻¹⁷ |
| 0.4 | 2.71051 × 10⁻²⁰ | 6.77626 × 10⁻²¹ | 3.38813 × 10⁻²⁰ | 3.6436 × 10⁻¹⁷ |
| 0.5 | 3.38813 × 10⁻²⁰ | 1.35525 × 10⁻²⁰ | 6.09864 × 10⁻²⁰ | 3.24109 × 10⁻¹⁷ |
| 0.6 | 4.74338 × 10⁻²⁰ | 1.35525 × 10⁻²⁰ | 6.77626 × 10⁻²⁰ | 2.89279 × 10⁻¹⁷ |
| 0.7 | 4.74338 × 10⁻²⁰ | 4.74338 × 10⁻²⁰ | 9.48677 × 10⁻²⁰ | 2.77081 × 10⁻¹⁷ |
| 0.8 | 2.71051 × 10⁻²⁰ | 2.71051 × 10⁻²⁰ | 7.45389 × 10⁻²⁰ | 3.07168 × 10⁻¹⁷ |
| 0.9 | 6.77626 × 10⁻²¹ | 1.35525 × 10⁻²⁰ | 4.06576 × 10⁻²⁰ | 3.90177 × 10⁻¹⁷ |
Table 12. The L∞ error of Example 5.

| 2M̂ | Method in [53], Δt = 1/10 | Method in [53], Δt = 1/100 | Method in [53], Δt = 1/1000 | Our method at N = 12 |
|---|---|---|---|---|
| 2 | 3.8776 × 10⁻⁶ | 1.1080 × 10⁻⁶ | 8.3693 × 10⁻⁷ | 1.55598 × 10⁻¹³ |
| 4 | 4.6387 × 10⁻⁶ | 7.1764 × 10⁻⁷ | 3.3455 × 10⁻⁷ | |
| 8 | 4.4590 × 10⁻⁶ | 5.1088 × 10⁻⁷ | 1.2552 × 10⁻⁷ | |
| 16 | 4.4709 × 10⁻⁶ | 4.5665 × 10⁻⁷ | 6.4702 × 10⁻⁸ | |
| 32 | 4.4544 × 10⁻⁶ | 4.4071 × 10⁻⁷ | 4.8748 × 10⁻⁸ | |
| 64 | 4.4562 × 10⁻⁶ | 4.3726 × 10⁻⁷ | 4.4826 × 10⁻⁸ | |
Table 13. The AEs of Example 5.

| x | t = 0.1 | t = 0.3 | t = 0.6 | t = 0.9 |
|---|---|---|---|---|
| 0.1 | 4.31486 × 10⁻¹⁹ | 5.55128 × 10⁻¹⁷ | 4.28135 × 10⁻¹⁹ | 4.87388 × 10⁻¹⁴ |
| 0.2 | 7.21651 × 10⁻¹⁶ | 4.9961 × 10⁻¹⁶ | 3.8859 × 10⁻¹⁶ | 4.82947 × 10⁻¹⁵ |
| 0.3 | 1.99842 × 10⁻¹⁵ | 1.60985 × 10⁻¹⁵ | 1.16577 × 10⁻¹⁵ | 9.17599 × 10⁻¹⁴ |
| 0.4 | 4.10786 × 10⁻¹⁵ | 3.38623 × 10⁻¹⁵ | 2.38705 × 10⁻¹⁵ | 2.37477 × 10⁻¹³ |
| 0.5 | 6.71691 × 10⁻¹⁵ | 5.49568 × 10⁻¹⁵ | 3.83037 × 10⁻¹⁵ | 4.07507 × 10⁻¹³ |
| 0.6 | 9.10391 × 10⁻¹⁵ | 7.38308 × 10⁻¹⁵ | 5.16267 × 10⁻¹⁵ | 5.5611 × 10⁻¹³ |
| 0.7 | 1.03252 × 10⁻¹⁴ | 8.21576 × 10⁻¹⁵ | 5.77331 × 10⁻¹⁵ | 6.16396 × 10⁻¹³ |
| 0.8 | 9.43697 × 10⁻¹⁵ | 7.27206 × 10⁻¹⁵ | 5.10716 × 10⁻¹⁵ | 5.04319 × 10⁻¹³ |
| 0.9 | 5.55115 × 10⁻¹⁵ | 3.83032 × 10⁻¹⁵ | 2.72011 × 10⁻¹⁵ | 1.55598 × 10⁻¹³ |
Table 14. The maximum AEs and order of convergence for Example 5.

| N | Error | Order |
|---|---|---|
| 4 | 4.81759 × 10⁻⁸ | – |
| 5 | 8.14415 × 10⁻⁸ | 0.834512 |
| 6 | 6.08548 × 10⁻¹⁰ | 1.16769 |
| 7 | 4.26518 × 10⁻¹⁰ | 0.936205 |
| 8 | 3.37397 × 10⁻¹² | 1.14569 |
| 9 | 1.31867 × 10⁻¹² | 0.980054 |
| 10 | 3.48999 × 10⁻¹³ | 1.00061 |
Table 15. The RE of Example 6.

| x | t = 0.1 | t = 0.3 | t = 0.5 | t = 0.8 |
|---|---|---|---|---|
| 0.1 | 1.32384 × 10⁻²⁰ | 2.52242 × 10⁻²¹ | 1.0856 × 10⁻²¹ | 1.54641 × 10⁻²⁰ |
| 0.2 | 1.32389 × 10⁻²⁰ | 2.52131 × 10⁻²¹ | 1.08586 × 10⁻²¹ | 1.54682 × 10⁻²⁰ |
| 0.3 | 1.32394 × 10⁻²⁰ | 2.51944 × 10⁻²¹ | 1.08649 × 10⁻²¹ | 1.54706 × 10⁻²⁰ |
| 0.4 | 1.32404 × 10⁻²⁰ | 2.51749 × 10⁻²¹ | 1.0876 × 10⁻²¹ | 1.54651 × 10⁻²⁰ |
| 0.5 | 1.32421 × 10⁻²⁰ | 2.51648 × 10⁻²¹ | 1.08921 × 10⁻²¹ | 1.54441 × 10⁻²⁰ |
| 0.6 | 1.32444 × 10⁻²⁰ | 2.51736 × 10⁻²¹ | 1.09124 × 10⁻²¹ | 1.54018 × 10⁻²⁰ |
| 0.7 | 1.3247 × 10⁻²⁰ | 2.52061 × 10⁻²¹ | 1.09352 × 10⁻²¹ | 1.53379 × 10⁻²⁰ |
| 0.8 | 1.32496 × 10⁻²⁰ | 2.52588 × 10⁻²¹ | 1.09575 × 10⁻²¹ | 1.52601 × 10⁻²⁰ |
| 0.9 | 1.32516 × 10⁻²⁰ | 2.53158 × 10⁻²¹ | 1.09749 × 10⁻²¹ | 1.51877 × 10⁻²⁰ |
