Article

Numerical Solutions for Nonlinear Ordinary and Fractional Duffing Equations Using Combined Fibonacci–Lucas Polynomials

by Waleed Mohamed Abd-Elhameed 1,*, Omar Mazen Alqubori 2, Amr Kamel Amin 3 and Ahmed Gamal Atta 4
1 Department of Mathematics, Faculty of Science, Cairo University, Giza 12613, Egypt
2 Department of Mathematics and Statistics, College of Science, University of Jeddah, Jeddah 23831, Saudi Arabia
3 Department of Mathematics, Adham University College, Umm Al-Qura University, Makkah 28653, Saudi Arabia
4 Department of Mathematics, Faculty of Education, Ain Shams University, Roxy, Cairo 11341, Egypt
* Author to whom correspondence should be addressed.
Axioms 2025, 14(4), 314; https://doi.org/10.3390/axioms14040314
Submission received: 31 January 2025 / Revised: 11 April 2025 / Accepted: 16 April 2025 / Published: 19 April 2025

Abstract

Two classes of nonlinear Duffing equations are treated numerically in this article: the second-order nonlinear Duffing equation and the nonlinear fractional-order Duffing equation. Based on the collocation technique, we provide two numerical algorithms. To achieve this goal, a new family of basis functions is built by combining the sets of Fibonacci and Lucas polynomials. Several new formulae for these polynomials are developed. The operational matrices of integer and fractional derivatives of these polynomials, together with some new theoretical results, are presented and used in conjunction with the collocation method to convert the nonlinear Duffing equations into algebraic systems of equations by forcing the equation to hold at certain collocation points. The resulting nonlinear systems can be handled numerically with symbolic algebra solvers or Newton's method. Some particular inequalities are proved to establish the convergence analysis. Several numerical examples show that our suggested strategy is effective and accurate, and they demonstrate that the proposed collocation approach yields accurate solutions when the Fibonacci–Lucas polynomials are used as basis functions.

1. Introduction

For a wide variety of applications, special functions are essential. One can consult [1,2,3] for specific findings and uses of special functions. The Fibonacci and Lucas polynomials are important special functions [4]. A number of authors were keen on presenting and exploring various extensions and variations of these polynomials.
Research on generalized Lucas polynomials and their connections to Fibonacci and Lucas polynomials was covered in [5], whereas [6] focused on Gauss–Fibonacci and Gauss–Lucas polynomials and their uses. Additional contributions on these polynomials and their uses may be found in [7,8,9,10].
Finding different formulae for special functions is of interest to a number of mathematicians. Such formulae can greatly benefit the development of numerical approaches for solving different types of differential equations (DEs). Many approaches for solving DEs using spectral methods require expressing the derivatives of special polynomials as combinations of the original polynomials; this approach has been used in previous works (see [11,12]). Furthermore, for any orthogonal or nonorthogonal polynomials, it is an important goal to construct the operational matrices for the derivatives of these polynomials. These operational matrices aid in transforming the DEs, via an appropriate spectral approach, into an algebraic system of equations that can be solved using suitable linear algebra procedures; see [13,14,15,16,17].
Nonlinear DEs are indispensable in many scientific fields, including physics, biology, chemistry, economics, and engineering. These equations model systems that are too complex to be captured by linear equations. The study of nonlinear DEs, which are utilized to depict phenomena such as chaos, turbulence, and nonlinear waves, provides valuable insights that find widespread application in fields ranging from climate modeling and fluid dynamics to telecommunications (e.g., [18,19]).
Fractional differential equations (FDEs) are fundamental in a number of areas of the applied sciences. They can capture phenomena that standard DEs fail to describe, owing to their exceptional ability to model hereditary and memory effects. For instance, as mentioned in [20], they mimic a variety of physiological and biological processes, such as neuronal activity and tumor formation. Since these equations can rarely be solved analytically, numerical analysis typically comes into play when solving them. For instance, as demonstrated in [21], a collocation approach might be useful when dealing with several equations. Some of these approaches are referenced in [22,23,24].
Among the important nonlinear DEs are the different Duffing equations introduced by the engineer Georg Duffing in 1918. They arise in physics and engineering to model a variety of physical phenomena. Examples of these uses include the description of a system's chaotic behavior [25] and the control of a chaotic system's motion around less complicated attractors by the injection of modest damping signals [26]. Many authors have been interested in handling the different Duffing equations. For example, the authors of [27] found analytical solutions for some Duffing equations. The author in [28] applied the Pell–Lucas approach to treating the Duffing equation. A machine learning approach was used in [29]. A certain Runge–Kutta approach was utilized in [30]. A general solution of the Duffing equation with third-order nonlinearity was proposed in [31]. In [32], the authors used an adapted block hybrid method to handle Duffing equations. For many further contributions on diverse forms of Duffing equations, one may refer to [33,34,35,36,37,38].
Spectral methods have become widely recognized as a significant class of numerical techniques for addressing various problems in different disciplines. These methods have many advantages compared with other numerical methods (see [39]). The approximate solutions obtained by these methods are highly accurate and exhibit exponential convergence rates. In addition, unlike the finite element or finite difference methods, they provide global rather than local solutions. These methods are adaptable to treating different types of differential equations; the suitable method can be chosen according to the type of differential equation and the type of underlying conditions. There are three main spectral methods, each with its own advantages and uses. The Galerkin and Petrov–Galerkin methods can be applied successfully to linear problems and some specific nonlinear problems; see, for example, [40,41,42]. The tau method has a wider range of applications than the Galerkin method due to its ability to handle more complex boundary conditions; see, for example, [43,44,45]. The collocation method is advantageous, since it can handle any type of differential equation governed by any type of underlying conditions; see, for example, [46,47,48,49,50].
In this paper, we propose two numerical algorithms to solve the second-order and fractional-order Duffing equations. The spectral collocation algorithm is applied for this purpose. Moreover, a new set of basis functions that generalizes the Fibonacci and Lucas bases is introduced and employed. To our knowledge, the basis functions introduced for the derivation of our algorithms are new. In what follows, we summarize the main points, including the novelty of our contribution in this paper:
  • Introducing a new type of generalized Fibonacci and Lucas polynomials.
  • Establishing some theoretical results concerning these polynomials that will be the backbone of our numerical results.
  • Designing a numerical algorithm for treating the nonlinear second-order Duffing equation.
  • Designing a numerical algorithm for treating the nonlinear fractional Duffing equation.
  • Discussing the error analysis of the proposed method.
  • Testing our algorithms numerically by presenting some numerical examples with some comparisons.
The advantages of our proposed technique can be summarized as follows:
  • By choosing combined Fibonacci–Lucas polynomials as basis functions, a few retained modes produce highly accurate approximations.
  • The approach requires fewer computations to achieve the desired precision.
  • We can obtain several approximate solutions based on the presence of two free parameters, a and b.
  • Our technique can treat both linear and non-linear equations.
Here is the outline of the paper: An overview of Fibonacci and Lucas polynomials is given in Section 2. In addition, a combined Fibonacci–Lucas class of polynomials is presented in this section. The nonlinear second-order Duffing problem is solved using a matrix collocation method in Section 3. In Section 4, the fractional-order Duffing equation is addressed using a collocation method. Section 5 discusses the expansion’s convergence and truncation error bound. Section 6 provides a few comparisons and examples. Some last thoughts are presented in Section 7.

2. Introducing a Unified Sequence of Fibonacci and Lucas Polynomials

This section discusses some essential features of Fibonacci and Lucas polynomials and introduces a unified sequence of Fibonacci and Lucas sequences.

2.1. Fibonacci and Lucas Polynomial Sequences

The Fibonacci and Lucas polynomial sequences can be generated using, respectively, the following two recursive formulas:
$$F_i(t) - t\,F_{i-1}(t) - F_{i-2}(t) = 0, \qquad F_0(t) = 0,\ F_1(t) = 1,$$
$$L_i(t) - t\,L_{i-1}(t) - L_{i-2}(t) = 0, \qquad L_0(t) = 2,\ L_1(t) = t.$$
The generating function for $F_i(t)$ is given by
$$G(t,z) = \frac{z}{1 - z^2 - zt} = \sum_{i=0}^{\infty} F_i(t)\,z^i,$$
while the generating function for $L_i(t)$ is given by
$$\bar{G}(t,z) = \frac{1+z^2}{1 - z^2 - zt} = \sum_{i=0}^{\infty} L_i(t)\,z^i.$$
For an approach to constructing generating functions, one can refer to [51].
From (1) and (2), it is evident that both Fibonacci and Lucas polynomials satisfy the same recursive formula but with different initial values; thus, it is clear that the following recurrence relation
$$\phi_i(t) - t\,\phi_{i-1}(t) - \phi_{i-2}(t) = 0, \qquad \phi_0(t) = a,\ \phi_1(t) = b\,t,$$
generalizes the two sequences in (1) and (2), and we will denote $\phi_i(t) = FL_i^{a,b}(t)$, that is,
$$FL_i^{a,b}(t) - t\,FL_{i-1}^{a,b}(t) - FL_{i-2}^{a,b}(t) = 0, \qquad FL_0^{a,b}(t) = a,\ FL_1^{a,b}(t) = b\,t.$$
It is also clear that
$$F_{i+1}(t) = FL_i^{1,1}(t), \qquad L_i(t) = FL_i^{2,1}(t).$$
We also have the following expressions:
$$F_{m+1}(t) = \sum_{s=0}^{\lfloor \frac{m}{2}\rfloor} \binom{m-s}{s}\, t^{m-2s},$$
$$L_m(t) = m \sum_{s=0}^{\lfloor \frac{m}{2}\rfloor} \frac{\binom{m-s}{s}}{m-s}\, t^{m-2s},$$
and their inverse expressions:
$$t^m = \sum_{s=0}^{\lfloor \frac{m}{2}\rfloor} \frac{(-1)^s\,(m-2s+1)\,(m-s+2)_{s-1}}{s!}\, F_{m-2s+1}(t),$$
$$t^m = \sum_{s=0}^{\lfloor \frac{m}{2}\rfloor} c_{m-2s}\, \frac{(-1)^s\,(m-s+1)_{s}}{s!}\, L_{m-2s}(t),$$
where
$$c_m = \begin{cases} \tfrac{1}{2}, & m = 0, \\ 1, & m \ge 1. \end{cases}$$
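As a quick practical illustration, the following minimal Python/sympy sketch (our own, not part of the paper) generates $F_i(t)$ and $L_i(t)$ from the recurrences (1) and (2) and spot-checks the power-form expansion of $F_{m+1}(t)$ given above.

```python
# Minimal sympy sketch (ours): generate Fibonacci and Lucas polynomials from
# their recurrences and check the power-form expansion of F_{m+1}(t).
import sympy as sp

t = sp.symbols('t')

def fibonacci_poly(n):
    # F_0 = 0, F_1 = 1, F_i = t*F_{i-1} + F_{i-2}
    F = [sp.Integer(0), sp.Integer(1)]
    for i in range(2, n + 1):
        F.append(sp.expand(t * F[i - 1] + F[i - 2]))
    return F[n]

def lucas_poly(n):
    # L_0 = 2, L_1 = t, L_i = t*L_{i-1} + L_{i-2}
    L = [sp.Integer(2), t]
    for i in range(2, n + 1):
        L.append(sp.expand(t * L[i - 1] + L[i - 2]))
    return L[n]

def fibonacci_power_form(m):
    # F_{m+1}(t) = sum_{s=0}^{floor(m/2)} binom(m-s, s) * t^(m-2s)
    return sp.expand(sum(sp.binomial(m - s, s) * t**(m - 2 * s)
                         for s in range(m // 2 + 1)))

for m in range(8):
    assert sp.expand(fibonacci_poly(m + 1) - fibonacci_power_form(m)) == 0
print(fibonacci_poly(6), '|', lucas_poly(6))
```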
Remark 1. 
The key idea to develop the formulas concerned with the polynomials $FL_i^{a,b}(t)$ that satisfy (3) is the following theorem, in which we will show that $FL_i^{a,b}(t)$ may be expressed as a combination of two Fibonacci polynomials.
Theorem 1. 
Consider any non-negative integer j. The polynomials $FL_j^{a,b}(t)$ can be represented as
$$FL_j^{a,b}(t) = b\,F_{j+1}(t) + (a-b)\,F_{j-1}(t).$$
Proof. 
Consider the following polynomial:
$$\xi_j(t) = b\,F_{j+1}(t) + (a-b)\,F_{j-1}(t).$$
It is clear that $\xi_0(t) = FL_0^{a,b}(t) = a$ and $\xi_1(t) = FL_1^{a,b}(t) = b\,t$; hence, it is sufficient to prove that $\xi_j(t)$ satisfies the same recurrence relation as $FL_j^{a,b}(t)$ for $j \ge 2$, that is, we are going to prove that
$$\xi_{j+2}(t) - t\,\xi_{j+1}(t) - \xi_j(t) = 0.$$
Using the recursive formula of the Fibonacci polynomials (1) in the form
$$t\,F_j(t) = F_{j+1}(t) - F_{j-1}(t),$$
along with the definition in (10), it can be shown that
$$\xi_{j+2}(t) - t\,\xi_{j+1}(t) - \xi_j(t) = 0.$$
This proves the theorem. □
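Theorem 1 is easy to confirm symbolically. The following sketch (ours) builds $FL_j^{a,b}(t)$ from the recurrence (3) and verifies the combination formula for $j \ge 1$; for $j = 0$ the identity relies on the standard extension $F_{-1}(t) = 1$.

```python
# Symbolic check (ours) of Theorem 1: FL_j^{a,b} = b*F_{j+1} + (a-b)*F_{j-1}.
import sympy as sp

t, a, b = sp.symbols('t a b')

def fib_list(n):
    # Fibonacci polynomials F_0,...,F_{n+1}
    F = [sp.Integer(0), sp.Integer(1)]
    for i in range(2, n + 2):
        F.append(sp.expand(t * F[i - 1] + F[i - 2]))
    return F

def FL_list(n):
    # Recurrence (3): FL_0 = a, FL_1 = b*t, FL_i = t*FL_{i-1} + FL_{i-2}
    P = [a, b * t]
    for i in range(2, n + 1):
        P.append(sp.expand(t * P[i - 1] + P[i - 2]))
    return P

N = 10
F, FL = fib_list(N + 1), FL_list(N)
for j in range(1, N + 1):
    lhs = FL[j]
    rhs = sp.expand(b * F[j + 1] + (a - b) * F[j - 1])
    assert sp.expand(lhs - rhs) == 0
print("Theorem 1 verified symbolically for j = 1,...,", N)
```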
The inverse connection formula of (9) is also interesting. The following theorem exhibits this result.
Theorem 2. 
The Fibonacci polynomials are linked to the Fibonacci–Lucas polynomials by the following two formulas:
$$F_{2i+1}(t) = \frac{(b-a)^i}{b^i} + \sum_{r=0}^{i-1} \frac{(b-a)^r}{b^{r+1}}\, FL_{2i-2r}^{a,b}(t),$$
$$F_{2i+2}(t) = \sum_{r=0}^{i} \frac{(b-a)^r}{b^{r+1}}\, FL_{2i-2r+1}^{a,b}(t).$$
Proof. 
The proofs of (11) and (12) are similar, so we prove only (12); that is, we show that the right-hand side of (12) equals its left-hand side. Let
$$\eta_i(t) = \sum_{r=0}^{i} \frac{(b-a)^r}{b^{r+1}}\, FL_{2i-2r+1}^{a,b}(t),$$
and we will prove that $\eta_i(t) = F_{2i+2}(t)$.
Now, making use of Formula (9), we can write
$$\eta_i(t) = \sum_{r=0}^{i} \frac{(b-a)^r}{b^{r+1}} \Bigl( b\,F_{2i-2r+2}(t) + (a-b)\,F_{2i-2r}(t) \Bigr),$$
and therefore, we obtain
$$\eta_i(t) = \sum_{r=-1}^{i-1} \frac{(b-a)^{r+1}}{b^{r+1}}\, F_{2i-2r}(t) + \sum_{r=0}^{i} \frac{(-1)^r\,(a-b)^{r+1}}{b^{r+1}}\, F_{2i-2r}(t),$$
which is equivalent to
$$\eta_i(t) = F_{2i+2}(t) + \sum_{r=0}^{i-1} \frac{(b-a)^{r+1}}{b^{r+1}}\, F_{2i-2r}(t) + \sum_{r=0}^{i} \frac{(-1)^r\,(a-b)^{r+1}}{b^{r+1}}\, F_{2i-2r}(t).$$
Noting that $F_0(t) = 0$, it is easy to show that the sum of the last two sums in (13) is zero, and accordingly,
$$\eta_i(t) = F_{2i+2}(t).$$
This proves (12). □
Remark 2. 
The two connection Formulas (11) and (12) can be merged to give
$$F_{i+1}(t) = \sum_{r=0}^{\lfloor \frac{i}{2}\rfloor} \epsilon_{i-2r}\, \frac{(b-a)^r}{b^{r+1}}\, FL_{i-2r}^{a,b}(t),$$
where
$$\epsilon_k = \begin{cases} \dfrac{b}{a}, & k = 0, \\[4pt] 1, & \text{otherwise}. \end{cases}$$
Theorem 3. 
Consider a positive integer k. The power form representation of $FL_k^{a,b}(t)$ is
$$FL_k^{a,b}(t) = \sum_{m=0}^{\lfloor \frac{k}{2}\rfloor} \frac{(k-2m+1)_{m-1}\,\bigl((k-2m)\,b + m\,a\bigr)}{m!}\, t^{k-2m}.$$
Proof. 
In virtue of the combination (9), together with (5), we obtain the following expression:
$$FL_k^{a,b}(t) = b \sum_{m=0}^{\lfloor \frac{k}{2}\rfloor} \binom{k-m}{m}\, t^{k-2m} + (a-b) \sum_{m=0}^{\lfloor \frac{k}{2}\rfloor - 1} \binom{k-m-2}{m}\, t^{k-2m-2},$$
which can be written as
$$FL_k^{a,b}(t) = b \sum_{m=0}^{\lfloor \frac{k}{2}\rfloor} \binom{k-m}{m}\, t^{k-2m} + (a-b) \sum_{m=1}^{\lfloor \frac{k}{2}\rfloor} \binom{k-m-1}{m-1}\, t^{k-2m}.$$
The last formula is equivalent to
$$FL_k^{a,b}(t) = \sum_{m=0}^{\lfloor \frac{k}{2}\rfloor} \left[ b\,\binom{k-m}{m} + (a-b)\,\binom{k-m-1}{m-1} \right] t^{k-2m}.$$
Noting the identity
$$b\,\binom{k-m}{m} + (a-b)\,\binom{k-m-1}{m-1} = \frac{(k-2m+1)_{m-1}\,\bigl((k-2m)\,b + m\,a\bigr)}{m!},$$
the expression in (15) can be obtained. □
Remark 3. 
It is useful to express Formula (15) in the following alternative form:
$$FL_k^{a,b}(t) = \sum_{m=0}^{k} \frac{\left( \frac{a\,(k-m)}{2} + b\,m \right) \delta_{k+m}\, (m+1)_{\frac{k-m}{2}-1}}{\left( \frac{k-m}{2} \right)!}\, t^m,$$
where
$$\delta_r = \begin{cases} 1, & r \text{ even}, \\ 0, & \text{otherwise}. \end{cases}$$
Lemma 1. 
The inversion formula of $FL_i^{a,b}(t)$ is
$$t^m = \sum_{r=0}^{\lfloor \frac{m}{2}\rfloor} \epsilon_{m-2r}\, S_{r,m}\, FL_{m-2r}^{a,b}(t),$$
where
$$S_{r,m} = \sum_{j=0}^{r} \frac{(-1)^r\, a^{\,r-j}\,(m-r-j)\,(m-r+1)_{j-1}}{j!\; b^{\,r+1-j}}.$$
Proof. 
Starting from the inversion formula of the Fibonacci polynomials, one may write
$$t^m = \sum_{r=0}^{\lfloor \frac{m}{2}\rfloor} \frac{(-1)^r\,(m-2r+1)\,(m-r+2)_{r-1}}{r!}\, F_{m-2r+1}(t).$$
Inserting the connection formula (14) into (18) yields
$$t^m = \sum_{r=0}^{\lfloor \frac{m}{2}\rfloor} \frac{(-1)^r\,(m-2r+1)\,(m-r+2)_{r-1}}{r!} \sum_{k=0}^{\lfloor \frac{m-2r}{2}\rfloor} \epsilon_{m-2r-2k}\, \frac{(b-a)^k}{b^{k+1}}\, FL_{m-2r-2k}^{a,b}(t).$$
After some algebraic computations, the last formula leads to Formula (17). □
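The inversion formula of Lemma 1 can likewise be checked symbolically. The sketch below (ours) implements $\epsilon_k$, $S_{r,m}$, and the recurrence (3), and confirms that the right-hand side of (17) reproduces $t^m$ for m = 1, ..., 8.

```python
# Symbolic sanity check (ours) of the inversion formula (17) of Lemma 1.
import sympy as sp

t, a, b = sp.symbols('t a b')

def FL_list(n):
    # Recurrence (3): FL_0 = a, FL_1 = b*t, FL_i = t*FL_{i-1} + FL_{i-2}
    P = [a, b * t]
    for i in range(2, n + 1):
        P.append(sp.expand(t * P[i - 1] + P[i - 2]))
    return P

def eps(k):
    # epsilon_k of Remark 2
    return b / a if k == 0 else sp.Integer(1)

def S(r, m):
    # S_{r,m} = sum_{j=0}^r (-1)^r a^(r-j) (m-r-j) (m-r+1)_{j-1} / (j! b^(r+1-j))
    total = sp.Integer(0)
    for j in range(r + 1):
        total += ((-1)**r * a**(r - j) * (m - r - j)
                  * sp.rf(m - r + 1, j - 1) / (sp.factorial(j) * b**(r + 1 - j)))
    return total

FL = FL_list(12)
for m in range(1, 9):
    rhs = sum(eps(m - 2 * r) * S(r, m) * FL[m - 2 * r] for r in range(m // 2 + 1))
    assert sp.simplify(sp.expand(rhs - t**m)) == 0
print("Inversion formula (17) verified for m = 1,...,8")
```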

2.2. Derivatives and Operational Matrices of the Fibonacci–Lucas Polynomials

In this part, we will develop the high-order derivatives of the Fibonacci–Lucas polynomials and, after that, establish their operational matrices of integer derivatives, which will be pivotal in designing our numerical algorithm.
Theorem 4. 
Consider two positive integers q and j with $j \ge q$. The qth derivative of $FL_j^{a,b}(t)$ takes the form
$$\frac{d^q FL_j^{a,b}(t)}{dt^q} = \sum_{p=0}^{j-q} \chi_{j,p}^{\,q}\, FL_p^{a,b}(t),$$
where
$$\chi_{j,p}^{\,q} = \delta_{j+p-q} \sum_{r=0}^{\lfloor \frac{j-p-q}{2}\rfloor} \frac{(j-r-1)!\,\bigl(a\,r + b\,(j-2r)\bigr)\, G_{\frac{j-p-q}{2}-r,\; j-q-2r}}{r!\,(j-q-2r)!},$$
and
$$G_{m,r} = \begin{cases} \dfrac{1}{a}, & m = 0, \\[6pt] \epsilon_{r-2m} \displaystyle\sum_{j=0}^{m} \frac{(-1)^{m+1}\, a^{\,m-j}\, b^{\,j-m-1}\,(j+m-r)\,(r-m+1)_{j-1}}{j!}, & m > 0, \end{cases}$$
and $\delta_k$ is defined as in (16).
Proof. 
The analytic formula in (15) yields the following formula:
$$\frac{d^q FL_j^{a,b}(t)}{dt^q} = \sum_{r=0}^{\lfloor \frac{j-q}{2}\rfloor} \frac{\bigl(b\,(j-2r) + a\,r\bigr)\,(j-r-1)!}{r!\,(j-q-2r)!}\, t^{\,j-2r-q}.$$
By virtue of (17), the last formula turns into
$$\frac{d^q FL_j^{a,b}(t)}{dt^q} = \sum_{r=0}^{\lfloor \frac{j-q}{2}\rfloor} \frac{\bigl(b\,(j-2r) + a\,r\bigr)\,(j-r-1)!}{r!\,(j-q-2r)!} \sum_{s=0}^{\lfloor \frac{j-q}{2}\rfloor - r} G_{s,\; j-2r-q}\, FL_{j-2r-q-2s}^{a,b}(t),$$
where $G_{m,r}$ is as defined in the statement of the theorem, and hence, some computations lead to
$$\frac{d^q FL_j^{a,b}(t)}{dt^q} = \sum_{p=0}^{\lfloor \frac{j-q}{2}\rfloor} \sum_{r=0}^{p} \frac{\bigl(b\,(j-2r) + a\,r\bigr)\,(j-r-1)!\; G_{p-r,\; j-2r-q}}{r!\,(j-q-2r)!}\, FL_{j-q-2p}^{a,b}(t).$$
It is easy to see that (20) is equivalent to (19). □
Now, if we consider the vector $\mathbf{U}(t)$ defined as
$$\mathbf{U}(t) = \bigl[ FL_0^{a,b}(t),\, FL_1^{a,b}(t),\, \ldots,\, FL_N^{a,b}(t) \bigr]^{T},$$
then based on Formula (19), we can write the following general derivative expression:
$$\frac{d^q\,\mathbf{U}(t)}{dt^q} = \mathbf{M}^{q}\,\mathbf{U}(t),$$
where $\mathbf{M}^{q} = (M_{i,p,q})$ is the general operational matrix of derivatives, of order $(N+1) \times (N+1)$, whose elements can be written in the following form:
$$M_{i,p,q} = \begin{cases} \chi_{i,p}^{\,q}, & i \ge p + q, \\ 0, & \text{otherwise}. \end{cases}$$
For our subsequent purposes, it is necessary to compute the two operational matrices of derivatives for the two cases corresponding to q = 1 and q = 2 . The following corollary presents these results.
Corollary 1. 
For $q = 1, 2$, Formula (22) gives, respectively, the following two derivative expressions:
$$\frac{d\,\mathbf{U}(t)}{dt} = \mathbf{H}\,\mathbf{U}(t),$$
$$\frac{d^2\,\mathbf{U}(t)}{dt^2} = \mathbf{F}\,\mathbf{U}(t),$$
where $\mathbf{H} = (H_{i,p})$ and $\mathbf{F} = (F_{i,p})$ are operational matrices of derivatives of order $(N+1) \times (N+1)$ whose elements can be written in the following form:
$$H_{i,p} = \begin{cases} \chi_{i,p}^{1}, & i > p, \\ 0, & \text{otherwise}, \end{cases} \qquad F_{i,p} = \begin{cases} \chi_{i,p}^{2}, & i > p + 1, \\ 0, & \text{otherwise}. \end{cases}$$
As an example, for $N = 5$, $\mathbf{H}$ and $\mathbf{F}$ take the following forms:
$$\mathbf{H} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{b}{a} & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 & 0 \\ \frac{b}{a} - 2 & 0 & 3 & 0 & 0 & 0 \\ 0 & -\frac{2a}{b} & 0 & 4 & 0 & 0 \\ \frac{2a}{b} - 2 + \frac{b}{a} & 0 & -\frac{2a}{b} - 1 & 0 & 5 & 0 \end{pmatrix},$$
$$\mathbf{F} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{2b}{a} & 0 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ \frac{4b}{a} - 10 & 0 & 12 & 0 & 0 & 0 \\ 0 & -\frac{2(7a+b)}{b} & 0 & 20 & 0 & 0 \end{pmatrix}.$$
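The operational matrices can also be generated mechanically. The following sketch (ours, not the authors' code) builds $\mathbf{H}$ for N = 5 by expanding $\frac{d}{dt}FL_i^{a,b}(t)$ in the $FL$ basis through monomial-coefficient matching; its output should reproduce the matrix $\mathbf{H}$ displayed above, and replacing the first derivative by the second yields $\mathbf{F}$.

```python
# Sketch (ours): build the first-order operational matrix H for N = 5 by
# expanding d/dt FL_i^{a,b}(t) in the FL basis via coefficient matching.
import sympy as sp

t, a, b = sp.symbols('t a b')
N = 5

P = [a, b * t]
for i in range(2, N + 1):
    P.append(sp.expand(t * P[i - 1] + P[i - 2]))

# A[i, j] = coefficient of t^i in FL_j, so column j holds the monomial
# coefficients of FL_j; A is invertible since deg FL_j = j.
A = sp.Matrix([[sp.expand(P[j]).coeff(t, i) for j in range(N + 1)]
               for i in range(N + 1)])

H = sp.zeros(N + 1, N + 1)
for i in range(N + 1):
    d = sp.expand(sp.diff(P[i], t))
    rhs = sp.Matrix([d.coeff(t, k) for k in range(N + 1)])
    coeffs = A.solve(rhs)              # FL-basis coefficients of d/dt FL_i
    for p in range(N + 1):
        H[i, p] = sp.simplify(coeffs[p])

sp.pprint(H)   # should reproduce the 6x6 matrix H displayed above
```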
Remark 4. 
The operational matrices of integer derivatives in (22) will play an essential role in deriving the proposed algorithm.
Remark 5. 
After establishing the fundamental background for the combined Fibonacci–Lucas polynomials, they may be utilized in solving other types of differential equations, both linear and nonlinear, using the matrix approach.
Remark 6. 
Although the Fibonacci and Lucas polynomials were utilized in several publications to act as basis functions in spectral methods, it is worth mentioning that our Fibonacci–Lucas polynomial basis has the advantage of merging both the Fibonacci and Lucas polynomial bases to obtain several approximate solutions.

3. A Matrix Collocation Approach for the Nonlinear Second-Order Duffing Equation

In this section, we consider the following nonlinear second-order Duffing equation (NSDE) [28,52]:
$$v''(t) + f\,v'(t) + h\,v(t) + c\,v^{3}(t) + d\,v^{5}(t) + e\,v^{7}(t) = g(t), \qquad t \in [0,T],$$
which is subject to the conditions
$$v(0) = g_1, \qquad v'(0) = g_2.$$
The main idea to solve (25) and (26) is to employ the operational matrices of the derivatives of $FL_i^{a,b}(t)$, together with the application of the collocation method.
Now, let us define the following function space:
$$\Delta_N = \operatorname{span}\bigl\{ FL_i^{a,b}(t) : i = 0, 1, \ldots, N \bigr\}.$$
Then, the approximate solution $v_N(t) \in \Delta_N$ may be written as
$$v_N(t) = \sum_{i=0}^{N} \theta_i\, FL_i^{a,b}(t) = \boldsymbol{\theta}\,\mathbf{U}(t),$$
where $\mathbf{U}(t)$ is defined in (21), and
$$\boldsymbol{\theta} = [\theta_0, \theta_1, \ldots, \theta_N].$$
By virtue of (23), (24), and (27), the residual $R(t)$ of Equation (25) is given by
$$R(t) = v_N''(t) + f\,v_N'(t) + h\,v_N(t) + c\,v_N^{3}(t) + d\,v_N^{5}(t) + e\,v_N^{7}(t) - g(t) = \boldsymbol{\theta}\,\mathbf{F}\,\mathbf{U}(t) + f\,\boldsymbol{\theta}\,\mathbf{H}\,\mathbf{U}(t) + h\,\boldsymbol{\theta}\,\mathbf{U}(t) + c\,\bigl(\boldsymbol{\theta}\,\mathbf{U}(t)\bigr)^{3} + d\,\bigl(\boldsymbol{\theta}\,\mathbf{U}(t)\bigr)^{5} + e\,\bigl(\boldsymbol{\theta}\,\mathbf{U}(t)\bigr)^{7} - g(t).$$
Now, by applying the collocation method, we obtain the following algebraic system of $(N+1)$ equations in the unknown expansion coefficients $\theta_i$:
$$R(t_i) = 0, \quad i = 1, 2, \ldots, N-1, \qquad \boldsymbol{\theta}\,\mathbf{U}(0) = g_1, \qquad \boldsymbol{\theta}\,\frac{d\mathbf{U}(0)}{dt} = g_2,$$
where $\bigl\{ t_i = \frac{i}{N+1} : i = 1, 2, \ldots, N-1 \bigr\}$ are the collocation points. Therefore, the system in (29) can be solved for the $\theta_i$ with the aid of the well-known Newton iterative method.
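To make the procedure concrete, here is an end-to-end computational sketch (ours, not the authors' code) for a manufactured test case: we take f = 4, h = 2, c = 3, d = e = 0 on [0, 1], choose g(t) so that the exact solution is $v(t) = e^{-t}/2$, and solve the resulting system (29) with scipy's fsolve. The values of a, b, N, and the zero initial guess are illustrative choices; stiffer parameter sets may need a better starting point for the nonlinear solver.

```python
# Illustrative sketch (ours) of the collocation scheme of this section.
import numpy as np
import sympy as sp
from scipy.optimize import fsolve

t = sp.symbols('t')
a_val, b_val, N = 2, 3, 8          # basis parameters and number of retained modes
f, h, c, d, e = 4, 2, 3, 0, 0      # Duffing coefficients (our test choice)

# Combined Fibonacci-Lucas basis FL_i^{a,b}(t) from the recurrence (3)
P = [sp.Integer(a_val), b_val * t]
for i in range(2, N + 1):
    P.append(sp.expand(t * P[i - 1] + P[i - 2]))
U0 = [sp.lambdify(t, p) for p in P]
U1 = [sp.lambdify(t, sp.diff(p, t)) for p in P]
U2 = [sp.lambdify(t, sp.diff(p, t, 2)) for p in P]

# Manufactured right-hand side from the assumed exact solution v(t) = exp(-t)/2
v_exact = sp.exp(-t) / 2
g = sp.lambdify(t, sp.diff(v_exact, t, 2) + f * sp.diff(v_exact, t)
                + h * v_exact + c * v_exact**3 + d * v_exact**5 + e * v_exact**7)

pts = [i / (N + 1) for i in range(1, N)]       # collocation points t_i = i/(N+1)

def system(theta):
    eqs = []
    for ti in pts:
        v   = sum(th * u(ti) for th, u in zip(theta, U0))
        vp  = sum(th * u(ti) for th, u in zip(theta, U1))
        vpp = sum(th * u(ti) for th, u in zip(theta, U2))
        eqs.append(vpp + f * vp + h * v + c * v**3 + d * v**5 + e * v**7 - g(ti))
    eqs.append(sum(th * u(0.0) for th, u in zip(theta, U0)) - 0.5)   # v(0)  =  1/2
    eqs.append(sum(th * u(0.0) for th, u in zip(theta, U1)) + 0.5)   # v'(0) = -1/2
    return eqs

theta = fsolve(system, np.zeros(N + 1))
v_num = lambda x: sum(th * u(x) for th, u in zip(theta, U0))
grid = np.linspace(0.0, 1.0, 101)
print("max AE:", max(abs(v_num(x) - float(v_exact.subs(t, float(x)))) for x in grid))
```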

4. A Matrix Collocation Approach for the Nonlinear Fractional-Order Duffing Equation

This section is confined to presenting a numerical algorithm for treating the nonlinear fractional Duffing equation. First, some fundamental properties regarding the fractional calculus are mentioned below.
Definition 1 
([53,54,55,56]). The Gerasimov–Caputo fractional derivative of order μ is defined as
$$D^{\mu} h(z) = \frac{1}{\Gamma(m-\mu)} \int_{0}^{z} (z-t)^{m-\mu-1}\, h^{(m)}(t)\, dt, \qquad \mu > 0,\ z > 0,$$
where $m - 1 < \mu \le m$, $m \in \mathbb{N}$.
The operator $D^{\mu}$ satisfies the following properties for all $m - 1 < \mu \le m$, $m \in \mathbb{N}$:
$$D^{\mu} b = 0, \qquad b \text{ is a constant},$$
$$D^{\mu} z^{m} = \begin{cases} 0, & \text{if } m \in \mathbb{N}_0 \text{ and } m < \lceil \mu \rceil, \\[4pt] \dfrac{\Gamma(m+1)}{\Gamma(m-\mu+1)}\, z^{\,m-\mu}, & \text{if } m \in \mathbb{N}_0 \text{ and } m \ge \lceil \mu \rceil, \end{cases}$$
where $\mathbb{N} = \{1, 2, 3, \ldots\}$, $\mathbb{N}_0 = \{0\} \cup \mathbb{N}$, and $\lceil \mu \rceil$ denotes the ceiling function.
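For reference, the power-rule property above translates directly into a few lines of code. The helper below (our sketch) evaluates $D^{\mu} t^{m}$ and hence the Caputo derivative of any polynomial given its monomial coefficients.

```python
# Small helper (ours) implementing the power rule for the Caputo derivative.
import math

def caputo_monomial(m, mu, tval):
    # D^mu t^m = 0 if m < ceil(mu); otherwise Gamma(m+1)/Gamma(m-mu+1) * t^(m-mu)
    if m < math.ceil(mu):
        return 0.0
    return math.gamma(m + 1) / math.gamma(m - mu + 1) * tval**(m - mu)

def caputo_poly(coeffs, mu, tval):
    # coeffs[m] is the coefficient of t^m in the polynomial
    return sum(c * caputo_monomial(m, mu, tval) for m, c in enumerate(coeffs))

# Example: D^{1.5} t^2 at t = 1 equals Gamma(3)/Gamma(1.5)
print(caputo_monomial(2, 1.5, 1.0), math.gamma(3) / math.gamma(1.5))
```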
Now, consider the following nonlinear fractional-order Duffing equation (NFDE) [28,52]:
$$D^{\alpha} v(t) + f\,D^{\beta} v(t) + h\,v(t) + c\,v^{3}(t) + d\,v^{5}(t) + e\,v^{7}(t) = g(t), \qquad t \in [0,T],$$
which is subject to the constraints
$$v(0) = g_1, \qquad v'(0) = g_2,$$
where $\alpha \in (1,2)$ and $\beta \in (0,1)$.

4.1. The Operational Matrix of Fractional Derivatives for F L i a , b ( t )

Theorem 5. 
Let $\mathbf{U}(t)$ be the vector defined in (21). The following formula holds for all $\mu > 0$ and $t \in [0,T]$:
$$D^{\mu}\,\mathbf{U}(t) = t^{-\mu}\, \boldsymbol{\lambda}^{(\mu)}\, \mathbf{U}(t),$$
where
$$\boldsymbol{\lambda}^{(\mu)} = \begin{pmatrix} 0 & 0 & \cdots & \cdots & \cdots & 0 \\ \vdots & \vdots & & & & \vdots \\ \gamma_{\lceil\mu\rceil,0}^{\,\mu} & \cdots & \gamma_{\lceil\mu\rceil,\lceil\mu\rceil}^{\,\mu} & 0 & \cdots & 0 \\ \vdots & & \ddots & & & \vdots \\ \gamma_{i,0}^{\,\mu} & \cdots & \cdots & \gamma_{i,i}^{\,\mu} & \cdots & 0 \\ \vdots & & & & \ddots & \vdots \\ \gamma_{N,0}^{\,\mu} & \gamma_{N,1}^{\,\mu} & \gamma_{N,2}^{\,\mu} & \cdots & \cdots & \gamma_{N,N}^{\,\mu} \end{pmatrix}_{(N+1)\times(N+1)}.$$
Proof. 
The proof of this theorem can be divided into two cases corresponding to the values of k and μ:
  • Case 1: If $\lceil \mu \rceil \le k \le N$.
    Applying $D^{\mu}$ to the series form of $FL_k^{a,b}(t)$ given in Remark 3 yields
    $$D^{\mu} FL_k^{a,b}(t) = \sum_{m=\lceil\mu\rceil}^{k} \frac{\left( \frac{a\,(k-m)}{2} + b\,m \right) \delta_{k+m}\, (m+1)_{\frac{k-m}{2}-1}\; m!}{\left( \frac{k-m}{2} \right)!\; \Gamma(m-\mu+1)}\; t^{\,m-\mu}.$$
    As a result of (17), the last relation can be written as
    $$D^{\mu} FL_k^{a,b}(t) = t^{-\mu} \sum_{m=\lceil\mu\rceil}^{k} \sum_{r=0}^{\lfloor \frac{m}{2}\rfloor} \frac{\left( \frac{a\,(k-m)}{2} + b\,m \right) \delta_{k+m}\, (m+1)_{\frac{k-m}{2}-1}\; m!\; \epsilon_{m-2r}\, S_{r,m}}{\left( \frac{k-m}{2} \right)!\; \Gamma(m-\mu+1)}\; FL_{m-2r}^{a,b}(t),$$
    which can be written again in the form
    $$D^{\mu} FL_k^{a,b}(t) = t^{-\mu} \sum_{n=0}^{k} \sum_{m=\lceil\mu\rceil}^{k} \frac{\epsilon_n\, \delta_{k+m}\, \delta_{n+m}\, \Gamma\!\left( \frac{k+m}{2} \right) S_{\frac{m-n}{2},\,m}\, \bigl( a\,(k-m) + 2\,b\,m \bigr)}{2\,a \left( \frac{k-m}{2} \right)!\; \Gamma(m-\mu+1)}\; FL_n^{a,b}(t),$$
    which can be rewritten as
    $$D^{\mu} FL_k^{a,b}(t) = t^{-\mu} \sum_{n=0}^{k} \gamma_{k,n}^{\,\mu}\, FL_n^{a,b}(t),$$
    where
    $$\gamma_{k,n}^{\,\mu} = \sum_{m=\lceil\mu\rceil}^{k} \frac{\epsilon_n\, \delta_{k+m}\, \delta_{n+m}\, \Gamma\!\left( \frac{k+m}{2} \right) S_{\frac{m-n}{2},\,m}\, \bigl( a\,(k-m) + 2\,b\,m \bigr)}{2\,a \left( \frac{k-m}{2} \right)!\; \Gamma(m-\mu+1)}.$$
  • Case 2: If $0 \le k \le \lceil \mu \rceil - 1$.
    Here, we find that
    $$D^{\mu} FL_k^{a,b}(t) = 0.$$
Finally, Cases 1 and 2 may be joined in matrix form as
$$D^{\mu}\,\mathbf{U}(t) = t^{-\mu}\, \boldsymbol{\lambda}^{(\mu)}\, \mathbf{U}(t),$$
where the elements of the matrix $\boldsymbol{\lambda}^{(\mu)}$ are given in the following form:
$$\lambda_{k,n}^{(\mu)} = \begin{cases} \gamma_{k,n}^{\,\mu}, & k \ge \lceil \mu \rceil,\ k \ge n, \\ 0, & \text{otherwise}. \end{cases}$$
This ends the proof of this theorem. □

4.2. Collocation Algorithm for the NFDE

Using similar procedures as in the preceding section, we can obtain the following residual based on Theorem 5:
$$\bar{R}(t) = D^{\alpha} v_N(t) + f\,D^{\beta} v_N(t) + h\,v_N(t) + c\,v_N^{3}(t) + d\,v_N^{5}(t) + e\,v_N^{7}(t) - g(t) = t^{-\alpha}\,\boldsymbol{\theta}\,\boldsymbol{\lambda}^{(\alpha)}\,\mathbf{U}(t) + f\,t^{-\beta}\,\boldsymbol{\theta}\,\boldsymbol{\lambda}^{(\beta)}\,\mathbf{U}(t) + h\,\boldsymbol{\theta}\,\mathbf{U}(t) + c\,\bigl(\boldsymbol{\theta}\,\mathbf{U}(t)\bigr)^{3} + d\,\bigl(\boldsymbol{\theta}\,\mathbf{U}(t)\bigr)^{5} + e\,\bigl(\boldsymbol{\theta}\,\mathbf{U}(t)\bigr)^{7} - g(t).$$
As a result, we may obtain the following system of $(N+1)$ equations by using the collocation method:
$$\bar{R}(t_i) = 0, \quad i = 1, 2, \ldots, N-1, \qquad \boldsymbol{\theta}\,\mathbf{U}(0) = g_1, \qquad \boldsymbol{\theta}\,\frac{d\mathbf{U}(0)}{dt} = g_2,$$
where $\bigl\{ t_i = \frac{i}{N+1} : i = 1, 2, \ldots, N-1 \bigr\}$.
The above system can be solved to obtain θ i with the aid of the well-known Newton’s iterative method.
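As a practical note, the entries of $\boldsymbol{\lambda}^{(\mu)}$ need not be assembled symbolically: since each $FL_k^{a,b}(t)$ is a polynomial, $D^{\mu}FL_k^{a,b}$ can be evaluated at the collocation points directly from its monomial expansion and the power rule of Definition 1. The sketch below (ours) does exactly this; it produces the same numbers that $t^{-\mu}\boldsymbol{\lambda}^{(\mu)}\mathbf{U}(t)$ from Theorem 5 would give, with a, b, N, and α chosen for illustration.

```python
# Pragmatic sketch (ours): evaluate D^mu FL_k^{a,b} at the collocation points
# from the monomial expansion of FL_k, instead of assembling lambda^(mu).
import math
import sympy as sp

t = sp.symbols('t')
a_val, b_val, N = 2, 3, 8
alpha = 1.9

P = [sp.Integer(a_val), b_val * t]
for i in range(2, N + 1):
    P.append(sp.expand(t * P[i - 1] + P[i - 2]))

def caputo_FL(k, mu, tval):
    # Apply D^mu term by term to the monomials c*t^m of FL_k
    total = 0.0
    for (m,), c in sp.Poly(P[k], t).terms():
        if m >= math.ceil(mu):
            total += (float(c) * math.gamma(m + 1)
                      / math.gamma(m - mu + 1) * tval**(m - mu))
    return total

pts = [i / (N + 1) for i in range(1, N)]
print("D^1.9 FL_5 at the collocation points:",
      [round(caputo_FL(5, alpha, ti), 6) for ti in pts])
```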

5. Error Bound

In this section, we aim to demonstrate that, as N approaches infinity, $R(t)$ and $\bar{R}(t)$ converge to zero. To this end, we derive various error bounds for the unknown function $v(t)$ and its derivatives.
Theorem 6. 
Assume that $\frac{d^i v(t)}{dt^i} \in C([0,T])$ for $i = 0, 1, 2, \ldots, N$, let $v_N(t)$ be the proposed approximate solution belonging to $\Delta_N$, and define
$$\mathcal{N} = \sup_{t \in [0,T]} \left| \frac{d^{N+1} v(t)}{dt^{N+1}} \right|.$$
Consequently, this estimate holds:
$$\| v(t) - v_N(t) \|_2 \le \frac{\mathcal{N}\; T^{\,N+\frac{3}{2}}}{\sqrt{2N+3}\,(N+1)!}.$$
Proof. 
Consider the following Taylor expansion of $v(t)$ about the point $t = 0$:
$$\chi_N(t) = \sum_{i=0}^{N} \left. \frac{d^i v(t)}{dt^i} \right|_{t=0} \frac{t^i}{i!},$$
$$v(t) - \chi_N(t) = \frac{t^{N+1}}{(N+1)!} \left. \frac{d^{N+1} v(t)}{dt^{N+1}} \right|_{t=c}, \qquad c \in [0,T].$$
Because $v_N(t)$ is the best approximation of $v(t)$, we have, using the idea of best approximation,
$$\| v(t) - v_N(t) \|_2^2 \le \| v(t) - \chi_N(t) \|_2^2 \le \int_0^T \frac{\mathcal{N}^2\, t^{2(N+1)}}{\bigl((N+1)!\bigr)^2}\, dt = \frac{\mathcal{N}^2\, T^{2N+3}}{(2N+3)\bigl((N+1)!\bigr)^2},$$
and therefore, we have
$$\| v(t) - v_N(t) \|_2 \le \frac{\mathcal{N}\, T^{\,N+\frac{3}{2}}}{\sqrt{2N+3}\,(N+1)!}. \qquad \square$$
Theorem 7. 
Suppose that $v(t)$ and $v_N(t)$ meet the assumption of Theorem 6 and
$$\sigma_{N,m} = \sup_{t \in [0,T]} \left| \frac{d^{N+1} v(t)}{dt^{N+1}} \right|^{m}, \qquad m = 2, 3, 4, \ldots$$
The following estimation holds:
$$\| v^m(t) - v_N^m(t) \|_2 \le \frac{\sigma_{N,m}\; T^{\,m(N+1)+\frac{1}{2}}}{\sqrt{2m(N+1)+1}\,\bigl((N+1)!\bigr)^{m}}.$$
Proof. 
At the point $t = 0$, we can use the Taylor expansion of $v(t)$ in (42) to write
$$v^m(t) - \chi_N^m(t) = \frac{t^{\,m(N+1)}}{\bigl((N+1)!\bigr)^{m}} \left( \left. \frac{d^{N+1} v(t)}{dt^{N+1}} \right|_{t=c} \right)^{m}, \qquad c \in [0,T].$$
Imitating similar steps as in Theorem 6 in accordance with the best approximation's concept, one has
$$\| v^m(t) - v_N^m(t) \|_2^2 \le \| v^m(t) - \chi_N^m(t) \|_2^2 \le \int_0^T \frac{\sigma_{N,m}^2\, t^{2m(N+1)}}{\bigl((N+1)!\bigr)^{2m}}\, dt = \frac{\sigma_{N,m}^2\, T^{2m(N+1)+1}}{\bigl(2m(N+1)+1\bigr)\bigl((N+1)!\bigr)^{2m}},$$
and this leads to
$$\| v^m(t) - v_N^m(t) \|_2 \le \frac{\sigma_{N,m}\, T^{\,m(N+1)+\frac{1}{2}}}{\sqrt{2m(N+1)+1}\,\bigl((N+1)!\bigr)^{m}}. \qquad \square$$
Theorem 8. 
Suppose that $v(t)$, $v_N(t)$, and $\frac{d^i v(t)}{dt^i}$ meet the assumption of Theorem 6 and
$$\tau_{N,m} = \sup_{t \in [0,T]} \left| \frac{d^{N-m+1} v(t)}{dt^{N-m+1}} \right|, \qquad m \in \mathbb{N}.$$
Then, the following estimation holds:
$$\left\| \frac{d^m}{dt^m}\bigl[ v(t) - v_N(t) \bigr] \right\|_2 \le \frac{\tau_{N,m}\; T^{\,N-m+\frac{3}{2}}}{\sqrt{2(N-m)+3}\,(N-m+1)!}.$$
Proof. 
Assume that $\frac{d^m \chi_N(t)}{dt^m}$ is the Taylor expansion of $\frac{d^m v(t)}{dt^m}$ about the point $t = 0$; then, the residual between $\frac{d^m v(t)}{dt^m}$ and $\frac{d^m \chi_N(t)}{dt^m}$ can be written as
$$\frac{d^m}{dt^m}\bigl[ v(t) - \chi_N(t) \bigr] = \frac{t^{\,N-m+1}}{(N-m+1)!} \left. \frac{d^{N-m+1} v(t)}{dt^{N-m+1}} \right|_{t=n}, \qquad n \in [0,T].$$
Since $\frac{d^m v_N(t)}{dt^m}$ is the best approximate solution of $\frac{d^m v(t)}{dt^m}$, then according to the definition of the best approximation, we obtain
$$\left\| \frac{d^m}{dt^m}\bigl[ v(t) - v_N(t) \bigr] \right\|_2 \le \left\| \frac{d^m}{dt^m}\bigl[ v(t) - \chi_N(t) \bigr] \right\|_2.$$
We obtain the desired result by performing steps as in Theorem 6. □
Theorem 9. 
Assume that the Gerasimov–Caputo operator $D^{\mu} v(t) \in C([0,T])$ and the conditions of Theorem 6 hold. Then,
$$\bigl\| D^{\mu}\bigl[ v(t) - v_N(t) \bigr] \bigr\|_2 \le \frac{\mathcal{N}\; T^{\,N-\mu+\frac{3}{2}}}{\sqrt{2(N-\mu)+3}\;\Gamma(N-\mu+2)}.$$
Proof. 
Using Equation (43) along with the application of the operator $D^{\mu}$, one obtains
$$\bigl\| D^{\mu}\bigl[ v(t) - v_N(t) \bigr] \bigr\|_2^2 \le \int_0^T \frac{\mathcal{N}^2\, t^{2(N-\mu+1)}}{\bigl(\Gamma(N-\mu+2)\bigr)^2}\, dt = \frac{\mathcal{N}^2\, T^{2(N-\mu)+3}}{\bigl(2(N-\mu)+3\bigr)\bigl(\Gamma(N-\mu+2)\bigr)^2}.$$
Therefore, we obtain
$$\bigl\| D^{\mu}\bigl[ v(t) - v_N(t) \bigr] \bigr\|_2 \le \frac{\mathcal{N}\, T^{\,N-\mu+\frac{3}{2}}}{\sqrt{2(N-\mu)+3}\;\Gamma(N-\mu+2)}. \qquad \square$$
Theorem 10. 
Let $R(t)$ be the residual of Equation (25) given by (28). Then, $\| R(t) \|_2$ will be sufficiently small for sufficiently large values of N.
Proof. 
$R(t)$ of Equation (28) can be written as
$$R(t) = v_N''(t) + f\,v_N'(t) + h\,v_N(t) + c\,v_N^{3}(t) + d\,v_N^{5}(t) + e\,v_N^{7}(t) - g(t) = \frac{d^2}{dt^2}\bigl[ v_N(t) - v(t) \bigr] + f\,\frac{d}{dt}\bigl[ v_N(t) - v(t) \bigr] + h\,\bigl[ v_N(t) - v(t) \bigr] + c\,\bigl[ v_N^{3}(t) - v^{3}(t) \bigr] + d\,\bigl[ v_N^{5}(t) - v^{5}(t) \bigr] + e\,\bigl[ v_N^{7}(t) - v^{7}(t) \bigr].$$
Taking $\|\cdot\|_2$ and using Theorems 6–8, we obtain
$$\| R(t) \|_2 \le \frac{\tau_{N,2}\, T^{\,N-\frac{1}{2}}}{\sqrt{2N-1}\,(N-1)!} + \frac{|f|\,\tau_{N,1}\, T^{\,N+\frac{1}{2}}}{\sqrt{2N+1}\; N!} + \frac{|h|\,\mathcal{N}\, T^{\,N+\frac{3}{2}}}{\sqrt{2N+3}\,(N+1)!} + \frac{|c|\,\sigma_{N,3}\, T^{\,3N+\frac{7}{2}}}{\sqrt{6N+7}\,\bigl((N+1)!\bigr)^{3}} + \frac{|d|\,\sigma_{N,5}\, T^{\,5N+\frac{11}{2}}}{\sqrt{10N+11}\,\bigl((N+1)!\bigr)^{5}} + \frac{|e|\,\sigma_{N,7}\, T^{\,7N+\frac{15}{2}}}{\sqrt{14N+15}\,\bigl((N+1)!\bigr)^{7}}.$$
At last, it is clear from Equation (46) that $\| R(t) \|_2$ will be small enough for suitably large values of N. This concludes the proof of the theorem. □
Theorem 11. 
Let $\bar{R}(t)$ be the residual of Equation (32) given by (41). Then, $\| \bar{R}(t) \|_2$ will be sufficiently small for sufficiently large values of N.
Proof. 
$\bar{R}(t)$ of Equation (41) can be written as
$$\bar{R}(t) = D^{\alpha} v_N(t) + f\,D^{\beta} v_N(t) + h\,v_N(t) + c\,v_N^{3}(t) + d\,v_N^{5}(t) + e\,v_N^{7}(t) - g(t) = D^{\alpha}\bigl[ v_N(t) - v(t) \bigr] + f\,D^{\beta}\bigl[ v_N(t) - v(t) \bigr] + h\,\bigl[ v_N(t) - v(t) \bigr] + c\,\bigl[ v_N^{3}(t) - v^{3}(t) \bigr] + d\,\bigl[ v_N^{5}(t) - v^{5}(t) \bigr] + e\,\bigl[ v_N^{7}(t) - v^{7}(t) \bigr].$$
Taking $\|\cdot\|_2$ and using Theorems 6–9, we obtain
$$\| \bar{R}(t) \|_2 \le \frac{\mathcal{N}\, T^{\,N-\alpha+\frac{3}{2}}}{\sqrt{2(N-\alpha)+3}\;\Gamma(N-\alpha+2)} + \frac{|f|\,\mathcal{N}\, T^{\,N-\beta+\frac{3}{2}}}{\sqrt{2(N-\beta)+3}\;\Gamma(N-\beta+2)} + \frac{|h|\,\mathcal{N}\, T^{\,N+\frac{3}{2}}}{\sqrt{2N+3}\,(N+1)!} + \frac{|c|\,\sigma_{N,3}\, T^{\,3N+\frac{7}{2}}}{\sqrt{6N+7}\,\bigl((N+1)!\bigr)^{3}} + \frac{|d|\,\sigma_{N,5}\, T^{\,5N+\frac{11}{2}}}{\sqrt{10N+11}\,\bigl((N+1)!\bigr)^{5}} + \frac{|e|\,\sigma_{N,7}\, T^{\,7N+\frac{15}{2}}}{\sqrt{14N+15}\,\bigl((N+1)!\bigr)^{7}}.$$
Finally, it is clear from Equation (48) that $\| \bar{R}(t) \|_2$ will be small enough for sufficiently high values of N. So, this theorem's proof is complete. □

6. Illustrative Examples

Evaluation of our suggested collocation methods is the focus of this section. We solve a few test problems and present a few comparisons to ensure that our suggested methods are applicable and accurate.
The absolute errors (AEs) in the given tables are
$$AE = | v(t) - v_N(t) |.$$
Example 1 
([28,52]). Consider the following NFDE:
$$D^{\alpha} v(t) + f\,D^{\beta} v(t) + h\,v(t) + c\,v^{3}(t) + d\,v^{5}(t) + e\,v^{7}(t) = g(t), \qquad t \in [0,T],$$
which is subject to the conditions
$$v(0) = \tfrac{1}{2}, \qquad v'(0) = -\tfrac{1}{2},$$
and g(t) is selected in a way that makes the exact solution become $v(t) = \frac{e^{-t}}{2}$.
This example is solved using our proposed algorithm for T = 1 and T = 12:
  • Case 1: For T = 1 and f = 4, h = 2, c = 3, d = 0, e = 0, Table 1 presents the AEs at different values of a, b at N = 13 when α = 2, β = 1. Furthermore, Figure 1 shows the AEs at a = 2, b = 3 at different values of N when α = 2, β = 1. Figure 2 shows that the approximate solutions have smaller variations for values of α and β near α = 2 and β = 1 when a = 2, b = 3. Table 2 presents the AEs at different values of a, b at N = 12 when α = 1.9 and β = 0.9.
  • Case 2: For T = 12, α = 2, β = 1 and f = 2, h = 1, c = 8, d = 0, e = 0, Table 3 presents a comparison between our method at N = 14 and the method in [52]. Figure 3 shows the AE (left) and the exact and approximate solutions (right) of this example at N = 17 and a = 2, b = 3, which demonstrates that the results of our method are extremely close to the exact solution.
Example 2 
([52]). Consider the following NFDE:
$$D^{\alpha} v(t) + f\,D^{\beta} v(t) + h\,v(t) + c\,v^{3}(t) + d\,v^{5}(t) + e\,v^{7}(t) = g(t), \qquad t \in [0,T],$$
which is subject to the conditions
$$v(0) = 1, \qquad v'(0) = 0,$$
and g(t) is selected in a way that makes the exact solution become $v(t) = \cos(t)$, with f = h = c = 1 and d = 0.
This example is solved using our algorithm for T = 1 and T = 5:
  • Case 1: For T = 1 , Table 4 and Table 5 present the AEs at different values of a , b at, respectively, N = 2 and N = 3 when α = 1.9 , β = 0.9 . Table 6 presents the AEs at different values of a , b at N = 12 when α = 2 , β = 1 . Figure 4 shows that the approximate solutions have smaller variations for values of α and β near the values α = 2 and β = 1 when a = 1 , b = 3 .  Table 7 presents the absolute errors (AEs) at different values of a , b at N = 13 when α = 1.8 and β = 0.8 .
  • Case 2: For T = 5 and α = 2 , β = 1 , Figure 5 shows the AEs at different values of N when a = b = 1 . Also, Figure 6 illustrates the AEs at different values of N when a = b = 2 . These figures show the accuracy of our method.
Remark 7. 
The results of Table 4 demonstrate that small values of N cause a clear variation in the error values for different choices of the parameters a and b. For instance, for N = 2, the error changes from $2.8 \times 10^{-2}$ to $1.05 \times 10^{-4}$ due to the change of the involved parameters. Especially at larger time steps, the error differences brought on by variations in a and b become slightly less obvious.
Example 3. 
Consider the following NFDE:
$$D^{\alpha} v(t) + f\,D^{\beta} v(t) + h\,v(t) + c\,v^{3}(t) + d\,v^{5}(t) + e\,v^{7}(t) = 0, \qquad t \in [0,T],$$
which is governed by
$$v(0) = 0, \qquad v'(0) = 1.$$
Due to the nonavailability of the exact solution, let us define the following absolute residual error norm at T = 1 :
$$RE = \max_{t \in (0,1)} \Bigl| D^{\alpha} v_N(t) + f\,D^{\beta} v_N(t) + h\,v_N(t) + c\,v_N^{3}(t) + d\,v_N^{5}(t) + e\,v_N^{7}(t) \Bigr|,$$
and apply our method at f = h = c = d = e = T = 1 when α = 2, β = 1.
Figure 7 illustrates the RE (left) and approximate solution (right) at a = b = 1 and N = 16 . Also, Figure 8 illustrates the RE at different values of N when a = 1 and b = 2 .
Remark 8. 
The numerical results obtained in this section show that we have received several highly accurate approximate solutions using the combined Fibonacci–Lucas polynomials. This gives us an advantage in introducing these generalized polynomials.
Remark 9. 
We comment that the approximations resulting from utilizing other generalized polynomials, such as ultraspherical and Jacobi polynomials, do not change significantly due to the change of their parameters, especially for large values of the retained modes; see, for example, [57].
Remark 10. 
The combined Fibonacci–Lucas polynomials provide excellent approximations, since errors of order $10^{-16}$ are sometimes reached for certain choices of a, b, and N.
Remark 11. 
Varying the two parameters a and b of the combined Fibonacci–Lucas polynomials leads to only small changes in the resulting errors, so the improvement gained by tuning these parameters is modest.

7. Concluding Remarks

This paper established a generalized sequence of polynomials, namely, the unified Fibonacci–Lucas polynomials. The well-known polynomial sequences of Fibonacci and Lucas are particular cases of these polynomials. These polynomials involve two parameters, yielding a different solution for every choice of them. Some theoretical results concerning these polynomials were the keys to implementing our numerical algorithms for solving the second-order and the fractional-order nonlinear Duffing DEs via the celebrated collocation method. The operational matrices of derivatives of the Fibonacci–Lucas polynomials, derived using the derivative formula of these polynomials, were employed to design the proposed numerical algorithms. We comment here that, for every choice of the two parameters a and b, a numerical solution is obtained. The numerical results show that the change in the absolute errors caused by variations in the two parameters of the combined Fibonacci–Lucas polynomials is minimal when large values of the retained modes are chosen; however, these variations become larger for small values of the retained modes. We aim to investigate the impact of these parameters when solving other types of differential equations. We believe that this is the first time these polynomials have been employed in applications. Future research directions may involve employing these polynomials to solve other types of DEs.

Author Contributions

Conceptualization, W.M.A.-E. and A.G.A.; Methodology, W.M.A.-E., O.M.A. and A.G.A.; Software, W.M.A.-E. and A.G.A.; Validation, W.M.A.-E., O.M.A., A.K.A. and A.G.A.; Formal analysis, W.M.A.-E. and A.G.A.; Investigation, W.M.A.-E., O.M.A., A.K.A. and A.G.A.; Writing—original draft, W.M.A.-E. and A.G.A.; Writing—review & editing, W.M.A.-E. and A.G.A.; Supervision, W.M.A.-E.; Funding acquisition, A.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by Umm Al-Qura University, Saudi Arabia, under grant number 25UQU4331287GSSR03.

Data Availability Statement

The data are contained within the article.

Acknowledgments

The authors extend their appreciation to Umm Al-Qura University, Saudi Arabia, for funding this research work through grant number 25UQU4331287GSSR03.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Boyd, J.P. Chebyshev and Fourier Spectral Methods; Courier Corporation: North Chelmsford, MA, USA, 2001.
  2. Hesthaven, J.; Gottlieb, S.; Gottlieb, D. Spectral Methods for Time-Dependent Problems; Cambridge University Press: Cambridge, UK, 2007; Volume 21.
  3. Trefethen, L.N. Spectral Methods in MATLAB; SIAM: Philadelphia, PA, USA, 2000; Volume 10.
  4. Koshy, T. Fibonacci and Lucas Numbers with Applications; John Wiley & Sons: Hoboken, NJ, USA, 2019.
  5. Ozkan, E.; Altun, I. Generalized Lucas polynomials and relationships between the Fibonacci polynomials and Lucas polynomials. Comm. Algebra 2019, 47, 4020–4030.
  6. Özkan, E.; Taştan, M. On Gauss Fibonacci polynomials, on Gauss Lucas polynomials and their applications. Comm. Algebra 2020, 48, 952–960.
  7. Du, T.; Wu, Z. Some identities involving the bi-periodic Fibonacci and Lucas polynomials. AIMS Math. 2023, 8, 5838–5846.
  8. Haq, S.; Ali, I. Approximate solution of two-dimensional Sobolev equation using a mixed Lucas and Fibonacci polynomials. Eng. Comput. 2022, 38, 2059–2068.
  9. Mohamed, A.S. Fibonacci collocation pseudo-spectral method of variable-order space-fractional diffusion equations with error analysis. AIMS Math. 2022, 7, 14323–14337.
  10. Manohara, G.; Kumbinarasaiah, S. An Innovative Fibonacci Wavelet Collocation Method for the Numerical Approximation of Emden-Fowler Equations. Appl. Numer. Math. 2024, 201, 347–369.
  11. Abd-Elhameed, W.M.; Al-Harbi, A.K.; Alqubori, O.M.; Alharbi, M.H.; Atta, A.G. Collocation method for the time-fractional generalized Kawahara equation using a certain Lucas polynomial sequence. Axioms 2025, 14, 114.
  12. Abd-Elhameed, W.M. Novel expressions for the derivatives of sixth-kind Chebyshev polynomials: Spectral solution of the non-linear one-dimensional Burgers’ equation. Fractal Fract. 2021, 5, 74.
  13. Youssri, Y.H.; Abd-Elhameed, W.M.; Abdelhakem, M. A robust spectral treatment of a class of initial value problems using modified Chebyshev polynomials. Math. Methods Appl. Sci. 2021, 44, 9224–9236.
  14. Ahmed, H.M. A new first finite class of classical orthogonal polynomials operational matrices: An application for solving fractional differential equations. Contemp. Math. 2023, 4, 974–994.
  15. Tohidi, E.; Bhrawy, A.H.; Erfani, K. A collocation method based on Bernoulli operational matrix for numerical solution of generalized pantograph equation. Appl. Math. Model. 2013, 37, 4283–4294.
  16. Abd-Elhameed, W.M.; Alsuyuti, M.M. New spectral algorithm for fractional delay pantograph equation using certain orthogonal generalized Chebyshev polynomials. Commun. Nonlinear Sci. Numer. Simul. 2025, 141, 108479.
  17. Ahmed, H.M. New generalized Jacobi Galerkin operational matrices of derivatives: An algorithm for solving multi-term variable-order time-fractional diffusion-wave equations. Fractal Fract. 2024, 8, 68.
  18. Enns, R.M. Nonlinear Phenomena in Physics and Biology; Springer Science & Business Media: New York, NY, USA, 2012; Volume 75.
  19. Hilborn, R.C. Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers; Oxford University Press: Oxford, UK, 2000.
  20. Magin, R. Fractional calculus in bioengineering, part 1. Crit. Rev. Biomed. Eng. 2004, 32, 1–104.
  21. Abd-Elhameed, W.M.; Ahmed, H.M. Spectral solutions for the time-fractional heat differential equation through a novel unified sequence of Chebyshev polynomials. AIMS Math. 2024, 9, 2137–2166.
  22. Wang, F.; Hou, E.; Salama, S.A.; Khater, M.M.A. Numerical investigation of the nonlinear fractional Ostrovsky equation. Fractals 2022, 30, 2240142.
  23. Amin, A.Z.; Abdelkawy, M.A.; Solouma, E.; Al-Dayel, I. A spectral collocation method for solving the non-linear distributed-order fractional Bagley–Torvik differential equation. Fractal Fract. 2023, 7, 780.
  24. Heydari, M.H.; Razzaghi, M.; Baleanu, D. A numerical method based on the piecewise Jacobi functions for distributed-order fractional Schrödinger equation. Commun. Nonlinear Sci. Numer. Simul. 2023, 116, 106873.
  25. Kovacic, I.; Brennan, M.J. The Duffing Equation: Nonlinear Oscillators and Their Behaviour; John Wiley & Sons: Hoboken, NJ, USA, 2011.
  26. Alvarez-Ramirez, J.; Espinosa-Paredes, G.; Puebla, H. Chaos control using small-amplitude damping signals. Phys. Lett. A 2003, 316, 196–205.
  27. Aghdam, M.M.; Fallah, A. Analytical Solutions for Generalized Duffing Equation. In Nonlinear Approaches in Engineering Applications: Advanced Analysis of Vehicle Related Technologies; Springer: Cham, Switzerland, 2016; pp. 263–278.
  28. El-Sayed, A.A. Pell-Lucas polynomials for numerical treatment of the nonlinear fractional-order Duffing equation. Demonstr. Math. 2023, 56, 20220220.
  29. Wang, Y.R.; Chen, G.W. Predicting multiple numerical solutions to the Duffing equation using machine learning. Appl. Sci. 2023, 13, 10359.
  30. Kamiński, M.; Corigliano, A. Numerical solution of the Duffing equation with random coefficients. Meccanica 2015, 50, 1841–1853.
  31. Elías-Zúñiga, A. A general solution of the Duffing equation. Nonlinear Dynam. 2006, 45, 227–235.
  32. Abdulganiy, R.I.; Wen, S.; Feng, Y.; Zhang, W.; Tang, N. Adapted block hybrid method for the numerical solution of Duffing equations and related problems. AIMS Math. 2021, 6, 14013–14034.
  33. Geng, F. Numerical solutions of Duffing equations involving both integral and non-integral forcing terms. Comput. Math. Appl. 2011, 61, 1935–1938.
  34. Tabatabaei, K.; Gunerhan, E. Numerical solution of Duffing equation by the differential transform method. Appl. Math. Inf. Sci. Lett. 2014, 2, 1–6.
  35. Salas, A.H.; Castillo, J.E. Exact solutions to cubic Duffing equation for a nonlinear electrical circuit. Visión Electrónica 2014, 8, 46–53.
  36. Singh, H.; Srivastava, H.M. Numerical investigation of the fractional-order Liénard and Duffing equations arising in oscillating circuit theory. Front. Phys. 2020, 8, 120.
  37. Kim, V.A.; Parovik, R.I.; Rakhmonov, Z.R. Implicit finite-difference scheme for a Duffing oscillator with a derivative of variable fractional order of the Riemann-Liouville Type. Mathematics 2023, 11, 558.
  38. Kim, V.A.; Parovik, R.I. Some aspects of the numerical analysis of a fractional duffing oscillator with a fractional variable order derivative of the Riemann-Liouville type. AIP Conf. Proc. 2022, 2467, 060014.
  39. Canuto, C.; Hussaini, M.Y.; Quarteroni, A.; Zang, T.A. Spectral Methods in Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 1988.
  40. Alsuyuti, M.M.; Doha, E.H.; Ezz-Eldien, S.S. Galerkin operational approach for multi-dimensions fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 2022, 114, 106608.
  41. Atta, A.G.; Abd-Elhameed, W.M.; Moatimid, G.M.; Youssri, Y.H. Shifted fifth-kind Chebyshev Galerkin treatment for linear hyperbolic first-order partial differential equations. Appl. Numer. Math. 2021, 167, 237–256.
  42. Hafez, R.M.; Youssri, Y.H. Fully Jacobi–Galerkin algorithm for two-dimensional time-dependent PDEs arising in physics. Int. J. Mod. Phys. C 2024, 35, 2450034.
  43. Atta, A.G.; Abd-Elhameed, W.M.; Moatimid, G.M.; Youssri, Y.H. Modal shifted fifth-kind Chebyshev tau integral approach for solving heat conduction equation. Fractal Fract. 2022, 6, 619.
  44. El-Sayed, A.A.; Boulaaras, S.; Sweilam, N.H. Numerical solution of the fractional-order logistic equation via the first-kind Dickson polynomials and spectral tau method. Math. Meth. Appl. Sci. 2023, 46, 8004–8017.
  45. Tu, H.; Wang, Y.; Yang, C.; Liu, W.; Wang, X. A Chebyshev–Tau spectral method for coupled modes of underwater sound propagation in range-dependent ocean environments. Phys. Fluids 2023, 35, 037113.
  46. Mostafa, D.; Zaky, M.A.; Hafez, R.M.; Hendy, A.S.; Abdelkawy, M.A.; Aldraiweesh, A.A. Tanh Jacobi spectral collocation method for the numerical simulation of nonlinear Schrödinger equations on unbounded domain. Math. Meth. Appl. Sci. 2023, 46, 656–674.
  47. Weera, W.; Kumar, R.S.V.; Sowmya, G.; Khan, U.; Prasannakumara, B.C.; Mahmoud, E.E.; Yahia, I.S. Convective-radiative thermal investigation of a porous dovetail fin using spectral collocation method. Ain Shams Eng. J. 2023, 14, 101811.
  48. Atta, A.G. Two spectral Gegenbauer methods for solving linear and nonlinear time fractional Cable problems. Int. J. Mod. Phys. C 2023, 35, 2450070.
  49. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A collocation procedure for treating the time-fractional FitzHugh–Nagumo differential equation using shifted Lucas polynomials. Mathematics 2024, 12, 3672.
  50. Abd-Elhameed, W.M.; Alqubori, O.M.; Atta, A.G. A collocation procedure for the numerical treatment of FitzHugh–Nagumo equation using a kind of Chebyshev polynomials. AIMS Math. 2025, 10, 1201–1223.
  51. Lyapin, A.P.; Akhtamova, S.S. Recurrence relations for the sections of the generating series of the solution to the multidimensional difference equation. Vestnik Udmurtskogo Universiteta Matematika Mekhanika Komp’yuternye Nauki 2021, 31, 414–423.
  52. Pirmohabbati, P.; Sheikhani, A.H.R.; Najafi, H.S.; Ziabari, A.A. Numerical solution of full fractional Duffing equations with Cubic-Quintic-Heptic nonlinearities. AIMS Math. 2020, 5, 1621–1641.
  53. Novozhenova, O.G. Life and science of Alexey Gerasimov, one of the pioneers of fractional calculus in Soviet Union. Fract. Calc. Appl. Anal. 2017, 20, 790–809.
  54. Caputo, M.; Fabrizio, M. On the notion of fractional derivative and applications to the hysteresis phenomena. Meccanica 2017, 52, 3043–3052.
  55. Caputo, M. Linear models of dissipation whose Q is almost frequency independent—II. Geophys. J. Int. 1967, 13, 529–539.
  56. Gerasimov, A.N. Generalization of linear deformation laws and their application to internal friction problems. Appl. Math. Mech. 1948, 12, 529–539.
  57. Hafez, R.M.; Zaky, M.A.; Abdelkawy, M.A. Jacobi spectral Galerkin method for distributed-order fractional Rayleigh–Stokes problem for a generalized second grade fluid. Front. Phys. 2020, 7, 240.
Figure 1. The AEs of Example 1 at a = 2, b = 3.
Figure 2. Different solutions of Example 1 at N = 12 and different values of α, β.
Figure 3. The AEs (left) and exact, approximate solutions (right) of Example 1 at N = 17 and a = 2, b = 3.
Figure 4. Different solutions of Example 2 at N = 13 and different values of α, β.
Figure 5. The AEs of Example 2 at a = b = 1.
Figure 6. The AEs of Example 2 at a = b = 2.
Figure 7. The RE (left) and approximate solution (right) of Example 3 at a = b = 1 and N = 16.
Figure 8. The RE of Example 3 at a = 1, b = 2.
Table 1. AEs of Example 1 at N = 13 .
Table 1. AEs of Example 1 at N = 13 .
t a = 1 , b = 1 a = 1 , b = 3 a = 2 , b = 1 a = 2 , b = 2
0.11.77636 × 10 15 1.9984 × 10 15 3.88578 × 10 16 1.16573 × 10 15
0.23.44169 × 10 15 3.66374 × 10 15 6.66134 × 10 16 1.9984 × 10 15
0.34.4964 × 10 15 4.82947 × 10 15 6.66134 × 10 16 2.66454 × 10 15
0.45.21805 × 10 15 5.71765 × 10 15 7.77156 × 10 16 3.10862 × 10 15
0.55.77316 × 10 15 6.21725 × 10 15 8.88178 × 10 16 3.38618 × 10 15
0.66.10623 × 10 15 6.66134 × 10 15 1.05471 × 10 16 3.71925 × 10 15
0.76.46705 × 10 15 6.93889 × 10 15 1.08247 × 10 15 3.85803 × 10 15
0.86.74461 × 10 15 7.35523 × 10 15 1.19349 × 10 15 4.05231 × 10 15
0.97.13318 × 10 15 7.60503 × 10 15 1.02696 × 10 15 4.16334 × 10 15
Table 2. AEs of Example 1 at N = 12 .
Table 2. AEs of Example 1 at N = 12 .
t a = 1 , b = 3 a = 2 , b = 4 a = 3 , b = 5
0.11.11022 × 10 16 3.33067 × 10 16 1.11022 × 10 16
0.22.22045 × 10 16 4.44089 × 10 16 2.77556 × 10 16
0.32.22045 × 10 16 7.21645 × 10 16 3.33067 × 10 16
0.42.22045 × 10 16 8.32667 × 10 16 3.33067 × 10 16
0.51.66533 × 10 16 8.32667 × 10 16 3.88578 × 10 16
0.63.33067 × 10 16 8.32667 × 10 16 3.88578 × 10 16
0.72.49805 × 10 16 8.88178 × 10 16 4.16334 × 10 16
0.83.60822 × 10 16 1.08247 × 10 15 4.99649 × 10 16
0.94.44089 × 10 16 1.05471 × 10 15 6.38378 × 10 16
Table 3. Comparison of AEs of Example 1 at a = 1, b = 1.
t       | Method in [52]   | Our Method at N = 14
0       | 2.9762 × 10^-3   | 0
1.008   | 1.0924 × 10^-3   | 9.86874 × 10^-8
2.016   | 3.9923 × 10^-4   | 5.27874 × 10^-8
3.012   | 1.4726 × 10^-4   | 2.30097 × 10^-8
4.008   | 5.4206 × 10^-5   | 9.63606 × 10^-9
5.004   | 1.9913 × 10^-5   | 3.97005 × 10^-9
6       | 7.3004 × 10^-6   | 1.61747 × 10^-9
7.008   | 2.6387 × 10^-6   | 6.45425 × 10^-10
8.004   | 9.6327 × 10^-7   | 2.57282 × 10^-10
9       | 3.5087 × 10^-7   | 1.08199 × 10^-10
10.008  | 1.2596 × 10^-7   | 1.90535 × 10^-10
11.004  | 4.5672 × 10^-8   | 2.68969 × 10^-9
11.988  | 1.6717 × 10^-8   | 3.06842 × 10^-6
Table 4. AEs of Example 2 at N = 2 .
Table 4. AEs of Example 2 at N = 2 .
t a = 0.5 , b = 4 a = 2 , b = 8 a = 1 , b = 3
0.11.05497 × 10 4 3.47492 × 10 4 6.66711 × 10 4
0.23.72070 × 10 4 1.34005 × 10 3 2.61693 × 10 3
0.36.50468 × 10 4 2.82842 × 10 3 5.70139 × 10 3
0.46.93596 × 10 4 4.56552 × 10 3 9.67302 × 10 3
0.51.58985 × 10 4 6.20886 × 10 3 1.41893 × 10 2
0.61.38779 × 10 3 7.32403 × 10 3 1.88159 × 10 2
0.74.46876 × 10 3 7.38901 × 10 3 2.30307 × 10 2
0.89.68835 × 10 3 5.79933 × 10 3 2.62293 × 10 2
0.91.77274 × 10 2 7.87423 × 10 3 2.77310 × 10 2
Table 5. AEs of Example 2 at N = 3 .
Table 5. AEs of Example 2 at N = 3 .
t a = 0.5 , b = 4 a = 2 , b = 8 a = 1 , b = 3
0.11.08735 × 10 4 3.14522 × 10 4 1.70807 × 10 4
0.22.99588 × 10 4 1.01358 × 10 3 5.05742 × 10 4
0.34.43908 × 10 4 1.80481 × 10 3 8.12949 × 10 4
0.45.10886 × 10 4 2.49366 × 10 3 9.98418 × 10 4
0.55.65089 × 10 4 2.98097 × 10 3 1.06351 × 10 3
0.67.63035 × 10 4 3.25955 × 10 3 1.10154 × 10 3
0.71.34886 × 10 3 3.40979 × 10 3 1.29345 × 10 3
0.82.64908 × 10 3 3.59449 × 10 3 1.90254 × 10 3
0.95.06659 × 10 3 4.05282 × 10 3 3.26851 × 10 3
Table 6. AEs of Example 2 at N = 12 .
Table 6. AEs of Example 2 at N = 12 .
t a = 1 , b = 1 a = 1 , b = 3 a = 2 , b = 1 a = 2 , b = 2
0.11.9984 × 10 15 6.66134 × 10 16 8.88178 × 10 16 1.55431 × 10 15
0.23.88578 × 10 15 8.88178 × 10 16 9.99201 × 10 16 2.66454 × 10 15
0.35.32907 × 10 15 1.33227 × 10 15 2.22045 × 10 15 3.55271 × 10 15
0.46.99441 × 10 15 1.9984 × 10 15 3.21965 × 10 15 5.10703 × 10 15
0.57.99361 × 10 15 2.22045 × 10 15 2.88658 × 10 15 5.66214 × 10 15
0.68.65974 × 10 15 2.66454 × 10 15 3.44169 × 10 15 5.88418 × 10 15
0.78.21565 × 10 15 2.33147 × 10 15 3.55271 × 10 15 5.66214 × 10 15
0.88.32667 × 10 15 2.33147 × 10 15 2.55351 × 10 15 5.88418 × 10 15
0.98.65974 × 10 15 2.44249 × 10 15 3.77476 × 10 15 6.10623 × 10 15
Table 7. AEs of Example 2 at N = 13 .
Table 7. AEs of Example 2 at N = 13 .
t a = 2 , b = 3 a = 3 , b = 3 a = 3 , b = 4
0.11.11022 × 10 16 1.11022 × 10 16 1.11022 × 10 16
0.2000
0.31.11022 × 10 16 1.11022 × 10 16 1.11022 × 10 16
0.44.44089 × 10 16 4.44089 × 10 16 4.44089 × 10 16
0.53.33067 × 10 16 3.33067 × 10 16 3.33067 × 10 16
0.62.22045 × 10 16 2.22045 × 10 16 2.22045 × 10 16
0.71.11022 × 10 16 1.11022 × 10 16 1.11022 × 10 16
0.8000
0.92.22045 × 10 16 2.22045 × 10 15 2.22045 × 10 16