Article

An Efficient and Accurate Numerical Approach for Fractional Bagley–Torvik Equations: Hermite Polynomials Combined with Least Squares

by Heba S. Osheba 1, Mohamed A. Ramadan 1,* and Taha Radwan 2,*
1 Mathematics and Computer Science Department, Faculty of Science, Menoufia University, Shebin El-Kom 32511, Egypt
2 Department of Management Information Systems, College of Business and Economics, Qassim University, Buraydah 51452, Saudi Arabia
* Authors to whom correspondence should be addressed.
Fractal Fract. 2026, 10(1), 37; https://doi.org/10.3390/fractalfract10010037
Submission received: 1 December 2025 / Revised: 2 January 2026 / Accepted: 4 January 2026 / Published: 7 January 2026
(This article belongs to the Special Issue Advanced Numerical Methods for Fractional Functional Models)

Abstract

This paper proposes an efficient and accurate numerical framework for solving fractional Bagley–Torvik equations, which model viscoelastic and memory-dependent dynamic systems. The method combines Hermite polynomial approximation with a least-squares optimization scheme to achieve high-accuracy solutions. By leveraging the analytical properties of Hermite polynomials, the Caputo fractional derivatives are computed efficiently, avoiding the complexities of direct fractional differentiation. The resulting weighted least-squares formulation transforms the problem into a stable algebraic system. Numerical results confirm the method's superior accuracy, rapid convergence, and robustness compared with existing techniques, demonstrating its potential for broader applications in fractional-order boundary value problems.

1. Introduction

Fractional calculus, a generalization of classical integer-order calculus, has gained significant attention in recent decades for its ability to model complex systems with memory and non-local effects. Unlike traditional derivatives, which depend only on a function’s local behavior, fractional derivatives and integrals incorporate the entire history of a process, making them indispensable for accurately describing phenomena in viscoelasticity, finance, anomalous diffusion, and control theory. The standard form of the fractional Bagley–Torvik equation (BTE) can be written as the following:
$$A_2 D^{\alpha} y(t) + A_1 D^{\beta} y(t) + A_0 y(t) = f(t),$$
where $A_0$, $A_1$, and $A_2$ are constants, $1 < \alpha \le 2$ and $0 < \beta \le 1$ are fractional orders, and $f(t)$ is an external forcing function. Due to its wide-ranging applications in engineering and applied sciences, including the modeling of damping materials and fluid mechanics, the development of robust and efficient methods for solving the BTE is a critical area of research.
While the analytical solution of the BTE is challenging to obtain, various numerical techniques have been developed to find approximate solutions, including finite difference schemes, wavelet-based approaches, and spectral collocation methods using various polynomial families. Although effective, many of these approaches suffer from limitations such as complex implementation, computational cost, or stability and convergence issues when high accuracy is required. Traditional finite difference methods can struggle with the non-local nature of fractional derivatives, often requiring dense matrices that increase the computational burden, while spectral methods offer high accuracy but can be sensitive to the choice of basis functions and the handling of fractional derivatives. The existing literature demonstrates the utility of orthogonal polynomials; however, combining these basis functions with a robust optimization framework that efficiently handles fractional derivatives and boundary conditions remains an area for improvement. Our method directly addresses these shortcomings by proposing a novel framework that leverages the analytical properties of Hermite polynomials within a stable least-squares formulation, thereby transforming the problem into a well-conditioned system of algebraic equations and ensuring superior accuracy and rapid convergence compared with existing methods.
The use of orthogonal polynomials, such as Hermite polynomials, offers a powerful alternative for approximating solutions to fractional differential equations. The analytical properties of Hermite polynomials, including their efficient calculation and recursive relationships, provide a strong foundation for developing accurate and stable numerical schemes. Previous work has demonstrated the utility of Hermite polynomials in conjunction with collocation and operational matrices to solve fractional-order systems. However, combining these basis functions with a robust optimization framework remains an area for improvement.
To overcome the shortcomings of existing methods, this paper proposes a novel and highly accurate numerical framework that synergizes the strengths of Hermite polynomial approximation with a least-squares optimization scheme. By expanding the solution in a series of Hermite polynomials, we can leverage their properties to efficiently compute the Caputo fractional derivatives without the complexity of direct discretization. The problem is then recast as a weighted least-squares minimization problem, which transforms the original fractional differential equation into a stable and well-conditioned system of algebraic equations. This approach not only ensures high-accuracy solutions but also provides a systematic way to enforce initial and boundary conditions. Our numerical results will confirm that this combined methodology surpasses existing techniques in accuracy, convergence rate, and overall robustness, highlighting its potential for wider application to general fractional-order boundary value problems.
Several numerical approaches have been developed to obtain approximate solutions of the fractional Bagley–Torvik equation. Jeon and Bu [1] proposed an improved numerical technique based on the fractional integral formula and Adams–Moulton method, achieving higher accuracy for fractional dynamic systems. Similarly, Liu et al. [2] presented a Taylor expansion-based approach for solving the BTE with integral boundary conditions, providing an efficient tool for boundary-value formulations. Other researchers have explored polynomial-based spectral techniques. Taha et al. [3] presented a novel analytical technique for solving fractional Bagley–Torvik equations, specifically applying it to the model describing the motion of a rigid plate in Newtonian fluids, while Zeen El Deen et al. [4] developed a fourth-kind Chebyshev operational Tau algorithm, both demonstrating the power of orthogonal polynomial bases in obtaining high-precision results.
Kamal et al. [5] compared least-squares methods for nonlinear time-fractional gas dynamic equations, confirming the robustness of least-squares formulations in handling fractional nonlinearities. In a related direction, Zhang et al. [6] adopted the Hermite wavelet method to approximate solutions of the BTE, emphasizing the effectiveness of Hermite functions in fractional modeling. Recent comprehensive reviews, such as that by Yang, Zhao, and Li [7], highlighted both the progress and the remaining challenges in developing efficient algorithms for fractional Bagley–Torvik equations. Several further studies address the fractional Bagley–Torvik equation as a specific type of multi-term linear fractional differential equation. S. T. Ejaz et al. [8] provided a general numerical comparative analysis of methods for solving various fractional differential equations, rather than focusing exclusively on the Bagley–Torvik equation itself. Ford and Connolly [9] developed systems-based decomposition schemes specifically for solving multi-term fractional differential equations, for which the Bagley–Torvik equation is a primary example. Rahimkhani and Ordokhani [10] applied Müntz–Legendre polynomials to solve the Bagley–Torvik equation effectively over large intervals. Sayed et al. [11] utilized a Legendre–Galerkin spectral algorithm, with an application provided specifically for the Bagley–Torvik equation. Askari [12] employed Lucas polynomials for the numerical solution of the fractional Bagley–Torvik equations. El-Gamel et al. [13] used a Chelyshkov–Tau approach to address the Bagley–Torvik equation. El-Gamel and Abd El-Hady [14] utilized the Legendre collocation method to solve the same equation. Mekkaouii and Hammouch [15] focused on finding approximate analytical solutions to the Bagley–Torvik equation using the fractional iteration method.
Dinçel [16] employed a sine–cosine wavelet method for approximating solutions of the fractional Bagley–Torvik equation.
The remainder of this paper is organized as follows: Section 2 introduces the essential mathematical preliminaries, including the definitions of fractional derivatives, the properties of Hermite polynomials, and the formulation of the least-squares method that forms the foundation of the proposed scheme. Section 3 presents the construction of the Hermite polynomial approximation and develops the weighted least-squares model for the fractional Bagley–Torvik equation. Section 4 outlines the implementation procedure and discusses the computational framework of the proposed approach. Section 5 provides several numerical experiments to validate the efficiency, accuracy, and convergence behavior of the method, along with comparisons to existing techniques reported in the literature. Finally, Section 6 summarizes the main conclusions and highlights potential directions for future research and extensions of the proposed methodology.

2. Mathematical Preliminaries

This section introduces the mathematical foundations necessary for the proposed Hermite-least-squares numerical framework. It includes the definitions of fractional derivatives in the Caputo sense, essential properties of Hermite polynomials, and the formulation of the least-squares approach used to construct the approximate solution.

2.1. Fractional Calculus

Fractional calculus extends the concept of differentiation and integration to non-integer orders, allowing for more accurate modeling of systems with memory and hereditary properties. Among several definitions of fractional derivatives, the Caputo definition is widely used in engineering and applied sciences due to its ability to handle standard initial conditions:
$$D_t^{\mu} y(t) = \frac{1}{\Gamma(n-\mu)} \int_0^t (t-\tau)^{n-\mu-1}\, y^{(n)}(\tau)\, d\tau,$$
where $\Gamma(\cdot)$ denotes the Gamma function and $n$ is the smallest integer greater than $\mu$. This definition ensures that the Caputo fractional derivative of a constant is zero, which simplifies the treatment of physical initial conditions.
The Caputo fractional derivative of order $\gamma$ (where $n-1 < \gamma < n$) acts on power functions as
$$D^{\gamma} t^{k} = \begin{cases} \dfrac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}\, t^{k-\gamma}, & k \in \mathbb{N}_0 \ \text{and} \ k \ge n, \\ 0, & k \in \mathbb{N}_0 \ \text{and} \ k < n. \end{cases}$$
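As an illustration, the power-function rule above takes only a few lines of code. The following is a Python sketch (the paper provides no code); the helper name `caputo_monomial` is ours, and the standard-library `math.gamma` supplies $\Gamma$.

```python
from math import gamma, ceil

def caputo_monomial(k, mu):
    """Caputo derivative of t**k of order mu, returned as (coeff, exponent)
    so that D^mu t^k = coeff * t**exponent.  For a non-negative integer
    k < ceil(mu) the derivative vanishes (e.g. constants map to zero)."""
    n = ceil(mu)                              # smallest integer >= mu
    if float(k).is_integer() and k < n:
        return 0.0, 0.0
    return gamma(k + 1) / gamma(k + 1 - mu), k - mu

# D^{3/2}(x^2) = Gamma(3)/Gamma(3/2) x^{1/2} = (4/sqrt(pi)) sqrt(x)
c, e = caputo_monomial(2, 1.5)
```

This reproduces, for example, the values used in the worked examples later, such as $D^{3/2}(4x^2 - 2) = 16\sqrt{x}/\sqrt{\pi}$.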

2.2. Hermite Polynomials

Hermite polynomials are a sequence of orthogonal polynomials, denoted by $H_n(x)$, where $n$ is a non-negative integer and $x$ is a real variable. They can be generated using the Rodrigues formula:
$$H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2}.$$
They can also be generated using the recurrence relation $H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)$, with initial conditions $H_0(x) = 1$ and $H_1(x) = 2x$.
The first few Hermite polynomials, used here on the interval $[0, 1]$, are:
$$H_0(x) = 1, \quad H_1(x) = 2x, \quad H_2(x) = 4x^2 - 2, \quad H_3(x) = 8x^3 - 12x, \quad H_4(x) = 16x^4 - 48x^2 + 12.$$
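The recurrence above generates these polynomials mechanically. The following Python sketch (our own illustration, not from the paper; the function name `hermite_coeffs` is hypothetical) builds the monomial coefficients of $H_0, \ldots, H_N$:

```python
import numpy as np

def hermite_coeffs(N):
    """Monomial coefficients of the physicists' Hermite polynomials
    H_0 .. H_N via H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x).
    Row n holds the coefficients of H_n in increasing powers of x."""
    H = np.zeros((N + 1, N + 1))
    H[0, 0] = 1.0                              # H_0(x) = 1
    if N >= 1:
        H[1, 1] = 2.0                          # H_1(x) = 2x
    for n in range(1, N):
        H[n + 1, 1:] += 2.0 * H[n, :-1]        # 2x * H_n shifts powers up
        H[n + 1, :] -= 2.0 * n * H[n - 1, :]   # minus 2n * H_{n-1}
    return H

H = hermite_coeffs(4)
# H[2] = [-2, 0, 4, 0, 0], i.e. H_2(x) = 4x^2 - 2
```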

2.3. Fractional Derivatives of Hermite Polynomials

We now derive the fractional derivatives of the Hermite polynomials, using the Caputo fractional derivative (for consistency with the finite-interval analysis):
$$D^{\alpha} f(x) = \frac{1}{\Gamma(1-\alpha)} \int_a^x \frac{f'(t)}{(x-t)^{\alpha}}\, dt, \quad 0 < \alpha < 1.$$
Express the Hermite polynomials $H_n(x)$ in power-series form:
$$H_n(x) = n! \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^m (2x)^{n-2m}}{m!\, (n-2m)!}.$$
Compute $D^{\alpha} H_n(x)$ term by term using the fractional derivative of monomials,
$$D^{\alpha} x^{k} = \frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\, x^{k-\alpha}, \quad k \ge 1$$
(the constant term differentiates to zero), and simplify to the closed form:
$$D^{\alpha} H_n(x) = \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{n!\, (-1)^m\, 2^{\,n-2m}}{m!\, \Gamma(n-2m+1-\alpha)}\, x^{\,n-2m-\alpha}.$$
This derivation makes the gamma-function manipulations explicit, enhancing transparency.
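A direct implementation of this closed form is straightforward. The Python sketch below is our own illustration (the function name `caputo_hermite` is hypothetical); it evaluates $D^{\alpha} H_n(x)$, skipping the integer powers below $\lceil \alpha \rceil$ whose Caputo derivative vanishes, consistent with Section 2.1.

```python
from math import gamma, factorial, ceil

def caputo_hermite(n, alpha, x):
    """Evaluate D^alpha H_n(x) from the closed form above.  Integer powers
    k = n - 2m with k < ceil(alpha) have vanishing Caputo derivative and
    are skipped (this removes the constant term, and also the linear term
    when alpha > 1)."""
    total = 0.0
    for m in range(n // 2 + 1):
        k = n - 2 * m                          # power of x in this term
        if k < ceil(alpha):
            continue
        coeff = (factorial(n) * (-1) ** m * 2 ** k
                 / (factorial(m) * gamma(k + 1 - alpha)))
        total += coeff * x ** (k - alpha)
    return total
```

For instance, `caputo_hermite(2, 1.5, x)` reproduces $D^{3/2} H_2(x) = 16\sqrt{x}/\sqrt{\pi}$, used in Example 1 below.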

2.4. Least-Squares Formulation

The least-squares (LS) method is a widely used optimization technique for obtaining approximate solutions to differential equations. The key idea is to minimize the residual error in the mean-square sense over the domain of interest.
Assume that the solution $y(x)$ is approximated by a truncated Hermite series:
$$y_N(x) = \sum_{i=0}^{N} a_i H_i(x).$$
Substituting $y_N(t)$ into the fractional Bagley–Torvik equation yields a residual function $R(t)$. The least-squares approach minimizes the integral of the squared residual:
$$J(a_0, a_1, \ldots, a_N) = \int_0^T [R(t)]^2\, w(t)\, dt.$$
Taking partial derivatives of J with respect to each coefficient a i and setting them equal to zero produces a system of algebraic equations. Solving this system provides the optimal coefficients that minimize the residual in the least-squares sense. This weighted least-squares formulation ensures numerical stability and accuracy, particularly when combined with the orthogonality of the Hermite basis functions.

3. Description of the Proposed Hermite Least Squares Method

Here, we consider the standard form of the fractional Bagley–Torvik equation (BTE):
$$A_2 D^{\alpha} y(x) + A_1 D^{\beta} y(x) + A_0 y(x) = f(x).$$
We assume an approximate solution of the following form, together with its second derivative:
$$y_N(x) = \sum_{i=0}^{N} a_i H_i(x), \qquad y_N''(x) = \sum_{k=2}^{N} 4\, a_k\, k(k-1)\, H_{k-2}(x).$$
The expansion of the approximate solution is:
$$y_N(x) = a_0 H_0(x) + a_1 H_1(x) + a_2 H_2(x) + \cdots + a_N H_N(x).$$
Substituting the approximate solution into the BTE yields the following:
$$A_2 D^{\alpha} \sum_{i=0}^{N} a_i H_i(x) + A_1 D^{\beta} \sum_{i=0}^{N} a_i H_i(x) + A_0 \sum_{i=0}^{N} a_i H_i(x) = f(x).$$
Moving the right-hand side of the equation to the left-hand side gives the residual, which is set to zero. The residual function $R(x)$ of the above expression is:
$$R(x) = A_2 \sum_{i=0}^{N} a_i D^{\alpha} H_i(x) + A_1 \sum_{i=0}^{N} a_i D^{\beta} H_i(x) + A_0 \sum_{i=0}^{N} a_i H_i(x) - f(x).$$
From this residual function, we generate the functional $S(a_0, a_1, \ldots, a_N)$ using a weight function $W(x)$, taken here as $W(x) = 1$ for simplicity:
$$S(a_0, a_1, \ldots, a_N) = \int_a^b [R(x)]^2\, dx.$$
Finding the values of $a_i$, $i = 0, 1, \ldots, N$, that minimize $S$ is equivalent to finding the best approximate solution. The minimum of $S$ is obtained by setting:
$$\frac{\partial S}{\partial a_j} = 0, \quad j = 0, 1, \ldots, N.$$
Using this condition, we obtain a system of $N+1$ equations. For $j = 0$:
$$\frac{\partial S}{\partial a_0} = \int_a^b 2\, \frac{\partial R(x)}{\partial a_0}\, R(x)\, dx = 0.$$
For $j = 1$:
$$\frac{\partial S}{\partial a_1} = \int_a^b 2\, \frac{\partial R(x)}{\partial a_1}\, R(x)\, dx = 0.$$
This process continues up to $j = N$, generating a system of $N+1$ linear algebraic equations in the $N+1$ unknown constants $(a_0, a_1, \ldots, a_N)$. This system can be solved by a method such as Gaussian elimination; the resulting constants are then substituted back into the assumed approximate solution to obtain the solution of the fractional Bagley–Torvik equation.
We now formulate the system in general matrix form. The system of equations is derived from the condition that the partial derivative of the functional $S$ with respect to each coefficient $a_j$ is zero, for all $j$ from $0$ to $N$, giving $N+1$ equations. The core equation for each $j$ is:
$$\int_a^b \frac{\partial R(x)}{\partial a_j}\, R(x)\, dx = 0,$$
where the residual function is given by:
$$R(x) = \sum_{i=0}^{N} a_i \left( A_2 D^{\alpha} H_i(x) + A_1 D^{\beta} H_i(x) + A_0 H_i(x) \right) - f(x).$$
The partial derivative of the residual with respect to $a_j$ is:
$$\frac{\partial R(x)}{\partial a_j} = A_2 D^{\alpha} H_j(x) + A_1 D^{\beta} H_j(x) + A_0 H_j(x).$$
For clarity, define a combined function $\Psi_k(x)$ for each basis function $H_k(x)$:
$$\Psi_k(x) = A_2 D^{\alpha} H_k(x) + A_1 D^{\beta} H_k(x) + A_0 H_k(x).$$
Using this, the system of equations becomes:
$$\int_a^b \Psi_j(x) \left( \sum_{i=0}^{N} a_i \Psi_i(x) - f(x) \right) dx = 0, \quad j = 0, 1, \ldots, N.$$
Expanding and rearranging the terms to form a linear system $M\mathbf{a} = \mathbf{b}$, we get:
$$\sum_{i=0}^{N} a_i \int_a^b \Psi_j(x)\, \Psi_i(x)\, dx = \int_a^b \Psi_j(x)\, f(x)\, dx, \quad j = 0, 1, \ldots, N.$$
We now define the general form of the matrix $M$ and the vector $\mathbf{b}$. The system is linear of size $(N+1) \times (N+1)$ in the unknown coefficients $\mathbf{a} = [a_0, a_1, \ldots, a_N]^T$. The matrix $M$ is symmetric, with the element in row $j$ and column $i$ given by the integral on the left-hand side:
$$M_{ji} = \int_a^b \left[ A_2 D^{\alpha} H_j(x) + A_1 D^{\beta} H_j(x) + A_0 H_j(x) \right] \left[ A_2 D^{\alpha} H_i(x) + A_1 D^{\beta} H_i(x) + A_0 H_i(x) \right] dx.$$
The matrix takes the form:
$$M = \begin{pmatrix} M_{00} & M_{01} & \cdots & M_{0N} \\ \vdots & & & \vdots \\ M_{N0} & M_{N1} & \cdots & M_{NN} \end{pmatrix}.$$
The vector $\mathbf{b}$ is a column vector of size $(N+1) \times 1$, with the element in row $j$ given by the integral on the right-hand side:
$$b_j = \int_a^b \left[ A_2 D^{\alpha} H_j(x) + A_1 D^{\beta} H_j(x) + A_0 H_j(x) \right] f(x)\, dx.$$
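Assuming the combined functions $\Psi_j(x)$ are available as callables, the matrix $M$ and vector $\mathbf{b}$ can be assembled by numerical quadrature. A minimal Python sketch follows (our illustration, not the authors' code; the name `assemble_normal_equations` is hypothetical):

```python
import numpy as np

def assemble_normal_equations(psis, f, a, b, n_quad=64):
    """Build the least-squares system M a = b of the text:
    M_ji = int_a^b Psi_j Psi_i dx  and  b_j = int_a^b Psi_j f dx,
    approximated with Gauss-Legendre quadrature.  `psis` is a list of
    vectorized callables Psi_j(x) and `f` is the forcing function."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map [-1, 1] -> [a, b]
    w = 0.5 * (b - a) * weights
    P = np.array([psi(x) for psi in psis])      # shape (N+1, n_quad)
    M = (P * w) @ P.T                           # symmetric Gram matrix
    rhs = (P * w) @ f(x)
    return M, rhs
```

For instance, with $\Psi_0(x) = 1$, $\Psi_1(x) = 2x$ on $[0,1]$ and $f(x) = 1 + x$, this reproduces the upper-left $2 \times 2$ block of the matrix $A$ and the first two entries of $\mathbf{d}$ in Example 1 below.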

4. Discussion of Existence, Uniqueness, and Convergence Analysis

This section analyzes the existence, uniqueness, and convergence properties of the proposed method.

4.1. Existence and Uniqueness

The proposed hybrid method, Hermite polynomials coupled with least squares, transforms the original fractional differential equation into a system of linear algebraic equations represented by the matrix equation $M\mathbf{a} = \mathbf{b}$. The existence and uniqueness of the solution to this system depend on the properties of the matrix $M$, which is defined by the integrals of products of the combined basis functions $\Phi_j(t)$ and $\Phi_i(t)$:
$$M_{ji} = \int_a^b \Phi_j(t)\, \Phi_i(t)\, dt,$$
where
$$\Phi_k(t) = A_2 D^{\alpha} H_k(t) + A_1 D^{\beta} H_k(t) + A_0 H_k(t).$$
The matrix $M$ is a Gram matrix, which is inherently symmetric: the element $M_{ij}$ equals $M_{ji}$ by the commutativity of multiplication inside the integral,
$$M_{ij} = \int_a^b \Phi_i(t)\, \Phi_j(t)\, dt = \int_a^b \Phi_j(t)\, \Phi_i(t)\, dt = M_{ji}.$$
A unique solution of the system $M\mathbf{a} = \mathbf{b}$ exists if and only if the matrix $M$ is non-singular. For a Gram matrix, this is equivalent to the condition that the set of functions $\{\Phi_0(t), \Phi_1(t), \ldots, \Phi_N(t)\}$ is linearly independent over the interval $[a, b]$. If these functions are linearly independent, $M$ is positive definite, ensuring that a unique solution vector $\mathbf{a} = [a_0, a_1, \ldots, a_N]^T$ exists.
Therefore, the existence and uniqueness of the approximate solution depend on the linear independence of the functions obtained by applying the fractional differential operator to the Hermite basis. This is discussed next.

4.2. Theoretical Rigor: Linear Independence and Gram Matrix Bounds

We strengthen the theoretical analysis with the following results.
Theorem 1 (Linear independence).
Consider the basis $\{\phi_n(x) = D^{\alpha} H_n(x)\}_{n=1}^{N}$ (excluding $n = 0$, since $D^{\alpha} H_0 = 0$). Then $\{\phi_n\}_{n=1}^{N}$ is linearly independent on $[a, b]$ with $a > 0$.
Proof. Each $\phi_n(x)$ has leading term $x^{n-\alpha}$ with coefficient $c_n = \frac{n!\, 2^n}{\Gamma(n+1-\alpha)} \ne 0$. Assume $\sum_{n=1}^{N} \lambda_n \phi_n(x) = 0$ for $x \in [a, b]$. The coefficient of the highest power $x^{N-\alpha}$ must vanish, implying $\lambda_N = 0$. Proceeding inductively, all $\lambda_n = 0$, so the basis functions are linearly independent. □
Theorem 2 (Bound on the Gram matrix condition number).
Define the Gram matrix $G$ with entries
$$G_{ij} = \int_a^b \phi_i(x)\, \phi_j(x)\, dx.$$
Then $\kappa(G) \le C\, 4^N (b/a)^{2N}$, where $C > 0$ is a constant independent of $N$ and $\kappa(G) = \lambda_{\max}(G)/\lambda_{\min}(G)$.
Proof. Express $\phi_n$ in the monomial basis. The change-of-basis matrix $M$ (lower triangular) has diagonal entries $2^n$, so $\kappa(M) \le C\, 2^N$. The Gram matrix $\tilde{G}$ for the monomials satisfies $\kappa(\tilde{G}) \le C\, (b/a)^{2N}$. Since $G = M^T \tilde{G} M$,
$$\kappa(G) \le \kappa(M)^2\, \kappa(\tilde{G}) \le (2^N)^2\, C\, (b/a)^{2N} = C\, 4^N (b/a)^{2N}.$$
Implication: $\kappa(G)$ grows exponentially with $N$, which is typical for polynomial bases. □
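The exponential growth implied here can be observed numerically. The following small check is our own illustration (not from the paper): it computes the Gram matrix of plain monomials on $[a, b] = [0.5, 1]$ in closed form and shows that its condition number grows rapidly with $N$.

```python
import numpy as np

# Illustrative check: Gram matrix of monomials 1, x, ..., x^N on [0.5, 1],
# with entries G_ij = int_a^b x^{i+j} dx computed in closed form.  The
# condition number grows rapidly with N, matching the exponential trend.
a, b = 0.5, 1.0
conds = []
for N in (2, 4, 6):
    G = np.array([[(b ** (i + j + 1) - a ** (i + j + 1)) / (i + j + 1)
                   for j in range(N + 1)] for i in range(N + 1)])
    conds.append(np.linalg.cond(G))
```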

4.3. Convergence

The convergence of the method concerns whether the approximate solution $y_N(t)$ approaches the exact solution $y(t)$ as the number of basis functions $N$ increases. The Hermite polynomials form a complete basis for the space of square-integrable functions $L^2([a, b])$: any function in this space can be approximated to arbitrary accuracy by a linear combination of Hermite polynomials. The least-squares method minimizes the $L^2$-norm of the residual function $R(t)$, defined as:
$$R(t) = \sum_{i=0}^{N} a_i \Phi_i(t) - f(t).$$
The functional to be minimized is
$$S(\mathbf{a}) = \int_a^b [R(t)]^2\, dt = \| R(t) \|_{L^2}^2.$$
The core principle of the method is that the coefficients $a_i$ are chosen to make the residual orthogonal to each of the basis functions $\Phi_j(t)$, as expressed by the normal equations $\int_a^b \Phi_j(t)\, R(t)\, dt = 0$ for $j = 0, 1, \ldots, N$. This ensures that $y_N(t)$ is the best approximation to the true solution, in the $L^2$ sense, within the subspace spanned by the basis functions.
Since the Hermite polynomials form a complete basis, as $N \to \infty$ the finite-dimensional subspace spanned by $\{H_0, H_1, \ldots, H_N\}$ becomes dense in the solution space. Consequently, the minimum of the residual functional $S(\mathbf{a})$ approaches zero, and the approximate solution converges to the exact solution in the $L^2$ norm:
$$\| y_N - y \|_{L^2} \to 0 \quad \text{as} \quad N \to \infty.$$
The convergence rate (algebraic or spectral) depends on the smoothness of the exact solution and on the properties of the fractional operators involved; the method inherits the high-order accuracy typical of spectral methods.

5. Numerical Examples and Results

This section is dedicated to a thorough investigation into the effectiveness and accuracy of a novel hybrid technique developed for solving the fractional differential equation known as the Bagley–Torvik equation. The core of this methodology lies in combining Hermite polynomials for approximating the solution with the least squares method for minimizing the residual error over a specified interval. The objective is to rigorously analyze the method’s performance by applying it to several specific examples of the Bagley–Torvik equation, each with different initial or boundary conditions. Furthermore, a critical comparative analysis is conducted, where the results of the proposed hybrid method are measured against those obtained from other existing and recent numerical techniques, such as the Bernstein collocation, Subdivision collocation, and various spectral methods. The aim of this comprehensive evaluation is to demonstrate the superior accuracy and efficiency of the proposed approach across various test cases.
Example 1.
Consider the fractional differential equation known as the Bagley–Torvik equation [8]:
$$y''(x) + D^{3/2} y(x) + y(x) = 1 + x, \quad 0 \le x \le 1,$$
subject to the initial conditions $y(0) = 1$, $y'(0) = 1$. The exact solution of this equation is $y(x) = 1 + x$.
This problem was recently solved by S. T. Ejaz et al. [8] using two approaches, namely Bernstein collocation method and Subdivision collocation method.
We now apply our hybrid technique, which combines Hermite polynomials with the least-squares method, taking the case $N = 2$, where $N$ represents the highest degree of the Hermite polynomial.
Step 1: Choose a Truncated Hermite Polynomial Expansion.
The analytic solution $y(x)$ is approximated using a truncated series:
$$y(x) \approx y_2(x) = \sum_{i=0}^{2} a_i H_i(x) = a_0 H_0(x) + a_1 H_1(x) + a_2 H_2(x),$$
$$y(x) \approx a_0 + 2x\, a_1 + (4x^2 - 2)\, a_2, \qquad y'(x) = 2 a_1 + 8x\, a_2,$$
where the Hermite polynomials are:
$$H_0(x) = 1, \quad H_1(x) = 2x, \quad H_2(x) = 4x^2 - 2.$$
Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain:
$$y''(x) = \sum_{k=2}^{N} 4\, a_k\, k(k-1)\, H_{k-2}(x) = 8 a_2.$$
Step 3: Compute the Caputo Fractional Derivative of order $3/2$.
Evaluating the fractional derivative $D^{3/2}$ gives:
$$D^{3/2}(1) = 0, \quad D^{3/2}(2x) = 0, \quad D^{3/2}(4x^2 - 2) = \frac{16 \sqrt{x}}{\sqrt{\pi}},$$
$$D^{3/2} y(x) = D^{3/2}\left[ a_0 + 2x\, a_1 + (4x^2 - 2)\, a_2 \right] = a_2\, \frac{16 \sqrt{x}}{\sqrt{\pi}}.$$
Step 4: Transformed Basis (the Ψ functions)
The functions $\Psi_i(x)$ are defined by applying the entire differential operator (second derivative, fractional derivative, and the identity) to the original Hermite polynomials:
$$\Psi_i(x) = H_i''(x) + D^{3/2} H_i(x) + H_i(x), \quad i = 0, 1, 2,$$
then
$$\Psi_0(x) = 1, \qquad \Psi_1(x) = 2x, \qquad \Psi_2(x) = \frac{16 \sqrt{x}}{\sqrt{\pi}} + 4x^2 + 6.$$
Step 5: Residual computation
The residual is defined as:
$$R(x; a_0, a_1, a_2) = y''(x) + D^{3/2} y(x) + y(x) - f(x) = 8 a_2 + D^{3/2}\left[ a_0 + 2x\, a_1 + (4x^2 - 2)\, a_2 \right] + a_0 + 2x\, a_1 + (4x^2 - 2)\, a_2 - 1 - x$$
$$= a_0 + 2x\, a_1 + \left( \frac{16 \sqrt{x}}{\sqrt{\pi}} + 4x^2 + 6 \right) a_2 - x - 1.$$
The least-squares approach chooses $(a_0, a_1, a_2)$ to minimize the squared $L^2$-norm of the residual on $[0, 1]$:
$$J(a_0, a_1, a_2) = \int_0^1 \left[ R(x; a_0, a_1, a_2) \right]^2 dx.$$
Setting the gradient of $J$ to zero yields the normal equations. Because $R(x; a_0, a_1, a_2)$ is linear in $(a_0, a_1, a_2)$, the normal equations form a linear system
$$A\mathbf{a} = \mathbf{d},$$
where the entries of the matrix $A$ and the column vector $\mathbf{d}$ are given by:
$$A_{ij} = \int_0^1 \Psi_i(x)\, \Psi_j(x)\, dx, \qquad d_i = \int_0^1 \Psi_i(x)\, f(x)\, dx, \qquad i, j = 0, 1, 2,$$
where Ψ j x ,   j = 0,1 , 2   are defined as in Equation (11).
So, we obtain the basis functions $\Psi_0(x), \Psi_1(x), \Psi_2(x)$ as follows:
$$\Psi_0(x) = 1, \qquad \Psi_1(x) = 2x, \qquad \Psi_2(x) = \frac{16 \sqrt{x}}{\sqrt{\pi}} + 4x^2 + 6.$$
The $3 \times 3$ matrix $A$ is then:
$$A = \begin{pmatrix} 1.00000 & 1.00000 & 13.3514 \\ 1.00000 & 1.3333 & 15.2216 \\ 13.3514 & 15.2216 & 188.7932 \end{pmatrix}.$$
Second, we compute the column vector $\mathbf{d}$ using Equation (13):
$$\mathbf{d} = \begin{pmatrix} 1.500 \\ 1.666 \\ 20.9622 \end{pmatrix}.$$
Step 6: Initial Conditions
Now we use the initial conditions $y(0) = 1$, $y'(0) = 1$ to obtain the system of two equations
$$y(0) = a_0 (1) + a_1 (0) + a_2 (-2) = a_0 - 2 a_2 = 1, \qquad y'(0) = 2 a_1 + 8(0)\, a_2 = 2 a_1 = 1.$$
These two conditions can be written in matrix form $C\mathbf{a} = \mathbf{b}$ as:
$$C = \begin{pmatrix} 1 & 0 & -2 \\ 0 & 2 & 0 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Finally, from Equations (12) and (14), we arrive at the linear system $M\mathbf{a} = \mathbf{F}$, where $M = \begin{pmatrix} A \\ C \end{pmatrix}$ is of size $5 \times 3$ and $\mathbf{F} = \begin{pmatrix} \mathbf{d} \\ \mathbf{b} \end{pmatrix}$ is of size $5 \times 1$:
$$M = \begin{pmatrix} 1.0000 & 1.0000 & 13.3514 \\ 1.0000 & 1.3333 & 15.2216 \\ 13.3514 & 15.2216 & 188.7932 \\ 1.0000 & 0 & -2.0000 \\ 0 & 2.0000 & 0 \end{pmatrix}, \qquad \mathbf{F} = \begin{pmatrix} 1.5000 \\ 1.6667 \\ 20.9622 \\ 1.0000 \\ 1.0000 \end{pmatrix}.$$
The objective is to determine the unknown vector $\mathbf{a} = [a_0\ \ a_1\ \ a_2]^T$ by solving this system. The resulting solution is $\mathbf{a} = [1.0000\ \ 0.5000\ \ 0.0000]^T$.
These coefficients $\{a_0, a_1, a_2\}$ were then used to form the approximate solution $y_2(x)$ as a weighted sum of the functions $H_i(x)$:
$$y_2(x) = \sum_{i=0}^{2} a_i H_i(x) = 1.0000 \times 1 + 0.5000 \times 2x + 0 \times (4x^2 - 2) = 1 + x.$$
Table 1 displays the exact and approximate solutions at the indicated $x$ points. In Table 2, we compare the accuracy of the proposed method with the Bernstein collocation and subdivision collocation methods presented in [8]. Table 2 illustrates that the proposed method outperforms both approaches, delivering better accuracy.
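For reference, the whole Example 1 computation can be reproduced in a few lines. This is our own Python sketch (not the authors' code); it assembles $A$ and $\mathbf{d}$ by Gauss–Legendre quadrature, stacks the initial-condition rows, and solves the $5 \times 3$ system with `numpy.linalg.lstsq`:

```python
import numpy as np

# Reproduction sketch of Example 1 with N = 2.
sp = np.sqrt(np.pi)
psis = [lambda x: np.ones_like(x),                      # Psi_0
        lambda x: 2 * x,                                # Psi_1
        lambda x: 16 * np.sqrt(x) / sp + 4 * x**2 + 6]  # Psi_2
f = lambda x: 1 + x

nodes, weights = np.polynomial.legendre.leggauss(200)
x = 0.5 * (nodes + 1); w = 0.5 * weights               # quadrature on [0, 1]
P = np.array([p(x) for p in psis])
A = (P * w) @ P.T                                      # 3x3 matrix A
d = (P * w) @ f(x)                                     # vector d
C = np.array([[1.0, 0.0, -2.0],                        # y(0)  = a0 - 2 a2 = 1
              [0.0, 2.0,  0.0]])                       # y'(0) = 2 a1      = 1
M = np.vstack([A, C]); F = np.concatenate([d, [1.0, 1.0]])
a, *_ = np.linalg.lstsq(M, F, rcond=None)
# a ~ [1, 0.5, 0], so y_2(x) = 1 + 0.5(2x) + 0 = 1 + x, the exact solution
```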
Example 2.
Consider the fractional differential equation known as the Bagley–Torvik equation [9,10]:
$$y''(x) + D^{1/2} y(x) + y(x) = 2 + x^2 + \frac{8}{3\sqrt{\pi}}\, x^{3/2}, \quad 0 \le x \le 1,$$
subject to the initial conditions $y(0) = 0$, $y'(0) = 0$. The exact solution of (15) is $y(x) = x^2$.
We apply our hybrid technique, which combines Hermite polynomials with the least-squares method, again taking the case $N = 2$, where $N$ represents the highest degree of the Hermite polynomial. This problem is considered in [9,10]: Ford and Connolly [9] compare the efficiency of three alternative decomposition schemes for its approximate solution using the Caputo form of the fractional derivative, while Rahimkhani and Ordokhani [10] propose a numerical method based on Müntz–Legendre polynomials.
Step 1: Choose a Truncated Hermite Polynomial Expansion.
The analytic solution $y(x)$ is approximated using a truncated series:
$$y(x) \approx y_2(x) = \sum_{i=0}^{2} a_i H_i(x) = a_0 H_0(x) + a_1 H_1(x) + a_2 H_2(x),$$
$$y(x) \approx a_0 + 2x\, a_1 + (4x^2 - 2)\, a_2, \qquad y'(x) = 2 a_1 + 8x\, a_2.$$
Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain:
$$y''(x) = \sum_{k=2}^{N} 4\, a_k\, k(k-1)\, H_{k-2}(x) = 8 a_2.$$
Step 3: Compute the Caputo Fractional Derivative of order $1/2$.
Evaluating the fractional derivative $D^{1/2}$ gives:
$$D^{1/2}(1) = 0, \quad D^{1/2}(2x) = \frac{4}{\sqrt{\pi}}\, x^{1/2}, \quad D^{1/2}(4x^2 - 2) = \frac{32}{3\sqrt{\pi}}\, x^{3/2},$$
$$D^{1/2} y(x) = D^{1/2}\left[ a_0 + 2x\, a_1 + (4x^2 - 2)\, a_2 \right] = a_1\, \frac{4}{\sqrt{\pi}}\, x^{1/2} + a_2\, \frac{32}{3\sqrt{\pi}}\, x^{3/2}.$$
Step 4: Transformed Basis (the Ψ functions)
The functions $\Psi_i(x)$ are defined by applying the entire differential operator (second derivative, fractional derivative, and the identity) to the original Hermite polynomials:
$$\Psi_i(x) = H_i''(x) + D^{1/2} H_i(x) + H_i(x), \quad i = 0, 1, 2,$$
then
$$\Psi_0(x) = 0 + 0 + 1 = 1, \qquad \Psi_1(x) = 0 + \frac{4}{\sqrt{\pi}} x^{1/2} + 2x = \frac{4}{\sqrt{\pi}} x^{1/2} + 2x,$$
$$\Psi_2(x) = 8 + \frac{32}{3\sqrt{\pi}} x^{3/2} + 4x^2 - 2 = \frac{32}{3\sqrt{\pi}} x^{3/2} + 4x^2 + 6.$$
Step 5: Residual computation
The residual equation is defined as the following:
R x , a 0 , a 1 , a 2 = y x + D 1 / 2 y x + y x f x = 8 a 2 + D 1 / 2 a 0 + 2 x a 1 + 4 x 2 2 a 2 + a 0 + 2 x a 1 + 4 x 2 2 a 2 2 x 2 8 3 π x 3 2 = a 0 + 4 π x 1 2 + 2 x a 1 + 32 3 π x 3 2 + 4 x 2 + 6 a 2 2 x 2 8 3 π x 3 2 .
The least-squares approach chooses ( a 0 , a 1 , a 2 ) to minimize the squared L 2 -norm of the residual J on [ 0, 1 ] :
J a 0 , a 1 , a 2 = 0 1 R ( x , a 0 , a 1 , a 2 ) 2 d x .
Setting the gradient of $J$ to zero yields the normal equations. Because $R(x, a_0, a_1, a_2)$ is linear in $(a_0, a_1, a_2)$, the normal equations take the linear matrix form
$$Aa = d$$
where the entries of the matrix $A$ and the column vector $d$ are given by:
$$A_{ij} = \int_0^1 \Psi_i(x)\Psi_j(x)\,dx,\qquad d_i = \int_0^1 \Psi_i(x)f(x)\,dx,\qquad i, j = 0, 1, 2,$$
where $\Psi_j(x)$, $j = 0, 1, 2$, are defined as in Equation (21).
So, we obtain the basis functions $\Psi_0(x)$, $\Psi_1(x)$, $\Psi_2(x)$ as:
$$\Psi_0(x) = 1,\quad \Psi_1(x) = \frac{4}{\sqrt{\pi}}x^{1/2} + 2x\quad\text{and}\quad \Psi_2(x) = \frac{32}{3\sqrt{\pi}}x^{3/2} + 4x^2 + 6.$$
Using Equation (23), the matrix $A$ and the vector $d$ are obtained as follows:
$$A = \begin{pmatrix} \int_0^1 \Psi_0\Psi_0\,dx & \int_0^1 \Psi_0\Psi_1\,dx & \int_0^1 \Psi_0\Psi_2\,dx \\ \int_0^1 \Psi_1\Psi_0\,dx & \int_0^1 \Psi_1\Psi_1\,dx & \int_0^1 \Psi_1\Psi_2\,dx \\ \int_0^1 \Psi_2\Psi_0\,dx & \int_0^1 \Psi_2\Psi_1\,dx & \int_0^1 \Psi_2\Psi_2\,dx \end{pmatrix} = \begin{pmatrix} 1.00000 & 1.7522527 & 9.7405422 \\ 1.7522527 & 3.7753597 & 19.505499 \\ 9.7405422 & 19.505499 & 103.839360 \end{pmatrix},$$
and the column vector $d$, computed from the same equation, is
$$d = \begin{pmatrix} d_0 & d_1 & d_2 \end{pmatrix}^T = \begin{pmatrix} 2.93513556 & 5.75250131 & 30.83011129 \end{pmatrix}^T.$$
Step 6: Initial Conditions
Now, we use the initial conditions $y(0) = 0$, $y'(0) = 0$ to obtain the following system of two equations:
$$a_0 \cdot 1 + a_1 \cdot 0 + a_2 \cdot (-2) = a_0 - 2a_2 = 0,\qquad 2a_1 + 8 \cdot 0 \cdot a_2 = 2a_1 = 0.$$
These two conditions can be written in the matrix equation form $Ca = b$ as follows:
$$C = \begin{pmatrix} 1 & 0 & -2 \\ 0 & 2 & 0 \end{pmatrix},\qquad b = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Finally, from Equations (22) and (24), we arrive at the linear system in matrix form $Ma = F$, where $M = \begin{pmatrix} A \\ C \end{pmatrix}$ is of size $5 \times 3$ and $F = \begin{pmatrix} d \\ b \end{pmatrix}$ is of size $5 \times 1$, given by:
$$M = \begin{pmatrix} 1.00000 & 1.7522527 & 9.7405422 \\ 1.7522527 & 3.7753597 & 19.505499 \\ 9.7405422 & 19.505499 & 103.839360 \\ 1.00000 & 0.00000 & -2.00000 \\ 0.00000 & 2.00000 & 0.00000 \end{pmatrix}\quad\text{and}\quad F = \begin{pmatrix} 2.93513556 \\ 5.75250131 \\ 30.83011129 \\ 0.00000000 \\ 0.00000000 \end{pmatrix}.$$
We solve for the unknown vector $a = (a_0\ \ a_1\ \ a_2)^T$, which takes the following form:
$$a = \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 0.50000000 \\ 0.00000000 \\ 0.25000000 \end{pmatrix}.$$
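As a quick numerical check (ours, using the rounded entries reported above), the overdetermined $5 \times 3$ system can be solved in the least-squares sense, for example with NumPy's `lstsq`, and it recovers these coefficients:

```python
import numpy as np

# Rows 1-3: least-squares block (A | d); rows 4-5: initial conditions (C | b)
M = np.array([
    [1.0000000,  1.7522527,   9.7405422],
    [1.7522527,  3.7753597,  19.505499 ],
    [9.7405422, 19.505499,  103.839360 ],
    [1.0,        0.0,         -2.0     ],
    [0.0,        2.0,          0.0     ],
])
F = np.array([2.93513556, 5.75250131, 30.83011129, 0.0, 0.0])

# Least-squares solution of the overdetermined 5x3 system M a = F
a, *_ = np.linalg.lstsq(M, F, rcond=None)
print(np.round(a, 6))   # approximately [0.5, 0.0, 0.25]
```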
Hence, our approximate solution will be given as follows:
$$y_2(x) = a_0H_0(x) + a_1H_1(x) + a_2H_2(x) = 0.5 \times 1 + 0.0 \times 2x + 0.25 \times (4x^2 - 2) = x^2,$$
which is the exact solution. The values for both the exact and approximate solutions are shown for various x points in Table 3 below.
The proposed hybrid method achieves superior accuracy compared with the three decomposition approaches in [9] as well as the Müntz–Legendre polynomial method in [10] (Table 4).
Example 3. 
Let us examine the fractional Bagley–Torvik differential equation [8,11]
$$y''(x) + D^{\gamma}y(x) + y(x) = x^2 + 2 + \frac{4}{\sqrt{\pi}}x^{1/2},\qquad 0 < x < 1,\quad 1 < \gamma < 2,$$
subject to the boundary conditions $y(0) = 0$, $y(1) = 1$.
The analytical solution when $\gamma = 1.5$ is given by $y(x) = x^2$.
In Ref. [8], the authors introduced a numerical solution technique for the Bagley–Torvik equation using the Taylor matrix method. This approach converts the differential equation into a system of algebraic equations, which is solved to determine the coefficients of a generalized Taylor series that provides the approximate solution. In Ref. [11], the authors utilized a spectral Galerkin algorithm with a specialized shifted Legendre basis to find semi-analytic solutions for this problem. The method first transforms the non-homogeneous boundary conditions into homogeneous ones; it then solves the resulting problem by converting the fractional differential equation into a linear system with well-structured, invertible matrices.
We apply our novel hybrid technique, which combines Hermite polynomials with the least squares method, taking $N = 2$, where $N$ represents the highest degree of the Hermite polynomial.
Step 1: Choose a Truncated Hermite Polynomial Expansion
The analytic solution $y(x)$ is approximated using a truncated series, as:
$$y(x) \approx y_2(x) = \sum_{i=0}^{2} a_i H_i(x) = a_0 H_0(x) + a_1 H_1(x) + a_2 H_2(x),$$
$$y(x) \approx a_0 + 2x\,a_1 + (4x^2 - 2)\,a_2,$$
$$y'(x) = 2a_1 + 8x\,a_2.$$
Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain the following:
$$y''(x) = \sum_{k=2}^{N} 4a_k\,k(k-1)\,H_{k-2}(x) = 8a_2.$$
Step 3: Compute the Caputo Fractional Derivative of order $3/2$.
By evaluating the fractional derivative $D^{3/2}$, we obtain the following:
$$D^{3/2}1 = 0,\quad D^{3/2}(2x) = 0\quad\text{and}\quad D^{3/2}(4x^2 - 2) = \frac{16}{\sqrt{\pi}}x^{1/2},$$
$$D^{3/2}y(x) = D^{3/2}\!\left[a_0 + 2x\,a_1 + (4x^2 - 2)a_2\right] = \frac{16}{\sqrt{\pi}}x^{1/2}\,a_2.$$
Step 4: Transformed Basis (the Ψ functions)
The functions Ψ i x   are derived by subjecting the original Hermite polynomials to the full differential operator, which is composed of the second derivative, the fractional derivative, and the function itself:
$$\Psi_i(x) = H_i''(x) + D^{3/2}H_i(x) + H_i(x),\qquad i = 0, 1, 2;$$
then,
$$\Psi_0(x) = H_0''(x) + D^{3/2}H_0(x) + H_0(x) = 1,$$
$$\Psi_1(x) = H_1''(x) + D^{3/2}H_1(x) + H_1(x) = 2x,$$
$$\Psi_2(x) = H_2''(x) + D^{3/2}H_2(x) + H_2(x) = \frac{16}{\sqrt{\pi}}x^{1/2} + 4x^2 + 6.$$
Step 5: Residual computation
The residual equation is defined as follows:
$$\begin{aligned} R(x, a_0, a_1, a_2) &= y''(x) + D^{3/2}y(x) + y(x) - f(x)\\ &= 8a_2 + D^{3/2}\!\left[a_0 + 2x\,a_1 + (4x^2-2)a_2\right] + a_0 + 2x\,a_1 + (4x^2-2)a_2 - x^2 - 2 - \frac{4}{\sqrt{\pi}}x^{1/2}\\ &= a_0 + 2x\,a_1 + \left(\frac{16}{\sqrt{\pi}}x^{1/2} + 4x^2 + 6\right)a_2 - x^2 - 2 - \frac{4}{\sqrt{\pi}}x^{1/2}. \end{aligned}$$
The least-squares approach chooses ( a 0 , a 1 , a 2 ) to minimize the squared L 2 -norm of the residual J on [ 0, 1 ] :
J a 0 , a 1 , a 2 = 0 1 R ( x , a 0 , a 1 , a 2 ) 2 d x .
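Because the exact solution $x^2 = 0.5H_0(x) + 0.25H_2(x)$ lies in the trial space, the residual vanishes identically at $(a_0, a_1, a_2) = (0.5, 0, 0.25)$, so the minimum of $J$ is zero. A small numerical sketch (our own check, not part of the derivation) confirms this:

```python
import numpy as np

sqrt_pi = np.sqrt(np.pi)
x = np.linspace(0.0, 1.0, 2001)

# Basis responses Psi_i and right-hand side f for this example
psi0 = np.ones_like(x)
psi1 = 2.0 * x
psi2 = 16.0 * np.sqrt(x) / sqrt_pi + 4.0 * x**2 + 6.0
f = x**2 + 2.0 + 4.0 * np.sqrt(x) / sqrt_pi

a0, a1, a2 = 0.5, 0.0, 0.25          # coefficients of the exact solution x^2
R = a0 * psi0 + a1 * psi1 + a2 * psi2 - f

J = np.mean(R**2)                    # approximates the integral of R^2 over [0, 1]
print(J)                             # zero up to round-off
```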
Setting the gradient of $J$ to zero yields the normal equations. Because $R(x, a_0, a_1, a_2)$ is linear in $(a_0, a_1, a_2)$, the normal equations take the linear matrix form
$$Aa = d$$
where the entries of the matrix $A$ and the column vector $d$ are given by:
$$A_{ij} = \int_0^1 \Psi_i(x)\Psi_j(x)\,dx,\qquad d_i = \int_0^1 \Psi_i(x)f(x)\,dx,\qquad i, j = 0, 1, 2,$$
where $\Psi_j(x)$, $j = 0, 1, 2$, are defined as in Equation (31).
So, we have the basis functions $\Psi_0(x)$, $\Psi_1(x)$, $\Psi_2(x)$ given as follows:
$$\Psi_0(x) = 1,\quad \Psi_1(x) = 2x\quad\text{and}\quad \Psi_2(x) = \frac{16}{\sqrt{\pi}}x^{1/2} + 4x^2 + 6.$$
Using Equation (33), the matrix $A$ and the vector $d$ are given as follows:
$$A = \begin{pmatrix} 1.00000 & 1.00000 & 13.3514 \\ 1.00000 & 1.3333 & 15.2216 \\ 13.3514 & 15.2216 & 188.7932 \end{pmatrix},\qquad d = \begin{pmatrix} 3.83783889 \\ 4.30540667 \\ 53.8739655 \end{pmatrix}.$$
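These entries can be reproduced by direct numerical quadrature of Equation (33). The following sketch is our own illustration; `scipy.integrate.quad` is just one convenient quadrature choice:

```python
import numpy as np
from scipy.integrate import quad

sqrt_pi = np.sqrt(np.pi)

# Basis responses Psi_0, Psi_1, Psi_2 and right-hand side f for this example
psi = [
    lambda x: 1.0,
    lambda x: 2.0 * x,
    lambda x: 16.0 * np.sqrt(x) / sqrt_pi + 4.0 * x**2 + 6.0,
]
f = lambda x: x**2 + 2.0 + 4.0 * np.sqrt(x) / sqrt_pi

# A_ij = int_0^1 Psi_i Psi_j dx and d_i = int_0^1 Psi_i f dx
A = np.array([[quad(lambda x: p(x) * q(x), 0.0, 1.0)[0] for q in psi] for p in psi])
d = np.array([quad(lambda x: p(x) * f(x), 0.0, 1.0)[0] for p in psi])

print(np.round(A, 4))
print(np.round(d, 6))
```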
Step 6: Boundary Conditions
Now, we use the boundary conditions $y(0) = 0$, $y(1) = 1$ to obtain the following system of two equations:
$$a_0 \cdot 1 + a_1 \cdot 0 + a_2 \cdot (-2) = a_0 - 2a_2 = 0,\qquad a_0 \cdot 1 + 2a_1 + (4 \times 1 - 2)a_2 = a_0 + 2a_1 + 2a_2 = 1.$$
These two conditions can be written in the matrix equation form $Ca = b$ as follows:
$$C = \begin{pmatrix} 1 & 0 & -2 \\ 1 & 2 & 2 \end{pmatrix},\qquad b = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
Finally, from Equations (32) and (34), we arrive at the linear system of matrix equation form:
M a = F
where $M = \begin{pmatrix} A \\ C \end{pmatrix}$ is of size $5 \times 3$ and $F = \begin{pmatrix} d \\ b \end{pmatrix}$ is of size $5 \times 1$, given by:
$$M = \begin{pmatrix} 1.0000 & 1.0000 & 13.3514 \\ 1.0000 & 1.3333 & 15.2216 \\ 13.3514 & 15.2216 & 188.7932 \\ 1 & 0 & -2.0000 \\ 1 & 2.00 & 2.0000 \end{pmatrix}\quad\text{and}\quad F = \begin{pmatrix} 3.8378 \\ 4.3054 \\ 53.8740 \\ 0.0000 \\ 1.0000 \end{pmatrix},$$
and we solve for the unknown vector $a = (a_0\ \ a_1\ \ a_2)^T$, which takes the following form:
$$a = \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 0.50000 \\ 0.00000 \\ 0.25000 \end{pmatrix}.$$
Hence, our approximate solution $y_2(x)$ is given as:
$$y_2(x) = \sum_{i=0}^{2} a_i H_i(x) = a_0H_0(x) + a_1H_1(x) + a_2H_2(x) = 0.5 + 0.25(4x^2 - 2) = x^2.$$
In Table 5, below, we display the approximate solution versus the exact one, as well as the absolute error of our proposed method.
In Table 6, we compare the accuracy of our proposed method to the method presented in [11] using the Galerkin method with shifted Legendre polynomials.
Remark 1. 
M represents the number of terms used in the truncated series expansion to approximate the solution using shifted Legendre.
From Table 6, it is clear that our proposed method is more accurate than the existing technique described in [11], which utilizes the Galerkin method with shifted Legendre polynomials.
Example 4. 
Consider the fractional Bagley–Torvik equation [12,13,14,15]:
$$D^2y(x) + D^{3/2}y(x) + y(x) = x^3 + 7x + 1 + \frac{8}{\sqrt{\pi}}x^{3/2},$$
where the initial conditions are $y(0) = 1$ and $y'(0) = 1$. This problem has the exact solution $y(x) = x^3 + x + 1$. We solve this problem using the proposed method with $N = 3$.
Step 1: Choose a Truncated Hermite Polynomial Expansion.
The analytic solution y ( x ) is approximated using a truncated series, as:
$$y(x) \approx y_3(x) = \sum_{i=0}^{3} a_i H_i(x) = a_0H_0(x) + a_1H_1(x) + a_2H_2(x) + a_3H_3(x),$$
$$y(x) \approx a_0 + 2x\,a_1 + (4x^2 - 2)a_2 + (8x^3 - 12x)a_3,$$
$$y'(x) = 2a_1 + 8x\,a_2 + (24x^2 - 12)a_3,$$
where the Hermite polynomials are the following:
$$H_0(x) = 1,\quad H_1(x) = 2x,\quad H_2(x) = 4x^2 - 2,\quad H_3(x) = 8x^3 - 12x.$$
Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we have
$$y''(x) = \sum_{k=2}^{N} 4a_k\,k(k-1)\,H_{k-2}(x) = 8a_2 + 48x\,a_3.$$
Step 3: Compute the Caputo Fractional Derivative of order $3/2$.
By evaluating the fractional derivative $D^{3/2}$, we have
$$D^{3/2}1 = 0,\quad D^{3/2}(2x) = 0,\quad D^{3/2}(4x^2 - 2) = \frac{16}{\sqrt{\pi}}x^{1/2}\quad\text{and}\quad D^{3/2}(8x^3 - 12x) = \frac{64}{\sqrt{\pi}}x^{3/2},$$
$$D^{3/2}y(x) = D^{3/2}\!\left[a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3\right] = \frac{16}{\sqrt{\pi}}x^{1/2}\,a_2 + \frac{64}{\sqrt{\pi}}x^{3/2}\,a_3.$$
Step 4: Transformed Basis (the Ψ functions)
The functions Ψ i ( x ) are defined as the result of applying the entire differential operator (which includes the second derivative, fractional derivative, and the function itself) to the original Hermite polynomials:
$$\Psi_i(x) = H_i''(x) + D^{3/2}H_i(x) + H_i(x),\qquad i = 0, 1, 2, 3;$$
then,
$$\Psi_0(x) = 1,\quad \Psi_1(x) = 2x,\quad \Psi_2(x) = \frac{16}{\sqrt{\pi}}x^{1/2} + 4x^2 + 6,\quad \Psi_3(x) = \frac{64}{\sqrt{\pi}}x^{3/2} + 8x^3 + 36x.$$
Step 5: Residual computation
The residual equation is defined as the following:
$$\begin{aligned} R(x, a_0, a_1, a_2, a_3) &= 8a_2 + 48x\,a_3 + D^{3/2}\!\left[a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3\right]\\ &\quad + a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3 - x^3 - 7x - 1 - \frac{8}{\sqrt{\pi}}x^{3/2}\\ &= 8a_2 + 48x\,a_3 + \frac{16}{\sqrt{\pi}}x^{1/2}\,a_2 + \frac{64}{\sqrt{\pi}}x^{3/2}\,a_3 + a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3 - x^3 - 7x - 1 - \frac{8}{\sqrt{\pi}}x^{3/2}. \end{aligned}$$
The least-squares approach chooses $(a_0, a_1, a_2, a_3)$ to minimize the squared $L_2$-norm of the residual $J$ on $[0, 1]$:
$$J(a_0, a_1, a_2, a_3) = \int_0^1 R(x, a_0, a_1, a_2, a_3)^2\,dx.$$
Setting the gradient of $J$ to zero yields the normal equations. Because $R(x, a_0, a_1, a_2, a_3)$ is linear in $(a_0, a_1, a_2, a_3)$, the normal equations take the linear matrix form
$$Aa = d$$
where the entries of the matrix $A$ and the column vector $d$ are given by:
$$A_{ij} = \int_0^1 \Psi_i(x)\Psi_j(x)\,dx,\qquad d_i = \int_0^1 \Psi_i(x)f(x)\,dx,\qquad i, j = 0, 1, 2, 3,$$
where $\Psi_j(x)$, $j = 0, 1, 2, 3$, are defined as in Equation (40).
So, we obtain the basis functions $\Psi_0(x)$, $\Psi_1(x)$, $\Psi_2(x)$, $\Psi_3(x)$ as:
$$\Psi_0(x) = 1,\quad \Psi_1(x) = 2x,\quad \Psi_2(x) = \frac{16}{\sqrt{\pi}}x^{1/2} + 4x^2 + 6\quad\text{and}\quad \Psi_3(x) = \frac{64}{\sqrt{\pi}}x^{3/2} + 8x^3 + 36x.$$
Using Equation (42), the matrix $A$ and the vector $d$ are given as follows:
$$A = \begin{pmatrix} 1.00000 & 1.00000 & 13.3514 & 34.4433 \\ 1.00000 & 1.3333 & 15.2216 & 47.8332 \\ 13.3514 & 15.2216 & 188.7932 & 534.776 \\ 34.4433 & 47.8332 & 534.776 & 1730.13 \end{pmatrix},\qquad d = \begin{pmatrix} 6.5554067 \\ 8.6458190 \\ 99.2253996 \\ 310.501016 \end{pmatrix}.$$
Step 6: Initial Conditions
Now, we use the initial conditions $y(0) = 1$, $y'(0) = 1$ to obtain the system of two equations
$$a_0 \cdot 1 + a_1 \cdot 0 + a_2(4 \times 0 - 2) + a_3(8 \times 0^3 - 12 \times 0) = a_0 - 2a_2 = 1,$$
$$2a_1 + (8 \times 0)a_2 + (24 \times 0^2 - 12)a_3 = 2a_1 - 12a_3 = 1.$$
These two conditions can be written in the matrix equation form $Ca = b$ as follows:
$$C = \begin{pmatrix} 1 & 0 & -2 & 0 \\ 0 & 2 & 0 & -12 \end{pmatrix},\qquad b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Finally, from Equations (41) and (43), we arrive at the linear system of matrix equation form
M a = F
where $M = \begin{pmatrix} A \\ C \end{pmatrix}$ is of size $6 \times 4$ and $F = \begin{pmatrix} d \\ b \end{pmatrix}$ is of size $6 \times 1$, given by:
$$M = \begin{pmatrix} 1.00000 & 1.00000 & 13.3514 & 34.4433 \\ 1.00000 & 1.3333 & 15.2216 & 47.8332 \\ 13.3514 & 15.2216 & 188.7932 & 534.776 \\ 34.4433 & 47.8332 & 534.776 & 1730.13 \\ 1 & 0 & -2 & 0 \\ 0 & 2 & 0 & -12 \end{pmatrix}\quad\text{and}\quad F = \begin{pmatrix} 6.5554 \\ 8.6458 \\ 99.2254 \\ 310.5010 \\ 1.0000 \\ 1.0000 \end{pmatrix}.$$
Solving the matrix equation M a = F , we obtain the coefficients   a 0 , a 1 , a 2 , a 3   as follows:
$$a_0 = 1.0000,\quad a_1 = 1.25000,\quad a_2 = 0.0000,\quad a_3 = 0.12500.$$
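As in the previous example, these coefficients can be recovered numerically from the stacked $6 \times 4$ system; the sketch below is our own check, using the rounded entries reported above:

```python
import numpy as np

# Rows 1-4: least-squares block (A | d); rows 5-6: initial conditions (C | b)
M = np.array([
    [ 1.00000,  1.00000,  13.3514,   34.4433],
    [ 1.00000,  1.3333,   15.2216,   47.8332],
    [13.3514,  15.2216,  188.7932,  534.776 ],
    [34.4433,  47.8332,  534.776,  1730.13  ],
    [ 1.0,      0.0,      -2.0,       0.0   ],
    [ 0.0,      2.0,       0.0,     -12.0   ],
])
F = np.array([6.5554, 8.6458, 99.2254, 310.5010, 1.0, 1.0])

# Least-squares solution of the overdetermined 6x4 system M a = F
a, *_ = np.linalg.lstsq(M, F, rcond=None)
print(np.round(a, 5))   # approximately [1.0, 1.25, 0.0, 0.125]
```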
Now, we have obtained the approximate solution
$$y_3(x) = 1.0000 + 1.25000(2x) + 0.12500(8x^3 - 12x) = x^3 + x + 1,$$
which recovers the exact solution to within round-off error.
In Table 7, below, we display the approximate solution versus the exact one, as well as the absolute error of our proposed method.
Table 8 presents a comparative analysis of results derived from several numerical approaches: the proposed hybrid method (Hermite polynomials coupled with the least squares method), the Lucas collocation method (LCM), the Lucas collocation method combined with the residual error function (LCM-REF) [12], the Chelyshkov–Tau method for N = 8 [13], the Legendre-collocation method [14], and the fractional iteration method (FIM) [15]. The data clearly indicate that the accuracy of the proposed hybrid method is comparable to that of the Lucas collocation, Lucas collocation with residual error function, and Chelyshkov–Tau methods. Furthermore, the hybrid method demonstrates superior accuracy when compared to the methods outlined in references [14,15].
Computational cost analysis for this example: The proposed method forms a system of linear algebraic equations $Ma = F$ whose core block is of size $(N+1) \times (N+1)$; for $N = 3$, this block is $4 \times 4$, augmented with two condition rows to give the $6 \times 4$ system above. The most computationally intensive steps are the calculation of the matrix integrals and the solution of the linear system. Matrix formation: Calculating the entries of the matrix $M$ and the vector $F$ involves numerical integration of the basis functions $\Psi_i(x)$ and $\Psi_j(x)$ over the interval $[0, 1]$. The number of matrix entries grows quadratically with $N$, i.e., there are $O(N^2)$ entries. The cost per integral depends on the specific quadrature rule used and the complexity of the $\Psi$ functions, but this is a fixed cost once the numerical integration scheme is chosen. System solution: Solving the resulting linear system $Ma = F$ for the unknown coefficients $a$ is typically carried out using methods such as Gaussian elimination. The computational complexity of solving a general square linear system of size $n \times n$ is $O(n^3)$; here this is $O((N+1)^3)$, which simplifies to $O(N^3)$. Thus, the overall theoretical complexity of the proposed method is dominated by solving the linear system, making it an $O(N^3)$ algorithm, where $N$ is the highest degree of the Hermite polynomial used. By contrast, methods based on finite differences or certain wavelet constructions can involve dense matrices and a higher computational burden for comparable accuracy, with cost scaling quadratically or worse in the number of grid points.
Theoretical convergence rate analysis: The Hermite polynomials form a complete basis for the space of square-integrable functions $L_2([a, b])$. The least-squares method minimizes the $L_2$-norm of the residual, which ensures that, as the number of basis functions $N$ approaches infinity, the approximate solution $y_N(x)$ converges to the exact solution $y(x)$ in the $L_2$-norm. For Example 4, using a low polynomial degree of $N = 3$, the method achieved absolute errors near machine precision (on the order of $10^{-16}$ to $10^{-15}$). This suggests that the solution converges very rapidly even with a small number of terms.
Comparison of accuracy: Table 8 compares the proposed method with others. The proposed method's error at $N = 3$ is comparable to, or better than, that of other methods (such as Chelyshkov–Tau with $N = 8$). This empirical evidence suggests a high convergence rate, characteristic of spectral methods. The rate of convergence (algebraic or spectral) depends on the smoothness of the exact solution and the properties of the fractional operators involved.
The memory requirements for the proposed method in Example 4 primarily depend on the size of the linear system of algebraic equations that is solved.
System size: The method transforms the fractional differential equation into a system of linear equations which, after incorporating the initial conditions, results in an overdetermined system $Ma = F$ of size $n \times m$. For Example 4, the highest polynomial degree used is $N = 3$; the resulting matrix $M$ is of size $6 \times 4$ and the vector $F$ is $6 \times 1$. Generally, for a polynomial degree $N$, the system involves $m = N + 1$ unknown coefficients and $n$ equations (at least $N + 1$, plus additional equations for boundary/initial conditions); the core matrix is of size $(N+1) \times (N+1)$ before the extra conditions are added. The memory needed to store the matrix $M$ is proportional to the number of its entries. Since the matrix is dense (all entries are non-zero), the memory complexity scales quadratically with the polynomial degree $N$, i.e., $O(N^2)$, while the vectors require $O(N)$ memory. The memory requirement is low because only a small number of Hermite basis functions is needed to achieve high accuracy (e.g., $N = 3$ for machine-precision results in Example 4). This contrasts favorably with methods that require a very fine grid or many basis functions to reach comparable accuracy, and demonstrates the efficiency of the method in memory usage as well as in speed.
Example 5.
Consider the following Bagley–Torvik equation [16]
$$y''(x) + D^{3/2}y(x) + y(x) = x^3 + 6x + \frac{8}{\sqrt{\pi}}x^{3/2},$$
where the initial conditions are $y(0) = 0$, $y'(0) = 0$ and the exact solution is $y(x) = x^3$.
By solving this problem for N = 3 , we follow the same steps as in the previous examples:
Step 1: Choose a truncated Hermite polynomial expansion.
The analytic solution y ( x ) is approximated using a truncated series, as:
$$y(x) \approx y_3(x) = \sum_{i=0}^{3} a_i H_i(x) = a_0H_0(x) + a_1H_1(x) + a_2H_2(x) + a_3H_3(x),$$
$$y(x) \approx a_0 + 2x\,a_1 + (4x^2 - 2)a_2 + (8x^3 - 12x)a_3,$$
$$y'(x) = 2a_1 + 8x\,a_2 + (24x^2 - 12)a_3.$$
Step 2: Construct derivatives in the Hermite Basis.
By using Equation (4), we obtain the following
$$y''(x) = \sum_{k=2}^{N} 4a_k\,k(k-1)\,H_{k-2}(x) = 8a_2 + 48x\,a_3.$$
Step 3: Compute the Caputo Fractional Derivative of order $3/2$.
By evaluating the fractional derivative $D^{3/2}$, we obtain the following:
$$D^{3/2}1 = 0,\quad D^{3/2}(2x) = 0,$$
$$D^{3/2}(4x^2 - 2) = \frac{16}{\sqrt{\pi}}x^{1/2}\quad\text{and}\quad D^{3/2}(8x^3 - 12x) = \frac{64}{\sqrt{\pi}}x^{3/2},$$
$$D^{3/2}y(x) = D^{3/2}\!\left[a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3\right] = \frac{16}{\sqrt{\pi}}x^{1/2}\,a_2 + \frac{64}{\sqrt{\pi}}x^{3/2}\,a_3.$$
Step 4: Transformed Basis (the Ψ functions)
The functions Ψ i ( x ) are defined as the result of applying the entire differential operator (which includes the second derivative, fractional derivative, and the function itself) to the original Hermite polynomials:
$$\Psi_i(x) = H_i''(x) + D^{3/2}H_i(x) + H_i(x),\qquad i = 0, 1, 2, 3;$$
then,
$$\Psi_0(x) = 1,\quad \Psi_1(x) = 2x,\quad \Psi_2(x) = \frac{16}{\sqrt{\pi}}x^{1/2} + 4x^2 + 6,\quad \Psi_3(x) = \frac{64}{\sqrt{\pi}}x^{3/2} + 8x^3 + 36x.$$
Step 5: Residual computation
The residual equation is defined as follows:
$$\begin{aligned} R(x, a_0, a_1, a_2, a_3) &= 8a_2 + 48x\,a_3 + D^{3/2}\!\left[a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3\right]\\ &\quad + a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3 - x^3 - 6x - \frac{8}{\sqrt{\pi}}x^{3/2}\\ &= 8a_2 + 48x\,a_3 + \frac{16}{\sqrt{\pi}}x^{1/2}\,a_2 + \frac{64}{\sqrt{\pi}}x^{3/2}\,a_3 + a_0 + 2x\,a_1 + (4x^2-2)a_2 + (8x^3-12x)a_3 - x^3 - 6x - \frac{8}{\sqrt{\pi}}x^{3/2}. \end{aligned}$$
In the least-squares approach, $(a_0, a_1, a_2, a_3)$ are selected to minimize the squared $L_2$-norm of the residual $J$ on $[0, 1]$:
J a 0 , a 1 , a 2 , a 3 = 0 1 R ( x , a 0 , a 1 , a 2 , a 3 ) 2 d x .
Setting the gradient of $J$ to zero yields the normal equations. Because $R(x, a_0, a_1, a_2, a_3)$ is linear in $(a_0, a_1, a_2, a_3)$, the normal equations take the linear matrix form
$$Aa = d$$
where the entries of the matrix $A$ and the column vector $d$ are given by:
$$A_{ij} = \int_0^1 \Psi_i(x)\Psi_j(x)\,dx,\qquad d_i = \int_0^1 \Psi_i(x)f(x)\,dx,\qquad i, j = 0, 1, 2, 3,$$
where $\Psi_j(x)$, $j = 0, 1, 2, 3$, are defined as in Equation (51).
So, we obtain the basis functions $\Psi_0(x)$, $\Psi_1(x)$, $\Psi_2(x)$, $\Psi_3(x)$ as:
$$\Psi_0(x) = 1,\quad \Psi_1(x) = 2x,\quad \Psi_2(x) = \frac{16}{\sqrt{\pi}}x^{1/2} + 4x^2 + 6\quad\text{and}\quad \Psi_3(x) = \frac{64}{\sqrt{\pi}}x^{3/2} + 8x^3 + 36x.$$
Using Equation (53), the matrix $A$ and the vector $d$ are given as:
$$A = \begin{pmatrix} 1.00000 & 1.00000 & 13.3514 & 34.4433 \\ 1.00000 & 1.3333 & 15.2216 & 47.8332 \\ 13.3514 & 15.2216 & 188.7932 & 534.776 \\ 34.4433 & 47.8332 & 534.776 & 1730.13 \end{pmatrix},\qquad d = \begin{pmatrix} 5.0554 \\ 6.9792 \\ 78.2632 \\ 252.1412 \end{pmatrix}.$$
Step 6: Initial Conditions
Now, we use the initial conditions $y(0) = 0$, $y'(0) = 0$ to obtain the system of two equations
$$a_0 \cdot 1 + a_1 \cdot 0 + a_2(4 \times 0 - 2) + a_3(8 \times 0^3 - 12 \times 0) = a_0 - 2a_2 = 0,$$
$$2a_1 + (8 \times 0)a_2 + (24 \times 0^2 - 12)a_3 = 2a_1 - 12a_3 = 0.$$
These two conditions can be written in the matrix equation form $Ca = b$ as follows:
$$C = \begin{pmatrix} 1 & 0 & -2 & 0 \\ 0 & 2 & 0 & -12 \end{pmatrix},\qquad b = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Finally, from Equations (52) and (54), we arrive at the linear system in matrix equation form
$$Ma = F$$
where $M = \begin{pmatrix} A \\ C \end{pmatrix}$ is of size $6 \times 4$ and $F = \begin{pmatrix} d \\ b \end{pmatrix}$ is of size $6 \times 1$, given by:
$$M = \begin{pmatrix} 1.00000 & 1.00000 & 13.3514 & 34.4433 \\ 1.00000 & 1.3333 & 15.2216 & 47.8332 \\ 13.3514 & 15.2216 & 188.7932 & 534.776 \\ 34.4433 & 47.8332 & 534.776 & 1730.13 \\ 1 & 0 & -2 & 0 \\ 0 & 2 & 0 & -12 \end{pmatrix}\quad\text{and}\quad F = \begin{pmatrix} 5.0554 \\ 6.9792 \\ 78.2632 \\ 252.1412 \\ 0 \\ 0 \end{pmatrix}.$$
Solving the matrix equation M a = F , we obtain the coefficients   a 0 , a 1 , a 2 , a 3   as follows:
$$a_0 = 0.0000,\quad a_1 = 0.7500,\quad a_2 = 0.0000,\quad a_3 = 0.12500.$$
Hence, our approximate solution will be given as the following:
$$y_3(x) = 0.0000 + 0.7500(2x) + 0.12500(8x^3 - 12x) = x^3.$$
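A short symbolic check (our own, not part of the published derivation) confirms that $y_3$ collapses to the exact solution $x^3$ and satisfies the original equation, using the value $D^{3/2}x^3 = \frac{8}{\sqrt{\pi}}x^{3/2}$ from Step 3:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Recovered coefficients a0..a3 and physicists' Hermite polynomials H0..H3
a = [0, sp.Rational(3, 4), 0, sp.Rational(1, 8)]
H = [sp.hermite(i, x) for i in range(4)]
y3 = sp.expand(sum(c * h for c, h in zip(a, H)))
print(y3)                      # x**3

# Check the original equation, substituting D^{3/2} x^3 = (8/sqrt(pi)) x^{3/2}
lhs = sp.diff(y3, x, 2) + 8 / sp.sqrt(sp.pi) * x**sp.Rational(3, 2) + y3
rhs = x**3 + 6 * x + 8 / sp.sqrt(sp.pi) * x**sp.Rational(3, 2)
print(sp.simplify(lhs - rhs))  # 0
```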
The results presented in Table 9 demonstrate a high degree of accuracy, as indicated by the very small absolute error values (on the order of 10 16 to 10 17 ), with one instance of zero error.
Based on the data presented in Table 9 and Table 10, a distinct performance gap exists between the two numerical approaches. The proposed hybrid Hermite-least-squares method consistently delivers superior accuracy compared to the sine–cosine wavelet method across all tested x values. Although the wavelet method’s accuracy improves as parameter k increases (errors typically falling between 10 4 and 10 8 ), its precision remains notably lower than that of the hybrid method, which consistently achieves absolute errors within machine precision ( 10 16 to 10 17 ). Ultimately, the results strongly indicate that the proposed hybrid method is numerically superior for approximating the exact solution under these conditions.

6. Conclusions

This paper has presented a novel hybrid numerical method that is highly effective and accurate for solving fractional Bagley–Torvik equations. By combining Hermite polynomial approximation with a least-squares optimization scheme, the approach converts the fractional differential equation into a stable system of algebraic equations that consistently delivers superior accuracy and rapid convergence, often reaching machine-precision errors, in comparison with methods such as the Bernstein collocation, subdivision collocation, systems-based decomposition, Müntz–Legendre polynomial, and sine–cosine wavelet approaches. This efficiency stems from leveraging the analytical properties of Hermite polynomials to compute Caputo fractional derivatives without the complexities of direct fractional differentiation. The robustness and accuracy demonstrated in the numerical examples highlight the potential of this reliable methodology for a wider range of fractional-order boundary value problems.

Author Contributions

Formal analysis, software, methodology, validation, data curation: H.S.O.; formal analysis, software, methodology, validation, data curation, supervision: M.A.R.; investigation, writing—original draft, resources, writing—review and editing, visualization, funding acquisition: T.R. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Qassim University (QU-APC-2026).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2026).

Conflicts of Interest

All authors declare no conflicts of interest in this paper.

References

  1. Jeon, Y.; Bu, S. Improved Numerical Approach for Bagley–Torvik Equation Using Fractional Integral Formula and Adams–Moulton Method. J. Comput. Nonlinear Dyn. 2024, 19, 051005. [Google Scholar] [CrossRef]
  2. Liu, X.; Huang, J.; Li, J.; Zhang, Y. Numerical Solutions for Fractional Bagley-Torvik Equation with Integral Boundary Conditions. Symmetry 2025, 17, 1755. [Google Scholar] [CrossRef]
  3. Taha, M.H.; Ramadan, M.A.; Baleanu, D.; Moatimid, G.M. A Novel Analytical Technique of the Fractional Bagley-Torvik Equations for Motion of a Rigid Plate in Newtonian Fluids. Comput. Model. Eng. Sci. 2020, 124, 969–983. [Google Scholar] [CrossRef]
  4. El Deen, M.R.Z.; Youssri, Y.H.; Taema, M.A. Fourth-Kind Chebyshev Operational Tau Algorithm for Fractional Bagley-Torvik Equation. Front. Sci. Res. Technol. 2025, 11, 358327.1152. [Google Scholar] [CrossRef]
  5. Kamal, A.H.; Arafa, A.A.M.; Rida, S.Z.; Dagher, M.A.; El-Sherbiny, H. An effective comparison with Least Square method for solving fractional gas dynamic equations. Front. Sci. Res. Technol. 2023, 7, 108–119. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Afridi, M.I.; Khan, M.S.; Amanullah. Investigating an approximate solution for a fractional-order Bagley–Torvik equation by applying the Hermite wavelet method. Mathematics 2025, 13, 528. [Google Scholar] [CrossRef]
  7. Yang, C.; Zhao, X.; Li, Y. A survey of numerical methods for fractional Bagley-Torvik equations. Math. Comput. Simul. 2025; in press. [Google Scholar]
  8. Ejaz, S.T.; Zulqarnain, N.; Younis, J.; Bibi, S. A numerical comparative analysis of methods for solving fractional differential equations. Arab. J. Basic Appl. Sci. 2024, 31, 154–164. [Google Scholar] [CrossRef]
  9. Ford, N.J.; Connolly, J.A. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations. J. Comput. Appl. Math. 2009, 229, 382–391. [Google Scholar] [CrossRef]
  10. Rahimkhani, P.; Ordokhani, Y. Application of Müntz-Legendre polynomials for solving the Bagley-Torvik equation in a large interval. SeMA J. 2018, 75, 517–533. [Google Scholar] [CrossRef]
  11. Sayed, S.M.; Mohamed, A.S.; Abo-Eldahab, E.M.; Youssri, Y.H. Legendre-Galerkin spectral algorithm for fractional-order BVPs: Application to the Bagley-Torvik equation. Math. Syst. Sci. 2024, 2, 27–33. [Google Scholar] [CrossRef]
  12. Askari, M. Numerical solution of fractional Bagley–Torvik equations using Lucas polynomials. Iran. J. Numer. Anal. Optim. 2023, 13, 695–710. [Google Scholar] [CrossRef]
  13. El-Gamel, M.; Abd-El-Hady, M.; El-Azab, M. Chelyshkov-Tau Approach for Solving Bagley-Torvik Equation. Appl. Math. 2017, 8, 1795–1807. [Google Scholar] [CrossRef]
  14. El-Gamel, M.; El-Hady, A. Numerical Solution of the Bagley-Torvik Equation by Legendre-Collocation Method. SeMA J. 2017, 74, 371–383. [Google Scholar] [CrossRef]
  15. Mekkaouii, T.; Hammouch, Z. Approximate Analytical Solutions to the Bagley-Torvik Equation by the Fractional Iteration Method. Math. Comput. Sci. 2012, 38, 251–256. [Google Scholar]
  16. Dinçel, A.T. A sine-cosine wavelet method for the approximation solutions of the fractional Bagley-Torvik equation. Sigma J. Eng. Nat. Sci. 2022, 40, 150–154. [Google Scholar] [CrossRef]
Table 1. The exact versus the approximate y 2 x solutions for Example 1 using N = 2 , where N represents the highest degree of the Hermite polynomial.
| x | Exact Solution | Approximate Solution |
|---|---|---|
| 0.000 | 1.000 | 0.99999999999999956 |
| 0.125 | 1.125 | 1.12499999999999960 |
| 0.250 | 1.250 | 1.24999999999999960 |
| 0.375 | 1.375 | 1.37499999999999960 |
| 0.500 | 1.500 | 1.49999999999999960 |
| 0.625 | 1.625 | 1.62499999999999960 |
| 0.750 | 1.750 | 1.74999999999999980 |
| 0.875 | 1.875 | 1.87499999999999980 |
Table 2. A comparative analysis of the accuracy between the suggested approach and the techniques detailed in reference [8], specifically focusing on the Bernstein collocation method and the Subdivision collocation method.
| x | Absolute Error for the Proposed Method (N = 2) | Bernstein Collocation Method | Subdivision Collocation Method |
|---|---|---|---|
| 0.000 | 4.441 × 10⁻¹⁶ | 0 | 0 |
| 0.125 | 4.441 × 10⁻¹⁶ | 4.4408 × 10⁻¹⁶ | 4.3299 × 10⁻¹⁴ |
| 0.250 | 4.441 × 10⁻¹⁶ | 2.2204 × 10⁻¹⁶ | 3.97459 × 10⁻¹⁴ |
| 0.375 | 4.441 × 10⁻¹⁶ | 4.4408 × 10⁻¹⁶ | 3.30846 × 10⁻¹⁴ |
| 0.500 | 4.441 × 10⁻¹⁶ | 2.4424 × 10⁻¹⁵ | 2.59792 × 10⁻¹⁴ |
| 0.625 | 4.441 × 10⁻¹⁶ | 4.2188 × 10⁻¹⁵ | 1.88737 × 10⁻¹⁴ |
| 0.750 | 2.220 × 10⁻¹⁶ | 4.440 × 10⁻¹⁵ | 1.24344 × 10⁻¹⁴ |
| 0.875 | 2.220 × 10⁻¹⁶ | 9.325 × 10⁻¹⁵ | 7.99360 × 10⁻¹⁵ |
Table 3. The exact versus the approximate $y_2(x)$ solutions for Example 2 using $N = 2$, where $N$ represents the highest degree of the Hermite polynomial.
| x | Exact Solution | Approximate Solution | Absolute Error |
|---|---|---|---|
| 0.1 | 0.01 | 0.00999999999999962 | 3.816 × 10⁻¹⁶ |
| 0.2 | 0.04 | 0.03999999999999976 | 2.498 × 10⁻¹⁶ |
| 0.3 | 0.09 | 0.08999999999999980 | 1.943 × 10⁻¹⁶ |
| 0.4 | 0.16 | 0.15999999999999998 | 5.551 × 10⁻¹⁷ |
| 0.5 | 0.25 | 0.25000000000000000 | 0.000 × 10⁺⁰⁰ |
| 0.6 | 0.36 | 0.36000000000000021 | 2.220 × 10⁻¹⁶ |
| 0.7 | 0.49 | 0.49000000000000016 | 2.220 × 10⁻¹⁶ |
| 0.8 | 0.64 | 0.64000000000000035 | 2.220 × 10⁻¹⁶ |
| 0.9 | 0.81 | 0.81000000000000050 | 4.441 × 10⁻¹⁶ |
| 1.0 | 1.00 | 1.00000000000000040 | 4.441 × 10⁻¹⁶ |
Table 4. A comparison of the effectiveness of the Hermite-least-squares approach versus systems-based decomposition schemes [9] and Müntz–Legendre polynomials [10].
| x | Method 1a [9] | Method 1b [9] | Method 2 [9] | Method 1a [9] | Ref. [10] (m = 4) | Our Method |
|---|---|---|---|---|---|---|
| 1 | 7.00 × 10⁻⁵ | 1.10 × 10⁻³ | 1.54 × 10⁻⁵ | 7.00 × 10⁻⁵ | 1.89 × 10⁻¹² | 4.441 × 10⁻¹⁶ |
Table 5. The exact versus the approximate $y_2(x)$ solutions for Example 3 using $N = 2$, where $N$ represents the highest degree of the Hermite polynomial.
| x | Exact Solution | Approximate Solution | Absolute Error |
|---|---|---|---|
| 0.1 | 0.01 | 0.00999999999999929 | 7.147 × 10⁻¹⁶ |
| 0.2 | 0.04 | 0.03999999999999943 | 5.829 × 10⁻¹⁶ |
| 0.3 | 0.09 | 0.08999999999999947 | 5.274 × 10⁻¹⁶ |
| 0.4 | 0.16 | 0.15999999999999964 | 3.886 × 10⁻¹⁶ |
| 0.5 | 0.25 | 0.24999999999999972 | 2.776 × 10⁻¹⁶ |
| 0.6 | 0.36 | 0.35999999999999982 | 1.665 × 10⁻¹⁶ |
| 0.7 | 0.49 | 0.48999999999999988 | 5.551 × 10⁻¹⁷ |
| 0.8 | 0.64 | 0.64000000000000012 | 0.000 × 10⁺⁰⁰ |
| 0.9 | 0.81 | 0.81000000000000016 | 1.110 × 10⁻¹⁶ |
Table 6. A comparative analysis of the accuracy between the suggested approach and the techniques detailed in [11], using the Galerkin method with shifted Legendre polynomials.
| x | Proposed Method | [11], M = 3 | [11], M = 4 | [11], M = 8 | [11], M = 256 |
|-----|----------------|-------------|-------------|-------------|---------------|
| 0.1 | 7.147 × 10⁻¹⁶ | 0 | 0 | 3.597 × 10⁻⁴ | 3.899 × 10⁻⁶ |
| 0.2 | 5.829 × 10⁻¹⁶ | 1.583 × 10⁻³ | 4.171 × 10⁻⁶ | 1.583 × 10⁻³ | 4.171 × 10⁻⁶ |
| 0.3 | 5.274 × 10⁻¹⁶ | 0 | 7.105 × 10⁻¹⁴ | 1.787 × 10⁻³ | 3.943 × 10⁻⁶ |
| 0.4 | 3.886 × 10⁻¹⁶ | 0 | 5.684 × 10⁻¹³ | 1.634 × 10⁻³ | 3.373 × 10⁻⁶ |
| 0.5 | 2.776 × 10⁻¹⁶ | 2.274 × 10⁻¹³ | 2.956 × 10⁻¹² | 1.158 × 10⁻³ | 2.609 × 10⁻⁶ |
| 0.6 | 1.665 × 10⁻¹⁶ | 1.819 × 10⁻¹² | 1.000 × 10⁻¹¹ | 5.836 × 10⁻⁴ | 1.788 × 10⁻⁶ |
| 0.7 | 5.551 × 10⁻¹⁷ | 3.638 × 10⁻¹² | 2.910 × 10⁻¹¹ | 1.271 × 10⁻⁴ | 1.041 × 10⁻⁶ |
| 0.8 | 0.000 × 10⁺⁰⁰ | 7.276 × 10⁻¹² | 6.912 × 10⁻¹¹ | 1.197 × 10⁻⁴ | 4.920 × 10⁻⁷ |
| 0.9 | 1.110 × 10⁻¹⁶ | 0 | 1.455 × 10⁻¹⁰ | 5.540 × 10⁻⁴ | 2.611 × 10⁻⁷ |
Table 7. The exact versus the approximate y₃(x) solutions for Example 4 using N = 3, where N represents the highest degree of the Hermite polynomial.
| x | Exact Solution | Approximate Solution | Absolute Error |
|-----|----------|---------------------|----------------|
| 0.0 | 1.000000 | 0.99999999999999944 | 5.5511 × 10⁻¹⁶ |
| 0.2 | 1.208000 | 1.20799999999999970 | 2.2204 × 10⁻¹⁶ |
| 0.4 | 1.464000 | 1.46400000000000000 | 0.0000 × 10⁺⁰⁰ |
| 0.6 | 1.816000 | 1.81600000000000120 | 1.3323 × 10⁻¹⁵ |
| 0.8 | 2.312000 | 2.31200000000000120 | 8.8818 × 10⁻¹⁶ |
| 1.0 | 3.000000 | 3.00000000000000130 | 1.3323 × 10⁻¹⁵ |
Table 8. Comparison of the proposed method’s approximate solution against the exact solution and other available numerical methods for Example 4.
| x | Exact Solution | Proposed Method | LCM-REF [12] | LCM [12] | Chelyshkov Tau, N = 8 [13] | Fractional Iteration Method [14] | Legendre-Collocation Method [16] |
|------|----------|----------|----------|----------|----------|----------|----------|
| 0.10 | 1.101000 | 1.101000 | 1.101000 | 1.101000 | 1.101000 | 1.103763 | 1.101000 |
| 0.25 | 1.265625 | 1.265625 | 1.265625 | 1.265625 | 1.265625 | 1.269040 | 1.265625 |
| 0.50 | 1.625000 | 1.625000 | 1.625000 | 1.625000 | 1.625000 | 1.623997 | 1.625000 |
| 0.75 | 2.171875 | 2.171875 | 2.171875 | 2.171875 | 2.171875 | 2.166900 | 2.171875 |
| 1.00 | 3.000000 | 3.000000 | 3.000000 | 3.000000 | 3.000000 | 2.994988 | 3.000002 |
Table 9. Comparison of exact and approximate solutions showing absolute error for varying x values.
| x | Exact | Approximate Solution | Absolute Error |
|-----|-------|----------------------|----------------|
| 0.0 | 0.000 | −0.00000000000000013 | 1.2785 × 10⁻¹⁶ |
| 0.2 | 0.008 | 0.00799999999999995 | 5.0307 × 10⁻¹⁷ |
| 0.4 | 0.064 | 0.06399999999999995 | 6.9389 × 10⁻¹⁷ |
| 0.6 | 0.216 | 0.21599999999999997 | 0.0000 × 10⁺⁰⁰ |
| 0.8 | 0.512 | 0.51200000000000001 | 1.1102 × 10⁻¹⁶ |
Table 10. Comparison of absolute errors for the sine–cosine wavelet method [16] versus the proposed hybrid Hermite-least-squares method.
| x | k = 6, L = 1 | k = 7, L = 1 | k = 8, L = 1 | Presented Method |
|-----|---------------|---------------|---------------|------------------|
| 0.0 | 1.6248 × 10⁻⁶ | 2.0329 × 10⁻⁷ | 2.5428 × 10⁻⁸ | 1.27855 × 10⁻¹⁶ |
| 0.2 | 1.2640 × 10⁻⁴ | 1.1938 × 10⁻⁵ | 3.0823 × 10⁻⁵ | 6.93889 × 10⁻¹⁷ |
| 0.4 | 9.5463 × 10⁻⁴ | 2.4670 × 10⁻⁵ | 2.3622 × 10⁻⁵ | 0.00000 × 10⁺⁰⁰ |
| 0.6 | 2.1195 × 10⁻⁴ | 5.5888 × 10⁻⁴ | 5.3325 × 10⁻⁵ | 1.11022 × 10⁻¹⁶ |
| 0.8 | 1.9748 × 10⁻⁴ | 1.8901 × 10⁻⁴ | 4.9591 × 10⁻⁵ | 2.22045 × 10⁻¹⁶ |
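The machine-precision errors reported in the tables above reflect the fact that a polynomial exact solution (e.g., y(x) = x² in Example 3) lies exactly in the span of the low-degree Hermite basis, so the least-squares system reproduces it up to round-off. This can be sanity-checked with a minimal sketch: it is not the authors' full scheme (no Caputo derivatives or weighted residuals here), and the grid and variable names are illustrative assumptions only.

```python
import numpy as np
from numpy.polynomial import hermite

# Illustrative sketch only: fit y(x) = x^2 (the exact solution of
# Example 3) with a degree-2 Hermite expansion via least squares,
# then evaluate absolute errors on the grid of Table 5.
# Since x^2 = 0.5*H_0(x) + 0.25*H_2(x), the fit is exact to round-off.
x = np.linspace(0.1, 0.9, 9)        # assumed grid, matching Table 5
y_exact = x**2

# Vandermonde-type matrix whose columns are H_0(x), H_1(x), H_2(x)
A = hermite.hermvander(x, 2)

# Solve the over-determined least-squares system A c ≈ y_exact
c, *_ = np.linalg.lstsq(A, y_exact, rcond=None)

y_approx = A @ c
errors = np.abs(y_exact - y_approx)
print("coefficients:", c)           # close to [0.5, 0.0, 0.25]
print("max absolute error:", errors.max())  # near machine epsilon
```

Errors of order 10⁻¹⁶ in such a check come purely from double-precision arithmetic, which is consistent with the magnitudes listed in Tables 5 and 6 for the proposed method.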

Share and Cite

MDPI and ACS Style

Osheba, H.S.; Ramadan, M.A.; Radwan, T. An Efficient and Accurate Numerical Approach for Fractional Bagley–Torvik Equations: Hermite Polynomials Combined with Least Squares. Fractal Fract. 2026, 10, 37. https://doi.org/10.3390/fractalfract10010037

