1. Introduction
Fractional calculus, a generalization of classical integer-order calculus, has gained significant attention in recent decades for its ability to model complex systems with memory and non-local effects. Unlike traditional derivatives, which depend only on a function’s local behavior, fractional derivatives and integrals incorporate the entire history of a process, making them indispensable for accurately describing phenomena in viscoelasticity, finance, anomalous diffusion, and control theory. The standard form of the fractional Bagley–Torvik equation (BTE) can be written as

A y''(x) + B D^α y(x) + C y(x) = f(x),  1 < α < 2,

where A, B, and C are constants, 2 and α are the derivative orders (classically α = 3/2), and f(x) is an external forcing function. Due to its wide-ranging applications in engineering and applied sciences, including modeling damping materials and fluid mechanics, the development of robust and efficient methods for solving the BTE is a critical area of research.
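For polynomial trial functions, the Caputo derivative in the BTE has a simple closed form: for 1 < α < 2, D^α x^k = Γ(k+1)/Γ(k+1−α) x^(k−α) for integer k ≥ 2, while constant and linear terms are annihilated. A minimal Python sketch of this rule (the helper name caputo_monomial is ours, for illustration):

```python
from math import gamma

def caputo_monomial(k: int, alpha: float, x: float) -> float:
    """Caputo derivative of order alpha (1 < alpha < 2) of x**k, evaluated at x > 0."""
    if k < 2:                     # D^alpha annihilates constants and linear terms
        return 0.0
    return gamma(k + 1) / gamma(k + 1 - alpha) * x ** (k - alpha)

# Example: D^{3/2} x^2 at x = 1 equals Gamma(3)/Gamma(3/2) ≈ 2.2568
print(caputo_monomial(2, 1.5, 1.0))
```

Since every Hermite polynomial is itself a polynomial, this power rule is all that is needed to differentiate a truncated Hermite expansion term by term.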
While the analytical solution of the BTE is challenging to obtain, various numerical techniques have been developed to find approximate solutions. These methods include finite difference schemes, wavelet-based approaches, and spectral collocation methods using various polynomial families. Although effective, many of these approaches suffer from limitations, such as complex implementation, computational cost, or potential issues with stability and convergence, especially for higher accuracy requirements. For instance, traditional finite difference methods can struggle with the non-local nature of fractional derivatives, often requiring dense matrices that increase the computational burden, while spectral methods offer high accuracy but can be sensitive to the choice of basis functions and the handling of fractional derivatives.
The use of orthogonal polynomials, such as Hermite polynomials, offers a powerful alternative for approximating solutions to fractional differential equations. The analytical properties of Hermite polynomials, including their efficient calculation and recursive relationships, provide a strong foundation for developing accurate and stable numerical schemes. Previous work has demonstrated the utility of Hermite polynomials in conjunction with collocation and operational matrices to solve fractional-order systems. However, combining these basis functions with a robust optimization framework remains an area for improvement.
To overcome the shortcomings of existing methods, this paper proposes a novel and highly accurate numerical framework that synergizes the strengths of Hermite polynomial approximation with a least-squares optimization scheme. By expanding the solution in a series of Hermite polynomials, we can leverage their properties to efficiently compute the Caputo fractional derivatives without the complexity of direct discretization. The problem is then recast as a weighted least-squares minimization problem, which transforms the original fractional differential equation into a stable and well-conditioned system of algebraic equations. This approach not only ensures high-accuracy solutions but also provides a systematic way to enforce initial and boundary conditions. Our numerical results will confirm that this combined methodology surpasses existing techniques in accuracy, convergence rate, and overall robustness, highlighting its potential for wider application to general fractional-order boundary value problems.
Several numerical approaches have been developed to obtain approximate solutions of the fractional Bagley–Torvik equation. Jeon and Bu [
1] proposed an improved numerical technique based on the fractional integral formula and Adams–Moulton method, achieving higher accuracy for fractional dynamic systems. Similarly, Liu et al. [
2] presented a Taylor expansion-based approach for solving the BTE with integral boundary conditions, providing an efficient tool for boundary-value formulations. Other researchers have explored polynomial-based spectral techniques. Taha et al. [
3] presented a novel analytical technique for solving fractional Bagley–Torvik equations, specifically applying it to the model describing the motion of a rigid plate in Newtonian fluids, while Zeen El Deen et al. [
4] developed a fourth-kind Chebyshev operational Tau algorithm, both demonstrating the power of orthogonal polynomial bases in obtaining high-precision results.
Kamal et al. [
5] compared least-squares methods for nonlinear time-fractional gas dynamic equations, confirming the robustness of least-squares formulations in handling fractional nonlinearities. In a related direction, Zhang et al. [
6] adopted the Hermite wavelet method to approximate solutions of the BTE, emphasizing the effectiveness of Hermite functions in fractional modeling. Recent comprehensive reviews, such as that by Yang, Zhao, and Li [
7], highlighted both the progress and the remaining challenges in developing efficient algorithms for fractional Bagley–Torvik equations. The following works predominantly focus on solving the fractional Bagley–Torvik equation, a specific type of multi-term linear fractional differential equation. S. T. Ejaz et al. [
8] provided a general numerical comparative analysis of methods for solving various fractional differential equations, rather than focusing exclusively on the Bagley–Torvik equation itself. Ford and Connolly [
9] developed systems-based decomposition schemes specifically for solving multi-term fractional differential equations, for which the Bagley–Torvik equation is a primary example. Rahimkhani and Ordokhani [
10] applied Müntz–Legendre polynomials to solve the Bagley–Torvik equation effectively over large intervals. Sayed et al. [
11] utilized a Legendre-Galerkin spectral algorithm, with an application provided specifically for the Bagley–Torvik equation. Askari [
12] employed Lucas polynomials for the numerical solution of the fractional Bagley–Torvik equations. El-Gamel et al. [
13] used a Chelyshkov–Tau approach to address the Bagley–Torvik equation. El-Gamel and Abd El-Hady [
14] utilized the Legendre-Collocation method to solve the same equation. Mekkaouii and Hammouch [
15] focused on finding approximate analytical solutions to the Bagley–Torvik equation using the fractional iteration method. Dinçel [
16] employed a sine–cosine wavelet method for approximating solutions of the fractional Bagley–Torvik equation.
The remainder of this paper is organized as follows:
Section 2 introduces the essential mathematical preliminaries, including the definitions of fractional derivatives, the properties of Hermite polynomials, and the formulation of the least-squares method that forms the foundation of the proposed scheme.
Section 3 presents the construction of the Hermite polynomial approximation and develops the weighted least-squares model for the fractional Bagley–Torvik equation.
Section 4 outlines the implementation procedure and discusses the computational framework of the proposed approach.
Section 5 provides several numerical experiments to validate the efficiency, accuracy, and convergence behavior of the method, along with comparisons to existing techniques reported in the literature. Finally,
Section 6 summarizes the main conclusions and highlights potential directions for future research and extensions of the proposed methodology.
3. Description of the Proposed Hermite Least Squares Method
Here, we consider the standard form of the fractional Bagley–Torvik equation (BTE):

A y''(x) + B D^α y(x) + C y(x) = f(x),  x ∈ [0, 1],  1 < α < 2.
We assume an approximate solution expanded in Hermite polynomials,

y_N(x) = Σ_{n=0}^{N} c_n H_n(x),

together with the corresponding expansion of its second derivative:

y_N''(x) = Σ_{n=0}^{N} c_n H_n''(x).
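A truncated Hermite series and its derivatives can be evaluated term-wise; the short Python sketch below uses NumPy's physicists' Hermite helpers (the coefficient values are hypothetical, chosen so that the series reduces to 1 + x):

```python
import numpy as np
from numpy.polynomial import hermite as Herm

c = np.array([1.0, 0.5, 0.0, 0.0])          # hypothetical coefficients c_0..c_3
x = np.linspace(0.0, 1.0, 5)

y_N = Herm.hermval(x, c)                    # y_N(x)  = sum_n c_n H_n(x)
d2y = Herm.hermval(x, Herm.hermder(c, 2))   # y_N''(x) by differentiating the series twice

# Here y_N(x) = H_0(x) + 0.5*H_1(x) = 1 + x, so the second derivative vanishes
print(y_N)   # values of 1 + x at the grid points
```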
Substituting the approximate solution into the BTE yields the following:

A y_N''(x) + B D^α y_N(x) + C y_N(x) = f(x).

Moving the right-hand side of the equation to the left-hand side gives us the residue equation, which is set to zero. The residue equation, R(x), of the above expression is

R(x) = Σ_{n=0}^{N} c_n [A H_n''(x) + B D^α H_n(x) + C H_n(x)] − f(x).
From this residual function, we generate our functional, J, using a weight function, w(x), where in this case we assume w(x) = 1 for simplicity:

J(c_0, c_1, …, c_N) = ∫_0^1 w(x) R(x)² dx.
Finding the values of c_n for n = 0, 1, …, N, which minimize J, is equivalent to finding the best approximate solution. The minimum value of J is obtained by setting

∂J/∂c_j = 0,  j = 0, 1, …, N.

Using this condition, we get a system of N + 1 equations. For j = 0, 1, …, N:

∫_0^1 R(x) [A H_j''(x) + B D^α H_j(x) + C H_j(x)] dx = 0.
This process continues up to j = N, generating a system of N + 1 linear algebraic equations in the N + 1 unknown constants c_0, c_1, …, c_N. This system can be solved using a method such as Gaussian elimination to obtain the values of the unknown constants, which are then substituted back into the assumed approximate solution to give the solution of the fractional Bagley–Torvik equation.
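Because the residual is linear in the unknown constants, the functional is quadratic and its stationarity conditions are exactly a linear system. The Python sketch below checks this numerically for a generic discretized least-squares functional, comparing a finite-difference gradient against 2(Mc − b); the random stand-in data and all names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
npts, nbasis = 200, 4
w = np.full(npts, 1.0 / npts)            # uniform quadrature weights, w(x) = 1

psi = rng.normal(size=(nbasis, npts))    # stand-in samples of the transformed basis
f = rng.normal(size=npts)                # stand-in samples of the forcing function

def J(c):
    """Discretized least-squares functional J = ∫ w(x) R(x)^2 dx."""
    R = c @ psi - f
    return np.sum(w * R * R)

M = (psi * w) @ psi.T                    # M[j, n] ≈ ∫ w psi_j psi_n dx
b = (psi * w) @ f                        # b[j]    ≈ ∫ w f psi_j dx

c0 = rng.normal(size=nbasis)
eps = 1e-6
eye = np.eye(nbasis)
grad_fd = np.array([(J(c0 + eps * eye[j]) - J(c0 - eps * eye[j])) / (2 * eps)
                    for j in range(nbasis)])
grad_exact = 2.0 * (M @ c0 - b)          # ∂J/∂c_j = 2 (M c - b)_j

print(np.max(np.abs(grad_fd - grad_exact)))   # small: only finite-difference error remains
```

Setting the exact gradient to zero recovers precisely the normal equations Mc = b derived above.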
We now formulate the system of equations in general matrix form. The system is derived from the condition that the partial derivative of the functional with respect to each coefficient c_j is zero, for all j from 0 to N. This gives us N + 1 equations.
The core equation for each j is as follows:

∂J/∂c_j = 2 ∫_0^1 R(x) (∂R/∂c_j) dx = 0,

where the residue function is given by the following:

R(x) = Σ_{n=0}^{N} c_n [A H_n''(x) + B D^α H_n(x) + C H_n(x)] − f(x).

The partial derivative of the residue function with respect to c_j is

∂R/∂c_j = A H_j''(x) + B D^α H_j(x) + C H_j(x).

For clarity, let’s define a combined function ψ_j(x) for each basis function H_j(x):

ψ_j(x) = A H_j''(x) + B D^α H_j(x) + C H_j(x).

Using this, the system of equations becomes as follows:

∫_0^1 [Σ_{n=0}^{N} c_n ψ_n(x) − f(x)] ψ_j(x) dx = 0,  j = 0, 1, …, N.

Expanding and rearranging the terms to form a linear system of equations, we get the following:

Σ_{n=0}^{N} c_n ∫_0^1 ψ_j(x) ψ_n(x) dx = ∫_0^1 f(x) ψ_j(x) dx.
Now, we define the general form of the matrix M and the vector b. The system of equations is a linear system of size (N + 1) × (N + 1) for the unknown coefficients c_0, c_1, …, c_N. The matrix M is a symmetric matrix of size (N + 1) × (N + 1); the element at row j and column n is given by the integral on the left side of the equation:

M_{jn} = ∫_0^1 ψ_j(x) ψ_n(x) dx,  j, n = 0, 1, …, N.
The vector b is a column vector of size N + 1. The element at row j is given by the integral on the right side of the equation:

b_j = ∫_0^1 f(x) ψ_j(x) dx,  j = 0, 1, …, N.
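Putting the pieces together, the following Python sketch assembles M and b by Gauss–Legendre quadrature, appends the initial-condition rows, and solves the resulting system. It is a minimal illustration on the widely used benchmark y'' + D^{3/2}y + y = 1 + x with y(0) = y'(0) = 1 and exact solution y = 1 + x; this particular instance, the use of numpy.linalg.lstsq for the condition-augmented system, and all helper names are our own choices, not prescribed by the text:

```python
import numpy as np
from math import gamma
from numpy.polynomial import polynomial as P
from numpy.polynomial import hermite as Herm

# Benchmark instance (our illustration): y'' + D^{3/2} y + y = 1 + x on [0, 1],
# y(0) = 1, y'(0) = 1, exact solution y(x) = 1 + x.
alpha, A, B, C = 1.5, 1.0, 1.0, 1.0
f = lambda s: 1.0 + s
N = 3                                    # highest Hermite degree

def herm_monomial(j):
    """Monomial coefficients of the physicists' Hermite polynomial H_j."""
    e = np.zeros(j + 1); e[j] = 1.0
    return Herm.herm2poly(e)

def caputo(a, s):
    """Caputo derivative of order alpha of the polynomial with monomial coeffs a."""
    out = np.zeros_like(s)
    for k in range(2, len(a)):           # x^0 and x^1 are annihilated for 1 < alpha < 2
        out += a[k] * gamma(k + 1) / gamma(k + 1 - alpha) * s ** (k - alpha)
    return out

# Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1]
t, w = np.polynomial.legendre.leggauss(40)
x = 0.5 * (t + 1.0); w = 0.5 * w

# Transformed basis psi_j = A H_j'' + B D^alpha H_j + C H_j at the nodes
psi = np.array([A * P.polyval(x, P.polyder(herm_monomial(j), 2))
                + B * caputo(herm_monomial(j), x)
                + C * P.polyval(x, herm_monomial(j)) for j in range(N + 1)])

M = (psi * w) @ psi.T                    # M[j, n] = int_0^1 psi_j psi_n dx
b = (psi * w) @ f(x)                     # b[j]    = int_0^1 f psi_j dx

# Append the initial-condition rows y(0) = 1 and y'(0) = 1
row_y = np.array([P.polyval(0.0, herm_monomial(j)) for j in range(N + 1)])
row_dy = np.array([P.polyval(0.0, P.polyder(herm_monomial(j))) for j in range(N + 1)])
Mfull = np.vstack([M, row_y, row_dy])
bfull = np.concatenate([b, [1.0, 1.0]])

c, *_ = np.linalg.lstsq(Mfull, bfull, rcond=None)

def y(s):
    """Approximate solution y_N(s) = sum_j c_j H_j(s)."""
    return sum(c[j] * P.polyval(s, herm_monomial(j)) for j in range(N + 1))

print(y(0.5))                            # close to the exact value 1.5
```

With the exact solution inside the Hermite span, the recovered coefficients reproduce y = 1 + x essentially to machine precision, mirroring the behavior reported for the worked examples below.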
5. Numerical Examples and Results
This section is dedicated to a thorough investigation into the effectiveness and accuracy of a novel hybrid technique developed for solving the fractional differential equation known as the Bagley–Torvik equation. The core of this methodology lies in combining Hermite polynomials for approximating the solution with the least squares method for minimizing the residual error over a specified interval. The objective is to rigorously analyze the method’s performance by applying it to several specific examples of the Bagley–Torvik equation, each with different initial or boundary conditions. Furthermore, a critical comparative analysis is conducted, where the results of the proposed hybrid method are measured against those obtained from other existing and recent numerical techniques, such as the Bernstein collocation, Subdivision collocation, and various spectral methods. The aim of this comprehensive evaluation is to demonstrate the superior accuracy and efficiency of the proposed approach across various test cases.
Example 1. Consider the fractional differential equation known as Bagley–Torvik equation [8] subject to initial conditions . The exact solution of this equation is .
This problem was recently solved by S. T. Ejaz et al. [8] using two approaches, namely the Bernstein collocation method and the subdivision collocation method. We apply our novel hybrid technique, which combines Hermite polynomials with the least squares method, taking the case where N represents the highest degree of the Hermite polynomial.
Step 1: Choose a Truncated Hermite Polynomial Expansion.
The analytic solution is approximated using a truncated series, where the Hermite polynomials are the following:

Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain the following:

Step 3: Compute the Caputo Fractional Derivative.
By evaluating the fractional derivative, we obtain the following:

Step 4: Transformed Basis.
The transformed basis functions are defined as the result of applying the entire differential operator (which includes the second derivative, the fractional derivative, and the function itself) to the original Hermite polynomials:

Step 5: Residual Computation.
The residual equation is defined as the following. The least-squares approach chooses the coefficients to minimize the squared L²-norm of the residual. Setting the gradient to zero yields the normal equations; because the residual is linear in the coefficients, the normal equations form a linear system whose matrix and column-vector entries are defined as in Equation (11). The matrix is then given as follows, and the column vector is computed using Equation (13):

Step 6: Initial Conditions.
Now, we use the initial conditions to get a system of two equations, which can be written in matrix form. Finally, from Equations (12) and (14), we arrive at the augmented linear system in matrix form. The objective is to determine the unknown coefficient vector by solving this system; the resulting solution is found to be:
These coefficients were then used to obtain the approximate solution as a weighted sum of the Hermite basis functions. In Table 1, we display the exact and approximate solutions at the indicated x points. In
Table 2, we compare the accuracy of our proposed method to the method presented in [
8] using Bernstein collocation method and Subdivision collocation method.
Table 2 illustrates that our proposed method outperforms both the Bernstein collocation and subdivision collocation techniques, delivering better accuracy.
Example 2. Consider the fractional differential equation known as Bagley–Torvik equation [9,10] subject to initial conditions . The exact solution of (15) is .
We apply our novel hybrid technique, which combines Hermite polynomials with the least squares method, taking the case where N represents the highest degree of the Hermite polynomial. This problem is considered in [9,10]: in [9], Ford and Connolly compare the efficiency of three alternative decomposition schemes for its approximate solution using the Caputo form of the fractional derivative, while Rahimkhani and Ordokhani [10] proposed a new numerical method that approximately solves this fractional Bagley–Torvik equation using Müntz–Legendre polynomials.

Step 1: Choose a Truncated Hermite Polynomial Expansion.
The analytic solution is approximated using a truncated series.

Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain the following:

Step 3: Compute the Caputo Fractional Derivative.
By evaluating the fractional derivative, we obtain the following:

Step 4: Transformed Basis.
The transformed basis functions are defined as the result of applying the entire differential operator (which includes the second derivative, the fractional derivative, and the function itself) to the original Hermite polynomials:

Step 5: Residual Computation.
The residual equation is defined as the following. The least-squares approach chooses the coefficients to minimize the squared L²-norm of the residual. Setting the gradient to zero yields the normal equations; because the residual is linear in the coefficients, the normal equations form a linear system whose matrix and column-vector entries are given as in Equation (21). Using Equation (23), the matrix and the column vector are obtained in their final form.

Step 6: Initial Conditions.
Now, we use the initial conditions to get a system of two equations, which can be written in matrix form. Finally, from Equations (22) and (24), we arrive at the linear system in matrix form, and we solve for the unknown coefficient vector. Hence, our approximate solution is obtained, which coincides with the exact solution. The values of both the exact and approximate solutions are shown for various points in Table 3 below. The proposed hybrid method achieves superior accuracy compared with the three decomposition approaches in [
9] as well as the Müntz–Legendre polynomial method in [
10] (
Table 4).
Example 3. Let us examine the fractional Bagley–Torvik differential equation [8,11] subject to boundary conditions .
The analytical solution when .
In Ref. [8], the authors introduced a numerical solution technique for the Bagley–Torvik equation using the Taylor matrix method. This approach converts the differential equation into a system of algebraic equations, which are solved to determine the coefficients of a generalized Taylor series that then provides the approximate solution. In Ref. [11], the authors utilized a spectral Galerkin algorithm with a specialized shifted Legendre basis to find semi-analytic solutions. The method first transforms the non-homogeneous boundary conditions into homogeneous ones; it then solves the resulting problem, which has a new exact solution, by converting the fractional differential equation into a linear system with well-structured, invertible matrices. We apply our novel hybrid technique, which combines Hermite polynomials with the least squares method, taking the case where N represents the highest degree of the Hermite polynomial.
Step 1: Choose a Truncated Hermite Polynomial Expansion
The analytic solution is approximated using a truncated series.

Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain the following:

Step 3: Compute the Caputo Fractional Derivative.
By evaluating the fractional derivative, we obtain the following:

Step 4: Transformed Basis.
The transformed basis functions are derived by subjecting the original Hermite polynomials to the full differential operator, which is composed of the second derivative, the fractional derivative, and the function itself:

Step 5: Residual Computation.
The residual equation is defined as follows. The least-squares approach chooses the coefficients to minimize the squared L²-norm of the residual. Setting the gradient to zero yields the normal equations; because the residual is linear in the coefficients, the normal equations form a linear system whose matrix and column-vector entries are given as in Equation (31). Using Equation (33), the matrix entries are given as follows:

Step 6: Boundary Conditions.
Now, we use the boundary conditions to get a system of two equations. These two conditions can be written in matrix form. Finally, from Equations (32) and (34), we arrive at the linear system in matrix form, and we solve for the unknown coefficient vector. Hence, our approximate solution is obtained. In
Table 5, below, we display the approximate solution versus the exact solution, as well as the absolute error of our proposed method.
In
Table 6, we compare the accuracy of our proposed method to the method presented in [
11] using the Galerkin method with shifted Legendre polynomials.
Remark 1. The parameter in Table 6 represents the number of terms used in the truncated series expansion that approximates the solution with shifted Legendre polynomials.
From
Table 6, it is clear that our proposed method is more accurate than the existing technique described in [
11], which utilizes the Galerkin method with shifted Legendre polynomials.
Example 4. Consider the fractional Bagley–Torvik equation [12,13,14,15], subject to the given initial conditions; this problem has the exact solution of the stated form. By using the above proposed method, we solve this problem.

Step 1: Choose a Truncated Hermite Polynomial Expansion.
The analytic solution is approximated using a truncated series, where the Hermite polynomials are the following:

Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain the following:

Step 3: Compute the Caputo Fractional Derivative.
By evaluating the fractional derivative, we obtain the following:

Step 4: Transformed Basis.
The transformed basis functions are defined as the result of applying the entire differential operator (which includes the second derivative, the fractional derivative, and the function itself) to the original Hermite polynomials:

Step 5: Residual Computation.
The residual equation is defined as the following. The least-squares approach chooses the coefficients to minimize the squared L²-norm of the residual. Setting the gradient to zero yields the normal equations; because the residual is linear in the coefficients, the normal equations form a linear system whose matrix and column-vector entries are given as in Equation (40). Using Equation (42), the matrix entries are given as the following:

Step 6: Initial Conditions.
Now, we use the initial conditions to get a system of two equations. These two conditions can be written in matrix form. Finally, from Equations (41) and (43), we arrive at the linear system in matrix form. Solving the matrix equation, we obtain the coefficients as follows. Now, we have obtained the approximate solution, which is very close to the exact solution. In
Table 7, below, we display the approximate solution versus the exact solution, as well as the absolute error of our proposed method.
Table 8 presents a comparative analysis of results derived from several numerical approaches: the proposed hybrid method (Hermite coupled with least squares method), the Lucas collocation method (LCM), the Lucas collocation method combined with residual error function (LCM-REF) [
12], the Chelyshkov–Tau method for
[
13], the fractional iteration method (FIM) [
14], and the Legendre-collocation method [
15]. The data clearly indicate that the accuracy of the proposed hybrid method is comparable to that of the Lucas collocation, Lucas combined with residual error function, and Chelyshkov–Tau methods. Furthermore, the hybrid method demonstrates superior accuracy when compared to the methods outlined in references [
15,
16].
Computational cost analysis for this example: the proposed method forms a system of linear algebraic equations of size N + 1 (plus the rows enforcing the initial conditions). The most computationally intensive steps are the calculation of the matrix integrals and the solution of the linear system. Matrix formation: calculating the entries of the matrix M and the vector b involves numerical integration of products of the basis functions and of the forcing function over the interval. The number of matrix entries grows quadratically with N, i.e., O(N²) entries; the cost per integral depends on the specific quadrature rule used and the complexity of the integrands, but this is a fixed cost once the numerical integration scheme is chosen. System solution: solving the resulting linear system for the unknown coefficients is typically carried out using methods such as Gaussian elimination. The computational complexity of solving a general square linear system of size N + 1 is O((N + 1)³), which simplifies to O(N³). Thus, the overall theoretical computational complexity of the proposed method is dominated by solving the linear system, making it an O(N³) algorithm, where N is the highest degree of the Hermite polynomial used. By contrast, methods based on finite differences or on certain wavelet constructions can involve dense matrices and higher computational burdens, especially for higher accuracy requirements, and may scale less favorably with the number of grid points.
Theoretical convergence rate analysis: the Hermite polynomials form a complete basis for the space of square-integrable functions. The least-squares method is designed to minimize the L²-norm of the residual, which ensures that, as the number of basis functions approaches infinity, the approximate solution converges to the exact solution in the L²-norm. For Example 4, using a low polynomial degree, the method achieved absolute errors near machine precision, suggesting that the solution converges very rapidly even with a small number of terms.
Comparison of accuracy: Table 8 compares the proposed method with others. The proposed method’s error is shown to be comparable to, or better than, that of other methods such as Chelyshkov–Tau. This empirical evidence suggests a high convergence rate, characteristic of spectral methods. The rate of convergence (algebraic or spectral) depends on the smoothness of the exact solution and the properties of the fractional operators involved.
The memory requirements for the proposed method in Example 4 primarily depend on the size of the linear system of algebraic equations that is solved.
System size: the method transforms the fractional differential equation into a system of linear equations which, after incorporating the initial conditions, becomes overdetermined. Generally, for a polynomial degree N, the system involves N + 1 unknown coefficients and at least N + 1 equations, plus additional rows for the boundary or initial conditions; the core matrix is approximately of size (N + 1) × (N + 1) before the extra conditions are added. The memory needed to store the matrix is proportional to the number of its entries. Since the matrix is dense (all entries are non-zero), the memory complexity scales quadratically with the polynomial degree, i.e., O(N²), while the vectors require O(N) memory. The overall memory requirement is low because only a small number of Hermite basis functions is needed to achieve high accuracy, as in the near-machine-precision results of Example 4. This contrasts favorably with methods that require a very fine grid or many basis functions to reach comparable accuracy, and it demonstrates that the method is efficient not only in speed but also in memory usage.
Example 5. Consider the following Bagley–Torvik equation [16], subject to the given initial conditions and with the stated exact solution.
To solve this problem, we follow the same steps as in the previous examples:
Step 1: Choose a truncated Hermite polynomial expansion.
The analytic solution is approximated using a truncated series.

Step 2: Construct Derivatives in the Hermite Basis.
By using Equation (4), we obtain the following:

Step 3: Compute the Caputo Fractional Derivative.
By evaluating the fractional derivative, we obtain the following:

Step 4: Transformed Basis.
The transformed basis functions are defined as the result of applying the entire differential operator (which includes the second derivative, the fractional derivative, and the function itself) to the original Hermite polynomials:

Step 5: Residual Computation.
The residual equation is defined as follows. In the least-squares approach, the coefficients are selected to minimize the squared L²-norm of the residual. Setting the gradient to zero yields the normal equations; because the residual is linear in the coefficients, the normal equations form a linear system whose matrix and column-vector entries are given as in Equation (51). Using Equation (53), the matrix entries are given as follows:

Step 6: Initial Conditions.
Now, we use the initial conditions to get a system of two equations. These two conditions can be written in matrix form. Finally, from Equations (52) and (54), we arrive at the linear system in matrix form. Solving the matrix equation, we obtain the coefficients as follows. Hence, our approximate solution is given as the following. The results presented in
Table 9 demonstrate a high degree of accuracy, as indicated by the very small absolute error values, with one instance of zero error.
Based on the data presented in
Table 9 and
Table 10, a distinct performance gap exists between the two numerical approaches. The proposed hybrid Hermite-least-squares method consistently delivers superior accuracy compared to the sine–cosine wavelet method across all tested
x values. Although the wavelet method’s accuracy improves as the parameter k increases, its precision remains notably lower than that of the hybrid method, which consistently achieves absolute errors within machine precision. Ultimately, the results strongly indicate that the proposed hybrid method is numerically superior for approximating the exact solution under these conditions.