Article

The Numerical Approximation of Caputo Fractional Derivatives of Higher Orders Using a Shifted Gegenbauer Pseudospectral Method: A Case Study of Two-Point Boundary Value Problems of the Bagley–Torvik Type

by
Kareem T. Elgindy
1,2
1
Department of Mathematics and Sciences, College of Humanities and Sciences, Ajman University, P.O. Box 346, Ajman, United Arab Emirates
2
Nonlinear Dynamics Research Center (NDRC), Ajman University, P.O. Box 346, Ajman, United Arab Emirates
Mathematics 2025, 13(11), 1793; https://doi.org/10.3390/math13111793
Submission received: 20 April 2025 / Revised: 28 April 2025 / Accepted: 19 May 2025 / Published: 27 May 2025
(This article belongs to the Special Issue Advances in Computational Mathematics and Applied Mathematics)

Abstract
This paper introduces a novel Shifted Gegenbauer Pseudospectral (SGPS) method for approximating Caputo fractional derivatives (FDs) of an arbitrary positive order. The method employs a strategic variable transformation to express the Caputo FD as a scaled integral of the mth-derivative of the Lagrange interpolating polynomial, thereby mitigating singularities and improving numerical stability. Key innovations include the use of shifted Gegenbauer (SG) polynomials to link mth-derivatives with lower-degree polynomials for precise integration via SG quadratures. The developed fractional SG integration matrix (FSGIM) enables efficient, pre-computable Caputo FD computations through matrix–vector multiplications. Unlike Chebyshev or wavelet-based approaches, the SGPS method offers tunable clustering and employs SG quadratures in barycentric forms for optimal accuracy. It also demonstrates exponential convergence, achieving superior accuracy in solving Caputo fractional two-point boundary value problems (TPBVPs) of the Bagley–Torvik type. The method unifies interpolation and integration within a single SG polynomial framework and is extensible to multidimensional fractional problems.

1. Introduction

Fractional calculus generalizes calculus by allowing differentiation and integration to arbitrary real orders. This framework provides a powerful tool for modeling memory effects, long-range interactions, and anomalous diffusion—phenomena commonly observed in scientific and engineering applications. Unlike classical integer-order models, which assume purely local and instantaneous interactions, fractional-order models naturally incorporate non-locality and history dependence. This feature allows them to more accurately represent real-world processes such as viscoelasticity, dielectric polarization, electrochemical reactions, and subdiffusion in disordered media [1,2,3]. Moreover, fractional-order models often require fewer parameters to match or exceed the accuracy of classical models in representing complex dynamics, making them both efficient and descriptive [4].
The key advantage of FDs over classical derivatives lies in their capacity to capture hereditary characteristics and long-range temporal correlations, which are particularly relevant in biological systems [5], control systems [6], and viscoelastic materials [3]. For instance, traditional damping models use exponential kernels that decay too quickly to accurately capture certain relaxation behaviors. In contrast, fractional models employ power-law kernels, enabling them to describe slower and more realistic decay rates [7].
Among the various definitions of FDs, the Caputo FD is particularly popular due to its compatibility with classical initial and boundary conditions, which allows seamless integration with standard numerical and analytical techniques for solving fractional differential equations. Unlike the Riemann–Liouville FD, the Caputo FD defines the FD of a constant as zero, simplifying the mathematical treatment of steady-state solutions and improving the applicability of collocation methods. A prominent example of its application is the Bagley–Torvik equation, a well-known fractional differential equation involving a Caputo derivative of order 1.5. This equation models the motion of a rigid plate immersed in a viscous fluid, where the FD term represents a damping force that depends on the history of the plate’s motion. Such damping—referred to as fractional or viscoelastic damping—is commonly used to model materials exhibiting memory effects.
Recent advances in finite-time stability analysis for fractional systems (e.g., [8]) underscore the growing demand for robust numerical methods. In particular, the numerical approximation of FDs and the solution to equations such as the Bagley–Torvik equation remain active and challenging areas of research. These developments highlight the necessity of stable, efficient, and highly accurate numerical methods capable of capturing the complex dynamics inherent in fractional-order models. Several numerical studies have demonstrated that classical methods struggle to maintain accuracy or stability when adapted to fractional settings due to the singular kernel behavior of fractional integrals, especially near the origin [9]. Hence, developing dedicated fractional methods that respect the non-local structure of the problem is crucial for realistic simulations. In the following, we mention some of the key contributions to the numerical solution to the Bagley–Torvik equation using the Caputo FD:
  • Spectral Methods: Saw and Kumar [10] proposed a Chebyshev collocation scheme for solving the fractional Bagley–Torvik equation. The Caputo FD was handled through a system of algebraic equations formed using Chebyshev polynomials and specific collocation points. Ji et al. [11] presented a numerical solution using SC polynomials. The Caputo derivative was expressed using an operational matrix of FDs, and the fractional-order differential equation was reduced to a system of algebraic equations that was solved using Newton's method. Hou et al. [12] solved the Bagley–Torvik equation by converting the differential equation into a Volterra integral equation, which was then solved using Jacobi collocation. Ji and Hou [13] applied Laguerre polynomials to approximate the solution to the Bagley–Torvik equation. The Laplace transform was first used to convert the problem into an algebraic equation, and then Laguerre polynomials were used for numerical inversion.
  • Wavelet-Based Methods: Kaur et al. [14] developed a hybrid numerical method using non-dyadic wavelets for solving the Bagley–Torvik equation. Dincel [15] employed sine–cosine wavelets to approximate the solution to the Bagley–Torvik equation, where the Caputo FD was computed using the operational matrix of fractional integration. Rabiei and Razzaghi [16] introduced a wavelet-based technique, utilizing the Riemann–Liouville integral operator to transform the fractional Bagley–Torvik equation into algebraic equations.
  • Operational Matrix Methods: Abd-Elhameed and Youssri [17] formulated an operational matrix of FDs in the Caputo sense using Lucas polynomials, and applied Tau and collocation methods to solve the Bagley–Torvik equation. Youssri [18] introduced an operational matrix approach using Fermat polynomials for solving the fractional Bagley–Torvik equation in the Caputo sense. A spectral tau method was employed to transform the problem into algebraic equations.
  • Galerkin Methods: Izadi and Negar [19] used a local discontinuous Galerkin scheme with upwind fluxes for solving the Bagley–Torvik equation. The Caputo derivative was approximated by discretizing elementwise systems. Chen [20] proposed a fast multiscale Galerkin algorithm using orthogonal functions with vanishing moments.
  • Spline and Finite Difference Methods: Tamilselvan et al. [21] used a second-order spline approximation for the Caputo FD and a central difference scheme for the second-order derivative term in solving the Bagley–Torvik equation.
  • Artificial Intelligence-Based Methods: Verma and Kumar [22] employed an artificial neural network method with Legendre polynomials to approximate the solution to the Bagley–Torvik equation, where the Caputo derivative was handled through an optimization-based training process.
This work introduces a novel framework for approximating Caputo FDs of any positive order using an SGPS method. Unlike traditional approaches, our method employs a strategic change of variables to transform the Caputo FD into a scaled integral of the mth-derivative of the Lagrange interpolating polynomial, where m is the ceiling of the fractional order α. This transformation mitigates the singularity inherent in the Caputo derivative near zero, thereby improving numerical stability and accuracy. The numerical approximation of the Caputo FD is finally furnished by linking the mth-derivative of SG polynomials with another set of SG polynomials of lower degrees and higher parameter values, whose integration can be recovered with excellent accuracy using SG quadratures. By employing orthogonal collocation and SG quadratures in barycentric form, we achieve a highly accurate, computationally efficient, and stable scheme for solving fractional differential equations under optimal parameter settings compared to classical PS methods. Furthermore, we provide a rigorous error analysis showing that the SGPS method is convergent when implemented within a semi-analytic framework, where all necessary integrals are computed analytically, and is conditionally convergent with an exponential rate of convergence for sufficiently smooth functions when performed using finite-precision arithmetic. This exponential convergence generally leads to superior accuracy compared to existing wavelet-based, operational matrix, and finite difference methods. We conduct rigorous error and convergence analyses to derive the total truncation error bound of the method and study its asymptotic behavior within double-precision arithmetic. The SGPS method is highly flexible in the sense that the SG parameters associated with SG interpolation and quadrature allow the method to be adjusted to suit different types of problems. These parameters influence the clustering of collocation and quadrature points and can be tuned for optimal performance. A key contribution of this work is the development of the FSGIM. This matrix facilitates the direct computation of Caputo FDs through efficient matrix–vector multiplications. Notably, the FSGIM is constant for a given set of points and parameter values. This allows for pre-computation and storage, significantly accelerating the execution of the SGPS method. The SGPS method avoids the need for extended-precision arithmetic, as it remains within the limits of double-precision computations, making it computationally efficient compared to methods that require high-precision arithmetic. The current approach is designed to handle any positive fractional order α, making it more flexible than some existing methods that are constrained to specific fractional orders. Unlike Chebyshev polynomials (fixed clustering) or wavelets (local support), SG polynomials offer tunable clustering via their index λ, optimizing accuracy for smooth solutions, while their derivative properties enable efficient FD computation, surpassing finite difference methods in convergence rate. The efficacy of our approach is demonstrated through its application to Caputo fractional TPBVPs of the Bagley–Torvik type, where it outperforms existing numerical schemes. The method's framework supports extension to multidimensional and time-dependent fractional PDEs through the tensor products of FSGIMs.
By integrating interpolation and integration into a cohesive SG polynomial-based approach, it provides a unified solution framework for fractional differential equations.
The remainder of this paper is structured as follows. Section 2 introduces the SGPS method, providing a detailed exposition of its theoretical framework and numerical implementation. The computational complexity of the derived FSGIM is discussed in Section 3. A comprehensive error analysis of the method is carried out in Section 4, establishing its convergence properties and providing insights into its accuracy. In Section 5, we demonstrate the effectiveness of the SGPS method through a case study, focusing on its application to Caputo fractional TPBVPs of the Bagley–Torvik type. Section 6 presents a series of numerical examples, demonstrating the superior performance of the SGPS method in comparison to existing techniques. Section 7 conducts a sensitivity analysis to investigate the impact of the SG parameters on the numerical stability of the SGPS method, providing practical insights into parameter selection for relatively small interpolation and quadrature mesh sizes. Finally, Section 8 concludes the paper with a summary of our key findings and a discussion of potential future research directions. Table 1 and the list of acronyms display the symbols and acronyms used in the paper and their meanings. A pseudocode for the SGPS method to solve Bagley–Torvik TPBVPs is provided in Appendix A. Appendix B supports the error analysis conducted in Section 4 by providing rigorous mathematical justifications for the asymptotic order of some key terms in the error bound.

2. The SGPS Method

This section introduces the SGPS method for approximating Caputo FDs. Readers interested in obtaining a deeper understanding of Gegenbauer and SG polynomials, as well as their associated quadratures, are encouraged to consult [23,24,25,26].
Let $\alpha \in \mathbb{R}^{+} \setminus \mathbb{Z}^{+}$, $m = \lceil \alpha \rceil$, $f \in H^{m,2}(\Omega_1)$, and $\hat{x}_{n,0:n}^{\lambda} = \hat{\mathbb{G}}_n^{\lambda}$ (the set of SGG nodes), and consider the following SGPS interpolant of f:
$I_n f(x) = \boldsymbol{f}_{0:n} \cdot \boldsymbol{L}_{0:n}^{\lambda}[x],$  (1)
where $L_k^{\lambda}(x)$ is the nth-degree Lagrange interpolating polynomial in modal form defined by
$L_k^{\lambda}(x) = \hat{\varpi}_k^{\lambda} \sum_{j=0}^{n} \frac{\hat{G}_j^{\lambda}\!\left(\hat{x}_{n,k}^{\lambda}\right) \hat{G}_j^{\lambda}(x)}{\hat{\lambda}_j^{\lambda}}, \quad k \in \mathbb{J}_n^{+};$  (2)
$\hat{\lambda}_{0:n}^{\lambda}$ and $\hat{\varpi}_{0:n}^{\lambda}$ are the normalization factors for SG polynomials and the Christoffel numbers associated with their quadratures, respectively,
$\hat{\lambda}_j^{\lambda} = \frac{\pi\, 2^{1-4\lambda}\, \Gamma(j+2\lambda)}{j!\, \Gamma^2(\lambda)\, (j+\lambda)}, \qquad \hat{\varpi}_k^{\lambda} = 1 \Big/ \sum_{j=0}^{n} \frac{\left[\hat{G}_j^{\lambda}\!\left(\hat{x}_{n,k}^{\lambda}\right)\right]^2}{\hat{\lambda}_j^{\lambda}},$
and $j, k \in \mathbb{J}_n^{+}$ (cf. Equations (2.6), (2.7), (2.10), and (2.12) in [23]). The matrix form of Equation (2) can be stated as
$\boldsymbol{L}_{0:n}^{\lambda}[x] = \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \left( \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[x\,\boldsymbol{1}_{n+1}\right] \odot \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[\hat{\boldsymbol{x}}_n^{\lambda}\right] \right) \hat{\boldsymbol{\lambda}}_{0:n}^{\lambda,\div}.$  (3)
Equation (1) allows us to approximate the Caputo FD of f:
${}^{c}D_x^{\alpha} f \approx {}^{c}D_x^{\alpha} I_n f = \boldsymbol{f}_{0:n} \cdot {}^{c}D_x^{\alpha} \boldsymbol{L}_{0:n}^{\lambda}.$  (4)
To accurately evaluate ${}^{c}D_x^{\alpha} \boldsymbol{L}_{0:n}^{\lambda}$, we apply the following m-dependent change of variables:
$\tau = x\left(1 - y^{\frac{1}{m-\alpha}}\right),$
which reduces ${}^{c}D_x^{\alpha} f$ to a scalar multiple of the integral of the mth-derivative of f on the fixed interval $\Omega_1$, denoted by $D_x^{\alpha,E} f$ and defined by
$D_x^{\alpha,E} f = \frac{x^{m-\alpha}}{\Gamma(m-\alpha+1)} \int_0^1 f^{(m)}\!\left(x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right) dy.$  (5)
It is easy here to show that the value of $x\left(1 - y^{\frac{1}{m-\alpha}}\right)$ will always lie in the range $\Omega_x = [0, x]$ for all $0 \le x, y \le 1$. Combining Equations (4) and (5) gives
${}^{c}D_x^{\alpha} f \approx \frac{x^{m-\alpha}}{\Gamma(m-\alpha+1)}\, \boldsymbol{f}_{0:n} \cdot \int_0^1 \boldsymbol{L}_{0:n}^{\lambda,m}\!\left[x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right] dy,$  (6)
where $L_j^{\lambda,m}$ denotes the mth-derivative of $L_j^{\lambda}$, for all $j \in \mathbb{J}_n^{+}$. Substituting Equation (3) into Equation (6) yields
${}^{c}D_x^{\alpha} f \approx \frac{x^{m-\alpha}}{\Gamma(m-\alpha+1)} \left(\hat{\boldsymbol{\lambda}}_{m:n}^{\lambda,\div}\right)^{\!\top} \left( \int_0^1 \hat{\boldsymbol{G}}_{m:n}^{\lambda,m}\!\left[x\left(1 - y^{\frac{1}{m-\alpha}}\right)\boldsymbol{1}_{n+1}\right] dy \odot \hat{\boldsymbol{G}}_{m:n}^{\lambda}\!\left[\hat{\boldsymbol{x}}_n^{\lambda}\right] \right) \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \boldsymbol{f}_{0:n},$  (7)
where $\hat{G}_j^{\lambda,m}$ denotes the mth-derivative of $\hat{G}_j^{\lambda}$, for all $j \in \mathbb{N}_{m:n}$.
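To make the role of the change of variables concrete, the following short Python sketch (illustrative only; the paper's own experiments are run in MATLAB and use SG quadratures) numerically verifies the transformed-integral representation (5) for the monomial $f(t) = t^5$ with $\alpha = 1.5$, comparing it against the well-known closed-form Caputo derivative $\Gamma(N+1)/\Gamma(N+1-\alpha)\, t^{N-\alpha}$. A Gauss–Legendre rule on $[0,1]$ is used here as a simple stand-in for the $(n_q,\lambda_q)$-SGPS quadrature developed below.

```python
import numpy as np
from math import gamma, ceil

def caputo_via_transform(f_m, alpha, x, nq=30):
    """Approximate the Caputo FD at x > 0 using the transformed integral
    D^alpha f(x) = x^(m-alpha)/Gamma(m-alpha+1) * int_0^1 f^(m)(x(1 - y^(1/(m-alpha)))) dy,
    where f_m is the m-th derivative of f and m = ceil(alpha)."""
    m = ceil(alpha)
    y, w = np.polynomial.legendre.leggauss(nq)   # nodes/weights on [-1, 1]
    y, w = 0.5 * (y + 1.0), 0.5 * w              # map to [0, 1]
    return x ** (m - alpha) / gamma(m - alpha + 1) * np.dot(
        w, f_m(x * (1.0 - y ** (1.0 / (m - alpha)))))

alpha, N, x = 1.5, 5, 0.7
f_m = lambda t: N * (N - 1) * t ** (N - 2)       # second derivative of t^5
exact = gamma(N + 1) / gamma(N + 1 - alpha) * x ** (N - alpha)
print(abs(caputo_via_transform(f_m, alpha, x) - exact))   # close to machine epsilon
```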
To efficiently evaluate Caputo FDs at a set of arbitrary points $\boldsymbol{z}_{0:M} \subset \Omega_1$, for some $M \in \mathbb{Z}_0^{+}$, Formula (7) can be applied iteratively within a loop. While a direct implementation that loops over the elements of $\boldsymbol{z}_M$ is possible, employing matrix operations is highly recommended for substantial performance gains. To this end, notice first that Equation (3) can be rewritten at $\boldsymbol{z}_M$ as
$\boldsymbol{L}_{0:n}^{\lambda}[\boldsymbol{z}_M] = \operatorname{resh}_{n+1,M+1}\!\left[ \left(\hat{\boldsymbol{\lambda}}_{0:n}^{\lambda,\div}\right)^{\!\top} \left( \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[\boldsymbol{z}_M \otimes \boldsymbol{1}_{n+1}\right] \odot \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[\boldsymbol{1}_{M+1} \otimes \hat{\boldsymbol{x}}_n^{\lambda}\right] \right) \left( \boldsymbol{I}_{M+1} \otimes \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \right) \right].$  (8)
Equation (8), together with (6), yields
${}^{c}D_{\boldsymbol{z}_M}^{\alpha} f \approx \frac{1}{\Gamma(m-\alpha+1)}\; \boldsymbol{z}_M^{(m-\alpha)} \odot \left( \hat{\boldsymbol{Q}}_n^{\alpha,E}\, \boldsymbol{f}_{0:n} \right),$  (9)
where
$\hat{\boldsymbol{Q}}_n^{\alpha,E} = \operatorname{resh}_{n+1,M+1}\!\left[ \left(\hat{\boldsymbol{\lambda}}_{m:n}^{\lambda,\div}\right)^{\!\top} \left( \int_0^1 \hat{\boldsymbol{G}}_{m:n}^{\lambda,m}\!\left[\left(\boldsymbol{z}_M\left(1 - y^{\frac{1}{m-\alpha}}\right)\right) \otimes \boldsymbol{1}_{n+1}\right] dy \odot \hat{\boldsymbol{G}}_{m:n}^{\lambda}\!\left[\boldsymbol{1}_{M+1} \otimes \hat{\boldsymbol{x}}_n^{\lambda}\right] \right) \left( \boldsymbol{I}_{M+1} \otimes \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \right) \right].$
With simple algebraic manipulation, we can further show that Equation (9) can be rewritten as
${}^{c}D_{\boldsymbol{z}_M}^{\alpha} f \approx \boldsymbol{Q}_n^{\alpha,E}\, \boldsymbol{f}_{0:n},$  (10)
where
$\boldsymbol{Q}_n^{\alpha,E} = \frac{1}{\Gamma(m-\alpha+1)} \operatorname{diag}\!\left(\boldsymbol{z}_M^{(m-\alpha)}\right) \hat{\boldsymbol{Q}}_n^{\alpha,E}.$  (11)
We refer to the $(M+1) \times (n+1)$ matrix $\boldsymbol{Q}_n^{\alpha,E}$ as "the αth-order FSGIM," which approximates the Caputo FD at the points $\boldsymbol{z}_{0:M}$ using an nth-degree SG interpolant. We also refer to $\hat{\boldsymbol{Q}}_n^{\alpha,E}$ as the "αth-order FSGIM Generator" for an obvious reason. Although the implementation of Formula (10) is straightforward, Formula (9) is slightly more stable numerically, with fewer arithmetic operations, particularly because it avoids constructing a diagonal matrix and directly applies elementwise multiplication after the matrix–vector product. Note that for $M = 0$, Formulas (9) and (10) reduce to (7).
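The precompute-and-reuse pattern behind the FSGIM can be illustrated with the following simplified Python sketch. It is not the paper's FSGIM: Chebyshev–Lobatto-type nodes and a Gauss–Legendre rule stand in for the SGG nodes and SG quadrature, and the Lagrange basis is assembled in monomial rather than barycentric/modal form. The structure, however, is the same: a constant matrix, assembled once for a given node set and fractional order, maps the samples $\boldsymbol{f}_{0:n}$ to approximate Caputo FD values at arbitrary evaluation points, as in Formula (10).

```python
import numpy as np
from math import gamma, ceil

def caputo_matrix(nodes, z, alpha, nq=40):
    """Q[i, j] ~ Caputo FD of the j-th Lagrange basis polynomial (built on `nodes`)
    evaluated at z[i], computed via the transformed integral with Gauss-Legendre."""
    m, n = ceil(alpha), len(nodes) - 1
    y, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (y + 1.0), 0.5 * w
    Q = np.zeros((len(z), n + 1))
    for j in range(n + 1):
        p = np.poly1d([1.0])                      # j-th Lagrange basis in monomial form
        for k in range(n + 1):
            if k != j:
                p *= np.poly1d([1.0, -nodes[k]]) * (1.0 / (nodes[j] - nodes[k]))
        dm = p.deriv(m)                           # its m-th derivative
        for i, x in enumerate(z):
            Q[i, j] = x ** (m - alpha) / gamma(m - alpha + 1) * np.dot(
                w, dm(x * (1.0 - y ** (1.0 / (m - alpha)))))
    return Q

n, alpha = 6, 0.5
nodes = 0.5 * (1.0 - np.cos(np.pi * np.arange(n + 1) / n))    # stand-in grid on [0, 1]
z = np.linspace(0.1, 1.0, 10)
Q = caputo_matrix(nodes, z, alpha)                             # precompute once, reuse
exact = gamma(4) / gamma(4 - alpha) * z ** (3 - alpha)         # Caputo FD of x^3
print(np.max(np.abs(Q @ nodes ** 3 - exact)))                  # near machine precision
```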
It remains now to show how to compute
$\int_0^1 \hat{G}_j^{\lambda,m}\!\left(x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right) dy, \quad \forall j \in \mathbb{N}_{m:n},\ x \in \Omega_1,$
effectively. Notice first that although the integrand is defined in terms of a polynomial in x, the integrand itself is not a polynomial in y, since 1 / ( m α ) is not an integer for α R + Z + . Therefore, when trying to evaluate the integral symbolically, the process can be very challenging and slow. Numerical integration, on the other hand, is often more practical for such integrals because it can achieve any specified accuracy by evaluating the integrand at discrete points without requiring closed-form antiderivatives or algebraic complications. Our reliable tool for this task is the SGIM; cf. [23,25] and the references therein. The SGIM utilizes the barycentric representation of shifted Lagrange interpolating polynomials and their associated barycentric weights to approximate definite integrals effectively through matrix–vector multiplications. The SGPS quadratures constructed by these operations extend the classical Gegenbauer quadrature methods and can improve their performance in terms of convergence speed and numerical stability. An efficient way to construct the SGIM is to premultiply the corresponding GIM by half, rather than shifting the quadrature nodes, weights, and Lagrange polynomials to the target domain Ω 1 , as shown earlier in [23]. In the current work, we only need the GIRV, P , which extends the applicability of the barycentric GIM to include the boundary point 1 (cf. [24] Algorithm 6 or 7). The associated SGIRV, P ^ , can be directly generated through the formula
$\hat{\boldsymbol{P}} = \tfrac{1}{2}\, \boldsymbol{P}.$
Given that the construction of $\hat{\boldsymbol{P}}$ is independent of the SGPS interpolant (1), we can define $\hat{\boldsymbol{P}}$ using any set of SGG quadrature nodes $\hat{\mathbb{G}}_{n_q}^{\lambda_q}$, for some $n_q \in \mathbb{Z}_0^{+}$ and $\lambda_q > -1/2$. This flexibility enables us to improve the accuracy of the required integrals without being constrained by the resolution of the interpolation grid. With this strategy, the SGIRV provides a convenient way to approximate the required integral through the following matrix–vector multiplication:
$\int_0^1 \hat{G}_j^{\lambda,m}\!\left(x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right) dy \approx \hat{\boldsymbol{P}}\, \hat{G}_j^{\lambda,m}\!\left(x\left(1 - \left(\hat{\boldsymbol{x}}_{n_q}^{\lambda_q}\right)^{\frac{1}{m-\alpha}}\right)\right),$  (12)
for all $j \in \mathbb{N}_{m:n}$ and $x \in \Omega_1$. We refer to a quadrature of the form (12) as the $(n_q, \lambda_q)$-SGPS quadrature. A remarkable property of Gegenbauer polynomials (and their shifted counterparts) is that their derivatives are essentially other Gegenbauer polynomials, albeit with different degrees and parameters, as shown by the following theorem.
Theorem 1.
The mth-derivatives of the nth-degree, λ-indexed Gegenbauer and SG polynomials are given by
$G_n^{\lambda,m}(x) = \chi_{n,m}^{\lambda}\, G_{n-m}^{\lambda+m}(x),$  (13a)
$\hat{G}_n^{\lambda,m}(\hat{x}) = \hat{\chi}_{n,m}^{\lambda}\, \hat{G}_{n-m}^{\lambda+m}(\hat{x}),$  (13b)
where
$\chi_{n,m}^{\lambda} = \frac{2^m\, n!\, \Gamma(2\lambda)\, (\lambda)_m\, \Gamma(m+n+2\lambda)}{(n-m)!\, \Gamma(2(m+\lambda))\, \Gamma(n+2\lambda)}, \qquad \hat{\chi}_{n,m}^{\lambda} = 2^m \chi_{n,m}^{\lambda} = \frac{n!\, \Gamma\!\left(\lambda+\tfrac{1}{2}\right) \Gamma(n+m+2\lambda)}{(n-m)!\, \Gamma(n+2\lambda)\, \Gamma\!\left(m+\lambda+\tfrac{1}{2}\right)},$  (14)
for all $n \ge m$, $x \in [-1, 1]$, and $\hat{x} \in \Omega_1$.
Proof. 
Let $C_n^{\lambda}(x)$ be the nth-degree, λ-indexed Gegenbauer polynomial standardized by Szegő [27]. We shall first prove that
$C_n^{\lambda,m}(x) = 2^m (\lambda)_m\, C_{n-m}^{\lambda+m}(x), \quad n \ge m,$  (15)
where $C_j^{\lambda,m}$ denotes the mth-derivative of $C_j^{\lambda}$, for all $j \in \mathbb{N}_{m:n}$. To this end, we shall use the well-known derivative formula of this polynomial given by the following recurrence relation:
$C_n^{\lambda,1}(x) = 2\lambda\, C_{n-1}^{\lambda+1}(x), \quad n \ge 1.$
We will prove Equation (15) through mathematical induction on m. The base case $m = 1$ holds true due to the given recurrence relation for the first derivative. Assume now that Equation (15) holds true for $m = k$, where k is an arbitrary integer such that $1 < k \le n-1$. That is,
$C_n^{\lambda,k}(x) = 2^k (\lambda)_k\, C_{n-k}^{\lambda+k}(x).$
We need to show that it also holds true for $m = k+1$. Differentiating both sides of the induction hypothesis with respect to x gives
$C_n^{\lambda,k+1}(x) = \frac{d}{dx} C_n^{\lambda,k}(x) = \frac{d}{dx}\left[ 2^k (\lambda)_k\, C_{n-k}^{\lambda+k}(x) \right] = 2^k (\lambda)_k\, \frac{d}{dx} C_{n-k}^{\lambda+k}(x) = 2^k (\lambda)_k \cdot 2(\lambda+k)\, C_{n-k-1}^{\lambda+k+1}(x) = 2^{k+1} (\lambda)_{k+1}\, C_{n-k-1}^{\lambda+k+1}(x).$
This shows that if the formula holds for $m = k$, it also holds for $m = k+1$. Through mathematical induction, Equation (15) holds true for all integers $m: 0 \le m \le n$. Formula ([28] (A.5)) and the fact that
$C_n^{\lambda}(1) = \frac{\Gamma(n+2\lambda)}{\Gamma(n+1)\, \Gamma(2\lambda)},$
immediately show that
$G_n^{\lambda,m}(x) = 2^m (\lambda)_m\, \frac{C_{n-m}^{\lambda+m}(1)}{C_n^{\lambda}(1)}\, G_{n-m}^{\lambda+m}(x), \quad n \ge m,$
from which Equation (13a) is derived. Formula (13b) follows from (13a) through successive application of the Chain Rule.    □
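The classical identity (15) underpinning the theorem is easy to check numerically. The following small Python/SciPy snippet (not part of the paper) compares the mth derivative of $C_n^{\lambda}$ with the right-hand side of (15) on a sample grid.

```python
import numpy as np
from scipy.special import gegenbauer, poch

def identity_holds(n, lam, m, tol=1e-9):
    """Check C_n^{lam,(m)}(x) = 2^m (lam)_m C_{n-m}^{lam+m}(x) on [-1, 1]."""
    x = np.linspace(-1.0, 1.0, 101)
    lhs = gegenbauer(n, lam).deriv(m)(x)                 # m-th derivative of C_n^lam
    rhs = 2.0 ** m * poch(lam, m) * gegenbauer(n - m, lam + m)(x)
    return np.max(np.abs(lhs - rhs)) < tol * np.max(np.abs(rhs))

print(all(identity_holds(n, lam, m)
          for n in range(2, 9) for m in (1, 2) for lam in (0.3, 0.5, 1.0, 1.7)))  # True
```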
Equations (12) and (13b) bring to light the sought formula
$\int_0^1 \hat{G}_j^{\lambda,m}\!\left(x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right) dy \approx \hat{\chi}_{j,m}^{\lambda}\, \hat{\boldsymbol{P}}\, \hat{G}_{j-m}^{\lambda+m}\!\left(x\left(1 - \left(\hat{\boldsymbol{x}}_{n_q}^{\lambda_q}\right)^{\frac{1}{m-\alpha}}\right)\right),$  (16)
for all $j \in \mathbb{N}_{m:n}$ and $x \in \Omega_1$. Figure 1 illustrates the key polynomial transformations in the SGPS method, where lower-degree SG polynomials serve as scaled transformations of the derivative terms. We denote the approximate αth-order Caputo FD of a function at a point x, computed using Equation (16) in conjunction with either Equation (9) or (10), by $D_{x,\,n_q,\lambda_q}^{\alpha,\,n,\lambda,E}$. It is interesting to notice here that the quadrature nodes involved in the computation of the necessary integrals (16), which are required for the construction of the FSGIM $D_{x,\,n_q,\lambda_q}^{\alpha,\,n,\lambda,E}$, are independent of the SGG points associated with the SGPS interpolant (1); therefore, any set of SGG quadrature nodes can be used. This flexibility allows for improving the accuracy of the required integrals without being constrained by the resolution of the interpolation grid.
Figure 2 illustrates the logarithmic absolute errors of Caputo FD approximations for $f_1(t) = t^N$. These approximations utilize SG interpolants of varying parameters but consistent degrees, in conjunction with a $(15, 0.5)$-SGPS quadrature. The exact Caputo FD of $f_1$ is given below:
${}^{c}D_t^{\alpha} f_1 = \begin{cases} \dfrac{N!}{\Gamma(N+1-\alpha)}\, t^{N-\alpha}, & N > \lceil \alpha \rceil - 1, \\ 0, & N \le \lceil \alpha \rceil - 1, \end{cases} \qquad N \in \mathbb{Z}_0^{+},\ \alpha \in \mathbb{R}^{+}.$
In all plots of Figure 2, the rapid convergence of the PS approximations is evident. Given that the SG interpolants share the same polynomial degree as the power function, and since $f_1^{(n+1)} \equiv 0$, the interpolation error vanishes, as we demonstrate later with Theorem 2 in Section 4. Consequently, the quadrature error becomes the dominant component. Theorem 4 in Section 4 further indicates that the quadrature error vanishes for $n < n_q + m + 1$, which elucidates the high accuracy achieved by the SGPS method in all four plots when n is sufficiently less than $n_q + m + 1$ in many cases, leading to a near-machine-epsilon level of the total error. While the error analysis in Section 4 predicts the collapse of the total error when $n \le n_q + m$ under exact arithmetic, the limitations of finite-digit arithmetic often prevent this, frequently necessitating an increase in $n_q$ by one unit or more, especially when varying $\lambda_q$, for effective total error collapse. In Subplot 1, with $n_q = 4$, an nth-degree SG interpolant sufficiently approximates the Caputo FD of the power function $t^n$ to within machine precision for $2 \le n \le 5$. The error curves exhibit plateaus in this range, with slight fluctuations for specific λ values, attributed to accumulated round-off errors as the approximation approaches machine precision. For $6 \le n \le 10$, the total error becomes predominantly the quadrature error and remains relatively stable around $10^{-4}$. Notably, the error profiles remain consistent for $6 \le n \le 10$ despite variations in λ. Altering $\lambda_q$ while keeping λ constant can significantly impact the error, as shown in the upper right plot. Specifically, the error generally decreases with decreasing $\lambda_q$ values, with the exception of $\lambda_q = 0.5$, where the error reaches its minimum. The lower left plot demonstrates the exponential decay of the error with increasing values of $n_q$, with the error decreasing by approximately two orders of magnitude for every two-unit increase in $n_q$. The lower right plot presents a comparison between the SGPS method and MATLAB's "integral" function, employing the tolerance parameters RelTol = AbsTol = $10^{-15}$. The SGPS method achieves near-machine-precision accuracy with the parameter values $\lambda = \lambda_q = 0.5$ and $n_q = 12$, outperforming MATLAB's integral function by nearly two orders of magnitude in certain cases. The method achieves near-machine-epsilon precision with relatively coarse grids, demonstrating notable stability through consistent error trends.
Figure 3 further shows the logarithmic absolute errors of the Caputo FD approximations of the function $f_2(t) = e^{\beta t}$, $\beta \in \mathbb{R}^{+}$, using SG interpolants of various parameters and a $(15, 0.5)$-SGPS quadrature. The exact Caputo FD of $f_2$ is given below:
${}^{c}D_t^{\alpha} f_2 = \sum_{k=0}^{\infty} \frac{\beta^{k+m}\, t^{k+m-\alpha}}{\Gamma(k+m-\alpha+1)} = \beta^{m}\, t^{m-\alpha}\, E_{1,\,m-\alpha+1}(\beta t).$
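Both representations above can be cross-checked against the transformed-integral form (5). The hedged Python sketch below (a synthetic check, not from the paper) sums the truncated series and compares it with a Gauss–Legendre evaluation of the transformed integral, using $f_2^{(m)}(s) = \beta^m e^{\beta s}$.

```python
import numpy as np
from math import gamma, ceil

def caputo_exp_series(beta, alpha, t, K=60):
    """Truncated series sum_{k=0}^{K} beta^(k+m) t^(k+m-alpha) / Gamma(k+m-alpha+1)."""
    m = ceil(alpha)
    return sum(beta ** (k + m) * t ** (k + m - alpha) / gamma(k + m - alpha + 1)
               for k in range(K + 1))

def caputo_exp_quad(beta, alpha, t, nq=60):
    """Same quantity via the transformed integral (5) with f^(m)(s) = beta^m exp(beta*s)."""
    m = ceil(alpha)
    y, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (y + 1.0), 0.5 * w
    s = t * (1.0 - y ** (1.0 / (m - alpha)))
    return t ** (m - alpha) / gamma(m - alpha + 1) * np.dot(w, beta ** m * np.exp(beta * s))

beta, alpha, t = 2.0, 1.5, 0.8
print(abs(caputo_exp_series(beta, alpha, t) - caputo_exp_quad(beta, alpha, t)))  # ~1e-14
```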
The figure illustrates the rapid convergence of the proposed PS approximations. Specifically, across the parameter range $\lambda \in \{-0.2, -0.1, 0, 0.5, 1, 2\}$, the logarithmic absolute errors exhibit a consistent decrease as the degree of the Gegenbauer interpolant increases. This trend underscores the improved accuracy of higher-degree interpolants in approximating the Caputo FD up to a defined precision threshold. For lower degrees (n), the error reduction is more pronounced as λ decreases, indicating that other members of the SG polynomial family, associated with negative λ values, exhibit superior convergence rates in these cases. For higher degrees (n), the errors converge to a stable accuracy level irrespective of the λ value, highlighting the robustness of higher-degree interpolants in accurately approximating the Caputo FD. The near-linear error profiles observed in the plots confirm the exponential convergence of the PS approximations, with convergence rates modulated by the parameter selections, as detailed in Section 4.

3. Computational Complexity

In this section, we provide a computational complexity analysis of constructing $\boldsymbol{Q}_n^{\alpha,E}$, incorporating the quadrature approximation (16). The analysis is based on the key matrix operations involved in the construction process, which we analyze individually as follows: Observe from Equation (11) that the term $\boldsymbol{z}_M^{(m-\alpha)}$ involves raising each element of an $(M+1)$-dimensional vector to the power $(m-\alpha)$, which requires $\mathcal{O}(M)$ operations. Constructing $\boldsymbol{Q}_n^{\alpha,E}$ from $\hat{\boldsymbol{Q}}_n^{\alpha,E}$ involves diagonal scaling by $\operatorname{diag}\!\left(\boldsymbol{z}_M^{(m-\alpha)}\right)$, which requires another $\mathcal{O}(Mn)$ operations. The matrix $\hat{\boldsymbol{Q}}_n^{\alpha,E}$ is constructed using several matrix multiplications and elementwise operations. For each entry of $\boldsymbol{z}_M$, the dominant steps include the following:
  • The computation of $\hat{\boldsymbol{G}}_{m:n}^{\lambda}$. Using the three-term recurrence relation
    $(n + 2\lambda)\, \hat{G}_{n+1}^{(\lambda)}(\hat{x}) = 2(n+\lambda)(2\hat{x}-1)\, \hat{G}_n^{(\lambda)}(\hat{x}) - n\, \hat{G}_{n-1}^{(\lambda)}(\hat{x}),$
    for $n \in \mathbb{Z}^{+}$, starting with $\hat{G}_0^{(\lambda)}(\hat{x}) = 1$ and $\hat{G}_1^{(\lambda)}(\hat{x}) = 2\hat{x} - 1$, we find that each polynomial evaluation requires $\mathcal{O}(1)$ operations per point, as the number of operations remains constant regardless of the value of n. Since the polynomial evaluation is required for polynomials up to degree n, this requires $\mathcal{O}(n)$ operations per point. The computation of $\hat{\boldsymbol{G}}_{m:n}^{\lambda}\!\left[\hat{\boldsymbol{x}}_n^{\lambda}\right]$ therefore requires $\mathcal{O}(n^2)$ operations. (A minimal code sketch of this recurrence evaluation is given after this complexity summary.)
  • The quadrature (16) involves evaluating a polynomial at transformed nodes. The cost of calculating $\hat{\chi}_{j,m}^{\lambda}$ depends on the chosen methods for computing factorials and the Gamma function; it can be considered a constant overhead for each evaluation of Equation (14). The computation of $\left(\hat{\boldsymbol{x}}_{n_q}^{\lambda_q}\right)^{\frac{1}{m-\alpha}}$ involves raising each element of the column vector $\hat{\boldsymbol{x}}_{n_q}^{\lambda_q}$ to the power $1/(m-\alpha)$. The cost here is linear in $(n_q+1)$, as each element requires a single exponentiation operation. Since we need to evaluate the polynomial at $n_q+1$ points, the total cost for this step is $\mathcal{O}(n_q)$. The cost of the matrix–vector multiplication is also linear in $n_q+1$. Therefore, the computational cost of this step is $\mathcal{O}(n_q)$ for each $j \in \mathbb{N}_{m:n}$. The overall cost, if we consider all polynomial functions involved in this step, is thus $\mathcal{O}(n\, n_q)$.
  • The Hadamard product introduces another $\mathcal{O}(n^2)$ operations.
  • The evaluation of $\hat{\boldsymbol{\lambda}}_{m:n}^{\lambda,\div}$ requires $\mathcal{O}(n)$ operations, and the product of $\left(\hat{\boldsymbol{\lambda}}_{m:n}^{\lambda,\div}\right)^{\!\top}$ with the result of the Hadamard product requires $\mathcal{O}(n^2)$ operations.
  • The final diagonal scaling $\operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right)$ contributes $\mathcal{O}(n)$.
Summing the dominant terms, the overall computational complexity of constructing $\boldsymbol{Q}_n^{\alpha,E}$ is $\mathcal{O}(n(n+n_q))$ per entry of $\boldsymbol{z}_M$. We therefore expect the total number of operations required to construct the matrix $\boldsymbol{Q}_n^{\alpha,E}$ for all entries of $\boldsymbol{z}_M$ to be $\mathcal{O}(Mn(n+n_q))$.
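As referenced in the first bullet above, the three-term recurrence can be implemented in a few lines. The following Python sketch (illustrative only; it assumes the SG polynomials are normalized so that $\hat{G}_k^{(\lambda)}(1) = 1$, consistent with the starting values quoted above) evaluates $\hat{G}_0^{(\lambda)}, \ldots, \hat{G}_n^{(\lambda)}$ at a batch of points in $\mathcal{O}(n)$ operations per point and checks the result against SciPy's classical Gegenbauer polynomials.

```python
import numpy as np
from scipy.special import gegenbauer

def shifted_gegenbauer_all(n, lam, xhat):
    """Evaluate G^_0, ..., G^_n at points `xhat` in [0, 1] via the recurrence
    (k + 2*lam) G^_{k+1} = 2 (k + lam) (2*xhat - 1) G^_k - k G^_{k-1}."""
    xhat = np.asarray(xhat, dtype=float)
    G = np.empty((n + 1, xhat.size))
    G[0] = 1.0
    if n >= 1:
        G[1] = 2.0 * xhat - 1.0
    for k in range(1, n):
        G[k + 1] = (2.0 * (k + lam) * (2.0 * xhat - 1.0) * G[k] - k * G[k - 1]) / (k + 2.0 * lam)
    return G

lam, n = 0.7, 6
x = np.linspace(0.0, 1.0, 5)
# reference: G^_k(xhat) = C_k^lam(2*xhat - 1) / C_k^lam(1)
ref = np.array([gegenbauer(k, lam)(2 * x - 1) / gegenbauer(k, lam)(1) for k in range(n + 1)])
print(np.max(np.abs(shifted_gegenbauer_all(n, lam, x) - ref)))   # ~1e-15
```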
Remark 1.
The construction runtime of the FSGIM matrix $\boldsymbol{Q}_n^{\alpha,E}$ (of size $(M+1) \times (n+1)$) used by the SGPS method scales as $\mathcal{O}(Mn(n+n_q))$, where n is the interpolant degree, M is the number of evaluation points, and $n_q$ is the highest degree of the Gegenbauer polynomial used to construct the quadrature rule. For large n and M, the FSGIM requires $\mathcal{O}(Mn)$ storage. While this remains manageable in double-precision arithmetic, precomputation of the FSGIM offsets runtime costs, making the method practical for moderate-scale problems. For sufficiently smooth solutions, the chosen quadrature parameter $n_q$ can often be smaller than n without sacrificing accuracy, as the integrands are well approximated by low-degree polynomials. This reduces the dominant $\mathcal{O}(M n n_q)$ term in the runtime and further improves the efficiency.

4. Error Analysis

The following theorem defines, in closed form, the truncation error of the αth-order SGPS quadrature (10) associated with the αth-order FSGIM $\boldsymbol{Q}_n^{\alpha,E}$.
Theorem 2.
Let $n \ge m \ge 1$, and suppose that $f \in C^{n+1}(\Omega_1)$ is approximated by the SGPS interpolant (1). Also, assume that the integrals
$\int_0^1 \hat{G}_{m:n}^{\lambda,m}\!\left(x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right) dy$  (17)
are computed exactly for all $x \in \Omega_1$. Then, there exists $\xi = \xi(x) \in \Omega_1$ such that the truncation error, $T_n^{\lambda,\alpha}(x,\xi)$, in the Caputo FD approximation (7) is given by
$T_n^{\lambda,\alpha}(x,\xi) = \eta_n^{\lambda,\alpha}\, f^{(n+1)}(\xi) \int_0^x \frac{\hat{G}_{n+1-m}^{\lambda+m}(\tau)}{(x-\tau)^{\alpha+1-m}}\, d\tau,$  (18)
where
$\eta_n^{\lambda,\alpha} = \frac{\sqrt{\pi}\; 2^{-2\lambda-2n-1}\, \Gamma(m+n+2\lambda+1)}{(n-m+1)!\, \Gamma(m-\alpha)\, \Gamma\!\left(m+\lambda+\tfrac{1}{2}\right) \Gamma(n+\lambda+1)}.$
Proof. 
The Lagrange interpolation error associated with the SGPS interpolation (1) is given below:
$f(x) = I_n f(x) + \frac{f^{(n+1)}(\xi)}{(n+1)!\, \hat{K}_{n+1}^{\lambda}}\, \hat{G}_{n+1}^{\lambda}(x),$
where $\hat{K}_n^{\lambda}$ is the leading coefficient of the nth-degree, λ-indexed SG polynomial (cf. Equation (4.12) in [23]). Applying the Caputo FD to both sides of the equation gives the truncation error associated with Formula (7) in the following form:
$T_n^{\lambda,\alpha}(x,\xi) = \frac{f^{(n+1)}(\xi)}{(n+1)!\, \hat{K}_{n+1}^{\lambda}}\, {}^{c}D_x^{\alpha} \hat{G}_{n+1}^{(\lambda)} = \frac{f^{(n+1)}(\xi)}{(n+1)!\, \hat{K}_{n+1}^{\lambda}\, \Gamma(m-\alpha)} \int_0^x \frac{\hat{G}_{n+1}^{(\lambda,m)}(\tau)}{(x-\tau)^{\alpha+1-m}}\, d\tau.$  (19)
The proof is established by substituting Formula (13b) into (19).    □
For the theoretical truncation error in Equation (18), we assume that the integrals in Equation (17) are evaluated exactly. In practice, however, these integrals are approximated using SGPS quadratures, with the corresponding quadrature errors analyzed in Theorems 4 and 5, as discussed later in this section.
The following theorem marks the truncation error bound associated with Theorem 2.
Theorem 3.
Suppose that the assumptions of Theorem 2 hold true. Then, the truncation error $T_n^{\lambda,\alpha}(x,\xi)$ is asymptotically bounded above by
$\left| T_n^{\lambda,\alpha}(x,\xi) \right| \lesssim A_{n+1}\, \hat{\vartheta}_{m,\lambda}\, 2^{-2\lambda-2n}\, n^{\lambda+m}, \quad \text{for relatively large } n,$  (20)
where $A_n = \left\| f^{(n)} \right\|_{L^{\infty}(\Omega_1)}$ and
$\hat{\vartheta}_{m,\lambda} = \frac{1}{e}\left(\lambda + m - \tfrac{1}{2}\right)^{\lambda - m} \left[\left(\lambda + m - \tfrac{1}{2}\right) \sinh\!\left(\frac{1}{\lambda + m - \tfrac{1}{2}}\right)\right]^{\frac{1}{4}(2\lambda - 2m + 1)}.$
Proof. 
Since $\lambda + m > 3/2 > 0$, Equation (4.29a) in [23] shows that $\left\| \hat{G}_{n+1-m}^{\lambda+m} \right\|_{L^{\infty}(\Omega_1)} = 1$. Thus,
$\int_0^x \frac{\hat{G}_{n+1-m}^{\lambda+m}(\tau)}{(x-\tau)^{\alpha+1-m}}\, d\tau \le \int_0^x (x-\tau)^{m-\alpha-1}\, d\tau = \frac{x^{m-\alpha}}{m-\alpha} \le \frac{1}{m-\alpha},$  (21)
according to the Mean Value Theorem for Integrals. Notice also that $\Gamma(z) > 1/z$ for all $z \in \Omega_1$. Combining this elementary inequality with the sharp inequalities of the Gamma function ([25], Inequality (96)) implies that
η n λ α < 1 e m α λ + m 1 2 λ m 2 2 λ 2 n 3 2 λ + n λ n 1 2 ×
λ + m 1 2 sinh 1 λ + m 1 2 1 4 2 λ 2 m + 1 2 λ + m + n 2 λ + m + n + 1 2 ×
1 1620 2 λ + m + n 5 + 1 λ + n sinh 1 λ + n 1 2 λ n ×
2 λ + m + n sinh 1 2 λ + m + n λ + m + n 2 ϑ α , λ 2 2 λ 2 n 3 2 n λ + m r l n ,
where ϑ α , λ = ( m α ) ϑ ^ m , λ . The required asymptotic Formula (20) is derived by combining the asymptotic Formula (22) with In Equation (21).    □
Since the dominant term in the asymptotic bound (20) is $2^{-2\lambda-2n}$, the truncation error exhibits exponential decay as $n \to \infty$. Notice also that increasing α while keeping λ fixed and keeping n sufficiently large leads to an increase in m, which, in turn, affects two factors: (i) the polynomial term $n^{\lambda+m}$ grows, which slightly slows convergence, and (ii) the prefactor $\hat{\vartheta}_{m,\lambda}$ decreases exponentially in m, reducing the error; cf. Figure 4. Despite the polynomial growth of the former factor, the exponential decay term $2^{-2n}$ dominates. Now, let us consider the effect of changing λ while keeping α fixed and n large enough. If we increase λ gradually, the term $2^{-2\lambda}$ will exhibit exponential decay, and the prefactor $\hat{\vartheta}_{m,\lambda}$ will also decrease exponentially, further reducing the error. The polynomial term $n^{\lambda+m}$, on the other hand, will increase, slightly increasing the error. Although the polynomial term $n^{\lambda+m}$ grows and slightly increases the error, the dominant exponential decay effects from both $2^{-2\lambda}$ and the prefactor $\hat{\vartheta}_{m,\lambda}$ ensure that the truncation error decreases significantly as λ increases. Hence, increasing λ leads to faster decay of the truncation error. This analysis shows that, for relatively large n, increasing α slightly increases the error bound due to polynomial growth but does not affect exponential convergence. Furthermore, increasing λ generally improves convergence, since the exponential decay dominates the polynomial growth. In fact, one can see this last remark from two other viewpoints:
(i)
For relatively large n, $\operatorname{supp} \hat{G}_{n+1-m}^{\lambda+m} \to \{0, 1\}$ for relatively large λ, and the truncation error $T_n^{\lambda,\alpha} \to 0$ accordingly.
(ii)
For all $\lambda \in \mathbb{R}^{+}$, $\operatorname{supp} \hat{G}_j^{\lambda,m} \to \{0, 1\}$ as $j/m \to \infty$. Consequently, the integrals (17) collapse for $\hat{G}_k^{\lambda,m}$ with $m < k \le n$ and $k \gg m$, indicating faster convergence rates in the Caputo FD approximation (7).
In all cases, choosing a sufficiently large n ensures overall exponential convergence. It is important to note that these observations are based on the asymptotic behavior of the error upper bound as n , assuming the SGPS quadrature is computed exactly.
Beyond the convergence considerations mentioned above, we highlight two important numerical stability issues related to this analysis:
(i)
A small buffer parameter ε is often introduced to offset the instability of the SG interpolation near $\lambda = -1/2$, where SG polynomials grow rapidly for increasing orders [23].
(ii)
As λ increases, the SGG nodes x ^ n , 0 : n λ cluster more toward the center of the interval. This means that the SGPS interpolation rule (1) relies more on extrapolation than interpolation, making it more sensitive to perturbations in the function values and amplifying numerical errors. This consideration reveals that, although increasing λ theoretically improves the convergence rate, it can introduce numerical instability due to increased extrapolation effects. Therefore, when selecting λ , one must balance convergence speed against numerical stability considerations to ensure accurate interpolation computations. This aligns well with the widely accepted understanding that, for sufficiently smooth functions and sufficiently large spectral expansion terms, the truncated expansion in the SC quadrature (corresponding to λ = 0 ) is optimal in the L -norm for definite integral approximations; cf. [28] and the references therein.
In the following, we study the truncation error of the quadrature Formula (16) and how its outcomes add up to the above analysis.
Theorem 4.
Let $j \in \mathbb{N}_{m:n}$, $x \in \Omega_1$, and assume that $\hat{G}_{j-m}^{\lambda+m}\!\left(x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right)$ is interpolated by the SG polynomials with respect to the variable y at the SGG nodes $\hat{x}_{n_q,0:n_q}^{\lambda_q}$. Then, there exists $\eta = \eta(y) \in \Omega_1$ such that the truncation error, $T_{j,n_q}^{\lambda_q}(\eta)$, in the quadrature approximation (16) is given by
$T_{j,n_q}^{\lambda_q}(\eta) = \frac{(-1)^{n_q+1}\, \hat{\chi}_{j-m,\,n_q+1}^{\lambda+m}}{(n_q+1)!\, \hat{K}_{n_q+1}^{\lambda_q}} \left(\frac{x}{m-\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-m+\alpha)}{m-\alpha}}\; \hat{G}_{j-m-n_q-1}^{\lambda+m+n_q+1}\!\left(x\left(1 - \eta^{\frac{1}{m-\alpha}}\right)\right) \int_0^1 \hat{G}_{n_q+1}^{\lambda_q}(y)\, dy \;\cdot\; \mathbb{1}_{\,j \ge m+n_q+1}.$  (23)
Proof. 
Theorem 4.1 in [23] immediately shows that
$T_{j,n_q}^{\lambda_q}(\eta) = \frac{1}{(n_q+1)!\, \hat{K}_{n_q+1}^{\lambda_q}} \left.\frac{\partial^{n_q+1}}{\partial y^{n_q+1}} \hat{G}_{j-m}^{\lambda+m}\!\left(x\left(1 - y^{\frac{1}{m-\alpha}}\right)\right)\right|_{y=\eta} \int_0^1 \hat{G}_{n_q+1}^{\lambda_q}(y)\, dy = \frac{(-1)^{n_q+1}}{(n_q+1)!\, \hat{K}_{n_q+1}^{\lambda_q}} \left(\frac{x}{m-\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-m+\alpha)}{m-\alpha}}\, \hat{G}_{j-m}^{\lambda+m,\,n_q+1}\!\left(x\left(1 - \eta^{\frac{1}{m-\alpha}}\right)\right) \int_0^1 \hat{G}_{n_q+1}^{\lambda_q}(y)\, dy,$  (24)
according to the Chain Rule. The error expression (23) is accomplished by substituting Formula (13b) into (24). The proof is completed by further realizing that
$\hat{G}_{j-m}^{\lambda+m,\,n_q+1}\!\left(x\left(1 - \eta^{\frac{1}{m-\alpha}}\right)\right) = \left.\frac{\partial^{n_q+1}}{\partial \tau^{n_q+1}} \hat{G}_{j-m}^{\lambda+m}(\tau)\right|_{\tau = x\left(1 - \eta^{1/(m-\alpha)}\right)} = 0,$
for all $j < m + n_q + 1$.    □
The truncation error analysis of the quadrature approximation (16) hinges on understanding the interplay between the parameters j , n q , m , λ and λ q . While Theorem 4 provides an exact error expression, the next theorem establishes a rigorous asymptotic upper bound, revealing how the error scales with these parameters.
Theorem 5.
Let the assumptions of Theorem 4 hold true. Then, the truncation error, $T_{j,n_q}^{\lambda_q}(\eta)$, in the quadrature approximation (16) is bounded above by
$\left| T_{j,n_q}^{\lambda_q}(\eta) \right| \le B_m^{\lambda,\lambda_q}\, 2^{-2n_q}\, (j-m-n_q)^{-j+m+n_q+\frac{1}{2}}\, j^{-2\lambda-2m+1}\, (j+n_q)^{\,j+2\lambda+m+n_q+\frac{1}{2}}\, n_q^{-2n_q-m-\lambda+\lambda_q-\frac{5}{2}} \left(\frac{x}{m-\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-m+\alpha)}{m-\alpha}}\; \Upsilon_{D_{\lambda_q}}(n_q)\; \mathbb{1}_{\,j \ge m+n_q+1},$  (25)
for relatively large $n_q$, where
$\Upsilon_{D_{\lambda_q}}(n_q) = \begin{cases} 1, & \lambda_q \in \mathbb{R}_0^{+}, \\ D_{\lambda_q}\, n_q^{-\lambda_q}, & \lambda_q \in \left(-\tfrac{1}{2}, 0\right), \end{cases}$
where $D_{\lambda_q} > 1$ is a constant dependent on $\lambda_q$, and $B_m^{\lambda,\lambda_q}$ is a constant dependent on m, λ, and $\lambda_q$.
Proof. 
Lemma 5.1 in [26] shows that
$\left\| \hat{G}_k^{\gamma} \right\|_{L^{\infty}(\Omega_1)} = \begin{cases} 1, & k \in \mathbb{Z}_0^{+},\ \gamma \in \mathbb{R}_0^{+}, \\ \sim \sigma_{\gamma}\, k^{-\gamma}, & \gamma \in \left(-\tfrac{1}{2}, 0\right),\ k \to \infty, \end{cases}$  (26)
where $\sigma_{\gamma} > 1$ is a constant dependent on γ. Therefore,
$\left| \hat{G}_{j-m-n_q-1}^{\lambda+m+n_q+1}\!\left(x\left(1 - \eta^{\frac{1}{m-\alpha}}\right)\right) \right| \le 1,$
since $\lambda + m + n_q + 1 > 0$. Moreover, Formula (14) and the definition of $\hat{K}_{n_q}^{\lambda_q}$ (see p. 103 of [26]) show that
$\frac{\hat{\chi}_{j-m,\,n_q+1}^{\lambda+m}}{(n_q+1)!\, \hat{K}_{n_q+1}^{\lambda_q}} = \frac{2^{-2n_q-1}\, \Gamma(\lambda_q+1)\, (j-m)!\, \Gamma\!\left(m+\lambda+\tfrac{1}{2}\right) \Gamma(n_q+2\lambda_q+1)\, \Gamma(j+m+n_q+2\lambda+1)}{\Gamma(2\lambda_q+1)\, \Gamma(n_q+2)\, \Gamma(n_q+\lambda_q+1)\, \Gamma(j+m+2\lambda)\, \Gamma(j-m-n_q)\, \Gamma\!\left(m+n_q+\lambda+\tfrac{3}{2}\right)}.$  (27)
The proof is established by applying the sharp inequalities of the Gamma function (Inequality (96) of [25]) to (27).    □
When $m \ll n_q$, the analysis of Theorem 5 bifurcates into the following two essential cases:
  • Case I ($j \approx n_q$): Let $j = m + n_q + k + 1$ with $k = o(n_q)$. The first few error factors in (25) can be simplified as follows:
    $2^{-2n_q}\, (j-m-n_q)^{-j+m+n_q+\frac{1}{2}}\, j^{-2\lambda-2m+1}\, (j+n_q)^{\,j+2\lambda+m+n_q+\frac{1}{2}}\, n_q^{-2n_q-m-\lambda+\lambda_q-\frac{5}{2}} \approx 2^{-2n_q}\, (k+1)^{-k-\frac{1}{2}}\, n_q^{-2\lambda-2m+1}\, (2n_q)^{2n_q+2\lambda+m+\frac{1}{2}}\, n_q^{-2n_q-m-\lambda+\lambda_q-\frac{5}{2}} \approx 2^{\frac{1}{2}+m+2\lambda}\, (k+1)^{-k-\frac{1}{2}}\, n_q^{-1-2m-\lambda+\lambda_q}.$
    The dominant exponential decay factor in $\sup_{\eta} \left| T_{j,n_q}^{\lambda_q}(\eta) \right|$ is therefore
    $\Lambda_{n_q,m}^{\alpha}(x,\eta) = \left(\frac{x}{m-\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-m+\alpha)}{m-\alpha}}.$
    This shows that the error bound decays exponentially with $n_q$ if
    $\frac{x\, \eta^{\frac{1-m+\alpha}{m-\alpha}}}{m-\alpha} < 1$  (28)
    is satisfied. Observe that increasing λ accelerates the algebraic decay, driven by the polynomial term $n_q^{-1-2m-\lambda+\lambda_q}$. While increasing $\lambda_q$ can counteract this acceleration, the exponential term eventually dictates the convergence rate. Practically, to improve the algebraic decay in this case, we can increase λ and choose $\lambda_q \le \lambda + 2m + 1$ to prevent polynomial term growth.
  • Case II ($j \gg n_q$): According to Lemma A1, the dominant terms involving j grow approximately like $j^{2n_q+2}$. This result can also be derived from the asymptotic error bound in (25) by observing that $j - m - n_q \approx j \approx j + n_q$. Thus, the dominant terms involving j,
    $(j-m-n_q)^{-j+m+n_q+\frac{1}{2}}\, j^{-2\lambda-2m+1}\, (j+n_q)^{\,j+2\lambda+m+n_q+\frac{1}{2}},$
    reduce to approximately $j^{2n_q+2}$. Consequently, the error bound becomes
    $\left| T_{j,n_q}^{\lambda_q}(\eta) \right| \lesssim B_m^{\lambda,\lambda_q}\, 2^{-2n_q}\, j^{2n_q+2}\, n_q^{-2n_q-m-\lambda+\lambda_q-\frac{5}{2}} \left(\frac{x}{m-\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-m+\alpha)}{m-\alpha}}\, \Upsilon_{D_{\lambda_q}}(n_q) = B_m^{\lambda,\lambda_q} \left(\frac{j^2}{4 n_q^2}\right)^{n_q} j^2\, n_q^{-\left(m+\lambda-\lambda_q+\frac{5}{2}\right)} \left(\frac{x}{m-\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-m+\alpha)}{m-\alpha}}\, \Upsilon_{D_{\lambda_q}}(n_q).$
    The exponential decay is now governed by
    $\Theta_{n_q,m,j}^{\alpha}(x,\eta) = \left(\frac{j^2\, x\, \eta^{\frac{1-m+\alpha}{m-\alpha}}}{4 n_q^2\, (m-\alpha)}\right)^{n_q},$
    which requires that either
    $j < n_q$, or
    $j = n_q$ and $\dfrac{x\, \eta^{\frac{1-m+\alpha}{m-\alpha}}}{4(m-\alpha)} < 1$.
    However, both conditions contradict the assumption $j \gg n_q$. Therefore, error convergence occurs only if
    $\frac{x\, \eta^{\frac{1-m+\alpha}{m-\alpha}}}{4(m-\alpha)} \ll 1 \ \text{ such that } \ \frac{j^2\, x\, \eta^{\frac{1-m+\alpha}{m-\alpha}}}{4 n_q^2\, (m-\alpha)} < 1.$  (31)
    In practice, to improve the algebraic decay in this case, we can choose $\lambda_q \le \lambda + m + \tfrac{5}{2}$ to prevent the polynomial term growth.
Given $m \ll n_q$, the quadrature truncation error in Theorem 5 converges when $n_q = n - m - k$, under the conditions that $1 \le k \ll n_q$ and Condition (28) are met, or if $k \gg n_q$ and Condition (31) is satisfied. In the special case when $n_q > n - m - 1$, the quadrature truncation error totally collapses due to Theorem 4. In all cases, the parameter λ always serves as a decay accelerator, whereas $\lambda_q$ functions as a decay brake. Notably, the observed slower convergence rate with increasing $\lambda_q$ aligns well with the earlier finding in [28] that selecting relatively large positive values of $\lambda_q > 2$ causes the Gegenbauer weight function associated with the GIM to diminish rapidly near the boundaries $x = \pm 1$. This effect shifts the focus of the Gegenbauer quadrature toward the central region of the interval, increasing sensitivity to errors and making the quadrature more extrapolatory. Extensive prior research by the author on the application of Gegenbauer and SG polynomials for interpolation and collocation informs the selection of the Gegenbauer index γ within the interval
$\mathbb{T}_{c,r} = \left\{ \gamma : -\tfrac{1}{2} + \varepsilon \le \gamma \le r \right\}, \quad 0 < \varepsilon \ll 1,\ r \in [1, 2],$
designated as the “Gegenbauer parameter collocation interval of choice” in [29]. Specifically, investigations utilizing GG, SG, flipped-GG-Radau, and related nodal sets demonstrate that these configurations yield optimal numerical performance within this interval, consistently producing stable and accurate schemes for problems with smooth solutions; cf. [26,28,29] and the references therein, for example.
The following theorem provides an upper bound for the asymptotic total error, encompassing both the series truncation error and the quadrature approximation error in light of Theorems 3 and 5.
Theorem 6
(Asymptotic total truncation error bound). Let $m \ll n_q$, and suppose that the assumptions of Theorems 2 and 4 hold true. Then, the total truncation error, denoted by $E_{n,n_q}^{\lambda,\lambda_q,\alpha}(x,\xi,\eta)$, arising from both the series truncation (1) and the quadrature approximation (16), is asymptotically bounded above by
$\left| E_{n,n_q}^{\lambda,\lambda_q,\alpha}(x,\xi,\eta) \right| \lesssim A_{n+1}\, \hat{\vartheta}_{m,\lambda}\, 2^{-2\lambda-2n}\, n^{\lambda+m} + \frac{A_0\, \varpi_{\mathrm{upp}}\, B_m^{\lambda,\lambda_q}\, x^{m-\alpha}}{\lambda_{\max}^{\lambda}\, \Gamma(m-\alpha+1)}\; 2^{-2n_q}\, n^{2(1-m-\lambda)}\, n_q^{-2n_q-m-\lambda+\lambda_q-\frac{5}{2}}\, (n-n_q)^{-n+m+n_q+\frac{3}{2}}\, (n+n_q)^{\,n+2\lambda+m+n_q+\frac{1}{2}}\; \Lambda_{n_q,m}^{\alpha}(x,\eta)\; \mathrm{Y}_{\sigma_{\lambda},D_{\lambda_q}}^{2}(n,n_q)\; \mathbb{1}_{\,n \ge m+n_q+1},$
for relatively large n and $n_q$, where
$\varpi_{\mathrm{upp}} = \begin{cases} \varpi_{\mathrm{upp},+}, & \lambda \in \mathbb{R}_0^{+}, \\ \varpi_{\mathrm{upp},-}, & \lambda \in \left(-\tfrac{1}{2}, 0\right), \end{cases}$
$\frac{1}{\lambda_{\max}^{\lambda}} = \begin{cases} \dfrac{1}{\lambda_n^{\lambda}}, & \lambda \in \mathbb{R}_0^{+}, \\ \dfrac{1}{\lambda_{m+n_q+1}^{\lambda}}, & \lambda \in \left(-\tfrac{1}{2}, 0\right), \end{cases}$
$\mathrm{Y}_{\sigma_{\lambda},D_{\lambda_q}}^{2}(n,n_q) = \begin{cases} 1, & \lambda \in \mathbb{R}_0^{+},\ \lambda_q \in \mathbb{R}_0^{+}, \\ \sigma_{\lambda}\, n^{-\lambda}, & \lambda \in \left(-\tfrac{1}{2},0\right),\ \lambda_q \in \mathbb{R}_0^{+}, \\ D_{\lambda_q}\, n_q^{-\lambda_q}, & \lambda \in \mathbb{R}_0^{+},\ \lambda_q \in \left(-\tfrac{1}{2},0\right), \\ \sigma_{\lambda}\, D_{\lambda_q}\, n^{-\lambda}\, n_q^{-\lambda_q}, & \lambda \in \left(-\tfrac{1}{2},0\right),\ \lambda_q \in \left(-\tfrac{1}{2},0\right), \end{cases}$
and $A_{n+1}$, $\hat{\vartheta}_{m,\lambda}$, $D_{\lambda_q}$, and $\sigma_{\lambda}$ are constants with the definitions and properties outlined in Theorems 3 and 5, as well as in Equation (26).
Proof. 
The total truncation error is the sum of the truncation error associated with Caputo FD approximation (7), T n λ α ( x , ξ ) , and the accumulated truncation errors associated with the quadrature approximation (16), for j = m : n , arising from Formula (7):
$E_{n,n_q}^{\lambda,\lambda_q,\alpha}(x,\xi,\eta) = T_n^{\lambda,\alpha}(x,\xi) + \frac{x^{m-\alpha}}{\Gamma(m-\alpha+1)} \sum_{k \in \mathbb{J}_n^{+}} \hat{\varpi}_k^{\lambda} f_k \sum_{j \in \mathbb{N}_{m:n}} \left(\hat{\lambda}_j^{\lambda}\right)^{-1} T_{j,n_q}^{\lambda_q}(\eta)\, \hat{G}_j^{\lambda}\!\left(\hat{x}_{n,k}^{\lambda}\right) = T_n^{\lambda,\alpha}(x,\xi) + \frac{x^{m-\alpha}}{\Gamma(m-\alpha+1)} \sum_{k \in \mathbb{J}_n^{+}} \varpi_k^{\lambda} f_k \sum_{j \in \mathbb{N}_{m:n}} \left(\lambda_j^{\lambda}\right)^{-1} T_{j,n_q}^{\lambda_q}(\eta)\, \hat{G}_j^{\lambda}\!\left(\hat{x}_{n,k}^{\lambda}\right),$
where $T_{j,n_q}^{\lambda_q}(\eta)$ is the truncation error associated with the quadrature approximation (16) for each j, and $\lambda_{0:n}^{\lambda}$ and $\varpi_{0:n}^{\lambda}$ are the normalization factors for Gegenbauer polynomials and the Christoffel numbers associated with their quadratures. The key upper bounds on these latter factors were recently derived in Lemmas B.1 and B.2 of [30]:
$\varpi_j^{\lambda} \le \varpi_{\mathrm{upp},+} = \frac{\pi}{n+1}\ \ \forall (j,\lambda) \in \mathbb{J}_n^{+} \times \mathbb{R}_0^{+}, \qquad \varpi_j^{\lambda} < \varpi_{\mathrm{upp},-} = \frac{\Gamma^2\!\left(\lambda+\tfrac{1}{2}\right)}{(2n)^{1+2\lambda}}\ \ \forall (j,\lambda) \in \mathbb{J}_n^{+} \times \left(-\tfrac{1}{2},0\right), \qquad \max_{j \in \mathbb{J}_n^{+}} \frac{1}{\lambda_j^{\lambda}} = \begin{cases} \dfrac{1}{\lambda_n^{\lambda}}, & \lambda \in \mathbb{R}_0^{+}, \\ \dfrac{1}{\lambda_0^{\lambda}}, & \lambda \in \left(-\tfrac{1}{2},0\right), \end{cases}$
where $\lambda_0^{\lambda} = \frac{\sqrt{\pi}\, \Gamma(1/2+\lambda)}{\Gamma(1+\lambda)}$. By combining these results with Equation (26), we can bound the total truncation error by
$\left| E_{n,n_q}^{\lambda,\lambda_q,\alpha}(x,\xi,\eta) \right| \lesssim A_{n+1}\, \hat{\vartheta}_{m,\lambda}\, 2^{-2\lambda-2n}\, n^{\lambda+m} + \frac{A_0\, \varpi_{\mathrm{upp}}\, x^{m-\alpha}}{\lambda_{\max}^{\lambda}\, \Gamma(m-\alpha+1)}\, (n+1)(n-m-n_q) \max_{j \in \mathbb{N}_{m+n_q+1:n}} \left| T_{j,n_q}^{\lambda_q}(\eta) \right|\, \Upsilon_{\sigma_{\lambda}}(n),$  (33)
for relatively large n. Since the j-dependent polynomial factor
$(j-m-n_q)^{-j+m+n_q+\frac{1}{2}}\, j^{-2\lambda-2m+1}\, (j+n_q)^{\,j+2\lambda+m+n_q+\frac{1}{2}}$
is maximized at j = n by Lemma A2, the proof is accomplished by applying the asymptotic inequality (25) to (33) after replacing j with n.    □
Under the assumptions of Theorem 6, exponential error decay dominates the overall error behavior if $n_q = n - m - k$, provided that $k \ll n_q$ and Condition (28) hold, or that $k \gg n_q$ and Condition (31) are satisfied. In the special case when $n_q > n - m - 1$, the total truncation error reduces to pure interpolation error, as the quadrature truncation error vanishes. The rigorous asymptotic analysis presented in this section leads to the following practical guideline for selecting λ and $\lambda_q$:
Rule of Thumb (Selection of the λ and $\lambda_q$ Parameters). For relatively large n and $n_q$:
  • High-precision computations: Consider $\lambda \in \Omega_2$ with an appropriately adjusted $\lambda_q$:
    $-\tfrac{1}{2} + \varepsilon \le \lambda_q \le 2.$  (34)
  • General-purpose computations: Consider $\lambda = \lambda_q = 0$ (SC interpolation and quadrature). This latter choice is motivated by the fact that the truncated expansion in the SC quadrature is known to be optimal in the $L^{\infty}$-norm for definite integral approximations of smooth functions.
Remark 2.
The recommended range (34) for λ q is derived by combining two key observations:
  • Polynomial term growth prevention: To control the quadrature truncation error bound:
    • Choose $\lambda_q$ such that
      $\lambda_q \le \lambda + 2m + 1$, for relatively small m,
      when $n_q = n - m - k$ with $k = o(n_q)$.
    • Choose $\lambda_q$ such that
      $\lambda_q \le \lambda + m + \tfrac{5}{2}$, for relatively small m,
      when $n_q = n - m - k$ with $k \ne o(n_q)$.
  • Stability and accuracy: The Gegenbauer index should lie within the interval $\mathbb{T}_{c,r}$ to ensure higher stability and accuracy.
Since $m \ge 1$, the inequalities $\lambda + 2m + 1 > 2$ and $\lambda + m + \tfrac{5}{2} > 2$ hold. To maintain stability (as indicated by Observation 2), we enforce $\lambda_q \le 2$.
Remark 3.
It is important to note that the observations made in this section rely on asymptotic results for relatively large n and $n_q$. However, since the integrand is smooth when α is not close to m, the SG quadrature often achieves high accuracy with relatively few nodes. Smooth integrands may exhibit spectral convergence before asymptotic effects take place, as we demonstrate later in Section 6.
Remark 4.
The truncation errors in the SGPS method’s quadrature strategy are not negligible in general but can be made negligible by choosing a sufficiently large n q , especially when n q > n m 1 , as demonstrated in this section. Aliasing errors, while less severe than in Fourier-based methods on equi-spaced grids, can still arise in the SGPS method due to undersampling in interpolation or quadrature, particularly for non-smooth functions or when n and n q are not sufficiently large. These errors are mitigated by the use of non-equispaced SGG nodes, barycentric forms, and the flexibility to increase n q independently of n. To ensure robustness, we may (i) increase n q for complex integrands or higher fractional orders α, (ii) follow this study’s guidelines for λ and λ q to optimize node clustering and stability, (iii) monitor solution smoothness and consider adaptive methods for non-smooth cases, and (iv) utilize the precomputable FSGIM to efficiently test the convergence of the SGPS method for different n q values. The numerical simulations in Section 6 suggest that, for smooth problems, these errors are already well controlled, with modest n and n q , achieving near-machine precision. However, for more challenging problems, careful parameter tuning and validation are essential to minimize error accumulation.
Remark 5.
The SGPS method assumes sufficient smoothness of the solution to exploit the rapid convergence properties of PS approximations. For less smooth functions, alternative specialized methods may be more appropriate. In particular, filtering techniques (e.g., modal filtering) can be integrated to dampen spurious high-frequency oscillations without significantly degrading the overall accuracy. Adaptive interpolation strategies, such as local refinement near singularities or moving-node approaches, may also be employed to capture localized features more accurately. Furthermore, domain decomposition techniques, where the computational domain is partitioned into subdomains with potentially different resolutions or spectral parameters, offer another viable pathway to accommodate irregularities while preserving the advantages of SGPS approximations within each smooth subregion.
To provide empirical support for our theoretical claims on the convergence rate of the SGPS method, we analyze the error in computing the Caputo FD as a function of the number of interpolation points for various parameter values. We estimate the rate of convergence based on a semi-log regression of the error. Specifically, we assume that the error follows an exponential decay model of the form $E_n \approx c\, e^{-pn}$, where p is the exponential decay rate and c is a positive constant. Taking the natural logarithm of this expression yields $\ln E_n \approx -pn + \ln c$. We can estimate p by performing a linear regression of $\ln E_n$ against n. The magnitude of the slope of the resulting line provides an estimate for the decay rate p. As an illustration, reconsider Test Function $f_2$, previously examined in Section 2, with its error plots shown in Figure 3. Under the same data settings, Figure 5 depicts the variation in the estimated exponential decay rate (p) and coefficient (c) with respect to λ. The decay rate p remains relatively consistent across different λ values, fluctuating slightly between 4 and 4.6, indicating that the SGPS method sustains a stable exponential convergence rate under variations in λ. The coefficient c varies smoothly between approximately 0.1 and 1, reflecting a stable baseline magnitude of the approximation error. The bounded variation in c further suggests that the method's accuracy is largely insensitive to the choice of λ within the considered range.
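The regression described above amounts to a one-line least-squares fit. The following Python fragment (with synthetic, hypothetical error data; the actual errors of Figure 3 are not reproduced here) shows how p and c can be estimated.

```python
import numpy as np

def estimate_decay_rate(n_values, errors):
    """Fit ln(E_n) ~ -p*n + ln(c) by least squares and return (p, c)."""
    slope, intercept = np.polyfit(np.asarray(n_values, float),
                                  np.log(np.asarray(errors, float)), 1)
    return -slope, np.exp(intercept)

# hypothetical error sequence roughly following E_n = 0.5 * exp(-4.3 n)
n_values = np.arange(4, 13)
noise = 1.0 + 0.05 * np.random.default_rng(0).standard_normal(n_values.size)
errors = 0.5 * np.exp(-4.3 * n_values) * noise
p, c = estimate_decay_rate(n_values, errors)
print(round(p, 2), round(c, 2))   # approximately 4.3 and 0.5
```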

5. Case Study: Caputo Fractional TPBVP of the Bagley–Torvik Type

In this section, we consider the application of the proposed method to the following Caputo fractional TPBVP of the Bagley–Torvik type:
$a\, {}^{c}D_x^{\alpha} u + b\, {}^{c}D_x^{1.5} u + c\, u(x) = f(x), \quad x \in \Omega_1,$  (35a)
with the given Dirichlet boundary conditions
$u(0) = \gamma_1, \qquad u(1) = \gamma_2,$  (35b)
where $\alpha > 1$, $\{a, b, c, \gamma_{1:2}\} \subset \mathbb{R}$, and $f \in L^2(\Omega_1)$. With the derived numerical instrument for approximating Caputo FDs, determining an accurate numerical solution to the TPBVP is rather straightforward. Indeed, collocating System (35a) at the SGG set $\hat{x}_{n,0:n}^{\lambda} = \hat{\mathbb{G}}_n^{\lambda}$ in conjunction with Equation (10) yields
$a\, \boldsymbol{Q}_n^{\alpha,E}\, \boldsymbol{u}_{0:n} + b\, \boldsymbol{Q}_n^{1.5,E}\, \boldsymbol{u}_{0:n} + c\, \boldsymbol{u}_{0:n} = \boldsymbol{f}_{0:n}.$  (36a)
Since $\hat{G}_k^{\lambda}(0) = (-1)^k$ and $\hat{G}_k^{\lambda}(1) = 1$ for all $k \in \mathbb{J}_n^{+}$, according to the properties of SG polynomials, substituting the boundary conditions (35b) into Equation (1) gives the following system of equations:
$\left(\hat{\boldsymbol{\lambda}}_{0:n}^{\lambda,\div}\right)^{\!\top} \left( \left((-1)^{0:n}\, \boldsymbol{1}_{n+1}^{\top}\right) \odot \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[\hat{\boldsymbol{x}}_n^{\lambda}\right] \right) \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \boldsymbol{u}_{0:n} = \gamma_1,$  (36b)
$\left(\hat{\boldsymbol{\lambda}}_{0:n}^{\lambda,\div}\right)^{\!\top} \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[\hat{\boldsymbol{x}}_n^{\lambda}\right] \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \boldsymbol{u}_{0:n} = \gamma_2.$  (36c)
Therefore, the linear system described by Equations (36a), (36b), and (36c) can now be compactly written in the following form:
$\boldsymbol{A}\, \boldsymbol{u}_{0:n} = \boldsymbol{F},$  (37)
where
$\boldsymbol{A} = \begin{bmatrix} a\, \boldsymbol{Q}_n^{\alpha,E} + b\, \boldsymbol{Q}_n^{1.5,E} + c\, \boldsymbol{I}_{n+1} \\ \left(\hat{\boldsymbol{\lambda}}_{0:n}^{\lambda,\div}\right)^{\!\top} \left( \left((-1)^{0:n}\, \boldsymbol{1}_{n+1}^{\top}\right) \odot \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[\hat{\boldsymbol{x}}_n^{\lambda}\right] \right) \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \\ \left(\hat{\boldsymbol{\lambda}}_{0:n}^{\lambda,\div}\right)^{\!\top} \hat{\boldsymbol{G}}_{0:n}^{\lambda}\!\left[\hat{\boldsymbol{x}}_n^{\lambda}\right] \operatorname{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right) \end{bmatrix}$
is the collocation matrix, and
$\boldsymbol{F} = \left[\boldsymbol{f}_{0:n};\ \gamma_{1:2}\right].$
The solution to the linear system (37) provides the approximate solution values at the SGG points. The solution values at any non-collocated point in Ω 1 can further be estimated with excellent accuracy via the interpolation Formula (1).
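For readers who wish to experiment, the following self-contained Python sketch mimics the collocation construction above on Example 1 of Section 6. It is a simplified stand-in rather than the paper's scheme: Chebyshev–Lobatto nodes and Gauss–Legendre quadrature replace the SGG nodes and SG quadrature, the Lagrange basis is formed in monomial rather than barycentric/modal form, the integer-order derivative is taken directly from the polynomial basis instead of the SGDM, and the two endpoint collocation rows are replaced by the boundary conditions so that the system is square.

```python
import numpy as np
from math import gamma, ceil

n = 8
nodes = 0.5 * (1.0 - np.cos(np.pi * np.arange(n + 1) / n))     # stand-in nodes on [0, 1]

def lagrange_basis(j):
    p = np.poly1d([1.0])
    for k in range(n + 1):
        if k != j:
            p *= np.poly1d([1.0, -nodes[k]]) * (1.0 / (nodes[j] - nodes[k]))
    return p

def caputo_of_poly(p, alpha, x, nq=40):
    """Caputo FD of the polynomial p at x > 0 via the transformed integral (5)."""
    m = ceil(alpha)
    y, w = np.polynomial.legendre.leggauss(nq)
    y, w = 0.5 * (y + 1.0), 0.5 * w
    return x ** (m - alpha) / gamma(m - alpha + 1) * np.dot(
        w, p.deriv(m)(x * (1.0 - y ** (1.0 / (m - alpha)))))

basis = [lagrange_basis(j) for j in range(n + 1)]
f = lambda x: x ** 2 + 2.0 + 4.0 * np.sqrt(x / np.pi)          # source term of Example 1

A = np.zeros((n + 1, n + 1)); F = np.zeros(n + 1)
for i in range(1, n):                                          # collocate the ODE at interior nodes
    x = nodes[i]
    for j, p in enumerate(basis):
        A[i, j] = p.deriv(2)(x) + caputo_of_poly(p, 1.5, x) + p(x)
    F[i] = f(x)
for j, p in enumerate(basis):                                  # Dirichlet boundary rows
    A[0, j], A[n, j] = p(0.0), p(1.0)
F[0], F[n] = 0.0, 1.0

u = np.linalg.solve(A, F)
print(np.max(np.abs(u - nodes ** 2)))                          # exact solution is u(x) = x^2
```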
When $\alpha \in \mathbb{Z}^{+}$, the Caputo FD reduces to the classical integer-order derivative of the same order. In this case, we can use the first-order GDM in barycentric form, $\boldsymbol{D}^{(1)}$, of Elgindy and Dahy [31]. This matrix enables the approximation of the function's derivative at the GG nodes using the function values at those nodes by employing matrix–vector multiplication. The entries of the differentiation matrix are computed based on the barycentric weights and GG nodes. The associated differentiation formula exhibits high accuracy, often exhibiting exponential convergence for smooth functions. This rapid convergence is a hallmark of PS methods and makes the GDM highly accurate for approximating derivatives. Furthermore, the utilization of barycentric forms improves the numerical stability of the differentiation matrix and leads to efficient computations. Using the properties of PS differentiation matrices, higher-order differentiation matrices can be readily generated through successive multiplication by the first-order GDM:
$\boldsymbol{D}^{(k)} = \left(\boldsymbol{D}^{(1)}\right)^{k}, \quad k > 1.$
The SGDM of any order k, $\hat{\boldsymbol{D}}^{(k)}$, based on the SGG point set $\hat{\mathbb{G}}_n^{\lambda}$, can be generated directly from $\boldsymbol{D}^{(1)}$ using the following formula:
$\hat{\boldsymbol{D}}^{(k)} = 2^{k} \left(\boldsymbol{D}^{(1)}\right)^{k}, \quad k \ge 1.$
Figure 6 outlines the complete solution workflow for applying the SGPS method to Bagley–Torvik TPBVPs. The process begins with constructing the FSGIMs and, when necessary, the SGDM for integer orders. These are used to discretize the governing fractional differential equations via collocation at SGG nodes. The resulting system is assembled into a linear algebraic system, which is solved to obtain the numerical solution at collocation points. Finally, the global numerical solution is recovered by interpolating these discrete values using the SGPS interpolant.

6. Numerical Examples

In this section, we present numerical experiments conducted on a personal laptop equipped with an AMD Ryzen 7 4800H processor (2.9 GHz, 8 cores/16 threads) and 16GB of RAM, and running Windows 11. All simulations were performed using MATLAB R2023b. The accuracy of the computed solutions was assessed using absolute errors and maximum absolute errors, which provide quantitative measures of the pointwise and worst-case discrepancies between the exact and numerical solutions, respectively.
Example 1.
Consider the Caputo fractional TPBVP of the Bagley–Torvik type
D x 2 c u + D x 1.5 c u + u ( x ) = x 2 + 2 + 4 x π , x Ω 1 ,
with the given Dirichlet boundary conditions
\[
u(0)=0,\qquad u(1)=1.
\]
The exact solution is $u(x)=x^{2}$. This problem was solved by Al-Mdallal et al. [32] using a method that combines collocation, spline analysis, and the shooting technique. Their reported error norm was $3.78\times 10^{-12}$; cf. [33]. Later, Batool et al. [33] addressed the same problem using integral operational matrices based on Chelyshkov polynomials, transforming the problem into solvable Sylvester-type equations. They reported an error norm of $2.3388\times 10^{-25}$, obtained using approximate solution terms carrying significantly more than 16 digits of precision. Specifically, the three terms used to derive this error included 32, 47, and 47 digits after the decimal point, indicating that the method relies on extended or arbitrary-precision arithmetic rather than being constrained to standard double precision. For a fairer comparison, since all components of our computational algorithm adhere to double-precision representations and computations, we recalculated their approximate solution using Equation (92) of [33] on the MATLAB platform with double-precision arithmetic. Our results indicate that the maximum absolute error in their approximate solution, evaluated at 50 equally spaced points in $\Omega_{1}$, was approximately $2.22\times 10^{-16}$. The SGPS method produced this same result using the parameters $n=n_q=4$ and $\lambda=\lambda_q=1.1$. The elapsed time required to run the SGPS method was 0.004732 s. Figure 7 illustrates the exact solution, the approximate solution obtained using the SGPS method, and the absolute errors at the SGG collocation points.
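For reference, the forcing term above follows directly from the exact solution $u(x)=x^{2}$ and the Caputo power rule:
\[
{}^{c}D_{x}^{2}x^{2}=2,\qquad
{}^{c}D_{x}^{1.5}x^{2}=\frac{\Gamma(3)}{\Gamma(3-1.5)}\,x^{0.5}=\frac{2}{\Gamma(3/2)}\sqrt{x}=4\sqrt{\frac{x}{\pi}},
\]
so that ${}^{c}D_{x}^{2}u+{}^{c}D_{x}^{1.5}u+u=x^{2}+2+4\sqrt{x/\pi}$, in agreement with the right-hand side of the equation above.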
Example 2.
Consider the Caputo fractional TPBVP of the Bagley–Torvik type
\[
{}^{c}D_{x}^{2}u+{}^{c}D_{x}^{1.5}u+u(x)=1+x,\quad x\in\Omega_{1},
\]
with the given Dirichlet boundary conditions
\[
u(0)=1,\qquad u(1)=2.
\]
The exact solution is $u(x)=1+x$. Yüzbaşı [34] solved this problem using a numerical technique based on collocation points, matrix operations, and a generalized form of Bessel functions of the first kind. The maximum absolute error reported in [34] (at $M=6$) was $4.6047\times 10^{-8}$. Our SGPS method produced near-exact solution values within a maximum absolute error of $4.44\times 10^{-16}$ using $n=n_q=\lambda=\lambda_q=2$; cf. Figure 8. The elapsed time required to run the SGPS method was 0.004142 s.
Example 3.
Consider the Caputo fractional TPBVP of the Bagley–Torvik type
\[
{}^{c}D_{x}^{1.5}u+u(x)=\frac{2}{\Gamma(3/2)}\sqrt{x}+x(x-1),\quad x\in\Omega_{1},
\]
with the given Dirichlet boundary conditions
\[
u(0)=u(1)=0.
\]
The exact solution is $u(x)=x^{2}-x$. Our SGPS method produced near-exact solution values within a maximum absolute error of $1.94\times 10^{-16}$ using $n=n_q=3$ and $\lambda=\lambda_q=1$; cf. Figure 9. The elapsed time required to run the SGPS method was 0.004160 s.
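As in Example 1, the right-hand side is consistent with the exact solution: the Caputo derivative of order 1.5 annihilates the linear part, so by the power rule
\[
{}^{c}D_{x}^{1.5}\!\left(x^{2}-x\right)=\frac{\Gamma(3)}{\Gamma(3-1.5)}\,x^{0.5}=\frac{2}{\Gamma(3/2)}\sqrt{x},
\]
and adding $u(x)=x^{2}-x$ recovers the stated forcing term.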

7. Sensitivity Analysis of SG Parameters for Relatively Small n and n_q

Optimizing the performance of the SGPS method requires a thorough understanding of how the Gegenbauer parameters λ and λ_q influence numerical stability and accuracy. These parameters govern the clustering of the collocation and quadrature points, directly affecting the condition number of the collocation matrix A and the overall robustness of the method. In this section, we present a sensitivity analysis to quantify the impact of varying λ and λ_q on the stability of the SGPS method, as measured by the condition number κ(A). The analysis specifically examines the behavior of the method when solving Caputo fractional TPBVPs of the Bagley–Torvik type with relatively small interpolation and quadrature mesh sizes. The numerical examples from Section 6 serve as the basis for this analysis, and the results are visualized using surface plots, contour plots, and semilogarithmic plots to illustrate the condition number's behavior across the parameter space.
Figure 10 illustrates the influence of varying the parameters λ and λ_q on the condition number of the collocation matrix A associated with Example 1. Higher condition numbers indicate increased sensitivity to perturbations in the input data, potentially leading to instability in the numerical solution. The results show that, for relatively small n and n_q, the condition number is governed mainly by λ; as λ increases, the condition number tends to grow linearly. Conversely, the condition number exhibits minimal sensitivity to changes in λ_q within the range specified by the "Rule of Thumb." This suggests that the stability of the method for low-degree SG interpolants and small quadrature mesh sizes depends primarily on the appropriate selection of λ. Specifically, choosing a λ that is relatively large and positive can compromise stability. In contrast, the figure indicates that λ values closer to −1/2 (while maintaining a sufficient distance to prevent excessive growth in SG polynomial values), combined with λ_q values within the interval T_{c,r}, particularly near its endpoints, yield lower condition numbers. We notice, however, that κ(A) remains on the order of 10² for −0.49 ≤ λ, λ_q ≤ 1.9, indicating that the SGPS method is numerically stable for this range of parameters. Moreover, for double-precision arithmetic, this observation implies a potential loss of about two significant digits in the worst case. However, the actual error observed in the numerical experiments is much smaller, indicating that the method is highly accurate in practice. Figure 11 and Figure 12 present further sensitivity analyses of the SGPS method's numerical stability for Examples 2 and 3. The condition number in both examples is on the order of 10 for the parameter range considered (λ and λ_q up to nearly 2), indicating high numerical stability over that range. These figures consistently indicate that, for relatively small n and n_q, stability is influenced by λ but minimally impacted by λ_q.
The sensitivity analysis conducted in this section reveals an important decoupling in parameter effects: while λ primarily governs numerical stability through its linear relationship with the condition number of the collocation matrix, λ_q predominantly controls the accuracy of the Caputo FD approximations, as seen earlier in Figure 2 and Figure 3, without significantly affecting the system conditioning. This decoupling allows for the independent optimization of stability and accuracy. In particular, we can select λ to ensure well-conditioned systems while tuning λ_q to achieve the desired precision in derivative computations. The recommended parameter ranges (λ, λ_q ∈ T_{c,r}) provide a practical balance. Negative values of λ and λ_q close to −0.49 can improve stability and accuracy when using smaller interpolation and quadrature grids, while excellent quadrature accuracy is often achieved at λ_q = 0.5. This separation of concerns simplifies parameter selection and enables robust implementations across diverse problem configurations.
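In outline, the sweeps behind Figures 10–12 can be reproduced as follows; assemble_A is a hypothetical helper standing in for the assembly of the collocation matrix described earlier (it is not part of any released code), so the sketch is illustrative only.

% Sketch of the (lambda, lambda_q) sensitivity sweep for kappa(A).
lam  = linspace(-0.49, 2, 50);
lamq = linspace(-0.49, 2, 50);
kap  = zeros(numel(lam), numel(lamq));
for i = 1:numel(lam)
    for j = 1:numel(lamq)
        A        = assemble_A(lam(i), lamq(j));  % n, n_q, alpha fixed as in Section 7
        kap(i,j) = cond(A);                      % 2-norm condition number
    end
end
surf(lamq, lam, log10(kap));                     % cf. the surface plots of Figures 10-12
xlabel('\lambda_q'); ylabel('\lambda'); zlabel('log_{10} \kappa(A)');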

8. Conclusions and Discussion

This study pioneers a unified SGPS framework that seamlessly integrates interpolation and integration for approximating higher-order Caputo FDs and solving TPBVPs of the Bagley–Torvik type, offering significant advancements in numerical methods for fractional differential equations through the following: (i) The development of FSGIMs that accurately and efficiently approximate Caputo FDs at any arbitrary set of points using SG quadratures generalizes traditional PS differentiation matrices to the fractional-order setting, which we consider a significant theoretical advancement. (ii) The FSGIMs can be pre-computed and stored, significantly accelerating the execution of the SGPS method. (iii) The method applies an innovative change of variables that transforms the Caputo FD into a scaled integral of an integer-order derivative. This transformation simplifies computations, facilitates error analysis, and mitigates singularities of the Caputo FD near zero, which improves both stability and accuracy. (iv) The method can produce approximations within near-full machine precision at an exponential rate using relatively coarse mesh grids. (v) The method generally improves numerical stability and avoids much of the ill conditioning associated with classical PS differentiation matrices by using SG quadratures in barycentric form. (vi) The proposed methodology can be extended to multidimensional fractional problems, making it a strong candidate for future research in high-dimensional fractional differential equations. (vii) Unlike traditional methods that treat interpolation and integration separately, the current method unifies these operations into a cohesive framework using SG polynomials. Numerical experiments validated the superior accuracy of the proposed method over existing techniques, achieving near-machine-precision results in many cases.

The current study also highlighted critical guidelines for selecting the parameters λ and λ_q to optimize the performance of the SGPS method for relatively small α. In particular, for large interpolation and quadrature mesh sizes, and for high-precision computations, λ should be selected within the range Ω_2, while λ_q should be adjusted to satisfy −1/2 + ε ≤ λ_q ≤ 2. This ensures a balance between convergence speed and numerical stability. For general-purpose computations, setting λ = λ_q = 0 (corresponding to the SC interpolant and quadrature) is recommended, as it provides optimal L^∞-norm accuracy for smooth functions. The analysis also revealed that increasing λ accelerates theoretical convergence but may introduce numerical instability due to extrapolation effects, while larger λ_q values can slow convergence. For relatively small n and n_q, the sensitivity analysis in this study reveals that the conditioning of the linear system of equations produced by the SGPS method when treating a Caputo fractional TPBVP of the Bagley–Torvik type grows approximately linearly with λ. This indicates that smaller values of λ can lead to improved numerical stability in this case. In particular, it is advisable to choose negative λ values, especially in the neighborhood of −0.49, as evidenced by the numerical simulations, but not too close to −1/2, to avoid the rapid growth of SG polynomials. For relatively small n and n_q, the conditioning of the linear system is less sensitive to variations in λ_q than to variations in λ, with minimal effect on stability.
However, to maintain accuracy, it is still advisable to keep λ_q within the recommended interval T_{c,r}, with excellent quadrature accuracy often attained at λ_q = 0.5. These insights ensure robust and efficient implementations of the SGPS method across diverse problem settings. The SGPS method's computational efficiency is further underscored by its predictable runtime and storage costs, as summarized in Table 2. For practitioners, these estimates provide clear guidelines for resource allocation. The table also highlights recommended parameter ranges to balance accuracy and stability.
The current work assumes sufficient smoothness of the solution to achieve exponential convergence. For fractional problems involving weakly singular or non-smooth solutions, where derivatives may be unbounded, future research may investigate adaptive techniques—such as graded meshes or hybrid spectral–finite element approaches—to extend the method’s applicability. The robust approximation of Caputo derivatives achieved by the SGPS method creates opportunities for modeling viscoelasticity in smart materials, anomalous transport in heterogeneous media, and non-local dynamics in control theory. Future directions could include adaptive parameter tuning to capture singularities in viscoelastic models or coupling the method with machine learning to optimize fractional-order controllers. These applications would improve the method’s interdisciplinary relevance while preserving its mathematical rigor. Additionally, the SGPS approach could be extended to multidimensional fractional problems, where tensor products of one-dimensional FSGIMs can be employed. The inherent parallelizability of FSGIM matrix–vector operations makes the method particularly suitable for GPU acceleration or distributed computing. For time-dependent fractional PDEs, like fractional diffusion equations, the SGPS method can employ the FSGIM for spatial discretization, transforming the problem into a system of ODEs in time. Standard time-stepping schemes, such as Runge–Kutta or fractional linear multistep methods, can then be applied. The precomputation and reuse of the FSGIM for spatial discretization at each time step can yield significant efficiency gains in time-marching schemes.
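To make the last point concrete, a minimal method-of-lines sketch for a space-fractional diffusion model u_t = κ D_x^α u (Caputo in space) is given below; the FSGIM Qalpha, the SGG node vector x, and the treatment of boundary rows are assumed available and are intentionally left schematic (illustrative names only, not the paper's code).

% Method-of-lines sketch: FSGIM in space, a standard stiff integrator in time.
kappa = 0.1;                                 % illustrative diffusivity
rhs   = @(t, u) kappa * (Qalpha * u);        % spatial Caputo FD applied via the FSGIM
u0    = sin(pi * x);                         % initial data at the SGG nodes x
[tout, uout] = ode15s(rhs, [0, 1], u0);      % time marching; the FSGIM is reused each step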

Funding

The Article Processing Charges (APCs) for this publication were funded by Ajman University, United Arab Emirates.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

The author declares that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The author declares that there are no conflicts of interest.

Abbreviations

Acronym: Meaning
FD: Fractional derivative
FSGIM: Fractional-order shifted Gegenbauer integration matrix
GDM: Gegenbauer differentiation matrix
GG: Gegenbauer–Gauss
GIM: Gegenbauer integration matrix
GIRV: Gegenbauer integration row vector
PS: Pseudospectral
SC: Shifted Chebyshev
SGDM: Shifted Gegenbauer differentiation matrix
SGIM: Shifted Gegenbauer integration matrix
SGIRV: Shifted Gegenbauer integration row vector
SGPS: Shifted Gegenbauer pseudospectral
SG: Shifted Gegenbauer
SGG: Shifted Gegenbauer–Gauss
SL: Shifted Legendre
TPBVP: Two-point boundary value problem

Appendix A. SGPS Algorithm for Bagley–Torvik TPBVPs

Algorithm A1 SGPS_BAGLEY_TORVIK
1:  procedure SGPS_BAGLEY_TORVIK(f, α, a, b, c, n, λ, n_q, λ_q, z)
2:      // Step 1: Generate SGG nodes and weights
3:      (ξ, w, w̃) ← GG nodes, Christoffel numbers, and barycentric weights on [−1, 1] with n + 1 points and parameter λ
4:      x ← 0.5 · (ξ + 1)                          ▹ SGG nodes on [0, 1]
5:      // Step 2: Construct Caputo FSGIM
6:      Q^α ← FSGIM for the Caputo derivative of order α on [0, 1]
7:      (G, λ̄) ← Gegenbauer basis evaluated at ξ and their squared norms
8:      // Step 3: Assemble linear system from Equations (35a)–(35b)
9:      I ← identity matrix of size (n + 1) × (n + 1)
10:     if α ∉ Z^+ then
11:         A_colloc ← a·Q^α + b·Q^1.5 + c·I
12:     else
13:         // Step 4: Construct PS differentiation matrices
14:         D ← barycentric GDM on [−1, 1]
15:         m ← α
16:         D_m ← 2^m · D^(m)                      ▹ mth-order SGDM
17:         A_colloc ← a·D_m + b·Q^1.5 + c·I
18:     A_BC1 ← trp(λ̄^÷) [((−1)^{0:n} 1_{n+1}^⊤) ⊙ Ĝ(x)] diag(w)   ▹ Ĝ is the SG polynomial
19:     A_BC2 ← trp(λ̄^÷) Ĝ(x) diag(w)
20:     A ← [A_colloc; A_BC1; A_BC2]
21:     // Step 5: Assemble right-hand side
22:     F ← [f(x); γ_1; γ_2]                        ▹ γ_1, γ_2: boundary conditions
23:     // Step 6: Solve and interpolate
24:     u ← approximate solution of A u = F at the collocation nodes x
25:     v(z) ← barycentric interpolation of u at target points z ⊂ [0, 1]
26:     return u, v(z)
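If Algorithm A1 were realized as a MATLAB function with the same signature (a hypothetical sgps_bagley_torvik; no such function is distributed with this paper), Example 1 of Section 6 could be driven along these lines, with the Dirichlet data γ_1 = 0 and γ_2 = 1 handled as in Step 5:

% Hypothetical driver mirroring Algorithm A1 for Example 1 (illustrative only).
f     = @(x) x.^2 + 2 + 4*sqrt(x/pi);        % forcing term of Example 1
alpha = 2;  a = 1;  b = 1;  c = 1;           % Bagley-Torvik data
n = 4;  lam = 1.1;  nq = 4;  lamq = 1.1;     % SGPS parameters used in Example 1
z = linspace(0, 1, 50).';                    % evaluation points
[u, vz] = sgps_bagley_torvik(f, alpha, a, b, c, n, lam, nq, lamq, z);
max(abs(vz - z.^2))                          % compare with the exact solution x^2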

Appendix B. Mathematical Proof

Lemma A1.
Let $\lambda>-\tfrac12$, $m\ge 1$, and $j\ge m+n_q+1$. Then, the $j$-dependent factor in Equation (27),
\[
\frac{(j-m)!\;\Gamma(j+m+n_q+2\lambda+1)}{\Gamma(j+m+2\lambda)\,\Gamma(j-m-n_q)},\tag{A1}
\]
has the asymptotic order $O\!\left(j^{2n_q+2}\right)$ as $j\to\infty$, for relatively large $n_q$.
Proof. 
We analyze the asymptotic behavior of the expression as $j\to\infty$ using Stirling's approximation for the Gamma function:
\[
\Gamma(z)\sim\sqrt{2\pi}\,z^{z-\frac12}e^{-z},\quad\text{for relatively large } z.
\]
By also realizing that $(j-m)!=\Gamma(j-m+1)$, we have
\[
\Gamma(j-m+1)\sim\sqrt{2\pi}\,j^{\,j-m+\frac12}e^{-j},\qquad
\Gamma(j+m+n_q+2\lambda+1)\sim\sqrt{2\pi}\,(j+n_q)^{\,j+m+n_q+2\lambda+\frac12}e^{-j-n_q},
\]
\[
\Gamma(j+m+2\lambda)\sim\sqrt{2\pi}\,j^{\,j+m+2\lambda-\frac12}e^{-j},\qquad
\Gamma(j-m-n_q)\sim\sqrt{2\pi}\,(j-n_q)^{\,j-m-n_q-\frac12}e^{\,n_q-j}.
\]
Since $(j\pm n_q)^{k}=j^{k}\bigl(1\pm\tfrac{n_q}{j}\bigr)^{k}\sim j^{k}e^{\pm k n_q/j}$ for relatively large $j$, we can write the key ratio (A1) as follows:
\[
\frac{\Gamma(j-m+1)\,\Gamma(j+m+n_q+2\lambda+1)}{\Gamma(j+m+2\lambda)\,\Gamma(j-m-n_q)}
\sim\frac{j^{\,j-m+\frac12}\,(j+n_q)^{\,j+m+n_q+2\lambda+\frac12}}{j^{\,j+m+2\lambda-\frac12}\,(j-n_q)^{\,j-m-n_q-\frac12}}\,e^{-2n_q}
\]
\[
\sim\frac{j^{\,j-m+\frac12}\,j^{\,j+m+n_q+2\lambda+\frac12}\,e^{\left(j+m+n_q+2\lambda+\frac12\right)n_q/j}}{j^{\,j+m+2\lambda-\frac12}\,j^{\,j-m-n_q-\frac12}\,e^{-\left(j-m-n_q-\frac12\right)n_q/j}}\,e^{-2n_q}
=e^{\,2\lambda n_q/j}\,j^{\,2+2n_q}
=O\!\left(j^{2n_q+2}\right),\quad\text{as } j\to\infty,\ \text{for relatively large } n_q.\;\square
\]
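The predicted growth rate can be checked numerically with log-Gamma evaluations in MATLAB (a quick sanity check, not part of the proof):

% Numerical check of the O(j^(2*nq+2)) growth established in Lemma A1.
lambda = 0.5;  m = 2;  nq = 4;
j = (m + nq + 1 : 200).';
logratio = gammaln(j - m + 1) + gammaln(j + m + nq + 2*lambda + 1) ...
         - gammaln(j + m + 2*lambda) - gammaln(j - m - nq);
p = polyfit(log(j(end-50:end)), logratio(end-50:end), 1);
disp(p(1))                                   % slope approaches 2*nq + 2 = 10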
The following lemma is useful in analyzing the error bound of Theorem 5.
Lemma A2.
Let $\lambda>-\tfrac12$ and $m\ge 1$ be an integer. The function
\[
E(j)=j^{\,2\lambda-2m+1}\,(j-m-n_q)^{\,-j+m+n_q+\frac12}\,(j+n_q)^{\,j+2\lambda+m+n_q+\frac12}
\]
is strictly increasing with $j$, $\forall j\ge m+n_q+1$ and relatively large $n_q$.
Proof. 
Suppose that the assumptions of the lemma hold true. We show first that the logarithmic derivative of $E(j)$ is positive $\forall j\ge m+n_q+1$. To this end, take the natural logarithm
\[
\ln E(j)=A\ln j+B\ln(j-m-n_q)+C\ln(j+n_q),
\]
where
\[
A=2\lambda-2m+1,\qquad B=-j+m+n_q+\tfrac12,\qquad C=j+2\lambda+m+n_q+\tfrac12.
\]
Differentiating with respect to $j$ yields
\[
\partial_j\ln E(j)=\frac{A}{j}-\ln(j-m-n_q)+\frac{B}{j-m-n_q}+\ln(j+n_q)+\frac{C}{j+n_q}
=\ln\frac{j+n_q}{j-m-n_q}+\frac{A}{j}+\frac{B}{j-m-n_q}+\frac{C}{j+n_q}.
\]
For $j\ge m+n_q+1$, we have
  • $\ln\dfrac{j+n_q}{j-m-n_q}>0$, since $\dfrac{j+n_q}{j-m-n_q}>1$;
  • $\dfrac{A}{j}\to 0$ for relatively large $n_q$, since $j\ge m+n_q+1$;
  • $\dfrac{B}{j-m-n_q}=-1+\dfrac{1}{2(j-m-n_q)}\in\left(-1,-\tfrac12\right]$;
  • $\dfrac{C}{j+n_q}=1+\dfrac{2\lambda+m+\tfrac12}{j+n_q}>1$, since $2\lambda+m+\tfrac12>0$.
The rational terms combine to give a positive quantity. Thus, the logarithmic derivative, $\partial_j\ln E(j)$, is positive $\forall j\ge m+n_q+1$. Since the natural logarithm is strictly increasing, it follows that $E(j)$ itself must be strictly increasing with $j$ in that range. □

References

  1. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Elsevier: Amsterdam, The Netherlands, 1998; Volume 198. [Google Scholar]
  2. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204. [Google Scholar]
  3. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity; Imperial College Press: London, UK, 2010. [Google Scholar]
  4. Das, S. Functional Fractional Calculus; Springer: Berlin/Heidelberg, Germany, 2011; Volume 1. [Google Scholar]
  5. Magin, R. Fractional calculus in bioengineering, part 1. Crit. Rev. Biomed. Eng. 2004, 32. [Google Scholar]
  6. Monje, C.A.; Chen, Y.; Vinagre, B.M.; Xue, D.; Feliu-Batlle, V. Fractional-Order Systems and Controls: Fundamentals and Applications; Springer Science & Business Media: Basel, Switzerland, 2010. [Google Scholar]
  7. Caputo, M. Linear models of dissipation whose Q is almost frequency independent—II. Geophys. J. Int. 1967, 13, 529–539. [Google Scholar] [CrossRef]
  8. Momani, S.; Batiha, I.M.; Bendib, I.; Ouannas, A.; Hioual, A.; Mohamed, D. Examining finite-time behaviors in the fractional Gray–Scott model: Stability, synchronization, and simulation analysis. Int. J. Cogn. Comput. Eng. 2025, 6, 380–390. [Google Scholar] [CrossRef]
  9. Diethelm, K.; Ford, N.J.; Freed, A.D. A detailed error analysis for a fractional Adams method. Numer. Algorithms 2004, 36, 31–52. [Google Scholar] [CrossRef]
  10. Saw, V.; Kumar, S. Numerical solution of fraction Bagley–Torvik boundary value problem based on Chebyshev collocation method. Int. J. Appl. Comput. Math. 2019, 5, 68. [Google Scholar] [CrossRef]
  11. Ji, T.; Hou, J.; Yang, C. Numerical solution of the Bagley–Torvik equation using shifted Chebyshev operational matrix. Adv. Differ. Equations 2020, 2020, 648. [Google Scholar] [CrossRef]
  12. Hou, J.; Yang, C.; Lv, X. Jacobi collocation methods for solving the fractional Bagley–Torvik equation. Int. J. Appl. Math 2020, 50, 114–120. [Google Scholar]
  13. Ji, T.; Hou, J. Numerical solution of the Bagley–Torvik equation using Laguerre polynomials. SeMA J. 2020, 77, 97–106. [Google Scholar] [CrossRef]
  14. Kaur, H.; Kumar, R.; Arora, G. Non-dyadic wavelets based computational technique for the investigation of Bagley–Torvik equations. Int. J. Emerg. Technol. 2019, 10, 1–14. [Google Scholar]
  15. Dincel, A.T. A sine-cosine wavelet method for the approximation solutions of the fractional Bagley–Torvik equation. Sigma J. Eng. Nat. Sci. 2021, 40, 150–154. [Google Scholar]
  16. Rabiei, K.; Razzaghi, M. The Numerical Solution of the Fractional Bagley–Torvik Equation by the Boubaker Wavelets. In Acoustics and Vibration of Mechanical Structures–AVMS-2021: Proceedings of the 16th AVMS, Timişoara, Romania, 28–29 May 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 27–37. [Google Scholar]
  17. Abd-Elhameed, W.; Youssri, Y. Spectral solutions for fractional differential equations via a novel Lucas operational matrix of fractional derivatives. Rom. J. Phys 2016, 61, 795–813. [Google Scholar]
  18. Youssri, Y.H. A new operational matrix of Caputo fractional derivatives of Fermat polynomials: An application for solving the Bagley–Torvik equation. Adv. Differ. Equations 2017, 2017, 1–17. [Google Scholar] [CrossRef]
  19. Izadi, M.; Negar, M.R. Local discontinuous Galerkin approximations to fractional Bagley–Torvik equation. Math. Methods Appl. Sci. 2020, 43, 4798–4813. [Google Scholar] [CrossRef]
  20. Chen, J. A fast multiscale Galerkin algorithm for solving boundary value problem of the fractional Bagley–Torvik equation. Bound. Value Probl. 2020, 2020, 1–13. [Google Scholar] [CrossRef]
  21. Tamilselvan, A. Second order spline method for fractional Bagley–Torvik equation with variable coefficients and Robin boundary conditions. J. Math. Model. 2023, 11, 117–132. [Google Scholar]
  22. Verma, A.; Kumar, M. Numerical solution of Bagley–Torvik equations using Legendre artificial neural network method. Evol. Intell. 2021, 14, 2027–2037. [Google Scholar] [CrossRef]
  23. Elgindy, K.T. High-order numerical solution of second-order one-dimensional hyperbolic telegraph equation using a shifted Gegenbauer pseudospectral method. Numer. Methods Partial. Differ. Equ. 2016, 32, 307–349. [Google Scholar] [CrossRef]
  24. Elgindy, K.T. High-order, stable, and efficient pseudospectral method using barycentric Gegenbauer quadratures. Appl. Numer. Math. 2017, 113, 1–25. [Google Scholar] [CrossRef]
  25. Elgindy, K.T. Optimal control of a parabolic distributed parameter system using a fully exponentially convergent barycentric shifted Gegenbauer integral pseudospectral method. J. Ind. Manag. Optim. 2018, 14, 473. [Google Scholar] [CrossRef]
  26. Elgindy, K.T.; Refat, H.M. High-order shifted Gegenbauer integral pseudo-spectral method for solving differential equations of Lane–Emden type. Appl. Numer. Math. 2018, 128, 98–124. [Google Scholar] [CrossRef]
  27. Szegö, G. Orthogonal Polynomials; American Mathematical Society Colloquium Publication: Seattle, WA, USA, 1975; Volume 23. [Google Scholar]
  28. Elgindy, K.T.; Smith-Miles, K.A. Optimal Gegenbauer quadrature over arbitrary integration nodes. J. Comput. Appl. Math. 2013, 242, 82–106. [Google Scholar] [CrossRef]
  29. Elgindy, K.T.; Karasözen, B. Distributed optimal control of viscous Burgers’ equation via a high-order, linearization, integral, nodal discontinuous Gegenbauer-Galerkin method. Optim. Control. Appl. Methods 2020, 41, 253–277. [Google Scholar] [CrossRef]
  30. Elgindy, K.T.; Refat, H.M. Direct integral pseudospectral and integral spectral methods for solving a class of infinite horizon optimal output feedback control problems using rational and exponential Gegenbauer polynomials. Math. Comput. Simul. 2024, 219, 297–320. [Google Scholar] [CrossRef]
  31. Elgindy, K.T.; Dahy, S.A. High-order numerical solution of viscous Burgers’ equation using a Cole-Hopf barycentric Gegenbauer integral pseudospectral method. Math. Methods Appl. Sci. 2018, 41, 6226–6251. [Google Scholar] [CrossRef]
  32. Al-Mdallal, Q.M.; Syam, M.I.; Anwar, M. A collocation-shooting method for solving fractional boundary value problems. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 3814–3822. [Google Scholar] [CrossRef]
  33. Batool, A.; Talib, I.; Riaz, M.B. Fractional-order boundary value problems solutions using advanced numerical technique. Partial. Differ. Equations Appl. Math. 2025, 13, 101059. [Google Scholar] [CrossRef]
  34. Yüzbaşı, Ş. Numerical solution of the Bagley–Torvik equation by the Bessel collocation method. Math. Methods Appl. Sci. 2013, 36, 300–312. [Google Scholar] [CrossRef]
Figure 1. Key relationships in the SGPS method showing the polynomial transformations and their computational roles. The lower-degree SG polynomial G ^ j m λ + m ( x ^ ) serves as a scaled transformation of the derivative term G ^ j λ , m x x y 1 m α through Theorem 1, and can be numerically integrated with high precision to approximate the necessary integrals of the mth-derivatives of higher-degree SG polynomials. The approximation is then used to construct the α th-order FSGIM generator, which directly generates the α th-order FSGIM. The FSGIM is finally used to approximate the Caputo FD at the required nodes. The required Caputo FD approximation can also be obtained directly by using the generator matrix through Equation (9).
Figure 2. The logarithmic absolute errors of Caputo FD approximations of the power function f 1 , computed using the SGPS method. The fractional order is set to α = 1.5 , and the approximations are evaluated at t = 0.5 . The SG interpolant degrees range from n = 2 to 10. The figure presents errors under different conditions: (Top-left): Varying λ with fixed λ q = 0.5 and n q = 4 . (Top-right): Varying λ q with fixed λ = 0.5 and n q = 4 . (Bottom-left): Varying n q with λ = λ q = 0.5 . (Bottom-right): Comparison between the SGPS method (with n q = 12 and λ = λ q = 0.5 ) and MATLAB’s integral function. The top figures include comparisons with SC and SL interpolants and quadrature cases, where λ = 0 and λ = 0.5 correspond to the standard SC and SL cases, respectively.
Figure 3. The logarithmic absolute errors of Caputo FD approximations of f 2 at t = 0.5 for β = 0.1 , α = 1.5 , comparing Gegenbauer interpolants (degrees n = 3 –7) across five parameter values λ { 0.2 , 0.1 , 0 , 0.5 , 1 , 2 } , using a ( 15 , 0.5 ) -SGPS quadrature. The figure includes comparisons with SC and SL interpolants cases.
Figure 4. Log-lin plots of ϑ ^ m , λ for λ = 0.1 , 0 , 0.5 , 1 , 2 , and m = 2 : 10 .
Figure 5. The empirical convergence analysis of the fractional operator approximation showing the relationship between λ parameters and error model components obtained through regression. The left axis (blue circles) displays the exponential decay rate p from the error model E n = c e p n , while the right axis (red crosses) shows the corresponding coefficient c values. The dual-axis visualization demonstrates how different λ values in the approximation scheme affect both the convergence rate and magnitude of approximation errors.
Figure 6. The solution workflow for Bagley–Torvik TPBVPs using the SGPS method. The process begins with problem discretization using FSGIMs and the SGDM (if necessary), followed by collocation at SGG points to form a linear system. After solving the system, the solution is obtained at collocation points and can be interpolated to arbitrary points.
Figure 7. The exact solution to Example 1 and its approximation on Ω 1 (upper) and the absolute errors at the collocation points (lower). The approximate solution was obtained using the SGPS method with parameters n = n q = 4 and λ = λ q = 1.1 .
Figure 8. The exact solution to Example 2 and its approximation on Ω 1 (upper) and the absolute errors at the collocation points (lower). The approximate solution was obtained using the SGPS method with parameters n = n q = λ = λ q = 2 .
Figure 9. The exact solution to Example 3 and its approximation on Ω 1 (upper) and the absolute errors at the collocation points (lower). The approximate solution was obtained using the SGPS method with parameters n = n q = 3 and λ = λ q = 1 .
Figure 10. The sensitivity analysis of collocation matrix A ’s numerical stability for Example 1 using the SGPS method. The panels illustrate the following: (left) a surface plot depicting the condition number κ ( A ) as a function of the parameters λ and λ q ; (center) a contour plot showing the distribution of the condition number across the parameter space; and (right) semilogarithmic plots of the condition number κ ( A ) as a function of λ q for selected fixed values of λ . The parameters used in the analysis are α = 1.5 , n = n q = 4 , and λ , λ q [ 0.49 , 2 ] .
Figure 11. The sensitivity analysis of collocation matrix A ’s numerical stability for Example 2 using the SGPS method. The panels illustrate the following: (left) a surface plot depicting the condition number κ ( A ) as a function of the parameters λ and λ q ; (center) a contour plot showing the distribution of the condition number across the parameter space; and (right) semilogarithmic plots of the condition number κ ( A ) as a function of λ q for selected fixed values of λ . The parameters used in the analysis are α = 1.5 , n = n q = 2 , and λ , λ q [ 0.49 , 2 ] .
Figure 12. The sensitivity analysis of collocation matrix A ’s numerical stability for Example 3 using the SGPS method. The panels illustrate the following: (left) a surface plot depicting the condition number κ ( A ) as a function of the parameters λ and λ q ; (center) a contour plot showing the distribution of the condition number across the parameter space; and (right) semilogarithmic plots of the condition number κ ( A ) as a function of λ q for selected fixed values of λ . The parameters used in the analysis are α = 1.5 , n = n q = 3 , and λ , λ q [ 0.49 , 2 ] .
Table 1. Table of symbols and their meanings.
Symbol: Meaning | Symbol: Meaning | Symbol: Meaning
∀: for all | ∀a: for any | ∀aa: for almost all
∀e: for each | ∀s: for some | ∀rs: for (a) relatively small
∀rl: for (a) relatively large | ≪: much less than | ¬≪: not much less than
≫: much greater than | ∃: there exist(s) | ∼: asymptotically equivalent
asymptotically less than | asymptotically less than or equal to | not sufficiently close to
C: set of all complex-valued functions | F: set of all real-valued functions | ℂ: set of complex numbers
ℝ: set of real numbers | ℝ_0: set of non-negative real numbers | ℝ_θ: set of nonzero real numbers
ℝ_{-1/2}: {x ∈ ℝ : -1/2 < x < 0} | ℤ: set of integers | ℤ^+: set of positive integers
ℤ_0^+: set of non-negative integers | ℤ_e^+: set of positive even integers | i:j:k: list of numbers from i to k with increment j
i:k: list of numbers from i to k with increment 1 | y_{1:n} or {y_i}_{i=1:n}: list of symbols y_1, y_2, …, y_n | {y_{1:n}}: set of symbols y_1, y_2, …, y_n
J_n: {0:n-1} | J_n^+: J_n ∪ {n} | N_n: {1:n}
N_{m,n}: {m:n} | G_n^λ: set of GG zeros of the (n+1)st-degree Gegenbauer polynomial with index λ > -1/2 | Ĝ_n^λ: set of SGG points in the interval [0,1]
Ω_{a,b}: closed interval [a,b] | Ω°: interior of the set Ω | Ω_T: specific interval [0,T]
Ω_{L×T}: Cartesian product Ω_L × Ω_T | Γ(·): Gamma function | Γ(·,·): upper incomplete gamma function
⌈·⌉: ceiling function | I_{jk}: indicator (characteristic) function: 1 if j k, 0 otherwise | E_{α,β}(z): two-parameter Mittag–Leffler function
(·)_n: Pochhammer symbol | supp(f): support of function f | f*: complex conjugate of f
f_n: f(t_n) | f_{N,n}: f_N(t_n) | I_b^{(t)} h: ∫_0^b h(t) dt
I_{a,b}^{(t)} h: ∫_a^b h(t) dt | I_t^{(t)} h: ∫_0^t h(·) d(·) | I_b^{(t)} h[u(t)]: ∫_0^b h(u(t)) dt
I_{a,b}^{(t)} h[u(t)]: ∫_a^b h(u(t)) dt | I_{Ω_{a,b}}^{(x)} h: ∫_a^b h(x) dx | ∂_x: d/dx
∂_x^n: d^n/dx^n | ^cD_x^α f: αth-order Caputo FD of f at x, given by ^cD_x^α f = (1/Γ(⌈α⌉-α)) ∫_0^x f^{(⌈α⌉)}(t)/(x-t)^{α-⌈α⌉+1} dt | Def Ω: space of all functions defined on Ω
C^k(Ω): space of k times continuously differentiable functions on Ω | L^p(Ω): Banach space of measurable functions u ∈ Def Ω with ‖u‖_{L^p} = (I_Ω |u|^p)^{1/p} < ∞ | L^∞(Ω): space of all essentially bounded measurable functions on Ω
‖f‖_{L^∞(Ω)}: L^∞ norm: sup_{x∈Ω} |f(x)| = inf{M ≥ 0 : |f(x)| ≤ M ∀aa x ∈ Ω} | ‖·‖_1: l^1-norm | ‖·‖_2: Euclidean norm
H^{k,p}(Ω): Sobolev space of weakly differentiable functions with integrable weak derivatives up to order k | t_N: [t_{N,0}, t_{N,1}, …, t_{N,N}] | g_{0:N}: [g_0, g_1, …, g_N]
g^{(0:N)}: [g, g′, …, g^{(N)}] | c^{0:N}: [1, c, c^2, …, c^N] | t_N or [t_{N,0:N}]: [t_{N,0}, t_{N,1}, …, t_{N,N}]
h(y): vector with i-th element h(y_i) | h(y) or h_{1:m}[y]: [h_1(y), …, h_m(y)] | y^÷: vector of reciprocals of the elements of y
O_n: zero matrix of size n | 1_n: all-ones matrix of size n | I_n: identity matrix of size n
C_{n,m}: matrix C of size n × m | C_n: n-th row of matrix C | 1_n: n-dimensional all-ones column vector
0_n: n-dimensional all-zeros column vector | A^⊤ or trp A: transpose of matrix A | diag(v): diagonal matrix with v on the diagonal
resh_{m,n} A: reshape A into an m × n matrix | resh_n A: reshape A into a square matrix of size n | κ(A): condition number of A
⊗: Kronecker product | ⊙: Hadamard product | A^{(r)}: r-times Hadamard product of A
A^m: each entry in A raised to the power m | f(n) = O(g(n)): ∃ n_0, c > 0 : 0 ≤ f(n) ≤ c g(n) ∀ n ≥ n_0 | f(n) = o(g(n)): lim_{n→∞} f(n)/g(n) = 0
Remark: A vector is represented in print by a bold italicized symbol, while a two-dimensional matrix is represented by a bold symbol, except for a row vector whose elements form a certain row of a matrix, which is represented by a bold symbol.
Table 2. Computational costs and typical parameters for the SGPS method.
Aspect | Cost/Parameter | Typical Values
Runtime | Construction of FSGIM: O(M n n_q) | Small: n, n_q = 2, 3, 4; M = n
Runtime | Application of FSGIM: O(M n) | Large: n, n_q ≥ 10; M = n
Storage | FSGIM: O(M n) | Same as above
Parameter ranges | Small n, n_q: λ, λ_q ∈ [-1/2 + ε, 2] | Suggested: λ ≈ -0.49, λ_q = 0.5
Parameter ranges | Large n, n_q: λ ∈ Ω_2, λ_q ∈ [-1/2 + ε, 2] | Suggested: λ = λ_q = 0
