Article

Super-Exponential Approximation of the Riemann–Liouville Fractional Integral via Gegenbauer-Based Fractional Approximation Methods

by Kareem T. Elgindy 1,2
1 Department of Mathematics and Sciences, College of Humanities and Sciences, Ajman University, Ajman P.O. Box 346, United Arab Emirates
2 Nonlinear Dynamics Research Center (NDRC), Ajman University, Ajman P.O. Box 346, United Arab Emirates
Algorithms 2025, 18(7), 395; https://doi.org/10.3390/a18070395
Submission received: 24 May 2025 / Revised: 17 June 2025 / Accepted: 25 June 2025 / Published: 27 June 2025

Abstract

This paper introduces a Gegenbauer-based fractional approximation (GBFA) method for high-precision approximation of the left Riemann–Liouville fractional integral (RLFI). By using precomputable fractional-order shifted Gegenbauer integration matrices (FSGIMs), the method achieves super-exponential convergence for smooth functions, delivering near machine-precision accuracy with minimal computational cost. Tunable shifted Gegenbauer (SG) parameters enable flexible optimization across diverse problems, while rigorous error analysis confirms rapid error decay under optimal settings. Numerical experiments demonstrate that the GBFA method outperforms MATLAB's integral, MATHEMATICA's NIntegrate, and existing techniques by up to two orders of magnitude in accuracy, with superior efficiency for varying fractional orders 0 < α < 1. Its adaptability and precision make the GBFA method a transformative tool for fractional calculus, ideal for modeling complex systems with memory and non-local behavior.

1. Introduction

Fractional calculus offers a powerful framework for modeling intricate systems characterized by memory and non-local interactions, finding applications in diverse fields such as viscoelasticity [1], anomalous diffusion [2], and control theory [3], among others. A fundamental concept in this domain is the left RLFI, defined for α ∈ Ω₁ and f ∈ L²(Ω₁) as detailed in Table 1. In contrast to classical calculus, which operates under the assumption of local dependencies, fractional integrals, exemplified by the left RLFI, inherently account for cumulative effects over time through a singular kernel. This characteristic renders them particularly well-suited for modeling phenomena where past states significantly influence future behavior. The RLFI proves especially valuable in the description of complex dynamics exhibiting self-similarity, scale-invariance, or memory effects. For instance, in anomalous diffusion, the RLFI captures self-similar patterns and scale-invariant power-law behaviors, enabling the precise modeling of sub-diffusive processes [4]. In viscoelasticity, it accounts for memory effects by modeling stress relaxation with past-dependent dynamics [5] and reveals temporal symmetries, thus improving predictive accuracy [1]. These properties make the RLFI a critical tool for analyzing complex systems and improving their forecasting capabilities [3,6,7]. Nevertheless, the singular nature of the RLFI's kernel, $(t-\tau)^{\alpha-1}$, presents substantial computational hurdles, as conventional numerical methods frequently encounter limitations in accuracy or incur high computational costs. Consequently, the development of efficient and high-precision approximation techniques for the RLFI is of paramount importance for advancing computational modeling across physics, engineering, and biology, where fractional calculus is increasingly employed to tackle real-world problems involving non-local behavior or fractal structures.
Existing approaches to RLFI approximation include wavelet-based methods [7,8,9,10,11,12], polynomial and orthogonal function techniques [13,14,15,16,17], finite difference and quadrature schemes [18,19,20,21], operational matrix methods [22,23,24,25], local meshless techniques [26], alternating direction implicit schemes [27], and radial basis functions [28]. While these methods have shown promise in specific contexts, they often struggle with trade-offs between accuracy, computational cost, and flexibility, particularly when adapting to diverse problem characteristics or varying fractional orders.
This study introduces the GBFA method, which overcomes these challenges through three key innovations: (i) parameter adaptability, utilizing the tunable SG parameters λ (for interpolation) and λ_q (for quadrature approximation) to optimize performance across a wide range of problems; (ii) super-exponential convergence, achieving rapid error decay for smooth functions, often reaching near machine precision with modest node counts; and (iii) computational efficiency, enabled by precomputable FSGIMs that minimize runtime costs. The proposed GBFA method addresses these challenges by using the orthogonality and flexibility of SG polynomials to achieve super-exponential convergence. This approach offers near machine-precision accuracy with minimal computational effort, particularly for systems necessitating repeated fractional integrations or displaying symmetric patterns. Numerical experiments demonstrate that the GBFA method significantly outperforms established tools like MATLAB's integral function and MATHEMATICA's NIntegrate, achieving up to two orders of magnitude higher accuracy in certain cases. It also surpasses prior methods, such as the trapezoidal approach of Dimitrov [29], the spline-based techniques of Ciesielski and Grodzki [30], and the neural network method [31], in both precision and efficiency. Sensitivity analysis reveals that setting λ_q < λ often accelerates convergence, while rigorous error bounds confirm super-exponential decay under optimal parameter choices. The FSGIM's invariance for fixed points and parameters enables precomputation, making the GBFA method ideal for problems requiring repeated fractional integrations with varying α. While the method itself exploits the mathematical properties of orthogonal polynomials, its application can be crucial in analyzing systems where symmetry or repeating patterns are fundamental, as fractional calculus is used to model phenomena with memory where past states influence future behavior, and identifying symmetries in such systems can simplify analysis and prediction.
This paper is organized as follows: Section 2 presents the GBFA framework. Section 3 analyzes computational complexity. Section 4 provides a detailed error analysis. Section 4.1 provides actionable guidelines for selecting the tunable parameters λ and λ q , balancing accuracy and computational efficiency. Section 5 evaluates numerical performance. Section 6 presents the conclusions of this study with future works of potential methodological extensions to non-smooth functions encountered in fractional calculus. Appendix A lists all acronyms used in the paper. Appendix B includes supporting mathematical proofs. Finally, Appendix C provides a detailed dimensional analysis of the matrix expressions in Equations (15) and (16), which are central to deriving our proposed RLFI approximation method.

2. Numerical Approximation of RLFI

This section presents the GBFA method adapted for approximating the RLFI. We begin with a brief review of the classical Gegenbauer polynomials, as they form the foundation for the GBFA method developed in this study.
The classical Gegenbauer polynomials $G_n^{\lambda}(x)$, defined for $x \in \Omega_{-1,1}$ and $\lambda > -1/2$, are orthogonal with respect to the weight function $w_\lambda(x) = (1-x^2)^{\lambda-1/2}$. They satisfy the three-term recurrence relation
$$(n+2\lambda)\, G_{n+1}^{\lambda}(x) = 2(n+\lambda)\, x\, G_n^{\lambda}(x) - n\, G_{n-1}^{\lambda}(x),$$
with $G_0^{\lambda}(x) = 1$ and $G_1^{\lambda}(x) = x$. Their orthogonality is given by
$$I_{-1,1}^{(t)}\!\left[w_\lambda\, G_n^{\lambda}\, G_m^{\lambda}\right] = h_n^{\lambda}\, \delta_{m,n},$$
where
$$h_n^{\lambda} = \frac{2^{1-2\lambda}\, \pi\, \Gamma(n+2\lambda)}{n!\, (n+\lambda)\, \Gamma^2(\lambda)}.$$
The SG polynomials $\hat G_n^{(\lambda)}(t)$, used in this study, are obtained via the transformation $x = 2t-1$ for $t \in \Omega_1$, inheriting analogous properties adjusted for the shifted domain. In particular, they satisfy the three-term recurrence relation
$$(n+2\lambda)\, \hat G_{n+1}^{(\lambda)}(t) = 2(n+\lambda)(2t-1)\, \hat G_n^{(\lambda)}(t) - n\, \hat G_{n-1}^{(\lambda)}(t),$$
with initial conditions $\hat G_0^{(\lambda)}(t) = 1$ and $\hat G_1^{(\lambda)}(t) = 2t-1$. They are directly related to the Jacobi polynomials by the following identity:
$$\hat G_n^{(\lambda)}(t) = \frac{n!\, \Gamma\!\left(\lambda+\frac12\right)}{\Gamma\!\left(n+\lambda+\frac12\right)}\, J_n^{\left(\lambda-\frac12,\,\lambda-\frac12\right)}(2t-1), \quad t \in \Omega_1,\ n \in \mathbb{Z}_0^+,$$
where $J_n^{(\gamma_1,\gamma_2)}(t)$ is the nth-degree Jacobi polynomial with parameters $\gamma_{1:2} > -1$ [32] (Equation (A.1)). One may also directly evaluate the SG polynomial of any degree n using the identity
$$\hat G_n^{(\lambda)}(t) = \sum_{m=0}^{\lfloor n/2\rfloor} \frac{(-1)^m\, n!\, \Gamma(2\lambda)\, \Gamma(n-m+\lambda)}{m!\, (n-2m)!\, \Gamma(\lambda)\, \Gamma(n+2\lambda)}\, (4t-2)^{n-2m};$$
cf. [33] (Equation (D.2)). For a comprehensive treatment of their properties and quadrature rules, see [34,35,36,37].
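For concreteness, the following Python sketch evaluates the SG polynomials through the three-term recurrence above. It is an illustrative implementation written for this exposition (the function name sg_polys is not from the paper), assuming λ > −1/2 and arguments in Ω₁.

```python
import numpy as np

def sg_polys(n, lam, t):
    """Evaluate the shifted Gegenbauer polynomials G_0^lam..G_n^lam at the
    points t (array-like, in [0, 1]) via the three-term recurrence
    (k + 2 lam) G_{k+1} = 2 (k + lam)(2 t - 1) G_k - k G_{k-1}.
    Returns an (n + 1, len(t)) array."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    G = np.empty((n + 1, t.size))
    G[0] = 1.0                    # G_0(t) = 1
    if n >= 1:
        G[1] = 2.0 * t - 1.0      # G_1(t) = 2t - 1
    for k in range(1, n):
        G[k + 1] = (2.0 * (k + lam) * (2.0 * t - 1.0) * G[k]
                    - k * G[k - 1]) / (k + 2.0 * lam)
    return G

# lam = 0 recovers the shifted Chebyshev polynomials and lam = 0.5 the
# shifted Legendre polynomials (both normalized to equal 1 at t = 1).
print(sg_polys(5, 0.5, [0.0, 0.5, 1.0]))
```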
Let α ∈ Ω₁, f ∈ L²(Ω₁), and let $\hat{\mathbb{G}}_n^{\lambda} = \{\hat t_{n,0:n}^{\lambda}\}$ denote the set of SGG nodes. The GBFA interpolant of f is given by
$$I_n f(t) = \boldsymbol f_{0:n}^{\top}\, L_{0:n}^{\lambda}[t],$$
where $L_k^{\lambda}(t)$ is defined as
$$L_k^{\lambda}(t) = \hat\varpi_k^{\lambda}\, \operatorname{trp}\!\left[\hat{\lambdabar}_{0:n}^{\lambda\div}\right]\left(\hat G_{0:n}^{\lambda}\!\left[\hat t_{n,k}^{\lambda}\right] \odot \hat G_{0:n}^{\lambda}[t]\right), \quad k \in J_n^+,$$
with normalization factors and Christoffel numbers:
$$\hat{\lambdabar}_j^{\lambda} = \frac{\pi\, 2^{1-4\lambda}\, \Gamma(j+2\lambda)}{j!\, \Gamma^2(\lambda)\, (j+\lambda)},$$
$$\hat\varpi_k^{\lambda} = 1\Big/\operatorname{trp}\!\left[\hat{\lambdabar}_{0:n}^{\lambda\div}\right]\left(\hat G_{0:n}^{\lambda}\!\left[\hat t_{n,k}^{\lambda}\right]\right)^{\circ 2},$$
for j, k ∈ J_n^+. In matrix form, we can write Equation (8) as
$$L_{0:n}^{\lambda}[t] = \operatorname{diag}\!\left(\hat{\boldsymbol\varpi}_{0:n}^{\lambda}\right)\left(\hat G_{0:n}^{\lambda}[t\,\mathbf 1_{n+1}] \odot \hat G_{0:n}^{\lambda}\!\left[\hat{\boldsymbol t}_n^{\lambda}\right]\right)^{\!\top}\hat{\lambdabar}_{0:n}^{\lambda\div}.$$
This allows us to approximate the RLFI as follows:
$$I_t^{\alpha} f \approx I_t^{\alpha} I_n f = \boldsymbol f_{0:n}^{\top}\, I_t^{\alpha} L_{0:n}^{\lambda}.$$
Using the transformation
$$\tau = t\left(1-y^{1/\alpha}\right),$$
Formula (12) becomes
$$I_t^{\alpha} f \approx \frac{t^{\alpha}}{\Gamma(\alpha+1)}\, \boldsymbol f_{0:n}^{\top}\, I_1^{(y)} L_{0:n}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\right].$$
Notice here that the transformed integrand $L_{0:n}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\right]$ is non-singular in y. However, the singularity at τ = t maps to y = 0, and the behavior near y = 0 reflects the original singularity via $y^{1/\alpha}$. The singularity is regularized in the new integrand, facilitating numerical computation while maintaining the RLFI's mathematical structure. Substituting Equation (11) into Equation (14):
$$I_t^{\alpha} f \approx \frac{t^{\alpha}}{\Gamma(\alpha+1)}\operatorname{trp}\!\left[\hat{\lambdabar}_{0:n}^{\lambda\div}\right]\left(I_1^{(y)}\hat G_{0:n}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\mathbf 1_{n+1}\right] \odot \hat G_{0:n}^{\lambda}\!\left[\hat{\boldsymbol t}_n^{\lambda}\right]\right)\operatorname{diag}\!\left(\hat{\boldsymbol\varpi}_{0:n}^{\lambda}\right)\boldsymbol f_{0:n}.$$
For multiple points $\boldsymbol z_{0:M} \subset \Omega_1$, M ∈ ℤ₀⁺, we extend Equation (11):
$$L_{0:n}^{\lambda}[\boldsymbol z_M] = \operatorname{resh}_{n+1,M+1}\!\left\{\operatorname{trp}\!\left[\hat{\lambdabar}_{0:n}^{\lambda\div}\right]\left(\hat G_{0:n}^{\lambda}[\boldsymbol z_M \otimes \mathbf 1_{n+1}] \odot \hat G_{0:n}^{\lambda}\!\left[\mathbf 1_{M+1} \otimes \hat{\boldsymbol t}_n^{\lambda}\right]\right)\left(I_{M+1} \otimes \operatorname{diag}\!\left(\hat{\boldsymbol\varpi}_{0:n}^{\lambda}\right)\right)\right\}.$$
Thus,
$$I_{\boldsymbol z_M}^{\alpha} f \approx \frac{1}{\Gamma(\alpha+1)}\, \boldsymbol z_M^{\alpha} \odot \hat{\boldsymbol Q}_n^{\alpha,E}\, \boldsymbol f_{0:n},$$
where $\boldsymbol z_M^{\alpha} = [z_0^{\alpha}, z_1^{\alpha}, \ldots, z_M^{\alpha}]^{\top}$ is an (M+1) × 1 column vector, with each element $z_i$ raised to the power α, and
$$\hat{\boldsymbol Q}_n^{\alpha,E} = \operatorname{resh}_{n+1,M+1}\!\left\{\operatorname{trp}\!\left[\hat{\lambdabar}_{0:n}^{\lambda\div}\right]\left(I_1^{(y)}\hat G_{0:n}^{\lambda}\!\left[\boldsymbol z_M\!\left(1-y^{1/\alpha}\right) \otimes \mathbf 1_{n+1}\right] \odot \hat G_{0:n}^{\lambda}\!\left[\mathbf 1_{M+1} \otimes \hat{\boldsymbol t}_n^{\lambda}\right]\right)\left(I_{M+1} \otimes \operatorname{diag}\!\left(\hat{\boldsymbol\varpi}_{0:n}^{\lambda}\right)\right)\right\}.$$
Alternatively,
$$I_{\boldsymbol z_M}^{\alpha} f \approx \boldsymbol Q_n^{\alpha,E}\, \boldsymbol f_{0:n},$$
where
$$\boldsymbol Q_n^{\alpha,E} = \frac{1}{\Gamma(\alpha+1)}\operatorname{diag}\!\left(\boldsymbol z_M^{\alpha}\right)\hat{\boldsymbol Q}_n^{\alpha,E}.$$
We term $\boldsymbol Q_n^{\alpha,E}$ the "αth-order FSGIM" for the RLFI and $\hat{\boldsymbol Q}_n^{\alpha,E}$ the "αth-order FSGIM generator." Equation (17) is preferred for computational efficiency. For a detailed dimensional analysis of the matrix expressions in Equation (15), including the explicit form of $I_1^{(y)}\hat G_{0:n}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\mathbf 1_{n+1}\right]$ and clarification of the dimensions and reshaping operation in Equation (16), see Appendix C.
To compute $I_1^{(y)}\hat G_j^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\right]$, we can use the SGIRV $\hat{\boldsymbol P} = \frac12\boldsymbol P$ with the SGG nodes $\hat{\boldsymbol t}_{n_q}^{\lambda_q}$:
$$I_1^{(y)}\hat G_j^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\right] \approx \hat{\boldsymbol P}\, \hat G_j^{\lambda}\!\left[t\left(1-\left(\hat{\boldsymbol t}_{n_q}^{\lambda_q}\right)^{1/\alpha}\right)\right], \quad j \in J_n^+,\ t \in \Omega_1;$$
cf. [35] (Algorithm 6 or 7). Formula (21) represents the $(n_q, \lambda_q)$-GBFA quadrature used for the numerical partial calculation of the RLFI. We denote the approximate αth-order RLFI of a function at a point t, computed using Equation (21) in conjunction with either Equation (17) or Equation (20), by ${}_{n_q,\lambda_q}^{\,n,\lambda,E}I_t^{\alpha}$. (The parameters in the left superscripts and subscripts of ${}_{n_q,\lambda_q}^{\,n,\lambda,E}I_t^{\alpha}$ are the same parameters used by the numerical method: n and λ indicate the degree and index of the SG polynomial $\hat G_n^{\lambda}$, while n_q and λ_q specify the parameters of the polynomial basis used in the quadrature scheme. E is a letter distinguishing this discrete operator from other operators in the literature, referring to the first initial of the author's surname.)
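The effect of the substitution in Equation (13), on which the above quadrature operates, is easy to verify numerically. The short Python sketch below is an independent check written for this exposition; it uses generic SciPy quadrature in place of the FSGIM machinery to integrate the regularized form in Equation (14) for f(t) = t³ with α = 0.5 and compares against the exact RLFI of a monomial.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

alpha, t, N = 0.5, 0.5, 3

# Regularized form from Equation (14): the integrand below is smooth in y,
# whereas the original kernel (t - tau)**(alpha - 1) blows up at tau = t.
val, _ = quad(lambda y: (t * (1.0 - y**(1.0 / alpha)))**N, 0.0, 1.0)
approx = t**alpha / gamma(alpha + 1.0) * val

# Exact RLFI of t^N: N! / Gamma(N + alpha + 1) * t**(N + alpha).
exact = gamma(N + 1.0) / gamma(N + alpha + 1.0) * t**(N + alpha)
print(approx, exact)  # agree to near machine precision
```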
Figure 1 provides a visual summary of the GBFA method's workflow, illustrating the seamless integration of the interpolation, transformation, and quadrature steps. This schematic highlights the method's flexibility, as the tunable parameters λ and λ_q allow practitioners to tailor the approximation to specific problem characteristics, optimizing both accuracy and computational efficiency. The alternative path of precomputing the FSGIM, as indicated by the dashed arrow, underscores the method's suitability for applications requiring repeated evaluations.
Remark 1. 
The selection of f ∈ L²(Ω₁) for defining the RLFI is motivated by both mathematical rigor and practical relevance. The kernel $(t-\tau)^{\alpha-1}$ exhibits a singularity at τ = t, necessitating integrability of the product for the RLFI to be well-defined. The space L²(Ω₁), characterized by $\int_0^1 |f(t)|^2\, dt < \infty$, ensures sufficient regularity to guarantee convergence. Since the singularity is integrable for α > 0, the RLFI remains finite when f ∈ L²(Ω₁). Moreover, L²-integrability is consistent with many physical models in viscoelasticity, anomalous diffusion, and control theory, making it a natural function space for modeling. From another viewpoint, this choice does not conflict with the GBFA method's superior convergence for smoother functions. While L²(Ω₁) accommodates a broad class of functions, the GBFA method, based on pseudospectral techniques, achieves super-exponential convergence for analytic functions, as we demonstrate in Section 4, and algebraic rates for less regular L² functions, as is typical in pseudospectral methods. Thus, the method balances general applicability with rapid convergence under smoothness. Enhancements such as modal filtering and domain decomposition, as we discuss later in Section 6, can further improve performance for non-smooth functions, reinforcing the suitability of the L² setting.

3. Computational Complexity

This section provides a computational complexity analysis of constructing the α th-order FSGIM Q n α E and its generator Q ^ n α E . The analysis is based on the key matrix operations involved in the construction process, which we analyze individually in the following:
  • The term $\boldsymbol z_M^{\alpha}$ involves raising each element of an (M+1)-dimensional vector to the power α. This operation requires O(M) operations.
  • Constructing $\boldsymbol Q_n^{\alpha,E}$ from $\hat{\boldsymbol Q}_n^{\alpha,E}$ involves a diagonal scaling by $\operatorname{diag}(\boldsymbol z_M^{\alpha})$, which requires another O(Mn) operations.
  • The matrix $\hat{\boldsymbol Q}_n^{\alpha,E}$ is constructed using several matrix multiplications and element-wise operations. For each entry of $\boldsymbol z_M$, the dominant steps include the following:
    - The computation of $\hat G_{0:n}^{\lambda}$ using the three-term recurrence relation requires O(n) operations per point. Since the polynomial evaluation is required for polynomials up to degree n, this requires O(n²) operations.
    - The quadrature approximation involves evaluating a polynomial at the transformed nodes. The cost of calculating $\hat{\boldsymbol P}$ depends on the chosen methods for computing factorials and the Gamma function, which can be considered a constant overhead. The computation of $(\hat{\boldsymbol t}_{n_q}^{\lambda_q})^{1/\alpha}$ involves raising each element of the column vector $\hat{\boldsymbol t}_{n_q}^{\lambda_q}$ to the power 1/α, which is linear in n_q + 1. The cost of the matrix–vector multiplication is also linear in n_q + 1. Therefore, the computational cost of this step is O(n_q) for each j ∈ J_n^+. The overall cost, considering all polynomial functions involved, is O(n n_q).
    - The Hadamard product introduces another O(n²) operations.
    - The vector $\hat{\lambdabar}_{0:n}^{\lambda\div} = [1/\hat{\lambdabar}_0^{\lambda}, 1/\hat{\lambdabar}_1^{\lambda}, \ldots, 1/\hat{\lambdabar}_n^{\lambda}]^{\top}$ is the element-wise reciprocal of $\hat{\lambdabar}_{0:n}^{\lambda} = [\hat{\lambdabar}_0^{\lambda}, \hat{\lambdabar}_1^{\lambda}, \ldots, \hat{\lambdabar}_n^{\lambda}]^{\top}$, an (n+1) × 1 column vector of coefficients. To evaluate $\hat{\lambdabar}_{0:n}^{\lambda\div}$, we compute the reciprocal $1/\hat{\lambdabar}_j^{\lambda}$ for each element, j ∈ J_n^+. This requires one floating-point division per element, totaling n + 1 divisions. Thus, the evaluation of $\hat{\lambdabar}_{0:n}^{\lambda\div}$ requires O(n) operations, as the operation scales linearly with the vector length.
    - Multiplying $\operatorname{trp}[\hat{\lambdabar}_{0:n}^{\lambda\div}]$ by the result of the Hadamard product requires O(n²) operations.
    - The final diagonal scaling by $\operatorname{diag}(\hat{\boldsymbol\varpi}_{0:n}^{\lambda})$ contributes O(n).
Summing the dominant terms, the overall computational complexity of constructing Q ^ n α E is O ( n ( n + n q ) ) per entry of z M . Therefore, the total number of operations required to construct the matrix Q n α E for all entries of z M is O ( M n ( n + n q ) ) .
Once the FSGIM is precomputed, applying it to compute the RLFI of a function requires only a matrix–vector multiplication with complexity O ( M n ) . The FSGIM’s invariance for fixed points and parameters enables precomputation, making the GBFA method ideal for problems requiring repeated fractional integrations with varying α . This indicates that the method is particularly efficient when (i) multiple integrations are needed with the same parameters, (ii) the same set of evaluation points is used repeatedly, and (iii) different functions need to be integrated with the same fractional order. The precomputation approach becomes increasingly advantageous as the number of repeated evaluations increases, since the one-time O ( M n ( n + n q ) ) cost is amortized across multiple O ( M n ) applications.
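The amortization argument can be made concrete with a toy example. The sketch below builds, once, a matrix mapping samples of f at fixed nodes to RLFI values at fixed output points, and then reuses it for several integrands. It deliberately uses a monomial basis as a hypothetical stand-in for the FSGIM (which instead uses the far better-conditioned SG basis), so only the precompute-then-multiply pattern, not the conditioning, reflects the actual method.

```python
import numpy as np
from math import gamma

alpha = 0.5
nodes = np.linspace(0.0, 1.0, 8)          # interpolation nodes (n + 1 = 8)
z = np.linspace(0.1, 1.0, 5)              # evaluation points (M + 1 = 5)

V = np.vander(nodes, increasing=True)     # maps samples -> monomial coeffs
# Exact RLFI of t^k at the points z: k! / Gamma(k + alpha + 1) * z**(k + alpha).
B = np.column_stack([gamma(k + 1) / gamma(k + alpha + 1) * z**(k + alpha)
                     for k in range(len(nodes))])
Q = B @ np.linalg.inv(V)                  # one-time precomputation

for f in (lambda t: t**3, lambda t: 2 * t**3 + 8 * t):
    print(Q @ f(nodes))                   # each reuse is a cheap O(M n) matvec
```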

4. Error Analysis

The following theorem establishes the truncation error of the α th-order GBFA quadrature associated with the α th-order FSGIM Q n α E in closed form.
Theorem 1. 
Suppose that f ∈ C^{n+1}(Ω₁) is approximated by the GBFA interpolant (7). Assume also that the integrals
$$I_1^{(y)}\hat G_{0:n}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\right]$$
are computed exactly for all t ∈ Ω₁. Then there exists ξ = ξ(t) ∈ Ω₁ such that the truncation error $T_{n,\lambda}^{\alpha}(t,\xi)$ in the RLFI approximation (15) is given by
$$T_{n,\lambda}^{\alpha}(t,\xi) = \frac{t^{\alpha}\, f^{(n+1)}(\xi)}{(n+1)!\, \Gamma(\alpha+1)\, \hat K_{n+1}^{\lambda}}\, I_1^{(y)}\hat G_{n+1}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\right],$$
where $\hat K_n^{\lambda}$ is the leading coefficient of the nth-degree, λ-indexed SG polynomial, defined as
$$\hat K_n^{\lambda} = \frac{2^{2n-1}\, \Gamma(n+\lambda)\, \Gamma(2\lambda+1)}{\Gamma(n+2\lambda)\, \Gamma(\lambda+1)}, \quad n \in \mathbb{Z}_0^+.$$
Proof. 
The Lagrange interpolation error associated with the GBFA interpolation (7) is given by
$$f(t) = I_n f(t) + \frac{f^{(n+1)}(\xi)}{(n+1)!\, \hat K_{n+1}^{\lambda}}\, \hat G_{n+1}^{\lambda}(t),$$
where $\hat K_n^{\lambda}$ is given by Equation (24), which can be directly derived from [34] (Equation (4.12)) by setting L = 1. Applying the RLFI operator $I_t^{\alpha}$ to both sides of Equation (25) results in the truncation error
$$T_{n,\lambda}^{\alpha}(t,\xi) = \frac{f^{(n+1)}(\xi)}{(n+1)!\, \hat K_{n+1}^{\lambda}}\, I_t^{\alpha}\hat G_{n+1}^{\lambda}.$$
The proof is accomplished from Equation (26) by applying the change of variables (13) to the RLFI of $\hat G_{n+1}^{\lambda}$.    □
The following theorem provides an upper bound for the truncation error (23).
Theorem 2. 
Let $\|f^{(n)}\|_{L^\infty(\Omega_1)} = A_n$, and suppose that the assumptions of Theorem 1 hold. Then the truncation error $T_{n,\lambda}^{\alpha}(t,\xi)$ satisfies the asymptotic bound
$$\left|T_{n,\lambda}^{\alpha}(t,\xi)\right| \lessapprox A_{n+1}\, \vartheta_{\alpha,\lambda}\left(\frac{e}{4}\right)^{\!n} n^{\lambda-n-\frac32}\, \Upsilon_{\sigma_\lambda}(n) \quad \text{for large } n,$$
where
$$\Upsilon_{\sigma_\lambda}(n) = \begin{cases} 1, & \lambda \in \mathbb{R}_0^+, \\ \sigma_\lambda\, n^{-\lambda}, & \lambda \in \mathbb{R}_{-1/2}, \end{cases}$$
$\sigma_\lambda > 1$ is a constant dependent on λ, and
$$\vartheta_{\alpha,\lambda} = \frac14\sqrt{\frac{\pi}{e}}\; e^{\alpha+1}\, \alpha^{\frac12-2\alpha}\, (1+2\lambda)^{-\frac12-2\lambda}\left(1+\frac{1}{1620(1+\lambda)^5}\right)\left[\frac{1+\lambda}{\sqrt{1+2\lambda}}\operatorname{csch}\!\left(\frac{1}{\sqrt{1+2\lambda}}\right)\right]^{\frac12+\lambda}\left[\alpha\sinh\!\left(\frac{1}{\alpha}\right)\right]^{\frac{\alpha}{2}}\left[(1+\lambda)\sinh\!\left(\frac{1}{1+\lambda}\right)\right]^{\frac{1+\lambda}{2}}.$$
Proof. 
Observe first that $t^{\alpha} \le 1$ for all t ∈ Ω₁. Notice also that
$$\frac{1}{(n+1)!\, \Gamma(\alpha+1)\, \hat K_{n+1}^{\lambda}} = \frac{2^{-2n-1}\, (2\lambda+1)\, \Gamma(\lambda+2)\, \Gamma(n+2\lambda+1)}{(\lambda+1)\, \Gamma(\alpha+1)\, \Gamma(2\lambda+2)\, \Gamma(n+2)\, \Gamma(n+\lambda+1)},$$
by definition. The asymptotic inequality (27) results immediately from (23) after applying the sharp inequalities of the Gamma function [36] (Inequality (96)) to Equation (29) and using [37] (Lemma 5.1), which gives the uniform norm of Gegenbauer polynomials and their associated shifted forms.    □
Theorem 2 shows that the error bound is influenced by the smoothness of the function f (through its derivatives) and by the specific values of α and λ, with a super-exponential decay rate as n → ∞, which guarantees that the error becomes negligible even for moderate values of n. Examining the bounding constant ϑ_{α,λ}, we notice that, while holding λ fixed, the factor $\alpha^{\frac12-2\alpha}$ decays exponentially as α increases because $\alpha^{-2\alpha}$ dominates. For large α, sinh(1/α) ≈ 1/α, so the factor $[\alpha\sinh(1/\alpha)]^{\alpha/2}$ behaves like 1; thus, it does not significantly affect the behavior as α increases. The factor $e^{\alpha+1}$ grows exponentially as α increases. Combining these observations, the dominant behavior as α increases is determined by the exponential growth of $e^{\alpha+1}$ and the exponential decay of $\alpha^{\frac12-2\alpha}$. The exponential decay dominates, so ϑ_{α,λ} decays as α increases, leading to a tighter error bound and an improved convergence rate. On the other hand, considering large λ values while holding α fixed, the factor $(1+2\lambda)^{-\frac12-2\lambda}$ decays as λ increases. The factor $1 + \frac{1}{1620(1+\lambda)^5}$ approaches 1 as λ increases. For large λ, $\operatorname{csch}\big(1/\sqrt{1+2\lambda}\big) \approx \sqrt{1+2\lambda}$, so the csch-bearing factor behaves like $(1+\lambda)^{\frac12+\lambda}$, which grows as λ increases. Finally, the factor $\big[(1+\lambda)\sinh\big(\frac{1}{1+\lambda}\big)\big]^{\frac{1+\lambda}{2}}$ behaves like 1 as λ increases; thus, it does not significantly affect the behavior either. Combining these observations, the dominant behavior as λ increases is determined by the growth of $(1+\lambda)^{\frac12+\lambda}$ and the decay of $(1+2\lambda)^{-\frac12-2\lambda}$. The decay of $(1+2\lambda)^{-\frac12-2\lambda}$ dominates, so ϑ_{α,λ} also decays as λ increases. It is noteworthy that ϑ_{α,λ} remains finite as λ → −1/2⁺: the behavior of $T_1(\lambda) = (1+2\lambda)^{-\frac12-2\lambda}$ near this endpoint is offset by the behavior of $T_2(\lambda)$:
$$T_2(\lambda) = \left[\frac{1+\lambda}{\sqrt{1+2\lambda}}\operatorname{csch}\!\left(\frac{1}{\sqrt{1+2\lambda}}\right)\right]^{\frac12+\lambda}.$$
Specifically, for λ → −0.5⁺, using the approximation csch(x) ≈ 2e^{−x} for large x:
$$T_2(\lambda) \approx \left[\frac{2(1+\lambda)}{\sqrt{1+2\lambda}}\, e^{-\frac{1}{\sqrt{1+2\lambda}}}\right]^{\frac12+\lambda} \to 0.$$
Consequently,
$$\lim_{\lambda\to-0.5^+} T_1(\lambda)\, T_2(\lambda) = 0.$$
Hence, ϑ_{α,λ} → 0 as λ → −1/2⁺, driven by the product T₁(λ)T₂(λ) → 0. For λ ∈ ℝ_{−1/2}, we notice that T₁ is strictly concave with a maximum value at
$$\lambda^* = \frac{-e + e^{W(e/2)}}{2e} \approx -0.1351,$$
rounded to four significant digits; cf. Theorem A1. Figure 2 further shows the plots of T₂(λ) and
$$T_3(\lambda) = \left[(1+\lambda)\sinh\!\left(\frac{1}{1+\lambda}\right)\right]^{\frac{1+\lambda}{2}},$$
where T₂ grows at a slow, quasi-linear rate of change, while T₃ decays at a comparable rate; thus, their product remains positive with a small bounded variation. This shows that, while holding α fixed, the bounding constant ϑ_{α,λ} displays a unimodal behavior: it rises from 0 as λ increases from −1/2⁺, attains a maximum at λ* ≈ −0.1351, and subsequently decays monotonically for λ > λ*. Figure 3 highlights how ϑ_{α,λ} decays with increasing α while exhibiting varying sensitivity to λ, with the λ = λ* case (red curve) representing the parameter value that maximizes ϑ_{α,λ} for a fixed α.
It is important to note that the bounding constant ϑ_{α,λ} modulates the error bound without altering its super-exponential decay as n → ∞. Near λ = −1/2, ϑ_{α,λ} ≈ 0, which shrinks the truncation error bound, improving accuracy despite potential sensitivity in the SG polynomials. At λ = λ*, the maximized ϑ_{α,λ} widens the bound, though it still vanishes as n → ∞. Beyond λ = λ*, the overall error bound decreases monotonically, despite a slower n-dependent decay for larger positive λ. The super-exponential factor $(e/4)^n n^{-n}$ ensures rapid convergence in all cases. The decreasing ϑ_{α,λ} after λ*, combined with the fact that the truncation error bound (excluding ϑ_{α,λ}) is smaller for −1/2 < λ ≤ 0 than for λ > 0, further suggests that λ = 0 appears "optimal" in practice for minimizing the truncation error bound for large n in a stable numerical scheme, given the SG polynomial instability near λ = −1/2. However, for relatively small or moderate values of n, other choices of λ may be optimal, as the derived error bounds are asymptotic and apply only as n → ∞.
In what follows, we analyze the truncation error of the quadrature Formula (21), and demonstrate how its results complement the preceding analysis.
Theorem 3. 
Let j ∈ J_n^+, t ∈ Ω₁, and assume that $\hat G_j^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\right]$ is interpolated by the SG polynomials with respect to the variable y at the SGG nodes $\hat t_{n_q,0:n_q}^{\lambda_q}$. Then there exists η = η(y) ∈ Ω₁ such that the truncation error $T_{j,n_q}^{\lambda_q}(\eta)$ in the quadrature approximation (21) is given by
$$T_{j,n_q}^{\lambda_q}(\eta) = \frac{(-1)^{n_q+1}\, \hat\chi_{j,n_q+1}^{\lambda}}{(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}}\left(\frac{t}{\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-\alpha)}{\alpha}}\, \hat G_{j-n_q-1}^{\lambda+n_q+1}\!\left(t\left(1-\eta^{1/\alpha}\right)\right) I_1^{(y)}\hat G_{n_q+1}^{\lambda_q} \cdot \mathbb{I}_{j\ge n_q+1},$$
where $\hat\chi_{n,m}^{\lambda}$ is defined by
$$\hat\chi_{n,m}^{\lambda} = \frac{n!\, \Gamma\!\left(\lambda+\frac12\right)\Gamma(n+m+2\lambda)}{(n-m)!\, \Gamma(n+2\lambda)\, \Gamma\!\left(m+\lambda+\frac12\right)}, \quad n \ge m.$$
Proof. 
Let $\hat G_j^{\lambda,(m)}$ denote the mth derivative of $\hat G_j^{\lambda}$ for all j ∈ J_n^+. Ref. [34] (Theorem 4.1) tells us that
$$T_{j,n_q}^{\lambda_q}(\eta) = \frac{1}{(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}}\, \partial_y^{\,n_q+1}\hat G_j^{\lambda}\!\left(t\left(1-y^{1/\alpha}\right)\right)\Big|_{y=\eta}\; I_1^{(y)}\hat G_{n_q+1}^{\lambda_q}.$$
The error formula (34) is accomplished by applying the Chain Rule to Equation (36), which gives
$$T_{j,n_q}^{\lambda_q}(\eta) = \frac{(-1)^{n_q+1}}{(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}}\left(\frac{t}{\alpha}\right)^{n_q+1} \eta^{\frac{(n_q+1)(1-\alpha)}{\alpha}}\, \hat G_j^{\lambda,(n_q+1)}\!\left(t\left(1-\eta^{1/\alpha}\right)\right) I_1^{(y)}\hat G_{n_q+1}^{\lambda_q}.$$
The proof is complete by realizing that
$$\hat G_j^{\lambda,(n_q+1)}\!\left(t\left(1-\eta^{1/\alpha}\right)\right) = \partial_\tau^{\,n_q+1}\hat G_j^{\lambda}(\tau)\Big|_{\tau=t\left(1-\eta^{1/\alpha}\right)} = 0 \quad \forall j < n_q+1. \qquad \square$$
The next theorem provides an upper bound on the quadrature truncation error derived in Theorem 3.
Theorem 4. 
Suppose that the assumptions of Theorem 3 hold true. Then the truncation error $T_{j,n_q}^{\lambda_q}(\eta)$ in the quadrature approximation (21) vanishes for all j < n_q + 1, and is bounded above by
$$\left|T_{j,n_q}^{\lambda_q}(\eta)\right| \lessapprox \eta^{\frac{n_q(1-\alpha)}{\alpha}}\, \Upsilon_{\rho_{\lambda_q}}(n_q)\begin{cases} \mu_{\lambda,\lambda_q}\, n_q^{\lambda_q-\lambda}\left(\dfrac{t}{2\alpha}\right)^{n_q}, & j \sim n_q, \\[1ex] \dfrac{\gamma_{n_q}^{\lambda}\, j^{2(n_q+1)}}{\Theta_{\lambda_q}\, n_q^{3/2-\lambda_q}}\left(\dfrac{e\,t}{2\alpha\, n_q}\right)^{n_q}, & j \gg n_q, \end{cases}$$
for all j ≥ n_q + 1 and large n_q, where $\mu_{\lambda,\lambda_q} = \nu_\lambda/\Theta_{\lambda_q}$, $\Theta_{\lambda_q}$ is as defined by (A1), $\nu_\lambda \in \mathbb{R}^+$ is a λ-dependent constant, $\rho_{\lambda_q} > 1$ is a λ_q-dependent constant, and $\gamma_{n_q}^{\lambda}$ is a constant dependent on n_q and λ.
Proof. 
Notice first that
$$\left|\hat G_{j-n_q-1}^{\lambda+n_q+1}\!\left(t\left(1-\eta^{1/\alpha}\right)\right)\right| \le 1,$$
by [37] (Lemma 5.1), since λ + n_q + 1 > 0. We now consider the following two cases.
Case I (j ~ n_q):
Lemmas A1 and A3 imply
$$\frac{\hat\chi_{j,n_q+1}^{\lambda}}{(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}} \lessapprox \frac{\nu_\lambda\left(\frac{4}{e}\right)^{n_q} n_q^{n_q+\frac32-\lambda}}{\Theta_{\lambda_q}\, n_q^{\frac32-\lambda_q}\, 2^{2n_q}\left(\frac{n_q}{e}\right)^{n_q}} = \mu_{\lambda,\lambda_q}\, n_q^{\lambda_q-\lambda}.$$
Case II (j ≫ n_q):
Here $\hat\chi_{j,n_q+1}^{\lambda} = O\!\left(j^{2(n_q+1)}\right)$ by Lemma A3, and we have
$$\frac{\hat\chi_{j,n_q+1}^{\lambda}}{(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}} \lessapprox \frac{\gamma_{n_q}^{\lambda}\, j^{2(n_q+1)}}{\Theta_{\lambda_q}\, n_q^{\frac32-\lambda_q}\, 2^{2n_q}\left(\frac{n_q}{e}\right)^{n_q}}.$$
Formula (38) is obtained by substituting (39) and (40) into (34).    □
When j ~ n_q, the dominant term in $\sup\left|T_{j,n_q}^{\lambda_q}(\eta)\right|$ becomes
$$n_q^{\lambda_q-\lambda}\left(\frac{t}{2\alpha}\right)^{n_q}\eta^{\frac{n_q(1-\alpha)}{\alpha}} = \left(\frac{t\,\eta^{\frac{1-\alpha}{\alpha}}}{2\alpha}\right)^{n_q} n_q^{\lambda_q-\lambda}.$$
Exponential decay occurs when
$$\alpha > \frac{t\,\eta^{\frac{1-\alpha}{\alpha}}}{2}.$$
On the other hand, the dominant term in the error bound for j ≫ n_q is given by
$$\frac{\gamma_{n_q}^{\lambda}\, j^{2(n_q+1)}}{\Theta_{\lambda_q}\, n_q^{3/2-\lambda_q}}\left(\frac{e\,t}{2\alpha\, n_q}\right)^{n_q}\eta^{\frac{n_q(1-\alpha)}{\alpha}} = \frac{\gamma_{n_q}^{\lambda}}{\Theta_{\lambda_q}}\, j^2\left(\frac{e\,t\, j^2\,\eta^{\frac{1-\alpha}{\alpha}}}{2\alpha\, n_q}\right)^{n_q} n_q^{\lambda_q-\frac32}.$$
For convergence, we require
$$\frac{e\,t\, j^2\,\eta^{\frac{1-\alpha}{\alpha}}}{2\alpha\, n_q} < 1.$$
Given j ≫ n_q, the ratio j²/n_q grows unboundedly, and the bound typically diverges unless $t\,\eta^{\frac{1-\alpha}{\alpha}}/\alpha$ is sufficiently small, in which case
$$\left(\frac{e\,t\, j^2\,\eta^{\frac{1-\alpha}{\alpha}}}{2\alpha\, n_q}\right)^{n_q} n_q^{\lambda_q-\frac32} \to 0.$$
The relative choice of λ and λ_q in either case controls the error bound's decay rate. In particular, choosing λ_q < λ ensures faster convergence rates when j ~ n_q due to the presence of the polynomial factor $n_q^{\lambda_q-\lambda}$. For j ≫ n_q, choosing λ_q < 3/2 accelerates the convergence if Condition (43) holds.
The following theorem provides a rigorous asymptotic bound on the total truncation error for the RLFI approximation, combining both sources of error, namely, the interpolation and quadrature errors.
Theorem 5 
(Asymptotic Total Truncation Error Bound). Suppose that f ∈ C^{n+1}(Ω₁) is approximated by the GBFA interpolant (7), and the assumptions of Theorems 1 and 3 hold true. Then the total truncation error in the RLFI approximation of f, denoted by $E_{n,n_q}^{\lambda,\lambda_q,\alpha}(t,\xi,\eta)$, arising from both the series truncation (7) and the quadrature approximation (21), is asymptotically bounded above by
$$\left|E_{n,n_q}^{\lambda,\lambda_q,\alpha}(t,\xi,\eta)\right| \lessapprox A_{n+1}\, \vartheta_{\alpha,\lambda}\left(\frac{e}{4}\right)^{n} n^{\lambda-n-\frac32}\, \Upsilon_{\sigma_\lambda}(n) + \frac{A_0\, \varpi_{\mathrm{upp}}\, t^{\alpha}}{\hat{\lambdabar}_{\max}^{\lambda}\, \Gamma(\alpha+1)}\, n(n-n_q)\, \eta^{\frac{n_q(1-\alpha)}{\alpha}}\, \Upsilon^{2}_{D_\lambda,\rho_{\lambda_q}}(n,n_q)\, \mathbb{I}_{n\ge n_q+1}\begin{cases} \mu_{\lambda,\lambda_q}\, n_q^{\lambda_q-\lambda}\left(\dfrac{t}{2\alpha}\right)^{n_q}, & n \sim n_q, \\[1ex] \dfrac{\gamma_{n_q}^{\lambda}\, n^{2(n_q+1)}}{\Theta_{\lambda_q}\, n_q^{3/2-\lambda_q}}\left(\dfrac{e\,t}{2\alpha\, n_q}\right)^{n_q}, & n \gg n_q, \end{cases}$$
for large n and n_q, where
$$\varpi_{\mathrm{upp}} = \begin{cases} \varpi_{\mathrm{upp},+}, & \lambda \in \mathbb{R}_0^+, \\ \varpi_{\mathrm{upp},-}, & \lambda \in \mathbb{R}_{-1/2}, \end{cases} \qquad \frac{1}{\hat{\lambdabar}_{\max}^{\lambda}} = \begin{cases} 1/\hat{\lambdabar}_n^{\lambda}, & \lambda \in \mathbb{R}_0^+, \\ 1/\hat{\lambdabar}_{n_q+1}^{\lambda}, & \lambda \in \mathbb{R}_{-1/2}, \end{cases}$$
$$\Upsilon^{2}_{D_\lambda,\rho_{\lambda_q}}(n,n_q) = \begin{cases} 1, & \lambda \in \mathbb{R}_0^+,\ \lambda_q \in \mathbb{R}_0^+, \\ D_\lambda\, n^{-\lambda}, & \lambda \in \mathbb{R}_{-1/2},\ \lambda_q \in \mathbb{R}_0^+, \\ \rho_{\lambda_q}\, n_q^{-\lambda_q}, & \lambda \in \mathbb{R}_0^+,\ \lambda_q \in \mathbb{R}_{-1/2}, \\ D_\lambda\, \rho_{\lambda_q}\, n^{-\lambda}\, n_q^{-\lambda_q}, & \lambda \in \mathbb{R}_{-1/2},\ \lambda_q \in \mathbb{R}_{-1/2}, \end{cases}$$
with $D_\lambda > 1$ being a λ-dependent constant; $\vartheta_{\alpha,\lambda}$, $\Upsilon_{\sigma_\lambda}(n)$, $\gamma_{n_q}^{\lambda}$, and $\Theta_{\lambda_q}$ are constants with the definitions and properties outlined in Theorems 2 and 4 and Lemmas A1 and A3; and $\varpi_{\mathrm{upp},\pm}$ are as defined by [38] (Formulas (B.4) and (B.15)).
Proof. 
The total truncation error combines the interpolation error from Theorem 2 and the accumulated quadrature errors from Theorem 4 for j ∈ J_n^+:
$$E_{n,n_q}^{\lambda,\lambda_q,\alpha}(t,\xi,\eta) = T_{n,\lambda}^{\alpha}(t,\xi) + \frac{t^{\alpha}}{\Gamma(\alpha+1)}\sum_{k\in J_n^+}\hat\varpi_k^{\lambda}\, f_k\sum_{j\in J_n^+}\left(\hat{\lambdabar}_j^{\lambda}\right)^{-1} T_{j,n_q}^{\lambda_q}(\eta)\, \hat G_j^{\lambda}\!\left(\hat t_{n,k}^{\lambda}\right) = T_{n,\lambda}^{\alpha}(t,\xi) + \frac{t^{\alpha}}{\Gamma(\alpha+1)}\sum_{k\in J_n^+}\hat\varpi_k^{\lambda}\, f_k\sum_{j\in\mathbb{N}_{n_q+1:n}}\left(\hat{\lambdabar}_j^{\lambda}\right)^{-1} T_{j,n_q}^{\lambda_q}(\eta)\, \hat G_j^{\lambda}\!\left(\hat t_{n,k}^{\lambda}\right),$$
since the quadrature error vanishes for j < n_q + 1. Using the bounds on the Christoffel numbers $\hat\varpi_k^{\lambda}$ and normalization factors $\hat{\lambdabar}_j^{\lambda}$ from [38] (Lemmas B.1 and B.2), along with the uniform bound on Gegenbauer polynomials from [37] (Lemma 5.1), we obtain
$$\left|E_{n,n_q}^{\lambda,\lambda_q,\alpha}(t,\xi,\eta)\right| \lessapprox \left|T_{n,\lambda}^{\alpha}(t,\xi)\right| + \frac{A_0\, \varpi_{\mathrm{upp}}\, t^{\alpha}}{\hat{\lambdabar}_{\max}^{\lambda}\, \Gamma(\alpha+1)}\, (n+1)(n-n_q)\max_{j\in\mathbb{N}_{n_q+1:n}}\left|T_{j,n_q}^{\lambda_q}(\eta)\right|\, \Upsilon_{D_\lambda}(n).$$
The proof is completed by applying Theorems 2 and 4 to Formula (48), noting that $\max_{j\in\mathbb{N}_{n_q+1:n}}\left|T_{j,n_q}^{\lambda_q}(\eta)\right|$ occurs at j = n.    □
The total truncation error bound presented in Theorem 5 reveals several important insights about the convergence behavior of the RLFI approximation: (i) The total error consists of the interpolation error term $T_{n,\lambda}^{\alpha}(t,\xi)$ and the accumulated quadrature error term. The interpolation error term decays at a super-exponential rate for large n due to the factor $(e/4)^n n^{-n}$. The quadrature error either vanishes when n_q ≥ n, decays exponentially when n_q ~ n under Condition (41), or typically diverges when n ≫ n_q unless Condition (43) is fulfilled. Therefore, the quadrature nodes should scale appropriately with the interpolation mesh size in practice. (ii) The interpolation error bound tightens as α increases, due to the decay of the ϑ_{α,λ} factor. The λ parameter shows a unimodal influence on the interpolation error, with the potentially maximal error size at about λ* ≈ −0.1351. The λ_q parameter should generally be chosen smaller than λ to accelerate the convergence of the quadrature error when n_q ~ n. This analysis suggests that the proposed method achieves spectral convergence when the parameters are chosen appropriately. The quadrature precision should be selected to maintain balance between the two error components based on the desired accuracy and computational constraints.
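To make the decay rate tangible, the following short computation (an illustrative aid written for this exposition, not part of the original analysis) tabulates the n-dependent interpolation factor $(e/4)^n n^{\lambda-n-3/2}$ from the bound (27) for λ = 0, with all constants and the derivative bound $A_{n+1}$ omitted; it drops below double-precision machine epsilon already for modest n.

```python
import numpy as np

lam = 0.0
for n in (2, 4, 8, 12, 16):
    # Super-exponential interpolation factor from Theorem 2 (constants omitted).
    factor = (np.e / 4.0)**n * float(n)**(lam - n - 1.5)
    print(f"n = {n:2d}   interpolation factor ~ {factor:.2e}")
```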
Remark 2. 
Theorem 5 provides a theoretical guarantee for the accuracy of the GBFA method when computing the RLFI. In simple terms, it shows that the error in our method's approximations decreases extremely rapidly as we increase the number of computational points, particularly for smooth functions. This rapid error reduction, often referred to as super-exponential convergence, means that with just a few additional points, the GBFA method can achieve results that are nearly as accurate as the computer's maximum precision allows. The theorem considers key parameters of the GBFA method: the polynomial degree n, the quadrature degree n_q, and the Gegenbauer parameters λ and λ_q. It predicts that when these parameters are chosen wisely, the error shrinks faster than exponentially, as we demonstrate later in our numerical experiments. This is especially powerful for modeling complex systems with memory, such as those in viscoelasticity or anomalous diffusion, where high precision is critical. While the mathematical details of the theorem are complex, its core message is that the GBFA method is highly reliable and efficient, producing accurate results with minimal computational effort for a wide range of problems.

4.1. Practical Guidelines for Parameter Selection

The asymptotic analysis reveals dependencies on the parameters λ (for interpolation) and λ_q (for quadrature), which play crucial roles in determining the method's accuracy and efficiency. Here, we provide practical guidance for selecting these parameters to balance interpolation error, quadrature error, and computational cost.
The effectiveness of the GBFA method for approximating the RLFI hinges on the appropriate selection of the parameters λ and λ_q. Building upon established numerical approximation principles, our analysis incorporates specific considerations for RLFI computation. In particular, we identify the following effective operational range for the SG parameters λ and λ_q:
$$T_{c,r} = \left\{\gamma: -\tfrac12+\varepsilon \le \gamma \le r\right\}, \quad 0 < \varepsilon \ll 1,\ r \in \Omega_{1,2}.$$
This range, previously recommended by Elgindy and Karasözen [39] based on extensive theoretical and numerical testing consistent with broader spectral approximation theory, helps avoid numerical instability caused by increased extrapolation effects associated with larger positive SG indices. Furthermore, SG polynomials exhibit blow-up behavior as their indices approach −0.5⁺. Within $T_{c,r}$, we observe a crucial balance between theoretical convergence and numerical stability for RLFI computation, a finding corroborated by our numerical investigations using the GBFA method, which consistently demonstrate superior performance across various test functions with parameter choices in this range.
Based on this analysis of error bounds and numerical performance, we recommend the following parameter selection strategies for RLFI approximation.
  • For small n and n_q, the selection of λ ∈ $T_{c,r}$ is feasible. Furthermore, choosing smaller λ_q ∈ $T_{c,r}$ generally improves quadrature accuracy, with the notable exception of λ_q = 0.5, where the quadrature error is often minimized, as we demonstrate later in Figure 4 and Figure 5.
  • For large n and n_q:
    - For precision computations: Select λ ∈ $\Omega_{-0.5+\varepsilon,0} \setminus N_\delta(\lambda^*)$ and λ_q ∈ $T_{c,r}$, where
$$\lambda_q \begin{cases} < \lambda, & \text{if } n \sim n_q, \\ < 3/2, & \text{if } n \gg n_q, \end{cases}$$
with λ* defined by Equation (33). Here, $\Omega_{-0.5+\varepsilon,0}$ is a subset of the recommended interval $T_{c,r}$ for SG indices. Positive values of λ are excluded from $T_{c,r}$, as the polynomial error factor $n^{\lambda-n-\frac32}$ increases with positive λ, as shown in Equation (27). The δ-neighborhood $N_\delta(\lambda^*)$ is excluded because ϑ_{α,λ}, the leading factor in the asymptotic interpolation error bound, peaks at λ = λ*, potentially increasing the error. By avoiding this neighborhood, parameter choices that could amplify interpolation errors are circumvented, ensuring robust performance for high-precision applications.
    - For standard computational scenarios: Utilize λ = λ_q = 0, which corresponds to shifted Chebyshev approximation. This recommendation leverages the well-established optimality of Chebyshev approximation for smooth functions, offering a robust default that balances accuracy and efficiency for RLFI approximation.
This parameter selection guideline provides practical and effective guidance for balancing interpolation and quadrature errors and computational cost across a wide range of applications.
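The guidelines above can be condensed into a small decision rule. The Python sketch below is one possible encoding written for this exposition; the thresholds, the default λ = −0.25, and the δ-neighborhood width are illustrative choices, not values prescribed by the analysis.

```python
def choose_sg_parameters(n, nq, high_precision=False,
                         eps=1e-6, lam_star=-0.1351, delta=0.05):
    """Return (lam, lam_q) following the Section 4.1 heuristics (a sketch)."""
    if not high_precision:
        return 0.0, 0.0                      # shifted Chebyshev default
    lam = -0.25                              # inside (-1/2 + eps, 0)
    if abs(lam - lam_star) < delta:          # step out of N_delta(lam_star)
        lam = lam_star - delta
    if n <= nq:                              # n ~ n_q: require lam_q < lam
        lam_q = max(lam - 0.2, -0.5 + eps)
    else:                                    # n >> n_q: lam_q < 3/2 suffices
        lam_q = 0.5
    return lam, lam_q

print(choose_sg_parameters(16, 16, high_precision=True))  # (-0.25, -0.45)
```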

5. Further Numerical Simulations

Example 1. 
To demonstrate the accuracy of the derived numerical approximation formulas, we consider the power function f(t) = t^N, where N ∈ ℤ⁺, as our first test case. The RLFI of f is given analytically by
$$I_t^{\alpha} f = \frac{N!}{\Gamma(N+\alpha+1)}\, t^{N+\alpha}.$$
Figure 4 displays the logarithmic absolute errors of the RLFI approximations computed using the GBFA method, with fractional order α = 0.5 evaluated at t = 0.5. The figure consists of four subplots that investigate (i) the effects of varying the parameters λ, λ_q, and n_q, and (ii) a comparative analysis between the GBFA method and MATLAB's integral function with tolerance parameters set to RelTol = AbsTol = 10⁻¹⁵. Our numerical experiments reveal several key observations:
  • Variation of λ while holding the other parameters constant (for small n and n_q) shows negligible impact on the error. The error reaches near machine epsilon precision at n = 3, consistent with Theorems 1 and 3, which predict the collapse of both interpolation and quadrature errors when f^{(n+1)} ≡ 0 and n_q > n.
  • For n ≥ 5, the total error reduces to pure quadrature error, since f^{(n+1)} ≡ 0 while n_q < n.
  • Variation of λ_q significantly affects the error, with λ_q ≤ λ generally yielding higher accuracy.
  • Increasing either n or n_q while fixing the other parameter leads to exponential error reduction.
The GBFA method achieves near machine-precision accuracy with the parameter values λ = λ_q = 0.5 and n_q = 12, outperforming MATLAB's integral function by nearly two orders of magnitude. The method demonstrates remarkable stability, as evidenced by consistent error trends for λ_q ≤ λ, with nearly exact approximations obtained for n_q ≥ 12 in optimal parameter ranges.
Figure 6 further compares the computation times of the GBFA method and MATLAB's integral function, plotted on a logarithmic scale. The GBFA method demonstrates significantly lower computational times than MATLAB's integral function. This highlights the efficiency of the GBFA method, which achieves high accuracy with minimal computational cost.
Example 2. 
Next, we evaluate the 0.5th-order RLFI of g(t) = e^{kt}, where k ∈ ℝ ∖ {0}, over Ω_{0.5}. The analytical solution on Ω_t is
$$I_t^{\alpha} g = \frac{t^{\alpha}\, e^{kt}}{\Gamma(\alpha+1)}\, {}_1F_1(\alpha;\, \alpha+1;\, -kt).$$
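The closed form above (written here with the Kummer-function argument −kt, the standard form of this integral) can be checked independently with arbitrary-precision arithmetic. The following mpmath sketch, written for this exposition, compares it against direct quadrature of the singular RLFI integrand:

```python
import mpmath as mp

mp.mp.dps = 30
alpha, k, t = mp.mpf('0.5'), mp.mpf(1), mp.mpf('0.5')

# Closed form: t^alpha e^{kt} 1F1(alpha; alpha+1; -kt) / Gamma(alpha+1).
closed = t**alpha * mp.e**(k * t) / mp.gamma(alpha + 1) \
         * mp.hyp1f1(alpha, alpha + 1, -k * t)

# Direct RLFI; tanh-sinh quadrature handles the endpoint singularity.
direct = mp.quad(lambda tau: (t - tau)**(alpha - 1) * mp.e**(k * tau),
                 [0, t]) / mp.gamma(alpha)
print(closed, direct)  # agree to ~30 digits
```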
Figure 5 presents the logarithmic absolute errors of the GBFA method, with subplots analyzing (i) the impact of λ, λ_q, n, and n_q, and (ii) a performance comparison against MATLAB's integral function (tolerances 10⁻¹⁵). Some of the main findings include the following:
  • Similar to Example 1, varying λ (with fixed small n and n_q) has minimal effect on accuracy, whereas λ_q ≤ λ consistently improves precision.
  • Exponential error decay occurs when increasing either n or n_q while holding the other constant.
  • Near machine-epsilon accuracy is achieved for λ = λ_q = 0.5 and n_q = 12, with the GBFA method surpassing integral by two orders of magnitude.
The method's stability is further demonstrated by the uniform error trends for λ_q ≤ λ and its rapid exponential convergence (see Figure 7), underscoring its suitability for high-precision fractional calculus, particularly in the absence of closed-form solutions.
Notably, this test problem was previously studied in [29,30] on Ω₂. The former employed an asymptotic expansion for trapezoidal RLFI approximation, while the latter used linear, quadratic, and three cubic spline variants, with all computations performed in 128-bit precision. The methods were tested for grid sizes N ∈ {40, 80, 160, 320, 640} (step sizes Δx ∈ {0.05, 0.025, …, 0.003125}). At N = 640, Dimitrov's method achieved the smallest error of 2.34 × 10⁻¹³, whereas Ciesielski and Grodzki [30] reported their smallest error of 9.17 × 10⁻¹³ using Cubic Spline Variant 1. With λ = λ_q = 0.5 and n_q = 12, our method attains errors close to machine epsilon (∼10⁻¹⁶), surpassing both methods in [29,30] by several orders of magnitude, even with significantly fewer computational nodes.
Section 4.1 notes the potential instability of SG polynomials as λ → −1/2, a behavior not yet investigated within the GBFA framework. Figure 8 visually confirms this trend: initially, the errors decrease as λ approaches −0.5, since the leading error factor ϑ_{α,λ} → 0 as λ → −1/2⁺, as explained by Theorem 2 and evident when λ transitions from −0.499999 to −0.4999999. However, the ill-conditioning of SG polynomials becomes prominent as λ approaches −0.5 more closely, leading to larger errors. This non-monotonic behavior near λ = −0.5 underscores the practical stability limits of the method, aligning with the early observations in [40] on the sensitivity of Gegenbauer polynomials with negative parameters. In particular, Elgindy [40] discovered that for large degrees M, Gegenbauer polynomials $G_M^{\lambda}(x)$ with λ < 0 exhibit strong sensitivity to small perturbations in x, unlike Legendre or Chebyshev polynomials. For example, it was reported in [40] that evaluating $G_{150}^{-1/4}(x)$ at x = 1/2 (exact: 5.754509478448837) versus x = 0.5001 (perturbed by 10⁻⁴) introduces an absolute error of 0.0122, whereas Legendre and Chebyshev polynomials of the same degree incur errors of only ∼10⁻⁴. This highlights the risks of using SG polynomials with λ ≈ −1/2 for high-order approximations, where even minor numerical perturbations can disrupt spectral convergence. These results reinforce the need to avoid such parameter regimes for robust approximations, as prescribed in Section 4.1.
Example 3. 
We evaluate the 0.5th-order RLFI of the function f(t) = 2t³ + 8t over the interval Ω_{0.5}. The analytical solution on Ω_t is given by
$$I_t^{0.5} f = \frac{192\, t^{7/2} + 1120\, t^{3/2}}{105\sqrt{\pi}},$$
which serves as the reference for numerical comparison. Table 2 presents a comprehensive comparison of numerical results obtained using three distinct approaches: (i) the proposed GBFA method, (ii) MATLAB's high-precision integral function, and (iii) MATHEMATICA's NIntegrate function. This test case has been previously investigated by Batiha et al. [41] on the interval Ω₂. Their approach employed an n-point composite fractional formula derived from a three-point central fractional difference scheme, incorporating the generalized Taylor's theorem and fundamental properties of fractional calculus. Notably, their implementation with n = 10 subintervals achieved a relative error of approximately 6.87 × 10⁻³, which is significantly higher than the errors produced by the current method.
Example 4. 
Consider the αth-order RLFI of the function f(t) = sin(1 − t) over the interval Ω₁. The exact RLFI on Ω_t is given by
$$I_t^{\alpha} f = \frac{t^{\alpha}}{\Gamma(\alpha+1)}\sum_{k=0}^{\infty}\frac{(-1)^k}{\Gamma(2k+2)}\, {}_2F_1\!\left(-2k-1,\, 1;\, \alpha+1;\, t\right);$$
cf. [31]. In the latter work, a shallow neural network with 50 hidden neurons was used to solve this problem. Trained on 1000 random points x_i ∈ Ω₁ with targets sin(1 − x_i), the network yielded a Euclidean error norm of 9.89 × 10⁻³ for α = 0.2, as reported in [31]. The GBFA method, with n = n_q = 16, λ = 1, and λ_q = 0.5, achieved a much smaller error norm of about 7.63 × 10⁻¹⁶ when evaluated over 1000 equidistant points in Ω₁.
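As an independent sanity check of the series representation above, the short mpmath sketch below (written for this exposition) sums the terminating ₂F₁ terms and compares the result with direct quadrature of the RLFI at a sample point:

```python
import mpmath as mp

mp.mp.dps = 25
alpha, t = mp.mpf('0.2'), mp.mpf('0.5')

# Series of terminating 2F1 terms; each 2F1(-2k-1, 1; alpha+1; t) is a
# polynomial in t, and the 1/Gamma(2k+2) factor forces fast convergence.
series = t**alpha / mp.gamma(alpha + 1) * mp.nsum(
    lambda k: (-1)**int(k) / mp.gamma(2 * k + 2)
              * mp.hyp2f1(-2 * k - 1, 1, alpha + 1, t), [0, mp.inf])

direct = mp.quad(lambda tau: (t - tau)**(alpha - 1) * mp.sin(1 - tau),
                 [0, t]) / mp.gamma(alpha)
print(series, direct)  # agree to high precision
```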

6. Conclusions and Future Works

This study presents the GBFA method, a powerful approach for approximating the left RLFI. By using precomputable FSGIMs, the method achieves super-exponential convergence for smooth functions, delivering near machine-precision accuracy with minimal computational effort. The tunable Gegenbauer parameters λ and λ_q enable tailored optimization, with sensitivity analysis showing that λ_q < λ accelerates convergence for n_q ~ n. The strategic parameter selection guideline outlined in Section 4.1 improves the GBFA method's versatility, ensuring high accuracy for a wide range of problem types. Rigorous error bounds confirm rapid error decay, ensuring robust performance across diverse problems. Numerical results highlight the GBFA method's superiority, surpassing MATLAB's integral, MATHEMATICA's NIntegrate, and prior techniques like the trapezoidal method [29], spline-based methods [30], and the neural network method [31] by up to several orders of magnitude in accuracy, while maintaining lower computational costs. The FSGIM's invariance for fixed points improves efficiency, making the method ideal for repeated fractional integrations with varying α ∈ Ω₁. These qualities establish the GBFA method as a versatile tool for fractional calculus, with significant potential to advance modeling of complex systems exhibiting memory and non-local behavior, and to inspire further innovations in computational mathematics.
The error bounds derived in this study are asymptotic in nature, meaning they are most accurate for large values of n and n_q. While these bounds provide valuable insights into the theoretical convergence behavior of the method, they may not be tight or predictive for small-to-moderate values of n and n_q. For practitioners, this implies that while the asymptotic bounds guarantee eventual convergence, careful numerical experimentation is recommended to determine the optimal values of n, n_q, λ, and λ_q for a given problem. The numerical experiments presented in Section 5 demonstrate that the method performs exceptionally well even for small and moderate values of n and n_q, but the asymptotic bounds should be interpreted as theoretical guides rather than precise predictors for all cases. To address this limitation, future work could focus on deriving non-asymptotic error bounds or developing heuristic strategies for parameter selection in practical scenarios. Additionally, adaptive algorithms could be explored to dynamically adjust n and n_q based on local error estimates, further improving the method's robustness for real-world applications. While the current work focuses on 0 < α < 1, the extension of the GBFA method to α > 1 is the subject of ongoing research by the author and will be addressed in future publications. Exploring further adaptations to operators such as the right RLFI presents exciting avenues for future work.
The super-exponential convergence of spectral and pseudospectral methods, including those using Gegenbauer polynomials and Gauss nodes, typically degrades to algebraic convergence when applied to functions with limited regularity. This degradation is a well-established phenomenon in approximation theory, with the rate of algebraic convergence directly related to the function's degree of smoothness. Specifically, for a function possessing k continuous derivatives, theoretical analysis predicts a convergence rate of approximately O(N⁻ᵏ), where N represents the number of collocation points [42]. To improve the applicability of the GBFA method for non-smooth functions, several methodological extensions can be incorporated: (i) Modal filtering techniques offer one promising approach, effectively dampening spurious high-frequency oscillations without significantly compromising overall accuracy. This process involves applying appropriate filter functions to the spectral coefficients, thereby regularizing the approximation while preserving accuracy in smooth solution regions. When implemented within the GBFA framework, filtering the coefficients in the SG polynomial expansion can potentially recover higher convergence rates for functions with isolated non-smoothness. The tunable parameters inherent in SG polynomials may provide valuable flexibility for optimizing filter characteristics to address specific types of non-smoothness. (ii) Adaptive interpolation strategies represent another valuable extension, employing local refinement near singularities or implementing moving-node approaches to more accurately capture localized features. By strategically concentrating computational resources in regions of limited regularity, these methods maintain high accuracy while effectively accommodating non-smooth behavior. Within the GBFA method, this approach could be realized through non-uniform SG collocation points with increased density near known singularities. (iii) Domain decomposition techniques offer particularly powerful capabilities by partitioning the computational domain into subdomains with potentially different resolutions or spectral parameters. This approach accommodates irregularities while preserving the advantages of the GBFA within each smooth subregion. Domain decomposition proves especially effective for problems featuring isolated singularities or discontinuities, allowing the GBFA method to maintain super-exponential convergence in smooth subdomains while appropriately addressing non-smooth regions through specialized treatment. (iv) For fractional problems involving weakly singular or non-smooth solutions with potentially unbounded derivatives, graded meshes provide an effective solution. These non-uniform discretizations concentrate points near singularities according to carefully chosen distributions, often recovering optimal convergence rates despite the presence of singularities. The inherent flexibility of SG parameters makes the GBFA method particularly amenable to such adaptations. (v) Hybrid spectral-finite element approaches represent yet another viable pathway, combining the high accuracy of spectral methods in smooth regions with the flexibility of finite element methods near singularities. Such hybrid frameworks effectively balance accuracy and robustness for problems with limited regularity.
The GBFA method could be integrated into these frameworks by utilizing its super-exponential convergence where appropriate while delegating singularity treatment to more specialized techniques. These theoretical considerations and methodological extensions can significantly expand the potential applicability of the GBFA method to non-smooth functions commonly encountered in fractional calculus applications. While the current implementation focuses on smooth functions to demonstrate the method’s super-exponential convergence properties, the framework possesses sufficient flexibility to accommodate various extensions for handling non-smooth behavior. Future research may investigate these techniques to extend the GBFA method’s applicability to a wider range of practical problems in fractional calculus.

Funding

The Article Processing Charges (APCs) for this publication were funded by Ajman University, United Arab Emirates.

Data Availability Statement

The author declares that the data supporting the findings of this study are available within this article.

Conflicts of Interest

The author declares there are no conflicts of interest.

Appendix A. List of Acronyms

Table A1. List of Acronyms.
Acronym | Meaning
FSGIM | Fractional-order shifted Gegenbauer integration matrix
GBFA | Gegenbauer-based fractional approximation
RLFI | Riemann–Liouville fractional integral
SGIRV | Shifted Gegenbauer integration row vector
SG | Shifted Gegenbauer
SGG | Shifted Gegenbauer–Gauss

Appendix B. Mathematical Theorems and Proofs

Theorem A1. 
The function $T_1(\lambda) = (1+2\lambda)^{-1/2-2\lambda}$ is strictly concave over the interval −0.5 < λ < 0, and attains its maximum value at
$$\lambda^* = \frac{-e + e^{W(e/2)}}{2e} \approx -0.1351,$$
rounded to four significant digits.
Proof. 
Notice that $\ln T_1(\lambda) = -\left(\frac12+2\lambda\right)\ln(1+2\lambda)$. Thus,
$$\frac{d}{d\lambda}\ln T_1(\lambda) = -2\ln(1+2\lambda) - \frac{1+4\lambda}{1+2\lambda}, \qquad \frac{d^2}{d\lambda^2}\ln T_1(\lambda) = \frac{-6-8\lambda}{(1+2\lambda)^2}.$$
For λ ∈ ℝ_{−1/2}, we find that (1+2λ)² > 0, while −6−8λ is linear and decreasing in λ and remains negative there. Thus, $\frac{d^2}{d\lambda^2}\ln T_1(\lambda) < 0$ for all λ ∈ ℝ_{−1/2}. Since the logarithmic second derivative is negative, ln T₁(λ) is strictly concave on ℝ_{−1/2}. Since the logarithm is a strictly increasing function and ln T₁(λ) is strictly concave, T₁(λ) itself is also strictly concave on the interval ℝ_{−1/2}. This proves the first part of the theorem. Now, to prove that T₁ attains its maximum at λ*, we set the logarithmic derivative equal to zero:
$$-2\ln(1+2\lambda) - \frac{1+4\lambda}{1+2\lambda} = 0.$$
Let x = 1 + 2λ:
$$-2\ln x - \frac{2x-1}{x} = 0 \iff 2x\ln x + 2x - 1 = 0.$$
Let $x = e^u$: then $2ue^u + 2e^u - 1 = 0 \iff e^{u+1}(u+1) = \frac{e}{2}$. The solution of this transcendental equation is $u = W\!\left(\frac{e}{2}\right) - 1$. Thus, $x = e^{W(e/2)}/e = 1 + 2\lambda$, from which the maximum point λ = λ* is determined. This completes the proof of the theorem. □
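The stationary point derived above can be confirmed numerically. This brief mpmath check, written for this exposition, evaluates λ* via the Lambert W function and verifies that T₁ decreases on both sides of it:

```python
import mpmath as mp

mp.mp.dps = 20
# lambda* = (-e + e**W(e/2)) / (2e) from Theorem A1.
lam_star = (-mp.e + mp.e**mp.lambertw(mp.e / 2)) / (2 * mp.e)
print(lam_star)  # approx -0.1351

T1 = lambda lam: (1 + 2 * lam)**(mp.mpf('-0.5') - 2 * lam)
eps = mp.mpf('1e-6')
# True, True: lam_star is a local (and by concavity, global) maximum.
print(T1(lam_star) > T1(lam_star - eps), T1(lam_star) > T1(lam_star + eps))
```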
Figure A1 shows a sketch of T₁ on the interval ℝ_{−1/2}.
Figure A1. Behavior of T₁ for λ ∈ ℝ_{−1/2}. The red dashed line indicates λ = λ* ≈ −0.1351.
The following lemma establishes the asymptotic equivalence of $(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}$ for large n_q.
Lemma A1. 
Let λ_q > −1/2 and consider the leading coefficient of the n_qth-degree, λ_q-indexed SG polynomial defined by
$$\hat K_{n_q}^{\lambda_q} = \frac{2^{2n_q-1}\, \Gamma(2\lambda_q+1)\, \Gamma(n_q+\lambda_q)}{\Gamma(\lambda_q+1)\, \Gamma(n_q+2\lambda_q)}.$$
Then,
$$(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q} \sim \Theta_{\lambda_q}\, n_q^{3/2-\lambda_q}\, 2^{2n_q}\left(\frac{n_q}{e}\right)^{n_q} \quad \text{for large } n_q,$$
where
$$\Theta_{\lambda_q} = \frac{2\sqrt{2\pi}\, \Gamma(2\lambda_q+1)}{\Gamma(\lambda_q+1)}.$$
Proof. 
We begin by expressing $\hat K_{n_q+1}^{\lambda_q}$ explicitly:
$$\hat K_{n_q+1}^{\lambda_q} = \frac{2^{2n_q+1}\, \Gamma(2\lambda_q+1)\, \Gamma(n_q+\lambda_q+1)}{\Gamma(\lambda_q+1)\, \Gamma(n_q+2\lambda_q+1)}.$$
Since
$$(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q} = O\!\left(n_q^{3/2-\lambda_q}\, 2^{2n_q}\, (n_q/e)^{n_q}\right) \quad \text{as } n_q \to \infty,$$
for all λ_q > −1/2 by [43] (Lemma 2.2) and [34] (Lemma 4.2), we consider the ratio
$$\frac{(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}}{n_q^{3/2-\lambda_q}\, 2^{2n_q}\, (n_q/e)^{n_q}} = \frac{2\, (n_q+1)!\, e^{n_q}\, \Gamma(2\lambda_q+1)\, \Gamma(n_q+\lambda_q+1)}{\Gamma(\lambda_q+1)\, \Gamma(n_q+2\lambda_q+1)\, n_q^{n_q-\lambda_q+3/2}}.$$
Stirling's approximations to the factorial and Gamma functions for large n_q give
$$(n_q+1)! \sim \sqrt{2\pi}\, n_q^{n_q+3/2} e^{-n_q}, \qquad \Gamma(n_q+\lambda_q+1) \sim \sqrt{2\pi}\, n_q^{n_q+\lambda_q+1/2} e^{-n_q}, \qquad \Gamma(n_q+2\lambda_q+1) \sim \sqrt{2\pi}\, n_q^{n_q+2\lambda_q+1/2} e^{-n_q}.$$
Therefore,
$$\frac{(n_q+1)!\, \Gamma(n_q+\lambda_q+1)}{\Gamma(n_q+2\lambda_q+1)} \sim \sqrt{2\pi}\, n_q^{n_q-\lambda_q+3/2}\, e^{-n_q}.$$
Multiplying by the remaining terms:
$$\frac{2\, \Gamma(2\lambda_q+1)\, e^{n_q}}{\Gamma(\lambda_q+1)\, n_q^{n_q-\lambda_q+3/2}} \cdot \sqrt{2\pi}\, n_q^{n_q-\lambda_q+3/2}\, e^{-n_q} = \Theta_{\lambda_q}.$$
This shows that
$$\frac{(n_q+1)!\, \hat K_{n_q+1}^{\lambda_q}}{n_q^{3/2-\lambda_q}\, 2^{2n_q}\, (n_q/e)^{n_q}} \to \Theta_{\lambda_q} \quad \text{for large } n_q,$$
which completes the proof. □
The next lemma is needed for the proof of the following lemma.
Lemma A2 
(Falling Factorial Approximation). Let n = m + k, where k = o(m), for large m ∈ ℤ⁺. Then the falling factorial satisfies
$$(n)_m = m^m\left(\frac{m}{k}\right)^{k+1/2} e^{k-m-\frac{1}{4k}+\frac{k^2}{2m}}\exp\!\left(O\!\left(\max\!\left\{\frac{k^3}{m^2}, \frac{1}{k}\right\}\right)\right).$$
Proof. 
Express the falling factorial as
$$(n)_m = \prod_{i=0}^{m-1}(m+k-i) = \prod_{j=1}^{m}(k+j).$$
Take logarithms to convert the product into a sum:
$$\ln (n)_m = \sum_{j=1}^{m}\ln(k+j).$$
Since m is large, we can approximate the sum using the midpoint rule:
$$\sum_{j=1}^{m}\ln(k+j) \approx \int_{1/2}^{m+1/2}\ln(k+x)\, dx = \left(k+m+\tfrac12\right)\ln\!\left(k+m+\tfrac12\right) - \left(k+\tfrac12\right)\ln\!\left(k+\tfrac12\right) - m.$$
For k = o(m) and large m, Taylor series expansions produce
$$\left(k+m+\tfrac12\right)\ln\!\left(k+m+\tfrac12\right) = \left(k+m+\tfrac12\right)\left[\ln m + \ln\!\left(1+\frac{k+\frac12}{m}\right)\right] = m\ln m + \left(k+\tfrac12\right)\ln m + k + \tfrac12 + \frac{k^2}{2m} + O\!\left(\frac{k^3}{m^2}\right) + O\!\left(\frac{1}{m}\right),$$
and
$$\left(k+\tfrac12\right)\ln\!\left(k+\tfrac12\right) = \left(k+\tfrac12\right)\left[\ln k + \ln\!\left(1+\frac{1}{2k}\right)\right] = \left(k+\tfrac12\right)\ln k + \tfrac12 + \frac{1}{4k} + O\!\left(\frac{1}{k}\right).$$
Substituting these approximations back yields
$$\ln (n)_m \approx m\ln m + \left(k+\tfrac12\right)\ln\frac{m}{k} + k - m - \frac{1}{4k} + \frac{k^2}{2m} + O\!\left(\frac{k^3}{m^2}\right) + O\!\left(\frac{1}{k}\right).$$
Exponentiating gives
$$(n)_m \approx m^m\left(\frac{m}{k}\right)^{k+1/2} e^{k-m-\frac{1}{4k}+\frac{k^2}{2m}}\exp\!\left(O\!\left(\max\!\left\{\frac{k^3}{m^2}, \frac{1}{k}\right\}\right)\right). \qquad \square$$
The next lemma gives an asymptotic upper bound on the growth rate of the parametric scalar $\hat\chi_{n,m}^{\lambda}$.
Lemma A3. 
Let λ > −1/2 and m ∈ ℤ⁺ with m ≫ λ. Then
$$\hat\chi_{n,m}^{\lambda} = \begin{cases} O\!\left(\left(\frac{4}{e}\right)^{m} m^{m+\frac12-\lambda}\right), & n \sim m, \\[1ex] O\!\left(n^{2m}\right), & n \gg m, \end{cases}$$
for large n and m.
Proof. 
Let n m . Then n = m + k : k = o ( m ) , and we can write ( n ) m m m + 1 / 2 e m by Lemma A2. Substituting this approximation and the sharp inequalities of the Gamma function [36] (InEquation (96)) into Equation (35) give
χ ( n , m , λ ) < ˜ ^ λ m m + 1 / 2 e m · m m λ n n 2 λ + 1 2 m + n m + n + 2 λ 1 2 λ 4 e m m m + 1 2 λ ,
where λ = 4 λ 2 ^ λ , and ^ λ R + is a λ -dependent constant. Now, consider the case when n m . Applying Stirling’s approximation of factorials and the same sharp inequalities of the Gamma function on the parameteric scalar χ ^ n , m λ give
$$\begin{aligned}
\hat{\chi}_{n,m}^{\lambda} \mathrel{\tilde{<}}\; & h_m^{\lambda}\; n^{\frac12+n}\left(n-m\right)^{\,m-n-\frac12}\left(1+\frac{3^{m}}{3^{n}}\right)\left(n+2\lambda-1\right)^{\frac12-n-2\lambda} \\
&\times \left(n+m+2\lambda-1\right)^{\,n+m+2\lambda-\frac12}\, \sinh^{\frac{m-n}{2}}\!\left(\frac{1}{n-m}\right) \\
&\times \left[(n+2\lambda-1)\sinh\!\left(\frac{1}{n+2\lambda-1}\right)\right]^{\frac12\left(1-n-2\lambda\right)} \\
&\times \left[(n+m+2\lambda-1)\sinh\!\left(\frac{1}{n+m+2\lambda-1}\right)\right]^{\frac12\left(n+m-1\right)+\lambda} \\
\sim\; & \beth_m^{\lambda}\; n^{\frac12\left(5m-n\right)}\, \sinh^{\frac{m-n}{2}}\!\left(\frac{1}{n-m}\right),
\end{aligned}$$
where $\beth_m^{\lambda} = e^{5m/2}\, h_m^{\lambda}$, and $h_m^{\lambda}$ is a constant dependent on the parameters $m$ and $\lambda$. Since $1/(n-m)$ is small $\forall_{rl}\, n$, we have
$$\sinh^{\frac{m-n}{2}}\!\left(\frac{1}{n-m}\right) \sim \left(\frac{1}{n-m}\right)^{\frac{m-n}{2}} = \left(n-m\right)^{\frac{n-m}{2}} \sim n^{\frac{n-m}{2}}.$$
Substituting (A18) into (A17) yields
$$\hat{\chi}_{n,m}^{\lambda} \mathrel{\tilde{<}} \beth_m^{\lambda}\; n^{2m},$$
which completes the proof. □
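The sinh step (A18) is easy to probe numerically (a minimal sketch; the printed gap behaves like $-1/(12(n-m))$ and vanishes for large $n$, consistent with the asymptotic equivalence above):

    import math

    m = 5
    for n in (50, 500, 5000):
        x = n - m
        lhs = ((m - n) / 2) * math.log(math.sinh(1.0 / x))  # log of sinh^{(m-n)/2}(1/(n-m))
        rhs = ((n - m) / 2) * math.log(x)                   # log of (n-m)^{(n-m)/2}
        print(n, lhs - rhs)  # ≈ -1/(12(n-m)) → 0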

Appendix C. Dimensional Analysis of Matrix Expressions in RLFI Approximations

This appendix provides a detailed dimensional analysis of the matrix expressions within Equations (15) and (16), addressing the mechanics of operations such as the Hadamard product $\odot$ and the Kronecker product $\otimes$. The discussion below tracks the shapes of the intermediate matrices and vectors, using the notation defined in Table 1.
In Equation (15), the expression inside the brackets, $\mathrm{trp}\,\hat{ƛ}_{0:n}^{\lambda\div}\left[\left(I_1^{(y)}\,\hat{G}_{0:n}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\mathbf{1}_{n+1}\right]\right)\odot\hat{G}_{0:n}^{\lambda}\!\left[\hat{\boldsymbol{t}}_n^{\lambda}\right]\right]\mathrm{diag}\!\left(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda}\right)$, is a $1\times(n+1)$ row vector. Here, $\hat{G}_{0:n}^{\lambda} = [\hat{G}_0^{\lambda},\ldots,\hat{G}_n^{\lambda}]$ is an $(n+1)\times 1$ column vector of SG polynomials, as defined in Table 1. The argument $t(1-y^{1/\alpha})\,\mathbf{1}_{n+1}$, where $\mathbf{1}_{n+1} = [1,\ldots,1]$ is $(n+1)\times 1$, is an $(n+1)\times 1$ column vector with all elements equal to $t(1-y^{1/\alpha})$. Evaluating $\hat{G}_{0:n}^{\lambda}[t(1-y^{1/\alpha})\,\mathbf{1}_{n+1}]$ yields an $(n+1)\times(n+1)$ matrix, where each row $j+1$ (for $j\in\mathbb{J}_n^+$) contains $\hat{G}_j^{\lambda}(t(1-y^{1/\alpha}))$:
$$\begin{bmatrix}
\hat{G}_0^{\lambda}\!\left(t(1-y^{1/\alpha})\right) & \hat{G}_0^{\lambda}\!\left(t(1-y^{1/\alpha})\right) & \cdots & \hat{G}_0^{\lambda}\!\left(t(1-y^{1/\alpha})\right)\\
\hat{G}_1^{\lambda}\!\left(t(1-y^{1/\alpha})\right) & \hat{G}_1^{\lambda}\!\left(t(1-y^{1/\alpha})\right) & \cdots & \hat{G}_1^{\lambda}\!\left(t(1-y^{1/\alpha})\right)\\
\vdots & \vdots & \ddots & \vdots\\
\hat{G}_n^{\lambda}\!\left(t(1-y^{1/\alpha})\right) & \hat{G}_n^{\lambda}\!\left(t(1-y^{1/\alpha})\right) & \cdots & \hat{G}_n^{\lambda}\!\left(t(1-y^{1/\alpha})\right)
\end{bmatrix}_{(n+1)\times(n+1)}.$$
Applying the integral $I_1^{(y)} = \int_0^1 \cdot\; dy$ element-wise, as per Table 1, produces an $(n+1)\times(n+1)$ matrix in which each row $j+1$ is filled with $\int_0^1 \hat{G}_j^{\lambda}(t(1-y^{1/\alpha}))\,dy$:
$$I_1^{(y)}\,\hat{G}_{0:n}^{\lambda}\!\left[t\left(1-y^{1/\alpha}\right)\mathbf{1}_{n+1}\right] = \begin{bmatrix} a_0 & a_0 & \cdots & a_0\\ a_1 & a_1 & \cdots & a_1\\ \vdots & \vdots & \ddots & \vdots\\ a_n & a_n & \cdots & a_n\end{bmatrix}_{(n+1)\times(n+1)},$$
where $a_j = \int_0^1 \hat{G}_j^{\lambda}(t(1-y^{1/\alpha}))\,dy$ for $j\in\mathbb{J}_n^+$. This matrix is equivalent to $[a_0,\ldots,a_n]\,\mathrm{trp}(\mathbf{1}_{n+1})$, where $\mathrm{trp}(\mathbf{1}_{n+1})$ is the transpose of $\mathbf{1}_{n+1}$. The matrix $\hat{G}_{0:n}^{\lambda}[\hat{\boldsymbol{t}}_n^{\lambda}]$, where $\hat{\boldsymbol{t}}_n^{\lambda} = [\hat{t}_{n,0}^{\lambda},\ldots,\hat{t}_{n,n}^{\lambda}]$ is $(n+1)\times 1$, is also $(n+1)\times(n+1)$, with elements $\hat{G}_j^{\lambda}(\hat{t}_{n,k}^{\lambda})$ for $j,k\in\mathbb{J}_n^+$. The Hadamard product $\odot$ multiplies these matrices element-wise, yielding an $(n+1)\times(n+1)$ matrix. The row vector $\mathrm{trp}\,\hat{ƛ}_{0:n}^{\lambda\div}$, of size $1\times(n+1)$, multiplies this matrix to produce a $1\times(n+1)$ row vector, which is then multiplied by the $(n+1)\times(n+1)$ diagonal matrix $\mathrm{diag}(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda})$, resulting in a $1\times(n+1)$ row vector. This vector is multiplied by $\boldsymbol{f}_{0:n}$, an $(n+1)\times 1$ vector, to compute the RLFI approximation at the point $t$.
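The shape bookkeeping in this paragraph can be mirrored with placeholder arrays (a sketch only: the random matrices below merely stand in for the SG quantities in Equation (15), and none of the GBFA numerics are implemented here):

    import numpy as np

    n = 4                          # polynomial degree (illustrative choice)
    rng = np.random.default_rng(0)

    lam_recip = rng.random(n + 1)            # stands in for trp(ƛ̂^{λ÷}), a 1×(n+1) row
    a = rng.random(n + 1)                    # placeholder values for a_j
    A = np.outer(a, np.ones(n + 1))          # (n+1)×(n+1): row j+1 filled with a_j
    B = rng.random((n + 1, n + 1))           # stands in for Ĝ^λ[t̂], entries Ĝ_j^λ(t̂_{n,k})
    w = rng.random(n + 1)                    # stands in for ϖ̂_{0:n}^λ

    row = lam_recip @ (A * B) @ np.diag(w)   # 1×(n+1) row vector, as in Equation (15)
    f = rng.random(n + 1)                    # stands in for f_{0:n}
    approx = row @ f                         # scalar: the RLFI approximation at t
    print(A.shape, row.shape, float(approx))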
Likewise, in Equation (16), the expression inside the brackets, $\mathrm{trp}\,\hat{ƛ}_{0:n}^{\lambda\div}\left[\hat{G}_{0:n}^{\lambda}[\boldsymbol{z}_M\otimes\mathbf{1}_{n+1}]\odot\hat{G}_{0:n}^{\lambda}[\mathbf{1}_{M+1}\otimes\hat{\boldsymbol{t}}_n^{\lambda}]\right]\left(\mathbf{I}_{M+1}\otimes\mathrm{diag}(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda})\right)$, is a $1\times(M+1)(n+1)$ row vector. The Kronecker product $\boldsymbol{z}_M\otimes\mathbf{1}_{n+1}$, with $\boldsymbol{z}_M$ of size $(M+1)\times 1$ and $\mathbf{1}_{n+1}$ of size $(n+1)\times 1$, forms an $(M+1)(n+1)\times 1$ column vector. Thus, $\hat{G}_{0:n}^{\lambda}[\boldsymbol{z}_M\otimes\mathbf{1}_{n+1}]$, where $\hat{G}_{0:n}^{\lambda}$ is $(n+1)\times 1$, is an $(n+1)\times(M+1)(n+1)$ matrix. Similarly, $\mathbf{1}_{M+1}\otimes\hat{\boldsymbol{t}}_n^{\lambda}$, with $\mathbf{1}_{M+1}$ of size $(M+1)\times 1$ and $\hat{\boldsymbol{t}}_n^{\lambda}$ of size $(n+1)\times 1$, is $(M+1)(n+1)\times 1$, and $\hat{G}_{0:n}^{\lambda}[\mathbf{1}_{M+1}\otimes\hat{\boldsymbol{t}}_n^{\lambda}]$ is $(n+1)\times(M+1)(n+1)$. The Hadamard product $\odot$ yields an $(n+1)\times(M+1)(n+1)$ matrix. Multiplying by $\mathrm{trp}\,\hat{ƛ}_{0:n}^{\lambda\div}$, a $1\times(n+1)$ row vector, gives a $1\times(M+1)(n+1)$ row vector. The Kronecker product $\mathbf{I}_{M+1}\otimes\mathrm{diag}(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda})$, where $\mathbf{I}_{M+1}$ is $(M+1)\times(M+1)$ and $\mathrm{diag}(\hat{\boldsymbol{\varpi}}_{0:n}^{\lambda})$ is $(n+1)\times(n+1)$, is an $(M+1)(n+1)\times(M+1)(n+1)$ matrix. The final multiplication produces a $1\times(M+1)(n+1)$ row vector, which is reshaped into an $(n+1)\times(M+1)$ matrix in column-major order, representing $\mathcal{L}_{0:n}^{\lambda}[\boldsymbol{z}_M]$. This reshaping aligns the Lagrange basis functions $\mathcal{L}_k^{\lambda}(z_i)$ for $k\in\mathbb{J}_n^+$ and $i\in\mathbb{J}_M^+$.
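The same kind of shape check applies to Equation (16), including the Kronecker products and the final column-major reshape (again with placeholder data; NumPy's order="F" reproduces the column-major convention stated above):

    import numpy as np

    n, M = 3, 2
    rng = np.random.default_rng(1)

    lam_recip = rng.random(n + 1)                   # stands in for trp(ƛ̂^{λ÷})
    G1 = rng.random((n + 1, (M + 1) * (n + 1)))     # stands in for Ĝ^λ[z_M ⊗ 1_{n+1}]
    G2 = rng.random((n + 1, (M + 1) * (n + 1)))     # stands in for Ĝ^λ[1_{M+1} ⊗ t̂^λ]
    w = rng.random(n + 1)                           # stands in for ϖ̂_{0:n}^λ

    row = lam_recip @ (G1 * G2)                     # 1×(M+1)(n+1)
    K = np.kron(np.eye(M + 1), np.diag(w))          # (M+1)(n+1) × (M+1)(n+1)
    out = row @ K                                   # 1×(M+1)(n+1)
    L = out.reshape(n + 1, M + 1, order="F")        # column-major → (n+1)×(M+1)
    print(row.shape, K.shape, L.shape)              # (12,) (12, 12) (4, 3)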

References

1. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models; World Scientific: Singapore, 2022.
2. Gorenflo, R.; Mainardi, F. Random walk models for space-fractional diffusion processes. Fract. Calc. Appl. Anal. 1998, 1, 167–191.
3. Monje, C.A.; Chen, Y.; Vinagre, B.M.; Xue, D.; Feliu-Batlle, V. Fractional-Order Systems and Controls: Fundamentals and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010.
4. Metzler, R.; Klafter, J. The random walk’s guide to anomalous diffusion: A fractional dynamics approach. Phys. Rep. 2000, 339, 1–77.
5. Bagley, R.L.; Torvik, P.J. A theoretical basis for the application of fractional calculus to viscoelasticity. J. Rheol. 1983, 27, 201–210.
6. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Academic Press: San Diego, CA, USA, 1999.
7. Zhang, J.; Zhou, F.; Mao, N. Numerical optimization algorithm for solving time-fractional telegraph equations. Phys. Scr. 2025, 100, 045237.
8. Ghasempour, A.; Ordokhani, Y.; Sabermahani, S. Mittag-Leffler wavelets and their applications for solving fractional optimal control problems. J. Vib. Control 2025, 31, 753–767.
9. Rabiei, K.; Razzaghi, M. Fractional-order Boubaker wavelets method for solving fractional Riccati differential equations. Appl. Numer. Math. 2021, 168, 221–234.
10. Rahimkhani, P.; Heydari, M.H. Numerical investigation of Ψ-fractional differential equations using wavelets neural networks. Comput. Appl. Math. 2025, 44, 54.
11. Damircheli, D.; Razzaghi, M. A wavelet collocation method for fractional Black–Scholes equations by subdiffusive model. Numer. Methods Partial Differ. Equations 2024, 40, e23103.
12. Rahimkhani, P.; Ordokhani, Y.; Babolian, E. Numerical solution of fractional pantograph differential equations by using generalized fractional-order Bernoulli wavelet. J. Comput. Appl. Math. 2017, 309, 493–510.
13. Barary, Z.; Cherati, A.Y.; Nemati, S. An efficient numerical scheme for solving a general class of fractional differential equations via fractional-order hybrid Jacobi functions. Commun. Nonlinear Sci. Numer. Simul. 2024, 128, 107599.
14. Deniz, S.; Özger, F.; Özger, Z.Ö.; Mohiuddine, S.; Ersoy, M.T. Numerical solution of fractional Volterra integral equations based on rational Chebyshev approximation. Miskolc Math. Notes 2023, 24, 1287–1305.
15. Postavaru, O. An efficient numerical method based on Fibonacci polynomials to solve fractional differential equations. Math. Comput. Simul. 2023, 212, 406–422.
16. Akhlaghi, S.; Tavassoli Kajani, M.; Allame, M. Application of Müntz Orthogonal Functions on the Solution of the Fractional Bagley–Torvik Equation Using Collocation Method with Error Estimate. Adv. Math. Phys. 2023, 2023, 5520787.
17. Bazgir, H.; Ghazanfari, B. Existence and Uniqueness of Solutions for Fractional Integro-Differential Equations and Their Numerical Solutions. Int. J. Appl. Comput. Math. 2020, 6, 122.
18. Cao, Y.; Zaky, M.A.; Hendy, A.S.; Qiu, W. Optimal error analysis of space–time second-order difference scheme for semi-linear non-local Sobolev-type equations with weakly singular kernel. J. Comput. Appl. Math. 2023, 431, 115287.
19. Qiu, W.; Xu, D.; Guo, J. The Crank–Nicolson-type Sinc-Galerkin method for the fourth-order partial integro-differential equation with a weakly singular kernel. Appl. Numer. Math. 2021, 159, 239–258.
20. Cui, M. An alternating direction implicit compact finite difference scheme for the multi-term time-fractional mixed diffusion and diffusion–wave equation. Math. Comput. Simul. 2023, 213, 194–210.
21. Diethelm, K.; Ford, N.J.; Freed, A.D.; Luchko, Y. Algorithms for the fractional calculus: A selection of numerical methods. Comput. Methods Appl. Mech. Eng. 2005, 194, 743–773.
22. Edrisi-Tabriz, Y. Using the integral operational matrix of B-spline functions to solve fractional optimal control problems. Control Optim. Appl. Math. 2022, 7, 77–98.
23. Avcı, I.; Mahmudov, N.I. Numerical solutions for multi-term fractional order differential equations with fractional Taylor operational matrix of fractional integration. Mathematics 2020, 8, 96.
24. Xiaogang, Z.; Yufeng, N. An operational matrix method for fractional advection-diffusion equations with variable coefficients. Appl. Math. Mech. 2018, 39, 104–112.
25. Krishnasamy, V.S.; Mashayekhi, S.; Razzaghi, M. Numerical solutions of fractional differential equations by using fractional Taylor basis. IEEE/CAA J. Autom. Sin. 2017, 4, 98–106.
26. Nikan, O.; Avazzadeh, Z. Numerical simulation of fractional evolution model arising in viscoelastic mechanics. Appl. Numer. Math. 2021, 169, 303–320.
27. Zhai, S.; Feng, X. Investigations on several compact ADI methods for the 2D time fractional diffusion equation. Numer. Heat Transf. Part B Fundam. 2016, 69, 364–376.
28. Thakoor, N.; Behera, D.K. A new computational technique based on localized radial basis functions for fractal subdiffusion. In Proceedings of the 1st International Conference on Computational Applied Sciences & IT’S Applications, Jaipur, India, 28–29 April 2020; AIP Publishing: Melville, NY, USA, 2023; Volume 2768.
29. Dimitrov, Y. Approximations of the fractional integral and numerical solutions of fractional integral equations. Commun. Appl. Math. Comput. 2021, 3, 545–569.
30. Ciesielski, M.; Grodzki, G. Numerical Approximations of the Riemann–Liouville and Riesz fractional integrals. Informatica 2024, 35, 21–46.
31. Nowak, A.; Kustal, D.; Sun, H.; Blaszczyk, T. Neural network approximation of the composition of fractional operators and its application to the fractional Euler-Bernoulli beam equation. Appl. Math. Comput. 2025, 501, 129475.
32. Elgindy, K.T.; Smith-Miles, K.A. Fast, accurate, and small-scale direct trajectory optimization using a Gegenbauer transcription method. J. Comput. Appl. Math. 2013, 251, 93–116.
33. Elgindy, K.T.; Smith-Miles, K.A. Optimal Gegenbauer quadrature over arbitrary integration nodes. J. Comput. Appl. Math. 2013, 242, 82–106.
34. Elgindy, K.T. High-order numerical solution of second-order one-dimensional hyperbolic telegraph equation using a shifted Gegenbauer pseudospectral method. Numer. Methods Partial Differ. Equations 2016, 32, 307–349.
35. Elgindy, K.T. High-order, stable, and efficient pseudospectral method using barycentric Gegenbauer quadratures. Appl. Numer. Math. 2017, 113, 1–25.
36. Elgindy, K.T. Optimal control of a parabolic distributed parameter system using a fully exponentially convergent barycentric shifted Gegenbauer integral pseudospectral method. J. Ind. Manag. Optim. 2018, 14, 473.
37. Elgindy, K.T.; Refat, H.M. High-order shifted Gegenbauer integral pseudo-spectral method for solving differential equations of Lane–Emden type. Appl. Numer. Math. 2018, 128, 98–124.
38. Elgindy, K.T.; Refat, H.M. Direct integral pseudospectral and integral spectral methods for solving a class of infinite horizon optimal output feedback control problems using rational and exponential Gegenbauer polynomials. Math. Comput. Simul. 2024, 219, 297–320.
39. Elgindy, K.T.; Karasözen, B. Distributed optimal control of viscous Burgers’ equation via a high-order, linearization, integral, nodal discontinuous Gegenbauer-Galerkin method. Optim. Control Appl. Methods 2020, 41, 253–277.
40. Elgindy, K. Gegenbauer Collocation Integration Methods: Advances in Computational Optimal Control Theory. Ph.D. Thesis, School of Mathematical Sciences, Faculty of Science, Monash University, Clayton, VIC, Australia, 2013.
41. Batiha, I.M.; Alshorm, S.; Al-Husban, A.; Saadeh, R.; Gharib, G.; Momani, S. The n-point composite fractional formula for approximating Riemann–Liouville integrator. Symmetry 2023, 15, 938.
42. Gottlieb, D.; Shu, C. On the Gibbs phenomenon V: Recovering exponential accuracy from collocation point values of a piecewise analytic function. Numer. Math. 1995, 71, 511–526.
43. Elgindy, K.T.; Smith-Miles, K.A. Solving boundary value problems, integral, and integro-differential equations using Gegenbauer integration matrices. J. Comput. Appl. Math. 2013, 237, 307–325.
Figure 1. Workflow of the GBFA method for RLFI approximation. The main path (solid arrows) shows the standard procedure: (1) interpolate the input function at SGG nodes, (2) apply the RLFI operator with a variable transformation, (3) approximate the integrals of SG polynomials using SG quadrature, (4) construct the FSGIM, and (5) compute the final approximation. The dashed arrow indicates an alternative path: precomputing the FSGIM for direct evaluation and repeated use. The tunable parameters $\lambda$ (interpolation) and $\lambda_q$ (quadrature) enable optimization across different problems.
Figure 2. (Left) Comparison of the functions $T_2$ (blue curve) and $T_3$ (red curve) over the interval $\mathbb{R}_{-1/2}$. (Right) The product $T_2 T_3$ (purple), showing the combined behavior of the two functions. All plots demonstrate the dependence on the parameter $\lambda$ in the negative domain near zero.
Figure 3. Behavior of the bounding constant $\vartheta_{\alpha,\lambda}$ as $\alpha$ varies from 0.1 to 1, shown for eight representative values of $\lambda$ ($-0.49$, $-0.4$, $-0.2$, $-0.1351$, $0$, $0.5$, $1$, and $5$). Each curve corresponds to a distinct $\lambda$ value: dark red ($\lambda = -0.49$), green ($\lambda = -0.4$), olive ($\lambda = -0.2$), bright red ($\lambda = -0.1351$), dark green ($\lambda = 0$), blue ($\lambda = 0.5$), magenta ($\lambda = 1$), and orange ($\lambda = 5$).
Figure 4. Logarithmic absolute errors of the RLFI approximations for the power function $f$, computed using the GBFA method. The fractional order is set to $\alpha = 0.5$, and approximations are evaluated at $t = 0.5$. Gegenbauer interpolant degrees match the function’s degrees for $n = 3{:}2{:}11$. The figure presents errors under different conditions: (Left): varying $\lambda$ with fixed $\lambda_q = 0.5$ and $n_q = 4$. (Second-left): varying $\lambda_q$ with fixed $\lambda = 0.5$ and $n_q = 4$. (Second-right): varying $n_q$ with $\lambda = \lambda_q = 0.5$. (Right): comparison between RLIM and MATLAB’s integral function using $n_q = 12$ and $\lambda = \lambda_q = 0.5$. “Error” refers to the difference between the true RLFI value and its approximation. Missing colored lines indicate zero error (approximation exact within numerical precision).
Figure 5. Logarithmic absolute errors of the RLFI approximations for the exponential function $g$, computed using the GBFA method. The fractional order is set to $\alpha = 0.5$, and the approximations are evaluated at $t = 0.5$ using a 13th-degree Gegenbauer interpolant for $k = -2, -1, 1, 2$, except for the second-right subplot, where $n$ varies. The figure presents errors under different conditions: (Left): varying $\lambda$ with fixed $\lambda_q = 0.5$ and $n_q = 4$. (Second-left): varying $\lambda_q$ with fixed $\lambda = 0.5$ and $n_q = 4$. (Middle): varying $n_q$ with $\lambda = \lambda_q = 0.5$. (Second-right): varying $n$ with $\lambda = \lambda_q = 0.5$ and $n_q = 12$. (Right): comparison between RLIM and MATLAB’s integral function using $n_q = 12$ and $\lambda_1 = \lambda_2 = 0.5$. “Error” refers to the difference between the true RLFI value and its approximation.
Figure 6. Comparison of the elapsed computation times (displayed on a logarithmic scale) for the GBFA method and MATLAB’s integral function as a function of the polynomial degree $n$. All experiments were performed with $t = 0.5$, $\alpha = 0.5$, $\lambda_1 = \lambda_2 = 0.5$, and $n_q = 12$.
Figure 7. The left subplot demonstrates the exponential convergence of the absolute error as $n$ increases, with $n_q$ fixed at 12. The right subplot illustrates the exponential convergence of the absolute error as $n_q$ increases, with $n$ fixed at 13. Both subplots show the absolute error on a logarithmic scale.
Figure 8. The logarithmic absolute errors in the numerical evaluation of the RLFI of the function $g$ at $t = 0.5$ for various values of $\lambda$ extremely close to $-0.5$, using $n = 50$, $n_q = 4$, and $\lambda_q = 0.5$. Each curve corresponds to a different $\lambda$ parameter, demonstrating the sensitivity of the GBFA method as $\lambda$ approaches $-0.5$.
Table 1. Table of symbols and their meanings.
Logical Operators and Quantifiers
$\forall$: for all; $\forall_a$: for any; $\forall_{aa}$: for almost all; $\forall_e$: for each; $\forall_s$: for some; $\forall_{rs}$: for (a) relatively small; $\forall_{rl}$: for (a) relatively large; $\exists$: there exist(s)
Comparison and Relation Symbols
$\ll$: much less than; $\sim$: asymptotically equivalent; $\mathrel{\tilde{<}}$: asymptotically less than; $\not\approx$: not sufficiently close to
$f(n) = O(g(n))$: $\exists\, n_0,\, c > 0 : 0 \le f(n) \le c\, g(n)\ \forall n \ge n_0$
$f(n) = o(g(n))$: $\lim_{n\to\infty} f(n)/g(n) = 0$
Sets and Number Systems
$\mathbb{C}$: set of complex numbers; $\mathbb{R}$: set of real numbers; $\mathbb{R}_{\Theta}$: set of nonzero real numbers; $\mathbb{R}_0^+$: set of non-negative real numbers
$\mathbb{R}_{-1/2}$: $\{x \in \mathbb{R} : -1/2 < x < 0\}$; $\mathbb{Z}$: set of integers; $\mathbb{Z}^+$: set of positive integers; $\mathbb{Z}_0^+$: set of non-negative integers; $\mathbb{Z}_e^+$: set of positive even integers
$\mathbb{J}_n$: $\{0{:}n-1\}$; $\mathbb{J}_n^+$: $\mathbb{J}_n \cup \{n\}$; $\mathbb{N}_n$: $\{1{:}n\}$; $\mathbb{N}_{m,n}$: $\{m{:}n\}$; $\{y_{1:n}\}$: set of symbols $y_1, y_2, \ldots, y_n$
$\mathbb{G}_n^{\lambda}$: set of Gegenbauer–Gauss zeros of the $(n+1)$th-degree Gegenbauer polynomial with index $\lambda > -1/2$; $\hat{\mathbb{G}}_n^{\lambda}$: set of SGG points in the interval $[0,1]$
$\Omega_{a,b}$: closed interval $[a,b]$; $\Omega^{\circ}$: interior of the set $\Omega$; $\Omega_T$: specific interval $[0,T]$; $\Omega_{L\times T}$: Cartesian product $\Omega_L \times \Omega_T$
$N_{\delta}(a) = \{x \mid d(x,a) < \delta\}$: $0 < \delta \ll 1$, where $d(x,a)$ is the distance (or metric) between the points $x$ and $a$
Lists and Sequences
$i{:}j{:}k$: list of numbers from $i$ to $k$ with increment $j$; $i{:}k$: list of numbers from $i$ to $k$ with increment 1
$y_{1:n}$ or $\{y_i\}_{i=1:n}$: list of symbols $y_1, y_2, \ldots, y_n$
Functions and Special Functions
$W$: Lambert W function; $\Gamma(\cdot)$: Gamma function; $\Gamma(\cdot,\cdot)$: upper incomplete gamma function; $\lceil\cdot\rceil$: ceiling function; $\lfloor\cdot\rfloor$: floor function
$E_{\alpha,\beta}(z)$: two-parameter Mittag-Leffler function
${}_1F_1(a;c;z)$: the confluent hypergeometric function, defined as $\sum_{n=0}^{\infty} \frac{a^{\underline{n}}}{c^{\underline{n}}} \frac{z^n}{n!}$, where $q^{\underline{n}}$ is the Pochhammer symbol
${}_2F_1(a,b;c;z)$: the Gauss hypergeometric function, defined as $\sum_{n=0}^{\infty} \frac{a^{\underline{n}}\, b^{\underline{n}}}{c^{\underline{n}}} \frac{z^n}{n!}$
$G_n^{\lambda}$: $n$th-degree Gegenbauer polynomial with index $\lambda > -1/2$
$\hat{G}_n^{\lambda}$: $n$th-degree SG polynomial with index $\lambda > -1/2$ defined on $\Omega_1$
$(x)_m$: the generalized falling factorial $\frac{\Gamma(x+1)}{\Gamma(x-m+1)}$ $\forall x \in \mathbb{C},\, m \in \mathbb{Z}_0^+$; $\delta_{m,n}$: the Kronecker delta with integer indices $m$ and $n$; $\mathrm{supp}(f)$: support of the function $f$
$f^*$: complex conjugate of $f$; $\mathbb{I}_{j \le k}$: indicator (characteristic) function: $1$ if $j \le k$, $0$ otherwise
Notation and Shorthands
$f_n$: $f(t_n)$; $f_{N,n}$: $f_N(t_n)$
Function Spaces
$\mathfrak{C}$: set of all complex-valued functions; $\mathfrak{F}$: set of all real-valued functions; $\mathrm{Def}_{\Omega}$: space of all functions defined on $\Omega$
$C^k(\Omega)$: space of $k$ times continuously differentiable functions on $\Omega$
$L^p(\Omega)$: Banach space of measurable functions $u \in \mathrm{Def}_{\Omega}$ with $\|u\|_{L^p} = \left(I_{\Omega}\,|u|^p\right)^{1/p} < \infty$
$L^{\infty}(\Omega)$: space of all essentially bounded measurable functions on $\Omega$
$H^{k,p}(\Omega)$: Sobolev space of weakly differentiable functions with integrable weak derivatives up to order $k$
Integrals and Derivatives
$I_b^{(t)} h$: $\int_0^b h(t)\,dt$; $I_{a,b}^{(t)} h$: $\int_a^b h(t)\,dt$; $I_t^{(t)} h$: $\int_0^t h(\cdot)\,d(\cdot)$; $I_b^{(t)} h[u(t)]$: $\int_0^b h(u(t))\,dt$
$I_{a,b}^{(t)} h[u(t)]$: $\int_a^b h(u(t))\,dt$; $I_{\Omega_{a,b}}^{(x)} h$: $\int_a^b h(x)\,dx$; $I_t^{\alpha} f$: the left RLFI, defined by $\frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} f(\tau)\,d\tau$
$\partial_x$: $d/dx$; $\partial_x^n$: $d^n/dx^n$; ${}^{c}D_x^{\alpha} f$: $\alpha$th-order Caputo fractional derivative of $f$ at $x$
Norms and Metrics
$\|f\|_{L^{\infty}(\Omega)}$: $L^{\infty}$ norm, $\sup_{x\in\Omega}|f(x)| = \inf\{M \ge 0 : |f(x)| \le M\ \forall_{aa}\, x \in \Omega\}$; $\|\cdot\|_1$: $l^1$-norm; $\|\cdot\|_2$: Euclidean norm
Vectors and Matrices
$t_N$: $[t_{N,0}, t_{N,1}, \ldots, t_{N,N}]$; $\boldsymbol{g}_{0:N}$: $[g_0, g_1, \ldots, g_N]$; $\boldsymbol{g}^{(0:N)}$: $[g, g', \ldots, g^{(N)}]$; $\boldsymbol{c}^{0:N}$: $[1, c, c^2, \ldots, c^N]$; $\boldsymbol{t}_N$ or $[t_{N,0:N}]$: $[t_{N,0}, t_{N,1}, \ldots, t_{N,N}]$
$h(\boldsymbol{y})$: vector with $i$th element $h(y_i)$; $\boldsymbol{h}(y)$ or $h_{1:m}[y]$: $[h_1(y), \ldots, h_m(y)]$
$\boldsymbol{y}^{\div}$: vector of reciprocals of the elements of $\boldsymbol{y}$; that is, if $\boldsymbol{y} = [y_0, y_1, \ldots, y_n]$, then $\boldsymbol{y}^{\div} = [1/y_0, 1/y_1, \ldots, 1/y_n]$
$\mathbf{O}_n$: zero matrix of size $n$; $\mathrm{trp}\,\mathbf{A}$: transpose of matrix $\mathbf{A}$; $\odot$: Hadamard (element-wise) product
$\mathrm{resh}_{m,n}\,\mathbf{A}$: reshape $\mathbf{A}$ into an $m \times n$ matrix; $\mathrm{resh}_n\,\mathbf{A}$: reshape $\mathbf{A}$ into a square matrix of size $n$
$\mathbf{A}^{\odot m}$: the matrix obtained by raising each entry of $\mathbf{A}$ to the power $m$
Remark: A vector is represented in print by a bold italicized symbol, while a two-dimensional matrix is represented by a bold symbol, except for a row vector whose elements form a certain row of a matrix, which we also represent in bold.
Table 2. Comparison of the exact $0.5$th-order RLFI of $f(t) = 2t^3 + 8t$ over the interval $[0, 0.5]$ with the GBFA method, MATLAB’s integral function, and MATHEMATICA’s NIntegrate function approximations. The table also includes the CPU times for running the three numerical integration routines. All relative error approximations are rounded to 16 significant digits. The CPU times were computed in seconds (s).
Quantity (settings) | Value | Relative Error | CPU Time
Exact value | 2.218878969089873 | — | —
GBFA method ($\lambda = \lambda_q = 0.5$, $n = 3$, $n_q = 4$) | 2.218878969089874 | $2.001412497195449 \times 10^{-16}$ | 0.003125 s
MATLAB integral (RelTol = AbsTol = $10^{-15}$) | 2.218878973119478 | $1.816054080462689 \times 10^{-9}$ | 0.015625 s
MATHEMATICA NIntegrate (PrecisionGoal = AccuracyGoal = 15) | 2.2188789399346960 | $1.3139598073477670 \times 10^{-8}$ | 0.0098979 s
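The exact value in Table 2 can be reproduced independently from the RLFI power rule $I^{\alpha} t^{p} = \frac{\Gamma(p+1)}{\Gamma(p+\alpha+1)}\, t^{p+\alpha}$ applied term by term to $f(t) = 2t^3 + 8t$ (a minimal check, not the GBFA method):

    import math

    def rlfi_power(p, alpha, t):
        # RLFI of t^p: I^α t^p = Γ(p+1)/Γ(p+α+1) · t^{p+α}
        return math.gamma(p + 1) / math.gamma(p + alpha + 1) * t ** (p + alpha)

    t, alpha = 0.5, 0.5
    exact = 2 * rlfi_power(3, alpha, t) + 8 * rlfi_power(1, alpha, t)
    print(exact)  # ≈ 2.218878969089873, matching the tabulated exact value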