Abstract
This paper introduces a Gegenbauer-based fractional approximation (GBFA) method for high-precision approximation of the left Riemann–Liouville fractional integral (RLFI). By using precomputable fractional-order shifted Gegenbauer integration matrices (FSGIMs), the method achieves super-exponential convergence for smooth functions, delivering near machine-precision accuracy with minimal computational cost. Tunable shifted Gegenbauer (SG) parameters enable flexible optimization across diverse problems, while rigorous error analysis confirms rapid error decay under optimal settings. Numerical experiments demonstrate that the GBFA method outperforms MATLAB’s integral, MATHEMATICA’s NIntegrate, and existing techniques by up to two orders of magnitude in accuracy, with superior efficiency for varying fractional orders. Its adaptability and precision make the GBFA method a transformative tool for fractional calculus, ideal for modeling complex systems with memory and non-local behavior.
Keywords:
Riemann–Liouville fractional integral; shifted Gegenbauer polynomials; pseudospectral methods; super-exponential convergence; fractional-order integration matrix
MSC:
26A33; 41A10; 65D30
1. Introduction
Fractional calculus offers a powerful framework for modeling intricate systems characterized by memory and non-local interactions, finding applications in diverse fields such as viscoelasticity [1], anomalous diffusion [2], and control theory [3], among others. A fundamental concept in this domain is the left RLFI, defined for fractional order α > 0 as detailed in Table 1. In contrast to classical calculus, which operates under the assumption of local dependencies, fractional integrals, exemplified by the left RLFI, inherently account for cumulative effects over time through a singular kernel. This characteristic renders them particularly well-suited for modeling phenomena where past states significantly influence future behavior. The RLFI proves especially valuable in the description of complex dynamics exhibiting self-similarity, scale-invariance, or memory effects. For instance, in anomalous diffusion, the RLFI captures self-similar patterns and scale-invariant power-law behaviors, enabling the precise modeling of sub-diffusive processes [4]. In viscoelasticity, it accounts for memory effects by modeling stress relaxation with past-dependent dynamics [5] and reveals temporal symmetries, thus improving predictive accuracy [1]. These properties make the RLFI a critical tool for analyzing complex systems and improving their forecasting capabilities [3,6,7]. Nevertheless, the singular nature of the RLFI’s kernel, $(t-\tau)^{\alpha-1}$, presents substantial computational hurdles, as conventional numerical methods frequently encounter limitations in accuracy or incur high computational costs. Consequently, the development of efficient and high-precision approximation techniques for the RLFI is of paramount importance for advancing computational modeling across physics, engineering, and biology, where fractional calculus is increasingly employed to tackle real-world problems involving non-local behavior or fractal structures.
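For concreteness, the left RLFI in its standard form (with lower terminal 0; the paper’s own notation is collected in Table 1) reads

$$
\bigl(I^{\alpha} f\bigr)(t) \;=\; \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1}\, f(\tau)\, d\tau, \qquad \alpha > 0.
$$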
Table 1.
Table of Symbols and their meanings.
Existing approaches to RLFI approximation include wavelet-based methods [7,8,9,10,11,12], polynomial and orthogonal function techniques [13,14,15,16,17], finite difference and quadrature schemes [18,19,20,21], operational matrix methods [22,23,24,25], local meshless techniques [26], alternating direction implicit schemes [27], and radial basis functions [28]. While these methods have shown promise in specific contexts, they often struggle with trade-offs between accuracy, computational cost, and flexibility, particularly when adapting to diverse problem characteristics or varying fractional orders.
This study introduces the GBFA method, which overcomes these challenges through three key innovations: (i) Parameter adaptability, utilizing the tunable SG parameters (for interpolation) and (for quadrature approximation) to optimize performance across a wide range of problems; (ii) Super-exponential convergence, achieving rapid error decay for smooth functions, often reaching near machine-precision with modest node counts; and (iii) Computational efficiency, enabled by precomputable FSGIMs that minimize runtime costs. The method exploits the orthogonality and flexibility of SG polynomials to attain this super-exponential convergence, offering near machine-precision accuracy with minimal computational effort, particularly for systems necessitating repeated fractional integrations or displaying symmetric patterns. Numerical experiments demonstrate that the GBFA method significantly outperforms established tools like MATLAB’s integral function and MATHEMATICA’s NIntegrate, achieving up to two orders of magnitude higher accuracy in certain cases. It also surpasses prior methods, such as the trapezoidal approach of Dimitrov [29], the spline-based techniques of Ciesielski and Grodzki [30], and the neural network method [31], in both precision and efficiency. Sensitivity analysis reveals that choosing the quadrature parameter below the interpolation parameter often accelerates convergence, while rigorous error bounds confirm super-exponential decay under optimal parameter choices. The FSGIM’s invariance for fixed points and parameters enables precomputation, making the GBFA method ideal for problems requiring repeated fractional integrations with varying integrands. While the method itself exploits the mathematical properties of orthogonal polynomials, its application can be crucial in analyzing systems where symmetry or repeating patterns are fundamental, as fractional calculus is used to model phenomena with memory where past states influence future behavior, and identifying symmetries in such systems can simplify analysis and prediction.
This paper is organized as follows: Section 2 presents the GBFA framework. Section 3 analyzes computational complexity. Section 4 provides a detailed error analysis. Section 4.1 provides actionable guidelines for selecting the tunable parameters and , balancing accuracy and computational efficiency. Section 5 evaluates numerical performance. Section 6 presents the conclusions of this study, together with future work on potential methodological extensions to non-smooth functions encountered in fractional calculus. Appendix A lists all acronyms used in the paper. Appendix B includes supporting mathematical proofs. Finally, Appendix C provides a detailed dimensional analysis of the matrix expressions in Equations (15) and (16), which are central to deriving our proposed RLFI approximation method.
2. Numerical Approximation of RLFI
This section presents the GBFA method adapted for approximating the RLFI. We begin with brief background on the classical Gegenbauer polynomials, as they form the foundation for the GBFA method developed in this study.
The classical Gegenbauer polynomials $C_n^{(\lambda)}(x)$, defined for $x \in [-1,1]$ and $\lambda > -1/2$, are orthogonal with respect to the weight function $w^{(\lambda)}(x) = (1-x^2)^{\lambda-1/2}$. They satisfy the three-term recurrence relation:

$$
n\,C_n^{(\lambda)}(x) = 2(n+\lambda-1)\,x\,C_{n-1}^{(\lambda)}(x) - (n+2\lambda-2)\,C_{n-2}^{(\lambda)}(x), \qquad n \ge 2,
$$

with $C_0^{(\lambda)}(x) = 1$ and $C_1^{(\lambda)}(x) = 2\lambda x$. Their orthogonality is given by

$$
\int_{-1}^{1} C_m^{(\lambda)}(x)\,C_n^{(\lambda)}(x)\,(1-x^2)^{\lambda-1/2}\,dx = h_n^{(\lambda)}\,\delta_{mn},
$$

where

$$
h_n^{(\lambda)} = \frac{\pi\,2^{1-2\lambda}\,\Gamma(n+2\lambda)}{n!\,(n+\lambda)\,\Gamma(\lambda)^2}.
$$
The SG polynomials , used in this study, are obtained from the classical polynomials via a linear transformation mapping $[-1,1]$ onto the shifted domain, inheriting analogous properties adjusted for the shifted domain. In particular, they satisfy the three-term recurrence relation:
with initial conditions and . They are directly related to Jacobi polynomials by the following identity:
where is the nth-degree Jacobi polynomial with parameters [32] (Equation (A.1)). One may also directly evaluate the SG polynomial of any degree n using the identity
cf. [33] (Equation (D.2)). For a comprehensive treatment of their properties and quadrature rules, see [34,35,36,37].
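As a quick illustrative check of the classical recurrence above (under the standard normalization $C_0^{(\lambda)} = 1$, $C_1^{(\lambda)}(x) = 2\lambda x$; the shifted, possibly renormalized variants used by the GBFA method differ only by a change of variables and scaling), the following sketch evaluates the polynomials and verifies them against SciPy:

```python
# Illustrative check of the classical Gegenbauer three-term recurrence
# (standard normalization C_0 = 1, C_1(x) = 2*lam*x).
import numpy as np
from scipy.special import eval_gegenbauer

def gegenbauer_recurrence(n, lam, x):
    """Evaluate C_n^{(lam)}(x) via the three-term recurrence."""
    c_prev, c_curr = np.ones_like(x), 2.0 * lam * x
    if n == 0:
        return c_prev
    for k in range(2, n + 1):
        c_prev, c_curr = c_curr, (2.0 * (k + lam - 1.0) * x * c_curr
                                  - (k + 2.0 * lam - 2.0) * c_prev) / k
    return c_curr

x = np.linspace(-1.0, 1.0, 5)
for n, lam in [(3, 0.5), (6, 1.5)]:   # lam = 0.5 recovers Legendre polynomials
    assert np.allclose(gegenbauer_recurrence(n, lam, x),
                       eval_gegenbauer(n, lam, x))
```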
Let , , and . The GBFA interpolant of f is given by
where is defined as
with normalization factors and Christoffel numbers:
In matrix form, we can write Equation (8) as
This allows us to approximate the RLFI as follows:
Using the transformation
Formula (12) becomes
Notice here that the transformed integrand is non-singular in y. However, the singularity at maps to , and the behavior near reflects the original singularity via . The singularity is regularized in the new integrand, facilitating numerical computation while maintaining the RLFI’s mathematical structure. Substituting Equation (11) into Equation (14):
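To make the regularization concrete, the sketch below assumes the substitution takes the representative form $\tau = t\,(1 - y^{1/\alpha})$, under which $I^{\alpha}f(t) = \frac{t^{\alpha}}{\alpha\,\Gamma(\alpha)} \int_0^1 f\bigl(t(1-y^{1/\alpha})\bigr)\,dy$ with a bounded integrand on $[0,1]$; the paper’s Equation (13) may differ in detail, so this is illustrative only:

```python
# Sketch of a singularity-removing substitution for the left RLFI, assuming the
# representative form tau = t*(1 - y**(1/alpha)); the transformed integrand is
# bounded on [0, 1], so a plain Gauss-Legendre rule applies.
import numpy as np
from math import gamma

def rlfi_transformed(f, t, alpha, n_quad=32):
    y, w = np.polynomial.legendre.leggauss(n_quad)   # Gauss-Legendre on [-1, 1]
    y, w = 0.5 * (y + 1.0), 0.5 * w                  # map rule to [0, 1]
    return t**alpha / (alpha * gamma(alpha)) * (w @ f(t * (1.0 - y**(1.0 / alpha))))

# Validate against the exact power rule: I^alpha t^mu = Gamma(mu+1)/Gamma(mu+alpha+1) * t^(mu+alpha).
alpha, mu, t = 0.5, 3.0, 0.8
exact = gamma(mu + 1.0) / gamma(mu + alpha + 1.0) * t**(mu + alpha)
print(abs(rlfi_transformed(lambda x: x**mu, t, alpha) - exact))  # near machine epsilon
```

For $\alpha = 1/2$ and $f(t) = t^{3}$ the transformed integrand is a degree-6 polynomial in $y$, so the Gauss rule is exact up to rounding, illustrating how the substitution trades a singular kernel for a smooth integrand.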
For multiple points , we extend Equation (11):
Thus,
where is an column vector, with each element raised to the power , and
Alternatively,
where
We term the “th-order FSGIM” for the RLFI and the “th-order FSGIM Generator.” Equation (17) is preferred for computational efficiency. For a detailed dimensional analysis of the matrix expressions in Equation (15), including the explicit form of and clarification of the dimensions and reshaping operation in Equation (16), see Appendix C.
To compute , we can use the SGIRV with SGG nodes :
cf. [35] (Algorithm 6 or 7). Formula (21) represents the -GBFA quadrature used for the numerical partial calculation of the RLFI. We denote the approximate th-order RLFI of a function at point t, computed using Equation (21) in conjunction with either Equation (17) or Equation (20), by (The parameters in the left superscripts and subscripts of are the same parameters used by the numerical method: n and indicate the degree and index of the SG polynomial , while and specify the parameters of the polynomial basis used in the quadrature scheme. E is a letter distinguishing this discrete operator from other operators in the literature, referring to the first initial of the author’s surname).
Figure 1 provides a visual summary of the GBFA method’s workflow, illustrating the seamless integration of interpolation, transformation, and quadrature steps. This schematic highlights the method’s flexibility, as the tunable parameters and allow practitioners to tailor the approximation to specific problem characteristics, optimizing both accuracy and computational efficiency. The alternative path of precomputing the FSGIM, as indicated by the dashed arrow, underscores the method’s suitability for applications requiring repeated evaluations.
Figure 1.
Workflow of the GBFA method for RLFI approximation. The main path (solid arrows) shows the standard procedure: (1) interpolate the input function at SGG nodes, (2) apply the RLFI operator with variable transformation, (3) approximate the integrals of SG polynomials using SG quadrature, (4) construct the FSGIM, and (5) compute the final approximation. The dashed arrow indicates an alternative path: precomputing the FSGIM for direct evaluation and repeated use. The tunable parameters (interpolation) and (quadrature) enable optimization across different problems.
Remark 1.
The selection of for defining the RLFI is motivated by both mathematical rigor and practical relevance. The kernel exhibits a singularity at , necessitating integrability of the product for the RLFI to be well-defined. The space , characterized by , ensures sufficient regularity to guarantee convergence. Since the singularity is integrable for , the RLFI remains finite when . Moreover, -integrability is consistent with many physical models in viscoelasticity, anomalous diffusion, and control theory, making it a natural function space for modeling. From another viewpoint, this choice does not conflict with the GBFA method’s superior convergence for smoother functions. While accommodates a broad class of functions, the GBFA method, based on pseudospectral techniques, achieves super-exponential convergence for analytic functions, as we demonstrate in Section 4, and algebraic rates for less regular functions, as is typical in pseudospectral methods. Thus, the method balances general applicability with rapid convergence under smoothness. Enhancements such as modal filtering and domain decomposition, as we discuss later in Section 6, can further improve performance for non-smooth functions, reinforcing the suitability of the setting.
3. Computational Complexity
This section provides a computational complexity analysis of constructing the th-order FSGIM and its generator . The analysis is based on the key matrix operations involved in the construction process, which we analyze individually in the following:
- The term involves raising each element of an -dimensional vector to the power . This operation requires operations.
- Constructing from involves a diagonal scaling by , which requires another operations.
- The matrix is constructed using several matrix multiplications and element-wise operations. For each entry of , the dominant steps include the following:
  - The computation of using the three-term recurrence relation requires operations per point. Since the polynomial evaluation is required for polynomials up to degree n, this requires operations.
  - The quadrature approximation involves evaluating a polynomial at transformed nodes. The cost of calculating depends on the chosen methods for computing factorials and the Gamma function, which can be considered a constant overhead. The computation of involves raising each element of the column vector to the power , which is linear in . The cost of the matrix–vector multiplication is also linear in . Therefore, the computational cost of this step is for each . The overall cost, considering all polynomial functions involved, is .
  - The Hadamard product introduces another operations.
  - The vector is the element-wise reciprocal of , an column vector of coefficients. To evaluate , we compute the reciprocal for each element , . This requires one floating-point division per element, totaling divisions. Thus, the evaluation of requires operations, as the operation scales linearly with the vector length.
  - The product of by the result from the Hadamard product requires operations.
  - The final diagonal scaling by contributes .
Summing the dominant terms, the overall computational complexity of constructing is per entry of . Therefore, the total number of operations required to construct the matrix for all entries of is .
Once the FSGIM is precomputed, applying it to compute the RLFI of a function requires only a matrix–vector multiplication with complexity . The FSGIM’s invariance for fixed points and parameters enables precomputation, making the GBFA method ideal for problems requiring repeated fractional integrations with varying integrands. This indicates that the method is particularly efficient when (i) multiple integrations are needed with the same parameters, (ii) the same set of evaluation points is used repeatedly, and (iii) different functions need to be integrated with the same fractional order. The precomputation approach becomes increasingly advantageous as the number of repeated evaluations increases, since the one-time cost is amortized across multiple applications. A sketch of this precompute-then-reuse pattern follows.
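The sketch below builds a generic RLFI integration matrix once and reuses it across several integrands with a single matrix–vector product each. It mimics the FSGIM workflow but is not the paper’s FSGIM construction: plain Chebyshev nodes, barycentric Lagrange bases, and the representative substitution from the previous sketch stand in for the SG-specific machinery.

```python
# Amortization sketch: precompute B with B[i, j] ~ (I^alpha l_j)(t_i), where l_j are
# Lagrange basis polynomials over interpolation nodes; then I^alpha f(t_i) ~ (B @ f(nodes))[i].
import numpy as np
from math import gamma, exp, erf

def lagrange_basis(nodes, x):
    """Evaluate all Lagrange basis polynomials for `nodes` at points `x` (barycentric form)."""
    w = np.array([1.0 / np.prod(nodes[j] - np.delete(nodes, j))
                  for j in range(len(nodes))])
    X = x[:, None] - nodes[None, :]
    hit = np.isclose(X, 0.0)
    X[hit] = 1.0                                   # placeholder; exact node hits fixed below
    L = (w / X) / (w / X).sum(axis=1, keepdims=True)
    rows = hit.any(axis=1)
    L[rows] = hit[rows].astype(float)              # exact hits -> one-hot rows
    return L

def rlfi_matrix(alpha, nodes, t_eval, n_quad=64):
    """Integration matrix via the substitution tau = t*(1 - y**(1/alpha)) (assumed form)."""
    y, wq = np.polynomial.legendre.leggauss(n_quad)
    y, wq = 0.5 * (y + 1.0), 0.5 * wq
    B = np.empty((len(t_eval), len(nodes)))
    for i, t in enumerate(t_eval):
        tau = t * (1.0 - y**(1.0 / alpha))         # quadrature abscissae in [0, t]
        B[i] = t**alpha / (alpha * gamma(alpha)) * (wq @ lagrange_basis(nodes, tau))
    return B

alpha, n = 0.5, 12
nodes = 0.5 * (1.0 - np.cos(np.pi * np.arange(n + 1) / n))  # Chebyshev points on [0, 1]
t_eval = np.array([0.25, 0.5, 1.0])
B = rlfi_matrix(alpha, nodes, t_eval)                       # one-time cost
vals_exp = B @ np.exp(nodes)                                # reuse: one mat-vec per integrand
vals_sin = B @ np.sin(nodes)
# Spot check: for alpha = 1/2, I^alpha e^t = e^t * erf(sqrt(t)).
print(abs(vals_exp[-1] - exp(1.0) * erf(1.0)))              # small interpolation error
```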
4. Error Analysis
The following theorem establishes the truncation error of the th-order GBFA quadrature associated with the th-order FSGIM in closed form.
Theorem 1.
Proof.
The Lagrange interpolation error associated with the GBFA interpolation (7) is given by
where is given by Equation (24), which can be directly derived from [34] (Equation (4.12)) by setting . Applying the RLFI operator on both sides of Equation (25) results in the truncation error
The proof is accomplished from Equation (26) by applying the change of variables (13) on the RLFI of . □
The following theorem provides an upper bound for the truncation error (23).
Theorem 2.
Let , and suppose that the assumptions of Theorem 1 hold. Then the truncation error satisfies the asymptotic bound
where
is a constant dependent on λ, and
Proof.
Observe first that . Notice also that
by definition. The asymptotic inequality (27) results immediately from (23) after applying the sharp inequalities of the Gamma function [36] (Inequality (96)) to Equation (29) and using [37] (Lemma 5.1), which gives the uniform norm of Gegenbauer polynomials and their associated shifted forms. □
Theorem 2 shows that the error bound is influenced by the smoothness of the function f (through its derivatives) and the specific values of and , with a super-exponential decay rate as , which guarantees that the error becomes negligible even for moderate values of n. By looking at the bounding constant , we notice that, while holding fixed, the factor decays exponentially as increases because dominates. For large , , so the factor behaves like 1. Thus, it does not significantly affect the behavior as increases. The factor grows exponentially as increases. Combining these observations, the dominant behavior as increases is determined by the exponential growth of and the exponential decay of . The exponential decay dominates, so decays as increases, leading to a tighter error bound and improved convergence rate. On the other hand, considering large values while holding fixed, the factor decays as increases. The factor approaches 1 as increases. For large , , so the factor behaves like . This grows as increases. Finally, the factor behaves like 1 as increases. Thus, it does not significantly affect the behavior as increases. Combining these observations, the dominant behavior as increases is determined by the growth of and the decay of . The decay of dominates, so also decays as increases. It is noteworthy that remains finite as , even though , where . This divergence is offset by the behavior of :
Specifically, for , using the approximation for large x:
Consequently,
Hence, as , driven by the product . For , we notice that is strictly concave with a maximum value at
rounded to 4 significant digits; cf. Theorem A1. Figure 2 shows further the plots of and
where grows at a slow, quasi-linear rate, while decays at a comparable rate; thus, their product remains positive with a small bounded variation. This shows that, while holding fixed, the bounding constant displays a unimodal behavior: it rises from 0 as increases from , attains a maximum at , and subsequently decays monotonically for . Figure 3 highlights how decays with increasing while exhibiting varying sensitivity to , with the case (red curve) representing the parameter value that maximizes for fixed .
Figure 2.
(Left) Comparison of the functions (blue curve) and (red curve) over the interval . (Right) The product (purple), showing the combined behavior of the two functions. All plots demonstrate the dependence on the parameter in the negative domain near zero.
Figure 3.
Behavior of the bounding constant as varies from to 1, shown for eight representative values of (, , , , 0, , 1, and 5). Each curve corresponds to a distinct value: dark red (), green (), olive (), bright red (), dark green (), blue (), magenta (), and orange ().
It is important to note that the bounding constant modulates the error bound without altering its super-exponential decay as . Near , , which shrinks the truncation error bound, improving accuracy despite potential sensitivity in the SG polynomials. At , the maximized widens the bound, though it still vanishes as . Beyond , the overall error bound decreases monotonically, despite a slower n-dependent decay for larger positive . The super-exponential term ensures rapid convergence in all cases. The decreasing after , combined with the fact that the truncation error bound (excluding ) is smaller for compared to its value for , further suggests that appears “optimal” in practice for minimizing the truncation error bound in a stable numerical scheme, given the SG polynomial instability near . However, for relatively small or moderate values of n, other choices of may be optimal, as the derived error bounds are asymptotic and apply only as .
In what follows, we analyze the truncation error of the quadrature Formula (21), and demonstrate how its results complement the preceding analysis.
Theorem 3.
Let , and assume that is interpolated by the SG polynomials with respect to the variable y at the SGG nodes . Then there exists a point such that the truncation error, , in the quadrature approximation (21) is given by
where is defined by
Proof.
The next theorem provides an upper bound on the quadrature truncation error derived in Theorem 3.
Theorem 4.
Proof.
Notice first that
by [37] (Lemma 5.1), since . We now consider the following two cases.
- Case I (): Lemmas A1 and A3 imply
- Case II (): Here by Lemma A3, and we have
When , the dominant term in becomes
Exponential decay occurs when
On the other hand, the dominant term in the error bound is given by
For convergence, we require
Given , and typically diverges unless
The relative choice of and in either case controls the error bound’s decay rate. In particular, choosing ensures faster convergence rates when due to the presence of the polynomial factor . For , choosing accelerates the convergence if Condition (43) holds.
The following theorem provides a rigorous asymptotic bound on the total truncation error for the RLFI approximation, combining both sources of error, namely, the interpolation and quadrature errors.
Theorem 5
(Asymptotic Total Truncation Error Bound). Suppose that is approximated by the GBFA interpolant (7), and the assumptions of Theorems 1 and 3 hold true. Then the total truncation error in the RLFI approximation of f, denoted by , arising from both the series truncation (7) and the quadrature approximation (21), is asymptotically bounded above by
where
with being a λ-dependent constant, and , and are constants with the definitions and properties outlined in Theorems 2 and 4, and Lemmas A1 and A3, and are as defined by [38] (Formulas (B.4) and (B.15)).
Proof.
The total truncation error combines the interpolation error from Theorem 2 and the accumulated quadrature errors from Theorem 4 for :
Using the bounds on Christoffel numbers and normalization factors from [38] (Lemmas B.1 and B.2), along with the uniform bound on Gegenbauer polynomials from [37] (Lemma 5.1), we obtain
The proof is completed by applying Theorems 2 and 4 on Formula (48), noting that occurs at . □
The total truncation error bound presented in Theorem 5 reveals several important insights about the convergence behavior of the RLFI approximation: (i) The total error consists of the interpolation error term and the accumulated quadrature error term. The interpolation error term decays at a super-exponential rate due to the factor . The quadrature error either vanishes when , decays exponentially when under Condition (41), or typically diverges when unless Condition (43) is fulfilled. Therefore, the quadrature nodes should scale appropriately with the interpolation mesh size in practice. (ii) The interpolation error bound tightens as increases, due to the factor’s decay. The parameter shows a unimodal influence on the interpolation error, with potentially maximum error size at about . The parameter should generally be chosen smaller than to accelerate the convergence of the quadrature error when . This analysis suggests that the proposed method achieves spectral convergence when the parameters are chosen appropriately. The quadrature precision should be selected to maintain balance between the two error components based on the desired accuracy and computational constraints.
Remark 2.
Theorem 5 provides a theoretical guarantee for the accuracy of the GBFA method when computing the RLFI. In simple terms, it shows that the error in our method’s approximations decreases extremely rapidly as we increase the number of computational points, particularly for smooth functions. This rapid error reduction, often referred to as super-exponential convergence, means that with just a few additional points, the GBFA method can achieve results that are nearly as accurate as the computer’s maximum precision allows. The theorem considers key parameters of the GBFA method: the polynomial degree n, the quadrature degree , and the Gegenbauer parameters λ and . It predicts that when these parameters are chosen wisely, the error shrinks faster than exponentially, as we demonstrate later in our numerical experiments. This is especially powerful for modeling complex systems with memory, such as those in viscoelasticity or anomalous diffusion, where high precision is critical. While the mathematical details of the theorem are complex, its core message is that the GBFA method is highly reliable and efficient, producing accurate results with minimal computational effort for a wide range of problems.
4.1. Practical Guidelines for Parameter Selection
The asymptotic analysis reveals dependencies on the parameters (for interpolation) and (for quadrature), which play crucial roles in determining the method’s accuracy and efficiency. Here, we provide practical guidance for selecting these parameters to balance interpolation error, quadrature error, and computational cost.
The effectiveness of the GBFA method for approximating the RLFI hinges on the appropriate selection of parameters and . Building upon established numerical approximation principles, our analysis incorporates specific considerations for RLFI computation. In particular, we identify the following effective operational range for the SG parameters and :
This range, previously recommended by Elgindy and Karasözen [39] based on extensive theoretical and numerical testing consistent with broader spectral approximation theory, helps avoid numerical instability caused by increased extrapolation effects associated with larger positive SG indices. Furthermore, SG polynomials exhibit blow-up behavior as their indices approach . Within , we observe a crucial balance between theoretical convergence and numerical stability for RLFI computation, a finding corroborated by our numerical investigations using the GBFA method, which consistently demonstrate superior performance across various test functions with parameter choices in this range.
Based on this analysis of error bounds and numerical performance, we recommend the following parameter selection strategies for RLFI approximation.
- For and : the selection of is feasible. Furthermore, choosing a smaller generally improves quadrature accuracy, with the notable exception of , where the quadrature error is often minimized, as we demonstrate later in Figure 4 and Figure 5.
Figure 4. Logarithmic absolute errors of the RLFI approximations for the power function f, computed using the GBFA method. The fractional order is set to , and approximations are evaluated at . Gegenbauer interpolant degrees match the function’s degrees for . The figure presents errors under different conditions: (Left): Varying with fixed and . (Second-left): Varying with fixed and . (Second-right): Varying with . (Right): Comparison between RLIM and MATLAB’s integral function using and . “Error” refers to the difference between the true RLFI value and its approximation. Missing colored lines indicate zero error (approximation exact within numerical precision).
Figure 5. Logarithmic absolute errors of the RLFI approximations for the exponential function g, computed using the GBFA method. The fractional order is set to , and the approximations are evaluated at using a 13th-degree Gegenbauer interpolant for , except for the second-right subplot, where n varies. The figure presents errors under different conditions: (Left): Varying with fixed and . (Second-left): Varying with fixed and . (Middle): Varying with . (Second-right): Varying n with and . (Right): Comparison between RLIM and MATLAB’s integral function using and . “Error” refers to the difference between the true RLFI value and its approximation.
- and :
- -
- For precision computations: Select and , where is defined by Equation (33). Here, is a subset of the recommended interval for SG indices. Positive values of are excluded from , as the polynomial error factor increases with positive , as shown in Equation (27). The -neighborhood is excluded because , the leading factor in the asymptotic interpolation error bound, peaks at , potentially increasing the error. By avoiding this neighborhood, parameter choices that could amplify interpolation errors are circumvented, ensuring robust performance for high-precision applications.
- -
- For standard computational scenarios: Utilize , which corresponds to shifted Chebyshev approximation. This recommendation employs the well-established optimality of Chebyshev approximation for smooth functions, offering a robust default that balances accuracy and efficiency for RLFI approximation.
This parameter selection guideline provides practical and effective guidance for balancing interpolation and quadrature errors and computational cost across a wide range of applications.
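As a compact, purely illustrative encoding of these guidelines, the helper below maps the two scenarios above to parameter suggestions; the function name and the numeric defaults are hypothetical placeholders (the paper’s recommended values are tied to its own error constants and are not reproduced here):

```python
# Hypothetical helper distilling Section 4.1's strategy. The concrete numbers are
# placeholders for illustration only: negative interpolation indices away from the
# SG blow-up at -1/2 and away from the neighborhood where the bounding constant
# peaks, with a shifted-Chebyshev-style default otherwise (assumed indexing).
def suggest_sg_parameters(high_precision: bool):
    if high_precision:
        lam_interp = -0.2   # placeholder: negative, away from -1/2 and the error peak
        lam_quad = -0.3     # placeholder: quadrature index chosen below lam_interp
    else:
        lam_interp = lam_quad = 0.0  # shifted-Chebyshev-style default
    return lam_interp, lam_quad
```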
5. Further Numerical Simulations
Example 1.
To demonstrate the accuracy of the derived numerical approximation formulas, we consider the power function , where , as our first test case. The RLFI of f is given analytically by
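For a generic power $f(t) = t^{\mu}$ with $\mu > -1$ (the exponent used in this example is one specific instance), the standard RLFI power rule gives

$$
I^{\alpha} t^{\mu} \;=\; \frac{\Gamma(\mu+1)}{\Gamma(\mu+\alpha+1)}\, t^{\mu+\alpha}.
$$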
Figure 4 displays the logarithmic absolute errors of the RLFI approximations computed using the GBFA method, with fractional order evaluated at . The figure consists of four subplots that investigate (i) the effects of varying the parameters , , and , and (ii) a comparative analysis between the GBFA method and MATLAB’s integral function with tolerance parameters set to . Our numerical experiments reveal several key observations:
- Variation of while holding other parameters constant () shows negligible impact on the error. The error reaches near machine epsilon precision at , consistent with Theorems 1 and 3, which predict the collapse of both interpolation and quadrature errors when and .
- For , the total error reduces to pure quadrature error since while .
- Variation of significantly affects the error, with generally yielding higher accuracy.
- Increasing either n or while fixing the other parameter leads to exponential error reduction.
The GBFA method achieves near machine-precision accuracy with parameter values and , outperforming MATLAB’s integral function by nearly two orders of magnitude. The method demonstrates remarkable stability, as evidenced by consistent error trends for , with nearly exact approximations obtained for in optimal parameter ranges.
Figure 6 further compares the computation times of the GBFA method and MATLAB’s integral function, plotted on a logarithmic scale. The GBFA method requires significantly lower computation times than MATLAB’s integral function, highlighting its efficiency: it achieves high accuracy at minimal computational cost.
Figure 6.
Comparison of the elapsed computation times (displayed on a logarithmic scale) for the GBFA Method and MATLAB’s integral Method as a function of the polynomial degree n. All experiments were performed with , and .
Example 2.
Next, we evaluate the th-order RLFI of , where , over . The analytical solution on is
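For the representative case $g(t) = e^{t}$ (the exact exponential and interval used in this example are specific instances), substituting $u = t - \tau$ in the RLFI yields the closed form

$$
I^{\alpha} e^{t} \;=\; \frac{e^{t}\,\gamma(\alpha, t)}{\Gamma(\alpha)} \;=\; t^{\alpha}\, E_{1,\alpha+1}(t),
$$

where $\gamma(\cdot,\cdot)$ is the lower incomplete gamma function and $E_{1,\alpha+1}$ is the two-parameter Mittag-Leffler function.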
Figure 5 presents the logarithmic absolute errors of the GBFA method, with subplots analyzing (i) the impact of , , n, and , and (ii) a performance comparison against MATLAB’s integral function (tolerances ). Some of the main findings include the following:
- Similar to Example 1, varying (with fixed small n, ) has minimal effect on accuracy, whereas consistently improves precision.
- Exponential error decay occurs when increasing either n or while holding the other constant.
- Near machine-epsilon accuracy is achieved for and , with the GBFA method surpassing integral by two orders of magnitude.
The method’s stability is further demonstrated by the uniform error trends for and its rapid exponential convergence (see Figure 7), underscoring its suitability for high-precision fractional calculus, particularly in the absence of closed-form solutions.
Figure 7.
The left subplot demonstrates the exponential convergence of the absolute error as n increases, with fixed at 12. The right subplot illustrates the exponential convergence of the absolute error as the parameter increases, with n fixed at 13. Both subplots show the absolute error on a logarithmic scale.
Notably, this test problem was previously studied in [29,30] on . The former employed an asymptotic expansion for trapezoidal RLFI approximation, while the latter used linear, quadratic, and three cubic spline variants, with all computations performed in 128-bit precision. The methods were tested for grid sizes (step sizes ). At , Dimitrov’s method achieved the smallest error of , whereas Ciesielski and Grodzki [30] reported their smallest error of using Cubic Spline Variant 1. With and , our method attains errors close to machine epsilon (), surpassing both methods in [29,30] by several orders of magnitude, even with significantly fewer computational nodes.
Section 4.1 notes the potential instability of SG polynomials as , a behavior not yet investigated within the GBFA framework. Figure 8 visually confirms this trend: initially, the errors decrease as approaches , since the leading error factor as , as explained by Theorem 2 and evident when transitions from to . However, the ill-conditioning of SG polynomials becomes prominent as approaches more closely, leading to larger errors. This non-monotonic behavior near underscores the practical stability limits of the method, aligning with the early observations in [40] on the sensitivity of Gegenbauer polynomials with negative parameters. In particular, Elgindy [40] discovered that for large degrees M, Gegenbauer polynomials with exhibit strong sensitivity to small perturbations in x, unlike Legendre or Chebyshev polynomials. For example, it was reported in [40] that evaluating at (exact: ) versus (perturbed by ) introduces an absolute error of , whereas Legendre and Chebyshev polynomials of the same degree incur errors of only . This highlights the risks of using SG polynomials with for high-order approximations, where even minor numerical perturbations can disrupt spectral convergence. These results reinforce the need to avoid such parameter regimes for robust approximations, as prescribed in Section 4.1.
Figure 8.
The logarithmic absolute errors in the numerical evaluation of the RLFI of the function g at for various values of extremely close to , using , and . Each curve corresponds to a different parameter, demonstrating the sensitivity of the GBFA method as approaches .
Example 3.
We evaluate the th-order RLFI of the function over the interval . The analytical solution on is given by
which serves as the reference for numerical comparison. Table 2 presents a comprehensive comparison of numerical results obtained using three distinct approaches: (i) the proposed GBFA method, (ii) MATLAB’s high-precision integral function, and (iii) MATHEMATICA’s NIntegrate function. This test case has been previously investigated by Batiha et al. [41] on the interval . Their approach employed an n-point composite fractional formula derived from a three-point central fractional difference scheme, incorporating generalized Taylor’s theorem and fundamental properties of fractional calculus. Notably, their implementation with subintervals achieved a relative error of approximately , which is significantly higher than the errors produced by the current method.
Table 2.
Comparison of the exact th-order RLFI of over the interval , the GBFA method, MATLAB’s integral function, and MATHEMATICA NIntegrate function approximations. The table also includes the CPU times for running the three numerical integration routines. All relative error approximations are rounded to 16 significant digits. The CPU times were computed in seconds (s).
Example 4.
Consider the αth-order RLFI of the function over the interval . The exact RLFI on is given by
cf. [31]. In the latter work, a shallow neural network with 50 hidden neurons was used to solve this problem. Trained on 1000 random points with targets , the network yielded a Euclidean error norm of for , as reported in [31]. The GBFA method, with , and , achieved a much smaller error norm of about when evaluated over 1000 equidistant points in .
6. Conclusions and Future Works
This study presents the GBFA method, a powerful approach for approximating the left RLFI. By using precomputable FSGIMs, the method achieves super-exponential convergence for smooth functions, delivering near machine-precision accuracy with minimal computational effort. The tunable Gegenbauer parameters and enable tailored optimization, with sensitivity analysis showing that choosing the quadrature parameter below the interpolation parameter often accelerates convergence. The strategic parameter selection guideline outlined in Section 4.1 improves the GBFA method’s versatility, ensuring high accuracy for a wide range of problem types. Rigorous error bounds confirm rapid error decay, ensuring robust performance across diverse problems. Numerical results highlight the GBFA method’s superiority, surpassing MATLAB’s integral, MATHEMATICA’s NIntegrate, and prior techniques such as the trapezoidal scheme [29], spline-based methods [30], and the neural network method [31] by up to several orders of magnitude in accuracy, while maintaining lower computational costs. The FSGIM’s invariance for fixed points improves efficiency, making the method ideal for repeated fractional integrations with varying integrands. These qualities establish the GBFA method as a versatile tool for fractional calculus, with significant potential to advance modeling of complex systems exhibiting memory and non-local behavior, and to inspire further innovations in computational mathematics.
The error bounds derived in this study are asymptotic in nature, meaning they are most accurate for large values of n and . While these bounds provide valuable insights into the theoretical convergence behavior of the method, they may not be tight or predictive for small-to-moderate values of n and . For practitioners, this implies that while the asymptotic bounds guarantee eventual convergence, careful numerical experimentation is recommended to determine the optimal values of n, , , and for a given problem. The numerical experiments presented in Section 5 demonstrate that the method performs exceptionally well even for small and moderate values of n and , but the asymptotic bounds should be interpreted as theoretical guides rather than precise predictors for all cases. To address this limitation, future work could focus on deriving non-asymptotic error bounds or developing heuristic strategies for parameter selection in practical scenarios. Additionally, adaptive algorithms could be explored to dynamically adjust n and based on local error estimates, further improving the method’s robustness for real-world applications. While the current work focuses on , the extension of the GBFA method to is the subject of ongoing research by the author and will be addressed in future publications. Exploring further adaptations to operators such as the right RLFI presents exciting avenues for future work.
The super-exponential convergence of spectral and pseudospectral methods, including those using Gegenbauer polynomials and Gauss nodes, typically degrades to algebraic convergence when applied to functions with limited regularity. This degradation is a well-established phenomenon in approximation theory, with the rate of algebraic convergence directly related to the function’s degree of smoothness. Specifically, for a function possessing k continuous derivatives, theoretical analysis predicts a convergence rate of approximately , where N represents the number of collocation points [42]. To improve the applicability of the GBFA method for non-smooth functions, several methodological extensions can be incorporated: (i) Modal filtering techniques offer one promising approach, effectively dampening spurious high-frequency oscillations without significantly compromising overall accuracy. This process involves applying appropriate filter functions to the spectral coefficients, thereby regularizing the approximation while preserving accuracy in smooth solution regions. When implemented within the GBFA framework, filtering the coefficients in the SG polynomial expansion can potentially recover higher convergence rates for functions with isolated non-smoothness. The tunable parameters inherent in SG polynomials may provide valuable flexibility for optimizing filter characteristics to address specific types of non-smoothness. (ii) Adaptive interpolation strategies represent another valuable extension, employing local refinement near singularities or implementing moving-node approaches to more accurately capture localized features. By strategically concentrating computational resources in regions of limited regularity, these methods maintain high accuracy while effectively accommodating non-smooth behavior. Within the GBFA method, this approach could be realized through non-uniform SG collocation points with increased density near known singularities. (iii) Domain decomposition techniques offer particularly powerful capabilities by partitioning the computational domain into subdomains with potentially different resolutions or spectral parameters. This approach accommodates irregularities while preserving the advantages of the GBFA within each smooth subregion. Domain decomposition proves especially effective for problems featuring isolated singularities or discontinuities, allowing the GBFA method to maintain super-exponential convergence in smooth subdomains while appropriately addressing non-smooth regions through specialized treatment. (iv) For fractional problems involving weakly singular or non-smooth solutions with potentially unbounded derivatives, graded meshes provide an effective solution. These non-uniform discretizations concentrate points near singularities according to carefully chosen distributions, often recovering optimal convergence rates despite the presence of singularities. The inherent flexibility of SG parameters makes the GBFA method particularly amenable to such adaptations. (v) Hybrid spectral-finite element approaches represent yet another viable pathway, combining the high accuracy of spectral methods in smooth regions with the flexibility of finite element methods near singularities. Such hybrid frameworks effectively balance accuracy and robustness for problems with limited regularity. 
The GBFA method could be integrated into these frameworks by utilizing its super-exponential convergence where appropriate while delegating singularity treatment to more specialized techniques. These theoretical considerations and methodological extensions can significantly expand the potential applicability of the GBFA method to non-smooth functions commonly encountered in fractional calculus applications. While the current implementation focuses on smooth functions to demonstrate the method’s super-exponential convergence properties, the framework possesses sufficient flexibility to accommodate various extensions for handling non-smooth behavior. Future research may investigate these techniques to extend the GBFA method’s applicability to a wider range of practical problems in fractional calculus.
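As a small illustration of the modal-filtering idea in extension (i), the sketch below damps the high modes of a polynomial expansion of a kinked function with a standard exponential filter; the use of Chebyshev coefficients (rather than SG coefficients) and the filter constants are illustrative assumptions:

```python
# Exponential modal filter applied to Chebyshev coefficients of |x|:
# sigma_k = exp(-c * (k/N)^p) damps spurious high modes while leaving low modes
# essentially untouched. The same idea applies to SG coefficients.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 64
k = np.arange(N + 1)
coeffs = C.chebinterpolate(np.abs, N)      # Chebyshev modal coefficients of |x|
sigma = np.exp(-36.0 * (k / N) ** 8)       # 8th-order filter; c = 36 ~ -ln(eps)
xs = np.linspace(-1.0, 1.0, 1001)
err_raw = np.max(np.abs(C.chebval(xs, coeffs) - np.abs(xs)))
err_filtered = np.max(np.abs(C.chebval(xs, coeffs * sigma) - np.abs(xs)))
print(err_raw, err_filtered)               # filtering suppresses high-mode oscillations
```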
Funding
The Article Processing Charges (APCs) for this publication were funded by Ajman University, United Arab Emirates.
Data Availability Statement
The author declares that the data supporting the findings of this study are available within this article.
Conflicts of Interest
The author declares there are no conflicts of interest.
Appendix A. List of Acronyms
Table A1.
List of Acronyms.
| Acronym | Meaning |
|---|---|
| FSGIM | Fractional-order shifted Gegenbauer integration matrix |
| GBFA | Gegenbauer-based fractional approximation |
| RLFI | Riemann–Liouville fractional integral |
| SGIRV | Shifted Gegenbauer integration row vector |
| SG | Shifted Gegenbauer |
| SGG | Shifted Gegenbauer–Gauss |
Appendix B. Mathematical Theorems and Proofs
Theorem A1.
The function is strictly concave over the interval , and attains its maximum value at
rounded to 4 significant digits.
Proof.
Notice that . Thus,
For , we find that , and is linear and decreasing in . Thus, for all . Since the logarithmic second derivative is negative, is strictly concave on . Since the logarithm is a strictly increasing function and is strictly concave, itself is also strictly concave on the interval . This proves the first part of the theorem. Now, to prove that attains its maximum at , we set the logarithmic derivative equal to zero:
Let :
Let : . The solution of this transcendental equation is . Thus, , from which the maximum point is determined. This completes the proof of the theorem. □
Figure A1 shows a sketch of on the interval .
Figure A1.
Behavior of for . The red dashed line indicates .
The following lemma establishes the asymptotic equivalence of .
Lemma A1.
Let and consider the leading coefficient of the th-degree, -indexed SG polynomial defined by
Then,
where
Proof.
We begin by expressing explicitly:
Since
by [43] (Lemma 2.2) and [34] (Lemma 4.2). Then,
Stirling’s approximations to the factorial and gamma functions give
Therefore,
Multiplying by the remaining terms:
This shows that
which completes the proof. □
The next lemma is needed for the proof of Lemma A3.
Lemma A2
(Falling Factorial Approximation). Let . Then the falling factorial satisfies
Proof.
Express the falling factorial as
Take logarithms to convert the product into a sum:
Since m is large, we can approximate the sum by using the midpoint rule for integrals:
For and large m, Taylor series expansions produce
and
Substituting these approximations back yields
Exponentiating gives
□
The next lemma gives the asymptotic upper bound on the growth rate of the parametric scalar .
Lemma A3.
Let and . Then
Proof.
Let . Then , and we can write by Lemma A2. Substituting this approximation and the sharp inequalities of the Gamma function [36] (Inequality (96)) into Equation (35) gives
where , and is a -dependent constant. Now, consider the case when . Applying Stirling’s approximation of factorials and the same sharp inequalities of the Gamma function to the parametric scalar gives
where , and is a constant dependent on the parameters m and . Since is small, we have
Substituting (A18) into (A17) yields
from which the proof is accomplished. □
Appendix C. Dimensional Analysis of Matrix Expressions in RLFI Approximations
This appendix provides a detailed dimensional analysis of the matrix expressions within Equations (15) and (16), addressing the mechanics of operations such as the Hadamard product ⊙ and Kronecker product ⊗. The following clarifications ensure clarity regarding the shapes of intermediate matrices and vectors, using notation defined in Table 1.
In Equation (15), the expression inside the brackets, , is a row vector. Here, is an column vector of SG polynomials, as defined in Table 1. The argument , where is , is an column vector with all elements equal to . Evaluating yields an matrix, where each row (for ) contains :
Applying the integral element-wise, as per Table 1, produces an matrix where each row is filled with :
where for . This matrix is equivalent to , where is the transpose of . The matrix , where is , is also , with elements for . The Hadamard product ⊙ multiplies these matrices element-wise, yielding an matrix. The row vector , of size , multiplies this matrix to produce a row vector, which is then multiplied by the diagonal matrix , resulting in a row vector. This vector is multiplied by , an vector, to compute the RLFI approximation at point t.
Likewise, in Equation (16), the expression inside the brackets, , is a row vector. The Kronecker product , with as and as , forms an column vector. Thus, , where is , is an matrix. Similarly, , with as and as , is , and is . The Hadamard product ⊙ yields an matrix. Multiplying by , a row vector, gives a row vector. The Kronecker product , where is and is , is an matrix. The final multiplication produces a row vector, which is reshaped into an matrix in column-major order, representing . This reshaping aligns the Lagrange basis functions for and .
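A toy shape check of the Kronecker-then-reshape pattern described above is given below; the small dimensions are arbitrary stand-ins for the paper’s sizes, and the comments only assert shapes, not the FSGIM’s actual entries:

```python
# Toy shape check for the Kronecker-product / column-major reshape pattern of
# Appendix C; n+1 = 3 modes and m = 4 evaluation points are arbitrary stand-ins.
import numpy as np

n_plus_1, m = 3, 4
a = np.arange(1.0, n_plus_1 + 1.0)        # stands in for an (n+1)-vector
b = np.ones(m)                            # stands in for an m-vector
kron = np.kron(a, b)                      # column vector of length (n+1)*m
print(kron.shape)                         # (12,)
M = kron.reshape(n_plus_1, m, order="F")  # column-major reshape, as in Equation (16)
print(M.shape)                            # (3, 4)
```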
References
- Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models; World Scientific: Singapore, 2022. [Google Scholar]
- Gorenflo, R.; Mainardi, F. Random walk models for space-fractional diffusion processes. Fract. Calc. Appl. Anal. 1998, 1, 167–191. [Google Scholar]
- Monje, C.A.; Chen, Y.; Vinagre, B.M.; Xue, D.; Feliu-Batlle, V. Fractional-Order Systems and Controls: Fundamentals and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
- Metzler, R.; Klafter, J. The random walk’s guide to anomalous diffusion: A fractional dynamics approach. Phys. Rep. 2000, 339, 1–77. [Google Scholar] [CrossRef]
- Bagley, R.L.; Torvik, P.J. A theoretical basis for the application of fractional calculus to viscoelasticity. J. Rheol. 1983, 27, 201–210. [Google Scholar] [CrossRef]
- Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
- Zhang, J.; Zhou, F.; Mao, N. Numerical optimization algorithm for solving time-fractional telegraph equations. Phys. Scr. 2025, 100, 045237. [Google Scholar] [CrossRef]
- Ghasempour, A.; Ordokhani, Y.; Sabermahani, S. Mittag-Leffler wavelets and their applications for solving fractional optimal control problems. JVC/J. Vib. Control 2025, 31, 753–767. [Google Scholar] [CrossRef]
- Rabiei, K.; Razzaghi, M. Fractional-order Boubaker wavelets method for solving fractional Riccati differential equations. Appl. Numer. Math. 2021, 168, 221–234. [Google Scholar] [CrossRef]
- Rahimkhani, P.; Heydari, M.H. Numerical investigation of Ψ-fractional differential equations using wavelets neural networks. Comput. Appl. Math. 2025, 44, 54. [Google Scholar] [CrossRef]
- Damircheli, D.; Razzaghi, M. A wavelet collocation method for fractional Black–Scholes equations by subdiffusive model. Numer. Methods Partial. Differ. Equations 2024, 40, e23103. [Google Scholar] [CrossRef]
- Rahimkhani, P.; Ordokhani, Y.; Babolian, E. Numerical solution of fractional pantograph differential equations by using generalized fractional-order Bernoulli wavelet. J. Comput. Appl. Math. 2017, 309, 493–510. [Google Scholar] [CrossRef]
- Barary, Z.; Cherati, A.Y.; Nemati, S. An efficient numerical scheme for solving a general class of fractional differential equations via fractional-order hybrid Jacobi functions. Commun. Nonlinear Sci. Numer. Simul. 2024, 128, 107599. [Google Scholar] [CrossRef]
- Deniz, S.; Özger, F.; Özger, Z.Ö.; Mohiuddine, S.; Ersoy, M.T. Numerical solution of fractional Volterra integral equations based on rational Chebyshev approximation. Miskolc Math. Notes 2023, 24, 1287–1305. [Google Scholar] [CrossRef]
- Postavaru, O. An efficient numerical method based on Fibonacci polynomials to solve fractional differential equations. Math. Comput. Simul. 2023, 212, 406–422. [Google Scholar] [CrossRef]
- Akhlaghi, S.; Tavassoli Kajani, M.; Allame, M. Application of Müntz Orthogonal Functions on the Solution of the Fractional Bagley–Torvik Equation Using Collocation Method with Error Stimate. Adv. Math. Phys. 2023, 2023, 5520787. [Google Scholar] [CrossRef]
- Bazgir, H.; Ghazanfari, B. Existence and Uniqueness of Solutions for Fractional Integro-Differential Equations and Their Numerical Solutions. Int. J. Appl. Comput. Math. 2020, 6, 122. [Google Scholar] [CrossRef]
- Cao, Y.; Zaky, M.A.; Hendy, A.S.; Qiu, W. Optimal error analysis of space–time second-order difference scheme for semi-linear non-local Sobolev-type equations with weakly singular kernel. J. Comput. Appl. Math. 2023, 431, 115287. [Google Scholar] [CrossRef]
- Qiu, W.; Xu, D.; Guo, J. The Crank-Nicolson-type Sinc-Galerkin method for the fourth-order partial integro-differential equation with a weakly singular kernel. Appl. Numer. Math. 2021, 159, 239–258. [Google Scholar] [CrossRef]
- Cui, M. An alternating direction implicit compact finite difference scheme for the multi-term time-fractional mixed diffusion and diffusion–wave equation. Math. Comput. Simul. 2023, 213, 194–210. [Google Scholar] [CrossRef]
- Diethelm, K.; Ford, N.J.; Freed, A.D.; Luchko, Y. Algorithms for the fractional calculus: A selection of numerical methods. Comput. Methods Appl. Mech. Eng. 2005, 194, 743–773. [Google Scholar] [CrossRef]
- Edrisi-Tabriz, Y. Using the integral operational matrix of B-spline functions to solve fractional optimal control problems. Control. Optim. Appl. Math. 2022, 7, 77–98. [Google Scholar]
- Avcı, I.; Mahmudov, N.I. Numerical solutions for multi-term fractional order differential equations with fractional Taylor operational matrix of fractional integration. Mathematics 2020, 8, 96. [Google Scholar] [CrossRef]
- Xiaogang, Z.; Yufeng, N. An operational matrix method for fractional advection-diffusion equations with variable coefficients. Appl. Math. Mech. 2018, 39, 104–112. [Google Scholar]
- Krishnasamy, V.S.; Mashayekhi, S.; Razzaghi, M. Numerical solutions of fractional differential equations by using fractional Taylor basis. IEEE/CAA J. Autom. Sin. 2017, 4, 98–106. [Google Scholar] [CrossRef]
- Nikan, O.; Avazzadeh, Z. Numerical simulation of fractional evolution model arising in viscoelastic mechanics. Appl. Numer. Math. 2021, 169, 303–320. [Google Scholar] [CrossRef]
- Zhai, S.; Feng, X. Investigations on several compact ADI methods for the 2D time fractional diffusion equation. Numer. Heat Transf. Part Fundam. 2016, 69, 364–376. [Google Scholar] [CrossRef]
- Thakoor, N.; Behera, D.K. A new computational technique based on localized radial basis functions for fractal subdiffusion. In Proceedings of the 1st International Conference on Computational Applied Sciences & IT’S Applications, Jaipur, India, 28–29 April 2020; AIP Publishing: Melville, NY, USA, 2023; Volume 2768. [Google Scholar]
- Dimitrov, Y. Approximations of the fractional integral and numerical solutions of fractional integral equations. Commun. Appl. Math. Comput. 2021, 3, 545–569. [Google Scholar] [CrossRef]
- Ciesielski, M.; Grodzki, G. Numerical Approximations of the Riemann–Liouville and Riesz fractional integrals. Informatica 2024, 35, 21–46. [Google Scholar] [CrossRef]
- Nowak, A.; Kustal, D.; Sun, H.; Blaszczyk, T. Neural network approximation of the composition of fractional operators and its application to the fractional Euler-Bernoulli beam equation. Appl. Math. Comput. 2025, 501, 129475. [Google Scholar] [CrossRef]
- Elgindy, K.T.; Smith-Miles, K.A. Fast, accurate, and small-scale direct trajectory optimization using a Gegenbauer transcription method. J. Comput. Appl. Math. 2013, 251, 93–116. [Google Scholar] [CrossRef]
- Elgindy, K.T.; Smith-Miles, K.A. Optimal Gegenbauer quadrature over arbitrary integration nodes. J. Comput. Appl. Math. 2013, 242, 82–106. [Google Scholar] [CrossRef]
- Elgindy, K.T. High-order numerical solution of second-order one-dimensional hyperbolic telegraph equation using a shifted Gegenbauer pseudospectral method. Numer. Methods Partial. Differ. Equations 2016, 32, 307–349. [Google Scholar] [CrossRef]
- Elgindy, K.T. High-order, stable, and efficient pseudospectral method using barycentric Gegenbauer quadratures. Appl. Numer. Math. 2017, 113, 1–25. [Google Scholar] [CrossRef]
- Elgindy, K.T. Optimal control of a parabolic distributed parameter system using a fully exponentially convergent barycentric shifted Gegenbauer integral pseudospectral method. J. Ind. Manag. Optim. 2018, 14, 473. [Google Scholar] [CrossRef]
- Elgindy, K.T.; Refat, H.M. High-order shifted Gegenbauer integral pseudo-spectral method for solving differential equations of Lane–Emden type. Appl. Numer. Math. 2018, 128, 98–124. [Google Scholar] [CrossRef]
- Elgindy, K.T.; Refat, H.M. Direct integral pseudospectral and integral spectral methods for solving a class of infinite horizon optimal output feedback control problems using rational and exponential Gegenbauer polynomials. Math. Comput. Simul. 2024, 219, 297–320. [Google Scholar] [CrossRef]
- Elgindy, K.T.; Karasözen, B. Distributed optimal control of viscous Burgers’ equation via a high-order, linearization, integral, nodal discontinuous Gegenbauer-Galerkin method. Optim. Control. Appl. Methods 2020, 41, 253–277. [Google Scholar] [CrossRef]
- Elgindy, K. Gegenbauer Collocation Integration Methods: Advances in Computational Optimal Control Theory. Ph.D. Thesis, School of Mathematical Sciences, Faculty of Science, Monash University, Clayton, VIC, Australia, 2013. [Google Scholar]
- Batiha, I.M.; Alshorm, S.; Al-Husban, A.; Saadeh, R.; Gharib, G.; Momani, S. The n-point composite fractional formula for approximating Riemann–Liouville integrator. Symmetry 2023, 15, 938. [Google Scholar] [CrossRef]
- Gottlieb, D.; Shu, C. On the Gibbs phenomenon V: Recovering exponential accuracy from collocation point values of a piecewise analytic function. Numer. Math. 1995, 71, 511–526. [Google Scholar] [CrossRef]
- Elgindy, K.T.; Smith-Miles, K.A. Solving boundary value problems, integral, and integro-differential equations using Gegenbauer integration matrices. J. Comput. Appl. Math. 2013, 237, 307–325. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).