1. Introduction
Fractional calculus offers a powerful framework for modeling intricate systems characterized by memory and non-local interactions, finding applications in diverse fields such as viscoelasticity [1], anomalous diffusion [2], and control theory [3], among others. A fundamental concept in this domain is the left RLFI, defined for integrable functions and fractional orders α > 0 as detailed in Table 1. In contrast to classical calculus, which operates under the assumption of local dependencies, fractional integrals, exemplified by the left RLFI, inherently account for cumulative effects over time through a singular kernel. This characteristic renders them particularly well-suited for modeling phenomena where past states significantly influence future behavior. The RLFI proves especially valuable in the description of complex dynamics exhibiting self-similarity, scale-invariance, or memory effects. For instance, in anomalous diffusion, the RLFI captures self-similar patterns and scale-invariant power-law behaviors, enabling the precise modeling of sub-diffusive processes [4]. In viscoelasticity, it accounts for memory effects by modeling stress relaxation with past-dependent dynamics [5] and reveals temporal symmetries, thus improving predictive accuracy [1]. These properties make the RLFI a critical tool for analyzing complex systems and improving their forecasting capabilities [3,6,7]. Nevertheless, the singular nature of the RLFI’s kernel, (t − τ)^(α−1), presents substantial computational hurdles, as conventional numerical methods frequently encounter limitations in accuracy or incur high computational costs. Consequently, the development of efficient and high-precision approximation techniques for the RLFI is of paramount importance for advancing computational modeling across physics, engineering, and biology, where fractional calculus is increasingly employed to tackle real-world problems involving non-local behavior or fractal structures.
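For reference, the standard form of the left RLFI of order α (the definition recorded in Table 1; the operator notation and the base point 0 are our assumptions here) reads
\[
\left({}_{0}I_{t}^{\alpha} f\right)(t) \;=\; \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t - \tau)^{\alpha - 1}\, f(\tau)\, \mathrm{d}\tau, \qquad \alpha > 0 .
\]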
Existing approaches to RLFI approximation include wavelet-based methods [7,8,9,10,11,12], polynomial and orthogonal function techniques [13,14,15,16,17], finite difference and quadrature schemes [18,19,20,21], operational matrix methods [22,23,24,25], local meshless techniques [26], alternating direction implicit schemes [27], and radial basis functions [28]. While these methods have shown promise in specific contexts, they often struggle with trade-offs between accuracy, computational cost, and flexibility, particularly when adapting to diverse problem characteristics or varying fractional orders.
This study introduces the GBFA method, which overcomes these challenges through three key innovations: (i) Parameter adaptability, utilizing the tunable SG parameters (one governing the interpolation basis and one governing the quadrature approximation) to optimize performance across a wide range of problems; (ii) Super-exponential convergence, achieving rapid error decay for smooth functions, often reaching near machine precision with modest node counts; and (iii) Computational efficiency, enabled by precomputable FSGIMs that minimize runtime costs. The proposed GBFA method addresses these challenges by exploiting the orthogonality and flexibility of SG polynomials to achieve super-exponential convergence. This approach offers near machine-precision accuracy with minimal computational effort, particularly for systems necessitating repeated fractional integrations or displaying symmetric patterns. Numerical experiments demonstrate that the GBFA method significantly outperforms established tools such as MATLAB’s integral function and MATHEMATICA’s NIntegrate, achieving up to two orders of magnitude higher accuracy in certain cases. It also surpasses prior methods, such as the trapezoidal approach of Dimitrov [29], the spline-based techniques of Ciesielski and Grodzki [30], and the neural network method of [31], in both precision and efficiency. Sensitivity analysis reveals that choosing the quadrature SG parameter below the interpolation SG parameter often accelerates convergence, while rigorous error bounds confirm super-exponential decay under optimal parameter choices. The FSGIM’s invariance for fixed evaluation points and parameters enables precomputation, making the GBFA method ideal for problems requiring repeated fractional integrations of varying integrands. While the method itself exploits the mathematical properties of orthogonal polynomials, its application can be crucial in analyzing systems where symmetry or repeating patterns are fundamental: fractional calculus is used to model phenomena with memory, where past states influence future behavior, and identifying symmetries in such systems can simplify analysis and prediction.
This paper is organized as follows: Section 2 presents the GBFA framework. Section 3 analyzes computational complexity. Section 4 provides a detailed error analysis, and Section 4.1 offers actionable guidelines for selecting the tunable SG parameters, balancing accuracy and computational efficiency. Section 5 evaluates numerical performance. Section 6 presents the conclusions of this study together with future work on potential methodological extensions to non-smooth functions encountered in fractional calculus. Appendix A lists all acronyms used in the paper. Appendix B includes supporting mathematical proofs. Finally, Appendix C provides a detailed dimensional analysis of the matrix expressions in Equations (15) and (16), which are central to deriving our proposed RLFI approximation method.
2. Numerical Approximation of RLFI
This section presents the GBFA method adapted for approximating the RLFI. We begin with a brief review of the classical Gegenbauer polynomials, as they form the foundation for the GBFA method developed in this study.
The classical Gegenbauer polynomials C_n^(λ)(x), defined for x ∈ [−1, 1] and λ > −1/2, are orthogonal with respect to the weight function w^(λ)(x) = (1 − x²)^(λ−1/2). In the standard normalization, they satisfy the three-term recurrence relation
\[
(n+1)\,C_{n+1}^{(\lambda)}(x) = 2(n+\lambda)\,x\,C_{n}^{(\lambda)}(x) - (n+2\lambda-1)\,C_{n-1}^{(\lambda)}(x), \qquad n \ge 1,
\]
with C_0^(λ)(x) = 1 and C_1^(λ)(x) = 2λx. Their orthogonality is given by
\[
\int_{-1}^{1} (1-x^2)^{\lambda-1/2}\, C_m^{(\lambda)}(x)\, C_n^{(\lambda)}(x)\, \mathrm{d}x = h_n^{(\lambda)}\, \delta_{mn},
\]
where, in the standard normalization,
\[
h_n^{(\lambda)} = \frac{2^{1-2\lambda}\,\pi\,\Gamma(n+2\lambda)}{n!\,(n+\lambda)\,\Gamma^2(\lambda)}.
\]
The SG polynomials used in this study are obtained via the affine transformation that maps the shifted domain onto [−1, 1], inheriting analogous properties adjusted for the shifted domain. In particular, they satisfy a three-term recurrence relation of the same form, with correspondingly shifted initial conditions. They are directly related to the Jacobi polynomials with equal parameters (λ − 1/2, λ − 1/2) [32] (Equation (A.1)), where the latter denote the nth-degree Jacobi polynomials. One may also directly evaluate the SG polynomial of any degree n through an explicit identity; cf. [33] (Equation (D.2)). For a comprehensive treatment of their properties and quadrature rules, see [34,35,36,37].
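As a concrete illustration of the recurrences above, the following sketch evaluates Gegenbauer polynomials in the standard (Szegő) normalization and shifts them to [0, 1]; the paper’s SG normalization may differ by degree-dependent scale factors, so this is a minimal sketch rather than the paper’s exact basis:

```python
import numpy as np

def gegenbauer(n, lam, x):
    """Evaluate C_n^{(lam)}(x) by the classical three-term recurrence.

    Standard (Szego) normalization: C_0 = 1, C_1 = 2*lam*x, and
    (k+1) C_{k+1} = 2(k+lam) x C_k - (k + 2 lam - 1) C_{k-1}.
    """
    x = np.asarray(x, dtype=float)
    c_prev, c_curr = np.ones_like(x), 2.0 * lam * x
    if n == 0:
        return c_prev
    for k in range(1, n):
        c_prev, c_curr = c_curr, (2.0 * (k + lam) * x * c_curr
                                  - (k + 2.0 * lam - 1.0) * c_prev) / (k + 1.0)
    return c_curr

def shifted_gegenbauer(n, lam, t):
    """Shifted polynomial on [0, 1] via the affine map x = 2t - 1."""
    return gegenbauer(n, lam, 2.0 * np.asarray(t, dtype=float) - 1.0)

# Sanity check: lam = 1/2 reproduces the Legendre polynomial P_2(x) = (3x^2 - 1)/2.
x = np.linspace(-1.0, 1.0, 5)
assert np.allclose(gegenbauer(2, 0.5, x), 0.5 * (3.0 * x**2 - 1.0))
```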
Let the SGG interpolation nodes, the SG parameter λ, and the target function f be given. The GBFA interpolant of f is given by Equation (7), where the associated cardinal basis function is defined by Equation (8), with the normalization factors and Christoffel numbers given by Equations (9) and (10). In matrix form, we can write Equation (8) as Equation (11).
This allows us to approximate the RLFI as in Equation (12). Using the transformation (13), Formula (12) becomes Equation (14). Notice here that the transformed integrand is non-singular in y. However, the singularity of the kernel at τ = t maps to an endpoint of the transformed domain, and the behavior near that endpoint reflects the original singularity through a fractional power of y. The singularity is thus regularized in the new integrand, facilitating numerical computation while maintaining the RLFI’s mathematical structure. Substituting Equation (11) into Equation (14) yields Equation (15).
For multiple evaluation points, we extend Equation (11) to Equation (16). Thus, Equation (17) follows, in which the leading factor is a column vector whose elements are the evaluation points raised to the power α, and the remaining factors are as displayed there. Alternatively, the approximation can be written in the form of Equation (20). We term the matrix in Equation (17) the “αth-order FSGIM” for the RLFI, and the matrix generating it the “αth-order FSGIM Generator.” Equation (17) is preferred for computational efficiency. For a detailed dimensional analysis of the matrix expressions in Equation (15), including the explicit form of the FSGIM and clarification of the dimensions and reshaping operation in Equation (16), see Appendix C.
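The regularizing change of variables can be made concrete. One substitution consistent with the description above (our reconstruction, since Equation (13) is not reproduced here) is y = ((t − τ)/t)^α, which absorbs the singular kernel exactly and leaves (t^α/(α Γ(α))) ∫₀¹ f(t(1 − y^{1/α})) dy. The sketch below applies plain Gauss–Legendre quadrature to the regularized integral, as a stand-in for the paper’s SG-based quadrature, and checks the result against the closed-form RLFI of a power function:

```python
import numpy as np
from math import gamma

def rlfi_regularized(f, t, alpha, nq=20):
    """Left RLFI of f at t via the substitution y = ((t - tau)/t)**alpha.

    The kernel (t - tau)**(alpha - 1) is absorbed exactly, leaving the
    non-singular integral (t**alpha / (alpha * Gamma(alpha))) *
    integral_0^1 f(t * (1 - y**(1/alpha))) dy, done here with Gauss-Legendre.
    """
    y, w = np.polynomial.legendre.leggauss(nq)   # nodes/weights on [-1, 1]
    y, w = 0.5 * (y + 1.0), 0.5 * w              # map to [0, 1]
    vals = f(t * (1.0 - y ** (1.0 / alpha)))
    return t ** alpha / (alpha * gamma(alpha)) * np.dot(w, vals)

# Check against the closed form I^alpha[t^p] = Gamma(p+1)/Gamma(p+alpha+1) * t^(p+alpha).
alpha, p, t = 0.5, 2.0, 0.8
exact = gamma(p + 1.0) / gamma(p + alpha + 1.0) * t ** (p + alpha)
approx = rlfi_regularized(lambda tau: tau ** p, t, alpha)
print(abs(approx - exact))   # approximately machine precision for this choice
```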
To compute the required integrals, we can use the SGIRV associated with the SGG nodes; cf. [35] (Algorithm 6 or 7). Formula (21) represents the GBFA quadrature used for the partial numerical calculation of the RLFI. We denote the approximate αth-order RLFI of a function at a point t, computed using Equation (21) in conjunction with either Equation (17) or Equation (20), by the discrete operator E furnished with left superscripts and subscripts. (The parameters in the left superscripts and subscripts of this operator are the same parameters used by the numerical method: n and the polynomial index indicate the degree and index of the SG polynomial, while the remaining pair specify the parameters of the polynomial basis used in the quadrature scheme. The letter E distinguishes this discrete operator from other operators in the literature, referring to the first initial of the author’s surname.)
Figure 1 provides a visual summary of the GBFA method’s workflow, illustrating the seamless integration of the interpolation, transformation, and quadrature steps. This schematic highlights the method’s flexibility, as the tunable SG parameters allow practitioners to tailor the approximation to specific problem characteristics, optimizing both accuracy and computational efficiency. The alternative path of precomputing the FSGIM, indicated by the dashed arrow, underscores the method’s suitability for applications requiring repeated evaluations.
Remark 1. The selection of the L¹ setting for defining the RLFI is motivated by both mathematical rigor and practical relevance. The kernel (t − τ)^(α−1) exhibits a singularity at τ = t, necessitating integrability of the product of the kernel and the integrand for the RLFI to be well-defined. The L¹ setting ensures sufficient regularity to guarantee convergence. Since the singularity is integrable for α > 0, the RLFI remains finite when f ∈ L¹. Moreover, L¹-integrability is consistent with many physical models in viscoelasticity, anomalous diffusion, and control theory, making it a natural function space for modeling. From another viewpoint, this choice does not conflict with the GBFA method’s superior convergence for smoother functions. While L¹ accommodates a broad class of functions, the GBFA method, based on pseudospectral techniques, achieves super-exponential convergence for analytic functions, as we demonstrate in Section 4, and algebraic rates for less regular functions, as is typical of pseudospectral methods. Thus, the method balances general applicability with rapid convergence under smoothness. Enhancements such as modal filtering and domain decomposition, as we discuss later in Section 6, can further improve performance for non-smooth functions, reinforcing the suitability of the L¹ setting.

4. Error Analysis
The following theorem establishes the truncation error of the αth-order GBFA quadrature associated with the αth-order FSGIM in closed form.

Theorem 1. Suppose that f is approximated by the GBFA interpolant (7), and assume that the integrals involved are computed exactly. Then there exists an intermediate point in the integration domain such that the truncation error in the RLFI approximation (15) is given by Formula (23), where the leading coefficient of the nth-degree, λ-indexed SG polynomial appearing in (23) is defined by Formula (24).

Proof. The Lagrange interpolation error associated with the GBFA interpolation (7) is given by Formula (25), where the leading coefficient is given by Equation (24); this can be directly derived from [34] (Equation (4.12)) through an appropriate parameter specialization. Applying the RLFI operator to both sides of Equation (25) results in the truncation error (26). The proof is accomplished from Equation (26) by applying the change of variables (13) to the RLFI of the error term. □
The following theorem provides an upper bound for the truncation error (23).

Theorem 2. Suppose that the assumptions of Theorem 1 hold. Then the truncation error satisfies the asymptotic bound (27), where the bounding constant depends only on λ.

Proof. The asymptotic inequality (27) results from (23) by first invoking the defining relations of the constants involved, then applying the sharp inequalities of the Gamma function [36] (Inequality (96)) to Equation (29), and finally using [37] (Lemma 5.1), which gives the uniform norm of Gegenbauer polynomials and their associated shifted forms. □
Theorem 2 shows that the error bound is influenced by the smoothness of f (through its derivatives) and by the specific values of the SG parameter λ and the fractional order, with a super-exponential decay rate as n → ∞, which guarantees that the error becomes negligible even for moderate values of n. Examining the bounding constant while holding n fixed reveals competing effects: one Gamma-function factor decays exponentially as λ increases, another behaves like a constant for large λ and so does not significantly affect the behavior, and a third grows exponentially. The exponential decay dominates, so the bounding constant decays as λ increases, leading to a tighter error bound and an improved convergence rate. On the other hand, for large n with λ held fixed, the decaying factors again dominate the growing ones, so the bounding constant also decays as n increases. It is noteworthy that the bounding constant remains finite as λ → −1/2⁺, even though one of its Gamma-function factors diverges in that limit. This divergence is offset by the behavior of the companion factor: applying a standard asymptotic approximation of the Gamma function shows that the product of the two factors tends to zero. Hence, the bounding constant vanishes as λ → −1/2⁺, driven by this product. Above this limit, the bounding constant is strictly concave in λ, with a maximum attained at a critical value λ* (whose numerical value is reported to 4 significant digits in Theorem A1). Figure 2 further plots the two competing factors: one grows at a slow, quasi-linear rate of change while the other decays at a comparable rate, so their product remains positive with a small bounded variation. This shows that, while holding n fixed, the bounding constant displays a unimodal behavior: it rises from 0 as λ increases from −1/2, attains a maximum at λ*, and subsequently decays monotonically thereafter. Figure 3 highlights how the error bound decays with increasing n while exhibiting varying sensitivity to λ, with the λ = λ* case (red curve) representing the parameter value that maximizes the bounding constant for fixed n.
It is important to note that the bounding constant modulates the error bound without altering its super-exponential decay as n → ∞. Near λ = −1/2, the constant tends to zero, which shrinks the truncation error bound, improving accuracy despite the potential sensitivity of the SG polynomials there. At λ = λ*, the maximized constant widens the bound, though the bound still vanishes as n → ∞. Beyond λ*, the overall error bound decreases monotonically, despite a slower n-dependent decay for larger positive λ. The super-exponential term ensures rapid convergence in all cases. The decrease of the bounding constant beyond λ*, combined with the comparative size of the remaining factors of the truncation error bound on either side of λ*, further suggests a practically “optimal” choice of λ for minimizing the truncation error bound in a stable numerical scheme, given the SG polynomial instability near λ = −1/2. However, for relatively small or moderate values of n, other choices of λ may be optimal, as the derived error bounds are asymptotic and apply only as n → ∞.
In what follows, we analyze the truncation error of the quadrature Formula (21) and demonstrate how its results complement the preceding analysis.
Theorem 3. Assume that the transformed integrand is interpolated by the SG polynomials with respect to the variable y at the SGG nodes. Then there exists an intermediate point such that the truncation error in the quadrature approximation (21) is given by Formula (34), where the associated factor is defined by Formula (35).

Proof. Consider the mth derivative of the transformed integrand with respect to y. Ref. [34] (Theorem 4.1) tells us that the interpolation error admits the closed form (36). The error expression (34) is accomplished by applying the Chain Rule to Equation (36), which gives Equation (37). The proof is complete upon identifying the resulting expression with Formula (34). □
The next theorem provides an upper bound on the quadrature truncation error derived in Theorem 3.

Theorem 4. Suppose that the assumptions of Theorem 3 hold true. Then the truncation error in the quadrature approximation (21) either vanishes or is bounded above by Formula (38), where the constants involved are as defined by (A1) or depend only on λ and the quadrature parameters, as outlined in the statement of the theorem.

Proof. Notice first that the SG polynomial factors admit the uniform bound of [37] (Lemma 5.1). We now consider two cases, distinguished by the relative size of the interpolation and quadrature degrees. In the first case, the bound (39) holds. In the second case, Lemma A3 applies, and we obtain (40). Formula (38) is obtained by substituting (39) and (40) into (34). □
In the first regime, the dominant term of the quadrature error bound decays exponentially precisely when Condition (41) is satisfied. In the second regime, on the other hand, the dominant term of the error bound typically diverges unless Condition (43) is fulfilled. The relative choice of the interpolation degree n and the quadrature degree in either case controls the error bound’s decay rate. In particular, choosing the quadrature degree suitably relative to n ensures faster convergence rates in the first regime, due to the presence of the polynomial error factor, while in the second regime the convergence is accelerated only if Condition (43) holds.
The following theorem provides a rigorous asymptotic bound on the total truncation error for the RLFI approximation, combining both sources of error, namely the interpolation and quadrature errors.

Theorem 5 (Asymptotic Total Truncation Error Bound). Suppose that f is approximated by the GBFA interpolant (7), and that the assumptions of Theorems 1 and 3 hold true. Then the total truncation error in the RLFI approximation of f, arising from both the series truncation (7) and the quadrature approximation (21), is asymptotically bounded above by an explicit estimate whose constants have the definitions and properties outlined in Theorems 2 and 4 and Lemmas A1 and A3, the remaining quantities being as defined by [38] (Formulas (B.4) and (B.15)).

Proof. The total truncation error combines the interpolation error from Theorem 2 and the accumulated quadrature errors from Theorem 4. Using the bounds on the Christoffel numbers and normalization factors from [38] (Lemmas B.1 and B.2), along with the uniform bound on Gegenbauer polynomials from [37] (Lemma 5.1), we arrive at the intermediate estimate (48). The proof is completed by applying Theorems 2 and 4 to Formula (48), noting where the maximum of the relevant factor is attained. □
The total truncation error bound presented in Theorem 5 reveals several important insights about the convergence behavior of the RLFI approximation: (i) The total error consists of the interpolation error term and the accumulated quadrature error term. The interpolation error term decays at a super-exponential rate owing to its factorial-type factor. The quadrature error either vanishes, decays exponentially under Condition (41), or typically diverges unless Condition (43) is fulfilled, depending on the parameter regime. Therefore, the quadrature nodes should scale appropriately with the interpolation mesh size in practice. (ii) The interpolation error bound tightens as λ increases, due to the decay of the bounding constant. The λ parameter shows a unimodal influence on the interpolation error, with a potentially maximum error size at about λ*. The quadrature SG parameter should generally be chosen smaller than λ to accelerate the convergence of the quadrature error in the relevant regime. This analysis suggests that the proposed method achieves spectral convergence when the parameters are chosen appropriately. The quadrature precision should be selected to maintain balance between the two error components, based on the desired accuracy and the computational constraints.
Remark 2. Theorem 5 provides a theoretical guarantee for the accuracy of the GBFA method when computing the RLFI. In simple terms, it shows that the error in the method’s approximations decreases extremely rapidly as the number of computational points increases, particularly for smooth functions. This rapid error reduction, often referred to as super-exponential convergence, means that with just a few additional points, the GBFA method can achieve results that are nearly as accurate as the computer’s maximum precision allows. The theorem considers the key parameters of the GBFA method: the polynomial degree n, the quadrature degree, and the two Gegenbauer parameters. It predicts that when these parameters are chosen wisely, the error shrinks faster than exponentially, as we demonstrate later in our numerical experiments. This is especially powerful for modeling complex systems with memory, such as those in viscoelasticity or anomalous diffusion, where high precision is critical. While the mathematical details of the theorem are complex, its core message is that the GBFA method is highly reliable and efficient, producing accurate results with minimal computational effort for a wide range of problems.
4.1. Practical Guidelines for Parameter Selection
The asymptotic analysis reveals dependencies on the SG parameter used for interpolation and the SG parameter used for quadrature, which play crucial roles in determining the method’s accuracy and efficiency. Here, we provide practical guidance for selecting these parameters to balance interpolation error, quadrature error, and computational cost.
The effectiveness of the GBFA method for approximating the RLFI hinges on the appropriate selection of the interpolation and quadrature SG parameters. Building upon established numerical approximation principles, our analysis incorporates specific considerations for RLFI computation. In particular, we identify an effective operational range for the SG parameters. This range, previously recommended by Elgindy and Karasözen [39] based on extensive theoretical and numerical testing consistent with broader spectral approximation theory, helps avoid the numerical instability caused by the increased extrapolation effects associated with larger positive SG indices. Furthermore, SG polynomials exhibit blow-up behavior as their indices approach −1/2. Within this range, we observe a crucial balance between theoretical convergence and numerical stability for RLFI computation, a finding corroborated by our numerical investigations using the GBFA method, which consistently demonstrate superior performance across various test functions with parameter choices in this range.
Based on this analysis of error bounds and numerical performance, we recommend the following parameter selection strategies for RLFI approximation. For typical interpolation and quadrature degrees, a broad selection of SG parameters within the operational range is feasible. Furthermore, choosing a smaller quadrature SG parameter generally improves quadrature accuracy, with one notable exceptional value at which the quadrature error is often minimized, as we demonstrate later in Figure 4 and Figure 5.
For the SG parameters governing interpolation and quadrature:
- For precision computations: Select the parameters from a restricted subset of the recommended interval for SG indices that excludes both positive values and a neighborhood of the critical value defined by Equation (33). Positive values are excluded because the polynomial error factor increases with positive SG indices, as shown in Equation (27). The neighborhood of the critical value is excluded because the leading factor in the asymptotic interpolation error bound peaks there, potentially increasing the error. By avoiding this neighborhood, parameter choices that could amplify interpolation errors are circumvented, ensuring robust performance for high-precision applications.
- For standard computational scenarios: Utilize λ = 0, which corresponds to shifted Chebyshev approximation. This recommendation employs the well-established optimality of Chebyshev approximation for smooth functions, offering a robust default that balances accuracy and efficiency for RLFI approximation.
These parameter selection guidelines provide practical and effective direction for balancing interpolation error, quadrature error, and computational cost across a wide range of applications.
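As a quick illustration of the standard-scenario default, the following sketch interpolates a smooth function in the shifted Chebyshev basis (the λ = 0 case recommended above) at Chebyshev–Gauss nodes and reports the maximum error on a fine grid; the test function and grid sizes are our choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda t: np.exp(t) * np.sin(5.0 * t)   # any smooth test function on [0, 1]

for n in (4, 8, 16, 32):
    x = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev-Gauss nodes
    coef = C.chebfit(x, f((x + 1.0) / 2.0), n)   # interpolant in the shifted variable
    tt = np.linspace(0.0, 1.0, 2001)
    err = np.max(np.abs(C.chebval(2.0 * tt - 1.0, coef) - f(tt)))
    print(f"n = {n:2d}  max interpolation error = {err:.2e}")
```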
5. Further Numerical Simulations
Example 1. To demonstrate the accuracy of the derived numerical approximation formulas, we consider a power function as our first test case, for which the RLFI is given analytically in closed form. Figure 4 displays the logarithmic absolute errors of the RLFI approximations computed using the GBFA method for a fixed fractional order and evaluation point. The figure consists of four subplots that investigate (i) the effects of varying the SG parameters and the interpolation and quadrature degrees, and (ii) a comparative analysis between the GBFA method and MATLAB’s integral function with its tolerance parameters set near machine precision. Our numerical experiments reveal several key observations:
- Variation of the quadrature SG parameter while holding the other parameters constant shows negligible impact on the error. The error reaches near machine-epsilon precision at small degrees, consistent with Theorems 1 and 3, which predict the collapse of both the interpolation and quadrature errors once the degrees resolve the integrand exactly.
- When the interpolant resolves the test function exactly, the total error reduces to pure quadrature error, since the interpolation error vanishes while the quadrature error does not.
- Variation of the interpolation SG parameter λ significantly affects the error, with certain choices generally yielding higher accuracy.
- Increasing either the interpolation degree n or the quadrature degree while fixing the other leads to exponential error reduction.
- The GBFA method achieves near machine-precision accuracy at modest parameter values, outperforming MATLAB’s integral function by nearly two orders of magnitude. The method demonstrates remarkable stability, as evidenced by consistent error trends across the tested fractional orders, with nearly exact approximations obtained in the optimal parameter ranges.
Figure 6 further compares the computation times of the GBFA method and MATLAB’s integral function, plotted on a logarithmic scale. The GBFA method demonstrates significantly lower computational times than MATLAB’s integral function, highlighting its efficiency in achieving high accuracy at minimal computational cost.
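Example 1’s closed form makes it easy to cross-check any RLFI routine. As an independent sanity check (not the GBFA method itself), Gauss–Jacobi quadrature absorbs the kernel (t − τ)^(α−1) exactly through its weight function after mapping [0, t] onto [−1, 1]; the exponent p below is our illustrative choice, since the example’s exact exponent is not reproduced here:

```python
import numpy as np
from math import gamma
from scipy.special import roots_jacobi

def rlfi_gauss_jacobi(f, t, alpha, nq=20):
    """RLFI via Gauss-Jacobi quadrature with weight (1 - x)**(alpha - 1),
    which integrates the singular kernel exactly after mapping [0, t] to [-1, 1]."""
    x, w = roots_jacobi(nq, alpha - 1.0, 0.0)
    return (t / 2.0) ** alpha / gamma(alpha) * (w @ f(t * (x + 1.0) / 2.0))

alpha, p, t = 0.75, 3.0, 1.0                 # p is our illustrative exponent
exact = gamma(p + 1.0) / gamma(p + alpha + 1.0) * t ** (p + alpha)
print(abs(rlfi_gauss_jacobi(lambda tau: tau ** p, t, alpha) - exact))  # near machine eps
```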
Example 2. Next, we evaluate the αth-order RLFI of a second smooth test function, for which the analytical solution is available in closed form. Figure 5 presents the logarithmic absolute errors of the GBFA method, with subplots analyzing (i) the impact of the SG parameters, the interpolation degree n, and the quadrature degree, and (ii) a performance comparison against MATLAB’s integral function with tolerances near machine precision. Some of the main findings include the following:
- Similar to Example 1, varying the quadrature SG parameter (with fixed small degrees) has minimal effect on accuracy, whereas suitable choices of the interpolation parameter consistently improve precision.
- Exponential error decay occurs when increasing either n or the quadrature degree while holding the other constant.
- Near machine-epsilon accuracy is achieved at modest degrees, with the GBFA method surpassing integral by two orders of magnitude.
The method’s stability is further demonstrated by the uniform error trends across the tested fractional orders and by its rapid exponential convergence (see Figure 7), underscoring its suitability for high-precision fractional calculus, particularly in the absence of closed-form solutions.
Notably, this test problem was previously studied in [29,30]. The former employed an asymptotic expansion for trapezoidal RLFI approximation, while the latter used linear, quadratic, and three cubic spline variants, with all computations performed in 128-bit precision. Both methods were tested over a sequence of increasingly fine grids. On the finest grid, Dimitrov’s method [29] achieved its smallest error, whereas Ciesielski and Grodzki [30] reported their smallest error using Cubic Spline Variant 1. With modest interpolation and quadrature degrees, our method attains errors close to machine epsilon, surpassing both methods in [29,30] by several orders of magnitude, even with significantly fewer computational nodes.
Section 4.1 notes the potential instability of SG polynomials as the parameter approaches −1/2, a behavior not yet investigated within the GBFA framework. Figure 8 visually confirms this trend: initially, the errors decrease as λ approaches −1/2, since the leading error factor tends to zero in this limit, as explained by Theorem 2 and evident as λ transitions toward negative values. However, the ill-conditioning of SG polynomials becomes prominent as λ approaches −1/2 more closely, leading to larger errors. This non-monotonic behavior near λ = −1/2 underscores the practical stability limits of the method, aligning with the early observations in [40] on the sensitivity of Gegenbauer polynomials with negative parameters. In particular, Elgindy [40] discovered that for large degrees M, Gegenbauer polynomials with sufficiently negative parameters exhibit strong sensitivity to small perturbations in x, unlike Legendre or Chebyshev polynomials. For example, it was reported in [40] that evaluating such a polynomial at an exact node versus a slightly perturbed one introduces an absolute error many orders of magnitude larger than the corresponding errors incurred by Legendre and Chebyshev polynomials of the same degree. This highlights the risks of using SG polynomials with parameters near −1/2 for high-order approximations, where even minor numerical perturbations can disrupt spectral convergence. These results reinforce the need to avoid such parameter regimes for robust approximations, as prescribed in Section 4.1.
Example 3. We evaluate the αth-order RLFI of a third test function whose analytical solution is available in closed form and serves as the reference for numerical comparison. Table 2 presents a comprehensive comparison of numerical results obtained using three distinct approaches: (i) the proposed GBFA method, (ii) MATLAB’s high-precision integral function, and (iii) MATHEMATICA’s NIntegrate function. This test case was previously investigated by Batiha et al. [41]. Their approach employed an n-point composite fractional formula derived from a three-point central fractional difference scheme, incorporating the generalized Taylor’s theorem and fundamental properties of fractional calculus. Notably, their reported relative error is significantly higher than the errors produced by the current method.

Example 4. Consider the αth-order RLFI of a test function whose exact RLFI is known in closed form; cf. [31]. In the latter work, a shallow neural network with 50 hidden neurons was used to solve this problem. Trained on 1000 random points with exact RLFI targets, the network yielded a certain Euclidean error norm, as reported in [31]. The GBFA method achieved a much smaller error norm when evaluated over 1000 equidistant points on the same interval.

6. Conclusions and Future Works
This study presents the GBFA method, a powerful approach for approximating the left RLFI. By using precomputable FSGIMs, the method achieves super-exponential convergence for smooth functions, delivering near machine-precision accuracy with minimal computational effort. The tunable Gegenbauer parameters enable tailored optimization, with sensitivity analysis showing that choosing the quadrature SG parameter below the interpolation SG parameter accelerates convergence in the relevant regime. The strategic parameter selection guideline outlined in Section 4.1 improves the GBFA method’s versatility, ensuring high accuracy for a wide range of problem types. Rigorous error bounds confirm rapid error decay, ensuring robust performance across diverse problems. Numerical results highlight the GBFA method’s superiority, surpassing MATLAB’s integral, MATHEMATICA’s NIntegrate, and prior techniques such as the trapezoidal approach [29], spline-based methods [30], and the neural network method [31] by up to several orders of magnitude in accuracy, while maintaining lower computational costs. The FSGIM’s invariance for fixed points and parameters improves efficiency, making the method ideal for repeated fractional integrations of varying integrands. These qualities establish the GBFA method as a versatile tool for fractional calculus, with significant potential to advance the modeling of complex systems exhibiting memory and non-local behavior, and to inspire further innovations in computational mathematics.
The error bounds derived in this study are asymptotic in nature, meaning they are most accurate for large interpolation and quadrature degrees. While these bounds provide valuable insights into the theoretical convergence behavior of the method, they may not be tight or predictive for small-to-moderate degrees. For practitioners, this implies that while the asymptotic bounds guarantee eventual convergence, careful numerical experimentation is recommended to determine the optimal degrees and SG parameters for a given problem. The numerical experiments presented in Section 5 demonstrate that the method performs exceptionally well even for small and moderate degrees, but the asymptotic bounds should be interpreted as theoretical guides rather than precise predictors for all cases. To address this limitation, future work could focus on deriving non-asymptotic error bounds or developing heuristic strategies for parameter selection in practical scenarios. Additionally, adaptive algorithms could be explored to dynamically adjust the degrees based on local error estimates, further improving the method’s robustness for real-world applications. While the current work focuses on the left RLFI in the setting considered here, extensions of the GBFA method to related settings are the subject of ongoing research by the author and will be addressed in future publications. Exploring further adaptations to operators such as the right RLFI presents exciting avenues for future work.
The super-exponential convergence of spectral and pseudospectral methods, including those using Gegenbauer polynomials and Gauss nodes, typically degrades to algebraic convergence when applied to functions with limited regularity. This degradation is a well-established phenomenon in approximation theory, with the rate of algebraic convergence directly related to the function’s degree of smoothness. Specifically, for a function possessing k continuous derivatives, theoretical analysis predicts a convergence rate of approximately O(N^(−k)), where N represents the number of collocation points [42]. To improve the applicability of the GBFA method to non-smooth functions, several methodological extensions can be incorporated:
(i) Modal filtering techniques offer one promising approach, effectively damping spurious high-frequency oscillations without significantly compromising overall accuracy. This process involves applying appropriate filter functions to the spectral coefficients, thereby regularizing the approximation while preserving accuracy in smooth solution regions. When implemented within the GBFA framework, filtering the coefficients in the SG polynomial expansion can potentially recover higher convergence rates for functions with isolated non-smoothness, and the tunable parameters inherent in SG polynomials may provide valuable flexibility for optimizing filter characteristics to address specific types of non-smoothness (a minimal illustrative sketch follows this discussion).
(ii) Adaptive interpolation strategies represent another valuable extension, employing local refinement near singularities or implementing moving-node approaches to more accurately capture localized features. By strategically concentrating computational resources in regions of limited regularity, these methods maintain high accuracy while effectively accommodating non-smooth behavior. Within the GBFA method, this approach could be realized through non-uniform SG collocation points with increased density near known singularities.
(iii) Domain decomposition techniques offer particularly powerful capabilities by partitioning the computational domain into subdomains with potentially different resolutions or spectral parameters. This approach accommodates irregularities while preserving the advantages of the GBFA method within each smooth subregion. Domain decomposition proves especially effective for problems featuring isolated singularities or discontinuities, allowing the GBFA method to maintain super-exponential convergence in smooth subdomains while addressing non-smooth regions through specialized treatment.
(iv) For fractional problems involving weakly singular or non-smooth solutions with potentially unbounded derivatives, graded meshes provide an effective remedy. These non-uniform discretizations concentrate points near singularities according to carefully chosen distributions, often recovering optimal convergence rates despite the presence of singularities. The inherent flexibility of the SG parameters makes the GBFA method particularly amenable to such adaptations.
(v) Hybrid spectral-finite element approaches represent yet another viable pathway, combining the high accuracy of spectral methods in smooth regions with the flexibility of finite element methods near singularities. Such hybrid frameworks effectively balance accuracy and robustness for problems with limited regularity. The GBFA method could be integrated into these frameworks by exploiting its super-exponential convergence where appropriate while delegating singularity treatment to more specialized techniques.
These theoretical considerations and methodological extensions can significantly expand the potential applicability of the GBFA method to the non-smooth functions commonly encountered in fractional calculus applications. While the current implementation focuses on smooth functions to demonstrate the method’s super-exponential convergence properties, the framework possesses sufficient flexibility to accommodate various extensions for handling non-smooth behavior.
Future research may investigate these techniques to extend the GBFA method’s applicability to a wider range of practical problems in fractional calculus.