A Convergent Collocation Approach for Generalized Fractional Integro-Differential Equations Using Jacobi Poly-Fractonomials

Abstract: In this paper, we present a convergent collocation method to find the numerical solution of a generalized fractional integro-differential equation (GFIDE). The presented approach is based on the collocation method using Jacobi poly-fractonomials. The GFIDE is defined in terms of the recently introduced B-operator, which reduces to the Caputo fractional derivative and other fractional derivatives in special cases. The convergence and error analysis of the proposed method are also established. Linear and nonlinear cases of the considered GFIDEs are numerically solved, and simulation results are presented to validate the theoretical results.


Introduction
Fractional calculus is the branch of applied mathematics in which we deal with integration and differentiation of arbitrary order [1][2][3][4][5][6][7]. Fractional-order derivatives and integrals arise in the modeling of several problems in various domains including physics, engineering, and biology. In the last few decades, applications of fractional differential equations (FDEs) have increased very rapidly in different fields such as bioengineering [8], fluid dynamics [9], electrochemistry [10], electromagnetism [11], control theory [12], and viscoelasticity [13]. An introductory overview of fractional-order derivatives and recent developments is presented in [1]. In recent years, fractional integro-differential equations (FIDEs) have been investigated to represent physical phenomena in various fields such as electromagnetics [14]. Several researchers have concentrated on the development of numerical and analytical techniques for FIDEs. For example, Angell and Olmstead [15] solved an integro-differential equation modeling filament stretching using a singular perturbation method. Khosro et al. [16] presented a numerical technique based on Bernstein's operational matrix for a family of FIDEs utilizing the trapezoidal rule. Kilbas et al. [17] developed some basic concepts for solving FIDEs and also provided an existence and uniqueness theorem. In [18], a new set of functions, called the fractional-order Euler functions and based on the Euler function, was constructed to obtain the numerical solution of FIDEs. Using the properties of the fractional-order Euler functions, the authors found the approximate solution via the operational matrix approach and also discussed the convergence analysis of the problem. Saadatmandi and Dehghan [19] developed a numerical method for solving linear and nonlinear FIDEs by defining the fractional derivative in

Generalized Fractional Integro-Differential Equations
We first define GFIDEs in terms of the K- and A/B-operators. These operators [35] are defined as follows. The K-operator is

(K_P^α f)(x) = r ∫_a^x w_α(x, t) f(t) dt + s ∫_x^b w_α(t, x) f(t) dt,   (1)

where x ∈ [a, b], P = ⟨a, x, b, r, s⟩ denotes the set of parameters, and w_α(x, t) is a kernel defined on the space I × I. We assume that w_α(x, t) and f(t) are both square-integrable functions, so that Equation (1) exists. The K-operator satisfies the linearity property: for any two functions f_1(t) and f_2(t) and constants c_1, c_2,

K_P^α (c_1 f_1 + c_2 f_2) = c_1 K_P^α f_1 + c_2 K_P^α f_2.

The A- and B-operators [35] are defined, for m − 1 < α < m with m an integer, by

(A_P^α f)(x) = D^m (K_P^{m−α} f)(x),   (B_P^α f)(x) = (K_P^{m−α} D^m f)(x),   (4)

where D^n f(t) denotes the nth derivative of the function f(t). By using Equation (4), Equation (1) can be written in the form of Equation (5). In the definition of the B-operator, we assume that D^n f(x) is integrable on the domain I. Details about these operators can be found in [35]. We now define GFIDEs using the B-operator as in Equations (5)-(7), where the functions φ(x) and g(x) are square-integrable on I with g(x) ≠ 0, and y(x) is the unknown. The problem is considered on the interval [0, 1], and the kernel ρ(x, t) is assumed to be weakly singular. In the definition of the B-operator there is another kernel, ω_α(x, t); we take ω_α(x, t) ∈ L^2(I × I), and G, defined by Equation (7), can be either a linear or a nonlinear operator.
We assume that Equations (5) and (6) have a unique solution for all real values of r and s, with either r = 0 for all s or s = 0 for all r, and for the special case r = 1 and s = 0. The objective is to develop a numerical method to solve the GFIDEs given by Equations (5) and (6).

Definition and Properties of Shifted Jacobi Poly-Fractonomials
Jacobi poly-fractonomials are the eigenfunctions of the fractional Sturm-Liouville eigenproblem of the first kind, defined by

P_n^a(x) = (1 + x)^a J_n^{−a,a}(x),   x ∈ [−1, 1],

where J_n^{−a,a} denotes the standard Jacobi polynomial and n is the degree. The polynomials J_n^{−a,a} form a Hilbert space in L_w^2[−1, 1] with respect to the weight function w(t) = (1 − t)^{−a}(1 + t)^a and satisfy the orthogonality property

∫_{−1}^{1} J_n^{−a,a}(x) J_m^{−a,a}(x) w(x) dx = Y_n^{−a,a} δ_{nm},

where δ_{nm} is the Kronecker delta and, in general, Y_n^{a,b} = 2^{a+b+1} Γ(n + a + 1) Γ(n + b + 1) / [(2n + a + b + 1) Γ(n + 1) Γ(n + a + b + 1)] is the orthogonality constant.
It has been proved in [36] that the Jacobi poly-fractonomials P_n^a(x) are orthogonal with respect to the weight function w(x) = (1 − x)^{−a}(1 + x)^{−a} and form a Hilbert space in L_w^2[−1, 1]. We use the transformation x → 2x/T − 1, which maps the interval [0, T] onto the standard interval [−1, 1]. We then obtain the corresponding shifted Jacobi poly-fractonomials of the first kind,

P_n^a(x) = (2/T)^a x^a J_n^{−a,a}(2x/T − 1).   (12)

For the interval [0, 1], Equation (12) takes the form

P_n^a(x) = 2^a x^a J_n^{−a,a}(2x − 1),   x ∈ [0, 1],   n = 0, 1, 2, 3, . . .

Using the series definition of the Jacobi polynomial, P_n^a(x) can be written as a finite series in powers of x, with coefficients involving the factor 2^a Γ(n − a + 1). Corresponding to the weight function w(x) = (1 − x)^{−a} x^{−a}, we obtain the orthogonality property of Equation (13). Let X = L^2(I) be the space of functions square-integrable over the interval I. For g_1, g_2 ∈ X, the inner product is defined by ⟨g_1, g_2⟩ = ∫_I g_1(x) g_2(x) dx, and the corresponding norm is ||g||_2 = ⟨g, g⟩^{1/2}.
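As a numerical illustration (our own sketch, not part of the original derivation), the shifted poly-fractonomial on [0, 1] and its orthogonality under w(x) = (1 − x)^{−a} x^{−a} can be checked with standard library routines; the function name, the quadrature grid, and the tolerance below are our choices:

```python
import numpy as np
from scipy.special import jacobi          # classical Jacobi polynomial J_n^{(alpha, beta)}
from scipy.integrate import trapezoid

def shifted_poly_fractonomial(n, a, x):
    """Shifted Jacobi poly-fractonomial of the first kind on [0, 1]:
    P_n^a(x) = 2^a * x^a * J_n^{(-a, a)}(2x - 1)."""
    x = np.asarray(x, dtype=float)
    return (2.0 ** a) * (x ** a) * jacobi(n, -a, a)(2.0 * x - 1.0)

# Check orthogonality w.r.t. w(x) = (1 - x)^(-a) * x^(-a) by quadrature,
# staying just inside the integrable endpoint singularities.
a = 0.25
x = np.linspace(1e-6, 1.0 - 1e-6, 200001)
w = (1.0 - x) ** (-a) * x ** (-a)
P2 = shifted_poly_fractonomial(2, a, x)
P3 = shifted_poly_fractonomial(3, a, x)
print(trapezoid(w * P2 * P3, x))  # close to 0: P_2^a and P_3^a are orthogonal
```

Note that every basis function carries the factor x^a, so P_n^a(0) = 0; this is what later lets the approximate solution satisfy a homogeneous initial condition automatically.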

Function Approximation Using Jacobi Poly-Fractonomials
Any function g(x) ∈ L^2(I) can be written as g(x) = ∑_{k=0}^{∞} c_k P_k^a(x). In practice, we consider only the first (R + 1) terms, i.e.,

g(x) ≈ g_R(x) = ∑_{k=0}^{R} c_k P_k^a(x),   (18)

where the coefficients c_k are given by c_k = ⟨g, P_k^a⟩_w / ||P_k^a||_2^2.

Theorem 1 [2]. Let g(x) be a real, sufficiently smooth function on [0, 1], let g_R(x) = C^T P^a(x) denote the shifted Jacobi poly-fractonomial expansion of g(x), where C = [c_0, c_1, . . . , c_R]^T, and suppose sup |g(x)| ≤ £. Then the Jacobi approximation of g(x) given by Equation (18) converges uniformly.

Proof. Any g(x) ∈ L^2[0, 1] can be written as in Equation (18), with the coefficients determined by Equation (23). Since sup |g(x)| ≤ £, Equation (23) yields a bound on |c_r|. Substituting the values of w(x) and P_r^a(x) into Equation (24), we have |c_r| ≤ £ A[r, a, n] / ||P_r^a(x)||_2^2, where A[r, a, n] denotes the combinatorial constant of Equation (25). Now, substituting the value of ||P_r^a(x)||_2^2 from Equation (13) and A[r, a, n] from Equation (25) into Equation (26), we find that the partial sums of the coefficients are bounded, so ∑_r c_r converges absolutely; hence, ∑_r c_r P_r^a(x) converges uniformly to g(x).
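A minimal sketch of the truncated expansion in Equation (18): the coefficients are computed from the weighted inner products by quadrature, for a hypothetical test function chosen to lie in the span of the first few basis functions (the grid, function, and tolerance are all our assumptions):

```python
import numpy as np
from scipy.special import jacobi
from scipy.integrate import trapezoid

a, R = 0.25, 6

def P(k, x):
    # shifted Jacobi poly-fractonomial of the first kind on [0, 1]
    return (2.0 ** a) * (x ** a) * jacobi(k, -a, a)(2.0 * x - 1.0)

# weighted inner product <f, g>_w with w(x) = (1 - x)^(-a) * x^(-a)
x = np.linspace(1e-7, 1.0 - 1e-7, 400001)
w = (1.0 - x) ** (-a) * x ** (-a)

def inner(f, g):
    return trapezoid(w * f * g, x)

# test function in the span of P_0 .. P_2: g(x) = x^a * (2x^2 - x + 1/2)
g = x ** a * (2.0 * x ** 2 - x + 0.5)
c = [inner(g, P(k, x)) / inner(P(k, x), P(k, x)) for k in range(R + 1)]
g_R = sum(ck * P(k, x) for k, ck in enumerate(c))
print(np.max(np.abs(g - g_R)))  # small: only quadrature error remains
```

Because the test function lies exactly in the span of the basis, the residual here measures quadrature error only, not the truncation error bounded in Theorem 1.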

Collocation Method for GFIDEs
In this part, we describe the collocation method for solving the GFIDEs given by Equations (5) and (6). Collocation methods are projection methods in which we take a finite-dimensional basis to express the approximate solution, which is expected to be close to the true solution. With the help of this family of functions, we approximate the solution of the GFIDEs given by Equations (5) and (6).
Using Equation (18), we now approximate y(x) as in Equation (28), where c_r are unknown expansion coefficients to be determined. It should be noted that the approximate solution y_R(x) satisfies the homogeneous initial condition. Replacing the exact solution y(x) by the approximate solution y_R(x) in Equation (5), we get Equation (29) with y_R(0) = y_0.
If we are given a nonhomogeneous initial condition y(0) = y_0 ≠ 0, then we first make it homogeneous by the transformation ỹ(x) = y(x) − y_0 and then approximate ỹ(x); the Jacobi poly-fractonomial basis is constructed so as to satisfy homogeneous initial conditions. From Equations (28) and (29), we obtain Equation (30). To apply the collocation method, the node points x_t ∈ I = [0, 1] are chosen such that Equation (32) holds, together with ∑_{r=0}^{R} c_r P_r^a(0) = y_0.
Equations (32) and (33) form a system of algebraic equations in the unknown coefficients {c_r} (linear when G is linear). We solve this system by any standard method to find the coefficients {c_r}; hence, the approximate solution is obtained.
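To make the collocation mechanics concrete, here is a schematic sketch for a simple integer-order special case (a plain Volterra integro-differential equation rather than the full B-operator form of Equations (5) and (6)); the manufactured right-hand side, the uniform node choice, and all names are our own assumptions:

```python
import numpy as np
from scipy.special import jacobi
from scipy.integrate import quad

a, R = 0.5, 3

def P(r, x):
    # shifted Jacobi poly-fractonomial basis on [0, 1]; note P_r(0) = 0
    return (2.0 ** a) * (x ** a) * jacobi(r, -a, a)(2.0 * x - 1.0)

def dP(r, x):
    # derivative of P_r^a by the product rule
    J = jacobi(r, -a, a)
    return (2.0 ** a) * (a * x ** (a - 1.0) * J(2.0 * x - 1.0)
                         + 2.0 * x ** a * J.deriv()(2.0 * x - 1.0))

# toy equation: y'(x) + int_0^x y(t) dt = q(x), y(0) = 0,
# manufactured so that y*(x) = x^(a+1) is the exact solution
def q(x):
    return (a + 1.0) * x ** a + x ** (a + 2.0) / (a + 2.0)

nodes = np.arange(1, R + 2) / (R + 2.0)        # R + 1 interior collocation nodes
M = np.zeros((R + 1, R + 1))
b = np.array([q(xt) for xt in nodes])
for i, xt in enumerate(nodes):
    for r in range(R + 1):
        integral, _ = quad(lambda t: P(r, t), 0.0, xt)
        M[i, r] = dP(r, xt) + integral         # operator applied to basis at node
c = np.linalg.solve(M, b)                      # expansion coefficients {c_r}

xs = np.linspace(0.01, 1.0, 100)
y_R = sum(c[r] * P(r, xs) for r in range(R + 1))
print(np.max(np.abs(y_R - xs ** (a + 1.0))))   # tiny: y* lies in the basis span
```

Since the manufactured solution x^(a+1) = x^a · x lies in the span of the basis, collocation recovers it up to quadrature and solver round-off; no extra equation for the initial condition is needed here because every P_r^a vanishes at x = 0.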

Convergence Analysis
In this part, we establish the convergence of the proposed method. To do so, we bound the expansion coefficients and show that the corresponding series is bounded. To study the convergence of the presented method for solving GFIDEs, we will use the following lemmas.

Lemma 1 [29]. Let X = L^2(I) denote the vector space of square-integrable functions on I = [0, 1], and let ξ be a Volterra integral operator on X,

(ξ y)(x) = ∫_0^x ρ(x, t) y(t) dt,

with kernel ρ(x, t) satisfying ρ ∈ L^2(I × I). Then ξ is bounded on L^2(I); that is, ||ξ y||_2 ≤ ||ρ||_{L^2(I×I)} ||y||_2.

For Lemma 2, taking the sum of the corresponding series up to level R − 1 and replacing the exact solution by the approximate solution, we obtain Equation (37). Subtracting Equation (37) from Equation (36), we get Equation (38). From Equation (36), substituting the value of P_r^a(t) from Equation (13), we obtain the required bound, which completes the proof of Lemma 2.

Error Analysis
In this part, we carry out the error analysis by considering different cases. In the linear case, this is done by comparing the exact and approximate solutions. In the nonlinear case, we first show that G satisfies a Lipschitz condition and then apply the usual process to estimate the error.
Let E_R(x) = y(x) − y_R(x) be the error function, where y(x) is the exact solution and y_R(x) is the approximate solution. From Equation (5), we get Equation (42); subtracting Equation (42) from Equation (5) and simplifying gives Equation (43). After substituting all values, we obtain Equation (44). Here, we consider two cases for the function G.
Case 1. When G satisfies the linearity condition, we have Equation (45). Now, by using Gronwall's inequality, we obtain Equation (47), where K_1 and K_2 are defined by Equation (48). Since w_{1−α}(x, t) ∈ L^2, by Lemma 1 there exist constants k_1 and k_2 such that Equation (49) holds. Using Lemma 2 together with Equations (46) and (49), we obtain the convergence bound in Equation (51).

Case 2.
When G is nonlinear.
Lipschitz condition: A function f(x, y) satisfies a Lipschitz condition in the variable y on a set D ⊂ R^2 if there is a constant L > 0 such that |f(x, y_1) − f(x, y_2)| ≤ L|y_1 − y_2| for all (x, y_1), (x, y_2) ∈ D. We assume that G satisfies a Lipschitz condition in y, i.e., |G(y_1(t)) − G(y_2(t))| ≤ L|y_1(t) − y_2(t)|, where L is the Lipschitz constant. For example, G(y) = y^2 satisfies a Lipschitz condition on any bounded set |y| ≤ M with L = 2M.
Now, following similar steps as those discussed for Equation (44), the convergence bound can be proved, as in Equation (51).
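When G is nonlinear, the collocation conditions form a nonlinear algebraic system rather than a linear one. A minimal sketch with G(y) = y^2 in a toy integer-order Volterra equation (not the full B-operator problem; the manufactured data, initial guess, and node choice are our assumptions), solved with a standard root-finder:

```python
import numpy as np
from scipy.special import jacobi
from scipy.integrate import quad
from scipy.optimize import fsolve

a, R = 0.5, 3

def P(r, x):
    # shifted Jacobi poly-fractonomial basis on [0, 1]
    return (2.0 ** a) * (x ** a) * jacobi(r, -a, a)(2.0 * x - 1.0)

def dP(r, x):
    # derivative of P_r^a by the product rule
    J = jacobi(r, -a, a)
    return (2.0 ** a) * (a * x ** (a - 1.0) * J(2.0 * x - 1.0)
                         + 2.0 * x ** a * J.deriv()(2.0 * x - 1.0))

# toy nonlinear equation: y'(x) + int_0^x y(t)^2 dt = q(x), y(0) = 0,
# manufactured so that y*(x) = x^(a+1) is the exact solution
def q(x):
    return (a + 1.0) * x ** a + x ** (2.0 * a + 3.0) / (2.0 * a + 3.0)

nodes = np.arange(1, R + 2) / (R + 2.0)

def residual(c):
    # collocation residuals of the nonlinear equation at the nodes
    y = lambda t: sum(c[r] * P(r, t) for r in range(R + 1))
    out = []
    for xt in nodes:
        integral, _ = quad(lambda t: y(t) ** 2, 0.0, xt)
        out.append(sum(c[r] * dP(r, xt) for r in range(R + 1)) + integral - q(xt))
    return out

c = fsolve(residual, np.zeros(R + 1))          # Newton-type solve of the system
xs = np.linspace(0.01, 1.0, 50)
y_R = sum(c[r] * P(r, xs) for r in range(R + 1))
print(np.max(np.abs(y_R - xs ** (a + 1.0))))   # small if the solver converged
```

The Lipschitz bound on G is what guarantees, in the analysis above, that the error of such a nonlinear solve obeys the same kind of Gronwall-type estimate as the linear case.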

Error Estimate
In this section, we discuss the calculation of the error for the given problem. Let E_R(x) = y(x) − y_R(x) denote the error function of y_R(x) with respect to the exact solution y(x). Replacing y(x) by the approximate solution y_R(x) in Equation (5), we obtain Equation (52) with y_R(0) = (y_0)_R.
Here, the perturbation function can be calculated as in Equation (53). Subtracting Equation (52) from Equation (5), we get Equation (54) with the initial condition E_R(0) = (E_0)_R. Equation (54) can be solved by applying the general methods discussed in Section 4.

Numerical Examples
To verify the theoretical approximation of the discussed problem, we consider convolution-type kernels in the GFIDE. Jacobi poly-fractonomials are taken as the basis to find the approximate solution. We calculate the maximum absolute error by varying the number of elements in the basis in the collocation method. In all numerical results, the number of basis elements and the maximum absolute error are denoted by R and MAE, respectively. The simulations were performed on a computer with (a) RAM: 4.00 GB (3.88 GB usable) and (b) system type: 64-bit operating system, x64-based processor, running MATLAB R2015a (The MathWorks, Inc., Natick, MA, USA).
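For reference, the MAE used throughout this section can be computed as below (the grid size and the illustrative pair of functions are our own, purely hypothetical choices):

```python
import numpy as np

def max_abs_error(y_exact, y_approx, n_pts=1001):
    """MAE = max over a uniform grid on [0, 1] of |y(x) - y_R(x)|."""
    x = np.linspace(0.0, 1.0, n_pts)
    return np.max(np.abs(y_exact(x) - y_approx(x)))

# hypothetical illustration: exact solution x^3 + x (as in Example 4)
# versus a slightly perturbed "approximation"
err = max_abs_error(lambda x: x ** 3 + x,
                    lambda x: x ** 3 + x + 1e-6 * np.sin(np.pi * x))
print(err)  # about 1e-06
```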
In this example, the exact solution of the problem is a second-degree polynomial, so in the approximation the basis R = 2, 3, 4 is chosen, and the corresponding maximum absolute error is calculated for the different choices of basis; these findings are shown in Table 1. We also observe the variation in the approximate solution for the different values of a = 1/4, 1/8, 1/16, 1/32, as shown in Figure 1. The MAE is calculated and the graph of the error is shown in Figure 2. We observe that as a tends to zero, the error is reduced and the approximate solution approaches the exact solution.
We solved this problem by choosing a different number of basis elements, R = 2, 3, 4. The corresponding MAEs for the different values of R are presented in Table 1, and the corresponding graph is shown in Figure 3 for R = 3. The approximate solutions of Example 2 are presented in Figure 4 for the different values α = 1/4, 1/8, 1/16, 1/32, and 1/64. For α = 1/4, the numerical solution overlaps the exact solution. Figure 5 shows the relationship between the exact and the approximate solutions of the present example. It is clear that as α approaches 1/4, the numerical solution converges to the exact solution. Table 2 presents the comparison of the MAEs with the method proposed in [29].

Example 3. Consider the nonlinear case of the problem Equations (5) and (6) with α = 1/2, n = 1, y(0) = 0, q(x) = 2√x + 2x^{3/2} − √x + x^{3/2} ln(1 + x), and the exact solution ln(1 + x).
In this case, the approximate solution is obtained by selecting different values of R = 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11. The MAEs corresponding to these cases are calculated, and the MAE comparison with other polynomials is shown in Table 3. We also plot the graph of MAE versus R, as shown in Figure 6. It can be observed that as R increases, the MAE tends to zero. A graph of the exact and approximate solutions is presented in Figure 7.

Example 4. Consider the problem Equations (5) and (6) with α = 2/3, n = 1, y(0) = 0. The exact solution of this problem is x^3 + x.
Here, as in previous cases, we calculate the MAE corresponding to different values of R = 3, 4, 5; the case R = 4 is shown in Figure 8. We also depict the behavior of the solution by changing the value of d = 1/4, 1/8, 1/16, 1/32, and 1/64. Furthermore, as d tends to zero, the approximate solution converges to the exact solution, as shown in Figure 9. In Figure 5, a graph of the exact and the approximate solutions is given.

Example 5. Consider,
This problem, with y(0) = 0, has a known exact solution.
For the given problem, we find the approximate solution by varying the number of elements in the basis and calculate the corresponding MAEs, which are shown in Table 4. The graph of MAE versus R is shown in Figure 10. We notice that the MAE decreases as we increase R, and for R ≥ 11, no change in MAE is observed. In Figure 7, a graph of the exact and the approximate solutions is shown. We also plot the MAE for R = 12, as shown in Figure 11.


Conclusions
A convergent collocation method is developed in this paper to solve GFIDEs defined in terms of the B-operator. Jacobi poly-fractonomials are used as the basis in the proposed collocation method. The choice of Jacobi poly-fractonomials helps to increase the accuracy of the approximate solution. The presented method works well on both linear and nonlinear GFIDEs and produces accurate solutions.