Numerical Methods for Caputo–Hadamard Fractional Differential Equations with Graded and Non-Uniform Meshes

Abstract: We consider predictor-corrector numerical methods for solving Caputo–Hadamard fractional differential equations with the graded meshes log t_j = log a + (log(t_N/a)) (j/N)^r, j = 0, 1, 2, ..., N, with a ≥ 1 and r ≥ 1, where log a = log t_0 < log t_1 < · · · < log t_N = log T is a partition of [log t_0, log T]. We also consider the rectangular and trapezoidal methods for solving Caputo–Hadamard fractional differential equations with the non-uniform meshes log t_j = log a + (log(t_N/a)) j(j + 1)/(N(N + 1)), j = 0, 1, 2, ..., N. Under weak smoothness assumptions on the Caputo–Hadamard fractional derivative, e.g., CH D^α_{a,t} y(t) ∉ C¹[a, T] with α ∈ (0, 2), the optimal convergence orders of the proposed numerical methods are obtained by choosing a suitable graded mesh ratio r ≥ 1. Numerical examples are given to show that the numerical results are consistent with the theoretical findings.



Introduction
Recently, fractional differential equations have become an active research area due to their applications in a wide range of fields, including mechanics, computer science, and biology [1][2][3][4]. There are different kinds of fractional derivatives, e.g., Caputo, Riemann–Liouville, and Riesz, which have been studied extensively in the literature. The Hadamard fractional derivative is also very important and is used to model various physical problems [5][6][7][8][9][10][11].
The Hadamard fractional derivative was introduced as early as 1892 [12]. More recently, a new derivative involving a Caputo-type modification of the Hadamard derivative, known as the Caputo–Hadamard derivative, was suggested [8]. The aim of this paper is to study and analyze some useful numerical methods for solving Caputo–Hadamard fractional differential equations with graded and non-uniform meshes under weak smoothness assumptions on the Caputo–Hadamard fractional derivative, e.g., CH D^α_{a,t} y(t) ∉ C¹[a, T] with α ∈ (0, 2).
We thus consider the following Caputo–Hadamard fractional differential equation, with α > 0 [8],

CH D^α_{a,t} y(t) = f(t, y(t)), 1 ≤ a ≤ t ≤ T, δ^k y(a) = y_a^{(k)}, k = 0, 1, ..., n − 1,

where f(t, y) is a nonlinear function with respect to y ∈ R, the initial values y_a^{(k)} are given, and n − 1 < α < n for n = 1, 2, 3, .... Here CH D^α_{a,t} denotes the Caputo–Hadamard fractional derivative defined by

CH D^α_{a,t} y(t) = (1/Γ(n − α)) ∫_a^t (log(t/s))^{n−α−1} δ^n y(s) ds/s, n = ⌈α⌉,

with δ^n y(s) = (s d/ds)^n y(s), where ⌈α⌉ denotes the smallest integer greater than or equal to α [8].
To ensure that (1) has a unique solution, we assume that the function f is continuous and satisfies the following Lipschitz condition with respect to the second variable y [7,13]:

|f(t, y_1) − f(t, y_2)| ≤ L|y_1 − y_2| for some L > 0 and all y_1, y_2 ∈ R.
For some recent existence and uniqueness results for Caputo-Hadamard fractional differential equations, the readers can refer to [14][15][16] and the references therein.
It is well known that Equation (1) is equivalent to the following Volterra integral equation, with α > 0 [5],

y(t) = Σ_{k=0}^{n−1} (y_a^{(k)}/k!) (log(t/a))^k + (1/Γ(α)) ∫_a^t (log(t/s))^{α−1} f(s, y(s)) ds/s.

Let us review some numerical methods for solving (1). Gohar et al. [7] studied the existence and uniqueness of the solution of (1), and Euler and predictor-corrector methods were considered. Gohar et al. [13] further considered the rectangular, trapezoidal, and predictor-corrector methods for solving (1) with uniform meshes under the smoothness assumption CH D^α_{a,t} y(t) ∈ C²[a, T] with α ∈ (0, 1). There are also some numerical methods for solving Caputo–Hadamard time-fractional partial differential equations [7,17]. In this paper, we shall assume that CH D^α_{a,t} y(t) ∉ C²[a, T] with α ∈ (0, 2) and that CH D^α_{a,t} y(t) behaves as (log(t/a))^σ with σ ∈ (0, 1), which implies that the derivatives of CH D^α_{a,t} y(t) have singularities at log a. In such a case, we cannot expect numerical methods with uniform meshes to attain the optimal convergence orders. To obtain the optimal convergence orders, we shall use the graded and non-uniform meshes as in Liu et al. [18,19] for solving Caputo fractional differential equations. We shall show that the predictor-corrector method has the optimal convergence orders with the graded meshes log t_j = log a + (log(t_N/a)) (j/N)^r, j = 0, 1, 2, ..., N, for some suitable r ≥ 1. We shall also show that the rectangular and trapezoidal methods have the optimal convergence orders with the non-uniform meshes log t_j = log a + (log(t_N/a)) j(j + 1)/(N(N + 1)), j = 0, 1, 2, ..., N. For some recent works on numerical methods for solving fractional differential equations with graded and non-uniform meshes, we refer to [17,20–22]. In particular, Stynes et al. [23,24] applied a graded mesh in a finite difference method for solving subdiffusion equations when the solutions of the equations are not sufficiently smooth. Liu et al.
[18,19] applied a graded mesh for solving Caputo fractional differential equations by using a fractional Adams method, under the assumption that the solution is not sufficiently smooth. The aim of this work is to extend the ideas of Liu et al. [18,19] for Caputo fractional differential equations to Caputo–Hadamard fractional differential equations.
The paper is organized as follows. In Section 2 we derive the error estimates of the predictor-corrector method for solving (1) with graded meshes. In Section 3 we derive the error estimates of the rectangular and trapezoidal methods for solving (1) with non-uniform meshes. In Section 4 we provide several numerical examples which support the theoretical conclusions of Sections 2 and 3.
Throughout this paper, we denote by C a generic constant depending on y, T, α, but independent of t > 0 and N, which could be different at different occurrences.

Predictor-Corrector Method with Graded Meshes
In this section, we shall consider the error estimates of the predictor-corrector method for solving (1) with graded meshes. We first recall the following smoothness properties of the solutions to (1).
Then there exist a function φ ∈ C¹[a, T] and constants c_1, c_2, ..., c_v ∈ R such that the solution y of (1) can be expressed in the given form. For example, when 0 < α < 1 and f ∈ C²(G), we would have v = ⌈1/α⌉ − 1 ≥ 1. This implies that the solution y of (1) behaves as (log(t/a))^α, 0 < α < 1, and as such y ∉ C²[a, T].
With the above two theorems, we can see that if one of y and CH D^α_{a,t} y(t) is sufficiently smooth, then the other will not be sufficiently smooth unless some special conditions are met.
Recall that, by (3), the solution of (1) can be written in the following form, with α ∈ (0, 1) and y_a = y_a^{(0)}. It is therefore natural to introduce the following smoothness assumptions on the fractional derivative CH D^α_{a,t} y(t) in (1).

Assumption 1.
Let 0 < σ < 1 and α > 0. Let y be the solution of (1). Assume that CH D^α_{a,t} y(t) can be expressed as a function of log t, that is, there exists a smooth function G_a : [0, ∞) → R such that G_a(log t) := CH D^α_{a,t} y(t) ∈ C²(a, T]. Further, we assume that G_a(·) satisfies the following smoothness assumptions, with 1 ≤ a ≤ t ≤ T, where G′_a(·) and G″_a(·) denote the first and second order derivatives of G_a, respectively. Denote g_a(t) := G_a(log t). We then have that the assumptions (6) are equivalent to the corresponding bounds on g_a, with 1 ≤ a ≤ t ≤ T, which are similar to the smoothness assumptions given in Liu et al. [18] for the Caputo fractional derivative C D^α_{0,t} y(t).
Remark 1. Assumption 1 describes the behavior of g_a(t) near t = a and implies that g_a(t) has a singularity near t = a. In particular, g_a ∉ C²[a, T]. For example, we may choose g_a(t) = (log(t/a))^σ with 0 < σ < 1.
Let N be a positive integer and let a = t_0 < t_1 < · · · < t_N = T be a partition of [a, T]. We define the following graded mesh on [log a, log T], with log a = log t_0 < log t_1 < · · · < log t_N = log T, such that, with r ≥ 1,

log t_j = log a + (log(t_N/a)) (j/N)^r, j = 0, 1, 2, ..., N.

Denote by y_k ≈ y(t_k), k = 0, 1, 2, ..., N, the approximation of y(t_k). Let us introduce the different numerical methods for solving (3) with α ∈ (0, 1) below; the numerical methods for solving (3) with α ≥ 1 may be defined similarly. The fractional rectangular method for solving (3) is given by (9), where the weights b_{j,k+1} are defined in (10). The fractional trapezoidal method for solving (3) is given by (11), where the weights a_{j,k+1} for j = 0, 1, 2, ..., k + 1 are defined in (12). The predictor-corrector Adams method for solving (3) is given by (13), with α ∈ (0, 1) and k = 0, 1, ..., N − 1, where the weights b_{j,k+1} and a_{j,k+1} are as above.
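As a concrete illustration of the graded mesh and the fractional rectangular method, here is a minimal Python sketch. The weight formula used below, b_{j,k+1} = [(log(t_{k+1}/t_j))^α − (log(t_{k+1}/t_{j+1}))^α]/Γ(α + 1), is the natural Hadamard analogue of the Caputo rectangular weights (the exact integral of the kernel over each subinterval with f frozen at the left endpoint); the function names are ours, and the code is a sketch under these assumptions rather than a transcription of Equations (9) and (10).

```python
import math

def graded_mesh(a, T, N, r):
    # log t_j = log a + (log(T/a)) * (j/N)^r, j = 0, 1, ..., N
    return [a * (T / a) ** ((j / N) ** r) for j in range(N + 1)]

def ch_rectangular(f, a, T, y_a, alpha, N, r=1.0):
    """Fractional rectangular method for CH D^alpha y = f(t, y), y(a) = y_a,
    with 0 < alpha < 1, on the graded mesh with ratio r."""
    t = graded_mesh(a, T, N, r)
    y = [y_a]
    c = 1.0 / math.gamma(alpha + 1.0)
    for k in range(N):
        s = 0.0
        for j in range(k + 1):
            # assumed weight b_{j,k+1}: Hadamard kernel integrated exactly
            # over [t_j, t_{j+1}], with f evaluated at the left endpoint t_j
            b = c * (math.log(t[k + 1] / t[j]) ** alpha
                     - math.log(t[k + 1] / t[j + 1]) ** alpha)
            s += b * f(t[j], y[j])
        y.append(y_a + s)
    return t, y
```

For instance, with f(t, y) = (Γ(1 + β)/Γ(1 + β − α)) (log t)^{β−α} (as in Example 2 below), the computed values converge to the exact solution y(t) = (log t)^β as N grows.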
If we assume that g_a(t) := CH D^α_{a,t} y(t) satisfies Assumption 1, we can prove the following error estimate.

Theorem 3. Assume that g_a(t) := CH D^α_{a,t} y(t) satisfies Assumption 1. Further assume that y(t_j) and y_j are the solutions of (3) and (13), respectively.

Proof of Theorem 3
In this subsection, we shall prove Theorem 3. To help with this, we start by proving some preliminary lemmas. In Lemma 1 we establish the error estimate between g_a(s) and the piecewise linear function P_1(s), for both 0 < α ≤ 1 and α > 1; this will be used to estimate one of the terms in the main proof.

Lemma 1. Assume that g_a(t) satisfies Assumption 1, and let P_1(s) be the piecewise linear interpolant of g_a on the mesh.

Proof. For I_1, note that, with s ∈ [a, t_1], Assumption 1 and (14) yield the required bound, since there exists a constant C > 0 bounding the relevant kernel terms. Thus we obtain the estimate for 0 < α ≤ 1, and the corresponding estimate for α > 1.

For I_2 we have, with ξ_j ∈ (t_j, t_{j+1}), j = 1, 2, ..., k − 1 and k = 2, 3, ..., N − 1, the interpolation error bound, where we have used the following fact, with s ∈ (t_j, t_{j+1}), which can be seen easily by noting g_a(s) = G_a(log s) and (7). By Assumption 1 and by using [24] (Section 5.2), we have, with k ≥ 4, a splitting of the sum at index ⌈(k − 1)/2⌉, where ⌈·⌉ denotes the ceiling function defined as before. For each of these integrals we consider separately the cases 0 < α ≤ 1 and α > 1.
In Lemma 2 below, we state that the weights a_{j,k+1} and b_{j,k+1} are positive for all values of j.
Proof. The proof is straightforward and we omit it here.
In Lemma 3, we find an upper bound for a_{k+1,k+1}, which will be used in the main proof when addressing the a_{k+1,k+1} term.
Proof. We have, by (12), with ξ_k ∈ (k, k + 1), the stated bound.

In Lemma 4 we establish the error estimate between g_a(s) and the piecewise constant function P_0(s), for both 0 < α ≤ 1 and α > 1. This will be used to estimate one of the terms in the main proof.

Lemma 4. Assume that g_a(t) satisfies Assumption 1.
For Lemma 5, we are attempting to find an upper bound for the sum of our weights. This will be used in the main proof when simplifying several terms.
We will now use the above lemmas to prove the error estimates of Theorem 3.
The term I is estimated by Lemma 1. For II, we have, by Lemma 2 and the Lipschitz condition of f, the corresponding bound. For III, we have, by Lemma 2 and the Lipschitz condition of f, a splitting into III_1 and III_2. The term III_1 is estimated by Lemma 4. For III_2, we have, by Lemma 2, the corresponding bound. Hence we obtain the required estimate. The rest of the proof is exactly the same as the proof of [18] (Theorem 1.4). The proof of Theorem 3 is complete.

Rectangular and Trapezoidal Methods with Non-Uniform Meshes
In this section, we will consider the error estimates for the fractional rectangular and trapezoidal methods for solving (1). These results are based on the error estimates proposed by Liu et al. [19]. First, we will introduce the non-uniform meshes for solving (1).
Let N be a positive integer and let a = t_0 < t_1 < · · · < t_N = T be a partition of [a, T]. We define the following non-uniform mesh on [log a, log T], with log a = log t_0 < log t_1 < · · · < log t_N = log T:

log t_j = log a + (log t_N − log a) j(j + 1)/(N(N + 1)), j = 0, 1, 2, ..., N.
When j = 0 we have log t_0 = log a, and when j = N we have log t_N = log T. Further, the step sizes τ_j := log t_j − log t_{j−1} = (log t_N − log a) · 2j/(N(N + 1)) increase monotonically in j.
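A minimal Python sketch of this non-uniform mesh construction (helper names are ours) makes the endpoint property and the monotonically increasing log-steps easy to check:

```python
import math

def nonuniform_mesh(a, T, N):
    # log t_j = log a + (log T - log a) * j*(j+1) / (N*(N+1))
    L = math.log(T / a)
    return [a * math.exp(L * j * (j + 1) / (N * (N + 1))) for j in range(N + 1)]

def log_steps(mesh):
    # tau_j = log t_j - log t_{j-1} = 2j * (log T - log a) / (N(N+1)),
    # so the mesh is refined near t = a and the steps increase with j
    return [math.log(mesh[j] / mesh[j - 1]) for j in range(1, len(mesh))]
```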

Rectangular Method
In this subsection, we prove the following error estimate for the rectangular method on the given non-uniform mesh.

Theorem 4. Assume that g_a(t) := CH D^α_{a,t} y(t) satisfies Assumption 1. Further assume that y(t_j) and y_j are the solutions of (3) and (9), respectively.
To prove Theorem 4, we need some preliminary lemmas. Here we only state the lemmas without proofs, since the proofs are similar to those in Liu et al. [19]. In Lemma 6 we give a key estimate which will be used in the main proof.

Lemma 6. Assume that g_a(t) := CH D^α_{a,t} y(t) satisfies Assumption 1.

1. If 0 < α ≤ 1, then the corresponding estimate holds.

2. If 1 < α < 2, then the corresponding estimate holds.

In Lemma 7 we give upper bounds for the weights b_{j,k+1} and a_{j,k+1}.

Lemma 7.
If α > 0, k is a non-negative integer, and τ_j ≤ τ_{j+1}, j = 0, 1, ..., k − 1, then the weights b_{j,k+1} and a_{j,k+1} defined by Equations (10) and (12) satisfy the following estimates.

In Lemma 8 we give an adapted Gronwall inequality to be used in the main results.
Proof of Theorem 4. For k = 0, 1, 2, ..., N − 1, we split the error into two terms. The first term I can be estimated by Lemma 6. For II, we apply Lemma 2 and the Lipschitz condition of f. Substituting into the original splitting, we get

|y(t_{k+1}) − y_{k+1}| ≤ I + L Σ_{j=0}^{k} b_{j,k+1} |y(t_j) − y_j|.

By applying Lemma 8, we obtain the desired estimate. This completes the proof of Theorem 4.

Trapezoidal Method
In this subsection we consider the error estimates of the trapezoidal method on the non-uniform mesh. We shall prove the following theorem.

Theorem 5. Assume that g_a(t) := CH D^α_{a,t} y(t) satisfies Assumption 1. Further assume that y(t_j) and y_j are the solutions of (3) and (11), respectively.

1.
If 0 < α ≤ 1, then the corresponding estimate holds.

To prove Theorem 5, we need the following lemma. In Lemma 9 we give a key estimate which will be used in the main proof.

Lemma 9. Assume that g_a(t) := CH D^α_{a,t} y(t) satisfies Assumption 1.
Proof of Theorem 5. The term I is estimated by Lemma 9. For II, we apply Lemma 7 and the Lipschitz condition of f to obtain the corresponding bound. Thus, by the Gronwall inequality of Lemma 8, we have |y(t_{k+1}) − y_{k+1}| ≤ CI. This completes the proof of Theorem 5.

Numerical Examples
In this section, we consider some numerical examples to confirm the theoretical results obtained in the previous sections. For simplicity, all the examples below take 0 < α < 1; the results may be adapted for α > 1.

Example 1.
Consider the following nonlinear fractional differential equation, with α ∈ (0, 1) and a = 1, where the right-hand side f is chosen so that the exact solution is y(t) = (log t)^5 − (log t)^4 + 2(log t)^3. We solve Example 1 over the interval [1, 2]. Let N be a positive integer and let log a = log t_0 < log t_1 < · · · < log t_N = log T be the graded mesh on the interval [log a, log T].
This mesh is defined as log t_j = log a + (log(T/a)) (j/N)^r for j = 0, 1, 2, ..., N with r ≥ 1, so by Theorem 3 we obtain the corresponding error estimate. In Table 1 we report the maximum absolute error and the experimental order of convergence (EOC) of the predictor-corrector method for varying α and N. For each 0 < α < 1, we choose N = 10 × 2^l, l = 0, 1, 2, ..., 7, and for this example we take r = 1. The maximum absolute errors ||e_N||_∞ were obtained with respect to N, and the EOC is calculated as log_2(||e_N||_∞ / ||e_{2N}||_∞). As we can see, the EOCs for this example are almost O(N^{−(1+α)}), as predicted by Theorem 3. Since the solution of this FODE is sufficiently smooth, any value of r gives the optimal convergence order stated above. As we use r = 1, i.e., a uniform mesh, we can compare these results with the methods introduced by Gohar et al. [13]; we obtain similar results. In Figure 1, we plot the order of convergence for Example 1. From Equation (24) we have, with h = 1/N, the corresponding bound. Let y = log_2 ||e_N|| and x = log_2 h. We then plot y against x for h = 1/(5 × 2^l), l = 0, 1, ..., 7; the gradient of the graph then equals the EOC. To compare with the theoretical order of convergence, we also plot the straight line y = (1 + α)x. For Figure 1 we choose α = 0.8. We observe that the two lines are parallel, and therefore conclude that the order of convergence of this predictor-corrector method is O(h^{1+α}).
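The EOC computation described above can be sketched in Python (helper name ours). As a sanity check, feeding it a model error sequence C·N^{−(1+α)} reproduces the rate 1 + α:

```python
import math

def eoc(err_coarse, err_fine):
    # experimental order of convergence when N is doubled:
    # EOC = log2(||e_N|| / ||e_{2N}||)
    return math.log2(err_coarse / err_fine)

# model check: if the error behaves exactly like C * N^{-(1+alpha)},
# the computed EOC equals 1 + alpha
alpha = 0.8
errs = [3.0 * N ** -(1 + alpha) for N in (10, 20, 40, 80)]
orders = [eoc(errs[i], errs[i + 1]) for i in range(3)]
```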

Example 2.
Consider the following nonlinear fractional differential equation, with α, β ∈ (0, 1) and a = 1. We solve Example 2 over the interval [1, 2]. The exact solution of this equation is y = (log(t/a))^β, and CH D^α_{a,t} y(t) = (Γ(1 + β)/Γ(1 + β − α)) (log t)^{β−α}. This implies that CH D^α_{a,t} y(t) behaves as (log t)^{β−α}, so CH D^α_{a,t} y(t) satisfies Assumption 1. We use the same graded mesh as in Example 1. Therefore, by Theorem 3 with σ = β − α, we obtain the corresponding error estimate. In Tables 2–4 we report the EOC of the predictor-corrector method for varying values of α and for r = 1 and r = (1 + α)/β. With β = 0.9 fixed, we obtain the EOC and the maximum absolute error for increasing values of N. We see that the EOC is almost rβ = 0.9 when r = 1 and almost 1 + α when r = (1 + α)/β. When r = 1, we are using a uniform mesh, and the EOC obtained is the same as in Gohar et al. [13]. Comparing these with the results for the graded mesh with r = (1 + α)/β, we see that a higher EOC is obtained and the optimal order of convergence is recovered.
In Figure 2, we plot the order of convergence for Example 2 with r = (1 + α)/β and α = 0.8. The plot is produced as for Figure 1. We also plot the straight line y = (1 + α)x. We observe that the two lines are parallel, and therefore conclude that the order of convergence of this predictor-corrector method is O(h^{1+α}).

Example 3. Consider the following fractional differential equation, with α ∈ (0, 1) and a = 1,

CH D^α_{a,t} y(t) + y(t) = 0, 1 ≤ a < t ≤ T, y(a) = 1.

The exact solution of this FODE is y(t) = E_{α,1}(−(log t)^α), and therefore CH D^α_{a,t} y(t) = −E_{α,1}(−(log t)^α), where E_{α,γ}(z) denotes the Mittag-Leffler function. This shows that CH D^α_{a,t} y(t) behaves as c_1 + c_2 (log t)^α, and hence CH D^α_{a,t} y(t) satisfies Assumption 1. Therefore, with σ = α, we obtain the corresponding error estimate by Theorem 3. We solve this equation on the same graded mesh as in Example 1 with varying r values. In Tables 5–7, we report the EOC and the maximum absolute error for increasing N and for r = 1 and r = (1 + α)/(2α). The experimental orders of convergence are almost O(N^{−2α}) if we choose r = 1 and almost O(N^{−(1+α)}) if we choose r = (1 + α)/(2α). Once again, using the graded mesh with the optimal r value gives a higher order of convergence than the uniform mesh with r = 1. In Figure 3, we plot the order of convergence for Example 3 with r = (1 + α)/(2α) and α = 0.8. The plot is produced as for Figure 1. We also plot the straight line y = (1 + α)x. We observe that the two lines are parallel, and therefore conclude that the order of convergence of this predictor-corrector method is O(h^{1+α}) for a suitable graded mesh ratio r.

Example 4. In this example we apply the rectangular and trapezoidal methods for solving (27).
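The exact solution of (27) involves the Mittag-Leffler function E_{α,1}. A minimal Python sketch of its series evaluation follows; the 100-term truncation is our choice, adequate here since |z| = (log t)^α ≤ (log 2)^α < 1 on [1, 2]:

```python
import math

def mittag_leffler(alpha, z, terms=100):
    # E_{alpha,1}(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1)
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def exact_solution(alpha, t):
    # y(t) = E_{alpha,1}(-(log t)^alpha), the exact solution of (27)
    return mittag_leffler(alpha, -math.log(t) ** alpha)
```

For α = 1 this reduces to E_{1,1}(z) = e^z, so the sketch can be checked against y(t) = exp(−log t) = 1/t.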
Let N be a positive integer and let log t_j = log a + (log t_N − log a) j(j + 1)/(N(N + 1)), j = 0, 1, 2, ..., N, be the non-uniform mesh on the interval [log a, log T]. We take a = 1 and T = 2.
In Table 8, we report the EOC and the maximum absolute error for increasing values of N and for α = 0.2, 0.4, 0.6 for the rectangular method. Using σ = α and applying Theorem 4, we obtain the bound on max_{0≤j≤N} |y(t_j) − y_j|. The experimental orders of convergence are almost O(N^{−4α}) if α < 0.25 and almost O(N^{−1}) if α ≥ 0.25, which confirms the theoretical error estimates. In Table 9, we solve (27) by the same method on a uniform mesh; the comparison shows that a larger EOC is achieved with the non-uniform mesh than with the uniform mesh.
In Table 10, we report the EOC and the maximum absolute error for increasing values of N and for α = 0.2, 0.4, 0.6 for the trapezoidal method. Using σ = α and applying Theorem 5, we obtain the bound on max_{0≤j≤N} |y(t_j) − y_j|. The experimental orders of convergence are almost O(N^{−4α}) if α < 0.5 and almost O(N^{−2}) if α ≥ 0.5, which confirms the theoretical error estimates. In Table 11, we solve (27) by the same method on a uniform mesh; the comparison shows that a larger EOC is achieved with the non-uniform mesh than with the uniform mesh.

Conclusions
In this paper we proposed several numerical methods for solving Caputo–Hadamard fractional differential equations with graded and non-uniform meshes. We first introduced a predictor-corrector method and derived its error estimates on a graded mesh, showing that the optimal convergence orders can be recovered when the solutions are not sufficiently smooth. We then derived the error estimates for the fractional rectangular and fractional trapezoidal methods with non-uniform meshes. Finally, we presented several numerical examples supporting the theoretical results on the convergence orders and error estimates of the above methods.