
Article

# Detailed Error Analysis for a Fractional Adams Method on Caputo–Hadamard Fractional Differential Equations

Department of Physical, Mathematical and Engineering Sciences, University of Chester, Chester CH1 4BJ, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Academic Editor: Sotiris K. Ntouyas
Foundations 2022, 2(4), 839-861; https://doi.org/10.3390/foundations2040057
Received: 28 August 2022 / Revised: 12 September 2022 / Accepted: 13 September 2022 / Published: 22 September 2022
(This article belongs to the Special Issue Recent Advances in Fractional Differential Equations and Inclusions)

## Abstract

We consider a predictor–corrector numerical method for solving a Caputo–Hadamard fractional differential equation over the uniform mesh $\log t_{j} = \log a + \left(\log \frac{t_{N}}{a}\right)\left(\frac{j}{N}\right)$, $j = 0, 1, 2, \dots, N$, with $a \ge 1$, where $\log a = \log t_{0} < \log t_{1} < \dots < \log t_{N} = \log T$ is a partition of $[\log a, \log T]$. The error estimates under different smoothness properties of the solution $y$ and the nonlinear function $f$ are studied. Numerical examples are given to verify that the numerical results are consistent with the theoretical results.

## 1. Introduction

Fractional differential equations have recently become an active research area due to their applications in fields including mechanics, computer science, and biology [1,2,3,4]. There are different fractional derivatives, e.g., the Caputo, Riemann–Liouville, and Riesz derivatives, and they are well studied in the literature. One particular fractional derivative, the Hadamard fractional derivative, is also important and has been used to model physical problems in many fields [5,6,7,8,9,10,11]. The Hadamard fractional derivative was first introduced in 1892, and the Caputo–Hadamard derivative was suggested by Jarad et al. In this paper, we discuss a numerical method for solving a Caputo–Hadamard fractional initial value problem. We analyze the smoothness properties of various aspects of such equations and explain how these properties affect the convergence order of the numerical method.
Consider the following Caputo–Hadamard fractional differential equation, with $α > 0$, 
$\,{}^{CH}D_{a,t}^{\alpha} y(t) = f\left(t, y(t)\right), \qquad \delta^{k} y(a) = y_{a}^{(k)}, \quad k = 0, 1, \dots, \lceil \alpha \rceil - 1,$
for $1 \le a \le t \le T$. Here, $\lceil \alpha \rceil$ denotes the least integer greater than or equal to $\alpha$, $f(t, y)$ is a nonlinear function with respect to $y \in \mathbb{R}$, and the initial values $y_{a}^{(k)}$ are given. We also define $\alpha$ such that $n - 1 < \alpha < n$, for $n = 1, 2, 3, \dots$ The fractional derivative ${}^{CH}D_{a,t}^{\alpha}$ denotes the Caputo–Hadamard derivative defined as,
$\,{}^{CH}D_{a,t}^{\alpha} y(t) = \frac{1}{\Gamma(\lceil \alpha \rceil - \alpha)} \int_{a}^{t} \left(\log \frac{t}{s}\right)^{\lceil \alpha \rceil - \alpha - 1} \delta^{n} y(s)\, \frac{ds}{s}, \qquad t \ge a \ge 1,$
with $\delta^{n} y(s) = \left(s \frac{d}{ds}\right)^{n} y(s)$. It is well known that (1) is equivalent to the following Volterra integral equation, with $\alpha > 0$ [13,14],
$y(t) = \sum_{\nu = 0}^{\lceil \alpha \rceil - 1} \frac{y_{a}^{(\nu)}}{\nu!} \left(\log \frac{t}{a}\right)^{\nu} + \frac{1}{\Gamma(\alpha)} \int_{a}^{t} \left(\log \frac{t}{s}\right)^{\alpha - 1} f\left(s, y(s)\right) \frac{ds}{s}.$
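A fact worth recording here, since it is used implicitly in the error analysis below (it underlies Theorems 3 and 4), is how the Caputo–Hadamard derivative acts on powers of $\log(t/a)$: the substitution $u = \log(t/a)$ turns it into a classical Caputo derivative in $u$, so the familiar power rule carries over. For $0 < \alpha < 1$ we have

```latex
{}^{CH}D_{a,t}^{\alpha}\left(\log\frac{t}{a}\right)^{p}
  = \frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}
    \left(\log\frac{t}{a}\right)^{p-\alpha},
  \qquad p > 0 .
```

In particular, smooth functions of $t$ generally have Caputo–Hadamard derivatives with log-power singularities at $t = a$, and vice versa.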
Let us begin by reviewing some different numerical methods for solving (1). Diethelm et al. first introduced the fractional Adams method for the Caputo fractional differential equation as a generalization of the classical numerical method for solving first-order ODEs. They discussed the error estimates for the proposed method, including the convergence orders under different smoothness assumptions on $y$ and $f$. This work was then extended by Liu et al. to show that when ${}^{C}D_{a,t}^{\alpha} y(t) \notin C^{2}[0,T]$, the optimal convergence order is not achieved but can be recovered by implementing a graded mesh. Gohar et al. studied the existence and uniqueness of the solution of (1), and several numerical methods were introduced, including the Euler and predictor–corrector methods. Gohar et al. extended their work by introducing the rectangular, trapezoidal, and predictor–corrector methods for solving (1) with a uniform mesh under the smoothness assumption ${}^{CH}D_{a,t}^{\alpha} y(t) \in C^{2}[a,T]$ with $\alpha \in (0,1)$. Green et al. extended those results to the Caputo–Hadamard differential Equation (1). Numerical methods for solving Caputo–Hadamard time-fractional partial differential equations can be found in [7,17]. More recent works on numerical methods for solving fractional differential equations can be found in [17,18,19,20,21].
Caputo–Hadamard type fractional differential equations are of significant interest in real applications due to the logarithmic nature of the integral kernel. This is especially true in mechanics and engineering. An example of this is the use of Hadamard-type equations in fracture analysis and the modeling of elasticity. Furthermore, the Caputo–Hadamard fractional derivative has been used in fractional turbulent flow models. In biology, the Hadamard fractional derivative has been used in modeling cancer treatment by radiotherapy. Many of these models require efficient numerical methods for solving them. In the literature, the error estimates of the numerical methods proposed for solving the Caputo–Hadamard fractional differential Equation (1) are based on the assumption that the solution $y$ and the function $f$ are sufficiently smooth [7,13]. In this paper, we will consider the error estimates of the numerical methods under different smoothness assumptions on $y$ and $f$.
Diethelm et al. considered a variety of smoothness assumptions on $y$ and $f$ for the fractional Adams method for solving Caputo differential equations. The aim of this work is to extend those ideas from the Caputo fractional differential equation to the Caputo–Hadamard fractional differential Equation (1).
Let us start by briefly recalling the Adams–Bashforth–Moulton method for the Caputo–Hadamard fractional derivative. To construct such a method, we require an approximation of the integral in (3). We will apply the following product trapezoidal quadrature rule,
$\int_{a}^{t_{k+1}} \left(\log t_{k+1} - \log z\right)^{\alpha - 1} g(z)\, \frac{dz}{z} \approx \int_{a}^{t_{k+1}} \left(\log t_{k+1} - \log z\right)^{\alpha - 1} \tilde{g}_{k+1}(z)\, \frac{dz}{z},$
where the approximation $\tilde{g}_{k+1}(z)$ is the piecewise linear interpolant of $g$ at the nodes $t_{j}$, $j = 0, 1, 2, \dots, k+1$. Using the standard techniques from quadrature theory, we can write the right-hand side integral as,
$\int_{a}^{t_{k+1}} \left(\log t_{k+1} - \log z\right)^{\alpha - 1} \tilde{g}_{k+1}(z)\, \frac{dz}{z} = \sum_{j=0}^{k+1} a_{j,k+1}\, g(t_{j}),$
where,
$a_{j,k+1} = \int_{a}^{t_{k+1}} \left(\log t_{k+1} - \log z\right)^{\alpha - 1} \phi_{j,k+1}(z)\, \frac{dz}{z},$
and,
$\phi_{j,k+1}(z) = \begin{cases} \dfrac{\log z - \log t_{j-1}}{\log t_{j} - \log t_{j-1}}, & \text{if } t_{j-1} < z < t_{j}, \\[4pt] \dfrac{\log t_{j+1} - \log z}{\log t_{j+1} - \log t_{j}}, & \text{if } t_{j} < z < t_{j+1}, \\[4pt] 0, & \text{otherwise}. \end{cases}$
Evaluating the integral, we get,
$a_{j,k+1} = \frac{1}{\alpha(\alpha+1)} \begin{cases} \dfrac{1}{\log \frac{t_{1}}{a}} A_{0}, & j = 0, \\[4pt] \dfrac{1}{\log \frac{t_{j+1}}{t_{j}}} A_{j} + \dfrac{1}{\log \frac{t_{j}}{t_{j-1}}} B_{j}, & j = 1, 2, \dots, k, \\[4pt] \left(\log \frac{t_{k+1}}{t_{k}}\right)^{\alpha}, & j = k+1, \end{cases}$
$A_{j} = \left(\log \frac{t_{k+1}}{t_{j+1}}\right)^{\alpha+1} - \left(\log \frac{t_{k+1}}{t_{j}}\right)^{\alpha+1} + (\alpha+1) \log \frac{t_{j+1}}{t_{j}} \left(\log \frac{t_{k+1}}{t_{j}}\right)^{\alpha}, \quad j = 0, 1, \dots, k,$
$B_{j} = \left(\log \frac{t_{k+1}}{t_{j}}\right)^{\alpha+1} - \left(\log \frac{t_{k+1}}{t_{j-1}}\right)^{\alpha+1} + (\alpha+1) \log \frac{t_{j}}{t_{j-1}} \left(\log \frac{t_{k+1}}{t_{j-1}}\right)^{\alpha}, \quad j = 1, 2, \dots, k.$
This gives us the formula for the corrector, known as the fractional one-step Adams–Moulton method,
$y_{k+1} = y_{0} + \frac{1}{\Gamma(\alpha)} \left[ \sum_{j=0}^{k} a_{j,k+1} f(t_{j}, y_{j}) + a_{k+1,k+1} f(t_{k+1}, y_{k+1}^{P}) \right].$
We must now determine the predictor formula required to calculate $y_{k+1}^{P}$. For this, we will generalize the one-step Adams–Bashforth method for ODEs. We follow a similar approach, but now we replace the integral on the right-hand side by the product rectangle rule,
$\int_{a}^{t_{k+1}} \left(\log t_{k+1} - \log z\right)^{\alpha - 1} g(z)\, \frac{dz}{z} \approx \sum_{j=0}^{k} b_{j,k+1}\, g(t_{j}),$
where,
$b_{j,k+1} = \frac{1}{\Gamma(\alpha+1)} \left[ \left(\log \frac{t_{k+1}}{t_{j}}\right)^{\alpha} - \left(\log \frac{t_{k+1}}{t_{j+1}}\right)^{\alpha} \right], \quad j = 0, 1, 2, \dots, k.$
Therefore, the predictor can be calculated using the fractional Adams–Bashforth method,
$y_{k+1}^{P} = y_{0} + \sum_{j=0}^{k} b_{j,k+1} f(t_{j}, y_{j}).$
Finally, we can conclude that our basic method, the fractional Adams–Bashforth–Moulton method, is described with the predictor Equation (12), the corrector Equation (9), and weights (8) and (11). For this method, however, we will be using a uniform mesh defined below.
Let $N$ be a positive integer and let $a = t_{0} < t_{1} < \dots < t_{N} = T$ be a partition of $[a, T]$. We define the following uniform mesh on $[\log a, \log T]$ with $\log a = \log t_{0} < \log t_{1} < \dots < \log t_{N} = \log T$,
$\frac{\log t_{j} - \log a}{\log t_{N} - \log a} = \frac{j}{N},$
which implies that,
$\log t_{j} = \log a + jh, \quad \text{with } h = \frac{\log t_{N} - \log a}{N}.$
When $j = N$, we have $\log t_{N} = \log T$, and when $j = 0$, we have $\log t_{0} = \log a$. By applying this mesh to our weights, we can simplify them to,
$a_{j,k+1} = \frac{h^{\alpha}}{\Gamma(\alpha+2)} \times \begin{cases} k^{\alpha+1} - (k - \alpha)(k+1)^{\alpha}, & \text{if } j = 0, \\ (k-j+2)^{\alpha+1} + (k-j)^{\alpha+1} - 2(k-j+1)^{\alpha+1}, & \text{if } 1 \le j \le k, \\ 1, & \text{if } j = k+1, \end{cases}$
and
$b_{j,k+1} = \frac{h^{\alpha}}{\Gamma(\alpha+1)} \left[ (k+1-j)^{\alpha} - (k-j)^{\alpha} \right].$
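As a quick sanity check on the simplified weights (15) and (16), they can be computed directly. The following sketch (the function names are ours, not the paper's) also verifies two telescoping identities that follow from the construction, $\sum_{j=0}^{k} b_{j,k+1} = h^{\alpha}(k+1)^{\alpha}/\Gamma(\alpha+1)$ and $\sum_{j=0}^{k+1} a_{j,k+1} = h^{\alpha}(k+1)^{\alpha}/\Gamma(\alpha+1)$, and that for $\alpha = 1$ the corrector weights reduce to the classical composite trapezoidal weights.

```python
import math

def b_weights(k, alpha, h):
    """Predictor (product rectangle rule) weights b_{j,k+1}, j = 0..k,
    on the uniform log-mesh, Eq. (16)."""
    c = h**alpha / math.gamma(alpha + 1)
    return [c * ((k + 1 - j)**alpha - (k - j)**alpha) for j in range(k + 1)]

def a_weights(k, alpha, h):
    """Corrector (product trapezoidal rule) weights a_{j,k+1}, j = 0..k+1,
    on the uniform log-mesh, Eq. (15)."""
    c = h**alpha / math.gamma(alpha + 2)
    w = [c * (k**(alpha + 1) - (k - alpha) * (k + 1)**alpha)]            # j = 0
    w += [c * ((k - j + 2)**(alpha + 1) + (k - j)**(alpha + 1)
               - 2 * (k - j + 1)**(alpha + 1)) for j in range(1, k + 1)]  # 1 <= j <= k
    w.append(c)                                                           # j = k + 1
    return w
```

For $\alpha = 1$ and $h = 0.5$, `a_weights(3, 1.0, 0.5)` returns the classical trapezoidal weights `[0.25, 0.5, 0.5, 0.5, 0.25]`, which is a useful consistency check on the formulas.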
The fractional Adams–Bashforth–Moulton method has many useful properties. First, we may use this method to solve nonlinear fractional differential equations by transforming the nonlinear equation into a Volterra integral equation with a singular kernel and approximating the corresponding Volterra integral equation with a suitable quadrature formula. Second, the fractional Adams–Bashforth–Moulton method is an explicit method; as such, it can save computer memory and can be more computationally time-efficient. Finally, this method has been shown to have a convergence order of $O(h^{2-\alpha})$ for suitable meshes.
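The complete method is short enough to state as code. The sketch below is our own minimal implementation for $0 < \alpha < 1$, assuming the uniform mesh (13)–(14) and the simplified weights (15) and (16) (the $1/\Gamma(\alpha)$ factor of the corrector is already folded into those weights). The test problem mentioned in the usage note, ${}^{CH}D_{1,t}^{1/2} y = \frac{2}{\Gamma(5/2)} (\log t)^{3/2} + y - (\log t)^{2}$ with exact solution $y(t) = (\log t)^{2}$, is our own choice, not an example from this paper.

```python
import math

def ch_adams(f, a, T, alpha, y0, N):
    """Fractional Adams-Bashforth-Moulton method for the Caputo-Hadamard
    problem CH_D^alpha y(t) = f(t, y(t)), y(a) = y0, with 0 < alpha < 1,
    on the uniform mesh log t_j = log a + j*h, h = (log T - log a)/N."""
    h = (math.log(T) - math.log(a)) / N
    t = [a * math.exp(j * h) for j in range(N + 1)]
    y = [y0]
    fs = []                                   # stored values f(t_j, y_j)
    ga1, ga2 = math.gamma(alpha + 1), math.gamma(alpha + 2)
    for k in range(N):
        fs.append(f(t[k], y[k]))
        # predictor: product rectangle rule with weights (16)
        yp = y0 + h**alpha / ga1 * sum(
            ((k + 1 - j)**alpha - (k - j)**alpha) * fs[j] for j in range(k + 1))
        # corrector: product trapezoidal rule with weights (15)
        s = (k**(alpha + 1) - (k - alpha) * (k + 1)**alpha) * fs[0]
        s += sum(((k - j + 2)**(alpha + 1) + (k - j)**(alpha + 1)
                  - 2 * (k - j + 1)**(alpha + 1)) * fs[j] for j in range(1, k + 1))
        s += f(t[k + 1], yp)                  # weight 1 for j = k + 1
        y.append(y0 + h**alpha / ga2 * s)
    return t, y
```

Solving the test problem above on $[1, e]$ with, say, $N = 40$ reproduces $y(t) = (\log t)^{2}$ closely, and the error shrinks as $N$ grows, consistent with the error estimates developed below.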
We shall consider the error estimates of the predictor–corrector schemes (9) and (12) under the various smoothness assumptions of f and y in this paper. The paper is organized as follows. In Section 2, we will consider some auxiliary results that will aid us for the remainder of the paper in proving the error estimates. In Section 3, we consider the error estimates of the predictor–corrector method for solving (1) with a uniform mesh under different smoothness conditions. In Section 4, we will provide several numerical examples that support the theoretical conclusions made in Section 2 and Section 3.

## 2. Auxiliary Results

For the remainder of this work, we will be using the Adams method described by the predictor Equation (12) and the corrector Equation (9) and the uniform mesh described by (13) to solve the fractional initial value problem (1). To begin, we must place some conditions on the function $f$, namely that $f$ is continuous and satisfies a Lipschitz condition with respect to its second argument with Lipschitz constant $L$ on a suitable set $G$. Under these conditions, a unique solution $y$ of the initial value problem exists on the interval $[a, T]$. Our goal is to find a suitable approximation for this unique solution.
As such, we will introduce several auxiliary results on certain smoothness properties to help us in our error analysis of this method. Our first result is taken from .
Theorem 1.
(a) Let $\alpha > 0$. Assume that $f \in C^{2}(G)$, where $G \subset \mathbb{R}^{2}$ is a suitable set. Define $\hat{v} = \lceil 1/\alpha \rceil - 1$. Then, there exist a function $\phi \in C^{1}[a,T]$ and constants $c_{1}, c_{2}, \dots, c_{\hat{v}} \in \mathbb{R}$ such that the solution $y$ of (1) can be expressed in the following form:
$y(t) = \phi(t) + \sum_{v=1}^{\hat{v}} c_{v} \left(\log \frac{t}{a}\right)^{v\alpha}.$
(b) Assume that $f \in C^{3}(G)$. Define $\hat{v} = \lceil 2/\alpha \rceil - 1$ and $\tilde{v} = \lceil 1/\alpha \rceil - 1$. Then, there exist a function $\psi \in C^{2}[a,T]$ and constants $c_{1}, c_{2}, \dots, c_{\hat{v}} \in \mathbb{R}$ and $d_{1}, d_{2}, \dots, d_{\tilde{v}} \in \mathbb{R}$ such that the solution $y$ of (1) can be expressed in the form,
$y(t) = \psi(t) + \sum_{v=1}^{\hat{v}} c_{v} \left(\log \frac{t}{a}\right)^{v\alpha} + \sum_{v=1}^{\tilde{v}} d_{v} \left(\log \frac{t}{a}\right)^{1 + v\alpha}.$
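To make Theorem 1 concrete, take $\alpha = 1/2$ in part (a); then $\hat{v} = \lceil 1/\alpha \rceil - 1 = 1$, and every solution has the form

```latex
y(t) = \phi(t) + c_{1}\left(\log\frac{t}{a}\right)^{1/2},
\qquad \phi \in C^{1}[a,T],
```

so even for smooth $f$ the solution is generally not differentiable at $t = a$; it is precisely such log-power terms that the error analysis below must accommodate.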
We can also relate such smoothness properties of the solution with that of the Caputo–Hadamard derivatives. From , we have,
Theorem 2.
If $y \in C^{m}[a,T]$ for some $m \in \mathbb{N}$ and $0 < \alpha < m$, then,
$\,{}^{CH}D_{a,t}^{\alpha} y(t) = \Phi\left(\log \frac{t}{a}\right) + \sum_{l=0}^{m - \lceil \alpha \rceil - 1} \frac{\delta^{l + \lceil \alpha \rceil} y(a)}{\Gamma(\lceil \alpha \rceil - \alpha + l + 1)} \left(\log \frac{t}{a}\right)^{\lceil \alpha \rceil - \alpha + l},$
where $\Phi \in C^{m - \lceil \alpha \rceil}[a,T]$ and $\delta^{n} y(s) = \left(s \frac{d}{ds}\right)^{n} y(s)$ with $n \in \mathbb{N}$. Moreover, the $(m - \lceil \alpha \rceil)$th derivative of $\Phi$ satisfies a Lipschitz condition of order $\lceil \alpha \rceil - \alpha$.
A useful corollary can then be drawn from this theorem and can be used to generalize the classical result for derivatives of integer order.
Corollary 1.
Let $y \in C^{m}[a,T]$ for some $m \in \mathbb{N}$ and assume that $0 < \alpha < m$. Then, ${}^{CH}D_{a,t}^{\alpha} y(t) \in C[a,T]$.
We will now introduce some error estimates for both the product rectangle rule and product trapezoidal rule, which we have implemented as the predictor and corrector of our method. Doing so will aid us in producing error estimates and convergence orders of the fractional Adams method.
Theorem 3.
(a) Let $z(t) = z\left(\log \frac{t}{a}\right) \in C^{1}[a,T]$ and $\delta z = \left(t \frac{d}{dt}\right) z$. Let $h$ be defined by (14). Then,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le \frac{1}{\alpha} \|\delta z\|_{\infty} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha} h.$
(b) Let $z\left(\log \frac{t}{a}\right) = \left(\log \frac{t}{a}\right)^{p}$ for some $p \in (0,1)$. Then,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le C_{\alpha,p}^{Re} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha + p - 1} h.$
Proof.
Using the construction of the product rectangle rule, we can show that in both cases above, the quadrature error can be represented by,
$\int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) = \sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ z\left(\log \frac{t}{a}\right) - z\left(\log \frac{t_{j}}{a}\right) \right] \frac{dt}{t}.$
We will begin by proving statement (a). We first apply the Mean Value Theorem to $z$ in the second factor of the integrand and note that $\delta z(t) = \left(t \frac{d}{dt}\right) z(t) = z'\left(\log \frac{t}{a}\right)$. This gives us, with $\|\delta z\|_{\infty} = \max_{t \in [a,T]} |\delta z(t)|$,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le \|\delta z\|_{\infty} \sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} (\log t - \log t_{j})\, \frac{dt}{t} = \|\delta z\|_{\infty} \frac{h^{1+\alpha}}{\alpha} \sum_{j=0}^{k} \left[ \frac{1}{1+\alpha} \left( (k+1-j)^{1+\alpha} - (k-j)^{1+\alpha} \right) - (k-j)^{\alpha} \right] = \|\delta z\|_{\infty} \frac{h^{1+\alpha}}{\alpha} \left[ \frac{(k+1)^{1+\alpha}}{1+\alpha} - \sum_{j=0}^{k} j^{\alpha} \right] = \|\delta z\|_{\infty} \frac{h^{1+\alpha}}{\alpha} \left[ \int_{0}^{k+1} t^{\alpha}\, dt - \sum_{j=0}^{k} j^{\alpha} \right].$
The final bracket can be interpreted as the remainder of the standard rectangle quadrature formula for the function $t^{\alpha}$ over the interval $[0, k+1]$. We can then apply a standard result from quadrature theory stating that this term is bounded by the total variation of the integrand, $(k+1)^{\alpha}$. Therefore, we can conclude that,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le \|\delta z\|_{\infty} \frac{h^{1+\alpha}}{\alpha} (k+1)^{\alpha}.$
Now we prove (b). We use the fact that $z\left(\log \frac{t}{a}\right) = \left(\log \frac{t}{a}\right)^{p}$ is monotonic, together with repeated applications of the Mean Value Theorem. We must consider separately the cases $0 < \alpha < 1$ and $\alpha > 1$.
Case 1: $0 < α < 1$,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le \sum_{j=0}^{k} \left| z\left(\log \frac{t_{j+1}}{a}\right) - z\left(\log \frac{t_{j}}{a}\right) \right| \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \frac{dt}{t} = \frac{h^{\alpha+p}}{\alpha} \sum_{j=0}^{k} \left[ (j+1)^{p} - j^{p} \right] \left[ (k+1-j)^{\alpha} - (k-j)^{\alpha} \right] \le \frac{h^{\alpha+p}}{\alpha} \left[ (k+1)^{\alpha} - k^{\alpha} + p \alpha \sum_{j=1}^{k-1} j^{p-1} (k-j)^{\alpha-1} + (k+1)^{p} - k^{p} \right] \le \frac{h^{\alpha+p}}{\alpha} \left[ \alpha k^{\alpha-1} + p \alpha \sum_{j=1}^{k-1} j^{p-1} (k-j)^{\alpha-1} + p k^{p-1} \right].$
We shall show that,
$\sum_{j=1}^{k-1} j^{p-1} (k-j)^{\alpha-1} \le C k^{\alpha+p-1}.$
Assuming for the moment that (25) holds, and writing $I$ for the quadrature error above, we then have,
$I \le \frac{h^{\alpha+p}}{\alpha} \left[ \alpha k^{\alpha-1} + p \alpha C k^{\alpha+p-1} + p k^{p-1} \right] \le C h^{\alpha+p} \left[ k^{\alpha+p-1} + k^{\alpha+p-1} + k^{\alpha+p-1} \right] \le C h^{\alpha+p} k^{\alpha+p-1} = C h^{\alpha+p} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha+p-1} h^{-(\alpha+p-1)} = C h \left(\log \frac{t_{k+1}}{a}\right)^{\alpha+p-1}.$
It remains to prove (25), which we shall do now.
$\sum_{j=1}^{k-1} j^{p-1} (k-j)^{\alpha-1} = 1^{p-1} (k-1)^{\alpha-1} + 2^{p-1} (k-2)^{\alpha-1} + \dots + (k-1)^{p-1} \cdot 1^{\alpha-1}.$
Let $F(x) = x^{p-1} (k-x)^{\alpha-1}$ for $0 \le x \le k$. Then,
$F'(x) = x^{p-2} (k-x)^{\alpha-2} \left[ (2-\alpha-p)x - (1-p)k \right].$
Setting the bracket equal to zero, we find that $F(x)$ has a turning point at $x_{p} = \frac{(1-p)k}{2-\alpha-p}$, meaning $F(x)$ is decreasing on $[0, x_{p}]$ and increasing on $[x_{p}, k]$. Therefore,
$\sum_{j=1}^{k-1} j^{p-1} (k-j)^{\alpha-1} \le \int_{0}^{x_{p}} x^{p-1} (k-x)^{\alpha-1}\, dx + \int_{x_{p}}^{k} x^{p-1} (k-x)^{\alpha-1}\, dx \le (k-x_{p})^{\alpha-1} \int_{0}^{x_{p}} x^{p-1}\, dx + x_{p}^{p-1} \int_{x_{p}}^{k} (k-x)^{\alpha-1}\, dx = \left( k - \frac{(1-p)k}{2-\alpha-p} \right)^{\alpha-1} \frac{x_{p}^{p}}{p} + x_{p}^{p-1} \frac{(k-x_{p})^{\alpha}}{\alpha} = C_{1} k^{\alpha-1} k^{p} + C_{2} k^{p-1} k^{\alpha} \le C k^{p+\alpha-1}.$
Case 2: $α > 1$,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le \frac{h^{\alpha+p}}{\alpha} \sum_{j=0}^{k} \left[ (j+1)^{p} - j^{p} \right] \left[ (k+1-j)^{\alpha} - (k-j)^{\alpha} \right] \le \frac{h^{\alpha+p}}{\alpha} \left[ (k+1)^{\alpha} - k^{\alpha} + p \alpha \sum_{j=1}^{k-1} j^{p-1} (k-j+1)^{\alpha-1} + (k+1)^{p} - k^{p} \right] \le \frac{h^{\alpha+p}}{\alpha} \left[ \alpha (k+1)^{\alpha-1} + p \alpha \sum_{j=1}^{k-1} j^{p-1} (k-j+1)^{\alpha-1} + (k+1)^{p} - k^{p} \right].$
We shall show that,
$\sum_{j=1}^{k-1} j^{p-1} (k-j+1)^{\alpha-1} \le C k^{\alpha+p-1}.$
Assuming for the moment that (26) holds, we then have,
$I \le \frac{h^{\alpha+p}}{\alpha} \left[ \alpha (k+1)^{\alpha-1} + p \alpha C k^{\alpha+p-1} + p k^{p-1} \right] \le C h^{\alpha+p} \left[ k^{\alpha+p-1} + k^{\alpha+p-1} + k^{\alpha+p-1} \right] \le C h^{\alpha+p} k^{\alpha+p-1} = C h^{\alpha+p} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha+p-1} h^{-(\alpha+p-1)} = C h \left(\log \frac{t_{k+1}}{a}\right)^{\alpha+p-1}.$
It remains to prove (26), which we shall do now.
$\sum_{j=1}^{k-1} j^{p-1} (k-j+1)^{\alpha-1} = 1^{p-1} k^{\alpha-1} + 2^{p-1} (k-1)^{\alpha-1} + \dots + (k-1)^{p-1} \cdot 2^{\alpha-1}.$
Let $F(x) = x^{p-1} (k+1-x)^{\alpha-1}$ for $0 \le x \le k$. Then,
$F'(x) = x^{p-2} (k+1-x)^{\alpha-2} \left[ (2-\alpha-p)x - (1-p)(k+1) \right].$
Setting the bracket equal to zero, we find that $F(x)$ has a turning point at $x_{p} = \frac{(1-p)(k+1)}{2-\alpha-p}$, so that $F(x)$ is decreasing on $[0, x_{p}]$ and increasing on $[x_{p}, k]$ whenever $x_{p} > 0$. Therefore,
(i)
If $2 − α − p > 0$, then $x p > 0$. Therefore,
$\sum_{j=1}^{k-1} j^{p-1} (k-j+1)^{\alpha-1} \le \int_{0}^{x_{p}} x^{p-1} (k+1-x)^{\alpha-1}\, dx + \int_{x_{p}}^{k} x^{p-1} (k+1-x)^{\alpha-1}\, dx \le (k+1)^{\alpha-1} \frac{x_{p}^{p}}{p} + x_{p}^{p-1} \frac{(k+1-x_{p})^{\alpha}}{\alpha} = C_{1} (k+1)^{\alpha-1} (k+1)^{p} + C_{2} (k+1)^{p-1} C_{3} (k+1)^{\alpha} \le C (k+1)^{\alpha+p-1} \le C k^{\alpha+p-1}.$
(ii)
If $2 - \alpha - p < 0$, then $x_{p} < 0$. This means that $F'(x) < 0$, so $F(x)$ is a decreasing function. Therefore,
$\sum_{j=1}^{k-1} j^{p-1} (k-j+1)^{\alpha-1} \le \int_{0}^{k} x^{p-1} (k+1-x)^{\alpha-1}\, dx \le (k+1)^{\alpha-1} \int_{0}^{k} x^{p-1}\, dx \le (k+1)^{\alpha-1} \frac{k^{p}}{p} \le C k^{\alpha+p-1},$
which gives us the desired result. □
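Theorem 3(a) can be checked numerically. For $z(u) = u$ the weighted integral is available in closed form, $\int_{0}^{U} (U-u)^{\alpha-1} u\, du = U^{\alpha+1}/(\alpha(\alpha+1))$, so the error of the product rectangle rule can be computed exactly and compared with the bound $\frac{1}{\alpha} \|\delta z\|_{\infty} U^{\alpha} h$ (here $\|\delta z\|_{\infty} = 1$, and we normalize $U = \log(t_{k+1}/a) = 1$). The function name below is ours; the weights are the pure quadrature weights $\frac{h^{\alpha}}{\alpha}\left[(k+1-j)^{\alpha} - (k-j)^{\alpha}\right]$ used in the proof.

```python
import math

def rect_error(alpha, K):
    """Error of the product rectangle rule for z(u) = u on [0, 1],
    against the exact weighted integral U^(alpha+1)/(alpha*(alpha+1)), U = 1."""
    h = 1.0 / (K + 1)
    exact = 1.0 / (alpha * (alpha + 1))
    quad = sum(h**alpha / alpha * ((K + 1 - j)**alpha - (K - j)**alpha) * (j * h)
               for j in range(K + 1))
    return abs(exact - quad)

alpha = 0.7
for K in (9, 19, 39):
    h = 1.0 / (K + 1)
    # Theorem 3(a) bound with U = 1 and ||dz|| = 1: error <= h / alpha
    assert rect_error(alpha, K) <= h / alpha
```

The observed error also decreases roughly in proportion to $h$, matching the first-order factor in the bound.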
Next, we come to the corresponding results for the trapezoidal formula that has been used for the corrector of our method. The proof of this theorem is similar to that of the previous theorem.
Theorem 4.
(a) Let $z(t) = z\left(\log \frac{t}{a}\right) \in C^{2}[a,T]$ and $\delta^{2} z = \left(t \frac{d}{dt}\right)^{2} z$. Let $h$ be defined by (14). Then, there exists a constant $C_{\alpha}^{Tr}$ depending on $\alpha$ such that,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le C_{\alpha}^{Tr} \|\delta^{2} z\|_{\infty} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha} h^{2}.$
(b) Let $z\left(\log \frac{t}{a}\right) \in C^{1}[a,T]$ and assume that $\delta z$ fulfils a Lipschitz condition of order $\mu$ for some $\mu \in (0,1)$. Then, there exist positive constants $B_{\alpha,\mu}^{Tr}$ and $M(z,\mu)$ such that,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le B_{\alpha,\mu}^{Tr} M(z,\mu) \left(\log \frac{t_{k+1}}{a}\right)^{\alpha} h^{1+\mu}.$
(c) Let $z\left(\log \frac{t}{a}\right) = \left(\log \frac{t}{a}\right)^{p}$ for some $p \in (0,2)$ and $\varsigma := \min(2, p+1)$. Then,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le C_{\alpha,p}^{Tr} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha + p - \varsigma} h^{\varsigma}.$
Proof.
By construction of the product trapezoidal formula, we can show that in the first two cases above, the quadrature error can be represented as,
$\int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) = \sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ z\left(\log \frac{t}{a}\right) - P_{1}(t) \right] \frac{dt}{t},$
where,
$P_{1}(t) = \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}\, z\left(\log \frac{t_{j}}{a}\right) + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}\, z\left(\log \frac{t_{j+1}}{a}\right), \quad t \in [t_{j}, t_{j+1}].$
We will begin by proving statement (a). To estimate (30), we simplify the second factor of the integrand by applying the Mean Value Theorem. Therefore,
$z\left(\log \frac{t}{a}\right) - P_{1}(t) = \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}} \left[ z\left(\log \frac{t}{a}\right) - z\left(\log \frac{t_{j}}{a}\right) \right] + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \left[ z\left(\log \frac{t}{a}\right) - z\left(\log \frac{t_{j+1}}{a}\right) \right] = \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}\, \delta z(c_{1}) (\log t - \log t_{j}) + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}\, \delta z(c_{2}) (\log t - \log t_{j+1}) = \frac{(\log t - \log t_{j+1})(\log t - \log t_{j})}{\log t_{j+1} - \log t_{j}} \left[ \delta z(c_{1}) - \delta z(c_{2}) \right] \le h\, \delta^{2} z(c_{3}) (c_{1} - c_{2}) \le \|\delta^{2} z\|_{\infty} h^{2}.$
Applying the above to (30), we get,
$\sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ z\left(\log \frac{t}{a}\right) - P_{1}(t) \right] \frac{dt}{t} \le \|\delta^{2} z\|_{\infty} h^{2} \sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \frac{dt}{t} = \|\delta^{2} z\|_{\infty} h^{2} \sum_{j=0}^{k} \left[ -\frac{1}{\alpha} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha} \right]_{t_{j}}^{t_{j+1}} = \frac{1}{\alpha} \|\delta^{2} z\|_{\infty} h^{2} \sum_{j=0}^{k} h^{\alpha} \left[ (k-j+1)^{\alpha} - (k-j)^{\alpha} \right] = \frac{1}{\alpha} \|\delta^{2} z\|_{\infty} h^{2} h^{\alpha} (k+1)^{\alpha} = \frac{1}{\alpha} \|\delta^{2} z\|_{\infty} h^{2} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha}.$
Now we prove (b). As $z \in C^{1}[a,T]$, we are unable to apply the Mean Value Theorem a second time. Instead, we apply the Lipschitz condition of order $\mu$ on $\delta z$ for some $\mu \in (0,1)$. Therefore,
$z\left(\log \frac{t}{a}\right) - P_{1}(t) = \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}} \left[ z\left(\log \frac{t}{a}\right) - z\left(\log \frac{t_{j}}{a}\right) \right] + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \left[ z\left(\log \frac{t}{a}\right) - z\left(\log \frac{t_{j+1}}{a}\right) \right] = \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}\, \delta z(c_{1}) (\log t - \log t_{j}) + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}\, \delta z(c_{2}) (\log t - \log t_{j+1}) = \frac{(\log t - \log t_{j+1})(\log t - \log t_{j})}{\log t_{j+1} - \log t_{j}} \left[ \delta z(c_{1}) - \delta z(c_{2}) \right] \le h M |c_{1} - c_{2}|^{\mu} \le M h^{1+\mu},$
where M is a constant depending on z and $μ$. Applying the above to (30), we get,
$\sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ z\left(\log \frac{t}{a}\right) - P_{1}(t) \right] \frac{dt}{t} \le M h^{1+\mu} \sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \frac{dt}{t} = M h^{1+\mu} \sum_{j=0}^{k} \left[ -\frac{1}{\alpha} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha} \right]_{t_{j}}^{t_{j+1}} = \frac{1}{\alpha} M h^{1+\mu} \sum_{j=0}^{k} h^{\alpha} \left[ (k-j+1)^{\alpha} - (k-j)^{\alpha} \right] = \frac{1}{\alpha} M h^{1+\mu} h^{\alpha} (k+1)^{\alpha} = \frac{1}{\alpha} M h^{1+\mu} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha}.$
Finally, we shall prove (c). We will start by proving the theorem when $0 < p < 1$.
Case 1: $0 < α < 1$. Let
$A = \left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| = \left| \sum_{j=0}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \left( \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}} \left(\log \frac{t_{j}}{a}\right)^{p} + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \left(\log \frac{t_{j+1}}{a}\right)^{p} \right) \right] \frac{dt}{t} \right| \le \left| \int_{t_{0}}^{t_{1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \frac{\log \frac{t}{t_{0}}}{\log \frac{t_{1}}{t_{0}}} \left(\log \frac{t_{1}}{a}\right)^{p} \right] \frac{dt}{t} \right| + \left| \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \left( \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}} \left(\log \frac{t_{j}}{a}\right)^{p} + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \left(\log \frac{t_{j+1}}{a}\right)^{p} \right) \right] \frac{dt}{t} \right| = I + II.$
For I, we have,
$I = \left| \int_{t_{0}}^{t_{1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \frac{\log \frac{t}{a}}{h} h^{p} \right] \frac{dt}{t} \right| \le \int_{t_{0}}^{t_{1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} h^{p}\, \frac{dt}{t} = h^{p}\, \frac{h^{\alpha} \left[ (k+1)^{\alpha} - k^{\alpha} \right]}{\alpha} \le h^{p}\, \frac{h^{\alpha} \alpha k^{\alpha-1}}{\alpha} = h^{p+\alpha} k^{\alpha-1} \le C h^{p+\alpha} (k+1)^{\alpha-1} = C h^{p+1} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha-1}.$
For $I I$, we have,
$II = \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \left( \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}} \left(\log \frac{t_{j}}{a}\right)^{p} + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \left(\log \frac{t_{j+1}}{a}\right)^{p} \right) \right] \frac{dt}{t} = \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}} \left( \left(\log \frac{t}{a}\right)^{p} - \left(\log \frac{t_{j}}{a}\right)^{p} \right) + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \left( \left(\log \frac{t}{a}\right)^{p} - \left(\log \frac{t_{j+1}}{a}\right)^{p} \right) \right] \frac{dt}{t} = \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}}\, p\, \xi_{1}^{p-1} (\log t - \log t_{j}) + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}}\, p\, \xi_{2}^{p-1} (\log t - \log t_{j+1}) \right] \frac{dt}{t} = C \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \frac{\log \frac{t}{t_{j}} \log \frac{t}{t_{j+1}}}{\log \frac{t_{j+1}}{t_{j}}} \left( \xi_{2}^{p-1} - \xi_{1}^{p-1} \right) \frac{dt}{t},$
where $\xi_{1}, \xi_{2} \in \left(\log \frac{t_{j}}{a}, \log \frac{t_{j+1}}{a}\right)$ are the Mean Value Theorem points on each panel.
Therefore,
$|II| \le C \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left| \frac{\log \frac{t}{t_{j}} \log \frac{t}{t_{j+1}}}{\log \frac{t_{j+1}}{t_{j}}} \right| \left[ \left(\log \frac{t_{j}}{a}\right)^{p-1} - \left(\log \frac{t_{j+1}}{a}\right)^{p-1} \right] \frac{dt}{t} \le C h^{p} \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ j^{p-1} - (j+1)^{p-1} \right] \frac{dt}{t} = C h^{p+\alpha} \left[ \sum_{j=1}^{k-1} \left( j^{p-1} - (j+1)^{p-1} \right) \left( (k+1-j)^{\alpha} - (k-j)^{\alpha} \right) + k^{p-1} - (k+1)^{p-1} \right] \le C h^{\alpha+p} \left[ \sum_{j=1}^{k-1} \left| (p-1) j^{p-2} \right| \alpha (k-j)^{\alpha-1} + k^{p-2} \right] \le C h^{\alpha+p} \left[ \sum_{j=1}^{k-1} j^{p-2} (k-j)^{\alpha-1} + k^{p-2} \right].$
Similar to our previous proof, we now must find the bounds for the summation.
$\sum_{j=1}^{k-1} j^{p-2} (k-j)^{\alpha-1} \le \int_{1}^{x_{p}} x^{p-2} (k-x)^{\alpha-1}\, dx + \int_{x_{p}}^{k} x^{p-2} (k-x)^{\alpha-1}\, dx \le (k-x_{p})^{\alpha-1} \int_{1}^{x_{p}} x^{p-2}\, dx + x_{p}^{p-2} \int_{x_{p}}^{k} (k-x)^{\alpha-1}\, dx \le (k-x_{p})^{\alpha-1} \frac{1}{1-p} + x_{p}^{p-2} \frac{(k-x_{p})^{\alpha}}{\alpha} = C_{1} k^{\alpha-1} + C_{2} k^{p-2} C_{3} k^{\alpha} \le C k^{\alpha-1} + C k^{\alpha+p-2}.$
Thus,
$|II| \le C h^{\alpha+p} \left[ \sum_{j=1}^{k-1} j^{p-2} (k-j)^{\alpha-1} + k^{p-2} \right] \le C h^{\alpha+p} \left[ C k^{\alpha-1} + C k^{\alpha+p-2} + k^{p-2} \right] \le C h^{\alpha+p} \left[ k^{\alpha-1} + k^{\alpha+p-2} \right] \le C h^{p+1} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha-1} + C h^{2} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha+p-2} \le C h^{p+1} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha-1}.$
Thus, we get the desired result.
Case 2: $1 < α < 2$. Let
$A = \left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} z\left(\log \frac{t}{a}\right) \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, z\left(\log \frac{t_{j}}{a}\right) \right| \le \left| \int_{t_{0}}^{t_{1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \frac{\log \frac{t}{t_{0}}}{\log \frac{t_{1}}{t_{0}}} \left(\log \frac{t_{1}}{a}\right)^{p} \right] \frac{dt}{t} \right| + \left| \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \left( \frac{\log \frac{t}{t_{j+1}}}{\log \frac{t_{j}}{t_{j+1}}} \left(\log \frac{t_{j}}{a}\right)^{p} + \frac{\log \frac{t}{t_{j}}}{\log \frac{t_{j+1}}{t_{j}}} \left(\log \frac{t_{j+1}}{a}\right)^{p} \right) \right] \frac{dt}{t} \right| = I + II.$
For I, we have,
$I = \left| \int_{t_{0}}^{t_{1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ \left(\log \frac{t}{a}\right)^{p} - \frac{\log \frac{t}{a}}{h} h^{p} \right] \frac{dt}{t} \right| \le \int_{t_{0}}^{t_{1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} h^{p}\, \frac{dt}{t} = h^{p}\, \frac{h^{\alpha} \left[ (k+1)^{\alpha} - k^{\alpha} \right]}{\alpha} \le h^{p}\, \frac{h^{\alpha} \alpha (k+1)^{\alpha-1}}{\alpha} = h^{p+\alpha} (k+1)^{\alpha-1} = C h^{p+1} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha-1}.$
For $I I$, we have,
$|II| \le C \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left| \frac{\log \frac{t}{t_{j}} \log \frac{t}{t_{j+1}}}{\log \frac{t_{j+1}}{t_{j}}} \right| \left[ \left(\log \frac{t_{j}}{a}\right)^{p-1} - \left(\log \frac{t_{j+1}}{a}\right)^{p-1} \right] \frac{dt}{t} \le C h^{p} \sum_{j=1}^{k} \int_{t_{j}}^{t_{j+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} \left[ j^{p-1} - (j+1)^{p-1} \right] \frac{dt}{t} = C h^{p+\alpha} \left[ \sum_{j=1}^{k-1} \left( j^{p-1} - (j+1)^{p-1} \right) \left( (k+1-j)^{\alpha} - (k-j)^{\alpha} \right) + k^{p-1} - (k+1)^{p-1} \right] \le C h^{\alpha+p} \left[ \sum_{j=1}^{k-1} \left| (p-1) j^{p-2} \right| \alpha (k-j+1)^{\alpha-1} + k^{p-2} \right] \le C h^{\alpha+p} \left[ \sum_{j=1}^{k-1} j^{p-2} (k-j+1)^{\alpha-1} + k^{p-2} \right].$
Similar to our previous proof, we now must find the bounds for the summation.
$\sum_{j=1}^{k-1} j^{p-2} (k-j+1)^{\alpha-1} \le \int_{1}^{x_{p}} x^{p-2} (k+1-x)^{\alpha-1}\, dx + \int_{x_{p}}^{k} x^{p-2} (k+1-x)^{\alpha-1}\, dx \le (k+1)^{\alpha-1} \frac{1}{1-p} + x_{p}^{p-2} \frac{(k+1-x_{p})^{\alpha}}{\alpha} = C_{1} (k+1)^{\alpha-1} + C_{2} (k+1)^{p-2} C_{3} (k+1)^{\alpha} \le C (k+1)^{\alpha-1} + C (k+1)^{\alpha+p-2}.$
Thus,
$|II| \le C h^{\alpha+p} \left[ \sum_{j=1}^{k-1} j^{p-2} (k-j+1)^{\alpha-1} + k^{p-2} \right] \le C h^{\alpha+p} \left[ C (k+1)^{\alpha-1} + C (k+1)^{\alpha+p-2} + k^{p-2} \right] \le C h^{\alpha+p} \left[ (k+1)^{\alpha-1} + (k+1)^{\alpha+p-2} \right] \le C h^{p+1} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha-1} + C h^{2} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha+p-2} \le C h^{p+1} \left(\log \frac{t_{k+1}}{a}\right)^{\alpha-1}.$
Thus, we get the desired result. The proof of this theorem for $1 < p < 2$ is similar to the case $0 < p < 1$ and is therefore omitted. □
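Theorem 4(a) can be checked numerically in the same spirit as the rectangle-rule computation given after Theorem 3, now with $z(u) = u^{2}$, for which $\int_{0}^{U} (U-u)^{\alpha-1} u^{2}\, du = 2U^{\alpha+2}/(\alpha(\alpha+1)(\alpha+2))$ (again normalizing $U = 1$, so $\|\delta^{2} z\|_{\infty} = 2$). The weights below are the uniform-mesh trapezoidal quadrature weights of (8), i.e., (15) without the folded-in $1/\Gamma(\alpha)$; the function name is ours.

```python
import math

def trap_error(alpha, K):
    """Error of the product trapezoidal rule for z(u) = u**2 on [0, 1],
    against the exact weighted integral 2/(alpha*(alpha+1)*(alpha+2))."""
    h = 1.0 / (K + 1)
    exact = 2.0 / (alpha * (alpha + 1) * (alpha + 2))
    c = h**alpha / (alpha * (alpha + 1))
    w = [c * (K**(alpha + 1) - (K - alpha) * (K + 1)**alpha)]
    w += [c * ((K - j + 2)**(alpha + 1) + (K - j)**(alpha + 1)
               - 2 * (K - j + 1)**(alpha + 1)) for j in range(1, K + 1)]
    w.append(c)
    quad = sum(wj * (j * h)**2 for j, wj in enumerate(w))
    return abs(exact - quad)

alpha = 0.7
for K in (15, 31):
    h = 1.0 / (K + 1)
    # explicit constant from the proof of (a): error <= (1/alpha)*||d2z||*h^2*U^alpha
    assert trap_error(alpha, K) <= 2 * h**2 / alpha
```

Halving $h$ reduces the observed error, consistent with the second-order factor $h^{2}$ in the bound.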
Remark 1.
In the final part of Theorem 4, there is a case when $\alpha, p < 1$. This results in $\varsigma = p + 1$, which in turn makes the right-hand side exponent of $\log(t_{k+1}/a)$ negative, taking the value $\alpha - 1$. This describes a situation in which the error bound grows as the interval of integration shrinks. The explanation is that as the interval of integration decreases, so too does the weight function, making the integral harder to approximate and resulting in an increased error bound.

## 3. Error Analysis for the Adams Method

In this section, we will be presenting the main error estimates for the Adams method for solving (1). We will be investigating different smoothness conditions for each of y and f and how they affect the error and convergence order.

#### 3.1. A General Result

Using the error estimates established in the previous section, we will present the general convergence order of the Adams–Bashforth–Moulton method relating to the smoothness properties of the given function f and the solution y.
Lemma 1.
Assume that the solution y of (1) is such that,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1}\, {}^{CH}D_{a,t}^{\alpha} y(t)\, \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, {}^{CH}D_{a,t}^{\alpha} y(t_{j}) \right| \le C_{1} \left(\log \frac{t_{k+1}}{a}\right)^{\gamma_{1}} h^{\delta_{1}},$
and,
$\left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1}\, {}^{CH}D_{a,t}^{\alpha} y(t)\, \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, {}^{CH}D_{a,t}^{\alpha} y(t_{j}) \right| \le C_{2} \left(\log \frac{t_{k+1}}{a}\right)^{\gamma_{2}} h^{\delta_{2}},$
for some $\gamma_{1}, \gamma_{2} \ge 0$ and $\delta_{1}, \delta_{2} > 0$. Let $y_{j}$ be the approximate solution of (1). Then, for a suitably chosen $T > 1$, we have,
$\max_{0 \le j \le N} |y(t_{j}) - y_{j}| = O(h^{q}),$
where $q = \min\{\delta_{1} + \alpha, \delta_{2}\}$.
Proof.
We will show that, for sufficiently small h,
$|y(t_{j}) - y_{j}| \le C h^{q},$
for all $j \in \{0, 1, 2, \dots, N\}$, where $C$ is a suitable constant. The proof is based on mathematical induction. Using the initial conditions, we can confirm the base step $j = 0$. We now assume that (34) is true for $j = 0, 1, \dots, k$ for some $k \le N - 1$. Finally, we prove that inequality (34) also holds for $j = k + 1$. To show this, we start by bounding the error of the predictor $y_{k+1}^{P}$. By the definition of the predictor, we have:
$\left| y(t_{k+1}) - y_{k+1}^{P} \right| = \frac{1}{\Gamma(\alpha)} \left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} f(t, y(t))\, \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1} f(t_{j}, y_{j}) \right| \le \frac{1}{\Gamma(\alpha)} \left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1}\, {}^{CH}D_{a,t}^{\alpha} y(t)\, \frac{dt}{t} - \sum_{j=0}^{k} b_{j,k+1}\, {}^{CH}D_{a,t}^{\alpha} y(t_{j}) \right| + \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{k} b_{j,k+1} \left| f(t_{j}, y(t_{j})) - f(t_{j}, y_{j}) \right| \le \frac{C_{1} \left(\log \frac{t_{k+1}}{a}\right)^{\gamma_{1}}}{\Gamma(\alpha)} h^{\delta_{1}} + \frac{C L \left(\log \frac{T}{a}\right)^{\alpha}}{\Gamma(\alpha + 1)} h^{q}.$
In this step, we have used several properties: the Lipschitz condition placed on $f$, the assumed error bound for the rectangle formula, and the facts that the underlying predictor weights satisfy $b_{j,k+1} > 0$ for all $j$ and $k$ and that,
$\sum_{j=0}^{k} b_{j,k+1} \le \frac{1}{\alpha} \left(\log \frac{T}{a}\right)^{\alpha}.$
Now that we have a bound for the predictor error, we also need to analyze the corrector error. To do so, we use a similar argument to the predictor case, together with the induction hypothesis. Note that,
$\left| y(t_{k+1}) - y_{k+1} \right| = \frac{1}{\Gamma(\alpha)} \left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1} f(t, y(t))\, \frac{dt}{t} - \sum_{j=0}^{k} a_{j,k+1} f(t_{j}, y_{j}) - a_{k+1,k+1} f(t_{k+1}, y_{k+1}^{P}) \right| \le \frac{1}{\Gamma(\alpha)} \left| \int_{a}^{t_{k+1}} \left(\log \frac{t_{k+1}}{t}\right)^{\alpha-1}\, {}^{CH}D_{a,t}^{\alpha} y(t)\, \frac{dt}{t} - \sum_{j=0}^{k+1} a_{j,k+1}\, {}^{CH}D_{a,t}^{\alpha} y(t_{j}) \right| + \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{k} a_{j,k+1} \left| f(t_{j}, y(t_{j})) - f(t_{j}, y_{j}) \right| + \frac{1}{\Gamma(\alpha)} a_{k+1,k+1} \left| f(t_{k+1}, y(t_{k+1})) - f(t_{k+1}, y_{k+1}^{P}) \right| \le \frac{C_{2} \left(\log \frac{t_{k+1}}{a}\right)^{\gamma_{2}}}{\Gamma(\alpha)} h^{\delta_{2}} + \frac{C L}{\Gamma(\alpha)} h^{q} \sum_{j=0}^{k} a_{j,k+1} + \frac{a_{k+1,k+1} L}{\Gamma(\alpha)} \left[ \frac{C_{1} \left(\log \frac{T}{a}\right)^{\gamma_{1}}}{\Gamma(\alpha)} h^{\delta_{1}} + \frac{C L \left(\log \frac{T}{a}\right)^{\alpha}}{\Gamma(\alpha + 1)} h^{q} \right] \le \left[ \frac{C_{2} \left(\log \frac{T}{a}\right)^{\gamma_{2}}}{\Gamma(\alpha)} + \frac{C L \left(\log \frac{T}{a}\right)^{\alpha}}{\Gamma(\alpha + 1)} + \frac{C_{1} L \left(\log \frac{T}{a}\right)^{\gamma_{1}}}{\Gamma(\alpha) \Gamma(\alpha + 2)} + \frac{C L^{2} \left(\log \frac{T}{a}\right)^{\alpha}}{\Gamma(\alpha + 1) \Gamma(\alpha + 2)} h^{\alpha} \right] h^{q}.$
Since both $\gamma_{1}$ and $\gamma_{2}$ are non-negative and $q \le \delta_{2}$ and $q \le \delta_{1} + \alpha$, we may choose $T$ sufficiently small such that the second summand in the parentheses is bounded above by $C/2$. Then, with this value of $T$ fixed, we may bound the rest of the terms by $C/2$ as well, given a small enough value of $h$ and a large enough choice of $C$. Finally, we can state that the error estimate for the corrector is bounded above by $C h^{q}$. □

#### 3.2. Error Estimates with Smoothness Assumptions on the Solution

In this subsection, we introduce error estimates under certain smoothness assumptions on the solution of our initial value problem, using the general error estimate introduced above together with the auxiliary results of Section 2. For our first case, we assume that ${}^{CH}D_{a,t}^{\alpha}y$ is sufficiently smooth. The outcome then depends on $\alpha$, specifically on whether $\alpha < 1$ or $\alpha \ge 1$.
Theorem 5.
Let $\alpha > 0$ and assume ${}^{CH}D_{a,t}^{\alpha}y \in C^{2}[a,T]$ for some suitable T. Then,
$\max_{0\le j\le N}|y(t_j)-y_j| = \begin{cases} O(h^{2}), & \alpha \ge 1,\\ O(h^{1+\alpha}), & \alpha < 1.\end{cases}$
We note from this theorem that the order of convergence depends on $\alpha$; more specifically, a larger value of $\alpha$ gives a better order of convergence. The reason is that we chose to discretize an integral operator, which behaves more smoothly as $\alpha$ increases, yielding a higher order. Had we instead discretized the differential operator directly from the initial value problem, the opposite would occur: as $\alpha$ increases, the smoothness of the operator deteriorates and the convergence order diminishes, and it has been shown that no convergence is achieved when $\alpha \ge 2$. Such a method is therefore effective for small $\alpha$ but ineffective for larger $\alpha$. This is one of the main advantages of the present method for the Caputo–Hadamard derivative, as convergence is not only achieved but is most optimal at larger values of $\alpha$.
Proof.
By applying Theorems 3 and 4 and Lemma 1 with $\gamma_1 = \gamma_2 = \alpha > 0$, $\delta_1 = 1$, and $\delta_2 = 2$, we obtain an $O(h^{q})$ error bound, where,
$q = \min\{1+\alpha, 2\} = \begin{cases} 2, & \alpha \ge 1,\\ 1+\alpha, & \alpha < 1.\end{cases}$ □
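As a quick check of the case split, the predicted order of Theorem 5 can be evaluated directly (the helper name below is ours, introduced purely for illustration):

```python
# Convergence order q = min(1 + alpha, 2) from Theorem 5: the order grows
# with alpha until it saturates at 2 for alpha >= 1.
def order_theorem5(alpha):
    return min(1.0 + alpha, 2.0)

# alpha < 1 gives 1 + alpha; alpha >= 1 gives 2
rates = {alpha: order_theorem5(alpha) for alpha in (0.4, 0.8, 1.25, 1.75)}
```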
We can note that this theorem covers an optimal situation. The function we must approximate when applying the Adams method is $f(\cdot,y(\cdot)) = {}^{CH}D_{a,t}^{\alpha}y$, so the error bound depends heavily on obtaining a good approximation of f. Quadrature theory gives a well-known condition for achieving this, namely that the function is $C^{2}$ over the interval of integration, which is exactly the case covered by the theorem above. Therefore, this can be considered an optimal, but also an unusual, situation: rarely are we given enough information to determine the smoothness of y or, in this case, of ${}^{CH}D_{a,t}^{\alpha}y$, so we cannot rely on such a theorem. In general, we must formulate our theorems using the data given on the function f, which will be considered in the next subsection.
We next give some theorems whose smoothness assumptions are placed on the unknown solution y instead of on ${}^{CH}D_{a,t}^{\alpha}y$. Since Theorem 2 implies that smoothness of y often entails nonsmoothness of ${}^{CH}D_{a,t}^{\alpha}y$, we expect some difficulty in finding the error estimates. However, Theorem 2 also gives the form of ${}^{CH}D_{a,t}^{\alpha}y$ under such smoothness conditions, and hence information about its singularities and smoothness, which we use to obtain the following result.
Theorem 6.
Let $\alpha > 1$ and assume that $y \in C^{1+\lceil\alpha\rceil}[a,T]$ for some suitable T. Then,
$\max_{0\le j\le N}|y(t_j)-y_j| = O(h^{1+\lceil\alpha\rceil-\alpha}).$
Proof.
By applying Theorem 2, we have that ${}^{CH}D_{a,t}^{\alpha}y(t) = C\left(\log\frac{t}{a}\right)^{\lceil\alpha\rceil-\alpha} + g\left(\log\frac{t}{a}\right)$, where $g \in C^{1}[a,T]$ and $\delta g$ satisfies a Lipschitz condition of order $\lceil\alpha\rceil-\alpha$. Therefore, applying Theorems 3 and 4 and Lemma 1 with $\gamma_1 = 0$, $\gamma_2 = \alpha-1$, $\delta_1 = 1$, and $\delta_2 = 1+\lceil\alpha\rceil-\alpha$, and noting that $\alpha > 1$ gives $\delta_1 + \alpha = 1+\alpha > 2 > \delta_2$, we have $\min\{\delta_1+\alpha, \delta_2\} = \delta_2$, so an error bound of $O(h^{\delta_2}) = O(h^{1+\lceil\alpha\rceil-\alpha})$ follows. □
From the above theorem, we can conclude that the fractional part of $\alpha$ plays a major role in the overall convergence order. More specifically, as $\lceil\alpha\rceil-\alpha$ increases, that is, as the fractional part of $\alpha$ decreases, the convergence order improves. The convergence order is therefore no longer a monotonic function of $\alpha$ but oscillates between 1 and 2 depending on $\alpha$. Nevertheless, the theorem shows that under such smoothness properties the method converges for all $\alpha > 1$; the case $0 < \alpha < 1$ is covered by Theorem 7 below.
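To make the oscillation explicit, the order of Theorem 6 can be tabulated (the helper name is ours, for illustration only):

```python
import math

# Order 1 + ceil(alpha) - alpha from Theorem 6 (alpha > 1): it depends only
# on the fractional part of alpha, so it oscillates between 1 and 2.
def order_theorem6(alpha):
    return 1 + math.ceil(alpha) - alpha

# a larger fractional part of alpha gives a lower order:
# alpha = 1.25 -> 1.75, alpha = 1.5 -> 1.5, alpha = 1.75 -> 1.25
```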
Theorem 7.
Let $0 < \alpha < 1$ and assume that $y \in C^{2}[a,T]$ for some suitable T. Then, for $1 \le j \le N$,
$|y(t_j)-y_j| \le C\left(\log\frac{t_j}{a}\right)^{\alpha-1}\times\begin{cases} h^{1+\alpha}, & 0<\alpha<\frac{1}{2},\\ h^{2-\alpha}, & \frac{1}{2}\le\alpha<1,\end{cases}$
where C is a constant independent of j.
From this, we can take the following corollary.
Corollary 2.
Under the assumption of Theorem 7, we have,
$\max_{0\le j\le N}|y(t_j)-y_j| = \begin{cases} O(h^{2\alpha}), & 0<\alpha<\frac{1}{2},\\ O(h), & \frac{1}{2}\le\alpha<1.\end{cases}$
Moreover, for every $ϵ ∈ ( a , T )$, we have,
$\max_{t_j\in[\epsilon,T]}|y(t_j)-y_j| = \begin{cases} O(h^{1+\alpha}), & 0<\alpha<\frac{1}{2},\\ O(h^{2-\alpha}), & \frac{1}{2}\le\alpha<1.\end{cases}$
Proof of Theorem 7.
For this proof, we follow that of Theorem 6. However, as $0 < \alpha < 1$, we have $\gamma_2 = \alpha - 1 < 0$, so Lemma 1 no longer applies directly and its proof must be adapted to this case. We retain the main structure of the proof and the use of mathematical induction but change the induction hypothesis to that of (39). With this change, we must estimate the terms $\sum_{j=1}^{k-1} b_{j,k+1}\left(\log\frac{t_j}{a}\right)^{\gamma_2}$ and $\sum_{j=1}^{k-1} a_{j,k+1}\left(\log\frac{t_j}{a}\right)^{\gamma_2}$. By the Mean Value Theorem, $0 \le b_{j,k+1} \le h^{\alpha}(k-j)^{\alpha-1}$ and $0 \le a_{j,k+1} \le c\,h^{\alpha}(k-j)^{\alpha-1}$ for $1 \le j \le k-1$, where c is independent of j and k. Applying these bounds to the weights reduces the problem to bounding the sum $S_k := \sum_{j=1}^{k-1} j^{\gamma_2}(k-j)^{\alpha-1}$. As in Theorems 3 and 4, it is easily shown that $S_k \le C k^{\gamma_2+\alpha}$ when both indices $\gamma_2$ and $\alpha-1$ lie in the interval $(-1,0)$. With this bound, the proof is completed following the structure of the proof of Lemma 1. □
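The growth claimed for $S_k$ can be checked numerically; the following sketch (not part of the paper's proof) compares $S_k/k^{\gamma_2+\alpha}$ with its limiting constant, the Beta function $B(\gamma_2+1,\alpha)$:

```python
import math

# S_k = sum_{j=1}^{k-1} j^{g2} (k-j)^{alpha-1}, with both exponents in (-1, 0)
def S(k, g2, alpha):
    return sum(j ** g2 * (k - j) ** (alpha - 1) for j in range(1, k))

alpha = 0.4
g2 = alpha - 1                      # the exponent arising for 0 < alpha < 1
# limiting constant: Beta(g2 + 1, alpha)
B = math.gamma(g2 + 1) * math.gamma(alpha) / math.gamma(g2 + 1 + alpha)
r400 = S(400, g2, alpha) / 400 ** (g2 + alpha)
r800 = S(800, g2, alpha) / 800 ** (g2 + alpha)
# r400 and r800 both lie near B, with r800 the closer of the two
```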

#### 3.3. Error Estimates with Smoothness Assumptions on the Given Data

In this final subsection, we examine how smoothness assumptions on the given function f change the error and convergence order of this method, for both $\alpha < 1$ and $\alpha > 1$.
Theorem 8.
Let $\alpha > 1$. Then, with $f \in C^{3}(G)$,
$\max_{0\le j\le N}|y(t_j)-y_j| = O(h^{2}).$
Proof.
We split this proof into the cases $\alpha \ge 2$ and $1 < \alpha < 2$. When $\alpha \ge 2$, we can adapt a result of Miller and Feldstein [26], which shows that $y \in C^{2}[a,T]$. Then, given the smoothness assumptions on f, and applying the chain rule, ${}^{CH}D_{a,t}^{\alpha}y \in C^{2}[a,T]$. This fulfills the conditions of Theorem 5, which gives the desired result.
Now, for $1 < \alpha < 2$, we wish to apply Lemma 1. To do this, we must find the constants $\gamma_1, \gamma_2, \delta_1$, and $\delta_2$ in the lemma. As in our previous theorems, we must determine the behavior and smoothness of y. We obtain this information from Theorem 1b, which tells us that y has the form,
$y(t) = \psi(t) + \sum_{v=1}^{\hat{v}} c_v \left(\log\frac{t}{a}\right)^{v\alpha} + \sum_{v=1}^{\tilde{v}} d_v \left(\log\frac{t}{a}\right)^{1+v\alpha}.$
As $\psi \in C^{2}[a,T]$, this implies $y \in C^{1}[a,T]$. As in the case $\alpha \ge 2$, we can deduce ${}^{CH}D_{a,t}^{\alpha}y \in C^{1}[a,T]$. This fulfills the conditions of Theorem 3a, giving $\gamma_1 = \alpha$ and $\delta_1 = 1$. Furthermore, we may apply Theorem 4a,c to find the remaining values $\gamma_2 = \min\{\alpha, 2\alpha-2\} = 2\alpha-2 \ge 0$ and $\delta_2 = 2$. Applying Lemma 1 gives the required result. □

## 4. Numerical Examples

In this section, we consider some numerical examples to confirm the theoretical results presented in the previous sections. We present examples with $0 < \alpha < 2$, as applications with $\alpha \ge 2$ are not currently of interest. We solve the following examples with $a = 1$ and $T = 2$. The examples and graphs were produced in MATLAB using the following algorithm.
• Give an initial value $y 0$.
• Find $y 1 P$ using (11) and (12) such that,
$y_1^{P} = y_0 + b_{0,1} f(t_0, y_0),$
where,
$b_{0,1} = \frac{1}{\Gamma(\alpha+1)}\left[\left(\log\frac{t_1}{t_0}\right)^{\alpha} - \left(\log\frac{t_1}{t_1}\right)^{\alpha}\right] = \frac{1}{\Gamma(\alpha+1)}\left(\log\frac{t_1}{a}\right)^{\alpha}.$
• Find $y 1$ using (8) and (9) such that,
$y_1 = y_0 + \frac{1}{\Gamma(\alpha)}\left[a_{0,1} f(t_0,y_0) + a_{1,1} f(t_1, y_1^{P})\right].$
• Repeat steps 2–3 to find $y 2 , y 3 , … , y N$.
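The steps above can be sketched in code. The following Python translation is a minimal sketch for $0 < \alpha < 1$, not the authors' MATLAB implementation: it assumes the classical fractional Adams weights of Diethelm, Ford, and Freed [15] transplanted to the log variable (consistent with the weight $b_{0,1}$ displayed above), and the function name `ch_adams` is introduced here for illustration.

```python
import math

def ch_adams(f, alpha, a, T, y0, N):
    """Predictor-corrector sketch for CH_D^alpha y = f(t, y), 0 < alpha < 1,
    on the log-uniform mesh log t_j = log a + j*h with h = log(T/a)/N."""
    h = math.log(T / a) / N
    t = [a * math.exp(j * h) for j in range(N + 1)]
    y = [y0] + [0.0] * N
    fval = [f(t[0], y0)] + [0.0] * N
    c1 = h ** alpha / math.gamma(alpha + 1)   # rectangle (predictor) scale
    c2 = h ** alpha / math.gamma(alpha + 2)   # trapezoid (corrector) scale
    for k in range(N):
        # predictor: product-rectangle weights b_{j,k+1}
        yp = y0 + c1 * sum(((k + 1 - j) ** alpha - (k - j) ** alpha) * fval[j]
                           for j in range(k + 1))
        # corrector: product-trapezoid weights a_{j,k+1}
        s = (k ** (alpha + 1) - (k - alpha) * (k + 1) ** alpha) * fval[0]
        s += sum(((k - j + 2) ** (alpha + 1) + (k - j) ** (alpha + 1)
                  - 2 * (k - j + 1) ** (alpha + 1)) * fval[j]
                 for j in range(1, k + 1))
        s += f(t[k + 1], yp)                  # a_{k+1,k+1} contribution
        y[k + 1] = y0 + c2 * s
        fval[k + 1] = f(t[k + 1], y[k + 1])
    return t, y
```

For instance, on ${}^{CH}D_{a,t}^{\alpha}y = \frac{\Gamma(9)}{\Gamma(9-\alpha)}(\log t)^{8-\alpha}$ with $a = 1$, whose exact solution is $(\log t)^{8}$, the computed value at $T = 2$ approaches $(\log 2)^{8}$ as N grows.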
Example 1
(${}^{CH}D_{a,t}^{\alpha}y$ is smooth). Consider the following nonlinear fractional differential equation, with $0 < \alpha < 2$,
${}^{CH}D_{a,t}^{\alpha}y(t) = f(t,y), \quad 1 \le a < t \le T, \qquad y(a) = 0, \quad \delta y(a) = 0,$
where,
$f(t,y) = \frac{\Gamma(9)}{\Gamma(9-\alpha)}\left(\log\frac{t}{a}\right)^{8-\alpha} - 3\frac{\Gamma(5+\alpha/2)}{\Gamma(5-\alpha/2)}\left(\log\frac{t}{a}\right)^{4-\alpha/2} + \frac{9}{4}\Gamma(\alpha+1) + \left[\frac{3}{2}\left(\log\frac{t}{a}\right)^{\alpha/2} - \left(\log\frac{t}{a}\right)^{4}\right]^{3} - y^{3/2}.$
This example has a nonlinear and nonsmooth f. However, due to the form of this equation, the solution y is known to be,
$y(t) = \left(\log\frac{t}{a}\right)^{8} - 3\left(\log\frac{t}{a}\right)^{4+\alpha/2} + \frac{9}{4}\left(\log\frac{t}{a}\right)^{\alpha}.$
It follows that,
${}^{CH}D_{a,t}^{\alpha}y(t) = \frac{\Gamma(9)}{\Gamma(9-\alpha)}\left(\log\frac{t}{a}\right)^{8-\alpha} - 3\frac{\Gamma(5+\alpha/2)}{\Gamma(5-\alpha/2)}\left(\log\frac{t}{a}\right)^{4-\alpha/2} + \frac{9}{4}\Gamma(\alpha+1).$
For $\alpha \le 4$, we have ${}^{CH}D_{a,t}^{\alpha}y \in C^{2}[a,T]$, which fulfills the requirements of Theorem 5, so the theorem applies to this example. Let N be a positive integer and let $\log a = \log t_0 < \log t_1 < \dots < \log t_N = \log T$ be the uniform mesh on the interval $[\log a, \log T]$ such that $\log t_j = \log a + jh$ for $j = 0,1,2,\dots,N$, with $h = \frac{1}{N}\log\frac{T}{a}$. By Theorem 5, we then have,
$\|e_N\|_{\infty} = \max_{0\le j\le N}|y(t_j)-y_j| = \begin{cases} O(h^{2}), & \alpha \ge 1,\\ O(h^{1+\alpha}), & \alpha < 1.\end{cases}$
In Table 1 and Table 2, we report the maximum absolute error and experimental order of convergence (EOC) for the predictor–corrector method at varying $\alpha$ and N values. For each $0 < \alpha < 2$, we have chosen the N values $N = 10\times 2^{l}$, $l = 0,1,2,\dots,7$. The maximum absolute errors $\|e_N\|_{\infty}$ were obtained as shown above with respect to N, and the experimental order of convergence, or EOC, is calculated as $\log_2\frac{\|e_N\|_{\infty}}{\|e_{2N}\|_{\infty}}$. As we can see, the EOCs for the above example follow Theorem 5.
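For instance, the first EOC entry for $\alpha = 0.8$ can be reproduced from two consecutive errors (a minimal check; the error values are copied from Table 1):

```python
import math

# EOC = log2(||e_N|| / ||e_2N||), using the maximum errors from Table 1
# at alpha = 0.8 for N = 10 and N = 20.
e_N, e_2N = 1.350e-2, 3.542e-3
eoc = math.log2(e_N / e_2N)   # approximately 1.930, as reported in Table 1
```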
In Figure 1, we have plotted the order of convergence for Example 1. From Equation (51), we have for $α < 1$,
$\log_2\|e_N\| \le \log_2 C + \log_2 h^{1+\alpha} = \log_2 C + (1+\alpha)\log_2 h.$
We can now plot this graph with $y = \log_2\|e_N\|$ and $x = \log_2 h$, where $h = \frac{1}{5\times 2^{l}}$, $l = 0,1,\dots,7$; the gradient of the graph then equals the EOC. To compare this to the theoretical order of convergence, we have also plotted the straight line $y = (1+\alpha)x$. For Figure 1, we use $\alpha = 0.8$. The two lines drawn are parallel, so we can conclude that the order of convergence of this predictor–corrector method is $O(h^{1+\alpha})$ for $\alpha < 1$. A similar result holds for $\alpha > 1$: Figure 2 shows the same graph for $\alpha = 1.75$, where the line is parallel to the straight line $y = 2x$, as expected since the convergence order should tend to 2 for $\alpha > 1$.
Example 2
(y is smooth). Consider the following nonlinear fractional differential equation, with $0 < α < 2$,
${}^{CH}D_{a,t}^{\alpha}y(t) = f(t,y), \quad 1 \le a < t \le T, \qquad y(a) = 0, \quad \delta y(a) = -1,$
where,
$f(t,y) = \begin{cases} \dfrac{2}{\Gamma(3-\alpha)}\left(\log\frac{t}{a}\right)^{2-\alpha} - y + \left(\log\frac{t}{a}\right)^{2} - \log\frac{t}{a}, & \alpha > 1,\\[6pt] \dfrac{2}{\Gamma(3-\alpha)}\left(\log\frac{t}{a}\right)^{2-\alpha} - \dfrac{1}{\Gamma(2-\alpha)}\left(\log\frac{t}{a}\right)^{1-\alpha} - y + \left(\log\frac{t}{a}\right)^{2} - \log\frac{t}{a}, & \alpha \le 1.\end{cases}$
The exact solution of this equation is,
$y(t) = \left(\log\frac{t}{a}\right)^{2} - \log\frac{t}{a}.$
In Table 3, Table 4, and Table 5, we report the maximum absolute error and experimental order of convergence (EOC) for the predictor–corrector method at varying $\alpha$ and N values. For each $0 < \alpha < 2$, we have chosen the N values $N = 10\times 2^{l}$, $l = 0,1,2,\dots,7$. As $y \in C^{2}[a,T]$, we can apply Theorems 6 and 7 and, more specifically, Corollary 2. As we can see, the EOCs for this example verify these theorems: when $\alpha \le 0.5$, the EOC is close to $1+\alpha$; when $0.5 < \alpha < 1$, it is close to $2-\alpha$; and when $\alpha > 1$, it is close to $1+\lceil\alpha\rceil-\alpha$.
Example 3
(f is smooth). Consider the following nonlinear fractional differential equation, with $0 < α < 2$,
${}^{CH}D_{a,t}^{\alpha}y(t) = -y(t), \quad 1 \le a < t \le T, \qquad y(a) = 1, \quad \delta y(a) = 0.$
The exact solution of this equation is $y(t) = E_{\alpha,1}\left(-(\log t)^{\alpha}\right)$. Therefore, ${}^{CH}D_{a,t}^{\alpha}y(t) = -E_{\alpha,1}\left(-(\log t)^{\alpha}\right)$, where $E_{\alpha,\gamma}(z)$ is the Mittag–Leffler function,
$E_{\alpha,\gamma}(z) = \sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\gamma)}, \qquad \alpha,\gamma > 0.$
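A truncated-series evaluation of the Mittag–Leffler function suffices to compute the exact solution of Example 3 for comparison; the truncation length below is an illustrative choice, adequate for the moderate arguments arising on $[a,T]=[1,2]$:

```python
import math

def mittag_leffler(alpha, gamma, z, terms=80):
    # Truncated series E_{alpha,gamma}(z) = sum_{k>=0} z^k / Gamma(alpha*k + gamma);
    # 80 terms is ample for the moderate |z| arising on [1, 2].
    return sum(z ** k / math.gamma(alpha * k + gamma) for k in range(terms))

# exact solution of Example 3 at t, for a = 1:
def y_exact(t, alpha):
    return mittag_leffler(alpha, 1.0, -math.log(t) ** alpha)
```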
Therefore, f is smooth, and this equation fulfils the conditions of Theorem 8. In Table 6 and Table 7, we report the maximum absolute error and experimental order of convergence (EOC) for the predictor–corrector method at varying $\alpha$ and N values. For each $0 < \alpha < 2$, we have chosen the N values $N = 10\times 2^{l}$, $l = 0,1,2,\dots,7$. As we can see, the EOCs verify the theoretical results: when $0 < \alpha < 1$, the EOC is close to $1+\alpha$, and for $\alpha > 1$ it is close to 2.

## 5. Conclusions

In this paper, we proposed a predictor–corrector method for solving Caputo–Hadamard fractional differential equations. We investigated how different smoothness conditions on the given data f and on the unknown solution y affect the convergence order. Under optimal conditions and with $\alpha \ge 1$, a theoretical convergence order of 2 is obtained. Under other smoothness conditions, a suboptimal convergence order is achieved, often depending on the fractional part of $\alpha$. Several numerical simulations were given to support the theoretical error estimates.

## Author Contributions

We contributed equally to this work. C.W.H.G. considered the theoretical analysis, performed the numerical simulation, and wrote the original version of the work. Y.Y. introduced and guided this research topic. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
2. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; North-Holland Mathematics Studies; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204. [Google Scholar]
3. Oldham, K.B.; Spanier, J. The Fractional Calculus; Academic Press: New York, NY, USA, 1974. [Google Scholar]
4. Podlubny, I. Fractional Differential Equations; Math, Science and Engineering; Academic Press: San Diego, CA, USA, 1999. [Google Scholar]
5. Adjabi, Y.; Jarad, F.; Baleanu, D.; Abdeljawad, T. On Cauchy problems with Caputo Hadamard fractional derivatives. J. Comput. Anal. Appl. 2016, 21, 661–681. [Google Scholar]
6. Garra, R.; Mainardi, F.; Spada, G. A generalization of the Lomnitz logarithmic creep law via Hadamard fractional calculus. Chaos Solitons Fractals 2017, 102, 333–338. [Google Scholar] [CrossRef]
7. Gohar, M.; Li, C.; Yin, C. On Caputo Hadamard fractional differential equations. Int. J. Comput. Math. 2020, 97, 1459–1483. [Google Scholar] [CrossRef]
8. Jarad, F.; Abdeljawad, T.; Baleanu, D. Caputo-type modification of the Hadamard fractional derivatives. Adv. Differ. Equ. 2012, 2012, 142. [Google Scholar] [CrossRef]
9. Kilbas, A.A. Hadamard-type fractional calculus. J. Korean Math. Soc. 2001, 38, 1191–1204. [Google Scholar]
10. Li, C.; Cai, M. Theory and Numerical Approximations of Fractional Integrals and Derivatives; SIAM: Philadelphia, PA, USA, 2019. [Google Scholar]
11. Ma, L. On the kinetics of Hadamard-type fractional differential systems. Fract. Calc. Appl. Anal. 2020, 23, 553–570. [Google Scholar] [CrossRef]
12. Hadamard, J. Essai sur l'étude des fonctions données par leur développement de Taylor. J. Math. Pures Appl. 1892, 4, 101–186. [Google Scholar]
13. Gohar, M.; Li, C.; Li, Z. Finite Difference Methods for Caputo-Hadamard Fractional Differential Equations. Mediterr. J. Math. 2020, 17, 194. [Google Scholar] [CrossRef]
14. Green, C.W.H.; Liu, Y.; Yan, Y. Numerical methods for Caputo- Hadamard fractional differential equations with graded and non-uniform meshes. Mathematics 2021, 9, 2728. [Google Scholar] [CrossRef]
15. Diethelm, K.; Ford, N.J.; Freed, A.D. Detailed error analysis for a fractional Adams method. Numer. Algorithms 2004, 36, 31–52. [Google Scholar] [CrossRef]
16. Liu, Y.; Roberts, J.; Yan, Y. Detailed error analysis for a fractional Adams method with graded meshes. Numer. Algorithms 2018, 78, 1195–1216. [Google Scholar] [CrossRef]
17. Li, C.; Li, Z.; Wang, Z. Mathematical analysis and the local discontinuous Galerkin method for Caputo-Hadamard fractional partial differential equation. J. Sci. Comput. 2020, 85, 41. [Google Scholar] [CrossRef]
18. Zhang, Y.; Sun, Z.; Liao, H. Finite difference methods for the time fractional diffusion equation on non-uniform meshes. Int. J. Comput. Math. 2014, 265, 195–210. [Google Scholar] [CrossRef]
19. Li, C.; Yi, Q.; Chen, A. Finite difference methods with non-uniform meshes for nonlinear fractional differential equations. J. Comput. Phys. 2016, 316, 614–631. [Google Scholar] [CrossRef]
20. Diethelm, K. Generalized compound quadrature formulae for finite-part integrals. IMA J. Numer. Anal. 1997, 17, 479–493. [Google Scholar] [CrossRef]
21. Zabidi, N.A.; Majid, Z.A.; Kilicman, A. Numerical solution of fractional differential equations with Caputo derivative by using numerical fractional predict-correct technique. Adv. Contin. Discret. Model. 2022, 2022, 26. [Google Scholar] [CrossRef]
22. Ahmad, B.; Alsaedi, A.; Ntouyas, S.K.; Tariboon, J. Hadamard-Type Fractional Differential Equations, Inclusions and Inequalities; Springer: Cham, Switzerland, 2017. [Google Scholar]
23. Wang, G.; Ren, X.; Zhang, L.; Ahmad, B. Explicit iteration and unique positive solution for a Caputo-Hadamard fractional turbulent flow model. IEEE Access 2019, 7, 109833–109839. [Google Scholar] [CrossRef]
24. Awadalla, M.; Yameni, Y.Y.; Abuassba, K. A new fractional model for the cancer treatment by radiotherapy using the Hadamard fractional derivative. OMJ 1971, 1, 14–18. [Google Scholar]
25. Diethelm, K.; Ford, N.J. Analysis of fractional differential equations. J. Math. Anal. Appl. 2002, 265, 229–248. [Google Scholar] [CrossRef]
26. Miller, R.K.; Feldstein, A. Smoothness of solutions of Volterra integral equations with weakly singular kernels. SIAM J. Math. Anal. 1971, 2, 242–258. [Google Scholar] [CrossRef]
Figure 1. Graph showing the experimental order of convergence (EOC) at T = 2 in Example 1 with $α = 0.8$.
Figure 2. Graph showing the experimental order of convergence (EOC) at T = 2 in Example 1 with $α = 1.75$.
Table 1. Table showing the maximum absolute error and EOC for solving (47) using the predictor–corrector method for $0 < α < 1$.
| $N$ | $\alpha = 0.4$ | EOC | $\alpha = 0.6$ | EOC | $\alpha = 0.8$ | EOC |
|---|---|---|---|---|---|---|
| 10 | $6.839 \times 10^{-2}$ | | $2.607 \times 10^{-2}$ | | $1.350 \times 10^{-2}$ | |
| 20 | $2.064 \times 10^{-2}$ | 1.728 | $7.433 \times 10^{-3}$ | 1.810 | $3.542 \times 10^{-3}$ | 1.930 |
| 40 | $6.576 \times 10^{-3}$ | 1.650 | $2.209 \times 10^{-3}$ | 1.751 | $9.534 \times 10^{-4}$ | 1.894 |
| 80 | $2.193 \times 10^{-3}$ | 1.584 | $6.768 \times 10^{-4}$ | 1.706 | $2.610 \times 10^{-4}$ | 1.869 |
| 160 | $7.561 \times 10^{-4}$ | 1.536 | $2.120 \times 10^{-4}$ | 1.675 | $7.225 \times 10^{-5}$ | 1.853 |
| 320 | $2.671 \times 10^{-4}$ | 1.502 | $6.744 \times 10^{-5}$ | 1.653 | $2.015 \times 10^{-5}$ | 1.842 |
| 640 | $9.600 \times 10^{-5}$ | 1.476 | $2.169 \times 10^{-5}$ | 1.637 | $5.652 \times 10^{-6}$ | 1.834 |
| 1280 | $3.497 \times 10^{-5}$ | 1.457 | $7.026 \times 10^{-6}$ | 1.626 | $1.591 \times 10^{-6}$ | 1.829 |
Table 2. Table showing the maximum absolute error and EOC for solving (47) using the predictor–corrector method for $α > 1$.
| $N$ | $\alpha = 1.25$ | EOC | $\alpha = 1.50$ | EOC | $\alpha = 1.75$ | EOC |
|---|---|---|---|---|---|---|
| 10 | $6.166 \times 10^{-3}$ | | $5.475 \times 10^{-3}$ | | $5.369 \times 10^{-3}$ | |
| 20 | $1.463 \times 10^{-3}$ | 2.076 | $1.319 \times 10^{-3}$ | 2.053 | $1.319 \times 10^{-3}$ | 2.026 |
| 40 | $3.497 \times 10^{-4}$ | 2.065 | $3.207 \times 10^{-4}$ | 2.040 | $3.259 \times 10^{-4}$ | 2.017 |
| 80 | $8.408 \times 10^{-5}$ | 2.056 | $7.853 \times 10^{-5}$ | 2.030 | $8.091 \times 10^{-5}$ | 2.010 |
| 160 | $2.033 \times 10^{-5}$ | 2.049 | $1.934 \times 10^{-5}$ | 2.022 | $2.014 \times 10^{-5}$ | 2.006 |
| 320 | $4.935 \times 10^{-6}$ | 2.042 | $4.783 \times 10^{-6}$ | 2.016 | $5.022 \times 10^{-6}$ | 2.004 |
| 640 | $1.203 \times 10^{-6}$ | 2.036 | $1.187 \times 10^{-6}$ | 2.011 | $1.253 \times 10^{-6}$ | 2.002 |
| 1280 | $2.943 \times 10^{-7}$ | 2.031 | $2.951 \times 10^{-7}$ | 2.008 | $3.131 \times 10^{-7}$ | 2.001 |
Table 3. Table showing the maximum absolute error and EOC for solving (53) using the predictor–corrector method for $0 < α ≤ 0.5$.
| $N$ | $\alpha = 0.1$ | EOC | $\alpha = 0.3$ | EOC | $\alpha = 0.5$ | EOC |
|---|---|---|---|---|---|---|
| 10 | $2.225 \times 10^{-2}$ | | $9.375 \times 10^{-3}$ | | $5.123 \times 10^{-3}$ | |
| 20 | $1.261 \times 10^{-2}$ | 0.819 | $3.345 \times 10^{-3}$ | 1.487 | $1.622 \times 10^{-3}$ | 1.660 |
| 40 | $5.625 \times 10^{-3}$ | 1.164 | $1.196 \times 10^{-3}$ | 1.484 | $5.255 \times 10^{-4}$ | 1.626 |
| 80 | $2.387 \times 10^{-3}$ | 1.237 | $4.343 \times 10^{-4}$ | 1.461 | $1.739 \times 10^{-4}$ | 1.595 |
| 160 | $1.004 \times 10^{-3}$ | 1.249 | $1.606 \times 10^{-4}$ | 1.436 | $5.856 \times 10^{-5}$ | 1.571 |
| 320 | $4.242 \times 10^{-4}$ | 1.243 | $6.033 \times 10^{-5}$ | 1.412 | $1.998 \times 10^{-5}$ | 1.552 |
| 640 | $1.805 \times 10^{-4}$ | 1.233 | $2.299 \times 10^{-5}$ | 1.392 | $6.884 \times 10^{-6}$ | 1.537 |
| 1280 | $7.742 \times 10^{-5}$ | 1.221 | $8.864 \times 10^{-6}$ | 1.375 | $2.389 \times 10^{-6}$ | 1.527 |
Table 4. Table showing the maximum absolute error and EOC for solving (53) using the predictor–corrector method for $0.5 < α < 1$.
| $N$ | $\alpha = 0.7$ | EOC | $\alpha = 0.9$ | EOC |
|---|---|---|---|---|
| 10 | $5.507 \times 10^{-3}$ | | $1.162 \times 10^{-2}$ | |
| 20 | $1.931 \times 10^{-3}$ | 1.512 | $5.100 \times 10^{-3}$ | 1.188 |
| 40 | $7.031 \times 10^{-4}$ | 1.457 | $2.300 \times 10^{-3}$ | 1.151 |
| 80 | $2.635 \times 10^{-4}$ | 1.416 | $1.051 \times 10^{-3}$ | 1.129 |
| 160 | $1.008 \times 10^{-4}$ | 1.386 | $4.847 \times 10^{-4}$ | 1.116 |
| 320 | $3.920 \times 10^{-5}$ | 1.363 | $2.247 \times 10^{-4}$ | 1.109 |
| 640 | $1.541 \times 10^{-5}$ | 1.347 | $1.045 \times 10^{-4}$ | 1.105 |
| 1280 | $6.110 \times 10^{-6}$ | 1.335 | $4.863 \times 10^{-5}$ | 1.103 |
Table 5. Table showing the maximum absolute error and EOC for solving (53) using the predictor–corrector method for $α > 1$.
| $N$ | $\alpha = 1.25$ | EOC | $\alpha = 1.50$ | EOC | $\alpha = 1.75$ | EOC |
|---|---|---|---|---|---|---|
| 10 | $6.166 \times 10^{-3}$ | | $5.475 \times 10^{-3}$ | | $5.369 \times 10^{-3}$ | |
| 20 | $1.463 \times 10^{-3}$ | 2.076 | $1.319 \times 10^{-3}$ | 2.053 | $1.319 \times 10^{-3}$ | 2.026 |
| 40 | $3.497 \times 10^{-4}$ | 2.065 | $3.207 \times 10^{-4}$ | 2.040 | $3.259 \times 10^{-4}$ | 2.017 |
| 80 | $8.408 \times 10^{-5}$ | 2.056 | $7.853 \times 10^{-5}$ | 2.030 | $8.091 \times 10^{-5}$ | 2.010 |
| 160 | $2.033 \times 10^{-5}$ | 2.049 | $1.934 \times 10^{-5}$ | 2.022 | $2.014 \times 10^{-5}$ | 2.006 |
| 320 | $4.935 \times 10^{-6}$ | 2.042 | $4.783 \times 10^{-6}$ | 2.016 | $5.022 \times 10^{-6}$ | 2.004 |
| 640 | $1.203 \times 10^{-6}$ | 2.036 | $1.187 \times 10^{-6}$ | 2.011 | $1.253 \times 10^{-6}$ | 2.002 |
| 1280 | $2.943 \times 10^{-7}$ | 2.031 | $2.951 \times 10^{-7}$ | 2.008 | $3.131 \times 10^{-7}$ | 2.001 |
Table 6. Table showing the maximum absolute error and EOC for solving Example 3 using the predictor–corrector method for $α < 1$.
| $N$ | $\alpha = 0.3$ | EOC | $\alpha = 0.6$ | EOC | $\alpha = 0.9$ | EOC |
|---|---|---|---|---|---|---|
| 10 | $1.353 \times 10^{-3}$ | | $6.520 \times 10^{-4}$ | | $3.414 \times 10^{-4}$ | |
| 20 | $4.324 \times 10^{-4}$ | 1.646 | $1.937 \times 10^{-4}$ | 1.751 | $8.704 \times 10^{-5}$ | 1.972 |
| 40 | $1.466 \times 10^{-4}$ | 1.560 | $6.019 \times 10^{-5}$ | 1.686 | $2.270 \times 10^{-5}$ | 1.939 |
| 80 | $5.159 \times 10^{-5}$ | 1.507 | $1.919 \times 10^{-5}$ | 1.649 | $5.990 \times 10^{-6}$ | 1.922 |
| 160 | $1.863 \times 10^{-5}$ | 1.470 | $6.211 \times 10^{-6}$ | 1.628 | $1.591 \times 10^{-6}$ | 1.913 |
| 320 | $6.863 \times 10^{-6}$ | 1.441 | $2.027 \times 10^{-6}$ | 1.615 | $4.239 \times 10^{-7}$ | 1.908 |
| 640 | $2.570 \times 10^{-6}$ | 1.417 | $6.652 \times 10^{-7}$ | 1.608 | $1.132 \times 10^{-7}$ | 1.905 |
| 1280 | $9.752 \times 10^{-7}$ | 1.398 | $2.189 \times 10^{-7}$ | 1.604 | $3.026 \times 10^{-8}$ | 1.904 |
Table 7. Table showing the maximum absolute error and EOC for solving Example 3 using the predictor–corrector method for $α > 1$.
| $N$ | $\alpha = 1.25$ | EOC | $\alpha = 1.50$ | EOC | $\alpha = 1.75$ | EOC |
|---|---|---|---|---|---|---|
| 10 | $2.457 \times 10^{-4}$ | | $2.108 \times 10^{-4}$ | | $1.507 \times 10^{-4}$ | |
| 20 | $5.847 \times 10^{-5}$ | 2.071 | $5.141 \times 10^{-5}$ | 2.036 | $3.723 \times 10^{-5}$ | 2.017 |
| 40 | $1.400 \times 10^{-5}$ | 2.062 | $1.261 \times 10^{-5}$ | 2.027 | $9.236 \times 10^{-6}$ | 2.011 |
| 80 | $3.374 \times 10^{-6}$ | 2.053 | $3.109 \times 10^{-6}$ | 2.020 | $2.298 \times 10^{-6}$ | 2.007 |
| 160 | $8.213 \times 10^{-7}$ | 2.039 | $7.693 \times 10^{-7}$ | 2.015 | $5.728 \times 10^{-7}$ | 2.004 |
| 320 | $2.016 \times 10^{-7}$ | 2.027 | $1.909 \times 10^{-7}$ | 2.011 | $1.430 \times 10^{-7}$ | 2.003 |
| 640 | $4.975 \times 10^{-8}$ | 2.019 | $4.748 \times 10^{-8}$ | 2.008 | $3.570 \times 10^{-8}$ | 2.002 |
| 1280 | $1.233 \times 10^{-8}$ | 2.013 | $1.183 \times 10^{-8}$ | 2.005 | $8.919 \times 10^{-9}$ | 2.001 |
 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.