Article

A Fractional Adams Method for Caputo Fractional Differential Equations with Modified Graded Meshes

1 Department of Mathematics and Artificial Intelligence, Lyuliang University, Lishi, Lüliang 033000, China
2 School of Computer and Engineering Sciences, University of Chester, Chester CH1 4BJ, UK
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(5), 891; https://doi.org/10.3390/math13050891
Submission received: 10 February 2025 / Revised: 2 March 2025 / Accepted: 5 March 2025 / Published: 6 March 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract:
In this paper, we introduce an Adams-type predictor–corrector method based on a modified graded mesh for solving Caputo fractional differential equations. This method not only effectively handles the weak singularity near the initial point but also reduces errors associated with large intervals in traditional graded meshes. We prove the error estimates in detail for both 0 < α < 1 and 1 < α < 2 cases, where α is the order of the Caputo fractional derivative. Numerical experiments confirm the convergence of the proposed method and compare its performance with the traditional graded mesh approach.

1. Introduction

In this paper, we introduce a fractional Adams method with a modified graded mesh for solving the following nonlinear fractional differential equation, with 0 < α < 2 :
$$ {}_{0}^{C}D_{t}^{\alpha}u(t) = g(t, u(t)), \quad t > 0, \qquad u^{(k)}(0) = u_{0}^{(k)}, \quad k = 0, 1, \dots, \lceil \alpha \rceil - 1, $$
where $u_0^{(k)}$ are arbitrary real numbers and ${}_{0}^{C}D_{t}^{\alpha}u(t)$ represents the Caputo fractional derivative, defined by
$$ {}_{0}^{C}D_{t}^{\alpha}u(t) = \frac{1}{\Gamma(\lceil \alpha \rceil - \alpha)} \int_{0}^{t} (t-s)^{\lceil \alpha \rceil - \alpha - 1}\, u^{(\lceil \alpha \rceil)}(s)\, ds, $$
with $\Gamma(\cdot)$ denoting the Gamma function and $\lceil \alpha \rceil$ the smallest integer greater than or equal to $\alpha$. The function $g(t, u)$ satisfies a Lipschitz condition with respect to the second variable, i.e.,
$$ |g(t, u_1) - g(t, u_2)| \le L\, |u_1 - u_2|, \quad \forall\, u_1, u_2 \in \mathbb{R}, $$
where L is a positive constant.
We shall focus only on the case 0 < α < 2 , as α > 2 does not appear to be of significant practical interest ([1], lines 4–5 on page 46). The error estimates for the case α > 2 can be derived in a similar manner.
It is well-known that Equation (1) is equivalent to the following integral representation:
$$ u(t) = \sum_{\nu=0}^{\lceil \alpha \rceil - 1} u_{0}^{(\nu)} \frac{t^{\nu}}{\nu!} + \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} g(s, u(s))\, ds. $$
The existence and uniqueness of the solution to Equation (1) have been thoroughly discussed in [1].
The numerical solution of fractional differential equations (FDEs) has been a topic of significant research interest in recent decades due to their applications in fields such as physics, biology, and engineering [2,3]. Exact solutions for FDEs are often difficult to obtain. Therefore, it is necessary to develop some efficient numerical methods for solving FDEs.
In addition to Adams methods, other numerical techniques for solving FDEs have been extensively explored. One approach involves directly approximating the fractional derivative, as discussed in [4,5,6]. Another transforms the FDEs into equivalent integral forms, which are then solved using quadrature-based schemes [7,8,9,10,11,12,13]. Furthermore, alternative strategies, such as variational iteration [14], Adomian decomposition [15], finite-element [16], and spectral methods [17], have been developed to address specific FDEs.
The Adams methods, particularly the predictor–corrector variants, have received significant attention for their efficiency in solving FDEs. For instance, Deng [18] enhanced the Adams-type predictor–corrector method by incorporating the short memory principle of fractional calculus, thereby reducing computational complexity. Nguyen and Jang [19] introduced a new prediction stage with the same accuracy order as the correction stage, while Zhou et al. [20] developed a fast second-order Adams method on graded meshes to solve nonlinear time-fractional equations, such as the Benjamin–Bona–Mahony–Burgers equation. Moreover, Lee et al. [21] and Mokhtarnezhadazar [22] proposed an efficient predictor–corrector method based on the Caputo–Fabrizio derivative and a high-order method on non-uniform meshes, respectively. These advancements help reduce computational effort while maintaining high precision.
Among the many numerical methods available for solving FDEs, Diethelm et al. [1,23,24,25] provided the theoretical foundation for the fractional Adams method. They proposed an Adams-type predictor–corrector scheme on uniform meshes and provided rigorous error estimates under the assumption that g ( t ) : = D t α 0 C u ( t ) C 2 [ 0 , T ] . The method achieves convergence rates of O ( N ( 1 + α ) ) for 0 < α 1 and O ( N 2 ) for α > 1 , where N is the number of the nodes of the time partition on [ 0 , T ] . These results have since inspired various extensions and refinements. Liu et al. [26] introduced graded meshes to better handle the singular behavior of solutions near t = 0 . Their analysis refined error estimates and demonstrated that graded meshes significantly improve accuracy for FDEs with initial singularities, making them a practical choice for challenging problems. Furthermore, fractional calculus is more flexible than classical calculus, and recently, some new fractional definitions have been developed (see [27]). These developments provide new perspectives and tools for the numerical solution of fractional differential equations.
In this paper, we propose a modified Adams-type predictor–corrector method with a modified graded mesh. This type of mesh was first introduced in [28]. The modified graded mesh employs a non-uniform grid near the initial point to capture weak singularities, while a uniform grid is used away from the initial point, effectively reducing numerical errors. Our approach not only preserves the advantages of traditional graded meshes but also further optimizes the grid distribution, improving the accuracy of the numerical solutions.
Let $0 = t_0 < t_1 < t_2 < \cdots < t_N = T$ be a partition of $[0, T]$; we shall consider the following modified graded mesh [28]. Define the positive monitor function
$$ \bar{M}(t) = \max \left\{ T,\; K t^{\frac{\alpha}{2} - 1} \right\}, $$
where $K \in (0, T]$ is a constant and $0 < \alpha < 2$. The mesh is constructed so that $\bar{M}(t)$ is equidistributed, i.e.,
$$ \int_{t_n}^{t_{n+1}} \bar{M}(s)\, ds = \frac{1}{N} \int_{0}^{T} \bar{M}(s)\, ds, \quad \text{for } n = 0, 1, \dots, N-1. $$
Define $\bar{\sigma} = \left( \frac{T}{K} \right)^{\frac{2}{\alpha - 2}}$ and choose a suitable $K \in (0, T]$ such that $t_J = \bar{\sigma}$ for some $J \in \mathbb{N}$. The modified graded mesh $\{ t_n \}_{n=0}^{N}$ is defined as follows:
$$ t_n = \begin{cases} \left( \dfrac{\alpha P n}{2 K N} \right)^{\frac{2}{\alpha}}, & n = 0, 1, \dots, J-1, \\[2mm] \left( 1 - \dfrac{2}{\alpha} \right) \bar{\sigma} + \dfrac{P n}{T N}, & n = J, J+1, \dots, N, \end{cases} $$
where $P = T^2 + T \bar{\sigma} \left( \frac{2}{\alpha} - 1 \right)$. The grid points $\{ t_n : n = 0, 1, \dots, J-1 \}$ constitute a non-uniform (graded) grid, whereas the grid points $\{ t_n : n = J, J+1, \dots, N \}$ form a uniform grid [28].
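For concreteness, the following is a minimal Python sketch of how the modified graded mesh above can be generated; the function name and the switching logic are ours, and we assume (as in the text) that $K$ has been tuned so that $\bar{\sigma}$ falls (approximately) on a mesh point $t_J$.

```python
import numpy as np

def modified_graded_mesh(N, T, K, alpha):
    """Sketch of the modified graded mesh (4): graded with strength r = 2/alpha
    up to sigma_bar = (T/K)**(2/(alpha-2)), uniform afterwards."""
    sigma = (T / K) ** (2.0 / (alpha - 2.0))
    P = T**2 + T * sigma * (2.0 / alpha - 1.0)
    n = np.arange(N + 1)
    graded = (alpha * P * n / (2.0 * K * N)) ** (2.0 / alpha)
    uniform = (1.0 - 2.0 / alpha) * sigma + P * n / (T * N)
    t = np.where(graded < sigma, graded, uniform)   # switch at t_J ~ sigma_bar
    J = int(np.searchsorted(t, sigma))              # first node in the uniform part
    return t, J
```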
For $k = 0, 1, 2, \dots, n+1$ and $n = 0, 1, 2, \dots, N-1$, let $u_k$ denote the approximation of $u(t_k)$. Suppose the approximate values $u_0, u_1$ are known from other methods. For $n \ge 2$, we define the following predictor–corrector Adams method to solve Equation (3) for $\alpha \in (0, 2)$:
$$ u_{n+1}^{P} = \sum_{\nu=0}^{\lceil \alpha \rceil - 1} u_{0}^{(\nu)} \frac{t_{n+1}^{\nu}}{\nu!} + \frac{1}{\Gamma(\alpha)} \sum_{k=0}^{n} p_{k, n+1}\, g(t_k, u_k), $$
$$ u_{n+1} = \sum_{\nu=0}^{\lceil \alpha \rceil - 1} u_{0}^{(\nu)} \frac{t_{n+1}^{\nu}}{\nu!} + \frac{1}{\Gamma(\alpha)} \left[ \sum_{k=0}^{n} q_{k, n+1}\, g(t_k, u_k) + q_{n+1, n+1}\, g(t_{n+1}, u_{n+1}^{P}) \right]. $$
The predictor $u_{n+1}^{P}$ in (5) is obtained by approximating the integral, for $n \ge 2$,
$$ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} g(s, u(s))\, ds $$
by
$$ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} P_{0}(s)\, ds, $$
where $P_0(s)$ is the piecewise constant function defined on $[0, t_{n+1}]$ by
$$ P_{0}(s) = g(t_k, u(t_k)), \quad s \in [t_k, t_{k+1}], \quad k = 0, 1, 2, \dots, n. $$
The corrector $u_{n+1}$ in (6) is obtained by approximating the same integral
$$ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} g(s, u(s))\, ds $$
by
$$ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} P_{1}(s)\, ds, $$
where $P_1(s)$ is the piecewise linear function defined on $[0, t_{n+1}]$ by
$$ P_{1}(s) = \frac{s - t_{k+1}}{t_k - t_{k+1}}\, g(t_k, u(t_k)) + \frac{s - t_k}{t_{k+1} - t_k}\, g(t_{k+1}, u(t_{k+1})), \quad s \in [t_k, t_{k+1}], \quad k = 0, 1, 2, \dots, n. $$
Here, the weights $p_{k,n+1}$ in (5) for $k = 0, 1, 2, \dots, n$ are given in Appendix A.
The weights $q_{k,n+1}$ in (6) for $k = 0, 1, 2, \dots, n+1$ satisfy
$$ q_{k,n+1} = \begin{cases} \dfrac{1}{\alpha (t_1 - t_0)} \left[ t_{n+1}^{\alpha}\, t_1 + \dfrac{1}{\alpha + 1} \left( (t_{n+1} - t_1)^{\alpha + 1} - (t_{n+1} - t_0)^{\alpha + 1} \right) \right], & \text{if } k = 0, \\[2mm] \dfrac{(t_{n+1} - t_k)^{\alpha + 1} - (t_{n+1} - t_{k-1})^{\alpha + 1}}{\alpha (\alpha + 1)(t_{k-1} - t_k)} - \dfrac{(t_{n+1} - t_{k+1})^{\alpha + 1} - (t_{n+1} - t_k)^{\alpha + 1}}{\alpha (\alpha + 1)(t_k - t_{k+1})}, & \text{if } k = 1, 2, \dots, n, \\[2mm] \dfrac{1}{\alpha (\alpha + 1)} (t_{n+1} - t_n)^{\alpha}, & \text{if } k = n + 1. \end{cases} $$
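A minimal sketch of one possible implementation of the scheme (5)–(6) is given below (Python; the function name and calling convention are ours, not from the paper). The predictor weights are computed from the rectangle-rule integrals $p_{k,n+1} = \frac{1}{\alpha}\big[(t_{n+1}-t_k)^{\alpha} - (t_{n+1}-t_{k+1})^{\alpha}\big]$ (see the proof of Lemma 3), the corrector weights use the closed form (7) as reconstructed above, and the starting values are assumed to be supplied by another method.

```python
import numpy as np
from math import gamma, ceil

def adams_pc_solve(t, alpha, g, u0_derivs, u_start):
    """Predictor-corrector fractional Adams method (5)-(6) on the mesh t[0..N].

    t         : increasing array of nodes with t[0] = 0
    alpha     : fractional order, 0 < alpha < 2
    g         : right-hand side g(t, u)
    u0_derivs : initial values u_0^{(nu)}, nu = 0, ..., ceil(alpha) - 1
    u_start   : starting values u_0, u_1, u_2 obtained by another method
    """
    N = len(t) - 1
    m = int(ceil(alpha))
    u = np.zeros(N + 1)
    u[:3] = u_start                          # the loop below runs for n >= 2

    def taylor(s):                           # sum_nu u_0^{(nu)} s^nu / nu!
        return sum(u0_derivs[nu] * s**nu / gamma(nu + 1) for nu in range(m))

    for n in range(2, N):
        A = t[n + 1]
        gk = np.array([g(t[k], u[k]) for k in range(n + 1)])

        # predictor (rectangle-rule) weights p_{k,n+1}, k = 0,...,n
        p = ((A - t[:n + 1])**alpha - (A - t[1:n + 2])**alpha) / alpha
        uP = taylor(A) + (p @ gk) / gamma(alpha)

        # corrector (trapezoidal) weights q_{k,n+1}, k = 0,...,n+1, from (7)
        D = ((A - t[1:n + 2])**(alpha + 1)
             - (A - t[:n + 1])**(alpha + 1)) / (t[:n + 1] - t[1:n + 2])
        q = np.empty(n + 2)
        q[0] = (A - t[0])**alpha / alpha - D[0] / (alpha * (alpha + 1))
        q[1:n + 1] = (D[:n] - D[1:]) / (alpha * (alpha + 1))
        q[n + 1] = (A - t[n])**alpha / (alpha * (alpha + 1))

        u[n + 1] = taylor(A) + (q[:n + 1] @ gk + q[n + 1] * g(A, uP)) / gamma(alpha)
    return u
```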
Assumption 1
([26]). Let $0 < \sigma < 1$ and $f := {}_{0}^{C}D_{t}^{\alpha}u$ satisfy $f \in C^{2}(0, T]$ for $\alpha \in (0, 2)$. There exists a constant $c > 0$ such that
$$ |f'(t)| \le c\, t^{\sigma - 1}, \qquad |f''(t)| \le c\, t^{\sigma - 2}. $$
Remark 1.
Assumption 1 characterizes the local behavior of $f(t) := {}_{0}^{C}D_{t}^{\alpha}u$ near $t = 0$ and indicates that ${}_{0}^{C}D_{t}^{\alpha}u$ exhibits a singularity at this point. It is evident that $f \notin C^{2}[0, T]$. A simple example is $f(t) = t^{\sigma}$, where $0 < \sigma < 1$.
Our main results of this work are summarized in the following two theorems.
Theorem 1.
Suppose $0 < \alpha < 1$ and $f := {}_{0}^{C}D_{t}^{\alpha}u$ satisfies Assumption 1. Assume that $u(t_k)$ and $u_k$ are the solutions of Equations (3) and (6), respectively. Then, the following error estimates hold, with $n = 2, 3, \dots, N-1$.
1. If $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$, then we have
$$ \max_{0 \le k \le n+1} |u(t_k) - u_k| \le C N^{-(\alpha+\sigma)}. $$
2. If $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$, then we have
$$ \max_{0 \le k \le n+1} |u(t_k) - u_k| \le C N^{-(\alpha+\sigma)}. $$
3. If $t_J > t_{n+1}$, then we have
$$ \max_{0 \le k \le n+1} |u(t_k) - u_k| \le \begin{cases} C N^{-r(\alpha+\sigma)}, & \text{if } r(\alpha+\sigma) < 1+\alpha, \\ C N^{-(1+\alpha)} \ln N, & \text{if } r(\alpha+\sigma) = 1+\alpha, \\ C N^{-(1+\alpha)}, & \text{if } r(\alpha+\sigma) > 1+\alpha. \end{cases} $$
Theorem 2.
Suppose $1 < \alpha < 2$ and $f := {}_{0}^{C}D_{t}^{\alpha}u$ satisfies Assumption 1. Assume that $u(t_k)$ and $u_k$ are the solutions of Equations (3) and (6), respectively. Then, the following error estimates hold, with $n = 2, 3, \dots, N-1$.
1. If $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$, then we have
$$ \max_{0 \le k \le n+1} |u(t_k) - u_k| \le C N^{-(1+\alpha)}. $$
2. If $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$, then we have
$$ \max_{0 \le k \le n+1} |u(t_k) - u_k| \le C N^{-\min\{2,\, \alpha+\sigma,\, r(1+\sigma)\}}. $$
3. If $t_J > t_{n+1}$, then we have
$$ \max_{0 \le k \le n+1} |u(t_k) - u_k| \le \begin{cases} C N^{-r(1+\sigma)}, & \text{if } r(1+\sigma) < 2, \\ C N^{-2} \ln N, & \text{if } r(1+\sigma) = 2, \\ C N^{-2}, & \text{if } r(1+\sigma) > 2. \end{cases} $$
The structure of this paper is as follows. In Section 1, we introduce the predictor–corrector method on modified graded meshes for solving Equation (1). Section 2 presents several lemmas for the case 0 < α < 1 , and Section 3 discusses lemmas for the case 1 < α < 2 . In Section 4, we provide proofs of the theorems. Section 5 provides numerical examples demonstrating the consistency between the numerical results and the theoretical predictions.
Throughout the paper, the symbol C denotes a generic constant, which may vary across different occurrences but remains independent of the mesh size.

2. Some Lemmas for 0 < α < 1

Denote
$$ C_1 = \left( \frac{\alpha P}{2K} \right)^{\frac{2}{\alpha}}, \qquad C_2 = \left( 1 - \frac{2}{\alpha} \right) \bar{\sigma}, \qquad C_3 = \frac{P}{T}, $$
where $P$, $K$, $\bar{\sigma}$ are defined in (4). Then, $t_n$ in (4) can be rewritten as follows:
$$ t_n = \begin{cases} C_1 \left( \dfrac{n}{N} \right)^{r}, & n = 0, 1, \dots, J-1, \\[2mm] C_2 + C_3 \dfrac{n}{N}, & n = J, J+1, \dots, N, \end{cases} $$
where $r = \frac{2}{\alpha}$, $C_2 < 0$, $C_3 = T - C_2 > 0$, and $J$ is defined in (4).
Lemma 1.
There exists a positive constant $C_4 > 0$ such that
$$ t_n \ge C_4 \frac{n}{N}, \quad n = J, J+1, \dots, N, $$
where J is defined in (4).
Proof. 
Choose $C_4 > 0$ such that, noting $C_2 < 0$,
$$ 1 - \frac{T}{C_2} + \frac{C_4}{C_2} > 0. $$
Note that
$$ \frac{t_n}{n/N} = \frac{C_2 + C_3 \frac{n}{N}}{n/N} = C_3 + C_2 \frac{N}{n} = T - C_2 + \frac{N}{n} C_2 = C_2 \left( \frac{N}{n} - 1 \right) + T, $$
which implies that when $n > \dfrac{N}{1 - \frac{T}{C_2} + \frac{C_4}{C_2}}$, we have
$$ \frac{t_n}{n/N} > C_4. $$
Choose
$$ J_1 = \frac{N}{1 - \frac{T}{C_2} + \frac{C_4}{C_2}}, $$
and we see that when $n > J_1$,
$$ t_n \ge C_4 \frac{n}{N}. $$
Further, we have $J > J_1$. In fact, $t_J = C_2 + C_3 \frac{J}{N}$ implies that $J = \frac{(t_J - C_2)N}{C_3}$. Hence, with $\epsilon > 0$,
J J 1 ( t J C 2 ) N C 3 · C 2 T + ϵ C 2 N = ( t J C 2 ) N T C 2 · C 2 T + ϵ C 2 N = ( C 2 t J ) N T C 2 · T C 2 ϵ C 2 N C 2 t J C 2 1 .
Thus, for $t_n \ge t_J$, we obtain
$$ t_n \ge C_4 \frac{n}{N}. $$
The proof of Lemma 1 is complete. □
In the rest of the paper, we assume J > 2 .
Lemma 2.
Suppose $0 < \alpha < 1$ and $f := {}_{0}^{C}D_{t}^{\alpha}u$ satisfies Assumption 1. Let $n \ge 2$.
1. If $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$, then we have
$$ \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds \right| \le C N^{-(\alpha+\sigma)}. $$
2. If $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$, then we have
$$ \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds \right| \le C N^{-(\alpha+\sigma)}. $$
3. If $t_J > t_{n+1}$, then we have
$$ \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds \right| \le \begin{cases} C N^{-r(\alpha+\sigma)}, & \text{if } r(\alpha+\sigma) < 2, \\ C N^{-2} \ln N, & \text{if } r(\alpha+\sigma) = 2, \\ C N^{-2}, & \text{if } r(\alpha+\sigma) > 2. \end{cases} $$
In the above, $P_1(s)$ denotes the piecewise linear interpolant of $f(s)$ defined on each interval $[t_k, t_{k+1}]$ for $k = 0, 1, 2, \dots, n$,
$$ P_1(s) = \frac{s - t_{k+1}}{t_k - t_{k+1}}\, f(t_k) + \frac{s - t_k}{t_{k+1} - t_k}\, f(t_{k+1}), \quad s \in [t_k, t_{k+1}]. $$
Proof. 
For $n = 2, 3, \dots, N-1$, we decompose the integral into three parts,
$$ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds = \left( \int_{0}^{t_1} + \sum_{k=1}^{n-1} \int_{t_k}^{t_{k+1}} + \int_{t_n}^{t_{n+1}} \right) (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds =: H_1 + H_2 + H_3. $$
Using Assumption 1, we have
| H 1 | = 0 t 1 ( t n + 1 s ) α 1 f ( s ) s t 1 t 0 t 1 f ( t 0 ) s t 0 t 1 t 0 f ( t 1 ) d s = 0 t 1 ( t n + 1 s ) α 1 s t 1 t 1 0 s f ( τ ) d τ s t 1 s t 1 f ( τ ) d τ d s 0 t 1 ( t n + 1 s ) α 1 s t 1 t 1 C τ σ σ | 0 s s t 1 C τ σ σ | s t 1 d s C 0 t 1 ( t n + 1 s ) α 1 s σ d s + C 0 t 1 ( t n + 1 s ) α 1 t 1 σ d s .
If t J t n + 1 , since t n + 1 t J > t J 1 , we have
t n + 1 t 1 t n + 1 = 1 t 1 t n + 1 1 t 1 t J 1 = 1 C 1 ( 1 N ) r C 1 ( J 1 N ) r = 1 1 ( J 1 ) r 1 2 .
If t J > t n + 1 , since r > 1 and n 2 , we obtain
t n + 1 t 1 t n + 1 = 1 t 1 t n + 1 = 1 C 1 ( 1 N ) r C 1 ( n + 1 N ) r = 1 1 ( n + 1 ) r 1 1 2 r 1 2 .
Thus, there exists a constant C > 0 such that
t n + 1 t n + 1 t 1 C t n + 1 , n = 2 , 3 , , N 1 .
For 0 < α < 1 , we have
| H 1 | C ( t n + 1 t 1 ) α 1 0 t 1 t 1 σ d s + C ( t n + 1 t 1 ) α 1 ( t 1 ) σ + 1 C ( t n + 1 t 1 ) α 1 ( t 1 ) σ t 1 + C ( t n + 1 t 1 ) α 1 ( t 1 ) σ + 1 C ( t n + 1 t 1 ) α 1 ( t 1 ) σ + 1 C ( t n + 1 ) α 1 ( t 1 ) σ + 1 C ( t n ) α 1 ( t 1 ) σ + 1 .
When t J t n + 1 , by Lemma 1, we obtain
| H 1 | C [ C 2 + C 3 ( n N ) ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C [ C 4 ( n N ) ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C 1 n 1 α ( 1 N ) ( α 1 ) + r ( σ + 1 ) C N ( α 1 ) r ( σ + 1 ) .
When t J > t n + 1 , we get
| H 1 | C [ C 1 ( n N ) r ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C 1 n ( 1 α ) r ( 1 N ) r ( α + σ ) C N r ( α + σ ) .
For H 2 , we have, with k = 1 , 2 , , n 1 and n = 2 , 3 , , N 1 , by Assumption 1,
| H 2 | = k = 1 n 1 t k t k + 1 ( t n + 1 s ) α 1 f ( s ) s t k + 1 t k t k + 1 f ( t k ) s t k t k + 1 t k f ( t k + 1 ) d s .
There exist η k ( t k , t k + 1 ) , such that
| H 2 | = k = 1 n 1 t k t k + 1 ( t n + 1 s ) α 1 ( s t k ) ( s t k + 1 ) t k t k + 1 ( η 1 η 2 ) f ( η k ) d s C k = 1 n 1 t k t k + 1 ( t n + 1 s ) α 1 ( s t k ) ( s t k + 1 ) f ( η k ) d s C k = 1 n 1 ( t k + 1 t k ) 2 ( t k ) σ 2 t k t k + 1 ( t n + 1 s ) α 1 d s C k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 2 t k t k + 1 ( t n + 1 s ) α 1 d s + C k = n + 1 2 n 1 ( t k + 1 t k ) 2 ( t k ) σ 2 t k t k + 1 ( t n + 1 s ) α 1 d s = : H 21 + H 22 .
For H 21 , when 0 < α < 1 , we have
H 21 C k = 1 n + 1 2 1 ( t k + 1 t k ) 3 ( t k ) σ 2 ( t n + 1 t k + 1 ) α 1 .
Case 1.  t J t n + 1 2 1 . There holds
H 21 C k = 1 J 1 ( t k + 1 t k ) 3 ( t k ) σ 2 ( t n + 1 t k + 1 ) α 1 + C k = J n + 1 2 1 ( t k + 1 t k ) 3 ( t k ) σ 2 ( t n + 1 t k + 1 ) α 1 H 21 1 + H 21 2 .
For k = 1 , 2 , , J 1 , there exists η k [ k , k + 1 ] , such that
t k + 1 t k = C 1 ( k + 1 N ) r C 1 ( k N ) r = C 1 N r ( r η k r 1 ) C 1 N r r ( k + 1 ) r 1 C k r 1 N r .
For k = J , J + 1 , , n + 1 2 1 , we obtain
t k + 1 t k = C 2 + C 3 ( k + 1 N ) C 2 C 3 ( k N ) = C 3 ( k + 1 N ) C 3 ( k N ) = C 3 N 1 .
For k = 1 , 2 , , n + 1 2 1 , by Lemma 1, we get
( t n + 1 t k + 1 ) α 1 = t n + 1 t n + 1 2 α 1 t n + 1 α 1 t n α 1 C 2 + C 3 ( n N ) α 1 C 4 ( n N ) α 1 C N n 1 α .
Thus, by (12) and (14),
H 21 1 C k = 1 J 1 C k r 1 N r 3 C 1 k N r σ 2 C N n 1 α C k = 1 J 1 k r ( 1 + σ ) + α 4 N r ( 1 + σ ) + ( 1 α ) ( k n ) 1 α C k = 1 J 1 k r ( 1 + σ ) + α 4 N r ( 1 + σ ) + ( 1 α ) .
If r ( σ + 1 ) + α < 3 , we have
H 21 C N r ( 1 + σ ) + ( 1 α ) .
If r ( σ + 1 ) + α = 3 , we have
H 21 1 C k = 1 J 1 k 1 N 2 C N 2 k = 1 N k 1 C N 2 1 + 1 2 + + 1 N C N 2 1 N 1 x d x C N 2 ln N .
If r ( σ + 1 ) + α > 3 , we have
H 21 1 C k = 1 J 1 k r ( 1 + σ ) + α 4 N r ( 1 + σ ) + ( 1 α ) C k = 1 n k r ( 1 + σ ) + α 4 N r ( 1 + σ ) + ( 1 α ) C N r ( 1 + σ ) + ( 1 α ) ( 1 r ( 1 + σ ) + α 4 + 2 r ( 1 + σ ) + α 4 + + n r ( 1 + σ ) + α 4 ) C N r ( 1 + σ ) + ( 1 α ) 1 n x r ( 1 + σ ) + α 4 d x C N r ( 1 + σ ) + ( 1 α ) n r ( 1 + σ ) + α 3 C ( n N ) r ( 1 + σ ) + α 3 N 2 C N 2 .
Hence, we obtain, with 0 < α < 1 ,
H 21 1 C N r ( 1 + σ ) + ( 1 α ) , if r ( σ + 1 ) + α < 3 , C N 2 ln N , if r ( σ + 1 ) + α = 3 , C N 2 , if r ( σ + 1 ) + α > 3 .
For H 21 2 , by (13), (14), and Lemma 1, we arrive at
H 21 2 C k = J n + 1 2 1 C 3 N 1 3 C 2 + C 3 ( k N ) σ 2 C N n 1 α C k = J n + 1 2 1 C 3 N 1 3 C 4 ( k N ) σ 2 C N n 1 α C k = 1 n + 1 2 1 k σ + α 3 N ( α + σ ) ( k n ) 1 α C k = 1 n + 1 2 1 k σ + α 3 N ( α + σ ) C N ( α + σ ) , if α + σ < 2 , C N 2 ln N , if α + σ = 2 .
Case 2.  t n + 1 2 t J t n + 1 . For k = 1 , 2 , , n + 1 2 1 , there exists η k [ k , k + 1 ] , such that
t k + 1 t k = C 1 ( k + 1 N ) r C 1 ( k N ) r = C 1 N r ( r η k r 1 ) C 1 N r r ( k + 1 ) r 1 C k r 1 N r ,
and
( t n + 1 t k + 1 ) α 1 = t n + 1 t n + 1 2 α 1 t n + 1 α 1 t n α 1 C 2 + C 3 ( n N ) α 1 C 4 ( n N ) α 1 C N n 1 α .
Thus, by (15) and (16), we get
H 21 C k = 1 n + 1 2 1 C k r 1 N r 3 C 1 k N r σ 2 C N n 1 α C k = 1 n + 1 2 1 k r ( 1 + σ ) + α 4 N r ( 1 + σ ) + ( 1 α ) ( k n ) 1 α C k = 1 n + 1 2 1 k r ( 1 + σ ) + α 4 N r ( 1 + σ ) + ( 1 α ) C N r ( 1 + σ ) + ( 1 α ) , if r ( σ + 1 ) + α < 3 , C N 2 ln N , if r ( σ + 1 ) + α = 3 , C N 2 , if r ( σ + 1 ) + α > 3 .
Case 3.  t J > t n + 1 . For k = 1 , 2 , , n + 1 2 1 , there exists η k [ k , k + 1 ] , such that
t k + 1 t k = C 1 ( k + 1 N ) r C 1 ( k N ) r = C 1 N r ( r η k r 1 ) C 1 N r r ( k + 1 ) r 1 C k r 1 N r ,
and
( t n + 1 t k + 1 ) α 1 t n + 1 t n + 1 2 α 1 t n + 1 α 1 t n α 1 C 1 n N r α 1 C N n r ( 1 α ) .
Thus, by (17) and (18), we arrive at
H 21 C k = 1 n + 1 2 1 C k r 1 N r 3 C 1 ( k N ) r σ 2 C N n r ( 1 α ) C k = 1 n + 1 2 1 k r ( σ + α ) 3 N r ( σ + α ) ( k n ) r ( 1 α ) C k = 1 n + 1 2 1 k r ( σ + α ) 3 N r ( σ + α ) C N r ( σ + α ) , if r ( σ + α ) < 2 , C N 2 ln N , if r ( σ + α ) = 2 , C N 2 , if r ( σ + α ) > 2 .
Next, we consider H 22 with 0 < α < 2 .
Case 1.  t J t n + 1 2 1 . For k = n + 1 2 , n + 1 2 + 1 , , n 1 , we have
t k + 1 t k = C 2 + C 3 ( k + 1 N ) C 2 C 3 ( k N ) = C 3 N 1 .
By (19), Lemma 1, and noting that
( t k ) σ 2 = C 2 + C 3 ( k N ) σ 2 C 4 k N σ 2 = C 4 σ 2 N k 2 σ C 4 σ 2 C N n 2 σ C N n 2 σ ,
we obtain
H 22 C k = n + 1 2 n 1 ( C 3 N 1 ) 2 N n 2 σ t k t k + 1 ( t n + 1 s ) α 1 d s .
Note that
t n + 1 2 t n ( t n + 1 s ) α 1 d s = 1 α ( t n + 1 t n + 1 2 ) α ( t n + 1 t n ) α 1 α ( t n + 1 t n + 1 2 ) α 1 α ( t n + 1 ) α C 1 α ( t n ) α = C α C 2 + C 3 ( n N ) α C α C 3 ( n N ) α C n N α ,
We arrive at
H 22 C ( C 3 N 1 ) 2 N n 2 σ C n N α C n 2 + α + σ N ( σ + α ) C N ( σ + α ) , if σ + α < 2 , C N 2 , if σ + α 2 .
Case 2.  t n + 1 2 t J t n + 1 . We have
H 22 = C k = n + 1 2 J 1 ( t k + 1 t k ) 2 ( t k ) σ 2 t k t k + 1 ( t n + 1 s ) α 1 d s + C J n 1 ( t k + 1 t k ) 2 ( t k ) σ 2 t k t k + 1 ( t n + 1 s ) α 1 d s = : H 22 1 + H 22 2 .
We first consider H 22 1 . For k = n + 1 2 , n + 1 2 + 1 , , J 1 , we have
t k + 1 t k = C 1 ( k + 1 N ) r C 1 ( k N ) r C k r 1 N r ,
( t k ) σ 2 = C 1 ( k N ) r σ 2 = C 1 σ 2 k N r ( σ 2 ) C 1 σ 2 C n N r ( σ 2 ) C N n r ( 2 σ ) ,
and
t n + 1 2 t J ( t n + 1 s ) α 1 d s = 1 α ( t n + 1 t n + 1 2 ) α ( t n + 1 t J ) α 1 α ( t n + 1 t n + 1 2 ) α 1 α ( t n + 1 ) α C 1 α ( t n ) α = C α C 2 + C 3 ( n N ) α C α C 3 ( n N ) α C n N α .
By (21)–(23), we arrive at
H 22 1 C ( C k r 1 N r ) 2 N n r ( 2 σ ) C n N α C ( C n r 1 N r ) 2 N n r ( 2 σ ) C n N α C n 2 + α + r σ N ( r σ + α ) C N ( r σ + α ) , if r σ + α < 2 , C N 2 , if r σ + α 2 .
Now we turn to H 22 2 . For k = J , J + 1 , , n 1 , we have
t k + 1 t k = C 2 + C 3 ( k + 1 N ) C 2 C 3 ( k N ) = C 3 N 1 ,
and, by Lemma 1,
( t k ) σ 2 = C 2 + C 3 ( k N ) σ 2 C 4 k N σ 2 C 4 σ 2 C N n 2 σ C N n 2 σ ,
and
t J t n ( t n + 1 s ) α 1 d s = 1 α ( t n + 1 t J ) α ( t n + 1 t n ) α 1 α ( t n + 1 t J ) α 1 α ( t n + 1 ) α C 1 α ( t n ) α = C α C 2 + C 3 ( n N ) α C α C 3 ( n N ) α C n N α .
By (24)–(26), we have
H 22 2 C ( C 3 N 1 ) 2 N n 2 σ C n N α C n 2 + α + σ N ( σ + α ) C N ( σ + α ) , if σ + α < 2 , C N 2 , if σ + α 2 .
Case 3.  t J > t n + 1 . For k = n + 1 2 , n + 1 2 + 1 , , J 1 , we have
t k + 1 t k = C 1 ( k + 1 N ) r C 1 ( k N ) r C k r 1 N r ,
( t k ) σ 2 = C 1 ( k N ) r σ 2 = C 1 σ 2 k N r ( σ 2 ) C N n r ( 2 σ ) ,
and
t n + 1 2 t n ( t n + 1 s ) α 1 d s = 1 α ( t n + 1 t n + 1 2 ) α ( t n + 1 t n ) α 1 α ( t n + 1 t n + 1 2 ) α 1 α ( t n + 1 ) α C 1 α ( t n ) α = C α C 1 ( n N ) r α C n N r α .
By (27)–(29), we arrive at
H 22 C ( C k r 1 N r ) 2 N n r ( 2 σ ) C n N r α C ( C n r 1 N r ) 2 N n r ( 2 σ ) C n N r α C n 2 + r ( α + σ ) N r ( α + σ ) C N r ( α + σ ) , if r ( α + σ ) < 2 , C N 2 , if r ( α + σ ) 2 .
For H 3 , there exist η n ( t n , t n + 1 ) , n = 2 , 3 , , N 1 , such that
| H 3 | = t n t n + 1 ( t n + 1 s ) α 1 f ( s ) P 1 ( s ) d s t n t n + 1 ( t n + 1 s ) α 1 f ( η n ) ( s t n ) ( s t n + 1 ) d s .
Using Assumption 1, we have, with 0 < α < 2 ,
| H 3 | C ( t n + 1 t n ) 2 ( t n ) σ 2 t n t n + 1 ( t n + 1 s ) α 1 d s = C 1 α ( t n + 1 t n ) 2 ( t n ) σ 2 ( t n + 1 t n ) α C ( t n + 1 t n ) 2 + α ( t n ) σ 2 .
When t J t n + 1 , by (24) and Lemma 1, we obtain
| H 3 | C ( C 3 N 1 ) 2 + α ( C 2 + C 3 ( n N ) ) σ 2 C ( C 3 N 1 ) 2 + α ( C 4 ( n N ) ) σ 2 C n σ 2 N ( α + σ ) C N ( α + σ ) .
When t J > t n + 1 , by (17), we arrive at
| H 3 | C ( C n r 1 N r ) 2 + α ( C 1 ( n N ) r ) σ 2 C n r ( α + σ ) α 2 N r ( α + σ ) C N r ( α + σ ) , if r ( α + σ ) < 2 + α , C N ( 2 + α ) , if r ( α + σ ) 2 + α .
Thus, for t J t n + 1 2 1 , noting 0 < α < 1 and σ + α < 2 , we have the following cases.
If σ + α < 2 < r ( σ + 1 ) + α 1 , we have
| H | C N ( α 1 ) r ( σ + 1 ) + C N 2 + C N ( α + σ ) + C N ( α + σ ) + C N ( α + σ ) C N ( α + σ ) .
If r ( σ + 1 ) + α 1 < 2 , we obtain
| H | C N ( α 1 ) r ( σ + 1 ) + C N r ( σ + 1 ) + 1 α + C N ( α + σ ) + C N ( α + σ ) + C N ( α + σ ) C N ( α + σ ) .
The remaining cases can be considered similarly. □
The following Lemmas 3 and 4 hold for 0 < α < 2 .
Lemma 3.
Let $0 < \alpha < 2$ and $n \ge 2$. The weights $q_{k,n+1}$ and $p_{k,n+1}$ defined in (7) and (8), respectively, satisfy the following properties:
1. For all $k = 0, 1, 2, \dots, n+1$, we have
$$ q_{k,n+1} > 0. $$
2. For all $k = 0, 1, 2, \dots, n$, we have
$$ p_{k,n+1} > 0. $$
Proof. 
For $k = 0$, it holds that
$$ q_{0,n+1} = \int_{t_0}^{t_1} (t_{n+1}-s)^{\alpha-1}\, \frac{s - t_1}{t_0 - t_1}\, ds. $$
For $k = 1, 2, \dots, n$, it follows that
$$ q_{k,n+1} = \int_{t_{k-1}}^{t_k} (t_{n+1}-s)^{\alpha-1}\, \frac{s - t_{k-1}}{t_k - t_{k-1}}\, ds + \int_{t_k}^{t_{k+1}} (t_{n+1}-s)^{\alpha-1}\, \frac{s - t_{k+1}}{t_k - t_{k+1}}\, ds. $$
For $k = n+1$, we have
$$ q_{n+1,n+1} = \int_{t_n}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1}\, \frac{s - t_n}{t_{n+1} - t_n}\, ds. $$
Since each integrand is nonnegative on its interval of integration, we obtain $q_{k,n+1} > 0$.
Note that, with $k = 0, 1, 2, \dots, n$,
$$ p_{k,n+1} = \int_{t_k}^{t_{k+1}} (t_{n+1}-s)^{\alpha-1}\, ds. $$
Since $(t_{n+1}-s)^{\alpha-1}$ is positive over the integration interval, it follows that $p_{k,n+1} > 0$. □
Lemma 4.
Let $0 < \alpha < 2$. For $n = 2, 3, \dots, N-1$, we have
$$ q_{n+1,n+1} \le \begin{cases} C N^{-\alpha}, & \text{if } t_J \le t_{n+1}, \\ C n^{(r-1)\alpha} N^{-r\alpha}, & \text{if } t_J > t_{n+1}, \end{cases} $$
where $q_{n+1,n+1}$ is defined in (6).
Proof. 
By (7), we consider two cases:
When $t_J \le t_{n+1}$, we have
$$ q_{n+1,n+1} = \frac{(t_{n+1} - t_n)^{\alpha}}{\alpha(\alpha+1)} \le C N^{-\alpha}. $$
When $t_J > t_{n+1}$, we have
$$ q_{n+1,n+1} = \frac{(t_{n+1} - t_n)^{\alpha}}{\alpha(\alpha+1)} \le C N^{-r\alpha} n^{(r-1)\alpha}. $$
The proof of Lemma 4 is complete. □
Lemma 5.
Suppose $0 < \alpha < 1$ and $f := {}_{0}^{C}D_{t}^{\alpha}u$ satisfies Assumption 1. Let $n \ge 2$.
1. If $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$, then we have
$$ q_{n+1,n+1} \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds \right| \le \begin{cases} C N^{-(2\sigma+\alpha)}, & \text{if } \sigma+\alpha < 1, \\ C N^{-(1+\alpha)} \ln N, & \text{if } \sigma+\alpha = 1, \\ C N^{-(1+\alpha)}, & \text{if } \sigma+\alpha > 1. \end{cases} $$
2. If $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$, then we have
$$ q_{n+1,n+1} \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds \right| \le \begin{cases} C N^{-(2\sigma+\alpha)}, & \text{if } \sigma+\alpha < 1, \\ C N^{-(1+\alpha)}, & \text{if } \sigma+\alpha \ge 1. \end{cases} $$
3. If $t_J > t_{n+1}$, then we have
$$ q_{n+1,n+1} \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds \right| \le \begin{cases} C N^{-r(\alpha+\sigma)}, & \text{if } r(\alpha+\sigma) < 1+\alpha, \\ C N^{-r(\alpha+\sigma)} \ln N, & \text{if } r(\alpha+\sigma) = 1+\alpha, \\ C N^{-(1+\alpha)}, & \text{if } r(\alpha+\sigma) > 1+\alpha. \end{cases} $$
Here, $P_0(s)$ denotes the piecewise constant approximation of $f(s)$ defined on $[t_k, t_{k+1}]$ for $k = 0, 1, 2, \dots, n$,
$$ P_0(s) = f(t_k), \quad s \in [t_k, t_{k+1}]. $$
Proof. 
The proof is similar to the proof of Lemma 2. Write
$$ q_{n+1,n+1} \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds = q_{n+1,n+1} \left( \int_{0}^{t_1} + \sum_{k=1}^{n-1} \int_{t_k}^{t_{k+1}} + \int_{t_n}^{t_{n+1}} \right) (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds =: H_1 + H_2 + H_3. $$
For H 1 , by Lemma 3 and Assumption 1, we obtain
| H 1 | q n + 1 , n + 1 0 t 1 ( t n + 1 s ) α 1 | f ( s ) | d s + 0 t 1 ( t n + 1 s ) α 1 | P 0 ( s ) | d s q n + 1 , n + 1 0 t 1 ( t n + 1 s ) α 1 s σ d s + 0 t 1 ( t n + 1 s ) α 1 0 σ d s = q n + 1 , n + 1 0 t 1 ( t n + 1 s ) α 1 s σ d s .
By (9), it follows that
| H 1 | q n + 1 , n + 1 ( t n + 1 t 1 ) α 1 ( t 1 ) σ + 1 C q n + 1 , n + 1 ( t n + 1 ) α 1 ( t 1 ) σ + 1 C q n + 1 , n + 1 ( t n ) α 1 ( t 1 ) σ + 1 .
For t J t n + 1 , by Lemma 4 and (10), we have
| H 1 | C C N α C N ( α 1 ) r ( σ + 1 ) C N ( 2 α 1 ) r ( σ + 1 ) .
For t J > t n + 1 , by Lemma 4 and (11), we have
| H 1 | C C N r α n ( r 1 ) α C N r ( α + σ ) = C n N r α n α C N r ( α + σ ) C N r ( α + σ ) .
For H 2 , with η k ( t k , t k + 1 ) , where k = 1 , 2 , , n 1 , we have
| H 2 | q n + 1 , n + 1 k = 1 n 1 t k t k + 1 ( t n + 1 s ) α 1 | f ( η k ) | ( s t k ) d s .
By Assumption 1, we get,
| H 2 | C q n + 1 , n + 1 k = 1 n + 1 2 1 + k = n + 1 2 n 1 ( t k + 1 t k ) ( t k ) σ 1 t k t k + 1 ( t n + 1 s ) α 1 d s = : H 21 + H 22 .
For H 21 , we consider the following three cases:
Case 1.  t J t n + 1 2 1 . We have
H 21 = C q n + 1 , n + 1 k = 1 J 1 ( t k + 1 t k ) ( t k ) σ 1 t k t k + 1 ( t n + 1 s ) α 1 d s + C q n + 1 , n + 1 k = J n + 1 2 1 ( t k + 1 t k ) ( t k ) σ 1 t k t k + 1 ( t n + 1 s ) α 1 d s = : H 21 1 + H 21 2 .
By Lemma 4, (12), (14), and r > 2 , we have
H 21 1 C q n + 1 , n + 1 k = 1 J 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n + 1 t k + 1 ) α 1 C N α k = 1 J 1 C k r 1 N r 2 C 1 k N r σ 1 N n 1 α C N 2 α r ( 1 + σ ) + 1 k = 1 J 1 k r ( 1 + σ ) + α 3 k n 1 α C N 2 α r ( 1 + σ ) + 1 k = 1 J 1 k r ( 1 + σ ) + α 3 C N 2 α r ( 1 + σ ) + 1 k = 1 n k r ( 1 + σ ) + α 3 C N 2 α r ( 1 + σ ) + 1 1 n x r ( 1 + σ ) + α 3 d x C N ( 1 + α ) .
By Lemma 4, (13), (14), and Lemma 1, we have
H 21 2 C q n + 1 , n + 1 k = J n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n + 1 t k + 1 ) α 1 C N α k = 1 n + 1 2 1 C 3 N 1 2 C 2 + C 3 ( k N ) σ 1 N n 1 α C N α k = 1 n + 1 2 1 C 3 N 1 2 C 4 ( k N ) σ 1 N n 1 α C N ( 2 α + σ ) k = 1 n + 1 2 1 k α + σ 2 k n 1 α C N ( 2 σ + α ) , if σ + α < 1 , C N ( 1 + α ) ln N , if σ + α = 1 , C N ( 1 + α ) , if σ + α > 1 .
Case 2.  t n + 1 2 t J t n + 1 . By Lemma 4, (15), (16), and r > 2 , we have
H 21 C q n + 1 , n + 1 k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n + 1 t k + 1 ) α 1 C N α k = 1 n + 1 2 1 C k r 1 N r 2 C 1 k N r σ 1 N n 1 α C N 2 α r ( 1 + σ ) + 1 k = 1 n + 1 2 1 k r ( 1 + σ ) + α 3 k n 1 α C N 2 α r ( 1 + σ ) + 1 k = 1 n + 1 2 1 k r ( 1 + σ ) + α 3 C N 2 α r ( 1 + σ ) + 1 k = 1 n k r ( 1 + σ ) + α 3 C N 2 α r ( 1 + σ ) + 1 1 n x r ( 1 + σ ) + α 3 d x C N ( 1 + α ) .
Case 3.  t J > t n + 1 . By Lemma 4, (17), and (18), we have
H 21 C q n + 1 , n + 1 k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n + 1 t k + 1 ) α 1 C n ( r 1 ) α N r α k = 1 n + 1 2 1 C k r 1 N r 2 C 1 k N r σ 1 N n r ( 1 α ) C n N r α k n α k = 1 n + 1 2 1 N r ( α + σ ) k r ( α + σ ) α 2 k n r ( 1 α ) C k = 1 n + 1 2 1 N r ( α + σ ) k r ( α + σ ) α 2 C N r ( α + σ ) , if r ( α + σ ) < 1 + α , C N ( 1 + α ) ln N , if r ( α + σ ) = 1 + α , C N ( 1 + α ) , if r ( α + σ ) > 1 + α .
For H 22 , we have
H 22 = C q n + 1 , n + 1 k = n + 1 2 n 1 ( t k + 1 t k ) ( t k ) σ 1 t k t k + 1 ( t n + 1 s ) α 1 d s .
Case 1.  t J t n + 1 2 1 . By Lemma 4, (19), and for n + 1 2 k n 1 (with n 2 ), and noting that
( t k ) σ 1 = C 2 + C 3 k N σ 1 C 4 k N σ 1 C 4 σ 1 N k ( 1 σ ) C N n 1 σ ,
we have
H 22 C N α k = n + 1 2 n 1 ( t k + 1 t k ) ( t k ) σ 1 t k t k + 1 ( t n + 1 s ) α 1 d s C N α k = n + 1 2 n 1 C 3 N 1 C N n 1 σ t k t k + 1 ( t n + 1 s ) α 1 d s .
Thus, with n 2 and 0 < α < 2 , by (20), we get
H 22 C N α k = n + 1 2 n 1 C 3 N 1 C N n 1 σ C n N α = C n σ 1 + α N 2 α σ C N 2 α σ , if σ + α < 1 , C N 1 α , if σ + α 1 .
Case 2.  t n + 1 2 t J t n + 1 . We have
H 22 q n + 1 , n + 1 k = n + 1 2 J 1 + k = J n 1 ( t k + 1 t k ) ( t k ) σ 1 t k t k + 1 ( t n + 1 s ) α 1 d s = : H 22 1 + H 22 2 .
For n + 1 2 k J 1 (with n 2 ), we have
( t k ) σ 1 = C 1 ( k N ) r σ 1 = C 1 σ 1 k N r ( σ 1 ) C N n r ( 1 σ ) .
Thus, By Lemma 4, (21), (31), and (23), with n 2 and 0 < α < 2 , we get
H 22 1 C N α ( C k r 1 N r ) C N n r ( 1 σ ) C n N α C N α ( C n r 1 N r ) C N n r ( 1 σ ) C n N α = C n r σ 1 + α N 2 α r σ C N 2 α r σ , if r σ + α < 1 , C N 1 α , if r σ + α 1 .
For J k n 1 (with n 2 ), by Lemma 1, we have
( t k ) σ 1 = C 2 + C 3 k N σ 1 C 4 k N σ 1 C 4 σ 1 N k ( 1 σ ) C N n 1 σ .
Thus, By Lemma 4, (24), (32), and (26), with n 2 and 0 < α < 2 , we get
H 22 2 C N α C 3 N 1 C N n 1 σ C n N α = C n σ 1 + α N 2 α σ C N 2 α σ , if σ + α < 1 , C N 1 α , if σ + α 1 .
Case 3.  t J > t n + 1 . For n + 1 2 k n 1 (with n 2 ), we have
( t k ) σ 1 = C 1 ( k N ) r σ 1 = C 1 σ 1 ( k N ) r σ 1 C 1 σ 1 C ( n N ) r σ 1 C N n r ( 1 σ ) .
By Lemma 4, (27), (29), and (33), with n 2 and 0 < α < 2 , we get
H 22 C n ( r 1 ) α N r α ( C k r 1 N r ) C N n r ( 1 σ ) C n N r α C n ( r 1 ) α N r α ( C n r 1 N r ) C N n r ( 1 σ ) C n N r α = C n r ( σ + α ) α 1 N r ( σ + α ) C N r ( σ + α ) , if r ( σ + α ) α < 1 , C N ( 1 + α ) , if r ( σ + α ) α 1 .
For H 3 , for 0 < α < 2 , by Assumption 1, there exists η n ( t n , t n + 1 ) , such that
| H 3 | = q n + 1 , n + 1 t n t n + 1 ( t n + 1 s ) α 1 f ( s ) P 0 ( s ) d s = q n + 1 , n + 1 t n t n + 1 ( t n + 1 s ) α 1 f ( s ) f ( t n ) d s = q n + 1 , n + 1 t n t n + 1 ( t n + 1 s ) α 1 f ( η n ) ( s t n ) d s q n + 1 , n + 1 ( t n + 1 t n ) α C ( t n ) σ 1 ( t n + 1 t n ) C q n + 1 , n + 1 ( t n + 1 t n ) 1 + α ( t n ) σ 1 .
When t J t n + 1 , by Lemma 4, Lemma 1, and 0 < σ < 1 , we have
| H 3 | C N α C 3 N 1 1 + α C 2 + C 3 n N σ 1 C N α C 3 N 1 1 + α C 4 n N σ 1 C n σ 1 N 2 α σ C N 2 α σ .
When t J > t n + 1 , by Lemma 4, we have
| H 3 | C n ( r 1 ) α N r α ( C n r 1 N r ) 1 + α ( C 1 ( n N ) r ) σ 1 C ( n N ) r α n α n r ( α + σ ) α 1 N r ( α + σ ) C N r ( α + σ ) , if r ( α + σ ) < 1 + α , C N ( 1 + α ) , if r ( α + σ ) 1 + α .
Thus, when 0 < α < 1 , for t J t n + 1 2 1 , if σ + α < 1 , we have
| q n + 1 , n + 1 0 t n + 1 ( t n + 1 s ) α 1 f ( s ) P 0 ( s ) d s | C N ( 2 α 1 ) r ( σ + 1 ) + C N ( 1 + α ) + C N ( 2 α + σ ) + C N ( 2 α + σ ) + C N ( 2 α + σ ) C N ( 2 α + σ ) .
If σ + α = 1 , we have
| q n + 1 , n + 1 0 t n + 1 ( t n + 1 s ) α 1 f ( s ) P 0 ( s ) d s | C N ( 2 α 1 ) r ( σ + 1 ) + C N ( 1 + α ) + C N ( 1 + α ) ln N + C N ( 1 + α ) + C N ( 2 α + σ ) C N ( 1 + α ) ln N .
If σ + α > 1 , we have
| q n + 1 , n + 1 0 t n + 1 ( t n + 1 s ) α 1 f ( s ) P 0 ( s ) d s | C N ( 2 α 1 ) r ( σ + 1 ) + C N ( 1 + α ) + C N ( 1 + α ) + C N ( 1 + α ) + C N ( 2 α + σ ) C N ( 1 + α ) .
The remaining cases can be proven similarly. This completes the proof of Lemma 5. □
We remark that, in the proof of Lemma 5, some inequalities hold for 0 < α < 2 . The following Lemma 6 holds for 0 < α < 2 .
Lemma 6.
Let $0 < \alpha < 2$. Then there exists a constant $C > 0$ such that the following inequalities hold,
$$ \sum_{k=0}^{n} p_{k,n+1} \le C T^{\alpha}, $$
$$ \sum_{k=0}^{n} q_{k,n+1} \le C T^{\alpha}, $$
where $p_{k,n+1}$ and $q_{k,n+1}$ are the weights defined by (5) and (6), for $k = 0, 1, 2, \dots, n$.
Proof. 
We will prove inequality (35); the proof of (34) follows analogously. Write
$$ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} f(s)\, ds = \sum_{k=0}^{n+1} q_{k,n+1}\, f(t_k) + R_n, $$
where $R_n$ denotes the remainder term. Setting $f(s) \equiv 1$ in the integral (for which the piecewise linear interpolation is exact, so $R_n = 0$), we have
$$ \sum_{k=0}^{n+1} q_{k,n+1} = \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1}\, ds = \frac{1}{\alpha}\, t_{n+1}^{\alpha} \le C T^{\alpha}. $$
From Lemma 3, $q_{n+1,n+1} > 0$. Therefore, inequality (35) holds. □
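As a quick numerical sanity check of the identity $\sum_{k=0}^{n+1} q_{k,n+1} = \frac{1}{\alpha} t_{n+1}^{\alpha}$ used above, the reconstructed closed-form weights of (7) can be summed on an arbitrary mesh; the snippet below is a sketch under the same assumptions as the solver sketch in Section 1.

```python
import numpy as np

alpha, n = 0.6, 7
t = np.sort(np.random.rand(n + 2))
t[0] = 0.0                                   # arbitrary increasing mesh with t_0 = 0
A = t[n + 1]

# D_k = ((A-t_k)^{a+1} - (A-t_{k-1})^{a+1}) / (t_{k-1} - t_k), k = 1,...,n+1
D = ((A - t[1:])**(alpha + 1) - (A - t[:-1])**(alpha + 1)) / (t[:-1] - t[1:])
q = np.empty(n + 2)
q[0] = (A - t[0])**alpha / alpha - D[0] / (alpha * (alpha + 1))
q[1:n + 1] = (D[:n] - D[1:]) / (alpha * (alpha + 1))
q[n + 1] = (A - t[n])**alpha / (alpha * (alpha + 1))

assert abs(q.sum() - A**alpha / alpha) < 1e-12   # the sum telescopes to t_{n+1}^alpha / alpha
```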

3. Some Lemmas for the Case 1 < α < 2

Lemma 7.
Suppose $1 < \alpha < 2$ and $f := {}_{0}^{C}D_{t}^{\alpha}u$ satisfies Assumption 1. Let $n \ge 2$.
1. If $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$, then we have
$$ \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds \right| \le C N^{-(1+\sigma)}. $$
2. If $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$, then we have
$$ \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds \right| \le C N^{-\min\{\alpha+\sigma,\, r(1+\sigma),\, 2\}}. $$
3. If $t_J > t_{n+1}$, then we have
$$ \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds \right| \le \begin{cases} C N^{-r(1+\sigma)}, & \text{if } r(1+\sigma) < 2, \\ C N^{-2} \ln N, & \text{if } r(1+\sigma) = 2, \\ C N^{-2}, & \text{if } r(1+\sigma) > 2. \end{cases} $$
Proof. 
For $n = 2, 3, \dots, N-1$, we decompose the integral into three parts,
$$ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds = \left( \int_{0}^{t_1} + \sum_{k=1}^{n-1} \int_{t_k}^{t_{k+1}} + \int_{t_n}^{t_{n+1}} \right) (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_1(s) \big)\, ds =: H_1 + H_2 + H_3. $$
If 1 < α < 2 , we have
| H 1 | C ( t n + 1 t 1 ) α 1 0 t 1 s σ d s + C ( t n + 1 t 1 ) α 1 ( t 1 ) σ + 1 C ( t n + 1 ) α 1 0 t 1 s σ d s + C ( t n + 1 ) α 1 ( t 1 ) σ + 1 C ( t n + 1 ) α 1 0 t 1 t 1 σ d s + C ( t n + 1 ) α 1 ( t 1 ) σ + 1 C ( t n + 1 ) α 1 ( t 1 ) σ t 1 + C ( t n + 1 ) α 1 ( t 1 ) σ + 1 C ( t n + 1 ) α 1 ( t 1 ) σ + 1 C ( t n ) α 1 ( t 1 ) σ + 1 .
When t J t n + 1 , since C 2 < 0 , we obtain
| H 1 | C [ C 2 + C 3 ( n N ) ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C [ C 3 ( n N ) ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C ( n N ) α 1 ( 1 N ) r ( σ + 1 ) C N r ( 1 + σ ) .
When t J > t n + 1 , we have
| H 1 | C [ C 1 ( n N ) r ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C ( n N ) r ( α 1 ) ( 1 N ) r ( σ + 1 ) C N r ( 1 + σ ) .
For H 2 , by Lemma 2, we have
| H 2 | C k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 2 t k t k + 1 ( t n + 1 s ) α 1 d s + C k = n + 1 2 n 1 ( t k + 1 t k ) 2 ( t k ) σ 2 t k t k + 1 ( t n + 1 s ) α 1 d s = : H 21 + H 22 .
For 1 < α < 2 , we only consider H 21 , as H 22 has already been discussed in Lemma 2.
H 21 C k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 2 ( t n + 1 t k + 1 ) α 1 ( t k + 1 t k ) C k = 1 n + 1 2 1 ( t k + 1 t k ) 3 ( t k ) σ 2 ( t n + 1 ) α 1 .
Case 1.  t J t n + 1 2 1 . We have
H 21 C k = 1 J 1 ( t k + 1 t k ) 3 ( t k ) σ 2 ( t n + 1 ) α 1 + C k = J n + 1 2 1 ( t k + 1 t k ) 3 ( t k ) σ 2 ( t n + 1 ) α 1 = : H 21 1 + H 21 2 .
By (12) and C 2 < 0 , we have
H 21 1 C k = 1 J 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 ( C 2 + C 3 ( n + 1 N ) ) α 1 C k = 1 J 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 ( C 3 ( n + 1 N ) ) α 1 C k = 1 J 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 ( C 3 ( n N ) ) α 1 C k = 1 J 1 k r ( 1 + σ ) 3 N r ( 1 + σ ) C N r ( 1 + σ ) , if r ( 1 + σ ) < 2 , C N 2 ln N , if r ( 1 + σ ) = 2 , C N 2 , if r ( 1 + σ ) > 2 .
By (13), Lemma 1, and C 2 < 0 , we have
H 21 2 C k = J n + 1 2 1 ( C 3 N 1 ) 3 ( C 2 + C 3 ( k N ) ) σ 2 ( C 2 + C 3 ( n + 1 N ) ) α 1 C k = J n + 1 2 1 ( C 3 N 1 ) 3 ( C 4 ( k N ) ) σ 2 ( C 3 ( n + 1 N ) ) α 1 C k = J n + 1 2 1 ( C 3 N 1 ) 3 ( C 4 ( k N ) ) σ 2 ( C 3 ( n N ) ) α 1 C N ( 1 + σ ) k = 1 n + 1 2 1 k σ 2 C N ( 1 + σ ) .
Case 2.  t n + 1 2 t J t n + 1 . By (15) and C 2 < 0 , we have
H 21 C k = 1 n + 1 2 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 ( C 2 + C 3 ( n + 1 N ) ) α 1 C k = 1 n + 1 2 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 ( C 3 ( n + 1 N ) ) α 1 C k = 1 n + 1 2 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 ( C 3 ( n N ) ) α 1 C k = 1 n + 1 2 1 k r ( 1 + σ ) 3 N r ( 1 + σ ) C N r ( 1 + σ ) , if r ( 1 + σ ) < 2 , C N 2 ln N , if r ( 1 + σ ) = 2 , C N 2 , if r ( 1 + σ ) > 2 .
Case 3.  t J > t n + 1 . By (17), we have
H 21 C k = 1 n + 1 2 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 ( C 1 ( n + 1 N ) r ) α 1 C k = 1 n + 1 2 1 ( C k r 1 N r ) 3 ( C 1 ( k N ) r ) σ 2 C k = 1 n + 1 2 1 k r ( 1 + σ ) 3 N r ( 1 + σ ) C N r ( 1 + σ ) , if r ( 1 + σ ) < 2 , C N 2 ln N , if r ( 1 + σ ) = 2 , C N 2 , if r ( 1 + σ ) > 2 .
The cases of H 22 and H 3 have also been discussed in Lemma 2.
For t J t n + 1 2 1 , if r ( 1 + σ ) < α + σ , when 2 < r ( 1 + σ ) , we have
0 t n + 1 ( t n + 1 s ) α 1 f ( s ) P 1 ( s ) d s C N r ( 1 + σ ) + C N 2 + C N 2 + C N ( 1 + σ ) + C N ( α + σ ) C N ( 1 + α ) .
Similarly, the cases r ( 1 + σ ) < 2 < σ + α and α + σ < 2 can be considered. Other cases can also be considered in the same way. □
Lemma 8.
Suppose $1 < \alpha < 2$ and $f := {}_{0}^{C}D_{t}^{\alpha}u$ satisfies Assumption 1. Let $n \ge 2$.
1. If $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$, then we have
$$ q_{n+1,n+1} \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds \right| \le C N^{-(1+\alpha)}. $$
2. If $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$, then we have
$$ q_{n+1,n+1} \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds \right| \le C N^{-(1+\alpha)}. $$
3. If $t_J > t_{n+1}$, then we have
$$ q_{n+1,n+1} \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds \right| \le \begin{cases} C N^{-r(\sigma+\alpha)}, & \text{if } r(\sigma+\alpha) < 1+\alpha, \\ C N^{-(1+\alpha)}, & \text{if } r(\sigma+\alpha) \ge 1+\alpha. \end{cases} $$
Proof. 
The proof is similar to the proof of Lemma 7. Note that
$$ q_{n+1,n+1} \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds = q_{n+1,n+1} \left( \int_{0}^{t_1} + \sum_{k=1}^{n-1} \int_{t_k}^{t_{k+1}} + \int_{t_n}^{t_{n+1}} \right) (t_{n+1}-s)^{\alpha-1} \big( f(s) - P_0(s) \big)\, ds =: H_1 + H_2 + H_3. $$
By Lemma 5, we get
| H 1 | q n + 1 , n + 1 0 t 1 ( t n + 1 s ) α 1 s σ d s q n + 1 , n + 1 t n α 1 t 1 σ + 1 .
When t J t n + 1 , by Lemma 4, we have
| H 1 | C N α [ C 2 + C 3 ( n N ) ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C N α [ C 3 ( n N ) ] α 1 [ C 1 ( 1 N ) r ] σ + 1 C N r ( 1 + σ ) α .
When t J > t n + 1 , by Lemma 4, we have
| H 1 | C N r α n ( r 1 ) α ( C 1 ( n N ) r ) α 1 ( C 1 ( 1 N ) r ) σ + 1 C n N ( r 1 ) α N α N r ( 1 + σ ) C N r ( 1 + σ ) α .
For H 2 , by Lemma 5, with η k ( t k , t k + 1 ) , where k = 1 , 2 , , n 1 , we have
| H 2 | q n + 1 , n + 1 k = 1 n 1 t k t k + 1 ( t n + 1 s ) α 1 | f ( η k ) | ( s t k ) d s .
By Assumption 1, we get
| H 2 | C q n + 1 , n + 1 k = 1 n 1 ( t k + 1 t k ) ( t k ) σ 1 t k t k + 1 ( t n + 1 s ) α 1 d s C q n + 1 , n + 1 k = 1 n 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n + 1 ) α 1 C q n + 1 , n + 1 k = 1 n 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 C q n + 1 , n + 1 k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 + C q n + 1 , n + 1 k = n + 1 2 n 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 = : H 21 + H 22 .
Case 1.  t J t n + 1 2 1 . We have
H 21 C q n + 1 , n + 1 k = 1 J 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 + C q n + 1 , n + 1 k = J n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 .
By Lemma 4 and (12), we have
H 21 1 C N α k = 1 J 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 C N α k = 1 J 1 C k r 1 N r 2 C 1 k N r σ 1 C 2 + C 3 n N α 1 C N α k = 1 J 1 C k r 1 N r 2 k N r ( σ 1 ) n N α 1 C N α N 2 r r ( σ 1 ) k = 1 J 1 k 2 ( r 1 ) + r σ r n N α 1 C N α r r σ k = 1 n k r + r σ 2 C N α r r σ 1 n x r + r σ 2 d x C N α r r σ n r + r σ 1 C n N r + r σ 1 N 1 α C N ( 1 + α ) .
By Lemma 4, (13), and Lemma 1, we have
H 21 2 C N α k = J n + 1 2 1 C 3 N 1 2 C 2 + C 3 k N σ 1 C 2 + C 3 n N α 1 C N α k = J n + 1 2 1 C 3 N 1 2 C 4 k N σ 1 C 3 n N α 1 C N ( α + σ ) 1 k = J n + 1 2 1 k σ 1 C N ( α + σ ) 1 k = 1 n k σ 1 C N ( 1 + α ) .
Case 2.  t n + 1 2 t J t n + 1 . By Lemma 4 and (15), we have
H 21 1 C N α k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 C N α k = 1 n + 1 2 1 C k r 1 N r 2 C 1 k N r σ 1 C 2 + C 3 n N α 1 C N α k = 1 n + 1 2 1 C k r 1 N r 2 k N r ( σ 1 ) n N α 1 C N α N 2 r r ( σ 1 ) k = 1 n + 1 2 1 k 2 ( r 1 ) + r σ r n N α 1 C N α r r σ k = 1 n k r + r σ 2 C N α r r σ 1 n x r + r σ 2 d x C N α r r σ n r + r σ 1 C n N r + r σ 1 N 1 α C N ( 1 + α ) .
Case 3.  t J > t n + 1 . By Lemma 4 and (17), we have
H 21 1 C N r α n ( r 1 ) α k = 1 n + 1 2 1 ( t k + 1 t k ) 2 ( t k ) σ 1 ( t n ) α 1 C N r α n ( r 1 ) α k = 1 n + 1 2 1 k r 1 N r 2 C 1 k N r σ 1 C 1 n N r α 1 C n N ( r 1 ) α N α k = 1 n + 1 2 1 k r 1 N r 2 k N r ( σ 1 ) C N r ( 1 + σ ) α k = 1 n k r ( 1 + σ ) 2 C N r ( 1 + σ ) α 1 n x r ( 1 + σ ) 2 d x C N r ( 1 + σ ) α n r ( 1 + σ ) 1 C n N r + r σ 1 N 1 α C N ( 1 + α ) .
The cases of H 21 2 and H 3 have also been discussed in Lemma 5.
Thus, for t J t n + 1 2 1 , since α > 1 , σ + α 1 , we have
q n + 1 , n + 1 0 t n + 1 ( t n + 1 s ) α 1 f ( s ) P 0 ( s ) d s C N ( α + r ( 1 + σ ) ) + C N ( 1 + α ) + C N ( 1 + α ) + C N ( 1 + α ) + C N ( 2 α + σ ) C N ( 1 + α ) .
The remaining cases can be considered similarly. □

4. Proofs of Theorems 1 and 2

We first prove Theorem 1.
Proof of Theorem 1.
Subtracting (6) from (3), we get
$$ u(t_{n+1}) - u_{n+1} = \frac{1}{\Gamma(\alpha)} \left\{ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( g(s, u(s)) - P_1(s) \big)\, ds + \sum_{k=0}^{n} q_{k,n+1} \big( g(t_k, u(t_k)) - g(t_k, u_k) \big) + q_{n+1,n+1} \big( g(t_{n+1}, u(t_{n+1})) - g(t_{n+1}, u_{n+1}^{P}) \big) \right\} =: \frac{1}{\Gamma(\alpha)} (H_1 + H_2 + H_3). $$
The first term, H 1 , can be estimated using Lemma 2. For the second term, H 2 , by Lemma 3 and the Lipschitz condition of g, we have
$$ |H_2| = \left| \sum_{k=0}^{n} q_{k,n+1} \big( g(t_k, u(t_k)) - g(t_k, u_k) \big) \right| \le \sum_{k=0}^{n} q_{k,n+1} \big| g(t_k, u(t_k)) - g(t_k, u_k) \big| \le L \sum_{k=0}^{n} q_{k,n+1}\, |u(t_k) - u_k|. $$
For the third term, H 3 , applying Lemma 3 and the Lipschitz condition of g, we have
$$ |H_3| = q_{n+1,n+1} \big| g(t_{n+1}, u(t_{n+1})) - g(t_{n+1}, u_{n+1}^{P}) \big| \le q_{n+1,n+1}\, L\, |u(t_{n+1}) - u_{n+1}^{P}|. $$
Note that
$$ u(t_{n+1}) - u_{n+1}^{P} = \frac{1}{\Gamma(\alpha)} \left[ \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( g(s, u(s)) - P_0(s) \big)\, ds + \sum_{k=0}^{n} p_{k,n+1} \big( g(t_k, u(t_k)) - g(t_k, u_k) \big) \right]. $$
We obtain
$$ |H_3| \le C\, q_{n+1,n+1} \left| \int_{0}^{t_{n+1}} (t_{n+1}-s)^{\alpha-1} \big( g(s, u(s)) - P_0(s) \big)\, ds \right| + C\, q_{n+1,n+1} \left| \sum_{k=0}^{n} p_{k,n+1} \big( g(t_k, u(t_k)) - g(t_k, u_k) \big) \right| =: H_3^{1} + H_3^{2}. $$
The term H 3 1 can be estimated using Lemma 5. For H 3 2 , applying Lemmas 3 and 4, we have
$$ H_3^{2} \le C\, q_{n+1,n+1} \sum_{k=0}^{n} p_{k,n+1}\, L\, |u(t_k) - u_k| \le C N^{-r\alpha} n^{(r-1)\alpha} \sum_{k=0}^{n} p_{k,n+1}\, |u(t_k) - u_k| \le C \left( \frac{n}{N} \right)^{(r-1)\alpha} N^{-\alpha} \sum_{k=0}^{n} p_{k,n+1}\, |u(t_k) - u_k| \le C N^{-\alpha} \sum_{k=0}^{n} p_{k,n+1}\, |u(t_k) - u_k|. $$
Combining the estimates of H 1 , H 2 , H 3 1 , and H 3 2 , we obtain
$$ |u(t_{n+1}) - u_{n+1}| \le C |H_1| + C \sum_{k=0}^{n} q_{k,n+1}\, |u(t_k) - u_k| + C |H_3^{1}| + C N^{-\alpha} \sum_{k=0}^{n} p_{k,n+1}\, |u(t_k) - u_k|. $$
Next, we prove the theorem using mathematical induction. We begin by considering the case 0 < α < 1 .
Case 1.  $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$. We discuss the case $\sigma + \alpha < 1$. Suppose there exists a constant $C_0 > 0$ such that, for $k = 0, 1, 2, \dots, n$ and $n = 0, 1, 2, \dots, N-1$, the following inequality holds:
$$ |u(t_k) - u_k| \le C_0 N^{-(\alpha+\sigma)}. $$
We aim to prove that
$$ |u(t_{n+1}) - u_{n+1}| \le C_0 N^{-(\alpha+\sigma)}. $$
Using Lemmas 2 and 5, we have
$$ |u(t_{n+1}) - u_{n+1}| \le C N^{-(\alpha+\sigma)} + C \sum_{k=0}^{n} q_{k,n+1}\, |u(t_k) - u_k| + C N^{-(2\alpha+\sigma)} + C N^{-\alpha} \sum_{k=0}^{n} p_{k,n+1}\, |u(t_k) - u_k|. $$
Substituting the induction hypothesis into this inequality, we get
$$ |u(t_{n+1}) - u_{n+1}| \le C N^{-(\alpha+\sigma)} + C T^{\alpha} C_0 N^{-(\alpha+\sigma)} + C N^{-(2\alpha+\sigma)} + C N^{-\alpha}\, C T^{\alpha} C_0 N^{-(\alpha+\sigma)}. $$
Following the proof strategy in [1], we first choose $T$ sufficiently small such that the second term $C T^{\alpha} C_0 N^{-(\alpha+\sigma)}$ on the right-hand side of (39) is less than $\frac{C_0}{2} N^{-(\alpha+\sigma)}$. Then, we select $N$ and $C_0$ sufficiently large so that the sum of the remaining three terms on the right-hand side is also less than $\frac{C_0}{2} N^{-(\alpha+\sigma)}$. Thus, we have
$$ |u(t_{n+1}) - u_{n+1}| \le C_0 N^{-(\alpha+\sigma)}. $$
For the case $\sigma + \alpha \ge 1$, a similar argument yields
$$ |u(t_{n+1}) - u_{n+1}| \le C_0 N^{-(\alpha+\sigma)}. $$
Case 2.  $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$. Let $\sigma + \alpha \ge 1$. Assume that
$$ |u(t_k) - u_k| \le C_0 N^{-(\sigma+\alpha)}. $$
Using similar steps to Case 1, we obtain
$$ |u(t_{n+1}) - u_{n+1}| \le C_0 N^{-(\sigma+\alpha)}. $$
For the case $\sigma + \alpha < 1$, a similar argument yields
$$ |u(t_{n+1}) - u_{n+1}| \le C_0 N^{-(\alpha+\sigma)}. $$
Case 3.  $t_J > t_{n+1}$: the argument is similar to the proof of Theorem 1.4 in [26]. □
We now turn to the proof of Theorem 2.
Proof of Theorem 2.
In the case of 1 < α < 2 , similar to the proof of Theorem 1, we consider the following three cases:
Case 1.  $t_J \le t_{\lceil \frac{n+1}{2} \rceil - 1}$. Since $\alpha > 1$, we have $\sigma + \alpha > 1$. Assume that
$$ |u(t_k) - u_k| \le C_0 N^{-(1+\alpha)}. $$
Using Lemmas 7 and 8, we have
$$ |u(t_{n+1}) - u_{n+1}| \le C N^{-(1+\alpha)} + C \sum_{k=0}^{n} q_{k,n+1}\, |u(t_k) - u_k| + C N^{-(1+\alpha)} + C N^{-\alpha} \sum_{k=0}^{n} p_{k,n+1}\, |u(t_k) - u_k| \le C N^{-(1+\alpha)} + C T^{\alpha} C_0 N^{-(1+\alpha)} + C N^{-(1+\alpha)} + C N^{-\alpha}\, C T^{\alpha} C_0 N^{-(1+\alpha)}. $$
Following the proof strategy in [1], we first choose $T$ sufficiently small such that the second term $C T^{\alpha} C_0 N^{-(1+\alpha)}$ on the right-hand side of (40) is less than $\frac{C_0}{2} N^{-(1+\alpha)}$. Then, we select $N$ and $C_0$ sufficiently large so that the sum of the remaining three terms on the right-hand side is also less than $\frac{C_0}{2} N^{-(1+\alpha)}$. Thus, we have
$$ |u(t_{n+1}) - u_{n+1}| \le C_0 N^{-(1+\alpha)}. $$
Case 2.  $t_{\lceil \frac{n+1}{2} \rceil} \le t_J \le t_{n+1}$. Assume that
$$ |u(t_k) - u_k| \le C_0 N^{-\min\{2,\, \alpha+\sigma,\, r(1+\sigma)\}}. $$
Using similar steps to Case 1, we obtain
$$ |u(t_{n+1}) - u_{n+1}| \le C_0 N^{-\min\{2,\, \alpha+\sigma,\, r(1+\sigma)\}}. $$
Case 3.  $t_J > t_{n+1}$: the argument is similar to the proof of Theorem 1.4 in [26]. □

5. Numerical Simulations

In this section, we consider some numerical examples to illustrate the convergence orders of the proposed numerical method (6) under different smoothness conditions of ${}_{0}^{C}D_{t}^{\alpha}u$. We focus on the case $\alpha \in (0, 1)$; the case $\alpha > 1$ can be treated similarly.
Let $N$ be a positive integer and let $0 = t_0 < t_1 < \cdots < t_N = T$ be the partition of $[0, T]$. For the graded mesh, we choose $t_k = T (k/N)^{r}$, $k = 0, 1, 2, \dots, N$, with $r \ge 1$; when $r = 1$, this mesh is the uniform mesh. For the modified mesh, we have $t_k = \left( \frac{\alpha P k}{2KN} \right)^{\frac{2}{\alpha}}$ for $k = 0, 1, \dots, J-1$ and $t_k = \left( 1 - \frac{2}{\alpha} \right)\bar{\sigma} + \frac{Pk}{TN}$ for $k = J, J+1, \dots, N$.
In Figure 1, we choose $N = 2048$ and $T = 1$ and plot the graded mesh with $r = 4$, the uniform mesh ($r = 1$), and the modified mesh with $K = 0.44$, $r = 4$, and $t_J = 0.335596$ with $J = 1369$. The modified graded mesh is non-uniform from $t_0$ to $t_J$ and uniform from $t_J$ to $t_N$.
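As a usage example, the Figure 1 setting can be reproduced approximately with the mesh sketch from Section 1; here $\alpha = 0.5$ is inferred from $r = 2/\alpha = 4$, and the printed values should be close to the $J$ and $t_J$ quoted in the text.

```python
# hypothetical call to the mesh sketch from Section 1
t, J = modified_graded_mesh(N=2048, T=1.0, K=0.44, alpha=0.5)
print(J, t[J])   # the text reports J = 1369 and t_J = 0.335596
```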
Example 1.
Consider the following fractional differential equation,
$$ {}_{0}^{C}D_{t}^{\alpha}u(t) = \frac{\Gamma(1+\beta)}{\Gamma(1+\beta-\alpha)}\, t^{\beta-\alpha} + t^{2\beta} - u(t)^{2}, \quad t \in (0, T], $$
subject to the initial condition
$$ u(0) = u_0, $$
where $0 < \alpha < 1$, $0 < \beta < 1$, $\alpha < \beta$, and $u_0 = 0$. The exact solution is $u(t) = t^{\beta}$. Here, ${}_{0}^{C}D_{t}^{\alpha}u(t) = \frac{\Gamma(1+\beta)}{\Gamma(1+\beta-\alpha)}\, t^{\beta-\alpha}$, so the regularity of ${}_{0}^{C}D_{t}^{\alpha}u(t)$ behaves as $t^{\beta-\alpha}$, which satisfies Assumption 1.
Assume that $u(t_k)$ and $u_k$, $k = 0, 1, 2, \dots, N$, are the solutions of (3) and (6), respectively. By Theorem 1 with $\sigma = \beta - \alpha$, we have the following error estimate (note that $t_{n+1} = t_N > t_J$):
$$ e_N := \max_{0 \le k \le n+1} |u(t_k) - u_k| \le C N^{-(\alpha+\sigma)}. $$
When α = 0.7 , β = 0.9 , T = 1 , and N = 2048 , we compare the exact solution and the numerical solutions for the graded mesh ( r = 2.8571 ) and the modified graded mesh ( r = 2.8571 , K = 0.6 ) . Figure 2 shows the exact solution along with the numerical solutions obtained using the graded mesh and the modified graded mesh. From the figure, it is evident that both methods approximate the exact solution well, but the modified graded mesh achieves a smaller error compared to the graded mesh. In our numerical tests, we see that the errors from the modified graded mesh depend on the value of K.
For different values of $\alpha \in (0, 1)$, we select appropriate values of $r$ and set $N = 64 \times 2^{l-1}$, where $l = 1, 2, 3, 4, 5, 6$. Then, we compute the maximum nodal error $e_N$ (as previously defined) for various $N$ and determine the experimental order of convergence (EOC) using the following formula:
$$ \mathrm{EOC} = \log_2 \frac{e_N}{e_{2N}}. $$
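A hypothetical driver for Example 1 is sketched below; it reuses the mesh and solver sketches from Section 1 (all names are ours), seeds the starting values with the exact solution for simplicity, and computes $e_N$ and the EOC from consecutive errors.

```python
import numpy as np
from math import gamma

alpha, beta, T = 0.7, 0.9, 1.0
g = lambda t, u: gamma(1 + beta) / gamma(1 + beta - alpha) * t**(beta - alpha) \
    + t**(2 * beta) - u**2
exact = lambda t: t**beta

errors = {}
for N in [64 * 2**l for l in range(6)]:
    t, _ = modified_graded_mesh(N, T, K=0.6, alpha=alpha)     # sketch from Section 1
    u = adams_pc_solve(t, alpha, g, u0_derivs=[0.0],
                       u_start=[exact(t[0]), exact(t[1]), exact(t[2])])
    errors[N] = np.max(np.abs(u - exact(t)))                  # maximum nodal error e_N

Ns = sorted(errors)
eoc = [np.log2(errors[N] / errors[2 * N]) for N in Ns[:-1]]   # log2(e_N / e_2N)
print(errors, eoc)
```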
In Table 1, Table 2 and Table 3, we set β = 0.9 and present the experimental order of convergence (EOC) alongside the maximum nodal errors for different values of N. The numerical results indicate that the error of the modified mesh is smaller than that of the graded mesh.
Example 2.
Consider the following problem,
$$ {}_{0}^{C}D_{t}^{\alpha}u(t) + u(t) = 0, \quad t \in (0, T], \qquad u(0) = u_0, $$
where $0 < \alpha < 1$ and $u_0 = 1$. The exact solution is $u(t) = E_{\alpha,1}(-t^{\alpha})$, where $E_{\alpha,\gamma}(z)$ is the Mittag-Leffler function defined by
$$ E_{\alpha,\gamma}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \gamma)}, \quad \alpha, \gamma > 0. $$
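The exact solution of Example 2 can be evaluated with a truncated Mittag-Leffler series; a minimal sketch is given below (the function name and truncation length are our choices, adequate for $|z| \le T^{\alpha}$ with $T = 1$).

```python
from math import gamma

def mittag_leffler(z, alpha, gam=1.0, terms=100):
    """Truncated series E_{alpha,gam}(z) = sum_k z^k / Gamma(alpha*k + gam)."""
    return sum(z**k / gamma(alpha * k + gam) for k in range(terms))

# exact solution of Example 2: u(t) = E_{alpha,1}(-t^alpha)
u_exact = lambda t, alpha: mittag_leffler(-t**alpha, alpha)
```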
Hence,
$$ {}_{0}^{C}D_{t}^{\alpha}u(t) = -1 - \frac{(-t^{\alpha})}{\Gamma(\alpha+1)} - \frac{(-t^{\alpha})^{2}}{\Gamma(2\alpha+1)} - \cdots, $$
which suggests that the regularity of ${}_{0}^{C}D_{t}^{\alpha}u(t)$ behaves as $c + c\, t^{\alpha}$, where $0 < \alpha < 1$.
According to Theorem 1, with $\sigma = \alpha$, the error estimate is given by
$$ e_N := \max_{0 \le k \le n+1} |u(t_k) - u_k| \le C N^{-2\alpha}. $$
Table 4, Table 5 and Table 6 summarize the experimental order of convergence (EOC) along with the maximum nodal errors for different values of N. The observed EOC closely aligns with the theoretical prediction: O ( N 2 α ) .
Through the analysis and numerical experiments, it is clear that the modified graded mesh achieves smaller errors compared to the graded mesh. The traditional graded mesh, with its non-uniform step size, is effective at addressing the weak singularity near the initial time t = 0 . However, as the time nodes t k move further away from the initial point, the sparsity of the mesh can lead to significant errors. In contrast, the modified graded mesh adopts the graded mesh near t = 0 to better handle the singularity and transitions to a uniform mesh in later regions, effectively reducing the overall error.

6. Conclusions

In this paper, a modified graded mesh Adams-type predictor–corrector method is proposed for solving fractional differential equations. The traditional graded mesh works well near the initial time t = 0 because of its non-uniform step sizes, which handle the weak singularity effectively. However, as the time nodes t k move away from the initial point, the mesh becomes sparse, leading to larger errors. On the other hand, the modified graded mesh uses a graded mesh near t = 0 to better handle the singularity and switches to a uniform mesh in areas farther from the initial point, significantly reducing the overall error. Numerical experiments further confirm that the modified graded mesh method outperforms the traditional graded mesh in terms of accuracy. This makes the improved Adams-type predictor–corrector method an efficient tool for solving fractional differential equations.
In recent years, some new fractional definitions have been developed, providing new perspectives and tools for the numerical solution of fractional differential equations. Future research directions include extending this method to Caputo–Hadamard fractional derivatives and other fractional definitions (see [27]). We plan to explore numerical methods under these new definitions in future work to further enhance the applicability and accuracy of the proposed approach.

Author Contributions

We have both contributed equal amounts towards this paper. Y.Y. (Yuhui Yang) conducted the theoretical analysis, wrote the original draft, and carried out the numerical simulations. Y.Y. (Yubin Yan) introduced and provided guidance in this research area. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data are contained within the article.

Acknowledgments

The authors are grateful to the Reviewers and the Associate Editor for their helpful comments.

Conflicts of Interest

The authors declare that they have no competing interests.

Appendix A

The weights $p_{k,n+1}$, $k = 0, 1, 2, \dots, n$, in (5) satisfy the following:
Case 1.  n + 1 < J . For k = 0 , 1 , 2 , , n , we have
p k , n + 1 = C 1 α N r α α ( n + 1 ) r k r α ( n + 1 ) r ( k + 1 ) r α .
Case 2.  n + 1 = J . For k = 0 , 1 , 2 , , n , we have
p k , n + 1 = N ( r + 1 ) α α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k + 1 ) r N α , if k = 0 , 1 , 2 , , J 2 , N ( r + 1 ) α α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α C 3 ( n k ) N r α , if k = J 1 .
Case 3.  n + 1 = J + 1 .
p k , n + 1 = N ( r + 1 ) α α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k + 1 ) r N α , if k = 0 , 1 , 2 , , J 2 , N ( r + 1 ) α α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α C 3 ( n k ) N r α , if k = J 1 , C 3 α N α α ( n + 1 k ) α ( n k ) α , if k = J .
Case 4.  n + 1 > J + 1 .
p k , n + 1 = N ( r + 1 ) α α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k + 1 ) r N α , if k = 0 , 1 , 2 , , J 2 , N ( r + 1 ) α α C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α C 3 ( n k ) N r α , if k = J 1 , C 3 α N α α ( n + 1 k ) α ( n k ) α , if k = J , J + 1 , , n .
The weights q k , n + 1 in (6) satisfy
q 0 , n + 1 = C 1 α N r α α ( 1 + α ) ( n + 1 ) r α ( α + 1 ) + ( n + 1 ) r 1 α + 1 ( n + 1 ) r ( α + 1 ) , if n + 1 J 1 , N r α α 1 C 1 α ( 1 + α ) ( ( α + 1 ) C 2 N + C 3 ( n + 1 ) α C 1 N 1 + r α + C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 N α + 1 C 2 N + C 3 ( n + 1 ) α + 1 N r ( α + 1 ) ) , if n + 1 J .
For k = 1 , 2 , , n ( 1 k n ), the weights satisfy the following:
Case 1.  n + 1 < J .
q j , n + 1 = C 1 α N r α α ( 1 + α ) [ ( n + 1 ) r ( k 1 ) r ] α + 1 [ ( n + 1 ) r k r ] α + 1 k r ( k 1 ) r + [ ( n + 1 ) r ( k + 1 ) r ] α + 1 [ ( n + 1 ) r k r ] α + 1 ( k + 1 ) r k r .
Case 2.  n + 1 = J .
q k , n + 1 = 1 α ( α + 1 ) · N r C 1 k r ( k 1 ) r · C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) 1 α ( α + 1 ) · N r + 1 C 1 k r N C 2 N r + 1 C 3 ( k + 1 ) N r · C 3 α + 1 ( n k ) α + 1 N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 N ( 1 + r ) ( α + 1 ) ,
Case 3.  n + 1 = J + 1 .
q k , n + 1 = 1 α ( α + 1 ) · N r C 1 k r ( k 1 ) r · C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) + 1 α ( α + 1 ) · N r C 1 k r ( k + 1 ) r · C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k + 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) , if k = 1 , 2 , , J 2 , 1 α ( α + 1 ) · N r C 1 k r ( k 1 ) r · C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) 1 α ( α + 1 ) · N r + 1 C 1 k r N C 2 N r + 1 C 3 ( k + 1 ) N r · ( C 3 α + 1 ( n k ) α + 1 N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 N ( 1 + r ) ( α + 1 ) ) , if k = J 1 .
Case 4.  n + 1 > J .
q k , n + 1 = 1 α ( α + 1 ) · N r C 1 k r ( k 1 ) r · C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) + 1 α ( α + 1 ) · N r C 1 k r ( k + 1 ) r · C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k + 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) , if J = 1 , 2 , , J 2 , 1 α ( α + 1 ) · N r C 1 k r ( k 1 ) r · C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) 1 α ( α + 1 ) · N r + 1 C 1 k r N C 2 N r + 1 C 3 ( k + 1 ) N r · ( C 3 α + 1 ( n k ) α + 1 N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 k r N α + 1 N ( 1 + r ) ( α + 1 ) ) , if k = J 1 , 1 α ( α + 1 ) · N r + 1 C 1 ( k 1 ) r N C 2 N r + 1 C 3 j N r · ( C 3 α + 1 ( n + 1 k ) α + 1 N α + 1 C 2 N r + 1 + C 3 ( n + 1 ) N r C 1 ( k 1 ) r N α + 1 N ( 1 + r ) ( α + 1 ) ) + N α α ( α + 1 ) C 3 α + 1 [ ( n k ) α + 1 ( n + 1 k ) α + 1 ] , if k = J , N α C 3 α α ( α + 1 ) · ( ( n k ) α + 1 + ( n + 2 k ) α + 1 2 ( n + 1 k ) α + 1 ) , if k = J + 1 , J + 2 , , n .
For k = n + 1 , the weight is given by
q n + 1 , n + 1 = C 1 α N r α α ( 1 + α ) ( n + 1 ) r n r α , if n + 1 < J , N ( 1 + r ) α α ( 1 + α ) C 2 N 1 + r + C 3 ( n + 1 ) N r C 1 n r N α , if n + 1 = J , C 3 α N α α ( 1 + α ) , if n + 1 > J .

References

  1. Diethelm, K.; Ford, N.J.; Freed, A.D. Detailed error analysis for a fractional Adams method. Numer. Algorithms 2004, 36, 31–52. [Google Scholar] [CrossRef]
  2. Diethelm, K. The Analysis of Fractional Differential Equations; Lecture Notes in Mathematics; Springer: Berlin, Germany, 2010. [Google Scholar]
  3. Jin, B.; Lazarov, R.; Zhou, Z. Numerical methods for time-fractional evolution equations with nonsmooth data: A concise overview. Comput. Methods Appl. Mech. Eng. 2019, 346, 332–358. [Google Scholar] [CrossRef]
  4. Chen, H.; Stynes, M. Error analysis of a second-order method on fitted meshes for a time-fractional diffusion problem. J. Sci. Comput. 2019, 79, 624–647. [Google Scholar] [CrossRef]
  5. Kopteva, N.; Meng, X. Error analysis for a fractional-derivative parabolic problem on quasi-graded meshes using barrier functions. SIAM J. Numer. Anal. 2020, 58, 1217–1238. [Google Scholar] [CrossRef]
  6. Stynes, M.; O’Riordan, E.; Gracia, J.L. Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation. SIAM J. Numer. Anal. 2017, 55, 1057–1079. [Google Scholar] [CrossRef]
  7. Cao, W.; Zeng, F.; Zhang, Z.; Karniadakis, G.E. Implicit-explicit difference schemes for nonlinear fractional differential equations with nonsmooth solutions. SIAM J. Sci. Comput. 2016, 38, A3070–A3093. [Google Scholar] [CrossRef]
  8. Li, C.; Yi, Q.; Chen, A. Finite difference methods with non-uniform meshes for nonlinear fractional differential equations. J. Comput. Phys. 2016, 316, 614–631. [Google Scholar] [CrossRef]
  9. Liu, Y.; Roberts, J.; Yan, Y. A note on finite difference methods for nonlinear fractional differential equations with non-uniform meshes. Int. J. Comput. Math. 2018, 95, 1151–1169. [Google Scholar] [CrossRef]
  10. Lubich, C. Fractional linear multistep methods for Abel-Volterra integral equations of the second kind. Math. Comput. 1985, 45, 463–469. [Google Scholar] [CrossRef]
  11. Zhou, Y.; Suzuki, J.L.; Zhang, C.; Zayernouri, M. Implicit-explicit time integration of nonlinear fractional differential equations. Appl. Numer. Math. 2020, 156, 555–583. [Google Scholar] [CrossRef]
  12. Cao, J.; Xu, C. A high-order scheme for the numerical solution of fractional ordinary differential equations. J. Comput. Phys. 2013, 238, 154–168. [Google Scholar] [CrossRef]
  13. Pedas, A.; Tamme, E. Numerical solution of nonlinear fractional differential equations by spline collocation methods. J. Comput. Appl. Math. 2014, 255, 216–230. [Google Scholar] [CrossRef]
  14. Inc, M. The approximate and exact solutions of the space- and time-fractional Burgers equations with initial conditions by variational iteration method. J. Math. Anal. Appl. 2008, 345, 476–484. [Google Scholar] [CrossRef]
  15. Jafari, H.; Daftardar-Gejji, V. Solving linear and nonlinear fractional diffusion and wave equations by Adomian decomposition. Appl. Math. Comput. 2006, 180, 488–497. [Google Scholar] [CrossRef]
  16. Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z. Error analysis of a finite element method for the space-fractional parabolic equation. SIAM J. Numer. Anal. 2014, 52, 2272–2294. [Google Scholar] [CrossRef]
  17. Zayernouri, M.; Karniadakis, G.E. Discontinuous spectral element methods for time- and space-fractional advection equations. SIAM J. Sci. Comput. 2014, 36, B684–B707. [Google Scholar] [CrossRef]
  18. Deng, W. Short memory principle and a predictor-corrector approach for fractional differential equations. J. Comput. Appl. Math. 2007, 206, 174–188. [Google Scholar] [CrossRef]
  19. Nguyen, T.B.; Jang, B. A high-order predictor-corrector method for solving nonlinear differential equations of fractional order. Fract. Calc. Appl. Anal. 2017, 20, 447–476. [Google Scholar] [CrossRef]
  20. Zhou, Y.; Li, C.; Stynes, M. A fast second-order predictor-corrector method for a nonlinear time-fractional Benjamin–Bona–Mahony–Burgers equation. Numer. Algorithms 2024, 95, 693–720. [Google Scholar] [CrossRef]
  21. Lee, S.; Lee, J.; Kim, H.; Jang, B. A fast and high-order numerical method for nonlinear fractional-order differential equations with non-singular kernel. Appl. Numer. Math. 2021, 163, 57–76. [Google Scholar] [CrossRef]
  22. Mokhtarnezhad, F. A high-order predictor-corrector method with non-uniform meshes for fractional differential equations. Fract. Calc. Appl. Anal. 2024, 27, 2577–2605. [Google Scholar] [CrossRef]
  23. Diethelm, K.; Ford, N.J. Analysis of fractional differential equations. J. Math. Anal. Appl. 2002, 265, 229–248. [Google Scholar] [CrossRef]
  24. Diethelm, K.; Freed, A.D. The FracPECE subroutine for the numerical solution of differential equations of fractional order. In Forschung und Wissenschaftliches Rechnen 1998; Heinzel, S., Plesser, T., Eds.; Gesellschaft für Wissenschaftliche Datenverarbeitung: Göttingen, Germany, 1999; pp. 57–71. [Google Scholar]
  25. Diethelm, K.; Ford, N.J.; Freed, A.D. A predictor-corrector approach for the numerical solution of fractional differential equations. Nonlinear Dyn. 2002, 29, 3–22. [Google Scholar] [CrossRef]
  26. Liu, Y.; Roberts, J.; Yan, Y. Detailed error analysis for a fractional Adams method with graded meshes. Numer. Algorithms 2017, 78, 1195–1216. [Google Scholar] [CrossRef]
  27. Sadeka, L.; Baleanu, D.; Abdo, M.S.; Shatanawi, W. Introducing novel Θ-fractional operators: Advances in fractional calculus. J. King Saud Univ. Sci. 2024, 36, 103352. [Google Scholar] [CrossRef]
  28. Liu, L.; Xu, L.; Zhang, Y. Error analysis of a finite difference scheme on a modified graded mesh for a time-fractional diffusion equation. Math. Comput. Simul. 2023, 209, 87–101. [Google Scholar] [CrossRef]
Figure 1. Three kinds of temporal mesh partitions.
Figure 2. The exact solution and the numerical solutions.
Table 1. Maximum errors at the grid points and convergence rates for Example 1 with parameters α = 0.3, β = 0.9, r = 6.6667, K = 0.3 at T = t_N = 1.

| N | 64 | 128 | 256 | 512 | 1024 | 2048 |
| G-mesh | 5.2892 × 10^{-2} | 1.7169 × 10^{-2} | 5.3465 × 10^{-3} | 1.7297 × 10^{-3} | 5.8865 × 10^{-4} | 2.0884 × 10^{-4} |
| rate | – | 1.6232 | 1.6832 | 1.6281 | 1.5550 | 1.4950 |
| MG-mesh | 1.1599 × 10^{-2} | 3.5888 × 10^{-3} | 1.1848 × 10^{-3} | 4.1230 × 10^{-4} | 1.4904 × 10^{-4} | 5.5437 × 10^{-5} |
| rate | – | 1.6924 | 1.5989 | 1.5229 | 1.4680 | 1.4267 |
Table 2. Maximum errors at the grid points and convergence rates for Example 1 with parameters α = 0.5, β = 0.9, r = 4, K = 0.3, and T = t_N = 1.

| N | 64 | 128 | 256 | 512 | 1024 | 2048 |
| G-mesh | 4.1827 × 10^{-3} | 1.2606 × 10^{-3} | 3.9725 × 10^{-4} | 1.2959 × 10^{-4} | 4.3325 × 10^{-5} | 1.4732 × 10^{-5} |
| rate | – | 1.7303 | 1.6660 | 1.6160 | 1.5807 | 1.5563 |
| MG-mesh | 1.1558 × 10^{-3} | 3.6932 × 10^{-4} | 1.2194 × 10^{-4} | 4.1150 × 10^{-5} | 1.4089 × 10^{-5} | 4.8689 × 10^{-6} |
| rate | – | 1.6460 | 1.5986 | 1.5673 | 1.5464 | 1.5329 |
Table 3. Maximum errors at the grid points and convergence rates for Example 1 with parameters α = 0.7, β = 0.9, r = 2.8571, K = 0.5, and T = t_N = 1.

| N | 64 | 128 | 256 | 512 | 1024 | 2048 |
| G-mesh | 5.1630 × 10^{-4} | 1.4911 × 10^{-4} | 4.4181 × 10^{-5} | 1.3300 × 10^{-5} | 4.0415 × 10^{-6} | 1.2353 × 10^{-6} |
| rate | – | 1.7918 | 1.7549 | 1.7320 | 1.7184 | 1.7100 |
| MG-mesh | 2.4983 × 10^{-4} | 7.3910 × 10^{-5} | 2.2275 × 10^{-5} | 6.7840 × 10^{-6} | 2.0783 × 10^{-6} | 6.3721 × 10^{-7} |
| rate | – | 1.7571 | 1.7304 | 1.7152 | 1.7068 | 1.7055 |
Table 4. Maximum errors at the grid points and convergence rates for Example 2 with parameters α = 0.7, T = 1, r = 2.8571, and K = 0.27.

| N | 64 | 128 | 256 | 512 | 1024 | 2048 |
| G-mesh | 2.5638 × 10^{-4} | 7.7441 × 10^{-5} | 2.3647 × 10^{-5} | 7.2682 × 10^{-6} | 2.2630 × 10^{-6} | 1.3785 × 10^{-6} |
| rate | – | 1.7271 | 1.7114 | 1.702 | 1.6833 | 0.7152 |
| MG-mesh | 9.1514 × 10^{-5} | 2.8779 × 10^{-5} | 9.0681 × 10^{-6} | 2.8540 × 10^{-6} | 9.0091 × 10^{-7} | 3.3928 × 10^{-7} |
| rate | – | 1.6690 | 1.6661 | 1.6678 | 1.6635 | 1.4089 |
Table 5. Maximum errors at the grid points and convergence rates for Example 2 with parameters α = 0.8, T = 1, r = 2.8571, and K = 0.1.

| N | 64 | 128 | 256 | 512 | 1024 | 2048 |
| G-mesh | 1.5459 × 10^{-4} | 4.4222 × 10^{-5} | 1.2740 × 10^{-5} | 3.6820 × 10^{-6} | 1.0676 × 10^{-6} | 3.2429 × 10^{-7} |
| rate | – | 1.8056 | 1.7954 | 1.7908 | 1.7861 | 1.7190 |
| MG-mesh | 3.7816 × 10^{-5} | 1.1363 × 10^{-5} | 3.4120 × 10^{-6} | 1.0207 × 10^{-6} | 3.0404 × 10^{-7} | 9.0227 × 10^{-8} |
| rate | – | 1.7346 | 1.7357 | 1.7411 | 1.7472 | 1.7526 |
Table 6. Maximum errors at the grid points and convergence rates for Example 2 with parameters α = 0.9, T = 1, r = 2.2222, and K = 0.58.

| N | 64 | 128 | 256 | 512 | 1024 | 2048 |
| G-mesh | 9.0168 × 10^{-5} | 2.4194 × 10^{-5} | 6.5229 × 10^{-6} | 1.7627 × 10^{-6} | 4.7700 × 10^{-7} | 1.3092 × 10^{-7} |
| rate | – | 1.8980 | 1.8910 | 1.8877 | 1.8857 | 1.8653 |
| MG-mesh | 4.5476 × 10^{-5} | 1.2291 × 10^{-5} | 3.3370 × 10^{-6} | 9.0745 × 10^{-7} | 2.4760 × 10^{-7} | 7.0494 × 10^{-8} |
| rate | – | 1.8875 | 1.8810 | 1.8787 | 1.8738 | 1.8125 |
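The rate rows in Tables 1–6 are consistent with the usual observed order computed from two consecutive mesh sizes, rate = log2(E_N / E_{2N}); for instance, log2(4.1827 × 10^{-3} / 1.2606 × 10^{-3}) ≈ 1.7303, the first G-mesh rate in Table 2. A minimal sketch (the function name is ours; the error values are copied from Table 2):

```python
import math

def observed_orders(errors):
    """Observed convergence orders log2(E_N / E_{2N}) for errors obtained on
    successively doubled meshes N, 2N, 4N, ..."""
    return [math.log2(coarse / fine) for coarse, fine in zip(errors, errors[1:])]

# G-mesh errors from Table 2 (alpha = 0.5), N = 64, 128, ..., 2048.
errors = [4.1827e-3, 1.2606e-3, 3.9725e-4, 1.2959e-4, 4.3325e-5, 1.4732e-5]
print([round(p, 4) for p in observed_orders(errors)])
# Agrees with the rate row of Table 2 to about three decimal places
# (the tabulated errors are rounded to five significant figures).
```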
