# Spatial Discretization for Stochastic Semi-Linear Subdiffusion Equations Driven by Fractionally Integrated Multiplicative Space-Time White Noise

by 1,†, 2,† and 2,*,†
1. Department of Mathematics, LuLiang University, Lishi 033000, China
2. Department of Mathematical and Physical Sciences, University of Chester, Chester CH1 4BJ, UK
\* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(16), 1917; https://doi.org/10.3390/math9161917
Received: 30 June 2021 / Revised: 24 July 2021 / Accepted: 3 August 2021 / Published: 12 August 2021

## Abstract

Spatial discretization of stochastic semi-linear subdiffusion equations driven by fractionally integrated multiplicative space-time white noise is considered. The nonlinear terms f and $\sigma$ satisfy the global Lipschitz condition and the linear growth condition. The space derivative and the fractionally integrated multiplicative space-time white noise are discretized by finite difference methods. Based on approximations of the Green functions, which are expressed via Mittag–Leffler functions, the optimal spatial convergence rates of the proposed numerical method are proved uniformly in space under suitable smoothness assumptions on the initial value.

## 1. Introduction

In this paper, we will consider the spatial discretization of the following stochastic semi-linear subdiffusion equations driven by fractionally integrated multiplicative space-time white noise, with $0 < α ≤ 1 , 0 ≤ γ ≤ 1$ [1,2],
$${}^{C}_{0}D_{t}^{\alpha}u(t,x) - \frac{\partial^{2}u(t,x)}{\partial x^{2}} = f(u(t,x)) + {}_{0}D_{t}^{-\gamma}\,\sigma(u(t,x))\,\frac{\partial^{2}W(t,x)}{\partial t\,\partial x}, \quad 0<x<1,\; t>0,$$
$$u(t,0) = u(t,1) = 0, \quad t \ge 0, \qquad u(0,x) = u_{0}(x), \quad 0 \le x \le 1,$$
where ${}^{C}_{0}D_{t}^{\alpha}v$ and ${}_{0}D_{t}^{-\gamma}v$ denote the Caputo fractional derivative and the Riemann–Liouville fractional integral of v, respectively [3,4]. Here, $u_0$ is the initial value, which satisfies $u_0 \in C([0,1])$ and $u_0(0) = u_0(1) = 0$, where $C([0,1])$ denotes the space of continuous functions on $[0,1]$.
The main aim of this paper is to extend the spatial discretization schemes discussed in Gyöngy [1] and Anton et al. [5] for stochastic quasi-linear parabolic partial differential equations driven by multiplicative space-time white noise to stochastic subdiffusion equations driven by integrated multiplicative space-time white noise. We obtain error estimates, uniform in space, for the proposed finite difference method. The error estimates are based on bounds for the Green functions of (1) and their discrete analogues, as well as on the errors between them in suitable norms. These Green functions are expressed in terms of Mittag–Leffler functions involving the parameters $0<\alpha\le1$ and $0\le\gamma\le1$. It is well known that the Mittag–Leffler function $E_{\alpha,\beta}(z)$, $0<\alpha\le1$, $\beta\in\mathbb{R}$, satisfies the following asymptotic properties (Theorem 1.6 in [4]; Equation (1.8.28) in [3]), with $\frac{\pi\alpha}{2} < \mu < \min(\pi, \alpha\pi)$:
$$|E_{\alpha,\beta}(z)| \le \frac{C}{1+|z|}, \qquad \mu \le |\arg(z)| \le \pi,$$
and
$$|E_{\alpha,\alpha}(z)| \le \frac{C}{(1+|z|)^{2}}, \qquad \mu \le |\arg(z)| \le \pi,$$
which makes the numerical analysis of the stochastic subdiffusion Equation (1) much more challenging than that of the stochastic parabolic equations discussed in [1,5]. To the best of our knowledge, there are no error estimates uniform in space for stochastic subdiffusion equations driven by space-time white noise in the literature. In this paper, we fill this gap by providing detailed error estimates based on the bounds developed here for the Green functions of (1).
Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge0}, P)$ be a stochastic basis carrying an $\mathcal{F}_t$-adapted Brownian sheet $W = \{W(t,x): t\ge0,\, x\in[0,1]\}$. We recall that W is a zero-mean Gaussian random field with covariance ([1], p. 3 and [2])
$$\mathbb{E}\big(W(t,x)W(s,y)\big) = (t\wedge s)(x\wedge y),$$
where $\mathbb{E}$ denotes the expectation, and $t\wedge s := \min(s,t)$ and $x\wedge y := \min(x,y)$ for $s,t\ge0$ and $x,y\in[0,1]$.
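The covariance identity above is easy to check by Monte Carlo simulation: on a uniform grid, a Brownian sheet is the double cumulative sum of independent $N(0, \Delta t\,\Delta x)$ rectangle increments. A minimal sketch (grid size and sample count are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                      # grid resolution on [0, 1] x [0, 1] (illustrative)
dt = dx = 1.0 / n
n_samples = 20000

# W(t_i, x_j) = sum of independent N(0, dt*dx) increments over [0, t_i] x [0, x_j]
inc = rng.normal(0.0, np.sqrt(dt * dx), size=(n_samples, n, n))
W = np.cumsum(np.cumsum(inc, axis=1), axis=2)

# Compare E[W(t,x) W(s,y)] with (t /\ s)(x /\ y) at (t,x) = (0.5, 1), (s,y) = (1, 0.5)
W1 = W[:, n // 2 - 1, n - 1]            # samples of W(0.5, 1.0)
W2 = W[:, n - 1, n // 2 - 1]            # samples of W(1.0, 0.5)
empirical = np.mean(W1 * W2)
exact = min(0.5, 1.0) * min(1.0, 0.5)   # = 0.25
```

With 20,000 samples the empirical covariance agrees with $(t\wedge s)(x\wedge y)$ to within a few percent.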
We assume that the nonlinear terms f and $\sigma$ satisfy the following global Lipschitz and linear growth conditions ([1], p. 6 and [5]): with some constant $C>0$,
$$(\mathrm{L})\quad |f(r)-f(v)| + |\sigma(r)-\sigma(v)| \le C|r-v|, \quad \text{for all } r,v \in \mathbb{R},$$
$$(\mathrm{LG})\quad |f(r)| + |\sigma(r)| \le C(1+|r|), \quad \text{for all } r \in \mathbb{R}.$$
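For instance, $f(u)=\sin u$ and $\sigma(u)=\cos u$ satisfy (L) and (LG) with $C=2$ (these particular nonlinearities are our illustrative choices, not taken from the paper). A quick numerical spot-check:

```python
import numpy as np

rng = np.random.default_rng(1)
f, sigma = np.sin, np.cos      # illustrative nonlinearities, each 1-Lipschitz
C = 2.0

r = rng.uniform(-10.0, 10.0, 100_000)
v = rng.uniform(-10.0, 10.0, 100_000)

# (L): |f(r)-f(v)| + |sigma(r)-sigma(v)| <= C |r - v|
lipschitz_ok = bool(np.all(np.abs(f(r) - f(v)) + np.abs(sigma(r) - sigma(v))
                           <= C * np.abs(r - v) + 1e-12))
# (LG): |f(r)| + |sigma(r)| <= C (1 + |r|)
growth_ok = bool(np.all(np.abs(f(r)) + np.abs(sigma(r)) <= C * (1.0 + np.abs(r))))
```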
Further, we assume that $α$ and $γ$ satisfy the following condition [6], pp. 1473–1474, (1.2) in [7].
Assumption 1.
$$0 < \alpha \le 1, \quad 0 \le \gamma \le 1, \quad \alpha + \gamma > \frac{1}{2}.$$
Under (L), (LG), and the Assumption 1, one may show that the model (1) has a unique solution [2,6] in some suitable spaces.
The model (1) is used to describe random effects on the transport of particles in media with memory, or particles subject to sticking and trapping [6]. The fractionally integrated noise reflects the fact that the internal energy also depends on past random effects. In recent years, the model (1) has been actively researched [6,8,9,10,11]. Chen et al. [6] studied the $L_2$ theory of (1) in both divergence and non-divergence forms. Anh et al. [8] discussed sufficient conditions for a Gaussian solution (in the mean-square sense) and derived temporal, spatial, and spatio-temporal Hölder continuity of the solution. Chen [9] analyzed moments, Hölder continuity, and intermittency of the solution of the nonlinear stochastic subdiffusion equation in the one-dimensional case. Liu et al. [11] analyzed the existence and uniqueness of the solution of the model (1) with fairly general quasi-linear elliptic operators.
Let us review some numerical methods for solving (1). Jin et al. [7] considered a fully discrete scheme for approximating (1) with $f=0$ and $\sigma(u)=1$, where the space-time noise is a Hilbert-space-valued Wiener process with covariance operator Q, and error estimates in the $L_p$, $p>1$, norm in space were obtained. Wu et al. [12] introduced the L1 scheme to approximate (1) with $f=0$ and $\sigma(u)=1$, with the space-time noise defined as in Jin et al. [7]. Gunzburger et al. [13] considered the finite element approximation of stochastic partial differential equations driven by white noise. Li et al. [14] studied the finite element method for stochastic space-time fractional wave equations. Li and Yang [15] considered the finite element method for stochastic time-fractional partial differential equations. Zou [16] investigated the finite element method for the stochastic time-fractional heat equation.
To the best of our knowledge, there is no numerical analysis for solving (1) in the multiplicative (i.e., $\sigma(u)\ne1$) space-time white noise case in the literature. In this paper, we approximate the derivative $\frac{\partial^2 u(t,x)}{\partial x^2}$ and the space-time white noise $\frac{\partial^2 W(t,x)}{\partial t\,\partial x}$ by finite difference methods, as in Gyöngy [1] and Anton et al. [5], and obtain a spatial discretization scheme for approximating (1). The convergence rate in the mean-square sense is obtained, uniformly for $x\in[0,1]$.
There are many works on numerical methods for stochastic parabolic equations driven by additive or multiplicative noise, i.e., the case $\alpha=1$, $\gamma=0$ in (1); see, e.g., [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31] and the references therein. Most of these references interpret stochastic partial differential equations in Hilbert spaces, and thus error estimates are provided in the $L_2((0,1))$ norm (or similar norms).
Let $0 = x_0 < x_1 < \cdots < x_{M-1} < x_M = 1$ be a partition of $[0,1]$ and $\Delta x = 1/M$ the space step size. At $x = x_k$, $k = 1,2,\dots,M-1$, we approximate the spatial derivative $\frac{\partial^2 u(t,x)}{\partial x^2}$ and the space-time white noise $\frac{\partial^2 W(t,x)}{\partial t\,\partial x}$ by
$$\frac{\partial^2 u(t,x)}{\partial x^2}\Big|_{x=x_k} \approx \frac{u(t,x_{k+1}) - 2u(t,x_k) + u(t,x_{k-1})}{\Delta x^2},$$
and
$$\frac{\partial^2 W(t,x)}{\partial t\,\partial x}\Big|_{x=x_k} \approx \frac{d}{dt}\,\frac{W(t,x_{k+1}) - W(t,x_k)}{\Delta x}.$$
Denote by $u_M(t,x_k) \approx u(t,x_k)$, $k = 0,1,2,\dots,M$, the approximate solution of $u(t,x_k)$. We define the following finite difference method for solving (1):
$${}^{C}_{0}D_{t}^{\alpha}u_M(t,x_k) - \frac{u_M(t,x_{k+1}) - 2u_M(t,x_k) + u_M(t,x_{k-1})}{\Delta x^2} = f(u_M(t,x_k)) + {}_{0}D_{t}^{-\gamma}\,\sigma(u_M(t,x_k))\,\frac{d}{dt}\,\frac{W(t,x_{k+1}) - W(t,x_k)}{\Delta x}, \quad k = 1,2,\dots,M-1,\; t>0,$$
$$u_M(t,0) = u_M(t,1) = 0, \quad t \ge 0, \qquad u_M(0,x_k) = u_0(x_k), \quad k = 0,1,2,\dots,M.$$
When $\alpha=1$, $\gamma=0$, i.e., in the stochastic parabolic equation case, Gyöngy [1] (Theorem 3.1) proved the following spatial convergence rates:
(Gi). If $u 0 ∈ C ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$, then there exists a constant $C = C ( t ) , t > 0$ such that
$$\sup_{x\in[0,1]} \mathbb{E}\,|u_M(t,x) - u(t,x)|^{2} \le C(t)\,\Delta x^{\frac12 - \epsilon}, \quad t>0,\; \epsilon>0.$$
(Gii). If $u 0$ is sufficiently smooth, e.g., $u 0 ∈ C 3 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$, then, for any fixed $T > 0$, there exists a constant C which is independent of $t > 0$ and the space step size $Δ x$, such that for all $t ∈ [ 0 , T ]$ and all $M ≥ 1$,
$$\sup_{x\in[0,1]} \mathbb{E}\,|u_M(t,x) - u(t,x)|^{2} \le C\,\Delta x.$$
Remark 1.
The smoothness assumption on the initial value $u_0$ in (Gi), i.e., $u_0\in C([0,1])$, $u_0(0)=u_0(1)=0$, is not sufficient to obtain the error bound $C(t)\Delta x^{\frac12-\epsilon}$ in (Gi) (see the proof of Proposition 3.8 in [1]). One needs a stronger smoothness assumption on $u_0$, e.g., $u_0 \in C^{\beta}([0,1])$, $\beta\in(0,\frac12)$, $u_0(0)=u_0(1)=0$, to obtain this bound. See Remark 9 for further explanation of why a stronger smoothness assumption on $u_0$ is needed to obtain the bound in (Gi).
In this paper, we extend the error estimates (Gi) and (Gii) for the stochastic parabolic equation to the stochastic subdiffusion Equation (1). We obtain the following results.
Theorem 1.
Assume that (L), (LG), and Assumption 1 hold. Let $u(t,x)$ and $u_M(t,x_k)$, $k=0,1,2,\dots,M$, be the solutions of (1) and (4), respectively. Further assume that $u_0 \in C^1([0,1])$, $u_0(0)=u_0(1)=0$. Let $\epsilon>0$ be any small number.
(i)
If $f = 0$, then there exists a constant C which is independent of $t > 0$ and the space step size $Δ x$, such that, with $t > 0$,
$$\sup_{x\in[0,1]} \mathbb{E}\,|u_M(t,x)-u(t,x)|^{2} \le Ct^{-1+\epsilon}\Delta x^{r_1} + C\Delta x^{r_3} + \begin{cases} C\Delta x^{r_1}, & \text{if } 2(\alpha+\gamma-1)-\frac{\alpha}{2}+\epsilon \ge 0,\\[2pt] C\Delta x^{2(\alpha+\gamma-1)-\frac{\alpha}{2}+\min\{1,\,r_1+\epsilon\}}, & \text{if } 2(\alpha+\gamma-1)-\frac{\alpha}{2}+\epsilon < 0. \end{cases}$$
(ii)
If $f ≠ 0$, then there exists a constant C which is independent of $t > 0$ and the space step size $Δ x$, such that, with $t > 0$,
$$\sup_{x\in[0,1]} \mathbb{E}\,|u_M(t,x)-u(t,x)|^{2} \le Ct^{-1+\epsilon}\Delta x^{r_1} + C\big(\Delta x^{r_2} + \Delta x^{r_3}\big) + \begin{cases} C\Delta x^{r_1}, & \text{if } 2(\alpha+\gamma-1)-\frac{\alpha}{2}+\epsilon \ge 0,\\[2pt] C\Delta x^{2(\alpha+\gamma-1)-\frac{\alpha}{2}+\min\{1,\,r_1+\epsilon\}}, & \text{if } 2(\alpha+\gamma-1)-\frac{\alpha}{2}+\epsilon < 0, \end{cases}$$
where
$$r_1 = \begin{cases} 2, & \text{if } 0 < \alpha \le \frac{2(1-\epsilon)}{3},\\[2pt] \frac{4(1-\epsilon)}{2\alpha} - 1, & \text{if } \frac{2(1-\epsilon)}{3} \le \alpha \le 1, \end{cases}$$
and
$$r_2 = 3 - \frac{2}{\alpha}, \quad \text{if } 3 - \frac{2}{\alpha} > 0,$$
and
$$r_3 = \begin{cases} 2, & \text{if } 2\gamma - 1 \ge 0,\\[2pt] 2, & \text{if } 2\gamma - 1 < 0,\; 0 \le \frac{2(1-2\gamma)}{\alpha} \le 1,\\[2pt] 3 - \frac{2(1-2\gamma)}{\alpha}, & \text{if } 2\gamma - 1 < 0,\; 1 \le \frac{2(1-2\gamma)}{\alpha} \le 3. \end{cases}$$
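The exponents $r_1, r_2, r_3$ can be tabulated with a small helper (a hypothetical utility of ours, transcribed from the definitions above); for $\alpha=1$, $\gamma=0$ it reproduces the classical stochastic-heat-equation rates discussed in Remark 2:

```python
def rates(alpha, gamma, eps=0.01):
    """Exponents r1, r2, r3 of Theorem 1 (hypothetical helper; r2 is only
    defined for 3 - 2/alpha > 0 and r3 for 2(1 - 2*gamma)/alpha <= 3)."""
    if alpha <= 2.0 * (1.0 - eps) / 3.0:
        r1 = 2.0
    else:
        r1 = 4.0 * (1.0 - eps) / (2.0 * alpha) - 1.0
    r2 = 3.0 - 2.0 / alpha if 3.0 - 2.0 / alpha > 0 else None
    q = 2.0 * (1.0 - 2.0 * gamma) / alpha
    if 2.0 * gamma - 1.0 >= 0 or q <= 1.0:
        r3 = 2.0
    elif q <= 3.0:
        r3 = 3.0 - q
    else:
        r3 = None
    return r1, r2, r3

r1, r2, r3 = rates(1.0, 0.0, eps=0.01)   # stochastic heat equation case
```

Here $r_1 = 2(1-\epsilon)/1 - 1 = 1 - 2\epsilon = 0.98$ and $r_2 = r_3 = 1$.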
Remark 2.
When $α = 1 , γ = 0$, i.e., the stochastic parabolic equation case, we obtain that, from (8)–(11) in Theorem 1, for the initial data $u 0 ∈ C 1 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$, with $t > 0$,
$$\sup_{k} \mathbb{E}\,|u_M(t,x_k)-u(t,x_k)|^{2} \le Ct^{-1+\epsilon}\Delta x^{1-2\epsilon} + C\Delta x + C\Delta x^{-\frac12+(1-2\epsilon+\epsilon)} = Ct^{-1+\epsilon}\Delta x^{1-2\epsilon} + C\Delta x^{\frac12-\epsilon},$$
which is consistent with the spatial convergence rate obtained in Theorem 3.1 in [1]. Actually the smoothness assumption of $u 0$ in this case can be weakened to $u 0 ∈ C β ( [ 0 , 1 ] )$ with $β ∈ ( 0 , 1 2 )$ and $u 0 ( 0 ) = u 0 ( 1 ) = 0$, see Remarks 1 and 9.
Remark 3.
We may consider error estimates with respect to the norm $\sup_k \mathbb{E}\,|u_M(t,x_k) - u(t,x_k)|^{2p}$ for any $p\ge1$, as in Theorem 3.1 in [1]. For simplicity of notation, we only consider the case $p=1$ in Theorem 1.
When the initial value $u 0$ is sufficiently smooth, that is, $u 0 ∈ C 3 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$, we may get higher convergence rates for some $2 / 3 < α ≤ 1$, uniformly for $t ∈ [ 0 , T ]$ with any fixed $T > 0$. More precisely, we have the following theorem.
Theorem 2.
Assume that (L), (LG), and Assumption 1 hold. Let $u(t,x)$ and $u_M(t,x_k)$, $k=0,1,2,\dots,M$, be the solutions of (1) and (4), respectively. Further, assume that $u_0 \in C^3([0,1])$, $u_0(0)=u_0(1)=0$. Let $T>0$ be any fixed number.
(i)
If $f = 0$, then there exists a constant C which is independent of t and the space step size $\Delta x$, such that, for $\frac{2(1-2\gamma)}{\alpha} < 3$, for all $t \in [0,T]$ and $M \ge 1$,
$$\sup_{k} \mathbb{E}\,|u_M(t,x_k)-u(t,x_k)|^{2} \le C\Delta x^{\min(r_2,\, r_3,\, 2)},$$
where $r 2$ and $r 3$ are defined by (10) and (11), respectively.
(ii)
If $f \ne 0$, then there exists a constant C which is independent of t and the space step size $\Delta x$, such that, for $\frac{2}{\alpha} < 3$, that is, $2/3 < \alpha \le 1$, for all $t \in [0,T]$ and $M \ge 1$,
$$\sup_{k} \mathbb{E}\,|u_M(t,x_k)-u(t,x_k)|^{2} \le C\Delta x^{\min(r_2,\, r_3,\, 2)},$$
where $r 3$ is defined by (11).
Remark 4.
Note that, by (10), $r 2 = 3 − 2 / α$; therefore, the condition $2 / 3 < α ≤ 1$ is also necessary in case (i) in Theorem 2. In other words, we may only get the higher convergence rates for $2 / 3 < α ≤ 1$ when the initial value is sufficiently smooth, e.g., $u 0 ∈ C 3 ( [ 0 , 1 ] ) ,$ $u 0 ( 0 ) = u 0 ( 1 ) = 0$.
Remark 5.
When $α = 1 , γ = 0$, i.e., the stochastic parabolic equation case, we obtain, from Theorem 2, for the sufficiently smooth initial data $u 0$, e.g., $u 0 ∈ C 3 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$,
$sup k E | u M ( t , x k ) − u ( t , x k ) | 2 ≤ C Δ x ,$
which is consistent with the spatial convergence rate obtained in Theorem 3.1 in [1] for the stochastic parabolic equation driven by space-time white noise.
Remark 6.
We may consider error estimates with respect to the norm $\sup_k \mathbb{E}\,|u_M(t,x_k) - u(t,x_k)|^{2p}$ for any $p\ge1$, as in Theorem 3.1 in [1]. For simplicity of notation in the proof, we only consider the case $p=1$ in Theorem 2.
Remark 7.
We may also consider the case where the nonlinear terms f and $\sigma$ are not Lipschitz continuous, as in Section 4 of Gyöngy [1], under the following assumptions:
(E). There is a solution $u M$ of (4) for every $M ≥ 1$.
(PU). The pathwise uniqueness holds for (1): whenever u and v are random fields carried by some filtered probability space $( Ω ˜ , F ˜ , F ˜ t , P ˜ )$ equipped with a Brownian sheet $W ˜$ such that $u , v$ are solutions of (1), with $W ˜$ in place of W, on a stochastic interval $[ 0 , τ )$, then $u ( t , · ) = v ( t , · )$ almost surely for all $t ∈ [ 0 , τ ( ω ) )$.
As we are mainly interested in the error estimates of the stochastic subdiffusion problem (1), for the sake of paper length, we only consider the cases where the nonlinear terms $f , σ$ satisfy the globally Lipschitz conditions and linear growth conditions in this paper.
The paper is organized as follows. In Section 2, we consider the continuous problem (1). We obtain the mild solution of the problem and the spatial regularity of the mild solution. Section 3 is devoted to the spatial discretization of the problem (1). The regularity of the solution of the spatial discretization problem is obtained. In Section 4, we consider the error estimates under the different assumptions for the smoothness of the initial value. Finally, in Appendix A, we consider the bounds and the error estimates of the approximations of the Green functions of (1).
Throughout this paper, we denote by C a generic constant depending on $u , u 0 , T , α , γ$, but independent of $t > 0$ and the space step size $Δ x$, which could be different at different occurrences. Further, $ϵ > 0$ is always a small positive number.

## 2. Continuous Problem

In this section, we shall consider the mild solution of (1) and study its spatial regularity.
Let $\{\lambda_j, \varphi_j\}_{j=1}^{\infty}$ be the eigenpairs of the Laplacian operator $A = -\frac{d^{2}}{dx^{2}}$ with $D(A) = H_0^1(0,1)\cap H^2(0,1)$, that is,
$$\lambda_j = j^{2}\pi^{2}, \quad \varphi_j(x) = \sqrt{2}\,\sin(j\pi x), \quad j = 1,2,\dots.$$
It is well known that ${ φ j ( x ) } j = 1 ∞$ forms an orthonormal basis in $H = L 2 ( 0 , 1 )$.
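The orthonormality of $\{\varphi_j\}$ in $L^2(0,1)$ is easy to confirm by quadrature; a short sketch (grid size and the indices 3, 5 are arbitrary choices, and the inner product uses the trapezoid rule, which here reduces to a plain dot product since the eigenfunctions vanish at the endpoints):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
phi = lambda j: np.sqrt(2.0) * np.sin(j * np.pi * x)   # eigenfunctions above
inner = lambda u, v: h * np.dot(u, v)                  # L^2(0,1) inner product

norm = inner(phi(3), phi(3))    # should be 1 (normalized)
cross = inner(phi(3), phi(5))   # should be 0 (orthogonal)
```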
Let $E α , β ( z ) , 0 < α ≤ 1 , β ∈ R$ denote the Mittag–Leffler function defined by [4]
$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}, \quad 0<\alpha\le1,\; \beta\in\mathbb{R}.$$
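For moderate $|z|$ the series can be evaluated directly by truncation (for large $|z|$ the truncated series is inaccurate and the asymptotic bounds quoted in the Introduction take over). A minimal sketch (the truncation length is an arbitrary choice), checked against the special cases $E_{1,1}(z) = e^{z}$ and $E_{1,2}(z) = (e^{z}-1)/z$:

```python
from math import gamma, exp

def mittag_leffler(z, alpha, beta, terms=100):
    """Truncated series (13); adequate only for moderate |z|."""
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

e11 = mittag_leffler(1.0, 1.0, 1.0)   # should equal e
e12 = mittag_leffler(2.0, 1.0, 2.0)   # should equal (e^2 - 1)/2
```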
We have the following differentiation formulas of Mittag–Leffler functions which we shall use frequently in the error estimates of the Green functions in the Appendix A.
Lemma 1.
((1.83) in [4]). Let $0<\alpha\le1$, $0\le\gamma\le1$. We have
$$\frac{d}{dt}E_{\alpha,1}(-t^{\alpha}\lambda) = -\lambda t^{\alpha-1}E_{\alpha,\alpha}(-t^{\alpha}\lambda), \quad \lambda>0,$$
$$\frac{d}{dt}\big(t^{\alpha+\gamma-1}E_{\alpha,\alpha+\gamma}(-t^{\alpha}\lambda)\big) = t^{\alpha+\gamma-2}E_{\alpha,\alpha+\gamma-1}(-t^{\alpha}\lambda), \quad \lambda>0,\; \alpha+\gamma\ne1.$$
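The second differentiation formula can be verified numerically with a central difference (the parameter values below are arbitrary test choices satisfying $\alpha+\gamma\ne1$, and the series evaluator is our own truncated implementation of (13)):

```python
from math import gamma

def ml(z, a, b, terms=80):
    # truncated Mittag-Leffler series (13), adequate for moderate |z|
    return sum(z**k / gamma(a * k + b) for k in range(terms))

alpha, gam, lam = 0.6, 0.7, 4.0        # sample parameters, alpha + gam != 1
g = lambda t: t**(alpha + gam - 1.0) * ml(-lam * t**alpha, alpha, alpha + gam)

t, h = 0.8, 1e-5
numeric = (g(t + h) - g(t - h)) / (2.0 * h)               # central difference
exact = t**(alpha + gam - 2.0) * ml(-lam * t**alpha, alpha, alpha + gam - 1.0)
```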

#### 2.1. The Mild Solution of (1)

In this subsection, we shall give the mild solution of (1).
Lemma 2.
Assume that (L), (LG), and Assumption 1 hold, and that $u_0 \in C([0,1])$. Then, (1) has the following unique mild solution: for $t>0$,
$$u(t,x) = \int_0^1 G_1(t,x,y)u_0(y)\,dy + \int_0^t\!\!\int_0^1 G_2(t-s,x,y)f(u(s,y))\,dy\,ds + \int_0^t\!\!\int_0^1 G_3(t-s,x,y)\sigma(u(s,y))\,dW(s,y),$$
where
$$G_1(t,x,y) := \sum_{j=1}^{\infty} E_{\alpha,1}(-t^{\alpha}\lambda_j)\,\varphi_j(x)\varphi_j(y),$$
$$G_2(t,x,y) := \sum_{j=1}^{\infty} t^{\alpha-1}E_{\alpha,\alpha}(-t^{\alpha}\lambda_j)\,\varphi_j(x)\varphi_j(y),$$
$$G_3(t,x,y) := \sum_{j=1}^{\infty} t^{\alpha+\gamma-1}E_{\alpha,\alpha+\gamma}(-t^{\alpha}\lambda_j)\,\varphi_j(x)\varphi_j(y).$$
Here, $E α , β ( z )$ denotes the Mittag–Leffler function defined in (13) and ${ λ j , φ j } j = 1 ∞$ are eigenpairs defined in (12). The integral
$$\int_0^t\!\!\int_0^1 G_3(t-s,x,y)\sigma(u(s,y))\,dW(s,y) = \int_0^t\!\!\int_0^1 G_3(t-s,x,y)\sigma(u(s,y))\frac{\partial^2 W(s,y)}{\partial s\,\partial y}\,dy\,ds$$
is understood in Itô’s sense [1], p. 3.
Proof.
One may prove this lemma by the method of separation of variables. Assume that the solution $u(t,x)$ has the form
$$u(t,x) = \sum_{k=1}^{\infty} u_k(t)\varphi_k(x);$$
substituting this expansion into (1), one obtains the mild solution (16). We omit the details here. □

#### 2.2. The Spatial Regularity of the Mild Solution of (1)

In this subsection, we shall consider the spatial regularity of the mild solution of (1). To do this, we split the mild solution as
$u ( t , x ) = v ( t , x ) + w ( t , x ) ,$
where $v ( t , x )$ satisfies the following homogeneous problem with nonzero initial value $u 0$,
$${}^{C}_{0}D_{t}^{\alpha}v(t,x) - \frac{\partial^{2}v(t,x)}{\partial x^{2}} = 0, \quad 0<x<1,\; t>0,$$
$$v(t,0) = v(t,1) = 0, \quad t\ge0, \qquad v(0,x) = u_0(x), \quad 0\le x\le1,$$
which has the solution
$$v(t,x) = \int_0^1 G_1(t,x,y)u_0(y)\,dy,$$
and $w ( t , x )$ satisfies the following inhomogeneous problem with zero initial value,
$${}^{C}_{0}D_{t}^{\alpha}w(t,x) - \frac{\partial^{2}w(t,x)}{\partial x^{2}} = f(u(t,x)) + {}_{0}D_{t}^{-\gamma}\,\sigma(u(t,x))\,\frac{\partial^{2}W(t,x)}{\partial t\,\partial x}, \quad 0<x<1,\; t>0,$$
$$w(t,0) = w(t,1) = 0, \quad t\ge0, \qquad w(0,x) = 0, \quad 0\le x\le1,$$
which has the solution
$$w(t,x) = \int_0^t\!\!\int_0^1 G_2(t-s,x,y)f(u(s,y))\,dy\,ds + \int_0^t\!\!\int_0^1 G_3(t-s,x,y)\sigma(u(s,y))\frac{\partial^2 W(s,y)}{\partial s\,\partial y}\,dy\,ds.$$
Here, $G 1 , G 2 , G 3$ are defined by (17), (18), (19), respectively.
Let $0 = y 0 < y 1 < ⋯ < y M − 1 < y M = 1$ be a partition of $[ 0 , 1 ]$ and $Δ x = 1 / M$ be the space step size. We define the piecewise constant function $k M ( y ) , 0 ≤ y ≤ 1$ by
$$k_M(y) = \begin{cases} y_j, & y_j \le y < y_{j+1},\; j = 0,1,\dots,M-1,\\ y_M, & y = y_M. \end{cases}$$
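On the uniform grid $y_j = j/M$, this projection is simply $\lfloor My\rfloor/M$; a one-line sketch (the function name `k_M` is ours):

```python
import numpy as np

def k_M(y, M):
    """Piecewise constant projection: the grid point y_j = j/M immediately
    to the left of y, with k_M(1) = 1."""
    return np.floor(np.asarray(y) * M) / M
```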

#### 2.2.1. The Spatial Regularity of the Homogeneous Problem (20) with the Initial Data $u 0 ∈ C ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$

In this subsection, we shall consider the spatial regularity of the homogeneous problem (20) with $u 0 ∈ C ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. We have the following lemma.
Lemma 3.
Let $v ( t , x )$ be the solution of the homogeneous problem (20). Let $u 0 ∈ C ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. Then, there exists a constant C which is independent of t and the space step size $Δ x$, such that
$$\mathbb{E}\,|v(t,y) - v(t,k_M(y))|^{2} \le Ct^{-1+\epsilon}\Delta x^{r_1},$$
where $r_1$ is defined by (9).
Proof.
Note that, by Cauchy–Schwarz inequality,
$$|v(t,y) - v(t,k_M(y))|^{2} = \Big|\int_0^1 \big(G_1(t,y,z) - G_1(t,k_M(y),z)\big)u_0(z)\,dz\Big|^{2} \le \int_0^1 |G_1(t,y,z) - G_1(t,k_M(y),z)|^{2}\,dz \int_0^1 |u_0(z)|^{2}\,dz.$$
By Lemma A1, we get
$| v ( t , y ) − v ( t , k M ( y ) ) | 2 ≤ C t − 1 + ϵ Δ x r 1 ,$
where $r 1$ is defined by (9), which completes the proof of Lemma 3. □

#### 2.2.2. The Spatial Regularity of the Homogeneous Problem (20) with the Initial Data $u 0 ∈ C 2 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$

In this subsection, we shall consider the spatial regularity of the homogeneous problem (20) with $u 0 ∈ C 2 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. We have the following lemma.
Lemma 4.
Let $v ( t , x )$ be the solution of the homogeneous problem (20). Let $u 0 ∈ C 2 ( [ 0 , 1 ] ) , u 0 ( 0 )$ $= u 0 ( 1 ) = 0$. Then, there exists a constant C which is independent of t and the space step size $Δ x$, such that
$$\mathbb{E}\,|v(t,y) - v(t,k_M(y))|^{2} \le C\Delta x^{r_2},$$
where $r 2$ is defined by (10).
Proof.
We have
$$v(t,y) = \int_0^1 G_1(t,y,z)u_0(z)\,dz = \int_0^1 \sum_{j=1}^{\infty} E_{\alpha,1}(-t^{\alpha}\lambda_j)\varphi_j(y)\varphi_j(z)u_0(z)\,dz.$$
By Lemma 1, we have
$$v(t,y) = \int_0^1 \sum_{j=1}^{\infty}\Big(1 - \int_0^t \lambda_j s^{\alpha-1}E_{\alpha,\alpha}(-s^{\alpha}\lambda_j)\,ds\Big)\varphi_j(y)\varphi_j(z)u_0(z)\,dz = \sum_{j=1}^{\infty}\int_0^1 \varphi_j(y)\varphi_j(z)u_0(z)\,dz - \int_0^t\!\!\int_0^1 \sum_{j=1}^{\infty} s^{\alpha-1}E_{\alpha,\alpha}(-s^{\alpha}\lambda_j)\varphi_j(y)\,\lambda_j\varphi_j(z)u_0(z)\,dz\,ds.$$
Note that
$$\int_0^1 \lambda_j\varphi_j(z)u_0(z)\,dz = -\int_0^1 \varphi_j''(z)u_0(z)\,dz = -\int_0^1 \varphi_j(z)u_0''(z)\,dz.$$
We get
$$v(t,y) = u_0(y) + \int_0^t\!\!\int_0^1 G_2(s,y,z)u_0''(z)\,dz\,ds.$$
where $G 2$ is defined by (18).
Thus, by the Cauchy–Schwarz inequality, with $y_k \le y \le y_{k+1}$, $k = 0,1,\dots,M-1$,
$$|v(t,y) - v(t,k_M(y))|^{2} \le C|u_0(y) - u_0(k_M(y))|^{2} + C\Big|\int_0^t\!\!\int_0^1 \big(G_2(s,y,z) - G_2(s,k_M(y),z)\big)u_0''(z)\,dz\,ds\Big|^{2} \le C|u_0(y) - u_0(k_M(y))|^{2} + C\int_0^t\!\!\int_0^1 \big(G_2(s,y,z) - G_2(s,k_M(y),z)\big)^{2}\,dz\,ds \int_0^t\!\!\int_0^1 |u_0''(z)|^{2}\,dz\,ds.$$
By Lemma A4 and the error estimates for the linear interpolation function, we obtain
$$|v(t,y) - v(t,k_M(y))|^{2} \le C\Delta x^{2}\|u_0\|_{C^1([0,1])}^{2} + C\Delta x^{r_2}\|u_0\|_{C^2([0,1])}^{2},$$
which completes the proof of Lemma 4. □

#### 2.2.3. The Spatial Regularity of the Inhomogeneous Problem (22)

In this subsection, we shall consider the spatial regularity of the inhomogeneous problem (22). We have the following lemma.
Lemma 5.
Assume $( L ) , ( L G )$ and Assumption 1 hold. Let $w ( t , x )$ be the solution of the inhomogeneous problem (22). Then there exists a constant C which is independent of t and the space step size $Δ x$, such that
$$\mathbb{E}\,|w(t,y) - w(t,k_M(y))|^{2} \le C\big(\Delta x^{r_2} + \Delta x^{r_3}\big),$$
where $r 2$ and $r 3$ are defined by (10), (11), respectively.
Proof.
Denote $h(s,z) = f(u(s,z))$ or $\sigma(u(s,z))$. One may easily show (we omit the proof for brevity) that, under assumptions (L) and (LG), $h(t,x)$, $t\ge0$, $0\le x\le1$, satisfies
$$\sup_{t,x}\mathbb{E}\,|h(t,x)|^{2} \le C.$$
Denote
$$F(t,x) = \int_0^t\!\!\int_0^1 G_2(t-s,x,z)h(s,z)\,dz\,ds, \qquad H(t,x) = \int_0^t\!\!\int_0^1 G_3(t-s,x,z)h(s,z)\,dW(s,z),$$
where $G 2$ and $G 3$ are defined by (18) and (19), respectively.
We will show that
$$\mathbb{E}\,|F(t,y) - F(t,k_M(y))|^{2} \le C\Delta x^{r_2},$$
$$\mathbb{E}\,|H(t,y) - H(t,k_M(y))|^{2} \le C\Delta x^{r_3},$$
where $r 2$ and $r 3$ are defined by (10) and (11), respectively.
We only prove (29) here since the proof of (28) is similar. By Burkholder’s inequality [1], p. 9 and the boundedness of h in (27), we have
$$\mathbb{E}\,|H(t,y) - H(t,k_M(y))|^{2} = \mathbb{E}\Big|\int_0^t\!\!\int_0^1 \big(G_3(t-s,y,z) - G_3(t-s,k_M(y),z)\big)h(s,z)\,dW(s,z)\Big|^{2} \le C\,\mathbb{E}\int_0^t\!\!\int_0^1 |G_3(t-s,y,z) - G_3(t-s,k_M(y),z)|^{2}\,|h(s,z)|^{2}\,dz\,ds \le C\int_0^t\!\!\int_0^1 |G_3(t-s,y,z) - G_3(t-s,k_M(y),z)|^{2}\,dz\,ds,$$
which implies, by Lemma A4,
$$\mathbb{E}\,|H(t,y) - H(t,k_M(y))|^{2} \le C\Delta x^{r_3},$$
where $r 3$ is defined by (11).
The proof of Lemma 5 is complete. □

## 3. Spatial Discretization

In this section, we shall consider the spatial discretization of (1).

#### 3.1. The Mild Solution of the Spatial Discretization Problem (4)

Let $\{\lambda_j^M, \vec{\varphi}_j^M\}_{j=1}^{M-1}$ be the eigenpairs of the discrete Laplacian matrix $\vec{A}$ defined by
$$\vec{A} = \frac{1}{\Delta x^{2}}\begin{pmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{pmatrix}_{(M-1)\times(M-1)}.$$
It is well known that ((2.4), p. 4 in [1])
$$\lambda_j^M = \frac{\sin^{2}\big(\frac{j\pi}{2M}\big)}{\big(\frac{1}{2M}\big)^{2}} = 4M^{2}\sin^{2}\Big(\frac{j\pi}{2M}\Big), \qquad \vec{\varphi}_j^M = \sqrt{\Delta x}\begin{pmatrix} \varphi_j(x_1)\\ \varphi_j(x_2)\\ \vdots\\ \varphi_j(x_{M-1}) \end{pmatrix}, \quad j = 1,2,\dots,M-1,$$
and $\vec{\varphi}_j^M$, $j = 1,2,\dots,M-1$, form an orthonormal basis in $\mathbb{R}^{M-1}$.
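These eigenpairs are easy to verify numerically (assuming, as needed for consistency with $\lambda_j^M = 4M^{2}\sin^{2}(j\pi/2M)$, that $\vec{A}$ carries the $1/\Delta x^{2}$ scaling):

```python
import numpy as np

M = 16
dx = 1.0 / M
# tridiagonal (2, -1) matrix scaled by 1/dx^2 (scaling assumed so that the
# eigenvalues match lambda_j^M = 4 M^2 sin^2(j*pi/(2M)))
A = (2.0 * np.eye(M - 1) - np.eye(M - 1, k=1) - np.eye(M - 1, k=-1)) / dx**2
xk = np.arange(1, M) * dx

ok = True
for j in (1, 2, 5):
    lam = 4.0 * M**2 * np.sin(j * np.pi / (2 * M))**2
    v = np.sqrt(dx) * np.sqrt(2.0) * np.sin(j * np.pi * xk)  # sqrt(dx)*phi_j(x_k)
    ok = ok and np.allclose(A @ v, lam * v)        # (lam, v) is an eigenpair of A
    ok = ok and abs(v @ v - 1.0) < 1e-12           # unit norm in R^{M-1}
```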
Lemma 6.
Assume that (L), (LG), and Assumption 1 hold, and that $u_0 \in C([0,1])$. Then, the spatial discretization problem (4) has the following unique mild solution:
$$u_M(t,x) = \int_0^1 G_1^M(t,x,y)u_0(k_M(y))\,dy + \int_0^t\!\!\int_0^1 G_2^M(t-s,x,y)f(u_M(s,k_M(y)))\,dy\,ds + \int_0^t\!\!\int_0^1 G_3^M(t-s,x,y)\sigma(u_M(s,k_M(y)))\frac{\partial^2 W^M(s,y)}{\partial s\,\partial y}\,dy\,ds,$$
where $u_M(t,x)$ is the piecewise linear interpolant of $u_M(t,x_k)$, $k = 0,1,2,\dots,M$, and
$$G_1^M(t,x,y) := \sum_{j=1}^{M-1} E_{\alpha,1}(-t^{\alpha}\lambda_j^M)\,\varphi_j^M(x)\varphi_j(k_M(y)),$$
$$G_2^M(t,x,y) := \sum_{j=1}^{M-1} t^{\alpha-1}E_{\alpha,\alpha}(-t^{\alpha}\lambda_j^M)\,\varphi_j^M(x)\varphi_j(k_M(y)),$$
$$G_3^M(t,x,y) := \sum_{j=1}^{M-1} t^{\alpha+\gamma-1}E_{\alpha,\alpha+\gamma}(-t^{\alpha}\lambda_j^M)\,\varphi_j^M(x)\varphi_j(k_M(y)),$$
and
$$\frac{\partial^2 W^M(t,y)}{\partial t\,\partial y} := \frac{d}{dt}\,\frac{W\big(t,k_M(y)+\frac{1}{M}\big) - W\big(t,k_M(y)\big)}{\Delta x}, \quad 0\le y\le1.$$
Here, $E_{\alpha,\beta}(z)$ denotes the Mittag–Leffler function defined by (13), and $\{\lambda_j^M, \vec{\varphi}_j^M\}_{j=1}^{M-1}$ are the eigenpairs of the discrete Laplacian $\vec{A}$ defined in (31). The function $k_M(y)$, $0\le y\le1$, is defined by (24), and $\varphi_j^M(x)$ is the piecewise linear interpolant of $\varphi_j(x)$, $j = 1,2,\dots,M-1$, with respect to the grid points $x_l$, $l = 0,1,\dots,M$.
Proof.
We write (4) into the following matrix form:
$${}^{C}_{0}D_{t}^{\alpha}\vec{u}_M(t) + \vec{A}\,\vec{u}_M(t) = \vec{F}_1^M(t) + {}_{0}D_{t}^{-\gamma}\vec{F}_2^M(t), \quad t>0, \qquad \vec{u}_M(0) = \big(u_0(x_1), u_0(x_2), \dots, u_0(x_{M-1})\big)^{T},$$
where
$$\vec{u}_M(t) = \big(u_M(t,x_1), \dots, u_M(t,x_{M-1})\big)^{T}, \qquad \vec{F}_1^M(t) = \big(f(u_M(t,x_1)), \dots, f(u_M(t,x_{M-1}))\big)^{T},$$
and
$$\vec{F}_2^M(t) = \Big(\sigma(u_M(t,x_1))\,\frac{d}{dt}\frac{W(t,x_2)-W(t,x_1)}{\Delta x},\; \dots,\; \sigma(u_M(t,x_{M-1}))\,\frac{d}{dt}\frac{W(t,x_M)-W(t,x_{M-1})}{\Delta x}\Big)^{T}.$$
The solution of (37) can then be written in the following integral form:
$$\vec{u}_M(t) = E_{\alpha,1}(-t^{\alpha}\vec{A})\vec{u}_M(0) + \int_0^t (t-s)^{\alpha-1}E_{\alpha,\alpha}\big(-(t-s)^{\alpha}\vec{A}\big)\vec{F}_1^M(s)\,ds + \int_0^t (t-s)^{\alpha+\gamma-1}E_{\alpha,\alpha+\gamma}\big(-(t-s)^{\alpha}\vec{A}\big)\vec{F}_2^M(s)\,ds.$$
Thus, noting that $\{\vec{\varphi}_j^M\}_{j=1}^{M-1}$ is an orthonormal basis in $\mathbb{R}^{M-1}$, we have
$$\vec{u}_M(t) = \sum_{j=1}^{M-1}\big(\vec{u}_M(0), \vec{\varphi}_j^M\big)E_{\alpha,1}(-\lambda_j^M t^{\alpha})\vec{\varphi}_j^M + \sum_{j=1}^{M-1}\int_0^t (t-s)^{\alpha-1}E_{\alpha,\alpha}(-\lambda_j^M(t-s)^{\alpha})\big(\vec{F}_1^M(s), \vec{\varphi}_j^M\big)\vec{\varphi}_j^M\,ds + \sum_{j=1}^{M-1}\int_0^t (t-s)^{\alpha+\gamma-1}E_{\alpha,\alpha+\gamma}(-\lambda_j^M(t-s)^{\alpha})\big(\vec{F}_2^M(s), \vec{\varphi}_j^M\big)\vec{\varphi}_j^M\,ds,$$
which implies that, with $k = 1 , 2 , ⋯ , M − 1$,
$$u_M(t,x_k) = \sum_{j=1}^{M-1}\Big(\Delta x\sum_{l=1}^{M-1}u_0(x_l)\varphi_j(x_l)\Big)E_{\alpha,1}(-\lambda_j^M t^{\alpha})\varphi_j(x_k) + \sum_{j=1}^{M-1}\int_0^t (t-s)^{\alpha-1}E_{\alpha,\alpha}(-\lambda_j^M(t-s)^{\alpha})\Big(\Delta x\sum_{l=1}^{M-1}f(u_M(s,x_l))\varphi_j(x_l)\Big)\varphi_j(x_k)\,ds + \sum_{j=1}^{M-1}\int_0^t (t-s)^{\alpha+\gamma-1}E_{\alpha,\alpha+\gamma}(-\lambda_j^M(t-s)^{\alpha})\Big(\Delta x\sum_{l=1}^{M-1}\sigma(u_M(s,x_l))\,\frac{d}{ds}\frac{W(s,x_{l+1})-W(s,x_l)}{\Delta x}\,\varphi_j(x_l)\Big)\varphi_j(x_k)\,ds.$$
Let $φ j M ( x )$ be the piecewise linear interpolation function of $φ j ( x k ) , k = 0 , 1 , ⋯ , M$ defined by
$$\varphi_j^M(x) = \varphi_j(x_k) + \frac{\varphi_j(x_{k+1}) - \varphi_j(x_k)}{\Delta x}(x - x_k), \quad x_k \le x \le x_{k+1},\; k = 0,1,2,\dots,M-1.$$
Replacing $\varphi_j(x_k)$ by the piecewise linear interpolant $\varphi_j^M(x)$ and writing the summations $\sum_{l=1}^{M-1}$ as integrals in (38), we obtain the following piecewise linear interpolant of $u_M(t,x_k)$, $k = 0,1,2,\dots,M$:
$$u_M(t,x) = \int_0^1 \sum_{j=1}^{M-1}E_{\alpha,1}(-\lambda_j^M t^{\alpha})\varphi_j^M(x)\varphi_j(k_M(y))u_0(k_M(y))\,dy + \int_0^t\!\!\int_0^1 \sum_{j=1}^{M-1}(t-s)^{\alpha-1}E_{\alpha,\alpha}(-\lambda_j^M(t-s)^{\alpha})\varphi_j^M(x)\varphi_j(k_M(y))f(u_M(s,k_M(y)))\,dy\,ds + \int_0^t\!\!\int_0^1 \sum_{j=1}^{M-1}(t-s)^{\alpha+\gamma-1}E_{\alpha,\alpha+\gamma}(-\lambda_j^M(t-s)^{\alpha})\varphi_j^M(x)\varphi_j(k_M(y))\sigma(u_M(s,k_M(y)))\frac{\partial^2 W^M(s,y)}{\partial s\,\partial y}\,dy\,ds,$$
which shows (32), where $k M ( y )$ and $∂ 2 W M ( s , y ) ∂ s ∂ y$ are defined by (24) and (36), respectively.
The proof of Lemma 6 is now complete. □
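The piecewise linear interpolant $\varphi_j^M$ used above is exactly what `np.interp` produces from the nodal values; a quick consistency check against the defining formula (the grid size and evaluation point are arbitrary choices):

```python
import numpy as np

M, j = 8, 2
xk = np.linspace(0.0, 1.0, M + 1)                 # grid x_0, ..., x_M
vals = np.sqrt(2.0) * np.sin(j * np.pi * xk)      # phi_j at the grid points
phi_j_M = lambda x: np.interp(x, xk, vals)        # piecewise linear interpolant

x = 0.3
k = int(x * M)                                    # index with x_k <= x < x_{k+1}
manual = vals[k] + (vals[k + 1] - vals[k]) / (1.0 / M) * (x - xk[k])
```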

#### 3.2. Spatial Regularity of the Spatial Discretization Problem

In this subsection, we shall consider the regularity of the mild solution (32) of the spatial discretization problem. To do this, we split the solution $u_M(t,x)$ in (32) as
$u M ( t , x ) = v M ( t , x ) + w M ( t , x ) ,$
where $v M ( t , x )$ is the solution of the corresponding homogeneous problem defined by
$$v_M(t,x) := \int_0^1 G_1^M(t,x,y)u_0(k_M(y))\,dy,$$
and $w M ( t , x )$ is the solution of the corresponding inhomogeneous problem defined by
$$w_M(t,x) = \int_0^t\!\!\int_0^1 G_2^M(t-s,x,y)f(u_M(s,k_M(y)))\,dy\,ds + \int_0^t\!\!\int_0^1 G_3^M(t-s,x,y)\sigma(u_M(s,k_M(y)))\frac{\partial^2 W^M(s,y)}{\partial s\,\partial y}\,dy\,ds.$$

#### 3.2.1. Spatial Regularity of the Homogeneous Spatial Discretization Problem with the Initial Data $u 0 ∈ C ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$

In this subsection, we shall consider the spatial regularity of the homogeneous spatial discretization problem with the initial data $u 0 ∈ C ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. We have the following lemma.
Lemma 7.
Let $v M ( t , x )$ be the solution in (39). Let $u 0 ∈ C ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. Then there exists a constant C which is independent of t and the space step size $Δ x$, such that
$$\mathbb{E}\,|v_M(t,y) - v_M(t,k_M(y))|^{2} \le Ct^{-1+\epsilon}\Delta x^{r_1},$$
where $r 1$ is defined by (9).
Proof.
Note that, by Cauchy–Schwarz inequality,
$$|v_M(t,y) - v_M(t,k_M(y))|^{2} = \Big|\int_0^1 \big(G_1^M(t,y,z) - G_1^M(t,k_M(y),z)\big)u_0(k_M(z))\,dz\Big|^{2} \le \int_0^1 |G_1^M(t,y,z) - G_1^M(t,k_M(y),z)|^{2}\,dz \int_0^1 |u_0(k_M(z))|^{2}\,dz.$$
By Lemma A2, we get
$| v M ( t , y ) − v M ( t , k M ( y ) ) | 2 ≤ C t − 1 + ϵ Δ x r 1 ,$
which completes the proof of Lemma 7. □

#### 3.2.2. Spatial Regularity of the Homogeneous Spatial Discretization Problem with the Initial Data $u 0 ∈ C 2 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$

In this subsection, we shall consider the spatial regularity of the homogeneous spatial discretization problem with the initial data $u 0 ∈ C 2 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. We have the following lemma.
Lemma 8.
Let $v M ( t , x )$ be the solution in (39). Let $u 0 ∈ C 2 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. Then, there exists a constant C which is independent of t and the space step size $Δ x$, such that
$E | v M ( t , y ) − v M ( t , k M ( y ) ) | 2 ≤ C Δ x r 2 ,$
where $r 2$ is defined by (10).
Proof.
We have
$v M ( t , y ) = ∫ 0 1 G 1 M ( t , y , z ) u 0 ( k M ( z ) ) d z = ∫ 0 1 ∑ j = 1 M − 1 E α , 1 ( − t α λ j M ) φ j M ( y ) φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z .$
By Lemma 1, we have
$v M ( t , y ) = ∫ 0 1 ∑ j = 1 M − 1 ∫ 0 t s α − 1 λ j M E α , α ( − s α λ j M ) d s + 1 φ j M ( y ) φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z = ∑ j = 1 M − 1 ∫ 0 1 φ j M ( y ) φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z$
$+ ∫ 0 t ∫ 0 1 ∑ j = 1 M − 1 s α − 1 E α , α ( − s α λ j M ) φ j M ( y ) λ j M φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z d s .$
For the first term of the last equality in (42), we have, with $k = 0 , 1 , 2 , ⋯ , M$, noting that $φ j M ( y )$ is the piecewise linear interpolation function of $φ j ( y )$ on the nodes $y j , j = 0 , 1 , ⋯ , M$,
$∑ j = 1 M − 1 ∫ 0 1 φ j M ( y k ) φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z = ∑ j = 1 M − 1 φ j ( y k ) ∫ 0 1 φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z = ∑ j = 1 M − 1 φ j ( y k ) ∑ l = 0 M − 1 Δ x φ j ( z l ) u 0 ( z l ) = ∑ j = 1 M − 1 ⟨ u → 0 M ( 0 ) , φ → j M ⟩ φ → j M ( k ) = u → 0 M ( 0 ) ( k ) = u 0 ( y k ) .$
Therefore, $∑ j = 1 M − 1 ∫ 0 1 φ j M ( y ) φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z$ is the piecewise linear interpolation function of $u 0 ( y k ) , k = 0 , 1 , 2 , ⋯ , M$ and we denote
$I h u 0 ( y ) : = ∑ j = 1 M − 1 ∫ 0 1 φ j M ( y ) φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z .$
Further, we assume for the moment that the following equality holds:
$∫ 0 1 λ j M φ j ( k M ( z ) ) u 0 ( k M ( z ) ) d z = − ∫ 0 1 φ j ( k M ( z ) ) u 0 ( k M ( z ) + 1 M ) − 2 u 0 ( k M ( z ) ) + u 0 ( k M ( z ) − 1 M ) Δ x 2 d z ,$
which we shall prove later. We then get
$v M ( t , y ) = I h u 0 ( y ) − ∫ 0 t ∫ 0 1 G 2 M ( s , y , z ) u 0 ( k M ( z ) + 1 M ) − 2 u 0 ( k M ( z ) ) + u 0 ( k M ( z ) − 1 M ) Δ x 2 d z d s ,$
where $G 2 M$ is defined by (34).
By using the Cauchy–Schwarz inequality, we have
$| v M ( t , y ) − v M ( t , k M ( y ) ) | 2 ≤ C | I h u 0 ( y ) − I h u 0 ( k M ( y ) ) | 2 + C | ∫ 0 t ∫ 0 1 [ G 2 M ( s , y , z ) − G 2 M ( s , k M ( y ) , z ) ] u 0 ( k M ( z ) + 1 M ) − 2 u 0 ( k M ( z ) ) + u 0 ( k M ( z ) − 1 M ) Δ x 2 d z d s | 2 ≤ C | I h u 0 ( y ) − u 0 ( y ) | 2 + C | u 0 ( y ) − I h u 0 ( k M ( y ) ) | 2 + C ∫ 0 t ∫ 0 1 [ G 2 M ( s , y , z ) − G 2 M ( s , k M ( y ) , z ) ] 2 d z d s · ∫ 0 t ∫ 0 1 | u 0 ( k M ( z ) + 1 M ) − 2 u 0 ( k M ( z ) ) + u 0 ( k M ( z ) − 1 M ) Δ x 2 | 2 d z d s .$
By Lemma A4, and using the error estimates of the linear interpolation function and the mean-value theorem, we obtain
$| v M ( t , y ) − v M ( t , k M ( y ) ) | 2 ≤ C Δ x 2 ∥ u 0 ∥ C 1 ( [ 0 , 1 ] ) 2 + C Δ x r 2 ∥ u 0 ∥ C 2 ( [ 0 , 1 ] ) .$
It remains to prove (43). In fact, we have, noting that $φ j ( y 0 ) = 0$,
$− ∫ 0 1 λ j M φ j ( k M ( y ) ) u 0 ( k M ( y ) ) d y = ∫ y 1 y 2 + ⋯ + ∫ y M − 1 y M ( − λ j M ) φ j ( k M ( y ) ) u 0 ( k M ( y ) ) d y = Δ x ( − λ j M ) ⟨ φ → j M , u → 0 M ⟩ R M − 1 = Δ x ⟨ φ → j M , A u → 0 M ⟩ R M − 1$
$= ∫ 0 1 φ j ( k M ( y ) ) u 0 ( k M ( y ) + 1 M ) − 2 u 0 ( k M ( y ) ) + u 0 ( k M ( y ) − 1 M ) Δ x 2 d y ,$
where we use the fact $u 0 ( y 0 ) = u 0 ( y M ) = 0$ in the last equality in (45). Hence (43) holds.
The proof of Lemma 8 is now complete. □
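The identity (43) used above is a discrete summation-by-parts relation: the sampled sine functions are exact eigenvectors of the second-difference matrix $A$ with homogeneous Dirichlet boundary conditions. The following minimal numerical sketch checks this; the grid size, the test function, and the closed-form discrete eigenvalues are illustrative assumptions, not taken from the paper.

```python
import numpy as np

M = 8
dx = 1.0 / M
y = np.linspace(0, 1, M + 1)[1:-1]           # interior grid points y_1, ..., y_{M-1}

# second-difference matrix with homogeneous Dirichlet boundary conditions
A = (np.diag(-2.0 * np.ones(M - 1)) +
     np.diag(np.ones(M - 2), 1) +
     np.diag(np.ones(M - 2), -1)) / dx**2

u0 = np.sin(3 * np.pi * y) * np.exp(y)       # any function with u0(0) = u0(1) = 0

for j in range(1, M):
    phi_j = np.sqrt(2.0) * np.sin(j * np.pi * y)            # sampled eigenfunction
    lam_jM = (4.0 / dx**2) * np.sin(j * np.pi * dx / 2)**2  # discrete eigenvalue
    lhs = -lam_jM * dx * np.dot(phi_j, u0)   # -lambda_j^M <phi_j, u0>
    rhs = dx * np.dot(phi_j, A @ u0)         # <phi_j, A u0>, as in (43)
    assert abs(lhs - rhs) < 1e-10
```

Since the sampled sines are exact eigenvectors of $A$, the two sides agree up to rounding error for every $j$.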

#### 3.2.3. Spatial Regularity of the Inhomogeneous Spatial Discretization Problem

In this subsection, we shall consider the spatial regularity of the inhomogeneous spatial discretization problem. Following the same lines as the proof of Lemma 5, we may prove the following.
Lemma 9.
Assume $( L ) , ( L G )$ and Assumption 1 hold. Let $w M ( t , x )$ be the solution in (40). Then, there exists a constant C which is independent of t and the space step size $Δ x$, such that
$E | w M ( t , y ) − w M ( t , k M ( y ) ) | 2 ≤ C ( Δ x r 2 + Δ x r 3 ) ,$
where $r 2$ and $r 3$ are defined by (10) and (11), respectively.

## 4. Error Estimates

In this section, we will consider the error estimates of $u M ( t , x )$ for approximating $u ( t , x )$ under suitable smoothness assumptions on the initial value $u 0$. We need the following Grönwall lemma (Lemma 3.4 in [1]).
Lemma 10.
Let $z : R + → R +$ be a Borel function satisfying for all $t ∈ [ 0 , T ]$ the inequality
$0 ≤ z ( t ) ≤ a + K ∫ 0 t ( t − s ) σ z ( s ) d s ,$
with some constants $a ≥ 0 , K > 0$ and $σ > − 1$. Then, there exists a constant $C = C ( σ , K , T )$ such that $z ( t ) ≤ a C$ for all $t ∈ [ 0 , T ]$.
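The lemma allows a weakly singular kernel ($σ > − 1$). The sketch below builds the extremal solution of the corresponding Volterra equation on a grid and observes that it stays bounded on $[ 0 , T ]$; the parameter values and the simple left-endpoint quadrature are illustrative assumptions, not part of the paper.

```python
import numpy as np

a, K, sigma, T = 1.0, 1.0, -0.5, 1.0   # sigma > -1: weakly singular kernel
N = 400
t = np.linspace(0.0, T, N + 1)
z = np.zeros(N + 1)
z[0] = a
for n in range(1, N + 1):
    # exact integral of (t_n - s)^sigma over each cell, z frozen at left endpoints
    w = ((t[n] - t[:n])**(sigma + 1) - (t[n] - t[1:n + 1])**(sigma + 1)) / (sigma + 1)
    z[n] = a + K * np.dot(w, z[:n])

# the extremal solution stays finite and above a, consistent with z(t) <= a*C
assert np.isfinite(z).all() and z.min() >= a - 1e-12
```

The computed profile grows but remains bounded on $[0, T]$, which is exactly the content of the lemma: the singular kernel does not destroy the Grönwall-type bound.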

#### 4.1. Proof of Theorem 2

In this subsection, we shall prove Theorem 2, where the initial data $u 0 ∈ C 3 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$.
We first consider the case (i), that is, $f = 0$. We divide the proof into two steps.
Step 1. We consider the approximation of the homogeneous problem of (1). Recall that the solution of the homogeneous problem of (1) has the form, by (21),
$v ( t , x ) = ∫ 0 1 G 1 ( t , x , y ) u 0 ( y ) d y .$
The approximate solution of the homogeneous problem of (1) has the form, by (39),
$v M ( t , x ) = ∫ 0 1 G 1 M ( t , x , y ) u 0 ( k M ( y ) ) d y .$
By (26), we have
$v ( t , x ) = u 0 ( x ) + ∫ 0 t ∫ 0 1 G 2 ( s , x , y ) u 0 ″ ( y ) d y d s .$
By (44), we obtain
$v M ( t , x ) = I h u 0 ( x ) + ∫ 0 t ∫ 0 1 G 2 M ( s , x , y ) u 0 ( k M ( y ) + 1 M ) − 2 u 0 ( k M ( y ) ) + u 0 ( k M ( y ) − 1 M ) Δ x 2 d y d s ,$
where $I h u 0 ( x )$ is the piecewise linear interpolation function of $u 0 ( x k ) , k = 0 , 1 , 2 , ⋯ , M$. Therefore, one gets
$| v ( t , x ) − v M ( t , x ) | 2 ≤ C | I h u 0 ( x ) − u 0 ( x ) | 2 + C | ∫ 0 t ∫ 0 1 G 2 M ( s , x , y ) − G 2 ( s , x , y ) u 0 ″ ( y ) d y d s | 2 + C | ∫ 0 t ∫ 0 1 G 2 M ( s , x , y ) u 0 ″ ( y ) − u 0 ( k M ( y ) + 1 M ) − 2 u 0 ( k M ( y ) ) + u 0 ( k M ( y ) − 1 M ) Δ x 2 d y d s | 2 = I 1 + I 2 + I 3 .$
For $I 1$, using the error estimates of the linear interpolation function, one gets
$I 1 = C | I h u 0 ( x ) − u 0 ( x ) | 2 ≤ C ∥ u 0 ∥ C 1 ( [ 0 , 1 ] ) 2 Δ x 2 .$
For $I 2$, we obtain, by the Cauchy–Schwarz inequality,
$I 2 = C | ∫ 0 t ∫ 0 1 G 2 M ( s , x , y ) − G 2 ( s , x , y ) u 0 ″ ( y ) d y d s | 2 ≤ C ∫ 0 t ∫ 0 1 | G 2 M ( s , x , y ) − G 2 ( s , x , y ) | 2 d y d s ∫ 0 t ∫ 0 1 | u 0 ″ ( y ) | 2 d y d s ≤ C ∫ 0 t ∫ 0 1 | G 2 M ( s , x , y ) − G 2 ( s , x , y ) | 2 d y d s ∥ u 0 ∥ C 2 ( [ 0 , 1 ] ) 2 .$
Thus, by Lemma A9,
$I 2 ≤ C Δ x r 2 ∥ u 0 ∥ C 2 ( [ 0 , 1 ] ) 2 ,$
where $r 2$ is defined by (10).
For $I 3$, we have
$I 3 = C | ∫ 0 t ∫ 0 1 G 2 M ( s , x , y ) u 0 ″ ( y ) − u 0 ( k M ( y ) + 1 M ) − 2 u 0 ( k M ( y ) ) + u 0 ( k M ( y ) − 1 M ) Δ x 2 d y d s | 2 ≤ C ∫ 0 t ∫ 0 1 | G 2 M ( s , x , y ) | 2 d y d s · ∫ 0 t ∫ 0 1 | u 0 ″ ( y ) − u 0 ( k M ( y ) + 1 M ) − 2 u 0 ( k M ( y ) ) + u 0 ( k M ( y ) − 1 M ) Δ x 2 | 2 d y d s .$
Note that, with $y l ≤ y < y l + 1 , l = 0 , 1 , 2 , ⋯ , M − 1$,
$| u 0 ″ ( y ) − u 0 ( y l + 1 ) − 2 u 0 ( y l ) + u 0 ( y l − 1 ) Δ x 2 | ≤ | u 0 ″ ( y ) − u 0 ″ ( y l ) | + | u 0 ″ ( y l ) − u 0 ( y l + 1 ) − 2 u 0 ( y l ) + u 0 ( y l − 1 ) Δ x 2 | ≤ C | u 0 ‴ ( c ) Δ x | ≤ C Δ x ∥ u 0 ∥ C 3 ( [ 0 , 1 ] ) .$
Therefore, we get, by Lemma A8,
$I 3 ≤ C Δ x 2 ∥ u 0 ∥ C 3 ( [ 0 , 1 ] ) 2 .$
Combining these estimates, we obtain
$E | v ( t , x ) − v M ( t , x ) | 2 ≤ C Δ x 2 ∥ u 0 ∥ C 2 ( [ 0 , 1 ] ) 2 + C Δ x r 2 ∥ u 0 ∥ C 2 ( [ 0 , 1 ] ) 2 + C Δ x 2 ∥ u 0 ∥ C 3 ( [ 0 , 1 ] ) 2$
$≤ C ( Δ x 2 + Δ x r 2 ) ,$
where $r 2$ is defined by (10).
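The estimate of $I 3$ rests on the pointwise first-order consistency of the centered second difference at the left node of each cell with $u 0 ″ ( y )$ for $y$ inside the cell. A small sketch checking that the maximum error roughly halves when $Δ x$ is halved; the test function and grid sizes are illustrative assumptions.

```python
import numpy as np

def max_second_diff_error(M, u0, d2u0):
    dx = 1.0 / M
    yl = np.arange(1, M - 1) * dx                 # interior left nodes y_l
    d2 = (u0(yl + dx) - 2 * u0(yl) + u0(yl - dx)) / dx**2
    # sample y inside each cell [y_l, y_{l+1}) and compare with u0''(y)
    err = 0.0
    for s in np.linspace(0.0, 0.999, 8):
        err = max(err, np.max(np.abs(d2u0(yl + s * dx) - d2)))
    return err

u0 = lambda y: np.sin(np.pi * y) * y * (1.0 - y)    # u0(0) = u0(1) = 0
# u0'' computed by hand for the test function above
d2u0 = lambda y: (-np.pi**2 * np.sin(np.pi * y) * y * (1 - y)
                  + 2 * np.pi * np.cos(np.pi * y) * (1 - 2 * y)
                  - 2 * np.sin(np.pi * y))

e1 = max_second_diff_error(64, u0, d2u0)
e2 = max_second_diff_error(128, u0, d2u0)
assert 1.7 < e1 / e2 < 2.3   # first-order convergence, as used for I3
```

The observed error ratio close to 2 matches the $O ( Δ x )$ bound used in the estimate of $I 3$ (the rate is only first order because $y$ ranges over the whole cell, not just the nodes).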
Step 2. We now consider the approximation of the inhomogeneous problem of (1). Recall that, by (23), the solution of the inhomogeneous problem of (1) has the form, as $f = 0$,
$w ( t , x ) = ∫ 0 t ∫ 0 1 G 3 ( t − s , x , y ) σ ( u ( s , y ) ) ∂ 2 W ( s , y ) ∂ s ∂ y d y d s ,$
and, by (40), the approximate solution of the inhomogeneous problem of (1) has the form
$w M ( t , x ) = ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W M ( s , y ) ∂ s ∂ y d y d s .$
Thus, we have
$E | w M ( t , x ) − w ( t , x ) | 2 = E | ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W M ( s , y ) ∂ s ∂ y d y d s − ∫ 0 t ∫ 0 1 G 3 ( t − s , x , y ) σ ( u ( s , y ) ) ∂ 2 W ( s , y ) ∂ s ∂ y d y d s | 2 ≤ C E | ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) − G 3 ( t − s , x , y ) σ ( u ( s , y ) ) ∂ 2 W ( s , y ) ∂ s ∂ y d y d s | 2 + C E | ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W ( s , y ) ∂ s ∂ y − ∂ 2 W M ( s , y ) ∂ s ∂ y d y d s | 2 = I 1 + I 2 .$
For $I 1$, we have
$I 1 = C E | ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) − G 3 ( t − s , x , y ) σ ( u ( s , y ) ) d W ( s , y ) | 2 ≤ C E | ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) − G 3 ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) d W ( s , y ) | 2 + C E | ∫ 0 t ∫ 0 1 G 3 ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) − σ ( u ( s , y ) ) d W ( s , y ) | 2 .$
By the Burkholder inequality ([1], p. 9, proof of Proposition 3.5), one gets
$I 1 ≤ C ∫ 0 t ∫ 0 1 [ G 3 M ( t − s , x , y ) − G 3 ( t − s , x , y ) ] 2 sup s , y E ∥ σ ( u M ( s , k M ( y ) ) ) ∥ 2 d y d s + C ∫ 0 t ∫ 0 1 [ G 3 ( t − s , x , y ) ] 2 sup y E ∥ σ ( u M ( s , k M ( y ) ) ) − σ ( u ( s , y ) ) ∥ 2 d y d s .$
By the Assumptions (L) and (LG), we have, using the boundedness of the solution $u M$,
$I 1 ≤ C ∫ 0 t ∫ 0 1 [ G 3 M ( t − s , x , y ) − G 3 ( t − s , x , y ) ] 2 d y d s + C ∫ 0 t ∫ 0 1 [ G 3 ( t − s , x , y ) ] 2 sup y E ∥ u M ( s , k M ( y ) ) − u ( s , y ) ∥ 2 d y d s .$
Therefore, by Lemmas A4 and A6,
$I 1 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | u M ( s , k M ( y ) ) − u ( s , y ) | 2 d s ,$
where $r 3$ is defined by (11).
For $I 2$, we have
$I 2 = E | ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W ( s , y ) ∂ s ∂ y d y d s − ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W M ( s , y ) ∂ s ∂ y d y d s | 2 = E | ∑ k = 0 M − 1 ∫ 0 t ∫ y k y k + 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W ( s , y ) ∂ s ∂ y d y d s − ∑ k = 0 M − 1 ∫ 0 t ∫ y k y k + 1 G 3 M ( t − s , x , y ¯ ) σ ( u M ( s , k M ( y ¯ ) ) ) ∂ 2 W M ( s , y ¯ ) ∂ s ∂ y ¯ d y ¯ d s | 2 .$
By (36), we have, for $y k ≤ y ¯ ≤ y k + 1 , k = 0 , 1 , 2 , ⋯ , M − 1$,
$∂ 2 W M ( s , y ¯ ) ∂ s ∂ y ¯ = d d s W ( s , y k + 1 ) − W ( s , y k ) Δ x = 1 Δ x ∫ y k y k + 1 ∂ 2 W ( s , y ) ∂ s ∂ y d y .$
Thus, we get
$I 2 = E | ∑ k = 0 M − 1 ∫ 0 t ∫ y k y k + 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W ( s , y ) ∂ s ∂ y d y d s − ∑ k = 0 M − 1 ∫ 0 t ∫ y k y k + 1 G 3 M ( t − s , x , y ¯ ) σ ( u M ( s , k M ( y ¯ ) ) ) 1 Δ x ∫ y k y k + 1 ∂ 2 W ( s , y ) ∂ s ∂ y d y d y ¯ d s | 2 = E | ∑ k = 0 M − 1 ∫ 0 t ∫ y k y k + 1 [ 1 Δ x ∫ y k y k + 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) d y ¯ − 1 Δ x ∫ y k y k + 1 G 3 M ( t − s , x , y ¯ ) σ ( u M ( s , k M ( y ¯ ) ) ) d y ¯ ] d W ( s , y ) | 2 .$
Note that, for $y k ≤ y , y ¯ ≤ y k + 1 , k = 0 , 1 , 2 , ⋯ , M − 1$,
$G 3 M ( t − s , x , y ) = ∑ j = 1 M − 1 ( t − s ) α + γ − 1 E α , α + γ ( − λ j M ( t − s ) α ) φ j M ( x ) φ j ( k M ( y ) ) = ∑ j = 1 M − 1 ( t − s ) α + γ − 1 E α , α + γ ( − λ j M ( t − s ) α ) φ j M ( x ) φ j ( k M ( y ¯ ) ) = G 3 M ( t − s , x , y ¯ ) ,$
which implies that
$I 2 = E | ∑ k = 0 M − 1 ∫ 0 t ∫ y k y k + 1 0 d W ( s , y ) | 2 = 0 .$
Thus, we get
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | u M ( s , k M ( y ) ) − u ( s , y ) | 2 d s .$
By the spatial regularity Lemmas 8 and 9 and the error estimate (47) for $E | v M ( s , y ) − v ( s , y ) | 2$, we obtain
$E | u M ( s , k M ( y ) ) − u ( s , y ) | 2 ≤ E | w M ( s , k M ( y ) ) − w M ( s , y ) | 2 + E | w M ( s , y ) − w ( s , y ) | 2 + E | v M ( s , k M ( y ) ) − v M ( s , y ) | 2 + E | v M ( s , y ) − v ( s , y ) | 2 ≤ C ( Δ x 2 + Δ x r 2 + Δ x r 3 ) + C E | w M ( s , y ) − w ( s , y ) | 2 .$
Therefore, we get, if $2 ( α + γ − 1 ) − α 2 > − 1$, i.e., $2 ( 1 − 2 γ ) / α < 3$,
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 ( Δ x 2 + Δ x r 2 + Δ x r 3 ) + sup y E | w M ( s , y ) − w ( s , y ) | 2 d s ≤ C ( Δ x 2 + Δ x r 2 + Δ x r 3 ) + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s ,$
which implies that
$sup x ∈ [ 0 , 1 ] E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x 2 + Δ x r 2 + Δ x r 3 ) + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s .$
By using Grönwall Lemma 10, we get, if $2 ( α + γ − 1 ) − α 2 > − 1$, i.e., $2 ( 1 − 2 γ ) / α < 3$,
$sup x ∈ [ 0 , 1 ] E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x 2 + Δ x r 2 + Δ x r 3 ) .$
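The vanishing of $I 2$ above reflects the construction (36): on each spatial cell the discretized noise $∂ 2 W M / ∂ s ∂ y$ is the cell average of the space-time white noise, so the two stochastic integrals coincide when the integrand is constant on cells. A minimal simulation sketch of the underlying variance scaling $E [ ( W ( t , y k + 1 ) − W ( t , y k ) ) 2 ] = t Δ x$; the grid sizes, seed, and sample count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 16, 32, 1.0               # space cells, time steps, final time
dx, dt = 1.0 / M, T / N

# independent Brownian-sheet increments over each time-space cell: N(0, dt*dx)
n_samples = 200_000
incs = rng.normal(0.0, np.sqrt(dt * dx), size=(n_samples, N))
spatial_inc = incs.sum(axis=1)      # W(T, y_{k+1}) - W(T, y_k) for one fixed cell

var_emp = spatial_inc.var()
assert abs(var_emp - T * dx) < 0.05 * T * dx   # matches E[...] = T * dx
```

Dividing such a spatial increment by $Δ x$ gives exactly the piecewise-constant noise density in (36), which is why averaging the integrand over each cell changes nothing in $I 2$.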
We now consider the case (ii), that is, $f ≠ 0$. In this case, the approximation of the homogeneous problem of (1) is the same as in the case (i). For the inhomogeneous problem of (1), the solution has the form
$w ( t , x ) = ∫ 0 t ∫ 0 1 G 2 ( t − s , x , y ) f ( u ( s , y ) ) d y d s$
$+ ∫ 0 t ∫ 0 1 G 3 ( t − s , x , y ) σ ( u ( s , y ) ) ∂ 2 W ( s , y ) ∂ s ∂ y d y d s .$
The approximate solution of the inhomogeneous problem of (1) has the form
$w M ( t , x ) = ∫ 0 t ∫ 0 1 G 2 M ( t − s , x , y ) f ( u M ( s , k M ( y ) ) ) d y d s$
$+ ∫ 0 t ∫ 0 1 G 3 M ( t − s , x , y ) σ ( u M ( s , k M ( y ) ) ) ∂ 2 W M ( s , y ) ∂ s ∂ y d y d s .$
Following the same arguments as in Step 2 above, we may get, if $2 ( α − 1 ) − α 2 > − 1$, i.e., $2 / α < 3$,
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x r 2 + Δ x r 3 ) + C ∫ 0 t ( t − s ) 2 ( α − 1 ) − α 2 [ ( Δ x 2 + Δ x r 2 + Δ x r 3 ) + sup y E | w M ( s , y ) − w ( s , y ) | 2 ] d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 [ ( Δ x 2 + Δ x r 2 + Δ x r 3 ) + sup y E | w M ( s , y ) − w ( s , y ) | 2 ] d s ≤ C ( Δ x r 3 + Δ x r 2 + Δ x 2 ) + C ∫ 0 t ( t − s ) 2 ( α − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s ,$
which implies that
$sup x ∈ [ 0 , 1 ] E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x r 2 + Δ x r 3 + Δ x 2 ) + C ∫ 0 t ( t − s ) 2 ( α − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s .$
By using Grönwall Lemma 10, we get, if $2 ( α − 1 ) − α 2 > − 1$, i.e., $2 / α < 3$,
$sup x ∈ [ 0 , 1 ] E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x r 2 + Δ x r 3 + Δ x 2 ) .$
Combining this with (47) shows (ii).
The proof of Theorem 2 is now complete.
Remark 8.
The smoothness assumption for $u 0$, that is, $u 0 ∈ C 3 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$, is needed for estimating the term $I 3$ in (46). Such an assumption was also used in Gyöngy (Theorem 3.1 in [1]).

#### 4.2. Proof of Theorem 1

In this subsection, we shall prove Theorem 1, where the initial data $u 0 ∈ C 1 ( [ 0 , 1 ] ) , u 0 ( 0 ) = u 0 ( 1 ) = 0$. The proof is similar to that of Theorem 2.
We first consider the case (i), that is, $f = 0$. We divide the proof into two steps.
Step 1. We consider the approximation of the homogeneous problem of (1). By the Cauchy–Schwarz inequality, we have
$| v ( t , x ) − v M ( t , x ) | 2 ≤ C | ∫ 0 1 G 1 M ( t , x , y ) − G 1 ( t , x , y ) u 0 ( k M ( y ) ) d y | 2 + C | ∫ 0 1 G 1 ( t , x , y ) u 0 ( k M ( y ) ) − u 0 ( y ) d y | 2 ≤ C ∫ 0 1 | G 1 M ( t , x , y ) − G 1 ( t , x , y ) | 2 d y ∫ 0 1 | u 0 ( k M ( y ) ) | 2 d y$
$+ C ∫ 0 1 | G 1 ( t , x , y ) | 2 d y ∫ 0 1 | u 0 ( k M ( y ) ) − u 0 ( y ) | 2 d y .$
Therefore, by the mean-value theorem,
$| v ( t , x ) − v M ( t , x ) | 2 ≤ C ∫ 0 1 | G 1 M ( t , x , y ) − G 1 ( t , x , y ) | 2 d y ∥ u 0 ∥ C ( [ 0 , 1 ] ) 2 + C ∫ 0 1 | G 1 ( t , x , y ) | 2 d y Δ x 2 ∥ u 0 ∥ C 1 ( [ 0 , 1 ] ) 2 .$
Further, applying Lemmas A1 and A3, one obtains
$| v ( t , x ) − v M ( t , x ) | 2 ≤ C t − 1 + ϵ Δ x r 1 ∥ u 0 ∥ C ( [ 0 , 1 ] ) 2 + C t − α 2 Δ x 2 ∥ u 0 ∥ C 1 ( [ 0 , 1 ] ) 2 ≤ C t − 1 + ϵ Δ x r 1 ∥ u 0 ∥ C 1 ( [ 0 , 1 ] ) 2 ,$
where $r 1$ is defined by (9).
Remark 9.
The smoothness assumption of the initial value $u 0 ∈ C 1 ( [ 0 , 1 ] )$ here is only required for estimating the term $∫ 0 1 | u 0 ( k M ( y ) ) − u 0 ( y ) | 2 d y$ in (53). Such an assumption can be weakened in some cases. For example, in the stochastic parabolic equation case, i.e., $α = 1 , γ = 0$, it is sufficient to assume that $u 0 ∈ C β ( [ 0 , 1 ] ) , 0 < β < 1 2 , u 0 ( 0 ) = u 0 ( 1 ) = 0 ,$ which implies that
$∫ 0 1 | u 0 ( k M ( y ) ) − u 0 ( y ) | 2 d y ≤ C Δ x 1 − ϵ ∥ u 0 ∥ C 1 / 2 − ϵ ( [ 0 , 1 ] ) 2 , ϵ > 0 ,$
and therefore, with $r 1$ defined by (9),
$| v ( t , x ) − v M ( t , x ) | 2 ≤ C t − 1 + ϵ Δ x r 1 ∥ u 0 ∥ C 1 / 2 − ϵ ( [ 0 , 1 ] ) 2 ≤ C t − 1 + ϵ Δ x 1 2 − ϵ ∥ u 0 ∥ C 1 / 2 − ϵ ( [ 0 , 1 ] ) 2 .$
This is consistent with the error estimate obtained in Gyöngy (Theorem 3.1 in [1]) with $u 0 ∈ C β ( [ 0 , 1 ] ) , 0 < β < 1 2 , u 0 ( 0 ) = u 0 ( 1 ) = 0$. See also Remark 1 for the discussion of the smoothness assumption of the initial value $u 0$ in Gyöngy (Theorem 3.1 in [1]).
Step 2. We now consider the approximation of the inhomogeneous problem of (1). Following the proof of (50), we get
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | u M ( s , k M ( y ) ) − u ( s , y ) | 2 d s ,$
where $r 3$ is defined by (11).
Noting that
$E | u M ( s , k M ( y ) ) − u ( s , y ) | 2 ≤ E | w M ( s , k M ( y ) ) − w M ( s , y ) | 2 + E | w M ( s , y ) − w ( s , y ) | 2 + E | v M ( s , k M ( y ) ) − v M ( s , y ) | 2 + E | v M ( s , y ) − v ( s , y ) | 2 ,$
we therefore obtain
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , k M ( y ) ) − w M ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | v M ( s , k M ( y ) ) − v M ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | v M ( s , y ) − v ( s , y ) | 2 d s$
$= C Δ x r 3 + J 1 ( t ) + J 2 ( t ) + J 3 ( t ) + J 4 ( t ) .$
For $J 1 ( t )$, if $2 ( α + γ − 1 ) − α 2 > − 1$, we then have, applying Lemma 9 for the case $f = 0$,
$J 1 ( t ) ≤ C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 Δ x r 3 d s ≤ C Δ x r 3 ,$
where $r 3$ is defined by (11).
For $J 3 ( t )$, we consider the following two cases.
Case 1. If $2 ( α + γ − 1 ) − α 2 + ϵ ≥ 0$, then we have, by Lemma 3,
$J 3 ( t ) ≤ C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 Δ x r 1 s − 1 + ϵ d s ≤ C Δ x r 1 t 2 ( α + γ − 1 ) − α 2 + ϵ ≤ C Δ x r 1 ,$
where $r 1$ is defined by (9).
Case 2. If $2 ( α + γ − 1 ) − α 2 + ϵ < 0$, then we have, for $t > Δ x$ (the case $t < Δ x$ is easy to estimate and we omit the detail here),
$J 3 ( t ) = ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | v M ( s , k M ( y ) ) − v M ( s , y ) | 2 d s ≤ C ∫ 0 Δ x ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | v M ( s , k M ( y ) ) − v M ( s , y ) | 2 d s + C ∫ Δ x t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | v M ( s , k M ( y ) ) − v M ( s , y ) | 2 d s = J 31 ( t ) + J 32 ( t ) .$
For $J 31 ( t )$, if $2 ( α + γ − 1 ) − α 2 > − 1$, then we have, by using the boundedness of $v M ( s , y )$,
$J 31 ( t ) ≤ C ∫ 0 Δ x ( t − s ) 2 ( α + γ − 1 ) − α 2 d s ≤ C Δ x 2 ( α + γ − 1 ) − α 2 + 1 .$
For $J 32 ( t )$, we have, by Lemma 3,
$J 32 ( t ) ≤ C ∫ Δ x t ( t − s ) 2 ( α + γ − 1 ) − α 2 Δ x r 1 s − 1 + ϵ d s ≤ C Δ x r 1 t 2 ( α + γ − 1 ) − α 2 + ϵ ≤ C Δ x r 1 + 2 ( α + γ − 1 ) − α 2 + ϵ ,$
where $r 1$ is defined in (9).
Thus, we get
$J 3 ( t ) ≤ C Δ x r 1 , if 2 ( α + γ − 1 ) − α 2 + ϵ ≥ 0 , C Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } , if 2 ( α + γ − 1 ) − α 2 + ϵ < 0 .$
Following the same arguments as the estimate of $J 3 ( t )$, we may obtain
$J 4 ( t ) ≤ C Δ x r 1 , if 2 ( α + γ − 1 ) − α 2 + ϵ ≥ 0 , C Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } , if 2 ( α + γ − 1 ) − α 2 + ϵ < 0 .$
Thus, we have the following two cases.
Case 1. If $2 ( α + γ − 1 ) − α 2 + ϵ ≥ 0$, then
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 [ sup y E | w M ( s , y ) − w ( s , y ) | 2 + Δ x r 1 ] d s .$
By Grönwall Lemma 10, we get
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x r 1 + Δ x r 3 ) ,$
where $r 1$ and $r 3$ are defined by (9) and (11), respectively.
Case 2. If $2 ( α + γ − 1 ) − α 2 + ϵ < 0$, then
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 [ sup y E | w M ( s , y ) − w ( s , y ) | 2 + Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } ] d s .$
By Grönwall Lemma 10, we get
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } + Δ x r 3 ) .$
Together with these estimates, we obtain
$E | u M ( t , x ) − u ( t , x ) | 2 ≤ C t − 1 + ϵ Δ x r 1 + C Δ x r 3 + C Δ x r 1 , if 2 ( α + γ − 1 ) − α 2 + ϵ ≥ 0 , C Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } , if 2 ( α + γ − 1 ) − α 2 + ϵ < 0 ,$
where $r 1$ and $r 3$ are defined by (9) and (11), respectively.
We now consider the case (ii), that is, $f ≠ 0$. In this case, the approximation of the solution for the homogeneous problem of (1) is the same as in the case (i). For the inhomogeneous problem of (1), we have, following the same arguments as in Step 2,
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C Δ x r 3 + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , k M ( y ) ) − w M ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | w M ( s , y ) − w ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | v M ( s , k M ( y ) ) − v M ( s , y ) | 2 d s + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 sup y E | v M ( s , y ) − v ( s , y ) | 2 d s = C Δ x r 3 + J 1 ′ ( t ) + J 2 ′ ( t ) + J 3 ( t ) + J 4 ( t ) ,$
where $J 3 ( t )$ and $J 4 ( t )$ are defined as in (55) as $v ( s , y )$ and $v M ( s , y )$ are the same as in the case (i).
For $J 1 ′ ( t )$, if $2 ( α + γ − 1 ) − α 2 > − 1$, then, by Lemma 9,
$J 1 ′ ( t ) ≤ C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 ( Δ x r 2 + Δ x r 3 ) d s ≤ C ( Δ x r 2 + Δ x r 3 ) .$
Thus, we have the following two cases.
Case 1. If $2 ( α + γ − 1 ) − α 2 + ϵ ≥ 0$, then
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x r 2 + Δ x r 3 ) + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 [ sup y E | w M ( s , y ) − w ( s , y ) | 2 + Δ x r 1 ] d s .$
By Grönwall Lemma 10, we get
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x r 1 + Δ x r 2 + Δ x r 3 ) ,$
where $r 1 , r 2$ and $r 3$ are defined by (9), (10), and (11), respectively.
Case 2. If $2 ( α + γ − 1 ) − α 2 + ϵ < 0$, then
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x r 2 + Δ x r 3 ) + C ∫ 0 t ( t − s ) 2 ( α + γ − 1 ) − α 2 [ sup y E | w M ( s , y ) − w ( s , y ) | 2 + Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } ] d s .$
By Grönwall Lemma 10, we have
$E | w M ( t , x ) − w ( t , x ) | 2 ≤ C ( Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } + Δ x r 2 + Δ x r 3 ) .$
Thus, we obtain
$E | u M ( t , x ) − u ( t , x ) | 2 ≤ C t − 1 + ϵ Δ x r 1 + C ( Δ x r 2 + Δ x r 3 ) + C Δ x r 1 , if 2 ( α + γ − 1 ) − α 2 + ϵ ≥ 0 , C Δ x 2 ( α + γ − 1 ) − α 2 + min { 1 , r 1 + ϵ } , if 2 ( α + γ − 1 ) − α 2 + ϵ < 0 ,$
where $r 1 , r 2$ and $r 3$ are defined by (9), (10) and (11), respectively.
The proof of Theorem 1 is now complete.

## Author Contributions

All authors contributed equally to this work. J.W. carried out the theoretical analysis and wrote the original version of the work. J.H. carried out the theoretical analysis of the stochastic term and performed the numerical simulation. Y.Y. introduced and guided this research topic. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A

In this appendix, we shall consider the approximations of the Green functions $G i ( t , x , y )$ by $G i M ( t , x , y ) , i = 1 , 2 , 3$, where $G i ( t , x , y )$ and $G i M ( t , x , y ) , i = 1 , 2 , 3 ,$ are defined by
$G 1 ( t , x , y ) = ∑ j = 1 ∞ E α , 1 ( − t α λ j ) φ j ( x ) φ j ( y ) , G 2 ( t , x , y ) = ∑ j = 1 ∞ t α − 1 E α , α ( − t α λ j ) φ j ( x ) φ j ( y ) , G 3 ( t , x , y ) = ∑ j = 1 ∞ t α + γ − 1 E α , α + γ ( − t α λ j ) φ j ( x ) φ j ( y ) ,$
and
$G 1 M ( t , x , y ) = ∑ j = 1 M − 1 E α , 1 ( − t α λ j M ) φ j M ( x ) φ j ( k M ( y ) ) , G 2 M ( t , x , y ) = ∑ j = 1 M − 1 t α − 1 E α , α ( − t α λ j M ) φ j M ( x ) φ j ( k M ( y ) ) , G 3 M ( t , x , y ) = ∑ j = 1 M − 1 t α + γ − 1 E α , α + γ ( − t α λ j M ) φ j M ( x ) φ j ( k M ( y ) ) ,$
where $E α , β ( z ) , α > 0 , β ∈ R$ denote the Mittag–Leffler functions defined in (13) and where ${ λ j , φ j } j = 1 ∞$ and ${ λ j M } j = 1 M − 1$ are defined by (12) and (31), respectively. Here, $φ j M ( x ) ,$ $j = 1 , 2 , ⋯ ,$ denote the piecewise linear interpolation functions of $φ j ( x )$ on the grids $0 = x 0 < x 1 < ⋯ < x M = 1$ and the piecewise constant function $k M ( y ) , 0 ≤ y ≤ 1$ is defined by (24).
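For moderate arguments, the Mittag–Leffler functions entering these Green functions can be evaluated by truncating the defining power series (13), $E α , β ( z ) = ∑ n ≥ 0 z n / Γ ( α n + β )$. A minimal sketch; the truncation length is an illustrative assumption, and dedicated algorithms are needed for large $| z |$.

```python
import math

def mittag_leffler(z, alpha, beta, n_terms=80):
    """Truncated power series E_{alpha,beta}(z) = sum_n z^n / Gamma(alpha*n + beta)."""
    return sum(z**n / math.gamma(alpha * n + beta) for n in range(n_terms))

# sanity checks against classical special cases
assert abs(mittag_leffler(1.0, 1.0, 1.0) - math.e) < 1e-12        # E_{1,1}(z) = e^z
assert abs(mittag_leffler(-2.0, 1.0, 1.0) - math.exp(-2.0)) < 1e-12
assert abs(mittag_leffler(-1.0, 2.0, 1.0) - math.cos(1.0)) < 1e-12  # E_{2,1}(-z^2) = cos z
```

The special-case checks ($α = 1$ giving the exponential, $α = 2$ giving the cosine) confirm the series implementation before it is used with fractional $α$.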

#### Appendix A.1. Green Function G 1 (t,x,y) and Its Approximation G 1 M (t,x,y)

In this subsection, we will consider the bounds of $G 1 ( t , x , y )$, $G 1 M ( t , x , y )$, as well as the error bounds of $G 1 ( t , x , y ) − G 1 M ( t , x , y )$ in some suitable norms.
Lemma A1.
Let $0 < α ≤ 1$. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 1 ( t , x , y ) | 2 d y ≤ C t − α 2 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 1 ( s , x , y ) | 2 d y d s ≤ C t δ , 0 ≤ x ≤ 1 ,$
$∫ 0 1 | G 1 ( t , y , z ) − G 1 ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 1 , 0 ≤ y ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 1 ( s , y , z ) − G 1 ( s , k M ( y ) , z ) | 2 d z d s ≤ C t δ Δ x r 1 , 0 ≤ y ≤ 1 ,$
where $r 1$ is defined by (9).
Proof.
For (A1), we have
$∫ 0 1 | G 1 ( t , x , y ) | 2 d y = ∫ 0 1 ∑ j = 1 ∞ E α , 1 ( − t α λ j ) φ j ( x ) φ j ( y ) 2 d y .$
As ${ φ j ( y ) } j = 1 ∞$ is an orthonormal basis in $H = L 2 ( 0 , 1 )$ and $φ j ( x ) , j = 1 , 2 , ⋯$ are bounded, we get
$∫ 0 1 | G 1 ( t , x , y ) | 2 d y ≤ C ∑ j = 1 ∞ E α , 1 2 ( − t α λ j ) .$
By boundedness of the Mittag–Leffler function (2), we obtain, with $0 ≤ γ 1 ≤ 1$,
$∫ 0 1 | G 1 ( t , x , y ) | 2 d y ≤ C ∑ j = 1 ∞ 1 1 + t α λ j 2 γ 1 ≤ C ∑ j = 1 ∞ 1 1 + t α j 2 2 γ 1$
$≤ C ∫ 0 ∞ 1 1 + t α x 2 2 γ 1 d x ≤ C t − α 2 ∫ 0 ∞ 1 1 + y 2 2 γ 1 d y .$
Then (A1) follows by choosing some $γ 1 ∈ ( 1 / 4 , 1 ]$ in (A6).
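The sum-to-integral comparison in (A6) is monotone: the summand decreases in $j$, so the series is dominated by the integral from 0, which gives the $t − α / 2$ scaling. A quick numeric sketch with $γ 1 = 1$; the parameter values and truncation are illustrative assumptions.

```python
import numpy as np

alpha, gamma1 = 0.5, 1.0
bound = np.pi / 4                      # integral of (1 + y^2)^(-2) over (0, inf) is pi/4
for t in (1e-3, 1e-2, 1e-1, 1.0):
    j = np.arange(1, 200_000)
    S = np.sum(1.0 / (1.0 + t**alpha * j**2)**(2 * gamma1))
    # the series is bounded by t^(-alpha/2) times the integral above, as in (A6)
    assert t**(alpha / 2) * S <= bound + 1e-6
```

The rescaled sums stay below $π / 4$ uniformly as $t → 0$, matching the constant produced by the substitution $y = t α / 2 x$ in (A6).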
For (A2), we have, by (A1),
$∫ 0 t ∫ 0 1 | G 1 ( s , x , y ) | 2 d y d s ≤ C ∫ 0 t s − α 2 d s = C t 1 − α 2 ≤ C t δ .$
For (A3), one gets
$∫ 0 1 | G 1 ( t , y , z ) − G 1 ( t , k M ( y ) , z ) | 2 d z = ∫ 0 1 | ∑ j = 1 ∞ E α , 1 ( − t α λ j ) φ j ( y ) − φ j ( k M ( y ) ) φ j ( z ) | 2 d z .$
As ${ φ j ( z ) } j = 1 ∞$ is an orthonormal basis in $H = L 2 ( 0 , 1 )$, we obtain
$∫ 0 1 | G 1 ( t , y , z ) − G 1 ( t , k M ( y ) , z ) | 2 d z = ∑ j = M ∞ E α , 1 2 ( − t α λ j ) φ j ( y ) − φ j ( k M ( y ) ) 2 + ∑ j = 1 M − 1 E α , 1 2 ( − t α λ j ) φ j ( y ) − φ j ( k M ( y ) ) 2 = I 1 + I 2 .$
For $I 1$, we have, using the boundedness of $φ j ( y )$ and (2), with $0 ≤ γ 1 ≤ 1$,
$I 1 ≤ C ∑ j = M ∞ E α , 1 2 ( − t α λ j ) ≤ C ∑ j = M ∞ 1 1 + t α j 2 2 γ 1 ≤ C t − 2 α γ 1 ∑ j = M ∞ 1 j 4 γ 1 .$
Case 1. If $1 / 2 ≤ α ≤ 1$, then, with $− 2 α γ 1 = − 1 + ϵ , ϵ > 0$, i.e., $γ 1 = 1 − ϵ 2 α$,
$I 1 ≤ C t − 1 + ϵ ∑ j = M ∞ 1 j 4 1 − ϵ 2 α ≤ C t − 1 + ϵ Δ x 4 1 − ϵ 2 α − 1 .$
Case 2. If $0 ≤ α < 1 / 2$, then, with $γ 1 = 1$,
$I 1 ≤ C t − 2 α ∑ j = M ∞ 1 j 4 ≤ C t − 2 α Δ x 3 .$
Thus, we get
$I 1 ≤ C t − 1 + ϵ Δ x 4 1 − ϵ 2 α − 1 , 1 / 2 ≤ α ≤ 1 , C t − 2 α Δ x 3 , 0 ≤ α < 1 / 2 .$
For $I 2$, we have
$I 2 = ∑ j = 1 M − 1 E α , 1 2 ( − t α λ j ) φ j ( y ) − φ j ( k M ( y ) ) 2 .$
Note that
$| φ j ( y ) − φ j ( k M ( y ) ) | = | φ j ′ ( c ) ( y − k M ( y ) ) | ≤ C ( j π ) 1 M ≤ C j M ,$
which implies that
$I 2 ≤ ∑ j = 1 M − 1 E α , 1 2 ( − t α λ j ) j M 2 .$
By (2), we get, with $0 ≤ γ 1 ≤ 1$,
$I 2 ≤ C ∑ j = 1 M − 1 1 1 + t α λ j 2 γ 1 j 2 M 2 ≤ C t − 2 α γ 1 1 M 2 ∑ j = 1 M − 1 1 j 4 γ 1 − 2 .$
Case 1. If $0 ≤ α ≤ 2 ( 1 − ϵ ) 3$, then
$I 2 ≤ C t − 1 + ϵ 1 M 2 ∑ j = 1 M − 1 1 j 4 1 − ϵ 2 α − 2 ≤ C t − 1 + ϵ 1 M 2 ∫ 1 M x − 4 1 − ϵ 2 α + 2 d x ≤ C t − 1 + ϵ Δ x 2 .$
Case 2. If $2 ( 1 − ϵ ) 3 ≤ α ≤ 1$, then
$I 2 ≤ C t − 1 + ϵ 1 M 2 ∫ 1 M x − 4 1 − ϵ 2 α + 2 d x ≤ C t − 1 + ϵ Δ x 4 1 − ϵ 2 α − 1 .$
Therefore,
$I 2 ≤ C t − 1 + ϵ Δ x 2 , 0 ≤ α ≤ 2 ( 1 − ϵ ) 3 , C t − 1 + ϵ Δ x 4 1 − ϵ 2 α − 1 , 2 ( 1 − ϵ ) 3 ≤ α ≤ 1 .$
Since the convergence order in $I 2$ is higher than that in $I 1$, we obtain
$∫ 0 1 | G 1 ( t , y , z ) − G 1 ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 1 ,$
where $r 1$ is defined by (9), which shows (A3).
For (A4), we get, by (A3),
$∫ 0 t ∫ 0 1 | G 1 ( s , y , z ) − G 1 ( s , k M ( y ) , z ) | 2 d z d s ≤ C ∫ 0 t s − 1 + ϵ Δ x r 1 d s ≤ C t δ Δ x r 1 .$
Combining these estimates completes the proof of Lemma A1. □
Lemma A2.
Let $0 < α ≤ 1$. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 1 M ( t , x , y ) | 2 d y ≤ C t − α 2 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 1 M ( s , x , y ) | 2 d y d s ≤ C t δ , 0 ≤ x ≤ 1 ,$
$∫ 0 1 | G 1 M ( t , y , z ) − G 1 M ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 1 , 0 ≤ y ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 1 M ( s , y , z ) − G 1 M ( s , k M ( y ) , z ) | 2 d z d s ≤ C t δ Δ x r 1 , 0 ≤ y ≤ 1 ,$
where $r 1$ is defined by (9).
Proof.
For (A9), we have
$∫ 0 1 | G 1 M ( t , x , y ) | 2 d y = ∫ 0 1 ∑ j = 1 M − 1 E α , 1 ( − t α λ j M ) φ j M ( x ) φ j ( k M ( y ) ) 2 d y .$
Note that
$∫ 0 1 φ j ( k M ( y ) ) φ l ( k M ( y ) ) d y = Δ x ∑ k = 1 M − 1 φ j ( y k ) φ l ( y k ) = 1 , j = l , 0 , j ≠ l ,$
and $φ j M ( x ) , j = 1 , 2 , ⋯ , M − 1 ,$ are bounded, one gets
$∫ 0 1 | G 1 M ( t , x , y ) | 2 d y ≤ C ∑ j = 1 M − 1 E α , 1 2 ( − t α λ j M ) ,$
which implies that, as $λ j M ≈ λ j , j = 1 , 2 , ⋯ , M − 1$,
$∫ 0 1 | G 1 M ( t , x , y ) | 2 d y ≤ C ∑ j = 1 M − 1 E α , 1 2 ( − t α λ j ) ≤ C ∑ j = 1 ∞ E α , 1 2 ( − t α λ j ) ≤ C t − α 2 .$
Therefore, we show (A9).
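The discrete orthonormality (A13) used above is exact for the sampled sine eigenfunctions: $Δ x ∑ k φ j ( y k ) φ l ( y k ) = δ j l$ for $1 ≤ j , l ≤ M − 1$. A quick check; the grid size is an illustrative assumption.

```python
import numpy as np

M = 16
dx = 1.0 / M
y = np.arange(1, M) * dx                       # interior nodes y_1, ..., y_{M-1}
# rows of Phi hold the sampled eigenfunctions phi_j(y_k) = sqrt(2) sin(j*pi*y_k)
Phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, M), y))

G = dx * Phi @ Phi.T                           # Gram matrix of the sampled basis
assert np.allclose(G, np.eye(M - 1), atol=1e-12)
```

The Gram matrix is the identity to rounding error, so the sampled sines form an exact orthonormal system for the weighted discrete inner product, which is what turns the double sum in (A14) into a single sum of squares.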
For (A10), we obtain, by (A9),
$∫ 0 t ∫ 0 1 | G 1 M ( s , x , y ) | 2 d y d s ≤ C ∫ 0 t s − α 2 d s = C t 1 − α 2 ≤ C t δ .$
For (A11), one gets, by (A13),
$∫ 0 1 | G 1 M ( t , y , z ) − G 1 M ( t , k M ( y ) , z ) | 2 d z = ∫ 0 1 | ∑ j = 1 M − 1 E α , 1 ( − t α λ j M ) φ j M ( y ) − φ j M ( k M ( y ) ) φ j ( k M ( z ) ) | 2 d z = ∑ j = 1 M − 1 E α , 1 2 ( − t α λ j M ) φ j M ( y ) − φ j M ( k M ( y ) ) 2 .$
Note that, for $y ∈ [ y k , y k + 1 ] , k = 0 , 1 , 2 , ⋯ , M − 1$,
$| φ j M ( y ) − φ j M ( k M ( y ) ) | = | φ j M ( y k + 1 ) − φ j M ( y k ) y k + 1 − y k ( y − y k ) | = | φ j ( y k + 1 ) − φ j ( y k ) y k + 1 − y k ( y − y k ) | = | φ j ′ ( c ) ( y − y k ) | ≤ C j M ,$
which implies, following the estimates of (A8),
$∫ 0 1 | G 1 M ( t , y , z ) − G 1 M ( t , k M ( y ) , z ) | 2 d z ≤ C ∑ j = 1 M − 1 E α , 1 2 ( − t α λ j ) j M 2 ≤ C t − 1 + ϵ Δ x r 1 ,$
where $r 1$ is defined by (9).
For (A12), we get, by (A11),
$∫ 0 t ∫ 0 1 | G 1 M ( s , y , z ) − G 1 M ( s , k M ( y ) , z ) | 2 d z d s ≤ C ∫ 0 t s − 1 + ϵ Δ x r 1 d s ≤ C t δ Δ x r 1 .$
Combining these estimates completes the proof of Lemma A2. □
Lemma A3.
Let $0 < α ≤ 1$. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 1 ( t , x , y ) − G 1 M ( t , x , y ) | 2 d y ≤ C t − 1 + ϵ Δ x r 1 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 1 ( s , x , y ) − G 1 M ( s , x , y ) | 2 d y d s ≤ C t δ Δ x r 1 , 0 ≤ x ≤ 1 ,$
where $r 1$ is defined by (9).
Proof.
For (A16), we have
$∫ 0 1 | G 1 ( t , x , y ) − G 1 M ( t , x , y ) | 2 d y = ∫ 0 1 | ∑ j = 1 ∞ E α , 1 ( − λ j t α ) φ j ( x ) φ j ( y ) − ∑ j = 1 M − 1 E α , 1 ( − λ j M t α ) φ j M ( x ) φ j ( k M ( y ) ) | 2 d y ≤ C ∫ 0 1 | ∑ j = M ∞ E α , 1 ( − λ j t α ) φ j ( x ) φ j ( y ) | 2 d y + C ∫ 0 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) φ j ( y ) − φ j ( k M ( y ) ) | 2 d y + C ∫ 0 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) − φ j M ( x ) φ j ( k M ( y ) ) | 2 d y + C ∫ 0 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) − E α , 1 ( − λ j M t α ) φ j M ( x ) φ j ( k M ( y ) ) | 2 d y = I 1 ( t ) + I 2 ( t ) + I 3 ( t ) + I 4 ( t ) .$
For $I 1 ( t )$, one gets, by (A7),
$I 1 ( t ) = ∫ 0 1 | ∑ j = M ∞ E α , 1 ( − λ j t α ) φ j ( x ) φ j ( y ) | 2 d y = ∑ j = M ∞ E α , 1 2 ( − λ j t α ) ≤ C t − 1 + ϵ Δ x r 1 ,$
where $r 1$ is defined by (9).
For $I 2 ( t )$, we obtain
$I 2 ( t ) = ∫ 0 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) φ j ( y ) − φ j ( k M ( y ) ) | 2 d y = ∑ k = 0 M − 1 ∫ y k y k + 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) ∫ y k y φ j ′ ( z ) d z | 2 d y = ∑ k = 0 M − 1 ∫ y k y k + 1 | ∫ y k y ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) φ j ′ ( z ) d z | 2 d y ≤ ∑ k = 0 M − 1 ∫ y k y k + 1 1 M ∫ y k y k + 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) φ j ′ ( z ) | 2 d z d y = 1 M 2 ∫ 0 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) φ j ′ ( z ) | 2 d z .$
Noting that
$∫ 0 1 φ j ′ ( z ) φ l ′ ( z ) d z = j 2 π 2 , j = l , 0 , j ≠ l ,$
and that $φ j ( x ) , j = 1 , 2 , ⋯$ are uniformly bounded, we get
$I 2 ( t ) ≤ 1 M 2 ∑ j = 1 M − 1 E α , 1 2 ( − λ j t α ) j 2 = ∑ j = 1 M − 1 E α , 1 2 ( − λ j t α ) j M 2 .$
By (A8), one has
$I 2 ( t ) ≤ C t − 1 + ϵ Δ x r 1 ,$
where $r 1$ is defined by (9).
For $I 3 ( t )$, we have
$I 3 ( t ) = ∫ 0 1 | ∑ j = 1 M − 1 E α , 1 ( − λ j t α ) φ j ( x ) − φ j M ( x ) φ j ( k M ( y ) ) | 2 d y = ∑ j = 1 M − 1 E α , 1 2 ( − λ j t α ) φ j ( x ) − φ j M ( x ) 2 .$
Note that, for $y k ≤ y ≤ y k + 1 , k = 0 , 1 , 2 , ⋯ , M − 1$,
$| φ j ( y ) − φ j M ( y ) | = 1 2 | φ j ″ ( c ) ( y − y k ) ( y − y k + 1 ) | ≤ C ( j 2 π 2 ) / M 2 ≤ C ( j / M ) 2 .$
Therefore, we get, by (A8),
$I 3 ( t ) ≤ C ∑ j = 1 M − 1 E α , 1 2 ( − λ j t α ) j M 4 ≤ C ∑ j = 1 M − 1 E α , 1 2 ( − λ j t α ) j M 2 ≤ C t − 1 + ϵ Δ x r 1 ,$
where $r 1$ is defined by (9).
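As a numerical aside (not part of the proof), the bound on $φ j − φ j M$ above is the standard piecewise linear interpolation estimate $| f − I M f | ∞ ≤ | f ″ | ∞ / ( 8 M 2 )$. The following Python sketch checks it for $φ j ( y ) = √2 sin ( j π y )$, taking $φ j M$ to be the linear interpolant of $φ j$ at the uniform grid points (an assumption consistent with the finite difference setting here):

```python
from math import sqrt, sin, pi

def phi(j, y):
    # Continuous Dirichlet eigenfunctions of -d^2/dx^2 on (0, 1)
    return sqrt(2.0) * sin(j * pi * y)

def phi_interp(j, y, M):
    # Piecewise linear interpolant of phi_j on the uniform grid y_k = k/M
    k = min(int(y * M), M - 1)
    yk, yk1 = k / M, (k + 1) / M
    return phi(j, yk) + (phi(j, yk1) - phi(j, yk)) * (y - yk) * M

M = 50
for j in (1, 3, 7):
    # Sample densely; the sampled maximum never exceeds |phi_j''|_inf / (8 M^2)
    err = max(abs(phi(j, i / 5000.0) - phi_interp(j, i / 5000.0, M))
              for i in range(5001))
    bound = sqrt(2.0) * (j * pi) ** 2 / (8 * M**2)
    assert err <= bound + 1e-12
```

The observed error is bounded by $C ( j / M ) 2$, matching the rate used in the estimate of $I 3 ( t )$.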
Finally, we consider $I 4 ( t )$. We have
$I 4 ( t ) = ∫ 0 1 | ∑ j = 1 M − 1 ( E α , 1 ( − t α λ j ) − E α , 1 ( − t α λ j M ) ) φ j M ( x ) φ j ( k M ( y ) ) | 2 d y ≤ C ∑ j = 1 M − 1 | E α , 1 ( − t α λ j ) − E α , 1 ( − t α λ j M ) | 2 .$
Noting that
$d d t E α , 1 ( − t α λ j ) = E α , 1 ′ ( − t α λ j ) d d t ( − t α λ j ) ,$
we have, by Lemma 1,
$E α , 1 ′ ( − t α λ j ) = d d t E α , 1 ( − t α λ j ) / d d t ( − t α λ j ) = [ − t α − 1 λ j E α , α ( − t α λ j ) ] / ( − α t α − 1 λ j ) = ( 1 / α ) E α , α ( − t α λ j ) .$
Thus, by the mean value theorem,
$I 4 ( t ) ≤ ∑ j = 1 M − 1 | E α , 1 ( − t α λ j ) − E α , 1 ( − t α λ j M ) | 2 = ∑ j = 1 M − 1 | E α , 1 ′ ( c ) t α ( λ j − λ j M ) | 2 ≤ C ∑ j = 1 M − 1 | E α , 1 ′ ( − t α λ j ) t α ( λ j − λ j M ) | 2 ≤ C ∑ j = 1 M − 1 E α , α 2 ( − t α λ j ) | t α ( λ j − λ j M ) | 2 .$
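As a sanity check (outside the proof), the chain rule computation above reduces to the exact identity $E α , 1 ′ ( z ) = ( 1 / α ) E α , α ( z )$, and only its magnitude enters the estimate of $I 4 ( t )$. The identity can be verified numerically with the truncated Mittag–Leffler series; this Python sketch uses a small argument, where the truncated series is numerically reliable:

```python
from math import gamma

def mittag_leffler(alpha, beta, z, terms=80):
    # Truncated series E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha k + beta);
    # accurate in double precision only for moderate |z|
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

alpha, z, h = 0.7, -0.5, 1e-6
# Central finite difference approximation of E'_{alpha,1}(z)
deriv = (mittag_leffler(alpha, 1.0, z + h)
         - mittag_leffler(alpha, 1.0, z - h)) / (2 * h)
# Identity: E'_{alpha,1}(z) = (1/alpha) E_{alpha,alpha}(z)
identity = mittag_leffler(alpha, alpha, z) / alpha
print(deriv, identity)  # the two values agree up to finite-difference error
```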
By (3), we get, with $0 ≤ γ 1 ≤ 2$,
$I 4 ( t ) ≤ C ∑ j = 1 M − 1 1 ( t α λ j ) 2 γ 1 | t α ( λ j − λ j M ) | 2 .$
Noting that ([1], line 4, p. 7)
$| λ j − λ j M | ≤ C j 4 M 2 ,$
we have
$I 4 ( t ) ≤ C ∑ j = 1 M − 1 1 ( t α λ j ) 2 γ 1 t 2 α · j 8 M 4 ≤ C t 2 α − 2 α γ 1 ∑ j = 1 M − 1 j 8 − 4 γ 1 M 4 .$
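The eigenvalue gap bound quoted above can be checked directly. For the standard three-point finite difference approximation of $− d 2 / d x 2$ with Dirichlet boundary conditions on a uniform mesh of size $1 / M$ (assumed here to match the scheme in use), the discrete eigenvalues have the closed form $λ j M = 4 M 2 sin 2 ( j π / 2 M )$, and a Taylor expansion gives $λ j − λ j M ≈ j 4 π 4 / ( 12 M 2 )$:

```python
from math import pi, sin

def continuous_eig(j):
    # lambda_j = j^2 pi^2, eigenvalues of -d^2/dx^2 on (0, 1) with Dirichlet BCs
    return (j * pi) ** 2

def discrete_eig(j, M):
    # Closed-form eigenvalues of the three-point FD Laplacian (assumed scheme)
    return 4 * M**2 * sin(j * pi / (2 * M)) ** 2

M = 64
C = pi**4 / 12  # leading Taylor coefficient of lambda_j - lambda_j^M
for j in range(1, M):
    gap = continuous_eig(j) - discrete_eig(j, M)
    # |lambda_j - lambda_j^M| <= C j^4 / M^2, and the gap is always nonnegative
    assert 0 <= gap <= C * j**4 / M**2
```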
Case 1. For $0 < α < 1 / 2$, one gets, with $γ 1 = 2$,
$I 4 ( t ) ≤ C t 2 α − 4 α ∑ j = 1 M − 1 1 j 8 · j 8 M 4 ≤ C t − 2 α ∑ j = 1 M − 1 1 M 4 ≤ C t − 2 α Δ x 3 .$
Case 2. For $1 / 2 ≤ α ≤ 1$, one has, with $2 α − 2 α γ 1 = − 1 + ϵ , ϵ > 0$,
$I 4 ( t ) ≤ C t − 1 + ϵ 1 M 4 ∑ j = 1 M − 1 1 j 4 ( 2 α + 1 − ϵ ) / ( 2 α ) − 8 ≤ C t − 1 + ϵ 1 M 4 ∫ 1 M x − 4 ( 2 α + 1 − ϵ ) / ( 2 α ) + 8 d x ≤ C t − 1 + ϵ 1 M 4 M 9 − 4 ( 2 α + 1 − ϵ ) / ( 2 α ) ≤ C t − 1 + ϵ Δ x 4 ( 1 − ϵ ) / ( 2 α ) − 1 .$
Since the convergence order of $I 4 ( t )$ is higher than the order $Δ x r 1$, we have
$I 4 ( t ) ≤ C t − 1 + ϵ Δ x r 1 ,$
where $r 1$ is defined by (9). Therefore, we complete the proof of (A16).
For (A17), we have, using (A16),
$∫ 0 t ∫ 0 1 | G 1 ( s , x , y ) − G 1 M ( s , x , y ) | 2 d y d s ≤ C t δ Δ x r 1 .$
Combining these estimates completes the proof of Lemma A3. □

#### Appendix A.2. Green Function G 3 (t,x,y) and Its Approximation G 3 M (t,x,y)

In this subsection, we will consider the bounds of $G 3 ( t , x , y )$ and its approximation $G 3 M ( t , x , y )$ and the error bounds of $G 3 ( t , x , y ) − G 3 M ( t , x , y )$ in some suitable norms.
Lemma A4.
Assume that the Assumption 1 holds. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 3 ( t , x , y ) | 2 d y ≤ C t 2 ( α + γ − 1 ) − α / 2 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 3 ( s , x , y ) | 2 d y d s ≤ C t δ , 0 ≤ x ≤ 1 , if 2 ( 1 − 2 γ ) / α < 3 ,$
$∫ 0 1 | G 3 ( t , y , z ) − G 3 ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 3 , 0 ≤ y ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 3 ( s , y , z ) − G 3 ( s , k M ( y ) , z ) | 2 d z d s ≤ C t δ Δ x r 3 , 0 ≤ y ≤ 1 ,$
where $r 3$ is defined by (11).
Proof.
For (A19), we have
$∫ 0 1 | G 3 ( t , x , y ) | 2 d y = ∫ 0 1 | ∑ j = 1 ∞ t α + γ − 1 E α , α + γ ( − t α λ j ) φ j ( x ) φ j ( y ) | 2 d y .$
As ${ φ j ( y ) } j = 1 ∞$ is an orthonormal basis in $H = L 2 ( 0 , 1 )$ and $φ j ( x ) , j = 1 , 2 , ⋯$ are bounded, we get
$∫ 0 1 | G 3 ( t , x , y ) | 2 d y = ∑ j = 1 ∞ t 2 ( α + γ − 1 ) E α , α + γ 2 ( − t α λ j ) .$
By boundedness of the Mittag–Leffler function (2), we have, with $0 ≤ γ 1 ≤ 1$,
$∫ 0 1 | G 3 ( t , x , y ) | 2 d y ≤ C t 2 ( α + γ − 1 ) ∑ j = 1 ∞ 1 ( 1 + t α λ j ) 2 γ 1 ≤ C t 2 ( α + γ − 1 ) − α / 2 ∫ 0 ∞ 1 ( 1 + y 2 ) 2 γ 1 d y .$
Then, (A19) follows by choosing some $γ 1 ∈ ( 1 / 4 , 1 ]$ in (A24).
For (A20), we have, by (A19),
$∫ 0 t ∫ 0 1 | G 3 ( s , x , y ) | 2 d y d s ≤ C ∫ 0 t s 2 ( α + γ − 1 ) − α / 2 d s ≤ C t δ ,$
if $2 ( α + γ − 1 ) − α / 2 > − 1$, i.e., $2 ( 1 − 2 γ ) / α < 3$, with $δ = 2 ( α + γ − 1 ) − α / 2 + 1 > 0$.
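The $t$-scaling in (A24) can be checked numerically by replacing the Mittag–Leffler factor with its decay bound $( 1 + t α λ j ) − 1$: as $t → 0$, the lattice sum then behaves like $t 2 ( α + γ − 1 ) − α / 2$. In this Python sketch the parameters $α = 0.8 , γ = 0.5$ (so the predicted exponent is $0.2$) and the truncation level are illustrative:

```python
from math import pi, log

def kernel_bound_sum(t, alpha, gam, N=20000):
    # t^{2(alpha+gam-1)} * sum_j (1 + t^alpha lambda_j)^{-2}, lambda_j = j^2 pi^2
    s = sum((1.0 + t**alpha * (j * pi) ** 2) ** (-2) for j in range(1, N))
    return t ** (2 * (alpha + gam - 1)) * s

alpha, gam = 0.8, 0.5
t1, t2 = 1e-6, 1e-5
# Observed decay exponent over one decade of t
slope = (log(kernel_bound_sum(t2, alpha, gam) / kernel_bound_sum(t1, alpha, gam))
         / log(t2 / t1))
print(slope)  # close to the predicted exponent 2(alpha+gam-1) - alpha/2 = 0.2
```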
For (A21), we have
$∫ 0 1 | G 3 ( t , y , z ) − G 3 ( t , k M ( y ) , z ) | 2 d z = ∑ j = M ∞ t 2 ( α + γ − 1 ) E α , α + γ 2 ( − t α λ j ) φ j ( y ) − φ j ( k M ( y ) ) 2 + ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) E α , α + γ 2 ( − t α λ j ) φ j ( y ) − φ j ( k M ( y ) ) 2 = I 1 + I 2 .$
For $I 1$, we obtain, using the boundedness of $φ j ( y )$ and (2), with $0 ≤ γ 1 ≤ 1$,
$I 1 ≤ C ∑ j = M ∞ t 2 ( α + γ − 1 ) 1 1 + t α j 2 2 γ 1 ≤ C t 2 ( α + γ − 1 ) − 2 α γ 1 ∑ j = M ∞ 1 j 4 γ 1 .$
Case 1. If $2 γ − 1 < 0$, then, noting that, by Assumption 1, $γ + α > 1 / 2$, and choosing $γ 1 = 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α )$,
$I 1 ≤ C t − 1 + ϵ ∑ j = M ∞ 1 j 4 ( 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α ) ) ≤ C t − 1 + ϵ ∫ M ∞ 1 x 4 ( 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α ) ) d x ≤ C t − 1 + ϵ Δ x 3 − 2 ( 1 − 2 γ ) / α − 2 ϵ / α , if 0 < 2 ( 1 − 2 γ ) / α < 3 − 2 ϵ / α .$
Case 2. If $2 γ − 1 ≥ 0$, then, choosing $γ 1 = 1 − ϵ / ( 2 α )$,
$I 1 ≤ C t 2 ( α + γ − 1 ) − 2 α γ 1 ∑ j = M ∞ 1 j 4 γ 1 = C t 2 ( α + γ − 1 ) − 2 α ( 1 − ϵ / ( 2 α ) ) ∑ j = M ∞ 1 j 4 ( 1 − ϵ / ( 2 α ) ) ≤ C t ( 2 γ − 1 ) − 1 + ϵ ∫ M ∞ 1 x 4 ( 1 − ϵ / ( 2 α ) ) d x ≤ C t ( 2 γ − 1 ) − 1 + ϵ Δ x 3 − 2 ϵ / α .$
Thus, we get
$I 1 ≤ C t − 1 + ϵ Δ x 3 − 2 ( 1 − 2 γ ) / α − 2 ϵ / α , 0 < 2 ( 1 − 2 γ ) / α < 3 − 2 ϵ / α , 2 γ − 1 < 0 , C t ( 2 γ − 1 ) − 1 + ϵ Δ x 3 − 2 ϵ / α , 2 γ − 1 ≥ 0 .$
For $I 2$, we have
$I 2 = ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) E α , α + γ 2 ( − t α λ j ) φ j ( y ) − φ j ( k M ( y ) ) 2 ≤ C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) E α , α + γ 2 ( − t α λ j ) j M 2 .$
By (2), we get, with $0 ≤ γ 1 ≤ 1$,
$I 2 ≤ C t 2 ( α + γ − 1 ) − 2 α γ 1 1 M 2 ∑ j = 1 M − 1 1 j 4 γ 1 − 2 .$
Case 1. If $2 γ − 1 < 0$, then, noting that, by Assumption 1, $α + γ > 1 / 2$, and choosing $2 ( α + γ − 1 ) − 2 α γ 1 = − 1 + ϵ$, that is, $γ 1 = 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α )$,
$I 2 ≤ C t − 1 + ϵ 1 M 2 ∑ j = 1 M − 1 1 j 4 ( 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α ) ) − 2 ≤ C t − 1 + ϵ 1 M 2 ∫ 1 M x − 4 ( 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α ) ) + 2 d x ,$
which implies that
$I 2 ≤ C t − 1 + ϵ Δ x 2 , 2 ( 1 − 2 γ ) / α ≤ 1 − 2 ϵ / α , C t − 1 + ϵ Δ x 3 − 2 ( 1 − 2 γ ) / α − 2 ϵ / α , 1 − 2 ϵ / α ≤ 2 ( 1 − 2 γ ) / α < 3 − 2 ϵ / α .$
Case 2. If $2 γ − 1 ≥ 0$, then, choosing $γ 1 = 1 − ϵ / ( 2 α )$,
$I 2 ≤ C t 2 ( α + γ − 1 ) − 2 α γ 1 ∑ j = 1 M − 1 1 j 4 γ 1 j 2 M 2 ≤ C t 2 ( α + γ − 1 ) − 2 α ( 1 − ϵ / ( 2 α ) ) 1 M 2 ∑ j = 1 M − 1 1 j 4 ( 1 − ϵ / ( 2 α ) ) − 2 ≤ C t ( 2 γ − 1 ) − 1 + ϵ 1 M 2 ∫ 1 M x − 4 ( 1 − ϵ / ( 2 α ) ) + 2 d x ≤ C t ( 2 γ − 1 ) − 1 + ϵ Δ x 2 .$
Thus, we get
$I 2 ≤ C t − 1 + ϵ Δ x 2 , 2 ( 1 − 2 γ ) / α ≤ 1 − 2 ϵ / α , 2 γ − 1 < 0 , C t − 1 + ϵ Δ x 3 − 2 ( 1 − 2 γ ) / α − 2 ϵ / α , 1 − 2 ϵ / α ≤ 2 ( 1 − 2 γ ) / α < 3 − 2 ϵ / α , 2 γ − 1 < 0 , C t ( 2 γ − 1 ) − 1 + ϵ Δ x 2 , 2 γ − 1 ≥ 0 .$
Since the convergence order in $I 2$ is higher than the convergence order in $I 1$, we obtain
$∫ 0 1 | G 3 ( t , y , z ) − G 3 ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 3 ,$
where
$r 3 = 2 , 2 ( 1 − 2 γ ) / α ≤ 1 − 2 ϵ / α , 2 γ − 1 < 0 , 3 − 2 ( 1 − 2 γ ) / α − 2 ϵ / α , 1 − 2 ϵ / α ≤ 2 ( 1 − 2 γ ) / α < 3 − 2 ϵ / α , 2 γ − 1 < 0 , 2 , 2 γ − 1 ≥ 0 .$
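For quick reference, the piecewise rate above can be encoded as a small helper (a sketch; the function reads the exponents as $2 ( 1 − 2 γ ) / α$ and $2 ϵ / α$ and ignores the boundary cases between the branches):

```python
def spatial_rate_r3(alpha, gam, eps):
    # Predicted spatial convergence rate r_3 from the piecewise definition above
    if 2 * gam - 1 >= 0:
        return 2.0
    q = 2 * (1 - 2 * gam) / alpha  # the critical quantity 2(1 - 2 gamma)/alpha
    if q <= 1 - 2 * eps / alpha:
        return 2.0
    return 3.0 - q - 2 * eps / alpha

print(spatial_rate_r3(0.9, 0.6, 0.01))  # 2.0, since 2 gamma - 1 >= 0
print(spatial_rate_r3(0.9, 0.0, 0.01))  # reduced rate in the rough regime
```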
For (A22), we get, using (A21),
$∫ 0 t ∫ 0 1 | G 3 ( s , y , z ) − G 3 ( s , k M ( y ) , z ) | 2 d z d s ≤ C ∫ 0 t s − 1 + ϵ Δ x r 3 d s ≤ C t δ Δ x r 3 .$
Combining these estimates completes the proof of Lemma A4. □
Lemma A5.
Assume that the Assumption 1 holds. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 3 M ( t , x , y ) | 2 d y ≤ C t 2 ( α + γ − 1 ) − α / 2 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 3 M ( s , x , y ) | 2 d y d s ≤ C t δ , 0 ≤ x ≤ 1 , if 2 ( 1 − 2 γ ) / α < 3 ,$
$∫ 0 1 | G 3 M ( t , y , z ) − G 3 M ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 3 , 0 ≤ y ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 3 M ( s , y , z ) − G 3 M ( s , k M ( y ) , z ) | 2 d z d s ≤ C t δ Δ x r 3 , 0 ≤ y ≤ 1 ,$
where $r 3$ is defined by (11).
Proof.
The proof of Lemma A5 is similar to the proof of Lemma A2. We omit the proof here. □
Lemma A6.
Assume that the Assumption 1 holds. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 3 ( t , x , y ) − G 3 M ( t , x , y ) | 2 d y ≤ C t − 1 + ϵ Δ x r 3 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 3 ( s , x , y ) − G 3 M ( s , x , y ) | 2 d y d s ≤ C t δ Δ x r 3 , 0 ≤ x ≤ 1 ,$
where $r 3$ is defined by (11).
Proof.
For (A32), we have
$∫ 0 1 | G 3 ( t , x , y ) − G 3 M ( t , x , y ) | 2 d y ≤ C ∫ 0 1 | ∑ j = M ∞ t α + γ − 1 E α , α + γ ( − λ j t α ) φ j ( x ) φ j ( y ) | 2 d y + C ∫ 0 1 | ∑ j = 1 M − 1 t α + γ − 1 E α , α + γ ( − λ j t α ) φ j ( x ) φ j ( y ) − φ j ( k M ( y ) ) | 2 d y + C ∫ 0 1 | ∑ j = 1 M − 1 t α + γ − 1 E α , α + γ ( − λ j t α ) φ j ( x ) − φ j M ( x ) φ j ( k M ( y ) ) | 2 d y + C ∫ 0 1 | ∑ j = 1 M − 1 t α + γ − 1 E α , α + γ ( − λ j t α ) − t α + γ − 1 E α , α + γ ( − λ j M t α ) φ j M ( x ) φ j ( k M ( y ) ) | 2 d y = I 1 ( t ) + I 2 ( t ) + I 3 ( t ) + I 4 ( t ) .$
For $I 1 ( t )$, we have, by (A25),
$I 1 ( t ) = ∫ 0 1 | ∑ j = M ∞ t α + γ − 1 E α , α + γ ( − λ j t α ) φ j ( x ) φ j ( y ) | 2 d y = ∑ j = M ∞ t 2 ( α + γ − 1 ) E α , α + γ 2 ( − λ j t α ) ≤ C t − 1 + ϵ Δ x r 3 ,$
where $r 3$ is defined by (11).
For $I 2 ( t )$, we have
$I 2 ( t ) ≤ C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) E α , α + γ 2 ( − λ j t α ) j M 2 .$
By (A26), we get
$I 2 ( t ) ≤ C t − 1 + ϵ Δ x r 3 ,$
where $r 3$ is defined by (11).
For $I 3 ( t )$, we have, by (A26),
$I 3 ( t ) ≤ C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) E α , α + γ 2 ( − λ j t α ) j M 2 ≤ C t − 1 + ϵ Δ x r 3 ,$
where $r 3$ is defined by (11).
Finally, we consider $I 4 ( t )$. We have
$I 4 ( t ) = ∫ 0 1 | ∑ j = 1 M − 1 t α + γ − 1 ( E α , α + γ ( − t α λ j ) − E α , α + γ ( − t α λ j M ) ) φ j M ( x ) φ j ( k M ( y ) ) | 2 d y ≤ C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) | E α , α + γ ( − t α λ j ) − E α , α + γ ( − t α λ j M ) | 2 .$
Noting that
$d d t E α , α + γ ( − t α λ j ) = E α , α + γ ′ ( − t α λ j ) d d t ( − t α λ j ) ,$
we have, by Lemma 1,
$E α , α + γ ′ ( − t α λ j ) = d d t E α , α + γ ( − t α λ j ) / d d t ( − t α λ j ) = [ t − 1 E α , α + γ − 1 ( − t α λ j ) − ( α + γ − 1 ) t − 1 E α , α + γ ( − t α λ j ) ] / ( − α t α − 1 λ j ) = [ E α , α + γ − 1 ( − t α λ j ) − ( α + γ − 1 ) E α , α + γ ( − t α λ j ) ] / ( − α t α λ j ) .$
Therefore, by the mean value theorem,
$I 4 ( t ) ≤ ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) | E α , α + γ ( − t α λ j ) − E α , α + γ ( − t α λ j M ) | 2 = ∑ j = 1 M − 1 | t α + γ − 1 E α , α + γ ′ ( c ) t α ( λ j − λ j M ) | 2 ≤ C ∑ j = 1 M − 1 | t α + γ − 1 E α , α + γ ′ ( − t α λ j ) t α ( λ j − λ j M ) | 2 ≤ C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) t − 2 α λ j 2 E α , α + γ − 1 ( − t α λ j ) − ( α + γ − 1 ) E α , α + γ ( − t α λ j ) 2 | t α ( λ j − λ j M ) | 2 = C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) λ j 2 | E α , α + γ − 1 ( − t α λ j ) | 2 + | E α , α + γ ( − t α λ j ) | 2 ( λ j − λ j M ) 2 .$
Further, we have, by (3), with $0 ≤ γ 1 ≤ 1$,
$I 4 ( t ) ≤ C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) λ j 2 | 1 t α λ j | 2 γ 1 ( λ j − λ j M ) 2 ≤ C ∑ j = 1 M − 1 t 2 ( α + γ − 1 ) − 2 α γ 1 1 j 4 + 4 γ 1 j 8 M 4 = C t 2 ( α + γ − 1 ) − 2 α γ 1 ∑ j = 1 M − 1 1 j 4 γ 1 j 4 M 4 .$
Case 1. If $2 γ − 1 < 0$, then, choosing $2 ( α + γ − 1 ) − 2 α γ 1 = − 1 + ϵ$, that is, $γ 1 = 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α )$,
$I 4 ( t ) ≤ C t 2 ( α + γ − 1 ) − 2 α γ 1 ∑ j = 1 M − 1 1 j 4 γ 1 j 4 M 4 = C t − 1 + ϵ ∑ j = 1 M − 1 1 j 4 ( 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α ) ) j 4 M 4 ≤ C t − 1 + ϵ 1 M 4 ∫ 1 M x − 4 ( 1 + ( 2 γ − 1 ) / ( 2 α ) − ϵ / ( 2 α ) ) + 4 d x ≤ C t − 1 + ϵ Δ x 3 − 2 ( 1 − 2 γ ) / α − 2 ϵ / α .$
Case 2. If $2 γ − 1 ≥ 0$, then, choosing $γ 1 = 1 − ϵ / ( 2 α )$,
$I 4 ( t ) ≤ C t 2 ( α + γ − 1 ) − 2 α γ 1 ∑ j = 1 M − 1 1 j 4 γ 1 j 4 M 4 = C t 2 ( α + γ − 1 ) − 2 α ( 1 − ϵ / ( 2 α ) ) ∑ j = 1 M − 1 1 j 4 ( 1 − ϵ / ( 2 α ) ) j 4 M 4 = C t ( 2 γ − 1 ) − 1 + ϵ 1 M 4 ∑ j = 1 M − 1 j 2 ϵ / α ≤ C t ( 2 γ − 1 ) − 1 + ϵ 1 M 4 ∫ 1 M x 2 ϵ / α d x ≤ C t ( 2 γ − 1 ) − 1 + ϵ Δ x 3 − 2 ϵ / α .$
Thus, we get
$I 4 ( t ) ≤ C t − 1 + ϵ Δ x 3 − 2 ( 1 − 2 γ ) / α − 2 ϵ / α , 2 γ − 1 < 0 , C t ( 2 γ − 1 ) − 1 + ϵ Δ x 3 − 2 ϵ / α , 2 γ − 1 ≥ 0 .$
Since the convergence order of $I 4 ( t )$ is higher than the order $Δ x r 3$, we have
$I 4 ( t ) ≤ C t − 1 + ϵ Δ x r 3 ,$
where $r 3$ is defined by (11). Hence (A32) follows.
For (A33), we have, by (A32),
$∫ 0 t ∫ 0 1 | G 3 ( s , x , y ) − G 3 M ( s , x , y ) | 2 d y d s ≤ C t δ Δ x r 3 .$
Combining these estimates completes the proof of Lemma A6. □

#### Appendix A.3. Green Function G 2 (t,x,y) and Its Approximation G 2 M (t,x,y)

In this subsection, we will consider the bounds of $G 2 ( t , x , y )$ and its approximation $G 2 M ( t , x , y )$ and the error bounds of $G 2 ( t , x , y ) − G 2 M ( t , x , y )$ in some suitable norms. We obtain the following Lemmas A7–A9 by choosing $γ = 0$ in Lemmas A4–A6, respectively.
Lemma A7.
Let $0 < α ≤ 1$. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 2 ( t , x , y ) | 2 d y ≤ C t 2 ( α − 1 ) − α / 2 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 2 ( s , x , y ) | 2 d y d s ≤ C t δ , 0 ≤ x ≤ 1 , if 2 / α < 3 ,$
$∫ 0 1 | G 2 ( t , y , z ) − G 2 ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 2 , 0 ≤ y ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 2 ( s , y , z ) − G 2 ( s , k M ( y ) , z ) | 2 d z d s ≤ C t δ Δ x r 2 , 0 ≤ y ≤ 1 .$
Lemma A8.
Let $0 < α ≤ 1$. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 2 M ( t , x , y ) | 2 d y ≤ C t 2 ( α − 1 ) − α / 2 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 2 M ( s , x , y ) | 2 d y d s ≤ C t δ , 0 ≤ x ≤ 1 , if 2 / α < 3 ,$
$∫ 0 1 | G 2 M ( t , y , z ) − G 2 M ( t , k M ( y ) , z ) | 2 d z ≤ C t − 1 + ϵ Δ x r 2 , 0 ≤ y ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 2 M ( s , y , z ) − G 2 M ( s , k M ( y ) , z ) | 2 d z d s ≤ C t δ Δ x r 2 , 0 ≤ y ≤ 1 .$
Lemma A9.
Let $0 < α ≤ 1$. There exist some positive constant $δ > 0$ and sufficiently small $ϵ > 0$ such that, with $t > 0$,
$∫ 0 1 | G 2 ( t , x , y ) − G 2 M ( t , x , y ) | 2 d y ≤ C t − 1 + ϵ Δ x r 2 , 0 ≤ x ≤ 1 ,$
$∫ 0 t ∫ 0 1 | G 2 ( s , x , y ) − G 2 M ( s , x , y ) | 2 d y d s ≤ C t δ Δ x r 2 , 0 ≤ x ≤ 1 .$

## References

1. Gyöngy, I. Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space-time white noise I. Potential Anal. 1998, 9, 1–25. [Google Scholar] [CrossRef]
2. Walsh, J.B. Finite element methods for parabolic stochastic PDE’s. Potential Anal. 2005, 23, 1–43. [Google Scholar] [CrossRef]
3. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006. [Google Scholar]
4. Podlubny, I. Fractional differential equations. In Mathematics in Science and Engineering; Academic Press: San Diego, CA, USA, 1999; Volume 198. [Google Scholar]
5. Anton, R.; Cohen, D.; Quer-Sardanyons, L. A fully discrete approximation of the one-dimensional stochastic heat equation. IMA J. Numer. Anal. 2020, 40, 247–284. [Google Scholar] [CrossRef][Green Version]
6. Chen, Z.-Q.; Kim, K.-H.; Kim, P. Fractional time stochastic partial differential equations. Stoch. Process. Appl. 2015, 125, 1470–1499. [Google Scholar] [CrossRef]
7. Jin, B.; Yan, Y.; Zhou, Z. Numerical approximation of stochastic time-fractional diffusion. ESAIM M2AN Math. Model. Numer. Anal. 2019, 53, 1245–1268. [Google Scholar] [CrossRef][Green Version]
8. Anh, V.V.; Leonenko, N.N.; Ruiz-Medina, M. Space-time fractional stochastic equations on regular bounded open domains. Fract. Calc. Appl. Anal. 2016, 19, 1161–1199. [Google Scholar] [CrossRef][Green Version]
9. Chen, L. Nonlinear stochastic time-fractional diffusion equations on R: Moments, Holder regularity and intermittency. Trans. Am. Math. Soc. 2017, 369, 8497–8535. [Google Scholar] [CrossRef]
10. Chen, L.; Hu, Y.; Nualart, D. Nonlinear stochastic time-fractional slow and fast diffusion equations on Rd. Stochastic Process. Appl. 2019, 129, 5073–5112. [Google Scholar] [CrossRef]
11. Liu, W.; Röckner, M.; da Silva, J.L. Quasi-linear (stochastic) partial differential equations with time-fractional derivatives. SIAM J. Math. Anal. 2018, 50, 2588–2607. [Google Scholar] [CrossRef][Green Version]
12. Wu, X.; Yan, Y.; Yan, Y. An analysis of the L1 scheme for stochastic subdiffusion problem driven by integrated space-time white noise. Appl. Numer. Math. 2020, 157, 69–87. [Google Scholar] [CrossRef]
13. Gunzburger, M.; Li, B.; Wang, J. Convergence of finite element solution of stochastic partial integral-differential equations driven by white noise. Numer. Math. 2019, 141, 1043–1077. [Google Scholar] [CrossRef][Green Version]
14. Li, Y.; Wang, Y.; Deng, W. Galerkin finite element approximations for stochastic space-time fractional wave equations. SIAM J. Numer. Anal. 2017, 55, 3173–3202. [Google Scholar] [CrossRef][Green Version]
15. Li, X.; Yang, X. Error estimates of finite element methods for stochastic fractional differential equations. J. Comput. Math. 2017, 35, 346–362. [Google Scholar]
16. Zou, G. A Galerkin finite element method for time-fractional stochastic heat equation. Comput. Math. Appl. 2018, 75, 4135–4150. [Google Scholar] [CrossRef][Green Version]
17. Allen, E.J.; Novosel, S.J.; Zhang, Z. Finite element and difference approximation of some linear stochastic partial differential equations. Stoch. Stoch. Rep. 1998, 64, 117–142. [Google Scholar] [CrossRef]
18. Andersson, A.; Kovács, M.; Larsson, S. Weak error analysis for semilinear stochastic Volterra equations with additive noise. J. Math. Anal. Appl. 2016, 437, 1283–1304. [Google Scholar] [CrossRef][Green Version]
19. Andersson, A.; Larsson, S. Weak convergence for a spatial approximation of the nonlinear stochastic heat equation. Math. Comp. 2016, 85, 1335–1358. [Google Scholar] [CrossRef][Green Version]
20. Becker, S.; Jentzen, A. Strong convergence rates for nonlinearity-truncated Euler-type approximations of stochastic Ginzburg-Landau equations. Stoch. Process. Appl. 2019, 129, 28–69. [Google Scholar] [CrossRef][Green Version]
21. Bréhier, C.-E.; Cui, J.; Hong, J. Strong convergence rates of semidiscrete splitting approximations for the stochastic Allen-Cahn equation. IMA J. Numer. Anal. 2019, 39, 2096–2134. [Google Scholar] [CrossRef][Green Version]
22. Cao, Y.; Hong, J.; Liu, Z. Approximating stochastic evolution equations with additive white and rough noises. SIAM J. Numer. Anal. 2017, 55, 1958–1981. [Google Scholar] [CrossRef][Green Version]
23. Cui, J.; Hong, J. Strong and weak convergence rates of a spatial approximation for stochastic partial differential equation with one-sided Lipschitz coefficient. SIAM J. Numer. Anal. 2019, 57, 1815–1841. [Google Scholar] [CrossRef][Green Version]
24. Du, Q.; Zhang, T. Numerical approximation of some linear stochastic partial differential equations driven by special additive noises. SIAM J. Numer. Anal. 2002, 40, 142–1445. [Google Scholar] [CrossRef][Green Version]
25. Kovács, M.; Printems, J. Strong order of convergence of a fully discrete approximation of a linear stochastic Volterra type evolution equation. Math. Comp. 2014, 83, 2325–2346. [Google Scholar] [CrossRef][Green Version]
26. Kruse, R. Strong and weak approximation of semilinear stochastic evolution equations. In Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2014; Volume 2093. [Google Scholar]
27. Lord, G.J.; Powell, C.E.; Shardlow, T. An Introduction to Computational Stochastic PDEs; Number 50; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
28. Lord, G.J.; Tambue, A. A modified semi-implicit Euler-Maruyama scheme for finite element discretization of SPDEs with additive noise. Appl. Math. Comput. 2018, 332, 105–122. [Google Scholar] [CrossRef]
29. Qi, R.; Wang, X. Error estimates of semidiscrete and fully discrete finite element methods for the Cahan-Hilliard-Cook equation. SIAM J. Numer. Anal. 2020, 58, 1613–1653. [Google Scholar] [CrossRef]
30. Wang, X. Strong convergence rates of the linear implicit Euler method for the finite element discretization of SPDEs with additive noise. IMA J. Numer. Anal. 2016, 37, 965–984. [Google Scholar] [CrossRef]
31. Yan, Y. Galerkin finite element methods for stochastic parabolic partial differential equations. SIAM J. Numer. Anal. 2005, 43, 1363–1384. [Google Scholar] [CrossRef]

## Share and Cite

MDPI and ACS Style

Wang, J.; Hoult, J.; Yan, Y. Spatial Discretization for Stochastic Semi-Linear Subdiffusion Equations Driven by Fractionally Integrated Multiplicative Space-Time White Noise. Mathematics 2021, 9, 1917. https://doi.org/10.3390/math9161917
