Article

Strong Convergence of a Modified Euler-Maruyama Method for Mixed Stochastic Fractional Integro-Differential Equations with Local Lipschitz Coefficients

by Zhaoqiang Yang 1,2,* and Chenglong Xu 1
1 School of Mathematics, Shanghai University of Finance and Economics, Shanghai 200433, China
2 Library & School of Finance, Lanzhou University of Finance and Economics, Lanzhou 730101, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(5), 296; https://doi.org/10.3390/fractalfract9050296
Submission received: 31 March 2025 / Revised: 25 April 2025 / Accepted: 26 April 2025 / Published: 1 May 2025
(This article belongs to the Section Numerical and Computational Methods)

Abstract

This paper presents a modified Euler-Maruyama (EM) method for mixed stochastic fractional integro-differential equations (mSFIEs) with Caputo-type fractional derivatives whose coefficients satisfy local Lipschitz and linear growth conditions. First, we transform the mSFIEs into equivalent mixed stochastic Volterra integral equations (mSVIEs) using a fractional calculus technique. Then, we establish the well-posedness of the analytical solutions of the mSVIEs. After that, a modified EM scheme is formulated to approximate the numerical solutions of the mSVIEs, and its strong convergence is proven under local Lipschitz and linear growth conditions. Furthermore, we derive the convergence order of the modified EM scheme under the same conditions in the $L^2$ sense, which is consistent with the strong convergence result of the corresponding EM scheme. Notably, the strong convergence order under local Lipschitz conditions is inherently lower than the corresponding order under global Lipschitz conditions. Finally, numerical experiments are presented to demonstrate that our approach not only circumvents the restrictive integrability conditions imposed by singular kernels, but also achieves a rigorous convergence order in the $L^2$ sense.

1. Introduction

Mixed stochastic fractional integro-differential equations (mSFIEs) are essential tools for understanding certain system properties that cannot be captured within a deterministic framework; for example, such equations can capture the long-memory effects arising from macroeconomic factors or systemic trends [1,2,3].
Many numerical methods have been developed for SFIEs, such as the EM method [4,5,6,7], the stopped Euler-Maruyama method [3,8], the truncated Euler-Maruyama method [9,10,11], the Milstein method [12,13], the θ-Maruyama method [14,15], the explicit Euler method [16], and the implicit Euler method [17]. In particular, the authors of [3] considered a class of mixed SDEs driven by both Brownian motion and fractional Brownian motion (fBm) with the Hurst parameter $H \in (1/2, 1)$, and they obtained a convergence rate in terms of δ (the diameter of the partition) using a modified Euler method. In [4], the authors proved strong first-order superconvergence for linear SVIEs with convolution kernels when the kernel of the diffusion term vanishes. In [5], nonlinear SFIEs were considered under non-Lipschitz conditions, and the EM solutions of SFIDEs were shown to have strong first-order convergence. The authors of [7] introduced the initial value problem of Caputo-tempered SFIEs and proved the well-posedness of its solution, and the strong convergence order of the derived EM method was reported to be $\alpha - \frac{1}{2}$, with the fractional derivative's order $\alpha \in (1/2, 1)$. Additionally, a fast EM method based on the sum-of-exponentials approximation was developed. However, most SDE models in real-world applications do not satisfy the global Lipschitz condition in the analysis of numerical solutions, especially Caputo-type fractional SDEs [5,18,19], where the local Lipschitz condition alone is insufficient to guarantee the existence of a global solution [12,20,21]. In [21], the authors found that, under linear growth conditions (Khasminskii-type conditions), both the exact and numerical solutions obtained via the EM or stochastic theta method satisfy the moment-boundedness condition, thereby establishing the strong convergence of the numerical solutions to the exact solution under local Lipschitz and linear growth conditions [9,10,11]. As the classical explicit EM method has a simple structure, is not time-consuming, and has an acceptable convergence rate under the global Lipschitz condition, it has attracted significant attention [16,17,22].
Additionally, research on the above numerical methods for SFIEs or SVIEs has concentrated on convergence under global Lipschitz and linear growth conditions, but the numerical stability properties of SFIEs or Hölder continuous kernels under local Lipschitz and linear growth conditions are rarely discussed. Specifically, no results have been reported on the mean-square stability of the analytical solutions of mSFIEs with singular kernels. Based on the Caputo-type fractional SDE, we consider the following mSFIE in Itô's sense:
$$D^{\alpha} y(t) = k_0(t, y(t)) + \int_0^t k_1(t,s,y(s))\,ds + \int_0^t k_2(t,s,y(s))\,dW_s + \int_0^t k_3(t,s,y(s))\,dW_s^{H}, \quad t \in \mathcal{T}, \qquad y(0) = y_0,$$
where $D^{\alpha}$ represents the Caputo-type fractional derivative of order $\alpha \in (1/2, 1)$ on $\mathcal{T} \triangleq [0, T]$, $k_0 \in L^1(\mathcal{T} \times \mathbb{R}^d; \mathbb{R}^d)$, $k_1 \in L^1(\{(t,s): 0 \le s \le t \le T\} \times \mathbb{R}^d; \mathbb{R}^d)$, $k_2 \in L^2(\{(t,s): 0 \le s \le t \le T\} \times \mathbb{R}^d; \mathbb{R}^{d \times r})$, and $k_3 \in L^2(\{(t,s): 0 \le s \le t \le T\} \times \mathbb{R}^d; \mathbb{R}^{d \times r})$. $W_t$ is an $r$-dimensional standard Brownian motion on the complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a filtration $\{\mathcal{F}_t, t \ge 0\}$, where $\mathcal{F}_t$ is right-continuous and $\mathcal{F}_0$ contains all $\mathbb{P}$-null sets. $y_0$ is an $\mathcal{F}_0$-measurable $\mathbb{R}^d$-valued random variable defined on the same probability space that satisfies $\mathbb{E}|y_0|^2 < \infty$. $W_t^H$ is an fBm defined on $(\Omega, \mathcal{F}, \mathbb{P})$. As observed above, mSFIE (1) involves the Riemann-Liouville fractional integral operator and the fBm process, which makes it more complex than classical SDEs driven by Brownian motion and fBm. Two major difficulties arise when investigating stochastic equations driven by $W_t^H$, namely the presence of correlated increments and the absence of the martingale property, which compromise the validity of the classical convergence theorem [12] in numerical analysis. In addition, it is challenging to determine the stability properties of analytical and numerical solutions, necessitating further in-depth research.
To the best of our knowledge, few studies have investigated the convergence rate of the numerical solutions of mSFIE (1) under local Lipschitz and linear growth conditions. To overcome the above difficulties and obtain strong convergence rates in additive noise cases when $H \in (1/2, 1)$, we first relax the global Lipschitz condition to the local Lipschitz condition and prove the well-posedness, and then we study the strong convergence order of the numerical method. We first transform the mSFIEs into equivalent mSVIEs using a fractional calculus technique, and then we present the well-posedness of the analytical solutions of the mSVIEs. After that, a modified EM method is devised to approximate the numerical solutions of the mSVIEs, and its strong convergence is obtained under local Lipschitz and linear growth conditions. Furthermore, we derive the convergence order of the modified EM scheme under the same conditions in the $L^2$ sense, which is consistent with the strong convergence result of the corresponding EM scheme. Notably, the strong convergence order under local Lipschitz conditions is inherently lower than the corresponding order under global Lipschitz conditions. Finally, numerical experiments are presented to demonstrate that our approach not only circumvents the restrictive integrability conditions imposed by singular kernels, but also achieves a rigorous convergence order in the $L^2$ sense.
The structure of this paper is as follows. In Section 2, some basic notations, preliminary facts on stochastic integrals for fBm, and some special functions are given, and some mild hypotheses are formulated. In Section 3, we transform the mSFIE into an equivalent mSVIE using a fractional calculus technique and Malliavin calculus. In Section 4, we employ a modified EM approximation to study the well-posedness of the solution to the mSVIE. In Section 5, we derive the strong convergence order of the modified EM method under local Lipschitz and linear growth conditions in the mean-square sense. Numerical experiments are presented in Section 6. Finally, we end with a brief conclusion in Section 7.

2. Preliminaries

In this paper, $\mathbb{E}$ denotes the expectation corresponding to $\mathbb{P}$. Let $|\cdot|$ be the Euclidean norm $|x| = \sqrt{\sum_{i=1}^{d} x_i^2}$ on $\mathbb{R}^d$ and $\|\cdot\|$ the trace norm on $\mathbb{R}^{d \times r}$, i.e., $\|A\| = \sqrt{\operatorname{trace}(A^{T}A)}$ for a matrix $A \in \mathbb{R}^{d \times r}$. Consider a complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ that satisfies the usual conditions. For real numbers $a$, $b$, and $c$, we write $a \wedge b \wedge c := \min\{a, b, c\}$ and $a \vee b \vee c := \max\{a, b, c\}$. The following notations and preliminaries are provided in [23,24].
Definition 1 
([23,24]). Let $a, b \in \mathbb{R}$ with $a < b$, let $f \in L^1([a,b])$, and let $0 < \alpha < 1$. The $\alpha$-order left-sided Riemann-Liouville fractional integral of $f$ on $[a,b]$ is defined as
$$I_{a+}^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int_a^t (t-s)^{\alpha-1} f(s)\,ds,$$
and the $\alpha$-order right-sided Riemann-Liouville fractional integral of $f$ on $[a,b]$ is defined as
$$I_{b-}^{\alpha} f(t) = \frac{\exp\{i\pi\alpha\}}{\Gamma(\alpha)} \int_t^b (s-t)^{\alpha-1} f(s)\,ds,$$
where Γ ( · ) denotes the Gamma function.
Consider two continuous functions $f, g \in L^1([a,b])$ and $0 < \alpha < 1$. For almost all $t \in (a,b)$, we define the following fractional derivatives:
$$(D_{a+}^{\alpha} f_{a+})(t) = \frac{1}{\Gamma(1-\alpha)}\left[\frac{f(t)}{(t-a)^{\alpha}} + \alpha \int_a^t \frac{f(t)-f(s)}{(t-s)^{\alpha+1}}\,ds\right]\mathbf{1}_{(a,b)}(t), \qquad (D_{b-}^{1-\alpha} g_{b-})(t) = \frac{\exp\{i\pi\alpha\}}{\Gamma(\alpha)}\left[\frac{g(t)}{(b-t)^{1-\alpha}} + (1-\alpha)\int_t^b \frac{g(t)-g(\tau)}{(\tau-t)^{2-\alpha}}\,d\tau\right]\mathbf{1}_{(a,b)}(t).$$
Assume that $D_{a+}^{\alpha} f_{a+} \in L^1([a,b])$ and $D_{b-}^{1-\alpha} g_{b-} \in L^p([a,b])$, where $f_{a+}(t) = f(t) - f(a)$ and $g_{b-}(t) = g(t) - g(b)$. Under these assumptions, the generalized (fractional) Lebesgue-Stieltjes integral $\int_a^b f(t)\,dg(t)$ is defined as
$$\int_a^b f(t)\,dg(t) = \exp\{i\pi\alpha\} \int_a^b (D_{a+}^{\alpha} f_{a+})(t) \cdot (D_{b-}^{1-\alpha} g_{b-})(t)\,dt.$$
Note that, for all $0 < \varepsilon < H$, the fBm $W_t^H$ has $(H-\varepsilon)$-Hölder continuous paths. Then, for $f \in L^{\varepsilon}([a,b])$ and $1-H < \alpha < \varepsilon < 1/2$, the explicit expression in (2) becomes
$$\int_a^b f(t)\,dW_t^H = \exp\{i\pi\alpha\} \int_a^b (D_{a+}^{\alpha} f_{a+})(t) \cdot (D_{b-}^{1-\alpha} W_{b-}^{H})(t)\,dt,$$
where $W_{b-}^{H}(t) = W_b^H - W_t^H$.
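To make Definition 1 concrete, the following is a small self-contained Python sketch (our own illustration; the function name and the quadrature choice are not taken from the cited references). It approximates the left-sided integral $I_{a+}^{\alpha} f(t)$ after the substitution $u = (t-s)^{\alpha}$, which removes the weak singularity at $s = t$, and checks the result against the closed form $I_{0+}^{\alpha}[s](t) = t^{\alpha+1}/\Gamma(\alpha+2)$ for $f(s) = s$.

```python
import math

def rl_integral(f, a, t, alpha, n=4000):
    # Left-sided Riemann-Liouville integral I_{a+}^alpha f(t), computed after the
    # substitution u = (t - s)^alpha, which removes the weak singularity at s = t:
    # I = (1 / (alpha * Gamma(alpha))) * int_0^{(t-a)^alpha} f(t - u^(1/alpha)) du.
    if t <= a:
        return 0.0
    u_max = (t - a) ** alpha
    h = u_max / n
    total = 0.0
    for j in range(n):
        u = (j + 0.5) * h                  # midpoint rule in the regularized variable
        s = t - u ** (1.0 / alpha)
        total += f(s) * h
    return total / (alpha * math.gamma(alpha))

# Check against the closed form I_{0+}^alpha[f](t) = t^(alpha+1) / Gamma(alpha+2) for f(s) = s.
alpha, t = 0.7, 1.0
print(rl_integral(lambda s: s, 0.0, t, alpha), t ** (alpha + 1) / math.gamma(alpha + 2))
```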
Definition 2. 
We define the following norms for $\alpha \in (1-H, 1/2)$:
$$\|f\|_{0,\alpha;t} = \sup_{0 \le u < v < T}\left(\frac{|f_v - f_u|}{(v-u)^{1-\alpha}} + \int_u^v \frac{|f_u - f_z|}{(z-u)^{2-\alpha}}\,dz\right),$$
$$\|f\|_{2,\alpha;t}^{2} = \int_0^t \|f\|_{\alpha;s}^{2}\, g(t,s)\,ds, \qquad \|f\|_{\infty,\alpha;t} = \sup_{s \in [0,t]} \|f\|_{\alpha;s},$$
where $g(t,s) = s^{-\alpha} + (t-s)^{\alpha - 1/2}$, and
$$\|f\|_{\alpha;t} = |f_t| + \int_0^t \frac{|f_t - f_s|}{(t-s)^{1+\alpha}}\,ds, \qquad \|f\|_{\infty,0,T;\alpha} = \sup_{0 \le t \le T} |f_t| + \sup_{0 \le s < t \le T} \frac{|f_t - f_s|}{(t-s)^{\alpha}}.$$
Next, we formulate some necessary mild hypotheses, which will be used in the next section.
Assumption 1. 
There exists a constant $L_1 > 0$ such that, for any $t_1, t_2, s \in \mathcal{T}$ and $y \in \mathbb{R}^d$, the kernels $k_i$ $(i = 1, 2, 3)$ satisfy the following condition:
$$|k_1(t_1,s,y) - k_1(t_2,s,y)| + |k_2(t_1,s,y) - k_2(t_2,s,y)| + |k_3(t_1,s,y) - k_3(t_2,s,y)| \le L_1 (1 + |y|)\,|t_2 - t_1|.$$
Assumption 2. 
There exists a constant $L_2 > 0$ such that, for any $t, s_1, s_2 \in \mathcal{T}$ and $y \in \mathbb{R}^d$, the kernels $k_i$ $(i = 0, 1, 2, 3)$ satisfy the following condition:
$$|k_0(s_1,y) - k_0(s_2,y)| \vee |k_1(t,s_1,y) - k_1(t,s_2,y)| \vee |k_2(t,s_1,y) - k_2(t,s_2,y)| \vee |k_3(t,s_1,y) - k_3(t,s_2,y)| \le L_2 (1 + |y|)\,|s_2 - s_1|.$$
Assumption 3. 
There exists a constant $L_m > 0$ $(m \ge 1)$, depending on $m$, such that, for any $s, t \in \mathcal{T}$ and $y_1, y_2 \in \mathbb{R}^d$ with $|y_1| \vee |y_2| \le m$, the kernels $k_i$ $(i = 0, 1, 2, 3)$ satisfy the following condition:
$$|k_0(s,y_1) - k_0(s,y_2)| \vee |k_1(t,s,y_1) - k_1(t,s,y_2)| \vee |k_2(t,s,y_1) - k_2(t,s,y_2)| \vee |k_3(t,s,y_1) - k_3(t,s,y_2)| \le L_m\,|y_2 - y_1|.$$
Assumption 4. 
There exists a constant $L_4 > 0$ such that, for any $s, t \in \mathcal{T}$ and $y \in \mathbb{R}^d$, the kernels $k_i$ $(i = 0, 1, 2, 3)$ satisfy the following linear growth condition:
$$|k_0(s,y)| \vee |k_1(t,s,y)| \vee |k_2(t,s,y)| \vee |k_3(t,s,y)| \le L_4 (1 + |y|).$$
Remark 1. 
In Assumption 3, we point out that this local Lipschitz condition is significantly weaker than the following global Lipschitz condition; that is, there exists a constant $L > 0$ such that, for any $s, t \in \mathcal{T}$ and $y_1, y_2 \in \mathbb{R}^d$, the kernels $k_i$ $(i = 0, 1, 2, 3)$ satisfy the following condition:
$$|k_0(s,y_1) - k_0(s,y_2)| \vee |k_1(t,s,y_1) - k_1(t,s,y_2)| \vee |k_2(t,s,y_1) - k_2(t,s,y_2)| \vee |k_3(t,s,y_1) - k_3(t,s,y_2)| \le L\,|y_2 - y_1|.$$
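As a concrete illustration (a worked example of ours, not drawn from the cited references), the scalar kernel $\sin(y^2)$ used later in Example 1 satisfies the local Lipschitz condition of Assumption 3 but not the global condition above: for $|y_1| \vee |y_2| \le m$,
$$|\sin(y_1^2) - \sin(y_2^2)| \le |y_1^2 - y_2^2| = |y_1 + y_2|\,|y_1 - y_2| \le 2m\,|y_1 - y_2|,$$
so the local Lipschitz constant $L_m = 2m$ grows without bound as $m \to \infty$, and no single constant $L$ works on all of $\mathbb{R}$.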

3. An Equivalent mSVIE

In this section, an equivalent mSVIE of mSFIE (1) is formulated, following the corresponding deterministic fractional integral equation (see [25], Section 2.2.2, page 119; [26], Section 3.1.3, page 204; and integral Equation (2.14) of [27]). Applying the left-sided fractional integral operator $I_{0+}^{\alpha}$ of Definition 1 to both sides of (1) and using $I_{0+}^{\alpha} D^{\alpha} y(t) = y(t) - y_0$, the integral equation form of mSFIE (1) can be rigorously defined as the following mSVIE:
$$y(t) = y_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} k_0(\tau, y(\tau))\,d\tau + \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} \int_0^{\tau} k_1(\tau,s,y(s))\,ds\,d\tau + \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} \int_0^{\tau} k_2(\tau,s,y(s))\,dW_s\,d\tau + \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} \int_0^{\tau} k_3(\tau,s,y(s))\,dW_s^{H}\,d\tau.$$
Similar to the solution definition of the stochastic integral equation in [20] (Section 2.2. page 48, Definition 2.1), the solution of mSFIE (1) can be defined as follows.
Definition 3. 
An $\mathbb{R}^d$-valued stochastic process $\{y(t): t \in \mathcal{T}\}$ is called a solution of mSFIE (1) if it satisfies the following conditions:
(1) $\{y(t)\}$ is $\mathcal{F}_t$-adapted and continuous;
(2) $k_0 \in L^1(\mathcal{T} \times \mathbb{R}^d; \mathbb{R}^d)$, $k_1 \in L^1(\{(t,s): 0 \le s \le t \le T\} \times \mathbb{R}^d; \mathbb{R}^d)$, $k_2 \in L^2(\{(t,s): 0 \le s \le t \le T\} \times \mathbb{R}^d; \mathbb{R}^{d \times r})$, and $k_3 \in L^2(\{(t,s): 0 \le s \le t \le T\} \times \mathbb{R}^d; \mathbb{R}^{d \times r})$;
(3) mSVIE (4) holds for every $t \in \mathcal{T}$ with probability 1.
A solution $\{y(t)\}$ is said to be unique if any other solution $\{\tilde{y}(t)\}$ is indistinguishable from $\{y(t)\}$, that is,
$$\mathbb{P}\{y(t) = \tilde{y}(t) \ \text{for all}\ t \in \mathcal{T}\} = 1.$$
Under Assumption 1, we can apply the stochastic Fubini theorem (ref. [2], Theorem 1.13.1, page 57) to mSVIE (4); then,
$$y(t) = y_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} k_0(\tau, y(\tau))\,d\tau + \frac{1}{\Gamma(\alpha)} \int_0^t \left[\int_s^t (t-\tau)^{\alpha-1} k_1(\tau,s,y(s))\,d\tau\right] ds + \frac{1}{\Gamma(\alpha)} \int_0^t \left[\int_s^t (t-\tau)^{\alpha-1} k_2(\tau,s,y(s))\,d\tau\right] dW_s + \frac{1}{\Gamma(\alpha)} \int_0^t \left[\int_s^t (t-\tau)^{\alpha-1} k_3(\tau,s,y(s))\,d\tau\right] dW_s^{H}.$$
We let
$$K_0(t,s,y(s)) = \frac{1}{\Gamma(\alpha)} \int_s^t (t-\tau)^{\alpha-1}\, k_0(s, y(s))\,d\tau, \qquad K_1(t,s,y(s)) = \frac{1}{\Gamma(\alpha)} \int_s^t (t-\tau)^{\alpha-1}\, k_1(\tau,s,y(s))\,d\tau,$$
$$K_2(t,s,y(s)) = \frac{1}{\Gamma(\alpha)} \int_s^t (t-\tau)^{\alpha-1}\, k_2(\tau,s,y(s))\,d\tau, \qquad K_3(t,s,y(s)) = \frac{1}{\Gamma(\alpha)} \int_s^t (t-\tau)^{\alpha-1}\, k_3(\tau,s,y(s))\,d\tau,$$
and then mSVIE (5) is equivalent to the following mSVIE:
$$y(t) = y_0 + \int_0^t K_0(t,s,y(s))\,ds + \int_0^t K_1(t,s,y(s))\,ds + \int_0^t K_2(t,s,y(s))\,dW_s + \int_0^t K_3(t,s,y(s))\,dW_s^{H}.$$
For technical reasons, we need the following auxiliary lemmas.
Lemma 1. 
Let $g: \mathcal{T} \to \mathbb{R}^d$ be an $\varepsilon$-Hölder continuous function. We define $g_c(t) = \frac{1}{c} \int_{0 \vee (t-c)}^{t} g(\tau)\,d\tau$ for $c > 0$ and $t_1, t_2, s_1, s_2 \in \mathcal{T}$. Then, for $\alpha \in (1-\varepsilon, 1)$, there exists a constant $C > 0$ such that
$$\|g(t) - g_c(t)\|_{p,0,T;\alpha} \le C\, L_{\varepsilon}(g)\, c^{\varepsilon + \alpha - 1}, \quad t \in \mathcal{T},$$
where
$$L_{\varepsilon}(g) = \sup_{0 \le |s_1 - s_2| < |t_1 - t_2| \le T} \frac{|g(|t_1 - t_2|) - g(|s_1 - s_2|)|}{(|t_1 - t_2| - |s_1 - s_2|)^{\varepsilon}}$$
is the $\varepsilon$-Hölder constant of $g(t)$.
The proof can be seen in Appendix A.
Lemma 2. 
Under Assumptions 1–4, for any $\alpha \in (1-H, \frac{1}{2})$, mSVIE (7) has a unique solution $y(t)$ such that $\{y(t), t \in \mathcal{T}\} \in L_0^{\alpha,p}(\mathcal{T}, \mathbb{R}^d)$ a.s. Furthermore, for any $0 < \eta < \frac{1}{2}$ and $s \le t \in \mathcal{T}$, there exists a constant $C > 0$ such that
$$\|y(t) - y_c(t) - y(s) + y_c(s)\|_{p,0,T;\alpha} \le C\, L_{2\alpha+(1-2\eta)}(y)\, c^{2\alpha+(1-2\eta)},$$
where
$$L_{2\alpha+(1-2\eta)}(y) = \sup_{0 \le s < t \le T} \frac{\|y(t) - y(s)\|}{(t-s)^{2\alpha+(1-2\eta)}}$$
is the $[2\alpha+(1-2\eta)]$-Hölder constant of $y(t)$.
The proof can be seen in Appendix B.

4. Existence and Uniqueness of Solution to mSVIE (7)

In this section, we employ a modified EM approximation with the aim of proving the existence, uniqueness, and stability of the solution to mSVIE (7).

4.1. A Modified EM Scheme

For every integer $N \ge 1$, we let $\mathcal{T}_N \triangleq \{t_n \triangleq \frac{nT}{N} = nh: n = 0, 1, \ldots, N\}$ be a given uniform mesh on $\mathcal{T}$. Then, we can define a stopping time $\tau_N = T \wedge \inf\{t: \|W\|_{0,t} \ge N\}$ and a stopped process $W_t^N = W_{t \wedge \tau_N}$. The solution of mSVIE (7) with $W$ replaced by $W^N$ is denoted by $y^N$. For $t = t_n$, mSVIE (7) becomes
$$y(t_n) = y_0 + \int_0^{t_n} K_0(t_n,s,y(s))\,ds + \int_0^{t_n} K_1(t_n,s,y(s))\,ds + \int_0^{t_n} K_2(t_n,s,y(s))\,dW_s + \int_0^{t_n} K_3(t_n,s,y(s))\,dW_s^{H} = y_0 + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_0(t_n,s,y(s))\,ds + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_1(t_n,s,y(s))\,ds + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_2(t_n,s,y(s))\,dW_s + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_3(t_n,s,y(s))\,dW_s^{H} \approx y_0 + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_0(t_n,s,y(t_i))\,ds + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_1(t_n,s,y(t_i))\,ds + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_2(t_n,s,y(t_i))\,dW_s + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_3(t_n,s,y(t_i))\,dW_s^{H},$$
and
$$y^N(t_n) = y_0^N + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_0(t_n,s,y^N(t_i))\,ds + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_1(t_n,s,y^N(t_i))\,ds + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_2(t_n,s,y^N(t_i))\,dW_s + \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} K_3(t_n,s,y^N(t_i))\,dW_s^{H},$$
for $n = 1, \ldots, N$ and $y^N(t_0) = y_0$. Let $\hat{y}^N(t) = \sum_{n=0}^{N} y^N(t_n)\, I_{[t_n, t_{n+1})}(t)$, $t \in \mathcal{T}$. Then, the modified EM scheme is as follows:
$$y^N(t) = y_0 + \int_0^t K_0(t,s,\hat{y}^N(s))\,ds + \int_0^t K_1(t,s,\hat{y}^N(s))\,ds + \int_0^t K_2(t,s,\hat{y}^N(s))\,dW_s + \int_0^t K_3(t,s,\hat{y}^N(s))\,dW_s^{H}.$$
Obviously, the $[2\alpha+(1-2\eta)]$-Hölder continuous trajectory $y^N(t)$ satisfies $y^N(t_n) = \hat{y}^N(t_n)$ for $n = 0, 1, \ldots, N$.

4.2. Existence and Uniqueness

Lemma 3. 
Under Assumption 4, there exists a constant $C_N > 0$, independent of $N$, such that for any $p \ge 2/\theta$, $\theta \in (0, \alpha + H - \frac{1}{2})$,
$$\mathbb{E}[\|y^N(t)\|_{\infty,0,T;\alpha}^{p}] \le C_N \quad \text{and} \quad \mathbb{E}[\|\hat{y}^N(t)\|_{\infty,0,T;\alpha}^{p}] \le C_N, \quad t \in \mathcal{T},$$
where $N \ge 1$.
The proof can be seen in Appendix C.
Lemma 4. 
Under Assumptions 1 and 4, for any integer $p \ge 2/\theta$, $\theta \in (0, \alpha + H - \frac{1}{2})$, there exists a constant $C_N > 0$ such that
$$\mathbb{E}[\|y^N(t) - y^N(t')\|_{\infty,0,T;\alpha}^{p}] \le C_N\, |t - t'|^{\alpha p}, \quad 0 \le t' < t \le T,$$
where $N \ge 1$.
The proof can be seen in Appendix D.
Theorem 1. 
Based on Assumptions 1, 3, and 4, mSFIE (1) has a unique solution $y(t)$, and for any positive integer $p \ge 2/\theta$, $\theta \in (0, \alpha + H - \frac{1}{2})$, it satisfies
$$\mathbb{E}[|y(t)|^{p}] < \infty, \quad t \in \mathcal{T}.$$
The proof is provided in Appendix E.

5. Strong Convergence Analysis of the Modified EM Approximation

Note that the numerical approximation provided by EM Scheme (11) in Section 4 incurs significant computational overhead due to the stochastic fractional integrals. In this section, we propose a modified version of Euler-Maruyama (EM) Scheme (11) that reduces the computational complexity while preserving the desired strong convergence rates.
Given the setting of scheme (11), we design the modified version by using a left-endpoint approximation [4]:
$$Y(t) = y_0 + \int_0^t K_0(t, \underline{s}, \hat{Y}(s))\,ds + \int_0^t K_1(t, \underline{s}, \hat{Y}(s))\,ds + \int_0^t K_2(t, \underline{s}, \hat{Y}(s))\,dW_s + \int_0^t K_3(t, \underline{s}, \hat{Y}(s))\,dW_s^{H},$$
where $\underline{s} = t_n$ for $s \in [t_n, t_{n+1})$ and $\hat{Y}(t) = \sum_{n=0}^{N} Y(t_n)\, I_{[t_n, t_{n+1})}(t)$. Our modified EM method can be defined as follows:
$$Y_n := Y(t_n) = y_0 + \sum_{j=0}^{n-1} K_0(t_n, t_j, Y_j)\,h + \sum_{j=0}^{n-1} K_1(t_n, t_j, Y_j)\,h + \sum_{j=0}^{n-1} K_2(t_n, t_j, Y_j)\,\Delta W_j + \sum_{j=0}^{n-1} K_3(t_n, t_j, Y_j)\,\Delta W_j^{H}, \quad 0 \le n \le N, \qquad Y_0 = y_0,$$
where $\Delta W_j = W_{t_{j+1}} - W_{t_j}$ and $\Delta W_j^{H} = W_{t_{j+1}}^{H} - W_{t_j}^{H}$, $j = 0, 1, \ldots, N-1$. Obviously, we only simulate $\Delta W_j$ and $\Delta W_j^{H}$ without computing stochastic integrals. Note that the fBm increment on the interval $[j\Delta t, (j+1)\Delta t)$ can be discretized using a binomial approach (see [28]), such as
$$\Delta W_{j\Delta t}^{H} = \begin{cases} \sqrt{(j+1)^{2H} - j^{2H}}\;(\Delta t)^{H}, & \text{with probability } 1/2, \\ -\sqrt{(j+1)^{2H} - j^{2H}}\;(\Delta t)^{H}, & \text{with probability } 1/2, \end{cases}$$
and the fBm value on the interval $[0, i\Delta t]$ is computed as
$$W_{i\Delta t}^{H} = \sum_{j=0}^{i-1} \Delta W_{j\Delta t}^{H}, \quad \text{with } W_0^{H} = 0 \text{ and } i = 1, \ldots, n,$$
which effectively reduces the calculations.
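To make the recursion concrete, the following is a minimal Python sketch of scheme (14), an illustration under stated assumptions rather than the authors' implementation: the transformed kernels $K_0, \ldots, K_3$ are assumed to be supplied as callables $K_i(t_n, t_j, Y_j)$ for a scalar state, and the fBm increments follow the binomial form above, whose signed square-root magnitude makes the increments over $[0, i\Delta t]$ sum to variance $(i\Delta t)^{2H}$.

```python
import math
import random

def modified_em(K0, K1, K2, K3, y0, T, N, H, rng=None):
    # Hedged sketch of the modified EM scheme (14) for a scalar state:
    # Y_n = y0 + sum_j [K0 + K1](t_n, t_j, Y_j) h + K2(t_n, t_j, Y_j) dW_j + K3(t_n, t_j, Y_j) dW^H_j.
    rng = rng or random.Random(0)
    h = T / N
    t = [n * h for n in range(N + 1)]
    dW = [rng.gauss(0.0, math.sqrt(h)) for _ in range(N)]            # Brownian increments
    dWH = [(1.0 if rng.random() < 0.5 else -1.0)                      # binomial fBm increments (assumed form)
           * math.sqrt((j + 1) ** (2 * H) - j ** (2 * H)) * h ** H
           for j in range(N)]
    Y = [y0] * (N + 1)
    for n in range(1, N + 1):
        acc = y0
        for j in range(n):                                            # left-endpoint sums over past nodes
            acc += (K0(t[n], t[j], Y[j]) + K1(t[n], t[j], Y[j])) * h
            acc += K2(t[n], t[j], Y[j]) * dW[j]
            acc += K3(t[n], t[j], Y[j]) * dWH[j]
        Y[n] = acc
    return t, Y
```

In this sketch only the increments $\Delta W_j$ and $\Delta W_j^{H}$ are simulated, so a full trajectory costs $O(N^2)$ kernel evaluations and no stochastic fractional integrals are computed, which reflects the reduction in effort described above.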

5.1. The Mean-Square Convergence Theorem of the Modified EM Method (14)

To analyze the strong convergence of the modified EM method (14), the boundedness of the numerical solution can be established using the following lemma.
Lemma 5. 
Under the same conditions as Lemma 2, for any $t \in [t_n, t_{n+1}]$, $n = 1, 2, \ldots, N-1$,
$$\|y(t \vee \underline{s}) - y_c(t_n \vee \underline{s})\|_{p,0,T;\alpha} \le C\, L_{2\alpha+(1-2\eta)}(y)\, h^{2\alpha+(1-2\eta)},$$
$$\|y(t \vee \underline{s}) - y_c(t \vee s)\|_{p,0,T;\alpha} \le C\, L_{2\alpha+(1-2\eta)}(y)\, h^{2\alpha+(1-2\eta)},$$
where
$$L_{2\alpha+(1-2\eta)}(y) = \sup_{c \ge 0} \frac{\|y(c)\|}{c^{2\alpha+(1-2\eta)}}$$
is the $[2\alpha+(1-2\eta)]$-Hölder constant of $y(t)$.
The proof can be seen in Appendix F.
Lemma 6. 
Under Assumption 4, for any $t \in \mathcal{T}$, there exists a constant $C > 0$, independent of $h$, such that for any integer $p \ge 2/\theta$, $\theta \in (0, \alpha + H - \frac{1}{2})$,
$$\mathbb{E}[\|Y(t)\|_{\infty,0,T;\alpha}^{p}] \le C, \qquad \mathbb{E}[\|\hat{Y}(t)\|_{\infty,0,T;\alpha}^{p}] \le C.$$
The proof is similar to that of Lemma 4.
Lemma 7. 
Under Assumptions 1 and 4, for any $t \in \mathcal{T}$, there exists a constant $C > 0$, independent of $h$, such that
$$\mathbb{E}[\|Y(t) - \hat{Y}(t)\|_{\infty,0,T;\alpha}^{2}] \le C\, h^{2\alpha}.$$
The proof can be seen in Appendix G.
Next, we study the mean-square convergence of the modified EM method (14) under Assumptions 3 and 4. More details on the properties of the local Lipschitz condition can be found in Remark 2.1 of [29]. Notice that $L_m$ is an increasing function of $m$, and we need to control $L_m$ as $m \to \infty$. Therefore, we let $\Delta > 0$ be sufficiently small and choose a strictly positive decreasing function $\nu: (0, \Delta] \to (0, \infty)$ such that
$$\lim_{h \to 0} \nu(h) = \infty, \qquad \lim_{h \to 0} L_{\nu(h)}^{2}\, h^{O} = 0,$$
where O denotes the order of the modified EM method under the global Lipschitz condition (Remark 1).
Theorem 2. 
Based on Assumptions 3 and 4, for an arbitrary constant $\zeta \in (0, 2)$ and $p > 2/\theta$, $\theta \in (0, \alpha + H - \frac{1}{2})$, we assume that there exists an $h$ satisfying
$$\nu(h) \ge \left[L_{\nu(h)}^{2}\, h^{O}\right]^{-\frac{p}{2(p-2)}}, \quad \text{where } O = \begin{cases} \min\{2-\zeta,\; 2\alpha+1-2\theta\}, & \alpha - \theta = 1/2, \\ \min\{2,\; 2\alpha+1-2\theta\}, & \alpha - \theta \ne 1/2, \end{cases}$$
for any $h < \Delta$, and there exists a constant $C > 0$, independent of $h$. Then, the modified EM solution $Y(t)$ given by (14) converges to the exact solution $y(t)$; that is,
$$\mathbb{E}[|Y(t) - y(t)|^{2}] \le C\, L_{\nu(h)}^{2}\, h^{O}, \quad \text{for all } t \in \mathcal{T}.$$
Proof. 
We define the error as
e ( t ) = Y ( t ) y ( t )
and the stopping time as
t = inf { t 0 : | Y ( t ) | m } .
According to Theorem 1 and Lemma 7, for any δ > 0 , we have
E [ e ( t ) 2 , 0 , T ; α 2 ] = E [ e ( t ) 1 { t > T , τ m > T } 2 ] + E [ e ( t ) 1 { [ t > T ] [ τ m > T ] } 2 ] E [ e ( t t τ m ) 1 { ( t τ m ) > T } 2 ] + 2 δ E [ | e ( t ) | p ] p + p 2 p δ 2 / ( p 2 ) P { [ t > T ] [ τ m > T ] } E [ e ( t t τ m ) 1 { ( t τ m ) > T } 2 ] + 2 δ · 2 p ε p + p 2 p δ 2 / ( p 2 ) · 2 ε m p ( Young s   inequality ) E [ | e ( t t τ m ) | 2 ] + δ 2 p + 1 ε p + 2 ( p 2 ) ε p δ 2 / ( p 2 ) m p ,
where the constant ε > 0 is independent of δ and m. Furthermore, we can estimate E [ | e ( t t τ m ) | 2 ] in the last expression of inequality (16). With a Hölder—type inequality, we have
E [ | e ( t t τ m ) | 2 ] 8 E { | 0 t t τ m [ K 0 ( t t τ m , s ̲ , Y ^ ( s ) ) K 0 ( t t τ m , s , Y ^ ( s ) ) ] d s | 2 + | 0 t t τ m [ K 0 ( t t τ m , s , Y ^ ( s ) ) K 0 ( t t τ m , s , y ( s ) ) ] d s | 2 + | 0 t t τ m [ K 1 ( t t τ m , s ̲ , Y ^ ( s ) ) K 1 ( t t τ m , s , Y ^ ( s ) ) ] d s | 2 + | 0 t t τ m [ K 1 ( t t τ m , s , Y ^ ( s ) ) K 1 ( t t τ m , s , y ( s ) ) ] d s | 2 + | 0 t t τ m [ K 2 ( t t τ m , s ̲ , Y ^ ( s ) ) d W s K 2 ( t t τ m , s , Y ^ ( s ) ] ) d W s | 2 + | 0 t t τ m [ K 2 ( t t τ m , s , Y ^ ( s ) ) d W s K 2 ( t t τ m , s , y ( s ) ] ) d W s | 2 + | 0 t t τ m [ K 3 ( t t τ m , s ̲ , Y ^ ( s ) ) d W s H K 3 ( t t τ m , s , Y ^ ( s ) ) ] d W s H | 2 } + | 0 t t τ m [ K 3 ( t t τ m , s , Y ^ ( s ) ) d W s H K 3 ( t t τ m , s , y ( s ) ) ] d W s H | 2 } = : 8 { K 1 + K 2 + K 3 + K 4 + K 5 + K 6 + K 7 + K 8 } .
Using the Cauchy-Schwarz-type inequality and the Itô isometry, under Assumptions 1, 2, and 4 and Lemmas 6 and 7, we have
$$K_1 + K_3 + K_5 + K_7 \le C\, h^{2\alpha}.$$
With the Cauchy-Schwarz-type inequality and the Itô isometry, under Assumption 3 and combined with Theorem 1, we have
K 2 + K 4 + K 6 + K 8 C L m 2 0 t t τ m [ ( t t τ m ) s ] α 1 E [ | Y ( s ) Y ^ ( s ) | 2 + | e ( s t τ m ) | 2 ] d s .
Applying Lemma 7 and the weakly singular Gronwall-type inequality (ref. [30], Theorem 3.3.1, page 349), we have
$$\mathbb{E}[|e(t \wedge \tau_m)|^{2}] \le \begin{cases} C_m\, h^{\min\{2-\zeta,\; 2\alpha+1-2\theta\}}, & \alpha - \theta = 1/2, \\ C_m\, h^{\min\{2,\; 2\alpha+1-2\theta\}}, & \alpha - \theta \ne 1/2. \end{cases}$$
Note that the constant C m depends on m, and it is independent of h and δ . Then, (16) becomes
E [ e ( t ) 2 , 0 , T ; α 2 ] C m h 2 α + δ 2 p + 1 ε p + 2 ( p 2 ) ε p δ 2 / ( p 2 ) m p .
We then choose $h$, $\delta = L_{\nu(h)}^{2}\, h^{\min\{2,\; 2\alpha+1-2\theta\}}$, and
$$m = \left[L_{\nu(h)}^{2}\, h^{\min\{2,\; 2\alpha+1-2\theta\}}\right]^{-\frac{p}{2(p-2)}} \vee \nu(\Delta).$$
Next, letting $h \to 0$, for any given $\varepsilon > 0$, we have
E [ e ( t ) 2 , 0 , T ; α 2 ] C m h 2 α + δ 2 p + 1 ε p + 2 ( p 2 ) ε p δ 2 / ( p 2 ) m p ε 3 + ε 3 + ε 3 = ε .
Hence, for arbitrary $t \in \mathcal{T}$,
$$\lim_{h \to 0} \mathbb{E}[|Y(t) - y(t)|^{2}] = 0,$$
and the proof is complete. □
Remark 2. 
The limit expression in (15) indicates the strong convergence of the modified EM method (14). Unfortunately, it is challenging to derive the exact order of strong convergence since $L_{\nu(h)} \to \infty$. The precise orders of strong convergence can only be obtained under the global Lipschitz condition. Using Theorem 2, we investigate the strong convergence order under the local Lipschitz condition, which is inherently lower than the strong convergence order $O/2$ under the global Lipschitz condition.

5.2. Strong Convergence Order Analysis

In order to analyze the strong convergence order of the modified EM method (14), the following theorem is used to demonstrate the computational efficiency of the numerical scheme.
Theorem 3. 
Based on Assumptions 3 and 4, let $p > 2/\theta$, $\theta \in (0, \alpha + H - \frac{1}{2})$, and suppose that there exists an $h$ satisfying
$$\nu(h) \ge \left[L_{\nu(h)}^{2}\, h^{O}\right]^{-\frac{p}{2(p-2)}}, \quad \text{where } O = \begin{cases} \min\{2-\zeta,\; 2\alpha+1-2\theta\}, & \alpha - \theta = 1/2, \\ \min\{2,\; 2\alpha+1-2\theta\}, & \alpha - \theta \ne 1/2. \end{cases}$$
Then, there exists a constant $C > 0$, independent of $h$ and $\epsilon$, such that
$$\mathbb{E}[|Y(t) - y(t)|^{2}] \le C\, L_{\nu(h)}^{2}\, (h^{O} + \epsilon^{2}), \quad \text{for all } t \in \mathcal{T}.$$
Proof. 
With mSVIE (7) and (13), we have
E [ | Y ( t ) y ( t ) | 2 ] 8 E { | 0 t [ K 0 ( t , s ̲ , Y ^ ( s ) ) K 0 ( t , s , Y ^ ( s ) ) ] d s | 2 + | 0 t [ K 0 ( t , s , Y ^ ( s ) ) K 0 ( t , s , y ( s ) ) ] d s | 2 + | 0 t [ K 1 ( t , s ̲ , Y ^ ( s ) ) K 1 ( t , s , Y ^ ( s ) ) ] d s | 2 + | 0 t [ K 1 ( t , s , Y ^ ( s ) ) K 1 ( t , s , y ( s ) ) ] d s | 2 + | 0 t [ K 2 ( t , s ̲ , Y ^ ( s ) ) d W s K 2 ( t , s , Y ^ ( s ) ) ] d W s | 2 + | 0 t [ K 2 ( t , s , Y ^ ( s ) ) K 2 ( t , s , y ( s ) ) ] d W s | 2 + | 0 t [ K 3 ( t , s ̲ , Y ^ ( s ) ) K 3 ( t , s , Y ^ ( s ) ) ] d W s H | 2 } + | 0 t [ K 3 ( t , s , Y ^ ( s ) ) d K 3 ( t , s , y ( s ) ) ] d W s H | 2 } = : 8 { K 1 + K 2 + K 3 + K 4 + K 5 + K 6 + K 7 + K 8 } .
Using derivation steps similar to those for inequalities (18) and (19), we also have
$$K_1 + K_3 + K_5 + K_7 \le C\, h^{2\alpha},$$
and
K 2 + K 4 + K 6 + K 8 C L m 2 0 t E [ | Y ( s ) Y ^ ( s ) | 2 + sup s T | e ( s t ) | 2 ] ( t s ) 1 α d s ,
where $e(t) = Y(t) - y(t)$ and the stopping time is the same as before. Notice that
E [ | e ( T ) | 2 ] E [ | e ( T ) | 2 1 ( t τ m ) > T ] + E [ | e ( T ) | 2 1 ( t τ m ) T ] E [ | e ( T t τ m ) | 2 ] + E [ | e ( T ) | 2 1 ( t τ m ) T ] .
Using the Young-type inequality with $p > 2/\theta$, we have
E [ | e ( T ) | 2 1 ( t τ m ) T ] 2 δ E | e ( T ) | p p + p 2 p δ 2 / ( p 2 ) P { ( t τ m ) T } .
According to Theorem 2, Lemma 7, and the BDG inequality, we obtain
E [ | e ( T ) | p ] C a n d P { ( t τ m ) T } C m 2 .
Thus, we choose $\delta = L_{\nu(h)}^{2}\, h^{\min\{2,\; 2\alpha+1-2\theta\}}$ and
m = [ L ν ( h ) 2 h min { 2 ζ , 2 α + 1 2 θ } ] p 2 ( p 2 ) , α θ = 1 / 2 , [ L ν ( h ) 2 h min { 2 , 2 α + 1 2 θ } ] p 2 ( p 2 ) , α θ 1 / 2 . ν ( Δ ) .
Then, we have
$$\mathbb{E}[|e(T)|^{2}] \le C\, L_{\nu(h)}^{2}\, (h^{O} + \epsilon^{2}), \quad \text{for all } T \in \mathcal{T}.$$

6. Numerical Experiments

In this section, we consider two examples to verify the strong convergence orders of our modified EM scheme (11). We characterize the mean-square errors at the terminal time $t_N$ as
$$E_{h,T} = \frac{1}{6500} \sum_{i=1}^{6500} \left|Y^{h}(T, \omega_i) - Y^{h/2}(T, \omega_i)\right|^{2},$$
where $\omega_i$ denotes the $i$th single sample path. Furthermore, the computed convergence order is defined as
$$\text{Order} = \frac{\log(E_{h,N} / E_{h/2,N})}{\log 2}.$$
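The two formulas above translate directly into code. The following is a small Python sketch (our own illustration, with hypothetical argument names): it assumes that the terminal values $Y^{h}(T,\omega_i)$ and $Y^{h/2}(T,\omega_i)$ have already been computed over the same 6500 sample paths and are passed in as parallel lists.

```python
import math

def ms_error(Y_h, Y_h_half):
    # Empirical mean-square error E_{h,T}: average of |Y^h(T, w_i) - Y^{h/2}(T, w_i)|^2 over paths.
    return sum((a - b) ** 2 for a, b in zip(Y_h, Y_h_half)) / len(Y_h)

def observed_order(E_h, E_h_half):
    # Computed convergence order: log(E_{h,N} / E_{h/2,N}) / log 2.
    return math.log(E_h / E_h_half) / math.log(2.0)
```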
Example 1. 
We consider the following one-dimensional mSVIE:
$$y(t) = y_0 + \int_0^t (t-s)^{\alpha} \sin(y^{2}(s))\,ds + \int_0^t (t-s)^{\alpha} \cos(y^{2}(s))\,dW_s + \int_0^t (t-s)^{\alpha} \cos(y^{2}(s))\,dW_s^{H}, \quad t \in [0,1], \qquad y(0) = y_0 = 1.$$
It is obvious that $K_1 = \sin(y^{2}(s))$, $K_2 = \cos(y^{2}(s))$, and $K_3 = \cos(y^{2}(s))$ are locally Lipschitz continuous with $L_m = 2m$ and satisfy the linear growth conditions. For $p > 2/\theta$, we choose
$$\nu(x) = x^{-\frac{\min\{2,\; 2\alpha+1-2\theta\}}{4(p-1)}}, \quad x < 1.$$
We let $h \to 0$, so that $\nu(h) \to \infty$, and
$$L_{\nu(h)}^{2}\, h^{\min\{2,\; 2\alpha+1-2\theta\}} = [2\nu(h)]^{2}\, h^{\min\{2,\; 2\alpha+1-2\theta\}} \le C\, h^{-\frac{\min\{2,\; 2\alpha+1-2\theta\}}{2(p-1)}}\, h^{\min\{2,\; 2\alpha+1-2\theta\}} \le C\, h^{\frac{2p-3}{2(p-1)} \min\{2,\; 2\alpha+1-2\theta\}} \to 0.$$
As $h \to 0$ and $p \to \infty$, the limit expression in (15) holds, and
$$\left[L_{\nu(h)}^{2}\, h^{\min\{2,\; 2\alpha+1-2\theta\}}\right]^{-\frac{p}{2(p-2)}} \le C\, h^{-\frac{p \cdot \min\{2,\; 2\alpha+1-2\theta\}}{4(p-1)}} \le \nu(h),$$
which means that $\nu(h)$ satisfies the condition $\nu(h) \ge [L_{\nu(h)}^{2} h^{O}]^{-\frac{p}{2(p-2)}}$ in Theorem 3. Figure 1 shows the mean-square errors, which become smaller as $h$ decreases. Furthermore, we find that the Hurst parameter $H$ has a significant impact on the convergence order, in that an increase in $H$ results in a higher convergence order. To be specific, when $\alpha = 0.9$, the strong convergence order of our modified EM method is close to 1, and for the cases $H = 0.8$ and $H = 0.9$, the strong convergence orders of our modified EM method are also close to 1. To put this result in context, we recall Example 6.1 of [5], which is similar to our Example 1. Ref. [5] verified that the explicit EM method applied to SFIDEs has strong first-order convergence in the mean-square sense. They also pointed out that their convergence rate relies on the global Lipschitz condition, and that proving the convergence rate under non-Lipschitz conditions is still an open problem. In our test, with $\theta = 0.5$ fixed and $\alpha$ varying, we find that an increase in $\alpha$ results in a higher convergence order. The convergence orders with fixed $\theta = 0.3$ and varying $\alpha$ are shown in Figure 2 and are consistent with the theoretical analysis. These numerical results verify that the strong convergence order under the local Lipschitz condition is inherently lower than the strong convergence order $O/2$ under the global Lipschitz condition.
Example 2. 
We consider the one-dimensional mixed fractional Volterra O-U (Ornstein-Uhlenbeck) equation (see also [31,32]), which is a special case of mSVIE (7) with Hölder continuous kernels. A mixed fractional Volterra Ornstein-Uhlenbeck equation is defined by
$$y(t) = y_0 + \int_0^t K(t-s)\,(b_0 + b_1 y(s))\,ds + \int_0^t K(t-s)\,\sigma_1\,dW_s + \int_0^t K(t-s)\,\sigma_2\,dW_s^{H},$$
where $b_0, b_1, \sigma_1, \sigma_2 \in \mathbb{R}$ and
$$K(t-s) = \frac{(t-s)^{\alpha - \frac{1}{2}}}{\Gamma(\alpha + \frac{1}{2})}, \quad \alpha > \frac{1}{2}.$$
According to Theorems 2 and 3, the convergence order is $O/2$. We choose the following parameters:
$$y_0 = 1, \quad b_0 = 1, \quad b_1 = 0.3, \quad \sigma_1 = 0.1, \quad \sigma_2 = 0.05, \quad \alpha = 0.6, \quad H = 0.65.$$
The computed results are shown in Figure 3, which demonstrates that our modified EM method for the mixed fractional Volterra O-U equation can achieve strong first-order convergence when α = 0.9 .
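For readers who want to reproduce a trajectory, the following is a hedged usage sketch that reuses the `modified_em` function from the sketch in Section 5, applied to this mixed fractional Volterra O-U equation with the parameters listed above (taken as printed); the kernel function and the lambda wrappers are our own illustrative names, and the mapping of the single drift term to $K_0$ with $K_1 \equiv 0$ is a modelling choice of this sketch.

```python
import math

# Mixed fractional Volterra O-U kernel K(t - s) = (t - s)^(alpha - 1/2) / Gamma(alpha + 1/2).
alpha, H = 0.6, 0.65
b0, b1, sigma1, sigma2, y0 = 1.0, 0.3, 0.1, 0.05, 1.0

def K(t, s):
    return 0.0 if t <= s else (t - s) ** (alpha - 0.5) / math.gamma(alpha + 0.5)

K0 = lambda t, s, y: K(t, s) * (b0 + b1 * y)   # drift kernel (b0 + b1*y)
K1 = lambda t, s, y: 0.0                        # no second Volterra drift term in this example
K2 = lambda t, s, y: K(t, s) * sigma1           # Brownian diffusion kernel
K3 = lambda t, s, y: K(t, s) * sigma2           # fBm diffusion kernel

t_grid, Y = modified_em(K0, K1, K2, K3, y0, T=1.0, N=256, H=H)
```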
Remark 3. 
Examples 1 and 2 show that the behavior of the convergence rate of our modified EM scheme under the local Lipschitz condition is much more intricate than under the global Lipschitz condition. Since our error $e(t)$ consists of two components, namely the error of the modified EM method and the error of the Monte Carlo method, it is challenging to derive the exact orders of strong convergence under the local Lipschitz condition; indeed, the exact orders may not be visible at all. The precise orders of strong convergence can only be obtained under the global Lipschitz condition. We only show that the strong convergence order under the local Lipschitz condition is inherently lower than the strong convergence order $O/2$ under the global Lipschitz condition. Proving the convergence rate under non-Lipschitz conditions remains an open problem.

7. Conclusions

In this study, we relaxed the global Lipschitz condition to the local Lipschitz condition and established the strong convergence order of a modified EM method for mSFIEs. We first transformed the mSFIEs into equivalent mSVIEs using a fractional calculus technique, and then proved the well-posedness of the analytical solutions to mSFIEs with weakly singular kernels. Moreover, a modified EM method was developed for numerically solving mSVIEs, and the strong convergence of its solutions, together with their well-posedness, was proven under local Lipschitz and linear growth conditions. Furthermore, we obtained the accurate convergence order of this method under the same conditions in the mean-square sense. Notably, the strong convergence order under local Lipschitz conditions is inherently lower than the corresponding order under global Lipschitz conditions. Finally, numerical experiments were presented to demonstrate that our approach not only circumvents the restrictive integrability conditions imposed by singular kernels, but also achieves a rigorous convergence order in the $L^2$ sense.

Author Contributions

Z.Y.: conceptualization, methodology, software, validation, formal analysis, investigation, writing—original draft preparation, project administration, and funding acquisition; C.X.: methodology, writing—review and editing, visualization, and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

The authors are supported financially by the National Natural Science Foundation of China (No. 72361016); the Financial Statistics Research Integration Team of Lanzhou University of Finance and Economics (No. XKKYRHTD202304); and the General Scientific Research Project of Lzufe (Nos. Lzufe2017C-09 and 2019C-009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No data are included in this article.

Acknowledgments

The authors would like to thank the anonymous referee and the editor for their helpful comments and valuable suggestions that led to several important improvements.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. The Proof of Lemma 1

We let $g(0) = 0$ and $g(x) = 0$ for $x < 0$. Taking any $t_1, t_2, s_1, s_2 \in \mathcal{T}$, for $||t_1 - t_2| - |s_1 - s_2|| \ge c$, there exists a constant $C > 0$ such that
| g ( | t 1 t 2 | ) g c ( | t 1 t 2 | ) g ( | s 1 s 2 | ) + g c ( | s 1 s 2 | ) | = 1 c | | t 1 t 2 | c | t 1 t 2 | [ g ( | t 1 t 2 | ) g ( u ) ] d u | s 1 s 2 | c | s 1 s 2 | [ g ( | s 1 s 2 | ) g ( v ) ] d v | L ε ( g ) 1 c | | t 1 t 2 | c | t 1 t 2 | ( | t 1 t 2 | ) ε d u | | | s 1 s 2 | c | s 1 s 2 | ( | s 1 s 2 | ) ε d v | C L ε ( g ) c ε ,
and for $||t_1 - t_2| - |s_1 - s_2|| < c$, we also have
| g ( | t 1 t 2 | ) g c ( | t 1 t 2 | ) g ( | s 1 s 2 | ) + g c ( | s 1 s 2 | ) | | g ( | t 1 t 2 | ) g ( | s 1 s 2 | ) | + 1 c | c 0 [ g ( | t 1 t 2 | + u ) g ( | s 1 s 2 | + u ) ] d u | C L ε ( g ) | | t 1 t 2 | | s 1 s 2 | | ε ,
and hence,
| g ( | t 1 t 2 | ) g c ( | t 1 t 2 | ) g ( | s 1 s 2 | ) + g c ( | s 1 s 2 | ) | C L ε ( g ) c | | t 1 t 2 | | s 1 s 2 | | ε ,
that is,
g ( t ) g c ( t ) p , 0 , T ; α sup 0 u < v T | g ( u ) g c ( u ) g ( v ) + g c ( v ) | ( v u ) 1 α + sup 0 u < v T | u v | g ( u ) g c ( u ) g ( x ) + g c ( x ) | ( x u ) 2 α d x | C L ε ( g ) sup 0 u < v T c | v u | ε ( v u ) 1 α + sup 0 u < v T u v [ c ( x u ) ] ε ( x u ) 2 α d x C L ε ( g ) c ε + α 1 + sup 0 u < v T [ c ( x u ) ] ε + α 1 C L ε ( g ) c ε + α 1 .

Appendix B. The Proof of Lemma 2

Let s < t T . We write (8) as
y ( t ) y c ( s ) p , 0 , T ; α C c ε + 2 α 1 [ | 0 . K 0 ( t , s , y ( s ) ) d s | + | 0 . K 1 ( t , s , y ( s ) ) d s | + | 0 . K 2 ( t , s , y ( s ) ) d W s | + | 0 . K 3 ( t , s , y ( s ) ) d W s H | ] .
Obviously,
| 0 . K 0 ( t , s , y ( s ) ) d s | 0 t | K 0 ( t , s , y ( s ) ) | d s + 0 t s t | K 0 ( t , u , y ( u ) ) | d u 1 ( t s ) α d s C 0 t ( 1 + | y ( s ) | ) d s + 0 t s t ( 1 + | y ( u ) | ) d u 1 ( t s ) α d s C 1 + 0 t y ( s ) p , 0 , T ; α ( t s ) α d s ,
and
| 0 . K 1 ( t , s , y ( s ) ) d s | 0 t | K 1 ( t , s , y ( s ) ) | d s + 0 t s t | K 1 ( t , u , y ( u ) ) | d u 1 ( t s ) α + 1 d s C 0 t ( 1 + | y ( s ) | ) d s + 0 t s t ( 1 + | y ( u ) | ) d u 1 ( t s ) α + 1 d s C 1 + 0 t | y ( s ) | d s + 0 t | y ( u ) | ( t u ) α + 1 d u C 1 + 0 t y ( s ) p , 0 , T ; α ( t s ) α + 1 d s .
Furthermore,
| 0 . K 2 ( t , s , y ( s ) ) d W s | 0 t | K 2 ( t , s , y ( s ) ) | d W s + 0 t s t | K 2 ( t , u , y ( u ) ) | d W u 1 ( t s ) α + ( 1 2 η ) d s C 0 t ( 1 + | y ( s ) | ) d s + 0 t s t ( 1 + | y ( u ) | ) d u 1 ( t s ) α + ( 1 2 η ) d s C 1 + 0 t | y ( s ) | d s + 0 t | y ( u ) | ( t u ) α + ( 1 2 η ) d u C 1 + 0 t y ( s ) p , 0 , T ; α ( t s ) α + ( 1 2 η ) d s C L α + ( 1 2 η ) ( y ) ( t s ) α + ( 1 2 η ) ,
where
L α + ( 1 2 η ) ( y ) = sup 0 s < t T y ( t ) y ( s ) ( t s ) α + ( 1 2 η )
is the [ α + ( 1 2 η ) ] -Hölder constant of y ( t ) .
Similarly,
| 0 . K 3 ( t , s , y ( s ) ) d W s H | 0 t | K 3 ( t , s , y ( s ) ) | d W s H + 0 t s t | K 3 ( t , u , y ( u ) ) | d W u H 1 ( t s ) α + ( 1 2 η ) d s C { 0 t 1 + | y ( s ) | ( t s ) α + ( 1 2 η ) + 0 s | y ( s ) y ( u ) | ( s u ) α + ( 1 2 η ) d u d s + 0 t s t 1 + | y ( v ) | ( v s ) α + ( 1 2 η ) + s v | y ( v ) y ( v ) | ( v v ) α + ( 1 2 η ) d v 1 ( t s ) α + ( 1 2 η ) d s } C 1 + 0 t y ( s ) p , 0 , T ; α ( t s ) α + ( 1 2 η ) d s + 0 t s t y ( v ) p , 0 , T ; α ( v s ) α + ( 1 2 η ) d v 1 ( t s ) α + ( 1 2 η ) d s C 1 + 0 t y ( s ) p , 0 , T ; α ( t s ) α + ( 1 2 η ) d s + 0 t y ( s ) p , 0 , T ; α ( t s ) 2 α + ( 1 2 η ) d s C L 2 α + ( 1 2 η ) ( y ) ( t s ) 2 α + ( 1 2 η ) ,
where
L 2 α + ( 1 2 η ) ( y ) = sup 0 s < t T y ( t ) y ( s ) ( t s ) 2 α + ( 1 2 η )
is the [ 2 α + ( 1 2 η ) ] Hölder constant of y ( t ) .
Combining the above estimates, for | t s | < c , we have
y ( t ) y c ( s ) p , 0 , T ; α C L 2 α + ( 1 2 η ) ( y ) [ c ( t s ) ] 2 α + ( 1 2 η ) C L 2 α + ( 1 2 η ) ( y ) c 2 α + ( 1 2 η ) .

Appendix C. The Proof of Lemma 3

We first consider the boundedness of y N ( t ) 2 , 0 , T ; α 2 . Using Lemma 2 and the Gronwall—type inequality, for | t s | < c , we have
E [ y N ( t ) 2 , 0 , T ; α 2 ] C N 2 1 + 0 t y N ( u ) 2 , 0 , T ; α 2 g ( t , s ) | t s | α + ( 1 2 η ) d u + 0 t y N ( v ) 2 , 0 , T ; α 2 g ( t , s ) | t s | 2 α + ( 1 2 η ) d v C N L 2 α + ( 1 2 η ) ( y N ) ( c | t s | ) 2 α + ( 1 2 η ) C N L 2 α + ( 1 2 η ) ( y N ) c 2 α + ( 1 2 η ) ,
where
L 2 α + ( 1 2 η ) ( y N ) = sup 0 s < t T y N ( t ) y N ( s ) ( t s ) 2 α + ( 1 2 η )
is the [ 2 α + ( 1 2 η ) ] -Hölder constant of y N ( t ) . Then, according to Lemma 1.17.1 in [2] (page 88, the estimates for fractional derivatives of fBm and the Wiener process via the Garsia—Rodemich—Rumsey inequality), we can assume that, for arbitrary θ ( 0 , α + H 1 2 ) ,
| 0 t f s d W s H | C ξ θ N ( r ) | t r | 1 2 θ , r T , ξ θ N ( t ) = 0 t 0 t | v v K 2 ( t , s , y N ( s ) ) d W s | 2 / θ | v v | 1 / θ d v d v θ / 2 .
Then, for p 2 / θ , we have
| s t K 2 ( t , s , y N ( s ) ) d W s | p C N [ E sup t T | 0 t K 2 ( t , s , y N ( s ) ) d W s | p + E sup t T 0 t 0 t | v v K 2 ( t , s , y N ( s ) ) d W s | p | v v | p / 2 d v d v ] C N 0 t E | K 2 ( t , s , y N ( s ) ) | 2 d s p / 2 + 0 t 0 t | v v E [ K 2 ( t , s , y N ( s ) ) ] 2 d s | p / 2 | v v | p / 2 d v d v C N C + C E ( | ξ θ N ( t ) | p ) sup t T 0 t 1 ( c | t s | ) ( 1 2 θ ) + α d s p .
Obviously,
y N ( t ) , 0 , T ; α p C N sup s T | s t K 2 ( t , s , y N ( s ) ) d W s | p .
Next, we prove the case for p > 2 / θ . We define the following stopping time:
τ N , ι = T inf { t : y N , ι ( t ) t ι } ,
where the integer ι 1 , τ N , ι T as ι . We set y N , ι ( t ) = y N , ι ( t τ N , ι ) , y ^ N , ι ( t ) = y ^ N , ι ( t τ N , ι ) for all t T . Under Assumption 4 and (A2), we have
E [ y N , ι ( t ) 2 , 0 , T ; α p ] 5 p 1 Γ p ( α ) { Γ p ( α ) E [ | y 0 | p ] + E | 0 t τ N , ι [ ( t τ N , ι ) s ] α 1 k 0 ( s , y ^ N , ι ( s ) ) d s | p + E | 0 t τ N , ι [ ( t τ N , ι ) s ] α 1 sup s u [ t τ N , ι ] | k 1 ( u , s , y ^ N , ι ( s ) ) | d s | p + E | 0 t τ N , ι [ ( t τ N , ι ) s ] 2 ( α + 1 2 θ ) sup s u [ t τ N , ι ] | k 2 ( u , s , y ^ N , ι ( s ) ) | 2 d s | p / 2 + E | 0 t τ N , ι [ ( t τ N , ι ) s ] 2 ( α + 1 2 θ ) sup s u [ t τ N , ι ] | k 3 ( u , s , y ^ N , ι ( s ) ) | 2 d s | p / 2 C N { 1 + 0 t τ N , ι [ ( t τ N , ι ) s ] 2 ( α + 1 2 θ ) E [ ξ θ N ( t τ N , ι ) p ] · sup t T 0 t 1 | ( t τ N , ι ) s | ( 1 2 θ ) + α d s p } C N 1 + 0 t τ N , ι [ ( t τ N , ι ) s ] 2 ( α + 1 2 θ ) E [ | y ^ N , ι ( s ) | p ] d s ,
where C N is independent of N and ι . Then, for | t s | < c , we have
sup 0 ι t E [ y N , ι ( ι ) 2 , 0 , T ; α p ] C N 1 + sup 0 ι t 0 ι τ N , ι [ c ( ( ι τ N , ι ) s ) ] 2 ( α + 1 2 θ ) sup 0 ζ s E [ | y ^ N , ι ( ζ ) | p ] d s .
Using the weakly singular Gronwall—type inequality [30] (Theorem 3.3.1. page 349), we have
E [ | y ^ N , ι ( t ) | p ] C N ,
and according to Fatou's Lemma, letting $\iota \to \infty$, (A3) implies that
E [ | y ^ N ( t ) | p ] C N a n d E [ | y N ( t ) | p ] C N .
For p = 2 / θ , by using the Cauchy—Schwarz—type inequality, as well as Assumption 4 and (A2), we can obtain the results.

Appendix D. The Proof of Lemma 4

Using mSVIE (11), we have
E [ y N ( t ) y N ( t ) , 0 , T ; α p ] 4 p 1 { E | 0 t K 0 ( t , s , y ^ N ( s ) ) d s 0 t K 0 ( t , s , y ^ N ( s ) ) d s | p + E | 0 t K 1 ( t , s , y ^ N ( s ) ) d s 0 t K 1 ( t , s , y ^ N ( s ) ) d s | p
+ E | 0 t K 2 ( t , s , y ^ N ( s ) ) d W s 0 t K 2 ( t , s , y ^ N ( s ) ) d W s | p + E | 0 t K 3 ( t , s , y ^ N ( s ) ) d W s H 0 t K 3 ( t , s , y ^ N ( s ) ) d W s H | p } = : 4 p 1 { K 0 + K 1 + K 2 + K 3 } .
Applying the Hölder—type inequality, Assumption 1, Assumption 4, Lemma 2, and Lemma 3, we have
K 0 2 p 1 { E | 0 t [ ( t s ) α 1 ( t s ) α 1 ] · k 0 ( s , y ^ N ( s ) ) d s | p + E | t t ( t s ) α 1 k 0 ( s , y ^ N ( s ) ) d s | p } C { | 0 t [ ( t s ) α 1 ( t s ) α 1 ] d s | p 1 · 0 t | ( t s ) α 1 ( t s ) α 1 | · [ 1 + E ( | y ^ N ( s ) | p ) ] d s + | t t ( t s ) α 1 d s | p 1 · t t ( t s ) α 1 [ 1 + E ( | y ^ N ( s ) | p ) ] d s C N | t t | α p ,
and
K 1 C N | t t | [ 1 ( α + 1 2 θ ) ] p .
Applying the Burkholder—Davis—Gundy (BDG)—type inequality (Theorem 7.3 of [20], page 40), we also have
K 2 C { E | 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 sup s u t | k 2 ( u , s , y ^ N ( s ) ) d s | 2 | p / 2 + E [ | 0 t ( t s ) 2 ( α + 1 2 θ ) sup 0 u 1 | k 2 ( ( t s ) u + s , s , y ^ N ( s ) ) k 2 ( ( t s ) u + s , s , y ^ N ( s ) ) d s | 2 | p / 2 ] + E | 0 t ( t s ) 2 ( α + 1 2 θ ) sup s u t | k 2 ( u , s , y ^ N ( s ) ) | 2 d s | p / 2 } C { | 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 d s | ( p 2 ) / 2 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 · [ 1 + E ( | y ^ N ( s ) | p ) ] d s + | t t | 2 | 0 t ( t s ) 2 ( α + 1 2 θ ) d s | ( p 2 ) / 2
· 0 t ( t s ) 2 ( α + 1 2 θ ) [ 1 + E ( | y ^ N ( s ) | p ) ] d s + | 0 t ( t s ) 2 ( α + 1 2 θ ) d s | ( p 2 ) / 2 · t t ( t s ) 2 ( α + 1 2 θ ) [ 1 + E ( | y ^ N ( s ) | p ) ] d s } C | t t | ( 2 ϵ ) p / 2 , α θ = 1 / 2 C | t t | [ 2 ( 2 α + 1 2 θ ) ] p / 2 , α θ 1 / 2 .
Observing that θ ( 0 , α + H 1 2 ) , if θ = 1 / 2 , then 0 < 2 2 α < 1 , and for ϵ = 2 2 α ,
K 2 C N | t t | α p .
Using (A2), we also estimate K 3 as follows:
K 3 C { E | 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 sup s u t | k 3 ( u , s , y ^ N ( s ) ) d s | 2 | p / 2 + E [ | 0 t ( t s ) 2 ( α + 1 2 θ ) sup 0 u 1 | k 3 ( ( t s ) u + s , s , y ^ N ( s ) ) k 3 ( ( t s ) u + s , s , y ^ N ( s ) ) d s | 2 | p / 2 ] + E | 0 t ( t s ) 2 ( α + 1 2 θ ) sup s u t | k 3 ( u , s , y ^ N ( s ) ) | 2 d s | p / 2 } C { E [ | 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 sup s u t | K 2 ( u , s , y ^ N ( s ) ) d s | 2 | p / 2 + 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 · sup s u t 0 t 0 t | v v E [ K 2 ( u , s , y N ( s ) ) ] 2 d s | p / 2 | v v | p / 2 d v d v ] + E [ | 0 t ( t s ) 2 ( α + 1 2 θ ) sup 0 u 1 | K 2 ( ( t s ) u + s , s , y ^ N ( s ) ) K 2 ( ( t s ) u + s , s , y ^ N ( s ) ) d s | 2 | p / 2 ] + E [ | 0 t ( t s ) 2 ( α + 1 2 θ ) sup s u t | K 2 ( u , s , y ^ N ( s ) ) | 2 d s | p / 2
+ 0 t ( t s ) 2 ( α + 1 2 θ ) sup s u t 0 t 0 t | v v E [ K 2 ( u , s , y N ( s ) ) ] 2 d s | p / 2 | v v | p / 2 d v d v ] } C { | 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 d s | ( p 2 ) / 2 · 0 t [ ( t s ) α 1 ( t s ) α 1 ] 2 · [ 1 + E ( | y ^ N ( s ) | p ) + E ( | ξ θ N ( s ) | p ) ] d s + | t t | 2 | 0 t ( t s ) 2 ( α + 1 2 θ ) d s | ( p 2 ) / 2
· 0 t ( t s ) 2 ( α + 1 2 θ ) [ 1 + E ( | y ^ N ( s ) | p ) + E ( | ξ θ N ( s ) | p ] d s + | 0 t ( t s ) 2 ( α + 1 2 θ ) d s | ( p 2 ) / 2 · t t ( t s ) 2 ( α + 1 2 θ ) [ 1 + E ( | y ^ N ( s ) | p ) + E ( | ξ θ N ( s ) | p ] d s } C | t t | ( 2 ϵ ) p / 2 , α θ = 1 / 2 C | t t | [ 2 ( 2 α + 1 2 θ ) ] p / 2 , α θ 1 / 2 . C N | t t | α p , ( where ϵ = 2 2 α ) .
Hence,
E [ y N ( t ) y N ( t ) , 0 , T ; α p ] 4 p 1 { K 0 + K 1 + K 2 + K 3 } C N | t t | α p .

Appendix E. The Proof of Theorem 1

From the case of the global Lipschitz condition (Remark 1), inspired by Theorem 3.4 of [20] (Section 2.3. page 56), we define the following truncation function:
$$K_i^{m}(t,s,y) = \begin{cases} K_i(t,s,y), & \text{if } |y| \le m, \\ K_i(t,s,\,m y/|y|), & \text{if } |y| > m, \end{cases} \qquad i = 0, 1, 2, 3.$$
For each $m \ge 1$, $K_i^{m}(t,s,y)$ satisfies the Lipschitz and linear growth conditions, and the following equation has a unique solution $y_m(t)$:
$$y_m(t) = y_0 + \int_0^t K_0^{m}(t,s,y_m(s))\,ds + \int_0^t K_1^{m}(t,s,y_m(s))\,ds + \int_0^t K_2^{m}(t,s,y_m(s))\,dW_s + \int_0^t K_3^{m}(t,s,y_m(s))\,dW_s^{H}, \quad t \in \mathcal{T}.$$
We can define the stopping time as
$$\tau_m = T \wedge \inf\{t \in \mathcal{T}: |y_m(t)| \ge m\}.$$
Note that $\tau_m$ is increasing in $m$ and
$$y_m(t) = y_{m+1}(t) \quad \text{if } 0 \le t \le \tau_m.$$
We use the linear growth condition to show that, for almost all ω Ω , there exists an integer m 0 = m 0 ( ω ) such that
y ( t τ m ) = y 0 + 0 t τ m K 0 m ( t , s , y ( s ) ) d s + 0 t τ m K 1 m ( t , s , y ( s ) ) d s + 0 t τ m K 2 m ( t , s , y ( s ) ) d W s + 0 t τ m K 3 m ( t , s , y ( s ) ) d W s H .
Letting $m \to \infty$, $y(t)$ is then a solution of mSFIE (1).
Uniqueness. We define y ( t ) and y ˜ ( t ) as two solutions of mSFIE (1) on L 2 ( { ( t , s ) : 0 s t T } × R d ; R d × r ) with y ( 0 ) = y ˜ ( 0 ) . According to Lemma 2, y ( t ) and y ˜ ( t ) are solutions to mSVIE (7), and according to Lemma 3, for | t s | < c , we have
E [ y ( t ) y ˜ ( t ) 2 , 0 , T ; α 2 ] C { E | 0 t ( t s ) α 1 [ k 0 ( s , y ( s ) ) k 0 ( s , y ˜ ( s ) ) ] d s | 2 + E | 0 t ( t s ) α 1 sup s u t | k 1 ( u , s , y ( s ) ) k 1 ( u , s , y ˜ ( s ) ) | d s | 2 + E | 0 t ( t s ) 2 ( α + 1 2 θ ) sup s u t | k 2 ( u , s , y ( s ) ) k 2 ( u , s , y ˜ ( s ) ) | 2 d s | + E | 0 t ( t s ) 2 ( α + 1 2 θ ) sup s u t | k 3 ( u , s , y ( s ) ) k 3 ( u , s , y ˜ ( s ) ) | 2 d s | C 0 t [ c ( t s ) ] α 1 E [ y ( t ) y ˜ ( t ) 2 , 0 , T ; α 2 ] d s .
Hence, for arbitrary $t \in \mathcal{T}$, using the weakly singular Gronwall-type inequality [30] (Theorem 3.3.1, page 349), we have
$$\mathbb{E}[\|y(t) - \tilde{y}(t)\|_{2,0,T;\alpha}^{2}] = 0,$$
which means
$$\mathbb{P}\{|y(t) - \tilde{y}(t)| = 0 \ \text{for all}\ t \in \mathcal{T}\} = 1.$$
Uniqueness has been obtained.
Existence. We let N N 1 . Using mSVIE (11) and estimating (A4), for any p 2 / θ , we have
E [ y N ( t ) y N ( t ) 2 , 0 , T ; α p ] C 0 t ( t s ) α 1 E [ y N ( t ) y N ( t ) 2 , 0 , T ; α p ] d s .
Next, we show that y N ( t ) is a Cauchy sequence almost surely and has a limit in L α , 2 ( T ; R d ) . We first construct a Picard sequence { y N , n ( t ) : N 1 , n = 1 , 2 , } , which satisfies mSFIE (1), and let τ N , n = T inf { t : y N , n ( t ) t n } , y N , n ( t 0 ) = y 0 , that is,
y N , n ( t ) = y 0 + 0 t K 0 ( t , s , y ^ N , n 1 ( s ) ) d s + 0 t K 1 ( t , s , y ^ N , n 1 ( s ) ) d s + 0 t K 2 ( t , s , y ^ N , n 1 ( s ) ) d W s + 0 t K 3 ( t , s , y ^ N , n 1 ( s ) ) d W s H .
  • Step 1: The Picard sequence { y N , n ( t ) : N 1 , n = 1 , 2 , } L α , 2 ( T ; R d ) . For arbitrary t , s [ 0 , τ N , n ) , using inequality (15), let K 2 ( t , s , y N , n 1 ( s ) ) [ C p , θ ( n 1 ) C p , θ ( n 1 ) ] . Then, we have
    E [ y N , n ( t ) , 0 , T ; α 2 ] | y 0 | 2 + | 0 t K 0 ( t , s , y N , n 1 ( s ) ) d s | 2 + | 0 t K 1 ( t , s , y N , n 1 ( s ) ) d s | 2 + | 0 t K 2 ( t , s , y N , n 1 ( s ) ) d W s | 2 + | 0 t K 3 ( t , s , y N , n 1 ( s ) ) d W s H | 2 C N , n L 2 α + ( 1 2 η ) ( y N , n ) ( c | t s | ) 2 α + ( 1 2 η ) C N , p , θ .
  • Step 2: The Picard sequence { y N , n ( t ) : N 1 , n = 1 , 2 , } L α , 2 ( T ; R d ) is a Cauchy sequence almost surely.
    We define arbitrary n , m 1 and 1 t = 1 t < τ N , n τ N , m . We need to argue that, for any m > n 1 , y N , n ( t ) = y N , m ( t ) ( a . s . ) for t , s [ 0 , τ N , n τ N , m ) , or
    E [ | y N , m ( t ) y N , n ( t ) | 2 ] a . s . 0 , as m , n + .
    We write
    | y N , m ( t ) y N , n ( t ) | 1 t < τ N , n τ N , m = : | y 0 | 1 t + | 0 t [ K 0 ( t , s , y N , m 1 ( s ) ) K 0 ( t , s , y N , n 1 ( s ) ) Δ K 0 ( t , s , y ( s ) ) ] d s | 1 s + | 0 t [ K 1 ( t , s , y N , m 1 ( s ) ) K 1 ( t , s , y N , n 1 ( s ) ) Δ K 1 ( t , s , y ( s ) ) ] d s | 1 s + | 0 t [ K 2 ( t , s , y N , m 1 ( s ) ) K 2 ( t , s , y N , n 1 ( s ) ) Δ K 2 ( t , s , y ( s ) ) ] d W s | 1 s + | 0 t [ K 3 ( t , s , y N , m 1 ( s ) ) K 3 ( t , s , y N , n 1 ( s ) ) Δ K 3 ( t , s , y ( s ) ) ] d W s H | 1 s .
    Using the Cauchy—Schwarz inequality and Lemma 1, we have the estimate
    E | 0 t Δ K 0 ( t , s , y ( s ) ) d s | 1 s 2 0 t E | Δ K 0 ( t , s , y ( s ) ) | 1 s 2 d s C 0 t g ( t , s ) | Δ K 0 ( t , s , y ( s ) ) | 1 s 2 d s + 0 t g ( t , s ) 0 s | Δ K 0 ( t , v , y ( v ) ) | 1 v 2 ( s v ) α d v d s C K 0 0 t g ( t , s ) | Δ K 0 ( t , s , y ( s ) ) | c s 2 d s ,
    and
    E | 0 t Δ K 1 ( t , s , y ( s ) ) d s | c s 2 0 t E | Δ K 1 ( t , s , y ( s ) ) | c s 2 d s C 0 t g ( t , s ) | Δ K 1 ( t , s , y ( s ) ) | c s 2 d s + 0 t g ( t , s ) 0 s u s | Δ K 1 ( t , v , y ( v ) ) | c v 2 ( s u ) α + 1 d v d u d s C 0 t g ( t , s ) | Δ K 1 ( t , s , y ( s ) ) | c s 2 d s + 0 t g ( t , s ) 0 s | Δ K 1 ( t , v , y ( v ) ) | c v 2 ( s v ) α + 1 d v d s C K 1 0 t g ( t , s ) | Δ K 1 ( t , s , y ( s ) ) | c s 2 d s .
    Furthermore, according to Lemma 2,
    E | 0 t Δ K 2 ( t , s , y ( s ) ) d W s | c s 2 0 t E | Δ K 2 ( t , s , y ( s ) ) | 1 s 2 d W s C [ 0 t g ( t , s ) · E [ | Δ K 2 ( t , s , y ( s ) ) | c s 2 ] d s + 0 t g ( t , s ) · E 0 s u s | Δ K 2 ( t , v , y ( v ) ) | c v 2 ( s u ) α + 1 d W v d u d s ]
    C [ 0 t g ( t , s ) | Δ K 2 ( t , s , y ( s ) ) | c s 2 d s + 0 t g ( t , s ) · 0 s u s E | Δ K 2 ( t , v , y ( v ) ) | c v 2 ( s u ) α + 1 d W v d u d s ] C [ 0 t g ( t , s ) | Δ K 2 ( t , s , y ( s ) ) | c s 2 d s + 0 t 0 s g ( t , s ) · ( u s E | Δ K 2 ( t , v , y ( v ) ) | c v 2 d v ) ( s u ) α + ( 1 2 η ) d u d s ] C [ 0 t g ( t , s ) | Δ K 2 ( t , s , y ( s ) ) | c s 2 d s + 0 t g ( t , s ) 0 s E | Δ K 2 ( t , v , y ( v ) ) | c v 2 ( s v ) α + ( 1 2 η ) d v d s ] C K 2 0 t g ( t , s ) · E | Δ K 2 ( t , s , y ( s ) ) | c s 2 d s .
    Similarly, according to Lemma 2,
    E | 0 t Δ K 3 ( t , s , y ( s ) ) d W s H | c s 2 0 t E | Δ K 3 ( t , s , y ( s ) ) | c s 2 d W s H C N [ 0 s g ( t , u ) · E [ | Δ K 3 ( t , u , y ( u ) ) | c s 2 ] ( t u ) α d u + 0 s 0 u g ( u , v ) · E | K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , v , y N , n 1 ( v ) ) | c u 2 ( u v ) 2 α + 1 d u d v ] C N { 0 s g ( t , u ) · E [ | Δ K 3 ( t , u , y ( u ) ) | c u 2 ] ( t u ) α d u + [ 0 s 0 u g ( u , v ) · E ( ( | K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , v , y N , n 1 ( v ) ) | c u ) + | K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , u , y N , n 1 ( u ) ) | c u · ( | K 3 ( t , u , y N , m 1 ( u ) ) + K 3 ( t , v , y N , n 1 ( v ) ) | c u ) ) ( u v ) ( ε + 2 α + 1 ) d u d v ] 2 } = : C N 0 s g ( t , u ) · E [ | Δ K 3 ( t , u , y ( u ) ) | c u 2 ] ( t u ) α d u + Δ K ^ 3 ( t , u , y N ( u ) ) .
    Notice that
    Δ K ^ 3 ( t , u , y N ( u ) ) C { [ 0 s E ( g ( u , v ) · ( K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , v , y N , n 1 ( v ) ) c u ) ( u v ) 2 α + g ( t , u ) · | K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , u , y N , n 1 ( u ) ) | c u ( t u ) 2 ( ε α ) ) d u ] 2 + [ 0 s E ( g ( t , u ) · | K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , u , y N , n 1 ( u ) ) | c u ( t u ) 2 α · g ( u , v ) · ( K 3 ( t , u , y N , m 1 ( u ) ) + K 3 ( t , v , y N , n 1 ( v ) ) , u ) c u ( u v ) 2 ε ) d u ] 2 } C [ 0 s ( g ( t , u ) · E K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , u , y N , n 1 ( u ) ) c u 2 ( t u ) 2 ( ε α ) d u + ( m + n 2 ) 2 0 s g ( t , u ) · | K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , u , y N , n 1 ( u ) ) | c u 2 ( t u ) 2 α ) d u ]
    C N 0 s g ( t , u ) · E K 3 ( t , u , y N , m 1 ( u ) ) K 3 ( t , u , y N , n 1 ( u ) ) c u 2 d u ,
    and thus,
    E | 0 t Δ K 3 ( t , s , y ( s ) ) d W s H | c s 2 C K 3 0 t g ( t , s ) · E | Δ K 3 ( t , s , y ( s ) ) | c s 2 d s .
    Combining (A3), (12), and (A5), we have
    E [ y N , m ( t ) y N , n ( t ) c t 2 · g ( t , s ) ] C 0 t g ( t , s ) · E y N , m ( s ) y N , n ( s ) 1 s 2 d s .
    This means that y N , m ( s ) y N , n ( s ) 1 s 2 = 0 a.s., hence, y N , m ( t ) y N , n ( t ) 1 t 2 = 0 a.s. for t [ 0 , τ N , n τ N , m ) , especially τ N , m > τ N , n a.s., as y N , n ( t ) n < m for t < τ N , n . Then, we conclude that the Picard sequence { y N , n ( t ) : N 1 , n = 1 , 2 , } L α , 2 ( T ; R d ) is almost surely a Cauchy sequence.
  • Step 3: Since T > 0 , and τ N increases with N and ultimately reaches T, we find that there exists a Cauchy sequence that almost surely satisfies mSVIE (11), and mSVIE (14) has a unique solution for t [ 0 , τ N τ N , n ) . Noticing that y ( t ) L α , 2 ( T ; R d ) , such that y N ( t ) a . s . y ( t ) uniformly as N , we take expectations on both sides, that is,
    lim N E [ y ( t ) y N ( t ) , 0 , T ; α 2 ] = 0 .
Moreover, according to Lemma 4, we have
E [ y ( t ) y ( t ) 2 , 0 , T ; α p ] C | t t | α p .
Then, the process $y(t)$ is a continuous solution of mSFIE (1). According to Lemma 3, letting $N \to \infty$, for any $t \in \mathcal{T}$,
$$\mathbb{E}[|y(t)|^{p}] < \infty.$$

Appendix F. The Proof of Lemma 5

According to Lemma 2, we have
y ( t s ̲ ) y c ( t n s ̲ ) p , 0 , T ; α [ | 1 Γ ( α ) i = 0 n 1 t i t i + 1 [ ( t n + 1 t i ) α 1 ( t n t i ) α 1 ] · k 0 ( s , y ( s ) ) d s | + | 1 Γ ( α ) i = 0 n 1 t i t i + 1 [ ( t n + 1 t i ) α 1 ( t n t i ) α 1 ] · k 1 ( t , s , y ( s ) ) d s | + | 1 Γ ( α ) i = 0 n 1 t i t i + 1 [ ( t n + 1 t i ) α 1 ( t n t i ) α 1 ] · k 2 ( t , s , y ( s ) ) d W s | + | 1 Γ ( α ) i = 0 n 1 t i t i + 1 [ ( t n + 1 t i ) α 1 ( t n t i ) α 1 ] · k 3 ( t , s , y ( s ) ) d W s H | ] C { | h i = 0 n 1 t i t i + 1 [ ( ( n i + 1 ) h ) α 1 ( ( n i ) h ) α 1 ] d s · 1 + i = 0 n 1 t i t i + 1 y ( s ) p , 0 , T , α c α d s | + | h i = 0 n 1 t i t i + 1 [ ( ( n i + 1 ) h ) α 1 ( ( n i ) h ) α 1 ] d s · 1 + i = 0 n 1 t i t i + 1 y ( s ) p , 0 , T , α [ c ( t n + 1 t i t n + t i ) ] α d s | + | h i = 0 n 1 t i t i + 1 [ ( ( n i + 1 ) h ) α 1
( ( n i ) h ) α 1 ] d s · 1 + i = 0 n 1 t i t i + 1 y ( s ) p , 0 , T , α [ c ( t n + 1 t i t n + t i ) ] α + ( 1 2 η ) d s | + | h i = 0 n 1 t i t i + 1 [ ( ( n i + 1 ) h ) α 1 ( ( n i ) h ) α 1 ] d s · [ 1 + i = 0 n 1 t i t i + 1 y ( s ) p , 0 , T , α · 1 [ c ( t n + 1 t i t n + t i ) ] α + ( 1 2 η ) d s + i = 0 n 1 t i t i + 1 y ( s ) p , 0 , T , α [ c ( t n + 1 t i t n + t i ) ] 2 α + ( 1 2 η ) ] d s | } C { | L α ( y ) c α i = 0 n 1 [ ( ( n + 1 ) h ) α 1 h α 1 ] | + | L α ( y ) c α h i = 0 n 1 [ ( ( n + 1 ) h ) α 1 h α 1 ] | + | L α + ( 1 2 η ) ( y ) c α + ( 1 2 η ) h i = 0 n 1 [ ( ( n + 1 ) h ) α 1 h α 1 ] | + | L 2 α + ( 1 2 η ) ( y ) c 2 α + ( 1 2 η ) h i = 0 n 1 [ ( ( n + 1 ) h ) α 1 h α 1 ] | C L 2 α + ( 1 2 η ) ( y ) c 2 α + ( 1 2 η ) h .
Similarly,
\[
\begin{aligned}
\big\|y(t\wedge\underline{s})-y^{c}(t\wedge s)\big\|_{p,0,T;\alpha}
&\le\Bigg[\bigg|\frac{1}{\Gamma(\alpha)}\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[(t_{n+1}-t_i)^{\alpha-1}-(t_n-t_{i+1})^{\alpha-1}\big]\,k_0(s,y(s))\,ds
+\int_{t_n}^{t}\big[(t-\underline{s})^{\alpha-1}-(t-s)^{\alpha-1}\big]\,k_0(s,y(s))\,ds\bigg|\\
&\qquad+\bigg|\frac{1}{\Gamma(\alpha)}\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[(t_{n+1}-t_i)^{\alpha-1}-(t_n-t_{i+1})^{\alpha-1}\big]\,k_1(t,s,y(s))\,ds
+\int_{t_n}^{t}\big[(t-\underline{s})^{\alpha-1}-(t-s)^{\alpha-1}\big]\,k_1(t,s,y(s))\,ds\bigg|\\
&\qquad+\bigg|\frac{1}{\Gamma(\alpha)}\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[(t_{n+1}-t_i)^{\alpha-1}-(t_n-t_{i+1})^{\alpha-1}\big]\,k_2(t,s,y(s))\,dW_s
+\int_{t_n}^{t}\big[(t-\underline{s})^{\alpha-1}-(t-s)^{\alpha-1}\big]\,k_2(t,s,y(s))\,ds\bigg|\\
&\qquad+\bigg|\frac{1}{\Gamma(\alpha)}\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[(t_{n+1}-t_i)^{\alpha-1}-(t_n-t_{i+1})^{\alpha-1}\big]\,k_3(t,s,y(s))\,dW_s^H
+\int_{t_n}^{t}\big[(t-\underline{s})^{\alpha-1}-(t-s)^{\alpha-1}\big]\,k_3(t,s,y(s))\,ds\bigg|\Bigg]\\
&\le C\Bigg\{\bigg|h\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[((n-i+1)h)^{\alpha-1}-((n-i-1)h)^{\alpha-1}\big]\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{c^{\alpha}}\,ds\bigg)
+\int_{t_n}^{t}(t-\underline{s})^{\alpha-1}\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{c^{\alpha}}\,ds\bigg)\bigg|\\
&\qquad+\bigg|h\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[((n-i+1)h)^{\alpha-1}-((n-i-1)h)^{\alpha-1}\big]\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha}}\,ds\bigg)
+\int_{t_n}^{t}(t-\underline{s})^{\alpha-1}\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha}}\,ds\bigg)\bigg|\\
&\qquad+\bigg|h\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[((n-i+1)h)^{\alpha-1}-((n-i-1)h)^{\alpha-1}\big]\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{(t_{n+1}-t_i-t_n+t_i)^{\alpha+(\frac12-\eta)}}\,ds\bigg)
+\int_{t_n}^{t}(t-\underline{s})^{\alpha-1}\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha+(\frac12-\eta)}}\,ds\bigg)\bigg|\\
&\qquad+\bigg|h\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\big[((n-i+1)h)^{\alpha-1}-((n-i-1)h)^{\alpha-1}\big]\,ds\cdot\bigg[1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha+(\frac12-\eta)}}\,ds
+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{2\alpha+(\frac12-\eta)}}\,ds\bigg]\\
&\qquad\qquad+\int_{t_n}^{t}(t-\underline{s})^{\alpha-1}\,ds\cdot\bigg[1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha+(\frac12-\eta)}}\,ds
+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{2\alpha+(\frac12-\eta)}}\,ds\bigg]\bigg|\Bigg\}\\
&= C\Bigg\{\bigg|h\big[((n+1)h)^{\alpha-1}+(nh)^{\alpha-1}-h^{\alpha-1}\big]\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{c^{\alpha}}\,ds\bigg)
+\int_{t_n}^{t}(t-t_n)^{\alpha-1}\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{c^{\alpha}}\,ds\bigg)\bigg|\\
&\qquad+\bigg|h\big[((n+1)h)^{\alpha-1}+(nh)^{\alpha-1}-h^{\alpha-1}\big]\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha}}\,ds\bigg)
+\int_{t_n}^{t}(t-t_n)^{\alpha-1}\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha}}\,ds\bigg)\bigg|\\
&\qquad+\bigg|h\big[((n+1)h)^{\alpha-1}+(nh)^{\alpha-1}-h^{\alpha-1}\big]\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha+(\frac12-\eta)}}\,ds\bigg)
+\int_{t_n}^{t}(t-t_n)^{\alpha-1}\,ds\cdot\bigg(1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha+(\frac12-\eta)}}\,ds\bigg)\bigg|\\
&\qquad+\bigg|h\big[((n+1)h)^{\alpha-1}+(nh)^{\alpha-1}-h^{\alpha-1}\big]\cdot\bigg[1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha+(\frac12-\eta)}}\,ds
+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{2\alpha+(\frac12-\eta)}}\,ds\bigg]\\
&\qquad\qquad+\int_{t_n}^{t}(t-t_n)^{\alpha-1}\,ds\cdot\bigg[1+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{\alpha+(\frac12-\eta)}}\,ds
+\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}\frac{\|y(s)\|_{p,0,T;\alpha}}{[c\,(t_{n+1}-t_i-t_n+t_i)]^{2\alpha+(\frac12-\eta)}}\,ds\bigg]\bigg|\Bigg\}\\
&\le C\bigg\{\bigg|\frac{2h}{c^{\alpha}}+h^{\alpha}\bigg|+\bigg|\frac{2L_{\alpha}(y)}{c^{\alpha}}\,h+h^{\alpha}\bigg|
+\bigg|\frac{2L_{\alpha+(\frac12-\eta)}(y)}{c^{\alpha+(\frac12-\eta)}}\,h+h^{\alpha}\bigg|
+\bigg|\frac{2L_{2\alpha+(\frac12-\eta)}(y)}{c^{2\alpha+(\frac12-\eta)}}\,h+h^{\alpha}\bigg|\bigg\}
\le C\,\frac{L_{2\alpha+(\frac12-\eta)}(y)}{c^{2\alpha+(\frac12-\eta)}}\,h.
\end{aligned}
\]
The proof is complete.
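The first chain of estimates in this appendix relies on the telescoping of the kernel increments,
\[
\sum_{i=0}^{n-1}\big[((n-i+1)h)^{\alpha-1}-((n-i)h)^{\alpha-1}\big]=((n+1)h)^{\alpha-1}-h^{\alpha-1}.
\]
A minimal numerical check of this identity is given below; the values of alpha, h, and n are illustrative only and are not taken from the paper.

```python
# Telescoping identity behind the kernel-increment sums in this appendix:
#   sum_{i=0}^{n-1} [((n-i+1)h)^(alpha-1) - ((n-i)h)^(alpha-1)]
#     = ((n+1)h)^(alpha-1) - h^(alpha-1).
# alpha, h and n below are illustrative values only.
alpha, h, n = 0.7, 1e-3, 50

lhs = sum(((n - i + 1) * h) ** (alpha - 1) - ((n - i) * h) ** (alpha - 1)
          for i in range(n))
rhs = ((n + 1) * h) ** (alpha - 1) - h ** (alpha - 1)
print(abs(lhs - rhs))  # of the order of floating-point round-off
```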

Appendix G. The Proof of Lemma 7

Using the modified EM method (14), for each t there exists a unique integer n such that t ∈ [t_n, t_{n+1}) and Y(t_n) = Ŷ(t). For arbitrary t ≤ T, we have
\[
\begin{aligned}
\mathbb{E}\big[\|Y(t)-\hat{Y}(t)\|_{2,0,T;\alpha}^{2}\big]
&=\mathbb{E}\big[\|Y(t)-\hat{Y}(t_n)\|_{2,0,T;\alpha}^{2}\big]\\
&\le 4\bigg\{\mathbb{E}\Big|\int_0^t K_0(t,\underline{s},\hat{Y}(s))\,ds-\int_0^{t_n} K_0(t_n,\underline{s},\hat{Y}(s))\,ds\Big|^{2}
+\mathbb{E}\Big|\int_0^t K_1(t,\underline{s},\hat{Y}(s))\,ds-\int_0^{t_n} K_1(t_n,\underline{s},\hat{Y}(s))\,ds\Big|^{2}\\
&\qquad+\mathbb{E}\Big|\int_0^t K_2(t,\underline{s},\hat{Y}(s))\,dW_s-\int_0^{t_n} K_2(t_n,\underline{s},\hat{Y}(s))\,dW_s\Big|^{2}
+\mathbb{E}\Big|\int_0^t K_3(t,\underline{s},\hat{Y}(s))\,dW_s^H-\int_0^{t_n} K_3(t_n,\underline{s},\hat{Y}(s))\,dW_s^H\Big|^{2}\bigg\}\\
&=:4\big\{\tilde{K}_0+\tilde{K}_1+\tilde{K}_2+\tilde{K}_3\big\}.
\end{aligned}
\]
Applying the Hölder—type inequality, Assumption 1, Assumption 4, Lemma 6, and Lemma 7, we have
\[
\begin{aligned}
\tilde{K}_0
&\le 2\bigg\{\mathbb{E}\Big|\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]\,k_0(\underline{s},\hat{Y}(s))\,ds\Big|^{2}
+\mathbb{E}\Big|\int_{t_n}^{t}(t-\underline{s})^{\alpha-1}k_0(\underline{s},\hat{Y}(s))\,ds\Big|^{2}\bigg\}\\
&\le C\bigg\{\int_0^{t_n}\big|(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big|\,ds
\cdot\int_0^{t_n}\big|(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big|\,\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})\big]\,ds
+\int_{t_n}^{t}(t-s)^{\alpha-1}\,ds\cdot\int_{t_n}^{t}(t-s)^{\alpha-1}\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})\big]\,ds\bigg\}
\le C h^{2\alpha},
\end{aligned}
\]
and
\[
\tilde{K}_1\le C h^{2(2\alpha+\frac12-\theta)}\le C h^{2\alpha}.
\]
Applying the BDG—type inequality, we also have
\[
\begin{aligned}
\tilde{K}_2
&\le C\bigg\{\mathbb{E}\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]^{2}\sup_{\underline{s}\le u\le t}\big|k_2(u,\underline{s},\hat{Y}(s))\big|^{2}\,ds\\
&\qquad+\mathbb{E}\bigg[\int_0^{t_n}(t_n-\underline{s})^{2(\alpha+\frac12-\theta)}\sup_{0\le u\le 1}\big|k_2((t-\underline{s})u+\underline{s},\underline{s},\hat{Y}(s))-k_2((t_n-\underline{s})u+\underline{s},\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg]
+\mathbb{E}\int_{t_n}^{t}(t-\underline{s})^{2(\alpha+\frac12-\theta)}\sup_{\underline{s}\le u\le t}\big|k_2(u,\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg\}\\
&\le C\bigg\{\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]^{2}\,\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})\big]\,ds
+h^{2}\int_0^{t_n}(t_n-s)^{2(\alpha+\frac12-\theta)}\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})\big]\,ds
+\int_{t_n}^{t}(t-s)^{2(\alpha+\frac12-\theta)}\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})\big]\,ds\bigg\}\\
&\le\begin{cases} C\,h^{\,2-\epsilon}, & \alpha\theta=1/2,\\[2pt] C\,h^{\,2(2\alpha+\frac12-\theta)}, & \alpha\theta\neq 1/2,\end{cases}
\;\le\; C h^{2\alpha}\qquad(\text{where }\epsilon=2-2\alpha).
\end{aligned}
\]
Similarly,
\[
\begin{aligned}
\tilde{K}_3
&\le C\bigg\{\mathbb{E}\bigg|\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]^{2}\sup_{\underline{s}\le u\le t}\big|k_3(u,\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg|\\
&\qquad+\mathbb{E}\bigg[\bigg|\int_0^{t_n}(t_n-\underline{s})^{2(\alpha+\frac12-\theta)}\sup_{0\le u\le 1}\big|k_3((t-\underline{s})u+\underline{s},\underline{s},\hat{Y}(s))-k_3((t_n-\underline{s})u+\underline{s},\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg|\bigg]
+\mathbb{E}\bigg|\int_0^{t_n}(t-\underline{s})^{2(\alpha+\frac12-\theta)}\sup_{\underline{s}\le u\le t}\big|k_3(u,\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg|\bigg\}\\
&\le C\bigg\{\mathbb{E}\bigg[\bigg|\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]^{2}\sup_{\underline{s}\le u\le t}\big|K_2(u,\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg|
+\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]^{2}\sup_{\underline{s}\le u\le t}\int_0^{t}\!\!\int_0^{t}\frac{\big|\partial_v\partial_{v'}\,\mathbb{E}\big[K_2(u,\underline{s},Y(s))\big]^{2}\big|}{|v-v'|}\,dv\,dv'\,ds\bigg]\\
&\qquad+\mathbb{E}\bigg[\bigg|\int_0^{t_n}(t_n-\underline{s})^{2(\alpha+\frac12-\theta)}\sup_{0\le u\le 1}\big|K_2((t-\underline{s})u+\underline{s},\underline{s},\hat{Y}(s))-K_2((t_n-\underline{s})u+\underline{s},\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg|\bigg]\\
&\qquad+\mathbb{E}\bigg[\bigg|\int_0^{t_n}(t-\underline{s})^{2(\alpha+\frac12-\theta)}\sup_{\underline{s}\le u\le t}\big|K_2(u,\underline{s},\hat{Y}(s))\big|^{2}\,ds\bigg|
+\int_0^{t_n}(t-\underline{s})^{2(\alpha+\frac12-\theta)}\sup_{\underline{s}\le u\le t}\int_0^{t}\!\!\int_0^{t}\frac{\big|\partial_v\partial_{v'}\,\mathbb{E}\big[K_2(u,\underline{s},Y(s))\big]^{2}\big|}{|v-v'|}\,dv\,dv'\,ds\bigg]\bigg\}\\
&\le C\bigg\{\bigg|\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]^{2}\,ds\bigg|\cdot\int_0^{t_n}\big[(t-\underline{s})^{\alpha-1}-(t_n-\underline{s})^{\alpha-1}\big]^{2}\,\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})+\mathbb{E}(|\xi_{\theta}(s)|^{2})\big]\,ds\\
&\qquad+|t-t_n|^{2}\,\bigg|\int_0^{t_n}(t_n-\underline{s})^{2(\alpha+\frac12-\theta)}\,ds\bigg|\cdot\int_0^{t_n}(t_n-\underline{s})^{2(\alpha+\frac12-\theta)}\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})+\mathbb{E}(|\xi_{\theta}(s)|^{2})\big]\,ds\\
&\qquad+\bigg|\int_0^{t_n}(t-\underline{s})^{2(\alpha+\frac12-\theta)}\,ds\bigg|\cdot\int_{t_n}^{t}(t-\underline{s})^{2(\alpha+\frac12-\theta)}\big[1+\mathbb{E}(|\hat{Y}(s)|^{2})+\mathbb{E}(|\xi_{\theta}(s)|^{2})\big]\,ds\bigg\}\\
&\le\begin{cases} C\,|t-t_n|^{\,2-\epsilon}, & \alpha\theta=1/2,\\[2pt] C\,|t-t_n|^{\,2(2\alpha+\frac12-\theta)}, & \alpha\theta\neq 1/2,\end{cases}
\;\le\; C_N\,|t-t_n|^{2\alpha}\qquad(\text{where }\epsilon=2-2\alpha).
\end{aligned}
\]
Then,
\[
\mathbb{E}\big[\|Y(t)-Y(t_n)\|_{2,0,T;\alpha}^{2}\big]\le C_N\,|t-t_n|^{2\alpha},
\]
and hence,
\[
\mathbb{E}\big[\|Y(t)-\hat{Y}(t)\|_{2,0,T;\alpha}^{2}\big]\le 4\big\{\tilde{K}_0+\tilde{K}_1+\tilde{K}_2+\tilde{K}_3\big\}\le C h^{2\alpha},
\]
and the proof is complete.
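The estimate for K̃_0 rests on the elementary bound ∫_0^{t_n} |(t − s)^{α−1} − (t_n − s)^{α−1}| ds ≤ C h^{α} when t − t_n ≤ h; since the second factor in the product is of the same order, this is what produces the rate h^{2α}. Below is a small quadrature check of the O(h^{α}) behaviour; the values of alpha and t_n, and the midpoint rule with m nodes, are assumptions made only for this illustration.

```python
import numpy as np

# Check that I(h) := int_0^{t_n} |(t-s)^(alpha-1) - (t_n-s)^(alpha-1)| ds = O(h^alpha)
# with t = t_n + h, which is the estimate behind the bound K~_0 <= C h^(2*alpha).
# alpha, t_n and the midpoint rule with m nodes are illustrative choices only.
alpha, t_n = 0.7, 1.0

def I(h, m=200_000):
    t = t_n + h
    s = (np.arange(m) + 0.5) * t_n / m          # midpoint rule on [0, t_n]
    f = np.abs((t - s) ** (alpha - 1) - (t_n - s) ** (alpha - 1))
    return f.sum() * t_n / m

for h in [1e-1, 1e-2, 1e-3]:
    print(h, I(h) / h ** alpha)   # ratios stay bounded (below 1/alpha), i.e. O(h^alpha)
```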

Figure 1. (Left): the mean—square errors of the modified EM scheme with θ = 0.5 and H = 0.6. (Middle): the mean—square errors with θ = 0.5 and H = 0.8. (Right): the mean—square errors with θ = 0.5 and H = 0.9.
Figure 2. (Left): the mean—square errors of the modified EM scheme with θ = 0.3 and H = 0.6. (Middle): the mean—square errors with θ = 0.3 and H = 0.8. (Right): the mean—square errors with θ = 0.3 and H = 0.9.
Figure 3. (Left): the mean—square errors of the modified EM scheme with θ = 0.5 and H = 0.65. (Middle): the mean—square errors with θ = 0.5 and H = 0.8. (Right): the mean—square errors with θ = 0.5 and H = 0.9.
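For readers who wish to reproduce plots of this type, the following is a minimal sketch of how mean-square errors and an empirical strong order can be estimated by comparing coarse approximations with a fine-grid reference path. It uses a toy scalar mixed SDE driven by a Brownian motion and an fBm sampled via a Cholesky factorization of its covariance; it is not the mSFIE (1) or the modified EM scheme (14) of this paper, and all coefficients, parameter values, and grid sizes are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar mixed SDE  dX = a*X dt + b*X dW + c*X dW^H  (illustrative only;
# NOT the mSFIE (1) or the modified EM scheme (14) of the paper).
T, H, x0, a, b, c = 1.0, 0.8, 1.0, 0.5, 0.2, 0.3
n_fine, levels, M = 2 ** 10, [2 ** 4, 2 ** 5, 2 ** 6], 200

# Cholesky factor of the fBm covariance on the fine grid (computed once).
t = np.linspace(T / n_fine, T, n_fine)
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov)

def em_endpoint(dW, dWH, dt):
    """Terminal value of a plain Euler-Maruyama recursion driven by the given increments."""
    x = x0
    for dw, dwh in zip(dW, dWH):
        x = x + a * x * dt + b * x * dw + c * x * dwh
    return x

sq_err = {n: [] for n in levels}
for _ in range(M):
    dW = rng.standard_normal(n_fine) * np.sqrt(T / n_fine)                    # Brownian increments
    dWH = np.diff(np.concatenate(([0.0], L @ rng.standard_normal(n_fine))))   # fBm increments
    x_ref = em_endpoint(dW, dWH, T / n_fine)                                  # fine-grid reference value
    for n in levels:
        r = n_fine // n                                                       # aggregate fine increments
        x_n = em_endpoint(dW.reshape(n, r).sum(axis=1),
                          dWH.reshape(n, r).sum(axis=1), T / n)
        sq_err[n].append((x_n - x_ref) ** 2)

mse = {n: float(np.mean(sq_err[n])) for n in levels}
print("mean-square errors:", mse)
orders = [0.5 * np.log2(mse[levels[i]] / mse[levels[i + 1]])
          for i in range(len(levels) - 1)]
print("empirical strong orders (successive halvings of h):", orders)
```

The empirical order printed at the end follows from successive halvings of the step size, since the mean-square error scales like h^{2p} when the strong order is p.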