Article

Numerical Picard Iteration Methods for Simulation of Non-Lipschitz Stochastic Differential Equations

Jürgen Geiser
The Institute of Theoretical Electrical Engineering, Ruhr University of Bochum, Universitätsstrasse 150, D-44801 Bochum, Germany
Symmetry 2020, 12(3), 383; https://doi.org/10.3390/sym12030383
Submission received: 19 January 2020 / Revised: 12 February 2020 / Accepted: 19 February 2020 / Published: 3 March 2020

Abstract

In this paper, we present splitting approaches for stochastic/deterministic coupled differential equations, which play an important role in many applications for modelling stochastic phenomena, e.g., finance, dynamics in physical applications, population dynamics, biology and mechanics. We are motivated to deal with non-Lipschitz stochastic differential equations, whose coefficients grow at infinity and satisfy only a one-sided Lipschitz condition. Such problems arise, for example, in stochastic lubrication equations, where we deal with rational or polynomial functions. Numerically, we propose an approximation, which is based on Picard iterations and applies the Doléans-Dade exponential formula. Such a method allows us to approximate the non-Lipschitzian SDEs with iterative exponential methods. Further, we can exploit symmetries in the decomposition of the related matrix operators to reduce the computational time. We discuss the different operator-splitting approaches for a nonlinear SDE with multiplicative noise and compare them to standard numerical methods.

1. Introduction

We are motivated to solve delicate stochastic differential equations (SDEs) that are not globally Lipschitz continuous, e.g., whose nonlinear growth is polynomial or exponential, see [1]. Such SDEs appear in different engineering problems, e.g., fluctuations in hydrodynamics such as liquid jets in spraying and coating apparatuses, see [2,3,4]. For such model equations, the standard stochastic schemes have problems, and we apply novel so-called Picard iteration schemes, which use the Doléans-Dade exponential formula to overcome the local Lipschitz problem, see [5]. The idea is to overcome the non-Lipschitzian problem with an iterative scheme based on exponential functions, see [6]. The novelty of this paper is the additional splitting approaches, which are combined with the Doléans-Dade exponentials and included in the Picard iteration scheme. The splitting decomposes the underlying operators in order to reduce the computational time: we decompose into simpler solvable operators, which can be treated with much faster solvers, see also [7].
Here, we deal with the following splitting ideas:
  • Symmetries: We consider symmetries in the operators and decompose into symmetric operators, such that we can apply fast solver methods for each symmetric operator part, see [7,8,9].
  • Deterministic-stochastic splitting: We decompose into deterministic and stochastic parts and apply a fast deterministic and a fast stochastic solver, see [10,11].
Such splitting approaches are important to optimize and speed up the solver processes, see [12]. Further, the symmetries are also important for the specific solver method: since we apply iterative splitting methods, the decomposition into homogeneous and inhomogeneous solver parts is important, see [12]. We decompose the iterative splitting method into a fast solvable exp-matrix operator, which can apply fast symmetric solvers, see [7,13], and a right-hand side part, which is an integral operator based on a convolution operator. The convolution operator is based on the exp-matrix operators with an additional linear or nonlinear operator, see [14]. Here, we can also apply fast symmetric numerical integration methods and accelerate the solver process, see [9].
In this paper, we concentrate on the following general form of the non-Lipschitz SDE, which is given as:
dX(t) = A(t, X(t)) dt + B(t, X(t)) dW_t,   t ∈ [0, T],
X(0) = X_0,
where a(t, X(t)) = a_1(t, X(t)) + a_2(t, X(t)) = A(t, X(t))/X(t) on ℝ∖{0} is the drift coefficient and b(t, X(t)) = B(t, X(t))/X(t) on ℝ∖{0} is the diffusion coefficient. Further, we assume that a and b are bounded, W_t is the Wiener process, and W_t is independent of X_0 for t ≥ t_0. For the sake of convenience, we have also assumed that t_0 = 0, but we can also generalise to t_0 ∈ ℝ_0^+, which is only a shift in the initial conditions, see [15].
We have the following Assumption 1:
Assumption 1.
The SDE (1) can be rewritten as the stochastic differential equation:
dX(t) = X(t) (a_1(t, X(t)) + a_2(t, X(t))) dt + X(t) b(t, X(t)) dW_t,   t ∈ [0, T],
X(0) = X_0,
where a_1: [0, T] × ℝ → ℝ, a_2: [0, T] × ℝ → ℝ and b: [0, T] × ℝ → ℝ are locally Lipschitz functions, i.e., for every L > 0 there exists a constant C_L with:
|a(t, X(t)) − a(t, Y(t))| ≤ C_L |X(t) − Y(t)|,   (t, X, Y) ∈ ℝ_+ × ℝ²,
|a_1(t, X(t)) − a_1(t, Y(t))| ≤ C_L |X(t) − Y(t)|,   (t, X, Y) ∈ ℝ_+ × ℝ²,
|a_2(t, X(t)) − a_2(t, Y(t))| ≤ C_L |X(t) − Y(t)|,   (t, X, Y) ∈ ℝ_+ × ℝ²,
|b(t, X(t)) − b(t, Y(t))| ≤ C_L |X(t) − Y(t)|,   (t, X, Y) ∈ ℝ_+ × ℝ².
We assume that Equation (1) has a unique strong solution X and that it is positive, see also the work [5]. Further, we assume a decomposition of the full operator A = A_1 + A_2, where the operator A_1 = a_1 X is fast to compute via exp(A_1), while A_2 = a_2 X is the more nonlinear operator and is applied as a right-hand side, see [16].
In Section 2, we derive the approximation of the stochastic process with the Picard iterations and present the numerical analysis of the new schemes. The numerical results are given in Section 3. In Section 4, we conclude and summarize our results.

2. Numerical Analysis of the Splitting Approaches

In the following, we discuss the splitting approaches, which are based on a homogeneous and inhomogeneous part.
For the stochastic differential equation (SDE), we define the homogeneous SDE as a linear or linearised SDE, which can be solved analytically or numerically, see [17,18,19]. Further, there are also ideas for SDEs and stochastic partial differential equations (SPDEs) to decompose into linear and nonlinear parts, which can be solved with linear and nonlinear stochastic methods, see [20].
We concentrate on linearized SDEs and use the notation of homogeneous and inhomogeneous parts, which is well-known for linear SDEs, see [17]. We assume that the solution of the homogeneous SDE can be used in the variation-of-constants formula, see [21], which is also well-known from the theory of ODEs, see [12,22]. Such a formula also holds in the infinite-dimensional case (under suitable assumptions, see [19]). Therefore, we obtain a solution of the inhomogeneous SDE, which is a mild solution, see [23].
We apply the splitting for the stochastic differential equation in the following homogeneous and inhomogeneous parts:
  • Homogeneous equation:
    dX(t) = X(t) a_1(t, X(t)) dt + X(t) b(t, X(t)) dW_t,   t ∈ [0, T],
    X(0) = X_0,
    where we have the solution X_hom(t) = S_m(t), where m is the number of iterative steps of the Picard iterations, see Section 2.1.
  • Inhomogeneous equation:
    dX(t) = X(t) (a_1(t, X(t)) + a_2(t, X(t))) dt + X(t) b(t, X(t)) dW_t,   t ∈ [0, T],
    X(0) = X_0,
    where we have the solution X_inhom(t) = S_m(t) + ∫_0^t S_m(t − s) a_2(s) ds, where m is the number of iterative steps of the Picard iterations, see Section 2.1.
For the approximation of the scheme, we deal with three steps for the homogeneous part:
  • Approximate the diffusion process,
  • Picard iterations with Doléans-Dade solutions of the SDE,
  • Discretisation of the Picard iterations in time.
For the inhomogeneous part, we deal with the following parts:
  • Discretisation of the Picard iterations in time for the inhomogeneous part,
  • Approximation of the integral-formulation of the inhomogeneous part.

2.1. Homogeneous Equation

In the homogeneous part, we approximate the solution of the homogeneous SDE in the following three steps.

2.1.1. Approximate the Diffusion Process

We consider the SDE (1) and we assume Y is the unique solution to the SDE:
dY(t) = c(S_0 exp(Y(t)), t) dt + b(S_0 exp(Y(t)), t) dW_t,   t ∈ [0, T],   Y(0) = 0,
where c(X, t) = a(X, t) − (1/2) b²(X, t) and we assume that for all t ∈ [0, T], we have:
S(t) := S_0 exp( ∫_0^t c(S(u), u) du + ∫_0^t b(S(u), u) dW_u ),   S(0) = S_0,
see also the idea in [10,12].

2.1.2. Picard Iterations with Doléans-Dade Solutions of the SDE

In the following, we construct the iterative process, which is based on the idea S_m → S for m → ∞.
We have the following iterative scheme:
S_{m+1}(t) := F_t(a_{1,m+1}) E(b_{m+1}),   S_0 = S(t = 0),
where a_{1,m+1} = a_1(S_m(t), t) and b_{m+1} = b(S_m(t), t).
Therefore, we obtain:
S_{m+1}(t) = S_0 exp( ∫_0^t c(S_m(u), u) du + ∫_0^t b(S_m(u), u) dW_u ).
Further, we can rewrite this as
Y_m(t) := log S_m(t) − log S_0,   t ∈ [0, T],
and we have the following Picard iteration:
d Y m + 1 ( t ) = c ( S m ( t ) , t ) d t + b ( S m ( t ) , t ) d W t .
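As an illustration, the following minimal Python sketch applies the fixed-point map S_{m+1}(t) = S_0 exp( ∫_0^t c(S_m, u) du + ∫_0^t b(S_m, u) dW_u ) on one sampled Brownian path, with both integrals approximated by left-point (Ito) sums; the function names, the grid and the constant test coefficients are illustrative choices and not taken from the article.

```python
import numpy as np

def picard_exponential(a, b, S0, T=1.0, n=1000, n_iter=5, seed=0):
    """Sketch of the Picard iteration S_{m+1}(t) = S0*exp(int c(S_m) du + int b(S_m) dW),
    with c = a - 0.5*b**2, on one sampled Brownian path (left-point Ito sums)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dW = np.sqrt(dt) * rng.standard_normal(n)          # Brownian increments
    S = np.full(n + 1, S0, dtype=float)                # starting iterate S_0(t) = S0
    for _ in range(n_iter):
        c_vals = a(t[:-1], S[:-1]) - 0.5 * b(t[:-1], S[:-1]) ** 2
        drift = np.concatenate(([0.0], np.cumsum(c_vals * dt)))
        noise = np.concatenate(([0.0], np.cumsum(b(t[:-1], S[:-1]) * dW)))
        S = S0 * np.exp(drift + noise)                  # next Picard iterate S_{m+1}
    return t, S

# illustrative constant coefficients a(t, X), b(t, X) (not from the paper):
t, S = picard_exponential(lambda t, X: 0.05 * np.ones_like(X),
                          lambda t, X: 0.2 * np.ones_like(X), S0=1.0)
```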
We have the following convergence result of the Picard iterations:
Lemma 1.
The Picard iteration (9) converges in L² with:
E( sup_{t ≤ T} |Y_{m+1}(t) − Y_m(t)|² ) ≤ C̃ T^{m+1} / (m+1)!,
where C̃ is a constant depending on C, C_L and S_0.
Proof. 
We apply the recursion of Lemma A1, see Appendix A.1.
We obtain:
E( sup_{t ≤ T} |Y_{m+1}(t) − Y_m(t)|² ) ≤ (C_L^m T^m / m!) E( sup_{t ≤ T} |Y_1(t) − Y_0(t)|² ).
Since Y_1(t) = S_0 and Y_0(t) = 0, we obtain
C_L^m E( sup_{t ≤ T} |Y_1(t) − Y_0(t)|² ) ≤ C C_L^{m+1} (T^{m+1} / m!) S_0² ≤ C̃ T^{m+1} / (m+1)!,
where C̃ is a constant depending on C and C_L. □
The convergence result shows that for m → ∞ we have C̃ T^{m+1} / (m+1)! → 0.
We apply Picard iterations, which are deduced from the Doléans-Dade exponential formula, see [5].
We have the following Assumption:
Assumption 2.
We have the stochastic differential equation:
dX(t) = X(t) a_1(t, X(t)) dt + X(t) b(t, X(t)) dW_t,   t ∈ [0, T],
X(0) = X_0,
where a_1: [0, T] × ℝ → ℝ and b: [0, T] × ℝ → ℝ are locally Lipschitz functions. We assume that Equation (11) has a unique strong solution X and that it is positive.
A generalization of Equation (11) is given as:
dX(t) = A_1(t, X(t)) dt + B(t, X(t)) dW_t,   t ∈ [0, T],
X(0) = X_0,
where we define a_1(t, X(t)) = A_1(t, X(t))/X(t) and b(t, X(t)) = B(t, X(t))/X(t) on ℝ∖{0} and we assume that a_1 and b are bounded.
We apply the iterated Doléans-Dade process for X_i, i ∈ ℕ_+, such that X_i converges to S for i → ∞.
We have the following scheme:
X_0(t) = X(t_0),   X_i(t) = X_0 F_t(a_{1,i}) E_t(b_i),   t ∈ [0, T],
where we have
F_t(a_{1,i}) = exp( ∫_0^t a_1(s, X_{i−1}(s)) ds ),   i ≥ 1, t ∈ [0, T],
E_t(b_i) = exp( ∫_0^t b(s, X_{i−1}(s)) dW_s − (1/2) ∫_0^t b²(s, X_{i−1}(s)) ds ),   i ≥ 1, t ∈ [0, T].
With Y_i = log(X_i) − log(X_0), this process satisfies the following SDE:
dY_{i+1}(t) = ã_1(t, X_i(t)) dt + b(t, X_i(t)) dW_t,
where ã_1(t, X) = a_1(t, X) − (1/2) b²(t, X).
Then, we can prove that X_i converges to X for i → ∞, see also [5].

2.1.3. Discretisation of the Picard Iterations in Time

We have the uniform discretization with Δt = T/n, i.e., Δt = t_{j+1} − t_j for j = 0, …, n−1.
We have the recursive process with j = 0, …, n−1 and the iterative steps i ≥ 1:
X_j = X(t_j),
X_i(t_{j+1}) = X_j F_t(a_{1,i,j}) E_t(b_{i,j}),
where we have
F_t(a_{1,i,j}) = exp( ∫_{t_j}^{t_{j+1}} a_1(s, X_{i−1}(s)) ds ),   i ≥ 1,
E_t(b_{i,j}) = exp( ∫_{t_j}^{t_{j+1}} b(s, X_{i−1}(s)) dW_s − (1/2) ∫_{t_j}^{t_{j+1}} b²(s, X_{i−1}(s)) ds ),   i ≥ 1.
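A minimal sketch of this time-discretised recursion is given below; the two integrals over [t_j, t_{j+1}] are approximated by a one-point rule and the coefficients are frozen at the previous Picard iterate (assumed autonomous here for simplicity). The function names and the constant test coefficients are illustrative.

```python
import numpy as np

def picard_doleans_step(a1, b, Xj, dt, dW, n_iter=2):
    """One time step t_j -> t_{j+1} of the discretised Picard iteration:
    X_i(t_{j+1}) = X_j * F(a_{1,i,j}) * E(b_{i,j}), with the integrals over
    [t_j, t_{j+1}] approximated by a one-point rule, starting from X_j."""
    X_prev = Xj                                        # iterate X_0 of the inner loop
    X_new = Xj
    for _ in range(n_iter):
        F = np.exp(a1(X_prev) * dt)                                   # deterministic factor
        E = np.exp(b(X_prev) * dW - 0.5 * b(X_prev) ** 2 * dt)        # Doleans-Dade factor
        X_new = Xj * F * E
        X_prev = X_new                                  # use the latest iterate in the coefficients
    return X_new

def solve(a1, b, X0=1.0, T=1.0, n=1000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n
    X = np.empty(n + 1)
    X[0] = X0
    for j in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        X[j + 1] = picard_doleans_step(a1, b, X[j], dt, dW)
    return X

# illustrative autonomous coefficients a_1(X), b(X) (not from the paper):
X = solve(lambda X: 0.1, lambda X: 0.2)
```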

2.2. Inhomogeneous Equation

For the inhomogeneous part, we have the following scheme:
X_0(t) = X(t_0),
X_i(t) = X_0 F_t(a_{1,i}) E_t(b_i) + ∫_0^t F_{t−s}(a_{1,i}) E_{t−s}(b_i) a_{2,i}(X(s)) ds,   t ∈ [0, T],
where a_{2,i}(x) is the right-hand side based on our decomposition a(x) = a_1(x) + a_2(x).

2.2.1. Discretisation of the Picard Iterations in Time for the Inhomogeneous Part

We have the uniform discretization with Δt = T/n, i.e., Δt = t_{j+1} − t_j for j = 0, …, n−1.
We have the recursive process with j = 0, …, n−1 and the iterative steps i ≥ 1:
X_j(t) = X(t_j),
X_i(t_{j+1}) = X_j F_t(a_{1,i,j}) E_t(b_{i,j}) + ∫_{t_j}^{t_{j+1}} F_{t_{j+1}−s}(a_{1,i,j}) E_{t_{j+1}−s}(b_{i,j}) a_{2,i,j}(X(s)) ds,
where we have
F_{t_{j+1}−s}(a_{1,i,j}) = exp( ∫_{t_j}^{t_{j+1}} a_1(t_{j+1} − s, X_{i−1}(t_{j+1} − s)) ds ),   i ≥ 1,
E_{t_{j+1}−s}(b_{i,j}) = exp( ∫_{t_j}^{t_{j+1}} b(t_{j+1} − s, X_{i−1}(t_{j+1} − s)) dW_s − (1/2) ∫_{t_j}^{t_{j+1}} b²(t_{j+1} − s, X_{i−1}(t_{j+1} − s)) ds ),   i ≥ 1.

2.2.2. Approximation of the Integral-Formulation of the Inhomogeneous Part

The integration of the inhomogeneous part can be done with numerical integration methods, e.g., trapezoidal rule, Romberg’s method, see [24].
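For example, a trapezoidal approximation of the convolution-type integral over one time step could look as follows; the callable interface (F, E, a2 and the path X given as functions of time, all assumed to be vectorised over numpy arrays) is an assumption of this sketch.

```python
import numpy as np

def convolution_trapezoid(F, E, a2, X, tj, tj1, n_sub=4):
    """Trapezoidal approximation of the inhomogeneous term
    int_{t_j}^{t_{j+1}} F(t_{j+1}-s) E(t_{j+1}-s) a2(X(s)) ds."""
    s = np.linspace(tj, tj1, n_sub + 1)            # quadrature nodes in [t_j, t_{j+1}]
    integrand = F(tj1 - s) * E(tj1 - s) * a2(X(s))
    return np.trapz(integrand, s)
```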
We have the following convergence results in Lemma 2.
Lemma 2.
The inhomogeneous part of the Picard iteration, given by (20) and (21), converges in L² with:
E( sup_{t ≤ T} |Y_{m+1}(t) − Y_m(t)|² ) ≤ Ĉ T^{m+1} / (m+1)!,
where Ĉ is a constant depending on C, C_L and S_0.
Proof. 
We deal with the inhomogeneous part and estimate the integral part, which is given as:
S̃_{m+1} = ∫_0^T F_{T−s}(a_{1,i,j}) E_{T−s}(b_{i,j}) a_{2,i,j}(X(s)) ds ≤ C̃̃ T sup_{0 ≤ t ≤ T} F_t(a_{1,i,j}) E_t(b_{i,j}) sup_{0 ≤ t ≤ T} a_{2,i,j}(X(t)),
where C̃̃ is a constant. We obtain:
S̃_{m+1} ≤ Ŝ_{m+1} ≤ â_2 sup_{0 ≤ t ≤ T} a_{2,i,j}(X(t)).
Then, we can proceed as in the homogeneous part: we apply
Ŷ_m(t) := log Ŝ_m(t) − log â_2,   t ∈ [0, T],
and we have the following Picard iteration:
dŶ_{m+1}(t) = c(Ŝ_m(t), t) dt + b(Ŝ_m(t), t) dW_t.
The estimation is done with the same arguments as in the homogeneous part and we obtain:
E( sup_{t ≤ T} |Ŷ_{m+1}(t) − Ŷ_m(t)|² ) ≤ (C_L^m T^m / m!) E( sup_{t ≤ T} |Ŷ_1(t) − Ŷ_0(t)|² ).
Since Ŷ_1(t) = â_2 and Ŷ_0(t) = 0, we obtain
C_L^m E( sup_{t ≤ T} |Ŷ_1(t) − Ŷ_0(t)|² ) ≤ C C_L^{m+1} (T^{m+1} / m!) â_2² ≤ Ĉ T^{m+1} / (m+1)!,
where Ĉ is a constant depending on C and C_L. □
The convergence results for the homogeneous and inhomogeneous parts can be combined. Therefore, we have a convergence of the Picard iterative scheme in the homogeneous and inhomogeneous case.
Example 1.
For the mid-point rule, we have the following recursion:
Based on the uniform discretization with Δt = T/n, i.e., Δt = t_{j+1} − t_j for j = 0, …, n−1, and the iterative steps i ≥ 1, we have:
X_j(t) = X(t_j),
X_i(t_{j+1}) = X_j F_t(a_{1,i,j}) E_t(b_{i,j}) + Δt ( F_{Δt/2}(a_{1,i,j}) E_{Δt/2}(b_{i,j}) a_{2,i,j}(X_i(t_j + Δt/2)) ),
where we have
F_{Δt/2}(a_{1,i,j}) = exp( Δt a_1(t_j + Δt/2, X_{i−1}(t_j + Δt/2)) ),   i ≥ 1,
E_{Δt/2}(b_{i,j}) = exp( ΔW_t b(t_j + Δt/2, X_{i−1}(t_j + Δt/2)) − (1/2) Δt b²(t_j + Δt/2, X_{i−1}(t_j + Δt/2)) ),   i ≥ 1.
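A sketch of one time step of this mid-point recursion follows; the mid-point value X_{i−1}(t_j + Δt/2) of the previous iterate is approximated by a half-step homogeneous update of X_j, which is an assumption of this sketch and not prescribed by the text, and the coefficient functions a1, a2, b are illustrative callables.

```python
import numpy as np

def picard_midpoint_step(a1, a2, b, Xj, dt, dW, n_iter=2):
    """Sketch of the mid-point recursion of Example 1: homogeneous Doleans-Dade
    factor over [t_j, t_{j+1}] plus dt times the mid-point factors applied to a2."""
    X_prev_end = Xj                                  # previous iterate at t_{j+1}
    X_new = Xj
    for _ in range(n_iter):
        # approximate the previous iterate at the mid-point by a half-step update of X_j
        X_mid = Xj * np.exp(0.5 * dt * a1(Xj) + 0.5 * dW * b(Xj)
                            - 0.25 * dt * b(Xj) ** 2)
        F_E_full = np.exp(dt * a1(X_prev_end) + dW * b(X_prev_end)
                          - 0.5 * dt * b(X_prev_end) ** 2)
        F_E_mid = np.exp(dt * a1(X_mid) + dW * b(X_mid) - 0.5 * dt * b(X_mid) ** 2)
        X_new = Xj * F_E_full + dt * F_E_mid * a2(X_mid)
        X_prev_end = X_new
    return X_new
```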

3. Numerical Examples

In the following, we study different numerical examples, which are based on rational polynomials for the deterministic and stochastic coefficients. Such delicate, only locally Lipschitz continuous coefficients are also applied in thin-liquid-film models, see [2].
We compare the standard stochastic methods, which are based on the
  • Euler-Maruyama-Scheme (EM), see [25],
  • Milstein-Scheme (Milstein), see [25],
and the different standard splitting methods, which are based on the
  • AB-splitting approaches (AB), see [26],
  • ABA-splitting approaches (ABA), see [27,28],
with the new splitting approaches, which are based on the Picard iteration approach
  • Iterative Picard approach (Picard), see [10,11],
  • Iterative Picard with Doléans-Dade exponential approach (Picard-Doleans), see Section 2 and [5],
  • Iterative Picard-splitting with Doléans-Dade exponential approach (Picard-Splitt-Doleans), see Section 2.
For simpler notation, we use the labels in brackets for the methods in the tables and figures below.

3.1. First Example: Nonlinear SDE With Root-Function or Irrational Function

We deal with a test-example based on root- or irrational functions, see also [29].
The equation is given as:
dX(t) = ( (2/5) X^{3/5} + 5 X^{4/5} ) dt + X^{4/5} dW_t,   t ∈ [0, 1],
X(0) = 1.
The analytical solution can be derived by the Ito-Taylor expansion, see [29], and we obtain:
X(t) = ( X(0) + t + (1/5) W(t) )^5,
where W(t) = W_t − W_0 = √t · ξ and ξ obeys the Gaussian normal distribution N(0, 1) with E(ξ) = 0 and E(ξ²) = Var(ξ) = 1.
We deal with the following numerical methods; a compact code sketch of the Euler-Maruyama, Milstein and Picard-Doleans updates is given after this list. Further, we apply the time-steps Δt = 1/N with N = 2^9, 2^10, 2^11, 2^12, 2^13.
  • Euler-Maruyama-Scheme
    X(t_{i+1}) = X(t_i) + ( (2/5) X^{3/5}(t_i) + 5 X^{4/5}(t_i) ) Δt + X^{4/5}(t_i) ΔW_t,
    where Δt = t_{i+1} − t_i, ΔW_t = W_{t_{i+1}} − W_{t_i} = √Δt · ξ and ξ obeys the Gaussian normal distribution N(0, 1) with E(ξ) = 0 and E(ξ²) = Var(ξ) = 1.
    We have i = 0, …, N−1 with X(t_0) = 1.0.
  • Milstein-Scheme
    X(t_{i+1}) = X(t_i) + ( (2/5) X^{3/5}(t_i) + 5 X^{4/5}(t_i) ) Δt + X^{4/5}(t_i) ΔW_t + (2/5) X^{3/5}(t_i) ((ΔW_t)² − Δt),
    where Δt = t_{i+1} − t_i, ΔW_t = W_{t_{i+1}} − W_{t_i} = √Δt · ξ and ξ obeys the Gaussian normal distribution N(0, 1) with E(ξ) = 0 and E(ξ²) = Var(ξ) = 1.
    We have i = 0, …, N−1 with X(t_0) = 1.0.
  • AB-splitting approach:
    We initialize t_i with i = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with the 2 steps:
    • A-step:
      X̃(t_{i+1}) = X(t_i) + ( (2/5) X^{3/5}(t_i) + 5 X^{4/5}(t_i) ) Δt,
    • B-step:
      X(t_{i+1}) = X̃(t_{i+1}) + X̃^{4/5}(t_{i+1}) ΔW_t + (2/5) X̃^{3/5}(t_{i+1}) ((ΔW_t)² − Δt),
      where we have the solution X(t_{i+1}) and we go to the next time-step until i = N−1.
  • ABA-splitting approach:
    We initialize t_i with i = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with the 3 steps:
    • A-step (Δt/2):
      X̃(t_{i+1}) = X(t_i) + ( (2/5) X^{3/5}(t_i) + 5 X^{4/5}(t_i) ) Δt/2,
    • B-step (Δt):
      X̂(t_{i+1}) = X̃(t_{i+1}) + X̃^{4/5}(t_{i+1}) ΔW_t + (2/5) X̃^{3/5}(t_{i+1}) ((ΔW_t)² − Δt),
    • A-step (Δt/2):
      X(t_{i+1}) = X̂(t_{i+1}) + ( (2/5) X̂^{3/5}(t_{i+1}) + 5 X̂^{4/5}(t_{i+1}) ) Δt/2,
      where we have the solution X(t_{i+1}) and we go to the next time-step until i = N−1.
  • Iterative Picard approach:
    dX(t) = ( (2/5) X^{3/5} + 5 X^{4/5} ) dt + X^{4/5} dW_t,   t ∈ [0, 1],   X(0) = 1,
    we apply a Picard iteration
    dX_i(t) = ( (2/5) X_{i−1}^{3/5} + 5 X_{i−1}^{4/5} ) dt + X_{i−1}^{4/5} dW_t,   t ∈ [0, 1],   X(0) = 1,
    where X_0(t) = X(0), while we apply the implicit method in the drift term and the explicit method in the diffusion term.
    The algorithm is given as: We initialize t_n with n = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with 2 loops (loop 1 is the computation over the full time-domain and loop 2 is the computation with i = 1, 2, …):
    • (a) n = 0, …, N−1:
    • (b) i = 1, …, I:
    • (c) Computation
      X(t_{n+1})_i = X(t_n) + ( (2/5) X^{3/5}(t_{n+1})_{i−1} + 5 X^{4/5}(t_{n+1})_{i−1} ) Δt + X^{4/5}(t_n) ΔW_t + (2/5) X^{3/5}(t_n) ((ΔW_t)² − Δt),
    • (d) i = i + 1; if i = I + 1 then the inner loop is done, else we go to Step (c),
    • (e) n = n + 1; if n + 1 = N + 1 then we are done, else we go to Step (b).
  • Iterative Picard with Doléans-Dade exponential approach:
    dX(t) = ( (2/5) X^{3/5} + 5 X^{4/5} ) dt + X^{4/5} dW_t,   t ∈ [0, 1],   X(0) = 1,
    we apply a Picard iteration with the Doléans-Dade exponential approach
    dX_i(t) = ( (2/5) X_{i−1}^{−2/5} + 5 X_{i−1}^{−1/5} ) X_i dt + X_{i−1}^{−1/5} X_i dW_t,   t ∈ [0, 1],   X(0) = 1,
    where X_0(t) = X(0), while we apply the implicit method in the drift term and the explicit method in the diffusion term.
    The algorithm is given as: We initialize t_n with n = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with 2 loops (loop 1 is the computation over the full time-domain and loop 2 is the computation with i = 1, 2, …):
    • (a) n = 0, …, N−1:
    • (b) i = 1, …, I:
    • (c) Computation
      X(t_{n+1})_i = X(t_n) exp( Δt (2/5) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2 + Δt 5 (X^{−1/5}(t_{n+1})_{i−1} + X^{−1/5}(t_n))/2 − Δt (1/2) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2 + X^{−1/5}(t_n) ΔW_t + (1/2) X^{−2/5}(t_n) ((ΔW_t)² − Δt) ),
    • (d) i = i + 1; if i = I + 1 then the inner loop is done, else we go to Step (c),
    • (e) n = n + 1; if n + 1 = N + 1 then we are done, else we go to Step (b).
  • Iterative Picard-splitting with Doléans-Dade exponential approach:
    dX(t) = ( (2/5) X^{3/5} + 5 X^{4/5} ) dt + X^{4/5} dW_t,   t ∈ [0, 1],   X(0) = 1,
    we apply the following splitting approach:
    A(X(t)) = (2/5) X^{3/5} + 5 X^{4/5},   A_1(X(t)) = 5 X^{4/5},   A_2(X(t)) = (2/5) X^{3/5},
    where a_1(X(t)) = 5 X^{−1/5}(t).
    We apply the Picard iterations with the Doléans-Dade exponential approach and the splitting approach:
    dX_i(t) = ( 5 X_{i−1}^{−1/5} X_i dt + X_{i−1}^{−1/5} X_i dW_t ) + (2/5) X_{i−1}^{3/5} dt,   t ∈ [0, 1],   X(0) = 1,
    where X_0(t) = X(0), while we apply the implicit method in the drift term and the explicit method in the diffusion term.
    The algorithm is given as: We initialize t_n with n = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with 2 loops (loop 1 is the computation over the full time-domain and loop 2 is the computation with i = 1, 2, …):
    • (a) n = 0, …, N−1:
    • (b) i = 1, …, I:
    • (c) Computation (we apply the full exp):
      X(t_{n+1})_i = X(t_n) exp( Δt 5 (X^{−1/5}(t_{n+1})_{i−1} + X^{−1/5}(t_n))/2 − Δt (1/2) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2 + X^{−1/5}(t_n) ΔW_t + (1/2) X^{−2/5}(t_n) ((ΔW_t)² − Δt) ) + Δt exp( Δt 5 (X^{−1/5}(t_{n+1})_{i−1} + X^{−1/5}(t_n))/2 − Δt (1/2) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2 + X^{−1/5}(t_n) ΔW_t + (1/2) X^{−2/5}(t_n) ((ΔW_t)² − Δt) ) · (2/5) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2,
    • (d) i = i + 1; if i = I + 1 then the inner loop is done, else we go to Step (c),
    • (e) n = n + 1; if n + 1 = N + 1 then we are done, else we go to Step (b).
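The compact sketch referred to above implements the Euler-Maruyama, Milstein and a simplified variant of the Picard-Doleans update for this first example on one sample path and compares them with the exact solution; the full Picard-Doleans update in the text also averages the diffusion-related terms and adds a Milstein-type correction in the exponent, which is omitted here, and the iteration count I = 2 follows the tables, while everything else is illustrative.

```python
import numpy as np

# First example: dX = (2/5*X**(3/5) + 5*X**(4/5)) dt + X**(4/5) dW, X(0) = 1,
# exact solution X(t) = (X(0) + t + W(t)/5)**5.

def drift(X):   return 0.4 * X ** 0.6 + 5.0 * X ** 0.8
def diff(X):    return X ** 0.8
def diff_dX(X): return 0.8 * X ** (-0.2)              # derivative of the diffusion

def simulate(T=1.0, N=2 ** 10, I=2, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / N
    X_em = X_mil = X_pd = 1.0
    W = 0.0
    for _ in range(N):
        dW = np.sqrt(dt) * rng.standard_normal()
        W += dW
        # Euler-Maruyama
        X_em = X_em + drift(X_em) * dt + diff(X_em) * dW
        # Milstein (adds 0.5 * b * b' * ((dW)^2 - dt) = (2/5)*X**(3/5)*(...))
        X_mil = (X_mil + drift(X_mil) * dt + diff(X_mil) * dW
                 + 0.5 * diff(X_mil) * diff_dX(X_mil) * (dW ** 2 - dt))
        # simplified Picard-Doleans: a1(X) = drift(X)/X, b1(X) = diff(X)/X
        X_old = X_pd
        X_new = X_pd
        for _i in range(I):
            a1 = 0.5 * (drift(X_new) / X_new + drift(X_old) / X_old)   # averaged drift coefficient
            b1 = diff(X_old) / X_old
            X_new = X_old * np.exp(a1 * dt - 0.5 * b1 ** 2 * dt + b1 * dW)
        X_pd = X_new
    X_exact = (1.0 + T + W / 5.0) ** 5
    return X_em, X_mil, X_pd, X_exact

print(simulate())
```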
We obtain a critical CFL-type condition, given as follows:
Δt ≤ X^{2/5} / ξ²,
where ξ is N(0, 1) distributed. If the criterion is satisfied, we proceed with the step. If we violate this bound, we reduce the time-step Δt for the current time step to Δt = X^{2/5}/ξ². For the next time-step, we can use the old time-step again, and so on.
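A small helper illustrating this time-step restriction (the function name and the reset-to-default behaviour are assumptions of the sketch):

```python
import numpy as np

def cfl_dt(X, xi, dt_default):
    """Step restriction of the first example: accept dt if dt <= X**(2/5)/xi**2,
    otherwise reduce the current step to that bound (the next step returns to dt_default)."""
    dt_max = X ** 0.4 / xi ** 2
    return min(dt_default, dt_max)

# usage: xi is the standard normal draw of the current step
rng = np.random.default_rng(0)
xi = rng.standard_normal()
dt = cfl_dt(X=1.0, xi=xi, dt_default=1.0 / 512)
```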
For the error analysis, we apply the different errors:
  • Mean value at t = t_n over J sample paths:
    E(X_method(t_n)) = (1/J) Σ_{j=1}^{J} X_method^j(t_n),
    where we deal with the time-step Δt = 1/N with N = 512, 1024, 2048, 4096 and the time-points t_n, n = 1, …, N, with end-time-point t_N = 1.0. For the sample paths, we apply J = 100 or J = 1000 and for the methods, we have method ∈ {AB, ABA, EM, Mil, Picard, Picard-Doleans, Picard-Splitt-Doleans}.
  • Local mean square error at t = t_n over J sample paths:
    Var(X_method(t_n))_local = (1/J) Σ_{j=1}^{J} |E(X_method(t_n)) − X_method^j(t_n)|²,
    where we deal with the time-step Δt = 1/N with N = 512, 1024, 2048, 4096 and the time-points t_n, n = 1, …, N, with end-time-point t_N = 1.0. For the sample paths, we apply J = 100 or J = 1000 and for the methods, we have method ∈ {AB, ABA, EM, Mil, Picard, Picard-Doleans, Picard-Splitt-Doleans}.
  • Global mean square error over the full time-scale with different time-steps Δt and J sample paths:
    Var(X_method)_global = Σ_{n=0}^{N} Δt Var(X_method(t_n))_local,
    where we apply Equation (30) for the local mean square error. Further, we deal with Δt = 1/N with N = 512, 1024, 2048, 4096. For the sample paths, we apply J = 100 or J = 1000 and for the methods, we have method ∈ {AB, ABA, EM, Mil, Picard, Picard-Doleans, Picard-Splitt-Doleans}.
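The three error measures can be computed from a matrix of sample paths, as in the following sketch (the array layout with one row per sample path is an assumption):

```python
import numpy as np

def error_measures(X_method, dt):
    """Sketch of the error measures of Equations (29)-(31): X_method is an array of
    shape (J, N+1) holding the J sample paths of one method on the time grid t_n."""
    mean_path = X_method.mean(axis=0)                          # E(X_method(t_n)), Eq. (29)
    var_local = np.mean((X_method - mean_path) ** 2, axis=0)   # local mean square error, Eq. (30)
    var_global = dt * var_local.sum()                          # global mean square error, Eq. (31)
    return mean_path, var_local, var_global
```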
Remark 1.
We can reduce the computational time by decomposing the exp-operator and applying a Padé approximation. We saw in experiments that we can accelerate the computation by a factor of 2–3, such that we are in the same region as the ABA-splitting approach, see the remarks in Appendix A.2.
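For illustration, a low-order Padé replacement of the scalar exponential, in the spirit of Remark 1, could look as follows; the (1,1) order is chosen here only as an example and is valid for small arguments z = O(Δt):

```python
def exp_pade11(z):
    """(1,1) Pade approximant of exp(z): exp(z) ~ (1 + z/2) / (1 - z/2), a cheap
    replacement of the exponential for small arguments (low order chosen for illustration)."""
    return (1.0 + 0.5 * z) / (1.0 - 0.5 * z)
```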
In the following Table 1, we present the mean values at t = 1.0.
In the following Figure 1, we present the exact solution of the first experiment.
In the following Figure 2, we present the mean values, which are computed with Equation (29), with different time-steps and for all numerical methods.
In the following Figure 3, we present the mean square errors, which are computed with Equation (31), with different time-steps and for all numerical methods.
In the following Figure 4, we present the performance of the schemes.
Remark 2.
In Figure 2 and Figure 3, we present the means and the mean-square errors of the different schemes. We see the higher stability and the benefits of the iterative splitting, which applies symmetric techniques like the Doléans-Dade exponential approach. The locally Lipschitz operators are more stable to compute, since we can bound the operators in an exponential approach, see [12]. In the performance comparison, see Figure 7 in Section 3.2, we see that the exponential-based methods, like Picard-Doleans or Picard-Splitt-Doleans, are 2–3 times more expensive than the standard schemes. But we obtain much more accurate and stable results and can use much larger time-steps, such that we accelerate the solver process. Here, the restrictions due to the CFL condition for the EM and Milstein schemes are stronger, and therefore the exponential-based methods can use larger time-steps or are more accurate with the same time-steps. For increasing volatility in the stochastic differential equation, we have a stable method based on Picard-Doleans or Picard-Splitt-Doleans. Here, we deal with an implicit behaviour of the underlying methods and we have to be aware of the numerical diffusion, which appears with larger time-steps, see [30].

3.2. Second Example: Linear/Nonlinear SDE with Potential Function

We deal with a test-example based on root- or irrational functions, see also [29].
The equation is given as:
dX(t) = ( 2 X^{1/2} + 1 ) dt + 2 X^{1/2} dW_t,   t ∈ [0, 1],
X(0) = 1.
The analytical solution can be derived by the Ito-Taylor expansion, see [29], and we obtain:
X(t) = ( X(0) + t + W(t) )²,
where W(t) = W_t − W_0 = √t · ξ and ξ obeys the Gaussian normal distribution N(0, 1) with E(ξ) = 0 and E(ξ²) = Var(ξ) = 1, i.e., W(t) ∼ √t N(0, 1), where N(0, 1) indicates a standard normal random variable.
We deal with the following numerical methods; a code sketch of the Picard-splitting update is given after the time-step criterion below. Further, we apply the time-steps Δt = 1/N with N = 2^9, 2^10, 2^11, 2^12, 2^13.
  • Euler-Maruyama-Scheme
    X(t_{i+1}) = X(t_i) + ( 2 X^{1/2}(t_i) + 1 ) Δt + 2 X^{1/2}(t_i) ΔW_t,
    where Δt = t_{i+1} − t_i, ΔW_t = W_{t_{i+1}} − W_{t_i} = √Δt · ξ and ξ obeys the Gaussian normal distribution N(0, 1) with E(ξ) = 0 and E(ξ²) = Var(ξ) = 1.
    We have i = 0, …, N−1 with X(t_0) = 1.0.
  • Milstein-Scheme
    X(t_{i+1}) = X(t_i) + ( 2 X^{1/2}(t_i) + 1 ) Δt + 2 X^{1/2}(t_i) ΔW_t + (1/2) ((ΔW_t)² − Δt),
    where Δt = t_{i+1} − t_i, ΔW_t = W_{t_{i+1}} − W_{t_i} = √Δt · ξ and ξ obeys the Gaussian normal distribution N(0, 1) with E(ξ) = 0 and E(ξ²) = Var(ξ) = 1.
    We have i = 0, …, N−1 with X(t_0) = 1.0.
  • AB-splitting approach:
    We initialize t_i with i = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with the 2 steps:
    • A-step:
      X̃(t_{i+1}) = X(t_i) + ( 2 X^{1/2}(t_i) + 1 ) Δt,
    • B-step:
      X(t_{i+1}) = X̃(t_{i+1}) + 2 X̃^{1/2}(t_{i+1}) ΔW_t + (1/2) ((ΔW_t)² − Δt),
      where we have the solution X(t_{i+1}) and we go to the next time-step until i = N−1.
  • ABA-splitting approach:
    We initialize t_i with i = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with the 3 steps:
    • A-step (Δt/2):
      X̃(t_{i+1}) = X(t_i) + ( 2 X^{1/2}(t_i) + 1 ) Δt/2,
    • B-step (Δt):
      X̂(t_{i+1}) = X̃(t_{i+1}) + 2 X̃^{1/2}(t_{i+1}) ΔW_t + (1/2) ((ΔW_t)² − Δt),
    • A-step (Δt/2):
      X(t_{i+1}) = X̂(t_{i+1}) + ( 2 X̂^{1/2}(t_{i+1}) + 1 ) Δt/2,
      where we have the solution X ( t i + 1 ) and we go to the next time-step till i = N 1 .
  • Iterative Picard approach:
    dX(t) = ( 2 X^{1/2} + 1 ) dt + 2 X^{1/2} dW_t,   t ∈ [0, 1],   X(0) = 1,
    we apply a Picard iteration
    dX_i(t) = ( 2 X_{i−1}^{1/2} + 1 ) dt + 2 X_{i−1}^{1/2} dW_t,   t ∈ [0, 1],   X(0) = 1,
    where X_0(t) = X(0), while we apply the implicit method for the drift term and the explicit method for the diffusion term.
    The algorithm is given as:
    We initialize t_n with n = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with 2 loops (loop 1 is the computation over the full time-domain and loop 2 is the computation with i = 1, 2, …):
    • (a) n = 0, …, N−1:
    • (b) i = 1, …, I:
    • (c) Computation
      X_i(t_{n+1}) = X(t_n) + ( 2 X_{i−1}^{1/2}(t_{n+1}) + 1 ) Δt + 2 X^{1/2}(t_n) ΔW_t + (1/2) ((ΔW_t)² − Δt),
      where Δt = t_{n+1} − t_n, ΔW_t = W_{t_{n+1}} − W_{t_n} = √Δt · ξ and ξ obeys the Gaussian normal distribution N(0, 1) with E(ξ) = 0 and E(ξ²) = Var(ξ) = 1.
    • (d) i = i + 1; if i = I + 1 then the inner loop is done, else we go to Step (c),
    • (e) n = n + 1; if n + 1 = N + 1 then we are done, else we go to Step (b).
  • Iterative Picard-splitting with Doléans-Dade exponential approach:
    dX(t) = ( 2 X^{1/2} + 1 ) dt + 2 X^{1/2} dW_t,   t ∈ [0, 1],   X(0) = 1,
    we apply the following splitting approach:
    A(X(t)) = 2 X^{1/2} + 1,   A_1(X(t)) = 2 X^{1/2},   A_2(X(t)) = 1,
    where a_1(X(t)) = 2 X^{−1/2}(t).
    We apply a Picard iteration with the Doléans-Dade exponential approach and a right-hand side
    dX_i(t) = ( 2 X_{i−1}^{−1/2} ) X_i dt + 2 X_{i−1}^{−1/2} X_i dW_t + 1 dt,   t ∈ [0, 1],   X(0) = 1,
    where X_0(t) = X(0), while we apply the implicit method in the drift term and the explicit method in the diffusion term.
    The algorithm is given as: We initialize t_n with n = 0, …, N−1, where t_N = T, and X(0) is the initial condition.
    We deal with 2 loops (loop 1 is the computation over the full time-domain and loop 2 is the computation with i = 1, 2, …):
    • (a) n = 0, …, N−1:
    • (b) i = 1, …, I:
    • (c) Computation (we apply either Version 1, the full exp with integration of the RHS, given below, or Version 2, the reduced exp of Appendix A.2):
      X(t_{n+1})_i = X(t_n) exp( Δt (2 X^{−1/2}(t_{n+1})_{i−1} + 2 X^{−1/2}(t_n))/2 − Δt (X^{−1}(t_{n+1})_{i−1} + X^{−1}(t_n))/2 + 2 X^{−1/2}(t_n) ΔW_t + (1/2) X^{−1}(t_n) ((ΔW_t)² − Δt) ) + Δt,
    • (d) i = i + 1; if i = I + 1 then the inner loop is done, else we go to Step (c),
    • (e) n = n + 1; if n + 1 = N + 1 then we are done, else we go to Step (b).
We obtain a critical CFL-type condition, given as follows:
Δt ≤ X / (4 ξ²),
where ξ is N(0, 1) distributed. If the criterion is satisfied, we proceed with the step. If we violate this bound, we reduce the time-step Δt for the current time step to Δt = X/(4ξ²). For the next time-step, we can use the old time-step again, and so on.
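A minimal sketch of the full-exp update of this Picard-splitting scheme (the computation step above, cf. Equation (34)), with the constant right-hand side integrated exactly as +Δt; the coefficient averaging follows the displayed formula as reconstructed here, the inner iteration count I = 2 follows the tables, and the function name and the test step are illustrative.

```python
import numpy as np

def picard_splitt_doleans_step(Xn, dt, dW, I=2):
    """One time step for dX = (2*sqrt(X) + 1) dt + 2*sqrt(X) dW: homogeneous
    Doleans-Dade exponential factor times X(t_n) plus the exactly integrated
    constant right-hand side (= dt)."""
    X_old = Xn                                   # previous Picard iterate at t_{n+1}
    X_new = Xn
    for _ in range(I):
        a1_avg = 0.5 * (2.0 * X_old ** -0.5 + 2.0 * Xn ** -0.5)   # average of a_1(X) = 2*X**(-1/2)
        inv_avg = 0.5 * (1.0 / X_old + 1.0 / Xn)                  # average of X**(-1)
        exponent = (dt * a1_avg - dt * inv_avg
                    + 2.0 * Xn ** -0.5 * dW
                    + 0.5 / Xn * (dW ** 2 - dt))                  # Milstein-type correction
        X_new = Xn * np.exp(exponent) + dt                        # exp factor plus integrated RHS
        X_old = X_new
    return X_new

# one illustrative step
rng = np.random.default_rng(2)
dt = 1.0 / 1024
print(picard_splitt_doleans_step(1.0, dt, np.sqrt(dt) * rng.standard_normal()))
```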
Remark 3.
In the mixed example with an additional right-hand side, we can reduce the computational time by decomposing into symmetric operators. We applied an exp-operator via a Taylor or Padé approximation and, additionally, a convolution operator for the right-hand side, see also [12]. We saw the stable behaviour of such a splitting and could additionally accelerate the computation by a factor of 2–3, such that we are in the same region as the ABA-splitting approach, see the remarks in Appendix A.2.
In the following Figure 5, we present the mean values, which are computed with Equation (29), with different time-steps and for all the numerical methods.
In the following Figure 6, we present the mean square errors, which are computed with Equation (31), with different time-steps and for all the numerical methods.
In the following Figure 7, we present the performance of the schemes.
Remark 4.
In the second example, which is a mixed linear and nonlinear stochastic differential equation, we also see the benefit and the efficiency of the iterative splitting approaches with the Doléans-Dade exponential approach. Based on the locally Lipschitz coefficients, we can also apply larger time-steps for the exponential method with an iterative method. Such Picard iterations solve the nonlinear parts of the operators, and we can stabilize the nonlinear growth parts with the exponential approach. This allows us to avoid blow-ups of the exponential growth with sufficiently large time-steps, see [10,16]. Further, the benefit of splitting into symmetric nonlinear and linear parts allows us to accelerate the solver processes, see [31]. The benefit of larger time-steps for the iterative Picard-splitting with the Doléans-Dade exponential approach can also be applied to increasing volatilities in the SDEs. Here, we have to consider the implicit character of the iterative Picard-splitting approaches, see [30]. Such a behaviour can smear or smoothen out the solution for increasing volatility, like increasing diffusion, such that it might be necessary to apply multiscale methods with much finer time-steps or a restriction of the time-step around the CFL condition, see [30,32].

4. Conclusions

We have discussed a new iterative splitting approach, which is based on Picard iterations and the Doléans-Dade exponential approach. We proved the convergence of the new schemes, which embed the Doléans-Dade exponential approach. Such a combination allows us to circumvent the global non-Lipschitz problems and therefore to consider more stable, locally Lipschitz problems. Such a local problem can be solved with reformulations based on exponential operators, which are more stable in the numerical approach. We could embed such locally Lipschitz problems into a splitting approach, which deals with exponential parts and a nonlinear solver given as Picard's method. Such a combination allows us to obtain stable and convergent methods. Due to symmetric decompositions of the operators, we could split into more appropriate linear, nonlinear and stochastic operators, which allows faster computations. The first numerical results are tested with linear and nonlinear stochastic differential equations with rational functions in the drift and diffusion parts. We present the benefits of the iterative splitting approach with the Doléans-Dade exponential, where we obtain an implicit method with stable results. Therefore, we are less dependent on the CFL conditions and accelerate the solver process. In the future, we will test the splitting approach on more inhomogeneous problems and in the case of increasing volatility, e.g., real-life problems with stochastic lubrication models.

Funding

This research was funded by German Academic Exchange Service grant number 91588469.

Acknowledgments

We acknowledge support by the DFG Open Access Publication Funds of the Ruhr-Universität of Bochum, Germany.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Appendix A.1. Additional Proofs

We have the following Lemma A1; see also the proof in the paper [5].
Lemma A1.
Let f_m(t) be a sequence of positive functions defined on the interval t ∈ [0, T], with T > 0, such that for some C > 0 we have:
f_{m+1}(t) ≤ C ∫_0^t f_m(s) ds,   0 ≤ f_0 ≤ C,
then we obtain:
sup_{t ∈ [0, T]} f_{m+1}(t) ≤ (C^m t^m / m!) sup_{t ∈ [0, T]} f_0(t).
Proof. 
The proof is sketched as follows, see also [5]: We can bound the function f_0 as:
C_f := sup_{t ∈ [0, T]} f_0(t) ≤ C.
We apply:
sup_{t ∈ [0, T]} f_{m−1}(t) ≤ (C^{m−1} t^{m−1} / (m−1)!) C_f,
then we apply the integral and obtain
f_m(t) ≤ C ∫_0^t (C^{m−1} s^{m−1} / (m−1)!) C_f ds.
By integration, we receive the result, which proves our lemma. □

Appendix A.2. Approximated exp-Functions

We can reduce the computational time for the Doléans-Dade exponential approach with the following reductions:
  • Version with reduced exp in Example 1: We replace Equation (27) with the reduced exponential function, see:
    X(t_{n+1})_i = ( X(t_n) + Δt (2/5) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2 ) · ( 1 + Δt 5 (X^{−1/5}(t_{n+1})_{i−1} + X^{−1/5}(t_n))/2 − Δt (1/2) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2 + X^{−1/5}(t_n) ΔW_t + (1/2) X^{−2/5}(t_n) ((ΔW_t)² − Δt) ),
    where we assume that we choose an appropriate Δt > 0 with 1 + Δt 5 (X^{−1/5}(t_{n+1})_{i−1} + X^{−1/5}(t_n))/2 − Δt (1/2) (X^{−2/5}(t_{n+1})_{i−1} + X^{−2/5}(t_n))/2 + X^{−1/5}(t_n) ΔW_t + (1/2) X^{−2/5}(t_n) ((ΔW_t)² − Δt) ≥ 0.
  • Version with reduced exp in Example 2: We replace Equation (34) with the reduced exponential function and the integration of the RHS, see:
    X(t_{n+1})_i = X(t_n) ( 1 + Δt (2 X^{−1/2}(t_{n+1})_{i−1} + 2 X^{−1/2}(t_n))/2 − Δt (X^{−1}(t_{n+1})_{i−1} + X^{−1}(t_n))/2 + 2 X^{−1/2}(t_n) ΔW_t + (1/2) X^{−1}(t_n) ((ΔW_t)² − Δt) ) + Δt.
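A sketch of the reduced-exp variant for the second example, where the exponential of the full-exp update is replaced by its first-order Taylor polynomial 1 + z; the interface mirrors the sketch in Section 3.2 and assumes that Δt is chosen small enough that 1 + z ≥ 0.

```python
def reduced_exp_step(Xn, X_old, dt, dW):
    """Reduced-exp update for the second example: replace exp(z) by 1 + z in the
    full-exp Picard-splitting step; X_old is the previous Picard iterate at t_{n+1}."""
    z = (dt * 0.5 * (2.0 * X_old ** -0.5 + 2.0 * Xn ** -0.5)
         - dt * 0.5 * (1.0 / X_old + 1.0 / Xn)
         + 2.0 * Xn ** -0.5 * dW
         + 0.5 / Xn * (dW ** 2 - dt))
    return Xn * (1.0 + z) + dt       # assumes dt small enough that 1 + z >= 0
```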

References

  1. Zhang, Z.; Karniadakis, G.E. Numerical Methods for Stochastic Partial Differential Equations with White Noise; Applied Mathematical Sciences, No. 196; Springer International Publishing: Heidelberg/Berlin, Germany; New York, NY, USA, 2017.
  2. Duran-Olivencia, M.A.; Gvalani, R.S.; Kalliadasis, S.; Pavliotis, G.A. Instability, Rupture and Fluctuations in Thin Liquid Films: Theory and Computations. arXiv 2017, arXiv:1707.08811.
  3. Grün, G.; Mecke, K.; Rauscher, M. Thin-film flow influenced by thermal noise. J. Stat. Phys. 2006, 122, 1261–1291.
  4. Diez, J.A.; González, A.G.; Fernández, R. Metallic-thin-film instability with spatially correlated thermal noise. Phys. Rev. E 2016, 93, 013120.
  5. Baptiste, J.; Grepat, J.; Lepinette, E. Approximation of Non-Lipschitz SDEs by Picard Iterations. J. Appl. Math. Financ. 2018, 25, 148–179.
  6. Geiser, J. Multicomponent and Multiscale Systems: Theory, Methods, and Applications in Engineering; Springer: Heidelberg, Germany, 2016.
  7. Moler, C.B.; Loan, C.F.V. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Rev. 2003, 45, 3–49.
  8. Bader, P.; Blanes, S.; Fernando, F.C. Computing the Matrix Exponential with an Optimized Taylor Polynomial Approximation. Mathematics 2019, 7, 1174.
  9. Najfeld, I.; Havel, T.F. Derivatives of the matrix exponential and their computation. Adv. Appl. Math. 1995, 16, 321–375.
  10. Geiser, J. Iterative semi-implicit splitting methods for stochastic chemical kinetics. In Proceedings of the Seventh International Conference, FDM:T&A 2018, Lozenetz, Bulgaria, 11–26 June 2018; Lecture Notes in Computer Science No. 11386, pp. 32–43; Springer: Berlin/Heidelberg, Germany, 2018.
  11. Geiser, J.; Bartecki, K. Iterative and Non-iterative Splitting approach of the stochastic inviscid Burgers' equation. In Proceedings of the AIP Conference Proceedings Paper, ICNAAM 2019, Rhodes, Greece, 23–28 September 2019.
  12. Geiser, J. Iterative Splitting Methods for Differential Equations; Numerical Analysis and Scientific Computing Series; Taylor & Francis Group: Boca Raton, FL, USA; London, UK; New York, NY, USA, 2011.
  13. Geiser, J. Additive via Iterative Splitting Schemes: Algorithms and Applications in Heat-Transfer Problems. In Proceedings of the Ninth International Conference on Engineering Computational Technology, Naples, Italy, 2–5 September 2014; Ivanyi, P., Topping, B.H.V., Eds.; Civil-Comp Press: Stirlingshire, UK, 2014.
  14. Vandewalle, S. Parallel Multigrid Waveform Relaxation for Parabolic Problems; Teubner Skripten zur Numerik; B.G. Teubner: Stuttgart/Leipzig, Germany, 1993.
  15. Gyöngy, I.; Krylov, N. On the Splitting-up Method and Stochastic Partial Differential Equations. Ann. Probab. 2003, 31, 564–591.
  16. Duan, J.; Yan, J. General matrix-valued inhomogeneous linear stochastic differential equations and applications. Stat. Probab. Lett. 2008, 78, 2361–2365.
  17. Bonaccorsi, S. Stochastic variation of constants formula for infinite dimensional equations. Stoch. Anal. Appl. 1999, 17, 509–528.
  18. Reiss, M.; Riedle, M.; van Gaans, O. On Emery's Inequality and a Variation-of-Constants Formula. Stoch. Anal. Appl. 2007, 25, 353–379.
  19. Scheutzow, M. Stochastic Partial Differential Equations; Lecture Notes, BMS Advanced Course; Technische Universität Berlin: Berlin, Germany, 2019; Available online: http://page.math.tu-berlin.de/~scheutzow/SPDEmain.pdf (accessed on 20 February 2020).
  20. Prato, G.D.; Jentzen, A.; Röckner, M. A mild Ito formula for SPDEs. Trans. Am. Math. Soc. 2019, 372, 3755–3807.
  21. Deck, T. Der Ito-Kalkül: Einführung und Anwendung; Springer: Berlin/Heidelberg, Germany, 2006.
  22. Teschl, G. Ordinary Differential Equations and Dynamical Systems; Graduate Studies in Mathematics, Volume 140; American Mathematical Society: Providence, RI, USA, 2012.
  23. Oksendal, B. Stochastic Differential Equations: An Introduction with Applications; Springer: Heidelberg/Berlin, Germany, 2003.
  24. Stoer, J.; Bulirsch, R. Introduction to Numerical Analysis; Springer: Heidelberg/Berlin, Germany; New York, NY, USA, 1980.
  25. Kloeden, P.E.; Platen, E. The Numerical Solution of Stochastic Differential Equations; Springer: Berlin/Heidelberg, Germany, 1992.
  26. Karlsen, K.H.; Storrosten, E.B. On stochastic conservation laws and Malliavin calculus. J. Funct. Anal. 2017, 272, 421–497.
  27. Ninomiya, S.; Victoir, N. Weak approximation of stochastic differential equations and application to derivative pricing. Appl. Math. Financ. 2008, 15, 107–121.
  28. Ninomiya, M.; Ninomiya, S. A new higher-order weak approximation scheme for stochastic differential equations and the Runge–Kutta method. Financ. Stoch. 2009, 13, 415–443.
  29. Bayram, M.; Partal, T.; Buyukoz, G.O. Numerical methods for simulation of stochastic differential equations. Adv. Differ. Equ. 2018, 2018.
  30. Geiser, J. Multicomponent and Multiscale Systems—Theory, Methods, and Applications in Engineering; Springer International Publishing AG: Heidelberg/Berlin, Germany; New York, NY, USA, 2016.
  31. Geiser, J. Computing Exponential for Iterative Splitting Methods: Algorithms and Applications. Math. Numer. Model. Flow Transp. 2011, 2011, 193781.
  32. Fouque, J.-P.; Papanicolaou, G.; Sircar, R. Multiscale Stochastic Volatility Asymptotics. SIAM J. Multiscale Model. Simul. 2003, 2, 22–42.
Figure 1. The exact solution of the stochastic differential equation (SDE) in the first experiment.
Figure 2. Results of the mean values are computed with Equation (29) and they are presented for the different methods with the following number of time steps N = 512, 1024, 2048, 4096.
Figure 3. Results of mean square errors are computed with Equation (31) and they are presented for the different methods with the following number of time steps N = 512, 1024, 2048, 4096.
Figure 4. Performance of the different schemes, where we compute the mean error values of the different methods at t = 1.0 with J = 100 samples and different numbers of time-steps with N = 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000.
Figure 5. Results of the mean values, which are computed with Equation (29) and they are presented for the different methods with the number of time steps N = 512, 1024, 2048, 4096.
Figure 6. Results of mean square errors, which are computed with Equation (31) and they are presented for the different methods with the number of time steps N = 512, 1024, 2048, 4096.
Figure 7. Performance of the different schemes, where we compute the mean error values of the different methods at t = 1.0 with J = 100 samples and different numbers of time-steps with N = 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000.
Table 1. Mean values of all the methods at time-point t = 1.0 (the Picard, Picard-Doleans and Picard-Splitt-Doleans columns use i = 2 iterations).

N     | EM      | Milstein | AB      | ABA     | Picard  | Picard-Doleans | Picard-Splitt-Doleans
2^9   | 33.9535 | 35.5878  | 35.4419 | 34.6244 | 40.6004 | 37.8312        | 31.3049
2^10  | 34.4942 | 35.4737  | 34.1016 | 34.529  | 40.0732 | 37.9911        | 32.2207
2^11  | 34.7694 | 34.2016  | 35.2721 | 34.0984 | 40.6338 | 37.9833        | 30.5993
2^12  | 34.2202 | 36.2405  | 35.1938 | 35.8434 | 40.1492 | 37.9433        | 31.4823
