A New Explicit Magnus Expansion for Nonlinear Stochastic Differential Equations

Abstract: In this paper, based on an iterative technique, a new explicit Magnus expansion is proposed for the nonlinear stochastic differential equation dy = A(t, y)y dt + B(t, y)y ∘ dW. One of the most important features of the explicit Magnus method is that it preserves the positivity of the solution of the above stochastic differential equation. We study the explicit Magnus method in the case where the drift term only satisfies a one-sided Lipschitz condition, and discuss the numerical truncated algorithms. Numerical simulation results are also given to support the theoretical predictions.


Introduction
Stochastic differential equations (SDEs) have been widely applied to describe and understand random phenomena in different areas, such as biology, chemical reaction engineering, physics, and finance. Stochastic differential equations can also be applied in 5G wireless networks to address the problem of joint transmission power [1,2], and novel frameworks were presented in [3,4]. However, in most cases, explicit solutions of stochastic differential equations are not easy to obtain. Therefore, it is necessary and important to develop numerical algorithms for stochastic differential equations. So far, there is a large literature on the numerical approximation of stochastic differential equations [5,6]. One kind of approximation, which aims to preserve the essential properties of the corresponding systems, is attracting more and more attention; examples include a conservative method for stochastic Hamiltonian systems with additive and multiplicative noise [7] and the stochastic Lie group integrators [8]. In this paper, we will focus on the stochastic Magnus expansions [9].
As is well known, the deterministic Magnus expansion was first investigated by Magnus [10] in the 1950s. This topic was further pursued in [11,12]. With the development of stochastic differential equations, the stochastic Magnus expansion has received more and more attention for approximating solutions of linear and nonlinear stochastic differential equations [13,14].
The linear stochastic differential equation is as follows:
dY(t) = A(t)Y(t)dt + B(t)Y(t) ∘ dW(t), Y(0) = Y_0, (1)
where A(t) and B(t) are n × n matrices and W(t) is the standard Wiener process. The solution Y(t) is expressed by Y(t) = exp(Ω(t))Y_0, where Ω(t) can be represented by an infinite series Ω(t) = ∑_{k=1}^∞ Ω_k(t), whose terms are linear combinations of multiple stochastic integrals.
For example, the first two terms are as follows:
Ω_1(t) = ∫_0^t A(s)ds + ∫_0^t B(s) ∘ dW(s), (2)
Ω_2(t) = (1/2) ∫_0^t [A(s)ds + B(s) ∘ dW(s), Ω_1(s)], (3)
where [X, Y] = XY − YX is the matrix commutator of X and Y.
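To make the expansion concrete, the following sketch (illustrative code, not taken from the paper) evaluates exp(Ω_1(t))Y_0 for constant coefficients A and B chosen to commute, in which case Ω_1 is already exact and all higher terms vanish, and compares it with a stepwise product of one-step exponentials along the same Brownian path:

```python
import numpy as np

def commutator(X, Y):
    """Matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

def expm_series(M, K=30):
    """Small dense matrix exponential via a truncated Taylor series."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, K + 1):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)

# Illustrative constant coefficients: B is a multiple of the identity,
# so [A, B] = 0 and the first Magnus term Omega_1 is already exact.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = 0.1 * np.eye(2)
Y0 = np.eye(2)

h, N = 1e-3, 1000
dW = rng.normal(0.0, np.sqrt(h), N)   # Brownian increments
t, Wt = h * N, dW.sum()

# Omega_1(t) = int_0^t A ds + int_0^t B o dW = A t + B W(t) for constant A, B
Omega1 = A * t + B * Wt
Y_magnus = expm_series(Omega1) @ Y0

# Stepwise product of one-step exponentials along the same Brownian path
Y_step = Y0
for dw in dW:
    Y_step = expm_series(A * h + B * dw) @ Y_step

# Since [A, B] = 0, both computations agree up to round-off.
assert np.allclose(Y_magnus, Y_step)
```

When A and B do not commute, the two computations differ, and the higher Magnus terms such as Ω_2 account for the discrepancy.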
In this paper, we design explicit stochastic Magnus expansions [9] for the nonlinear equation
dy = A(t, y)y dt + B(t, y)y ∘ dW, y(0) = y_0 ∈ G, (4)
where G denotes a matrix Lie group and A(t, y), B(t, y) take values in g, the corresponding Lie algebra of G. Equation (4) has many applications in the computation of highly oscillatory systems of stochastic differential equations, Hamiltonian dynamics, and financial engineering. It is necessary to construct efficient numerical integrators for this equation. The general procedure for devising Magnus expansions for the stochastic Equation (4) is obtained by applying Picard's iteration to the derived stochastic differential equation on the Lie algebra, which extends the deterministic procedure [11] to the stochastic case. G. Lord, J. A. Malham, and A. Wiese [14] proposed stochastic Magnus integrators using different techniques.
With the application of the stochastic Magnus methods, we can successfully recapture some of the main features of the stochastic differential Equation (4). When Equation (4) is used to describe an asset pricing model in finance, the positivity of the solution is a very important property to maintain in numerical simulation. This task can be accomplished by the nonlinear Magnus methods. We also notice that many works [15,16] have dealt with this issue using balancing and dominating methods. However, the Magnus methods, being based on Lie group methods, are essentially different from the aforementioned approaches. If Equation (4) has an almost surely exponentially stable solution, it is desirable to construct numerical methods that preserve this property, since methods which preserve exponential stability usually exhibit good long-time numerical behavior. We prove that the explicit Magnus methods succeed in reproducing the almost sure exponential stability of the stochastic differential Equation (4) under a one-sided Lipschitz condition. Numerical simulation results are also given to support the theoretical predictions. This paper is organized as follows. We introduce and discuss the nonlinear Magnus expansion in Section 2. The numerical truncated algorithms are discussed in Section 3, and several numerical applications and experiments strongly support the theoretical analysis. In Section 4, we present applications to highly oscillatory nonlinear stochastic differential equations. Section 5 consists of simulations for dynamic asset pricing models. In Section 6, we show that the explicit Magnus simulation can preserve almost sure stability, and numerical experiments are presented to support the analysis.
Notation. In this paper, (Ω, F, {F_t}_{t≥0}, P) is a complete probability space with the filtration {F_t}_{t≥0}. W(t) is the scalar Brownian motion defined on this probability space. ∘dW(t) denotes the Stratonovich integral and dW(t) denotes the Itô integral. [X, Y] = XY − YX is the matrix commutator of X and Y. A and B are n × n matrices, smooth in the domain, W(t) is the standard Wiener process, and E|y_0|² < ∞.

The Nonlinear Stochastic Magnus Expansion
In this section, we consider the stochastic differential Equation (4).
Here, A and B have uniformly bounded partial derivatives, and A(t, y)y and B(t, y)y satisfy global Lipschitz conditions. Therefore, Equation (4) has a unique strong solution [17].
The solution of Equation (4) is represented as
y(t) = exp(Ω(t))y_0, Ω(0) = 0, (5)
and the derivative of the matrix exponential satisfies
d exp(Ω(t)) = dexp_{Ω(t)}(dΩ(t)) exp(Ω(t)). (6)
Combining (5) with (4), we have
dexp_{Ω(t)}(dΩ(t)) exp(Ω(t))y_0 = A(t, y)exp(Ω(t))y_0 dt + B(t, y)exp(Ω(t))y_0 ∘ dW. (7)
Equation (7) holds since the integral is in the Stratonovich sense, so the classical chain rule applies. Inserting (6) into (7), we get
dexp_{Ω(t)}(dΩ(t)) = A(t, y)dt + B(t, y) ∘ dW. (8)
According to Equation (8), we obtain
dΩ(t) = dexp_{Ω(t)}^{−1}(A(t, y)dt + B(t, y) ∘ dW). (9)
It is not difficult to check that
dexp_Ω(C) = ∑_{k=0}^∞ (1/(k + 1)!) ad_Ω^k(C) (10)
and
ad_p^k(q) = [p, ad_p^{k−1}(q)], k ≥ 1, (11)
where ad_p^0(q) = q. Then it is true that
dexp_Ω^{−1}(C) = ∑_{k=0}^∞ (B_k/k!) ad_Ω^k(C), (12)
where B_k denotes the Bernoulli numbers. Equations (9) and (12) indicate that
dΩ(t) = ∑_{k=0}^∞ (B_k/k!) ad_{Ω(t)}^k (A(t, y)dt + B(t, y) ∘ dW), (13)
where y(t) = exp(Ω(t))y_0.
Finally, we get
dΩ(t) = ∑_{k=0}^∞ (B_k/k!) ad_{Ω(t)}^k (A(t, exp(Ω(t))y_0)dt + B(t, exp(Ω(t))y_0) ∘ dW(t)), Ω(0) = 0. (17)
Then, applying Picard's iteration to (17), we obtain
Ω^{(0)}(t) ≡ 0, Ω^{(m+1)}(t) = ∫_0^t ∑_{k=0}^∞ (B_k/k!) ad_{Ω^{(m)}(s)}^k (A(s, exp(Ω^{(m)}(s))y_0)ds + B(s, exp(Ω^{(m)}(s))y_0) ∘ dW(s)). (18)
In order to get explicit numerical methods which can be implemented on a computer, we need to truncate the infinite series in (18). The iterate Ω^{(m+1)}(t) only reproduces the expansion of the solution Ω(t) up to a certain order in the mean-square sense.
For example, if m = 0, we obtain
Ω^{(1)}(t) = ∫_0^t A(s, y_0)ds + ∫_0^t B(s, y_0) ∘ dW(s), (19)
since Ω^{(0)} ≡ 0 and only the k = 0 term of the series survives. Truncating the series in (18) after finitely many nested commutators, we devise the following general Magnus expansion:
Ω_m(t) = ∫_0^t ∑_{k=0}^{m−1} (B_k/k!) ad_{Ω_{m−1}(s)}^k (A(s, exp(Ω_{m−1}(s))y_0)ds + B(s, exp(Ω_{m−1}(s))y_0) ∘ dW(s)). (21)
Obviously, Equation (21) consists of a linear combination of multiple stochastic integrals of nested commutators. It is easy to prove that Ω_m(t) succeeds in reproducing the truncated sum of the Ω(t) series of the Magnus expansion. This scheme is called an explicit stochastic Magnus expansion for the nonlinear stochastic differential equation.
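As a sketch of how the truncated dexp⁻¹ series appearing in (18) and (21) can be evaluated in practice, the code below (an illustrative implementation, not taken from the paper) builds the Bernoulli numbers by the standard recurrence and applies the truncated series ∑ (B_k/k!) ad_Ω^k(C):

```python
import numpy as np
from math import comb, factorial
from fractions import Fraction

def bernoulli(n):
    """First-kind Bernoulli numbers B_0..B_n (convention B_1 = -1/2),
    via the recurrence sum_{k<=m} C(m+1, k) B_k = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return [float(b) for b in B]

def ad(P, Q):
    """Matrix commutator ad_P(Q) = [P, Q] = PQ - QP."""
    return P @ Q - Q @ P

def dexpinv(Omega, C, terms):
    """Truncated series sum_{k < terms} (B_k/k!) ad_Omega^k(C)."""
    Bk = bernoulli(terms - 1)
    out = np.zeros_like(C)
    adk = C.copy()
    for k in range(terms):
        out = out + (Bk[k] / factorial(k)) * adk
        adk = ad(Omega, adk)   # ad_Omega^{k+1}(C) for the next term
    return out
```

With terms = 1 this reduces to C itself (so the first Picard iterate is exactly (19)), and terms = 2 adds the correction −(1/2)[Ω, C] used by the order-1 method below.
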

Remark 1.
We let Ω(t) be the exact solution of the problem (17) and Ω m (t) be the approximate solution given by (21).
It is true that we can write the exact solution of Equation (17) as an infinite series Ω(t) = ∑_{k=1}^∞ Ω_k(t), whose terms are multiple stochastic integrals of nested commutators.
Comparing the Taylor expansion of the general Magnus expansion Ω_m(t) in (21) with the expansion of Ω(t) [12], the conclusion is proven.

Numerical Schemes
In this section, we present a new way of constructing efficient numerical methods based on the nonlinear stochastic Magnus expansion. It should be mentioned that highly efficient schemes always involve multiple stochastic integrals. In most cases, however, only the half-order approximation Ω_1(t) in (21) can be evaluated exactly. In order to obtain higher-order integrators, more complicated multiple stochastic integrals, which are hard to approximate, must be included.
We will investigate schemes of mean-square order 1/2 and 1 concretely, and choose quadrature rules with equispaced points on the interval [t_n, t_n + h].

Methods of Order 1/2
When m = 1, on the interval [t_n, t_n + h] the expansion (21) turns into
Ω_1(h) = ∫_{t_n}^{t_n+h} A(s, y_n)ds + ∫_{t_n}^{t_n+h} B(s, y_n) ∘ dW(s). (24)
Using the Taylor formula, we can get the expansion of the solution
y(t_n + h) = y(t_n) + A(t_n, y(t_n))y(t_n)h + B(t_n, y(t_n))y(t_n)∆W_n + R_n^1, (25)
and the approximate solution y_{n+1} = exp(Ω_1(h))y_n can be expanded in the following form
y_{n+1} = y_n + A(t_n, y_n)y_n h + B(t_n, y_n)y_n ∆W_n + R̄_n^1, (26)
where R_n^1 and R̄_n^1 collect the higher-order remainder terms. Evaluating the integrals in (24) at the left endpoint gives the one-step method
y_{n+1} = exp(A(t_n, y_n)h + B(t_n, y_n)∆W_n)y_n. (27)
It is easy to check that the remainders satisfy the conditions of the Milstein mean-square convergence theorem [18], hence the strong convergence order is 1/2.
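A minimal scalar sketch of the one-step method (27) follows (illustrative code; the linear test equation is chosen so that the scheme is exact and can be checked against the closed-form solution — for matrix-valued problems np.exp would be replaced by a matrix exponential):

```python
import numpy as np

def magnus_half(Afun, Bfun, y0, t0, T, dW):
    """Scalar Magnus method of order 1/2:
       y_{n+1} = exp(A(t_n, y_n) h + B(t_n, y_n) dW_n) y_n."""
    N = len(dW)
    h = (T - t0) / N
    y, t = y0, t0
    for n in range(N):
        y = np.exp(Afun(t, y) * h + Bfun(t, y) * dW[n]) * y
        t += h
    return y

rng = np.random.default_rng(1)
T, N = 1.0, 500
dW = rng.normal(0.0, np.sqrt(T / N), N)

# Linear Stratonovich test equation dy = a y dt + b y o dW with constant a, b:
# exact solution y(T) = y0 exp(a T + b W(T)), which the scalar scheme reproduces exactly.
a, b, y0 = -1.0, 0.5, 1.0
y_num = magnus_half(lambda t, y: a, lambda t, y: b, y0, 0.0, T, dW)
y_exact = y0 * np.exp(a * T + b * dW.sum())
assert np.isclose(y_num, y_exact)
```

For genuinely nonlinear coefficients the scheme is no longer exact, and the strong order 1/2 stated above applies.
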

Remark 2.
If Ω 1 (t) cannot be evaluated exactly, we need to approximate it with a quadrature rule of order 1/2.
For example, using the Euler method (see [19]), we can approximate the integrals in (24) by their left-endpoint values. Notice that other numerical methods can also be used to approximate (24); the resulting strong convergence order is still 1/2.

Methods of Order 1
When m = 2, the expansion (21) becomes
Ω_2(t_n + h) = ∫_{t_n}^{t_n+h} (A(s, ỹ(s))ds + B(s, ỹ(s)) ∘ dW(s)) − (1/2)∫_{t_n}^{t_n+h} [Ω_1(s), A(s, ỹ(s))ds + B(s, ỹ(s)) ∘ dW(s)], (28)
where ỹ(s) = exp(Ω_1(s))y_n, and the corresponding one-step method is
y_{n+1} = exp(Ω_2(t_n + h))y_n. (29)
The proof of the convergence order is the same as for the method of order 1/2.
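For constant matrix coefficients, one step of the m = 2 expansion can be sketched as follows (illustrative code, not the paper's implementation: with constant A and B, integration by parts reduces the double integral to the time integral J = ∫_0^h W(s)ds, which, given ∆W, is Gaussian with mean h∆W/2 and variance h³/12 and is sampled from that conditional law):

```python
import numpy as np

def commutator(X, Y):
    """Matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

def expm_series(M, K=30):
    """Small dense matrix exponential via a truncated Taylor series."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, K + 1):
        term = term @ M / k
        out = out + term
    return out

def magnus_order1_step(A, B, y, h, dWn, J):
    """One step of the m = 2 scheme for constant coefficients:
       Omega_2 = A h + B dW - (1/2)[B, A](2 J - h dW)."""
    Omega2 = A * h + B * dWn - 0.5 * commutator(B, A) * (2.0 * J - h * dWn)
    return expm_series(Omega2) @ y

rng = np.random.default_rng(2)
h = 0.01
dWn = rng.normal(0.0, np.sqrt(h))
# J = int_0^h W(s) ds sampled from its conditional Gaussian law given W(h) = dWn
J = 0.5 * h * dWn + rng.normal(0.0, np.sqrt(h**3 / 12.0))

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 0.5], [0.5, 0.0]])
y1 = magnus_order1_step(A, B, np.array([1.0, 0.0]), h, dWn, J)
```

When A and B commute, the commutator correction vanishes and the step reduces to the order-1/2 step, which gives a simple consistency check on the implementation.
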

Remark 3.
Note that if (28) cannot be computed exactly, all that is required is to replace (29) with a quadrature of order 1; using the stochastic Taylor expansion, we can approximate Ω_2(t) accordingly. As a matter of fact, there is no need to compute (28) or (29) exactly: if we approximate (28) or (29) with methods of order 1, the convergence order will also be 1.

Numerical Experiment for the Highly Oscillatory Nonlinear Stochastic Differential Equations
We consider the following stochastic system
dy = A(t, y)y dt + B(t, y)y ∘ dW, y(0) = y_0, (31)
where A and B are matrices and W is the Wiener process. It is supposed that the solution of (31) oscillates rapidly. If A and B commute, in order to compute the value of y(t), a new variable Z(x) is considered,
y(t_n + x) = F(x)Z(x), (32)
where F(x) = exp[xA(t_n, y_n) + W(x)B(t_n, y_n)], and Z(x) satisfies a stochastic differential equation of the form (34) with B(0, Z(0)) = 0. We can consider Z(x) as a correction of the solution provided by Ω_1. For this reason, if Equation (34) is solved by the method of the nonlinear Magnus expansion, the corresponding errors will be much smaller than those of the previous algorithms [13,14].
When A and B do not commute with each other, we have [20]
y(t_n + x) = exp(xA(t_n, y_n)) exp(W(x)B(t_n, y_n))Z_1(x), (35)
where Z_1(x) satisfies a corrected stochastic differential Equation (37), with F_1(x) = exp(xA(t_n, y_n)) exp(W(x)B(t_n, y_n)).
To illustrate the main feature of the nonlinear modified Magnus expansion, we consider the second-order system
ÿ + a(t, y, ẏ)y + b(t, y, ẏ)yẆ = 0, (44)
where W is the standard Wiener process.
Applying the scheme (27) to Equation (44), we obtain the algorithm (45). Then, inserting (45) into (37), we get the one-step approximate algorithm.
In Figure 1, we can see that the two points (±1, 0) initially attract the noisy trajectories; after some time, the trajectories switch to the other point, which coincides with the tunneling phenomenon described in [21].

Application to the Stochastic Differential Equations with Boundary Conditions
In this section, we will deal with the Itô-type stochastic differential Equation (46). We assume Equation (46) is well defined with a certain boundary condition; that is, x(t) lies in the domain D.
There are many works on the efficient approximation of the solution of (46) [15,16,22]. Most of them use a balancing or dominating technique to force the numerical solution to stay in the domain. In this paper, the proposed explicit nonlinear Magnus methods, based on the stochastic Lie group structure, are efficient at approximating the solutions of stochastic differential Equation (46) with unattainable boundaries. Our methods are essentially different from the methods proposed in [23].
In the following, we will approximate two types of finance models using the nonlinear stochastic Magnus methods.
Firstly, we study the financial model defined as
dx(t) = a(b − x(t))dt + σx(t)^r dW(t), x(0) = x_0 > 0, (47)
with a, b, σ > 0. This model has been widely used in financial engineering. For example, when 2ab > σ² and r = 1/2, the solution is strictly positive. As is well known, the traditional Euler method fails to preserve the positivity of the solution. We also notice that several methods, such as the balanced implicit method (BIM) [22], have been proposed, and the explicit or implicit Milstein method needs stepsize control [24]. In the following, we show that the nonlinear Magnus methods preserve positivity independently of the stepsize.
For Equation (47), its equivalent Stratonovich form is
dx(t) = (a(b − x(t)) − (σ²r/2)x(t)^{2r−1})dt + σx(t)^r ∘ dW(t). (48)
To simulate the above equation, we split the system (48) into the two subsystems
dx(t) = (−a − (σ²r/2)x(t)^{2r−2})x(t)dt + σx(t)^{r−1}x(t) ∘ dW(t) (49)
and
dx(t) = ab dt. (50)
The system (49) is simulated with the nonlinear stochastic Magnus method, and (50) is approximated with the Euler method.
Combining the nonlinear Magnus method (27) and the Euler method, we obtain the one-step method on the interval [t_n, t_{n+1}]:
x_{n+1} = exp((−a − (σ²r/2)x_n^{2r−2})h + σx_n^{r−1}∆W_n)x_n + abh. (51)
In the numerical experiments, we use the scheme (51) to simulate the financial models. When r = 0.5, this is the famous interest rate model of Cox, Ingersoll, and Ross (for more detail, see [16]). Table 1 illustrates that both the Euler and the Milstein methods have a certain percentage of negative paths; when the stepsize decreases, the number of negative paths also decreases. In Figures 2 and 3, we present the numerical simulations for the two finance equations. We can see that both the Euler and the Milstein methods fail to preserve positivity, whereas the NLM is as good as the BMM and the BIM at preserving the positivity of the solution. In Figure 4, we see that the explicit NLM is of order 1/2 in the mean-square sense, which verifies the analysis in Section 3. In Figure 5, we can see clearly that the simulation is of order 1 in the mean-square sense for the linear model, which is the same order as for the BMM.
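The positivity mechanism of the splitting scheme can be sketched as follows (illustrative code and parameters, not the paper's experiment: the Magnus factor is a positive exponential and the Euler contribution abh is positive, so every iterate stays strictly positive for any stepsize, which the loop checks):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative CIR-type parameters with 2 a b > sigma^2 and r = 1/2
a, b, sigma, r = 2.0, 1.0, 0.5, 0.5
x, T, N = 1.0, 1.0, 200
h = T / N

for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(h))
    # Magnus step for the multiplicative subsystem (49) ...
    drift = (-a - 0.5 * sigma**2 * r * x**(2.0 * r - 2.0)) * h
    x = np.exp(drift + sigma * x**(r - 1.0) * dW) * x
    # ... followed by an Euler step for the constant inflow (50)
    x = x + a * b * h
    assert x > 0.0   # positivity holds for every stepsize

final_x = x
```

In contrast, a single Euler step x + a(b − x)h + σx^r ∆W can become negative whenever the Gaussian increment ∆W is sufficiently negative.
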

Application to the Nonlinear Itô Scalar Stochastic Differential Equations
In this part, we will focus on the nonlinear Itô scalar stochastic differential equation
dy(t) = a(y(t))y(t)dt + σy(t)dW(t), y(0) = y_0, (52)
where σ is independent of y. Equation (52) is assumed to satisfy conditions (53) and (54). For convenience, we transform it into its Stratonovich form dy = (a(y) − σ²/2)y dt + σy ∘ dW(t).
By the use of condition (54), we obtain
log|y(t)| = log|y_0| + ∫_0^t (a(y(s)) − σ²/2)ds + σW(t). (55)
Since W(t)/t → 0 almost surely, letting t → ∞, (56) is proven. It means that the solution of the stochastic differential equation is almost surely exponentially stable.
In the following, we will approximate the model using the nonlinear stochastic Magnus methods.

Cubic Stochastic Differential Equations
For the cubic stochastic differential equation of the form
dx(t) = (α − x²(t))x(t)dt + σx(t)dW(t), x(0) = x_0 ≠ 0, (65)
with α < σ²/2, we can get [25]
lim sup_{t→∞} log|x(t)|/t ≤ α − σ²/2 < 0 a.s. (66)
Equation (66) implies that the solution is almost surely asymptotically stable. It is well known that the EM method fails to preserve this property (see [25]).
Applying (27) to Equation (65), we get the one-step nonlinear Magnus scheme
x_{n+1} = exp((α − x_n² − σ²/2)∆t + σ∆W_n)x_n. (67)
Theorem 3. Let ∆t be any stepsize; the scheme (67) satisfies
lim sup_{n→∞} log|x_n|/(n∆t) ≤ α − σ²/2 < 0 a.s. (68)
Proof. According to (67), we get
log|x_{n+1}| = log|x_n| + (α − x_n² − σ²/2)∆t + σ∆W_n ≤ log|x_n| + (α − σ²/2)∆t + σ∆W_n. (69)
Summing (69) over the first n steps and using the strong law of large numbers for the increments ∆W_i, Equation (68) is obtained. Thus, the almost sure exponential stability analysis is independent of the time-step size.
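The stability bound (68) can be checked numerically with the following sketch of the scheme (67) (illustrative parameters α = 1, σ = 2, so α − σ²/2 = −1, and a deliberately coarse stepsize):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, sigma = 1.0, 2.0      # alpha - sigma^2/2 = -1 < 0
dt, N = 0.1, 1000            # coarse stepsize on purpose
x, logs = 1.0, []

for n in range(N):
    dWn = rng.normal(0.0, np.sqrt(dt))
    # scheme (67): x_{n+1} = exp((alpha - x_n^2 - sigma^2/2) dt + sigma dWn) x_n
    x = np.exp((alpha - x * x - sigma**2 / 2.0) * dt + sigma * dWn) * x
    logs.append(np.log(abs(x)) / ((n + 1) * dt))

# The Lyapunov-exponent estimate log|x_n|/(n dt) is negative, as (68) predicts,
# and the exponential factor never changes the sign of x.
assert logs[-1] < 0.0
assert x > 0.0
```

Because each update multiplies x_n by a strictly positive exponential, the numerical trajectory decays to zero without ever crossing it, regardless of the stepsize.
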
In Figure 6, a single trajectory is simulated using the Euler method and the proposed NLM, respectively. One can see that the Euler method fails to preserve the asymptotic stability, whereas the NLM preserves this property exactly. In Figures 7 and 8, we simulate the function log|x(t)|/t using the NLM. As proved in Theorem 3, the property (68) is preserved exactly, independently of the stepsize.

Conclusions
In this paper, we introduce the nonlinear Magnus expansion and present a new way of constructing numerical Magnus methods. Based on the new Magnus method, we discuss the numerical truncated algorithms and prove that the errors of the corresponding approximations are smaller than those of previous algorithms. Several numerical applications and experiments strongly support the theoretical analysis. As for further applications of the new Magnus method, we believe that it can be applied to semi-discretized stochastic partial differential equations such as the stochastic Korteweg-de Vries equation.