Assessment of Stochastic Numerical Schemes for Stochastic Differential Equations with "White Noise" Using Itô's Integral

Abstract: Stochastic Differential Equations (SDEs) model physical phenomena dominated by stochastic processes. Like ordinary or partial differential equations, they describe the dynamic evolution of a physical phenomenon, but with an additional term, called "noise", representing a perturbing factor that cannot be captured by a classical mathematical model. In this paper, we study weak and strong convergence for six numerical schemes applied to an SDE with multiplicative noise, one with additive noise, and a system of SDEs. The Efficient Runge–Kutta (ERK) technique comes out as the top performer, displaying the best convergence features in all circumstances, including the difficult setting of multiplicative noise. This result highlights the importance of researching advanced numerical techniques built especially for stochastic systems; we also provide the MATLAB code of the ERK method, which we consider to be of good help to the reader.


Introduction
Although a field that emerged only in the mid-20th century [1,2], stochastic calculus plays a crucial role in various domains, such as finance, meteorology, epidemiology, and engineering. Without being exhaustive, we provide examples of several authors who emphasize the role of stochastic differential equations in real-life applications. For instance, Sagirow [3] dedicates an entire course to stochastic methods in the dynamics and stability of satellite orbits. Kloeden and Platen [4] extensively present multiple areas where stochastic calculus is used, such as population dynamics, genetics, experimental psychology, seismology, and mechanics. Higham [5,6] illustrates the applicability of stochastic calculus to the probabilistic nature of chemical reactions.
Stochastic differential equations are differential equations whose solutions are influenced by boundary and initial conditions but are not solely determined by them, unlike Ordinary Differential Equations (ODEs), for which knowing the boundary and initial conditions is enough to solve the equations. The difference between the two types of differential equations is the stochastic (or probabilistic) term, which gives the integrand function a degree of uncertainty that can be associated with the uncertainty of any phenomenon occurring in the real world [1,2], such as the influence of random perturbations affecting the deterministic mathematical model of the phenomenon. Such cases, involving linear and harmonic oscillators [7] and even nonlinear oscillators [8,9], have been thoroughly analyzed and remain a subject of interest in present studies.
We consider that this paper is dedicated to students and researchers who want a quick introduction to the field of stochastic differential equations and who want to solve them numerically, but do not immediately know which numerical scheme to choose. Through this work and the provided examples, we hope to guide them as quickly as possible towards a sound choice of numerical method depending on the type of problem to be solved.

Brownian Motion, White Noise
A stochastic process is a family X(t), t ≥ 0, of random variables depending on the continuous real time parameter t [3,10]. Thus, we can see a stochastic process as a trajectory of the system (or the sequence of values of the random variable obtained in temporal order) depending on the outcome of each probabilistic experiment.
A stochastic process with independent increments is one in which, for arbitrary time moments t_0 < t_1 < t_2 < ... < t_k, the differences X(t_1) − X(t_0), X(t_2) − X(t_1), ..., X(t_k) − X(t_{k−1}) are independent, and the process at any moment can be written as [6-10]:

X(t_k) = X(t_0) + Σ_{i=1}^{k} [X(t_i) − X(t_{i−1})].

A Wiener process (or Brownian motion), W(t), t ≥ 0, is a stochastic process with W(0) = 0 with probability one; the increment W(t_2) − W(t_1) has a normal (or Gaussian) distribution such that the expectation E and the variance σ² are

E[W(t_2) − W(t_1)] = 0, σ² = t_2 − t_1,

and the increments W(t_2) − W(t_1) are independent on [0, T] for any 0 ≤ t_1 < t_2 ≤ T. This process is nowhere differentiable on its domain of definition. The term "noise", commonly used in SDEs, generically describes a random variable with zero mean [11,12]. The concept of white noise can be likened to the effect of unwanted, unpredictable vibrations or interference contaminating a reference signal due to atmospheric effects or imperfections in transmitting and/or receiving equipment. White noise, denoted by ξ(t), is defined as the (formal) derivative of the Wiener process,

ξ(t) = dW(t)/dt,

and satisfies E[ξ(t)] = 0 and E[ξ(t_1)ξ(t_2)] = δ(t_1 − t_2), where δ is the Dirac delta function. Although we previously mentioned that the Wiener process is not differentiable, in the limit of infinitely small time intervals its derivative can be associated with the Dirac function. White noise can thus be thought of as a sequence of infinitely large impulses acting over infinitely small times [3].
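As a minimal illustration of these definitions, a discretized Brownian path can be generated by drawing independent Gaussian increments of variance ∆t. The following Python sketch uses illustrative choices of step count and random seed (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)       # fixed seed, for reproducibility

T, N = 1.0, 2**8                      # final time and number of steps (illustrative)
dt = T / N

# Increments W(t_{i+1}) - W(t_i) ~ N(0, dt): sqrt(dt) times standard normals
dW = np.sqrt(dt) * rng.standard_normal(N)

# Cumulative sums give the path; W(0) = 0 with probability one
W = np.concatenate(([0.0], np.cumsum(dW)))

print(W[0])          # 0.0, by construction
print(np.var(dW))    # approximately dt, up to sampling error
```

The sample variance of the increments approaches ∆t as N grows, matching the defining property σ² = t_2 − t_1 on each subinterval.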

A Brief Presentation of Stochastic Differential Equations
Given two differentiable functions f, g : R × [0, T] → R, we consider the stochastic initial value problem (SIVP) for the scalar Itô stochastic differential equation (SDE) given by

dX(t)/dt = f(X(t), t) + g(X(t), t)ξ(t), X(0) = X_0, t ∈ [0, T].

The form of a stochastic differential equation includes both the deterministic term f(X(t), t) (i.e., the "drift coefficient") and the probabilistic term g(X(t), t)ξ(t) (i.e., the "diffusion coefficient" or "stochastic term"). Replacing ξ(t) = dW(t)/dt, the equivalent differential form is obtained:

dX(t) = f(X(t), t) dt + g(X(t), t) dW(t), X(0) = X_0,

where W(t) is a Wiener process, f and g are known functions, at least of class C²(R), and X(t) is the random variable to be determined. The integral solution of this stochastic differential equation is:

X(t) = X(0) + ∫_0^t f(X(s), s) ds + ∫_0^t g(X(s), s) dW(s).

This solution suggests that knowledge of the process state at each intermediate time moment is necessary, due to the integral with integrand dW. Previously, we mentioned that the Wiener process has independent increments, so the Riemannian sense of integration is not applicable here. For this reason, a different concept of integration with respect to the Wiener process has been defined, known as the Itô integral [13], in which, discretizing the temporal domain of computation into as many nodes as possible, the integrand is evaluated at the left endpoint of each subinterval:

∫ g(X(t), t) dW(t) = ms-lim Σ_i g(X(t_i), t_i) [W(t_{i+1}) − W(t_i)].

This represents Itô's interpretation of the integral of a Wiener stochastic process [3,14-16]. However, there is another interpretation of the integral of a stochastic process, given by the Russian mathematician Ruslan Stratonovich [3], which differs from the first one in that the integrand is evaluated at the midpoint of each subinterval (marked, to avoid confusion with the previous one, by the symbol ∘):

∫ g(X(t), t) ∘ dW(t) = ms-lim Σ_i g(X((t_i + t_{i+1})/2), ·) [W(t_{i+1}) − W(t_i)].

It cannot be said that one of these definitions of the integral is correct and the other one is wrong; everything depends on how one wants to solve the equation. In the numerical resolution of the stochastic differential equations analyzed here, the Itô integral will be used. To illustrate the differences between the two types of stochastic calculus, the integral ∫ W(s) dW(s) will be computed in both senses:

•
In the Itô sense:

∫_0^t W(s) dW(s) = (1/2) W(t)² − (1/2) t.

•
In the Stratonovich sense:

∫_0^t W(s) ∘ dW(s) = (1/2) W(t)².

Thus, the random variable X(t) satisfies the stochastic differential equation in the Itô sense:

dX(t) = f(X(t), t) dt + g(X(t), t) dW(t).

If X(t) satisfies this stochastic differential equation, then for it to be valid in the Stratonovich sense the differential equation must be written:

dX(t) = F(X(t), t) dt + g(X(t), t) ∘ dW(t),

where we denoted F(X(t), t) = f(X(t), t) − (1/2) g′(X(t), t) g(X(t), t), with g′ the derivative of g with respect to X.

Finite Difference Schemes in Solving SDEs
Let us now consider a discretized Brownian motion by setting the time step ∆t = T/N, where N is the number of intervals. The discretization of the temporal domain into a grid t_i = i∆t, i = 0, N, with a constant step is the first stage in applying a numerical scheme. In this paper, six schemes will be studied:

•
Euler-Maruyama Scheme [2,10], of strong order O(∆t^{1/2}), is given by:

X_{i+1} = X_i + f(X_i, t_i)∆t + g(X_i, t_i)∆W_i, i = 0, N − 1,

where the index represents X_i = X(t_i), the increment of the Wiener process is given by ∆W_i = W(t_{i+1}) − W(t_i) = √∆t u_i, and the terms (u_i), i = 1, N, which replace the white noise, are a set of independent Gaussian random variables with zero mean and variance 1.
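A minimal Python sketch of this scheme, applied to the classical linear test equation dX = λX dt + µX dW. The coefficients λ = 2, µ = 1 and the seed are our illustrative choices, not taken from the paper; this equation has the exact solution X_0 exp((λ − µ²/2)t + µW(t)), which allows a direct error check:

```python
import numpy as np

def euler_maruyama(f, g, X0, T, N, dW):
    """One path of X_{i+1} = X_i + f(X_i, t_i)*dt + g(X_i, t_i)*dW_i."""
    dt = T / N
    X = np.empty(N + 1)
    X[0] = X0
    for i in range(N):
        t = i * dt
        X[i + 1] = X[i] + f(X[i], t) * dt + g(X[i], t) * dW[i]
    return X

# Illustrative linear SDE dX = lam*X dt + mu*X dW (hypothetical parameters)
rng = np.random.default_rng(1)
lam, mu, X0, T, N = 2.0, 1.0, 1.0, 1.0, 2**10
dt = T / N
dW = np.sqrt(dt) * rng.standard_normal(N)   # Wiener increments sqrt(dt)*u_i
W = np.cumsum(dW)

X = euler_maruyama(lambda x, t: lam * x, lambda x, t: mu * x, X0, T, N, dW)
X_exact = X0 * np.exp((lam - 0.5 * mu**2) * T + mu * W[-1])
print(abs(X[-1] - X_exact))   # small on this fine grid
```

Note that the same increments dW drive both the numerical and the exact solution, which is what makes a pathwise (strong) error comparison meaningful.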

Convergence Analysis
In this section, we recall from stochastic theory some relevant concepts about the strong and weak convergence of sequences of random variables. Convergence of a numerical method is measured by comparing the exact solution X(t_i) and the estimation X_i. Since X(t_i) and X_i are random variables, their difference is measured using expected values [18]:

• A method is said to have a strong order of convergence equal to p if there exists a positive constant A such that

E|X_N − X(T)| ≤ A ∆t^p.

• A method is said to have a weak order of convergence equal to q if there exists a positive constant C such that, for every smooth test function φ,

|E[φ(X_N)] − E[φ(X(T))]| ≤ C ∆t^q.

In this paper, we focus on the case where the test function φ is the identity function, φ(X) = X. Strong convergence quantifies "the expected value of the error", whereas weak convergence quantifies "the error of the expected values". The strong order always satisfies p ≤ q. The orders of convergence of the presented schemes will be studied by varying the time step [6].
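The strong-order estimate described above can be sketched numerically: average the endpoint errors of (here) the Euler-Maruyama scheme over many paths for several step sizes, then fit the slope of the log-log error curve. The test equation and all parameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Empirical strong order for Euler-Maruyama on dX = lam*X dt + mu*X dW
rng = np.random.default_rng(7)
lam, mu, X0, T = 2.0, 1.0, 1.0, 1.0
M, N_fine = 500, 2**9
dt_fine = T / N_fine

# One set of fine Brownian increments per path; coarser grids reuse them
dW = np.sqrt(dt_fine) * rng.standard_normal((M, N_fine))
W_T = dW.sum(axis=1)
X_exact = X0 * np.exp((lam - 0.5 * mu**2) * T + mu * W_T)

errs, dts = [], []
for s in range(5):                     # step sizes Dt = 2^s * dt_fine
    R = 2**s
    Dt = R * dt_fine
    # Coarse increments: sum of the R fine increments they cover
    dW_coarse = dW.reshape(M, N_fine // R, R).sum(axis=2)
    X = np.full(M, X0)
    for i in range(N_fine // R):
        X = X + lam * X * Dt + mu * X * dW_coarse[:, i]
    errs.append(np.mean(np.abs(X - X_exact)))   # E|X_N - X(T)|, estimated
    dts.append(Dt)

# Slope of log(error) vs log(Dt) estimates the strong order p
slope = np.polyfit(np.log(dts), np.log(errs), 1)[0]
print(round(slope, 2))   # close to the theoretical 1/2 for Euler-Maruyama
```

This is the same procedure used for Figure 4: on a logarithmic scale, the averaged errors line up with slope p.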

Numerical Application
In this section, we provide a numerical study of the properties of convergence of the presented methods for two stochastic differential equations, one with additive "white" noise and the other with multiplicative "white" noise.
Benchmark Test 1: Consider a stochastic differential equation with multiplicative white noise, with the initial condition X(0) = X_0, whose exact solution is known in closed form. The numerical strategy is the following. The time domain is initially discretized with dt = T/2^13, and then we consider seven step sizes, multiples of 2, defined by N(s) = 2^{13−s}, s = 0, 6; the corresponding time steps are Dt = 1/N(s) = 2^s dt. To understand how the white noise is added at each time step in the obtained results, see the example in Figure 1.
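The construction in Figure 1, reusing one fine Brownian path at every resolution by summing its increments, can be sketched as follows (the seed is an arbitrary choice):

```python
import numpy as np

# The same Brownian path is reused at every resolution: the increment over a
# coarse step of size 2^s * dt is the SUM of the 2^s fine increments it
# covers, so every step size sees the same underlying noise.
rng = np.random.default_rng(3)
T, N = 1.0, 2**13
dt = T / N
dW_fine = np.sqrt(dt) * rng.standard_normal(N)

def coarsen(dW, s):
    """Increments for step size 2^s * dt, obtained by block summation."""
    return dW.reshape(-1, 2**s).sum(axis=1)

for s in range(3):
    dW_s = coarsen(dW_fine, s)
    # The endpoint value W(T) is identical at every resolution
    print(len(dW_s), dW_s.sum())
```

Because the coarse increments are exact sums of the fine ones, numerical solutions computed at different step sizes remain comparable along the same realization of the noise.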

We compute M = 5000 different trajectories with white noise (Brownian paths) over the interval [0, 1], following a Gaussian distribution. For each path, we generated seven numerical solutions, corresponding to each step size. To compare the numerical methods, we arbitrarily choose one path from those M paths. The results are given in Figure 2, in full and detailed representations. We notice that the classic Runge-Kutta method is practically the same as the Milshtein method, and it is evident that the Euler-Maruyama method has the worst accuracy compared to the rest of the schemes. In order to observe the dependency between the size of the time step and the accuracy of the results, the first four, smaller time steps have been considered for each scheme, as shown in Figure 3. The Euler-Maruyama method performs well for smaller time steps, but increasing the size of the time step drastically reduces the accuracy of the results. In Figure 3, the differences between the graphs over the same range on the horizontal axis are almost imperceptible, appearing only in the last decimal. However, out of all the schemes, the ERK method appears to give the best results (i.e., the exact solution is the closest to the magenta line in Figure 2c), not being dramatically affected by the change in time step size.
Regarding the order of convergence, we have studied the two orders of convergence for each method. According to [18], strong convergence was studied by computing the expectation of the absolute error at the final time T = N∆t,

E|X_N − X(T)| ≈ (1/M) Σ_{j=1}^{M} |X_N^{(j)} − X^{(j)}(T)|,

where we expect to see a line of slope p on a logarithmic scale. The absolute errors at the final time moment are taken, their averages are calculated for each of the considered time steps, and they are represented graphically in Figure 4, where the slope of the resulting line corresponds to the order of convergence, p. The numerical results confirm the orders of strong convergence of O(∆t^{1/2}) for Euler-Maruyama and of O(∆t) for the Milshtein, Stochastic Heun, FRKI, and IRK schemes. The best order of strong convergence, O(∆t^{3/2}), is obtained by the Efficient Runge-Kutta method. For weak convergence, we proceed with a similar procedure, with the difference that at the final time we compare the mean of the last values over all trajectories, computed with a given method, against the mean of the exact values,

|E[X_N] − E[X(T)]|.

The results are presented in Figure 5: with the exception of the ERK method, which is once again of order O(∆t^{3/2}), and the Heun method, for which the order of accuracy is difficult to determine, all the other schemes have a weak convergence of order 1.

Benchmark Test 2: Consider a stochastic differential equation with additive white noise, discretized in the same way as in the previous example, with the initial condition X_0 = 0.5 and a known analytical solution. From Figure 6, it can be observed that the values of the Milshtein method overlap with those of the Euler-Maruyama method and with those of the First Order Runge-Kutta method. This is due to the term dg/dX, which cancels out in the case of additive noise, so the three schemes become identical. Figure 7 shows that all schemes have very similar values, which is caused by the way the benchmark test is chosen. An interesting conclusion is that choosing a scheme with greater computational effort is not justified here, since the additive noise does not allow for significant differences between the schemes. As anticipated, Figure 8 shows the strong and weak convergence for all of the schemes, and the conclusion is that additive noise results in the same convergence order, O(∆t), whether strong or weak. The ERK and IRK methods remain the most precise and the nearest to the exact solution among all the numerical approaches.

Benchmark Test 3: Solving a system of SDEs is the subject of the final test. Consider a mechanical oscillator having a negligible damping force and undamped oscillations, maintained by a harmonic force of constant amplitude and an additive white noise ξ(t).

Figure 1. Contribution of the white noise. At time step 1/2^12, the white noise is considered for each step. At time step 1/2^11, the white noise is the sum of each two previous steps. At time step 1/2^10, the white noise is the sum of each four initial steps, and the process goes on.

Figure 2. (a) Comparison of all the numerical schemes for benchmark test 1; (b,c) details of the comparisons.


Figure 4. Study of strong convergence for all methods, graph on a logarithmic scale, multiplicative noise.


Figure 5. Study of weak convergence for all methods, graph on a logarithmic scale, multiplicative noise.

Figure 6. (a) Comparison of the numerical solution for all the numerical schemes used in benchmark test 2; (b,c) details of the comparisons.
