Monte Carlo Algorithms for the Parabolic Cauchy Problem

New Monte Carlo algorithms for solving the Cauchy problem for a second-order parabolic equation with smooth coefficients are considered. Unbiased estimators for the solutions of this problem are constructed.


Introduction
Consider the parabolic operator L(x, t, ∂/∂x, ∂/∂t). Let all coefficients of the operator L be defined in the domain D^(T)_{n+1} = R^n × (0, T). Denote by A(x, t) the coefficient matrix of the highest derivatives of the operator L, and suppose that A(x, t) is a symmetric matrix. Suppose also that all eigenvalues of the matrix A(x, t) belong to a fixed interval [ν, µ], where ν > 0.
Consider the Cauchy problem in the domain D^(T)_{n+1}:

L(x, t, ∂/∂x, ∂/∂t) u(x, t) = f(x, t),  u|_{t=0} = ϕ(x). (2)

A random variable ξ(x, t) is called an unbiased estimator for a function u(x, t) if its mathematical expectation Eξ(x, t) is equal to u(x, t). Every unbiased estimator yields a stochastic numerical method for evaluating the function u(x, t). We now briefly discuss some known stochastic methods for solving the Cauchy problem.
Let 0 < α < 1 and let the coefficients of the parabolic operator be elements of the Hölder class with exponent α in D^(T)_{n+1}; then Equation (2) has a fundamental solution Z(x, y, t, τ) [1]. Let the function f(x, t) satisfy the Hölder condition with respect to all of its arguments, and let the function ϕ(x) be continuous. Let, in addition, f(x, t) and ϕ(x) grow no faster than e^{a|x|²} as |x| → ∞. Then the solution of the Cauchy problem can be written in the following form:

u(x, t) = ∫_0^t dτ ∫_{R^n} Z(x, y, t, τ) f(y, τ) dy + ∫_{R^n} Z(x, y, t, 0) ϕ(y) dy. (3)

If a_0(x, t) ≡ 0, then the fundamental solution Z(x, y, t, τ) is a probability density (as a function of y). So, if the fundamental solution is known, one can construct the corresponding unbiased estimator.
In particular, if the coefficients of the equation are constant, it is enough to generate a normally distributed random vector in R^n for the evaluation of u(x, t). In the general case, Z(x, y, t, τ) is a transition density of a stochastic process X_t which starts from the point x at time τ = 0. Hence, the random variable η = t f(X_{tθ}, t(1 − θ)) + ϕ(X_t) is an unbiased estimator for u(x, t), where the variable θ is uniformly distributed on [0, 1]. The first factor ζ in formula (5) was constructed by W. Wagner in his papers [2], where it was shown that the fundamental solution is a functional of the solution of some Volterra integral equation.
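As an illustration of the constant-coefficient case, consider the one-dimensional heat equation ∂u/∂t = ∂²u/∂x² with f ≡ 0 and ϕ(x) = x² (a toy example chosen here for checking, not taken from the text above). Its fundamental solution is the N(x, 2t) density, and the exact solution is u(x, t) = x² + 2t, so one normal sample per trajectory suffices:

```python
import numpy as np

def heat_mc(x, t, phi, n_samples=200_000, seed=None):
    """Monte Carlo estimate of u(x, t) = E phi(X_t) for u_t = u_xx, u(., 0) = phi.

    X_t is Gaussian with mean x and variance 2t (the fundamental solution of
    the 1-D heat equation), so each sample is a single normal draw.
    """
    rng = np.random.default_rng(seed)
    x_t = x + np.sqrt(2.0 * t) * rng.standard_normal(n_samples)
    samples = phi(x_t)
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

estimate, stderr = heat_mc(x=1.0, t=0.5, phi=lambda y: y**2, seed=0)
exact = 1.0**2 + 2 * 0.5  # u(x, t) = x^2 + 2t for phi(x) = x^2
```

The sample mean is an unbiased estimator of u(x, t), and the reported standard error decreases as n_samples^{-1/2}.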
The von Neumann-Ulam scheme [3] was applied for the estimation of the fundamental solution. Monte Carlo algorithms for the evaluation of some other functionals can be found in the works [4-6].
In [7], the von Neumann-Ulam scheme was used to construct another class of estimators for u(x, t) without using a grid. A conjugate (dual) scheme for constructing unbiased estimators for functionals of the solutions of an integral equation equivalent to the Cauchy problem was considered in [8]. This scheme simplifies the modeling procedure, because the boundaries of the spectrum of the matrix A(x, t) are not required to be known.
Finally, if the operator L has differentiable coefficients, then we can obtain an integral equation for u(x, t) by using the Green formula and solve this equation via the Monte Carlo method. Such algorithms were considered in [9,10] for equations whose principal part is the Laplace operator. We obtain a Volterra equation for the Cauchy problem solution u(x, t) in the general case. In this paper, we investigate the von Neumann-Ulam scheme for the regular and conjugate cases.
It is necessary to note that the Multilevel Monte Carlo method [11,12] is often used for the evaluation of the functional Eϕ(X_t), where the process X_t is a solution of the respective stochastic differential equation. This approach is not covered in this paper. This paper also does not contain any results of numerical experiments; numerical experiments and the efficiency of various stochastic algorithms for solving the Cauchy problem will be the subject of a separate paper.
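Although multilevel methods are outside the scope of this paper, the underlying single-level estimator of Eϕ(X_t) is easy to sketch. The following is only an illustration (not one of the algorithms of this paper), assuming a hypothetical scalar Ornstein-Uhlenbeck equation dX = −X dt + dW, for which EX_t = x_0 e^{−t} is known in closed form:

```python
import numpy as np

def euler_expectation(x0, t, drift, sigma, phi, n_steps=100, n_paths=200_000, seed=None):
    """Estimate E phi(X_t) for dX = drift(X) dt + sigma(X) dW via the Euler scheme.

    Vectorized over n_paths independent trajectories; the result carries both a
    statistical error O(n_paths^{-1/2}) and a discretization bias O(1/n_steps).
    """
    rng = np.random.default_rng(seed)
    h = t / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = np.sqrt(h) * rng.standard_normal(n_paths)
        x = x + drift(x) * h + sigma(x) * dw
    return phi(x).mean()

# Ornstein-Uhlenbeck test case: dX = -X dt + dW, so E X_t = x0 * exp(-t).
est = euler_expectation(1.0, 1.0, lambda x: -x, lambda x: np.ones_like(x), lambda x: x, seed=0)
```

The discretization bias is what makes the plain Euler estimator biased; the multilevel idea cited above is one way to control it.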

Integral Representation
Let all coefficients of the operator L be elements of the Hölder class, and let there exist continuous and bounded derivatives of the coefficients. We also suppose that the Cauchy problem solution is continuous and bounded, and we define ‖u‖ by the equality ‖u‖ = sup_{(y,τ)} |u(y, τ)|.

Take a point (x, t). Let A^(i,j)(x, t) be the elements of the inverse matrix A^{−1}(x, t). Let us define a function σ(y, x, t) by the equality

σ(y, x, t) = ∑_{i,j=1}^n A^(i,j)(x, t)(y_i − x_i)(y_j − x_j).

Define the function Z_0 for t > τ by the equality

Z_0(x, y, t, τ) = (4π(t − τ))^{−n/2} (det A(x, t))^{−1/2} exp(−σ(y, x, t)/(4(t − τ))).

For t < τ we set Z_0(x, y, t, τ) = 0. We denote Z_0(x, y, t, τ) by v_0(y, τ) if the point (x, t) is fixed.
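As a function of y, Z_0 is the density of a normal vector with mean x and covariance matrix 2(t − τ)A(x, t), assuming the standard Gaussian parametrix form Z_0 = (4π(t − τ))^{−n/2}(det A(x, t))^{−1/2} exp(−σ/(4(t − τ))). A one-dimensional numerical check of this interpretation (the values of a, t − τ, and x below are arbitrary):

```python
import numpy as np

def z0_1d(y, x, a, dt):
    """1-D parametrix: Gaussian density in y with mean x and variance 2*a*dt.

    Matches (4 pi dt)^(-1/2) (det A)^(-1/2) exp(-sigma / (4 dt)) with A = [a]
    and sigma(y, x) = (y - x)^2 / a.
    """
    return np.exp(-(y - x) ** 2 / (4.0 * a * dt)) / np.sqrt(4.0 * np.pi * a * dt)

x, a, dt = 0.5, 2.0, 0.3
y = np.linspace(x - 30, x + 30, 200_001)
dy = y[1] - y[0]
total_mass = z0_1d(y, x, a, dt).sum() * dy  # Z_0 is a density in y, so ~1

# the same values via the generic normal density with variance 2*a*dt
var = 2.0 * a * dt
normal_pdf = np.exp(-(y - x) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
max_diff = np.abs(z0_1d(y, x, a, dt) - normal_pdf).max()
```

This Gaussian reading of Z_0 is what makes sampling from the parametrix straightforward in the estimators below.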
Using the Green formula, it is easy to prove an integral identity (8), in which the inner integral in the third term is a surface integral over the boundary of the domain of integration. Using the Cauchy inequality, one can estimate the last integral in formula (8) and show that it converges to zero as ρ → ∞. For the function v we have the inequality v(y, τ) ≤ v_0(y, τ); moreover, v(y, τ) → v_0(y, τ) as ρ → ∞. Putting ρ → ∞ in formula (8), we obtain the following integral representation of the solution of the Cauchy problem (2):

u(x, t) = ∫_0^t dτ ∫_{R^n} K_1(x, y, t, τ) u(y, τ) dy + F(x, t), (10)

where K_1(x, y, t, τ) = −Mv_0(y, τ), the operator M contains a bounded coefficient of the function u, and

F(x, t) = ∫_0^t dτ ∫_{R^n} Z_0(x, y, t, τ) f(y, τ) dy + ∫_{R^n} Z_0(x, y, t, 0) ϕ(y) dy.

Von Neumann-Ulam Scheme
Now we investigate some properties of the integral operator in Equation (10). The matrix A(x, t) of the coefficients of the higher derivatives is symmetric, so the kernel can be written in the form (12), where d_i(y, τ) = −2 ∑_{j=1}^n ∂a_ij(y, τ)/∂y_j − a_i(y, τ) are bounded functions. The expression (12) has the same structure and properties as the kernel K(x, y, t, λ) in formula (11.12) in ([1], Sec. IV). It follows from inequalities (11.3) and (11.17) in ([1], Sec. IV) that there exist positive constants C and c such that the kernel satisfies the estimate (13). Examples of the constants c, C and further discussion can be found in [7]. In particular, it is shown in [7] that the inequality (13) implies uniform convergence of the von Neumann series for Equation (10) if f(x, t) and ϕ(x) are bounded functions.

We can apply the methods of [7] for constructing unbiased estimators for u(x, t). To realize the von Neumann-Ulam scheme, it is sufficient to choose a transition probability density for a Markov chain consistent with the kernel K_1(x, y, t, τ) of the operator K. For instance, we can take a density in which 0 < q < 1 is the probability of absorption at the current step. The constant C in these formulas is the same as in inequality (13); we can take any constant such that 4µC < 1. Hence, we have the compatibility of the density and the kernel of the integral equation.

The probability of absorption at each step is a constant. Therefore, the time of absorption N has a geometric probability distribution with parameter q: P(N = m) = q(1 − q)^m for m = 0, 1, 2, . . ., so that EN = (1 − q)/q. The random variable N and the trajectory are independent random elements. We can use the procedure described in [7] for generating a Markov chain {(x_m, t_m)}_{m=1}^∞ which starts at the point (x_0, t_0) = (x, t).
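The absorption mechanism itself is easy to simulate and check numerically. The sketch below draws the absorption step N directly from its geometric distribution (the parameter values are arbitrary, chosen only for the check):

```python
import numpy as np

def absorption_times(q, n_chains, seed=None):
    """Simulate the absorption step N for n_chains independent Markov chains.

    At each step the chain is absorbed with constant probability q, hence
    P(N = m) = q * (1 - q)**m for m = 0, 1, 2, ... and E N = (1 - q) / q.
    """
    rng = np.random.default_rng(seed)
    # rng.geometric returns the number of trials up to and including the
    # success, i.e. N + 1 in the convention above, so subtract 1.
    return rng.geometric(q, size=n_chains) - 1

q = 0.25
n_vals = absorption_times(q, 400_000, seed=0)
mean_n = n_vals.mean()       # close to (1 - q)/q = 3
p0 = (n_vals == 0).mean()    # close to P(N = 0) = q = 0.25
```

Because N is independent of the trajectory, it can be drawn up front and the chain then simulated for exactly N steps.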
For constructing unbiased estimators for the solution of Equation (10), we use the standard formulas of the von Neumann-Ulam scheme. We define the weight functions by W^(0) = 1 and a multiplicative recursion for m = 1, 2, .... The final unbiased estimators for u(x, t) are obtained after the replacement of F(x_m, t_m) by its unbiased estimator, in which the random variable θ is uniformly distributed on the interval [0, 1] and the random vector Y has a normal distribution with mean 0 and covariance matrix A(x_m, t_m); θ and Y are independent. It is proved in [7] that the estimators have finite variances.

Numerical Algorithm
The numerical algorithm is based on the Monte Carlo method for calculating the mathematical expectation of a random variable.
Consider as an example the following unbiased estimator for u(x, t): ζ(x, t) = W^(N) F_N / q, where F_N is the unbiased estimator of F(x_N, t_N) described above.
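The whole procedure (chain generation, weight accumulation, and averaging of independent copies of ζ) can be illustrated on a toy Volterra equation with a known solution. The sketch below is a hypothetical scalar example, not the kernel K_1 of Equation (10): it solves u(t) = 1 + ∫_0^t u(s) ds, whose solution is u(t) = e^t, using a uniform transition density on [0, t] and a constant absorption probability q:

```python
import numpy as np

def vnu_estimate(t, q=0.5, n_samples=200_000, seed=None):
    """Von Neumann-Ulam estimator for u(t) = 1 + int_0^t u(s) ds, i.e. u(t) = e^t.

    Chain: from the current time s, absorb with probability q or jump to a
    point uniform on [0, s].  The weight is multiplied by
    K(s_prev, s_new) / ((1 - q) * p(s_new | s_prev)) = s_prev / (1 - q),
    since K = 1 and p = 1/s_prev.  The estimator is zeta = W_N * F(s_N) / q
    with F = 1.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        s, w = t, 1.0
        while rng.random() >= q:      # survive with probability 1 - q
            w *= s / (1.0 - q)        # weight update for the jump
            s *= rng.random()         # next point, uniform on [0, s]
        total += w / q                # zeta = W_N * F(s_N) / q, F = 1
    return total / n_samples

est = vnu_estimate(1.0, seed=0)       # target: u(1) = e
```

Averaging n_samples independent copies of ζ gives the Monte Carlo approximation of u(t); the scheme for Equation (10) has the same shape, with the transition density consistent with K_1.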

Conjugate Scheme
Now we apply the technique developed in [8] to Equation (10). Fix a number q (0 < q < 1) and generate a random variable N having a geometric distribution: P(N = m) = q(1 − q)^m, m = 0, 1, .... The random variables ξ_1 and ξ_2 defined below are unbiased estimators for u(x, t). We execute m times the procedure of evaluation of the integral in (11) to determine the unbiased estimator for K^m F(x, t). This procedure is similar to the procedure of evaluation of the integral (3.8) in [8].

Namely, let S_1^0(x, t) = {ω ∈ R^n | ω⊤A^{−1}(x, t)ω = 1} be an ellipsoid centered at zero, and let σ_n = 2π^{n/2}/Γ(n/2) be the area of the sphere of radius 1 in R^n. The random vector Ω is distributed on S_1^0(x, t) with a density p(x, t, ω). After the calculation of the kernel K_1(x, y, t, τ) = −Mv_0(y, τ), we obtain a representation in which g_1, g_2, h_1, h_2 are bounded functions. Substituting these expressions into (23) and putting s = r²/(4(t − τ)), we obtain the corresponding representation for Ku(x, t).

The unbiased estimator η(x, t) for Ku(x, t) has the form (27), where the random variables ϑ, δ, θ are distributed on the interval [0, 1]. The variables ϑ and δ have the densities (α/2)s^{α/2−1} and 1/(2√s), respectively, and θ is distributed uniformly. The variable γ(m) has a gamma distribution with the density s^{m−1}e^{−s}/Γ(m).
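The auxiliary variables can be generated from uniforms by inverting the distribution functions: the density (α/2)s^{α/2−1} on [0, 1] has distribution function s^{α/2}, giving ϑ = U^{2/α}; the density 1/(2√s) has distribution function √s, giving δ = U²; and γ(m) is a standard Gamma(m) variable. A sketch with moment checks (the values of α and m are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, m, n = 0.5, 3, 400_000
u1, u2 = rng.random(n), rng.random(n)

theta_var = u1 ** (2.0 / alpha)   # density (alpha/2) s^(alpha/2 - 1) on [0, 1]
delta_var = u2 ** 2               # density 1 / (2 sqrt(s)) on [0, 1]
gamma_var = rng.gamma(m, size=n)  # density s^(m-1) e^(-s) / Gamma(m)

# moment checks: E theta = alpha/(alpha + 2), E delta = 1/3, E gamma = m
mean_theta = theta_var.mean()
mean_delta = delta_var.mean()
mean_gamma = gamma_var.mean()
```

All three transformations are exact (not approximate), so each sampled variable has precisely the density required by the estimator η.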
Choosing one of the summands in (27) with probability 1/6 and multiplying it by 6, we obtain the final unbiased estimator ζ(x, t) for Ku(x, t).
The unbiased estimators ψ_m for K^m F(x, t) can be constructed on trajectories of the inhomogeneous Markov chain {(x_k, t_k)}_{k=0}^∞ with initial point (x, t). Consider stochastically independent copies of the random elements ϑ, δ, θ, γ, Ω for each step and define the next point (x_k, t_k) by the corresponding formulas. After m steps, we multiply the variable ψ_m by an estimator for F(x_m, t_m). Then the random variables

ξ_1(x, t) = ψ_N / (q(1 − q)^N),  ξ_2(x, t) = ∑_{m=0}^N ψ_m / (1 − q)^m

are unbiased estimators for u(x, t). Repeating the arguments of the proof of Theorem 1 in [8], it is easy to prove that the constructed estimators have finite variances.
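The effect of the independent geometric randomization can be checked on a toy sequence. If ψ_m is replaced by the deterministic stand-in a^m (so that Eψ_m = a^m plays the role of K^m F), then, under this reading of the two formulas, both ξ_1 = ψ_N/(q(1 − q)^N) and ξ_2 = ∑_{m=0}^N ψ_m/(1 − q)^m must have expectation ∑_m a^m = 1/(1 − a):

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, n = 0.6, 0.5, 200_000
target = 1.0 / (1.0 - a)              # sum over m of a^m = 2.5

n_geo = rng.geometric(q, size=n) - 1  # N with P(N = m) = q (1 - q)^m
psi = a ** n_geo                      # deterministic stand-in, E psi_m = a^m

xi1 = psi / (q * (1.0 - q) ** n_geo)

# xi2 sums psi_m / (1 - q)^m over m = 0..N; with psi_m = a^m this is a
# geometric partial sum (r^(N+1) - 1) / (r - 1) with r = a / (1 - q).
r = a / (1.0 - q)
xi2 = (r ** (n_geo + 1) - 1.0) / (r - 1.0)

mean_xi1 = xi1.mean()
mean_xi2 = xi2.mean()
```

Both sample means should agree with the target, since E[1(N ≥ m)] = (1 − q)^m exactly cancels the (1 − q)^{-m} factors.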
Remark 1. The unbiased estimators constructed above and the algorithm for calculating them can be used in the Monte Carlo method to find u(x, t). This computational algorithm is more complex than the algorithm in Section 3; on the other hand, it does not require an estimate of the spectrum of the matrix A(x, t).
Here d⊤ denotes the transposed vector d = (d_1, d_2, . . ., d_n), Tr(A) denotes the trace of the matrix A, and E is the mathematical expectation of a function of the random variable Ω. All coefficients in Equation (2) belong to the Hölder class; hence, we can simplify the expressions in (23).

The initial value of the variable ψ_m is 1. At step k we consider ζ(x_{k−1}, t_{k−1}) and multiply the variable ψ_m by the corresponding weight factor; the arguments of the function u determine the next state of the Markov chain. For example, if the first summand of the estimator (27) was chosen at step k, then we multiply the variable ψ_m by 6t^α.

We can use the estimator η in the Monte Carlo procedure if we can generate the process X_t. The process X_t is a solution of the respective stochastic differential equation, and we can approximate it by another process Y_t using, for example, the Euler scheme. Let 0 = t_0 < t_1 < . . . < t_m = t, and let Y_{t_0} = x, Y_{t_1}, . . ., Y_{t_m} be the Euler approximation of the corresponding values of X_s, s ∈ [0, t]. After replacing X by Y, the estimator η becomes biased. Let p_X(y_1, . . ., y_m) and p_Y(y_1, . . ., y_m) be the densities of the m-dimensional distributions of the processes X and Y, respectively. Then p_X(Y_{t_1}, . . ., Y_{t_m}) ϕ(Y_{t_m}) / p_Y(Y_{t_1}, . . ., Y_{t_m}) is an unbiased estimator for Eϕ(X_t). Finally, if a random variable ζ is an unbiased estimator for p_X(Y_{t_1}, . . ., Y_{t_m}), then E[ζ ϕ(Y_{t_m}) / p_Y(Y_{t_1}, . . ., Y_{t_m})] = Eϕ(X_t).

Funding: The research is supported by the Russian Foundation for Basic Research, project No. 17-01-00267.

Conflicts of Interest: The authors declare no conflict of interest.