Article

Monte Carlo Algorithms for the Parabolic Cauchy Problem

Institute of Mathematics, Natural Science and Computer Science, Vologda State University, 160000 Vologda, Russia
Mathematics 2019, 7(2), 177; https://doi.org/10.3390/math7020177
Submission received: 19 December 2018 / Revised: 30 January 2019 / Accepted: 12 February 2019 / Published: 15 February 2019
(This article belongs to the Special Issue Stochastic Processes: Theory and Applications)

Abstract

New Monte Carlo algorithms for solving the Cauchy problem for the second-order parabolic equation with smooth coefficients are considered. Unbiased estimators for the solutions of this problem are constructed.

1. Introduction

Consider the parabolic operator
$$L = L\left(x,t,\frac{\partial}{\partial x},\frac{\partial}{\partial t}\right) = -\frac{\partial}{\partial t} + \sum_{i,j=1}^{n} a_{ij}(x,t)\,\frac{\partial^2}{\partial x_i\,\partial x_j} + \sum_{i=1}^{n} a_i(x,t)\,\frac{\partial}{\partial x_i} + a_0(x,t). \quad (1)$$
Let all coefficients of the operator L be defined in the domain $D_{n+1}(T) = \mathbb{R}^n \times (0,T)$. Denote by $A(x,t)$ the coefficient matrix of the highest derivatives of the operator L, and suppose that $A(x,t)$ is a symmetric matrix. Suppose that all eigenvalues of the matrix $A(x,t)$ belong to the fixed interval $[\nu,\mu]$, where $\nu > 0$.
Consider the Cauchy problem in the domain D n + 1 ( T )
$$L(x,t,\partial/\partial x,\partial/\partial t)\,u(x,t) = -f(x,t), \qquad u|_{t=0} = \varphi(x). \quad (2)$$
A random variable $\xi(x,t)$ is called an unbiased estimator for a function $u(x,t)$ if its mathematical expectation $E\xi(x,t)$ is equal to $u(x,t)$. Every unbiased estimator gives a stochastic numerical method for the evaluation of the function $u(x,t)$. We now briefly discuss some known stochastic methods for solving the Cauchy problem.
Let $0 < \alpha < 1$ and let the coefficients of the parabolic operator be elements of the Hölder class $H^{\alpha,\alpha/2}(\overline{D_{n+1}(T)})$; then Equation (2) has a fundamental solution $Z(x,y,t,\tau)$ [1]. Let the function $f(x,t)$ satisfy the Hölder condition with respect to all of its arguments, and let the function $\varphi(x)$ be continuous. Let, in addition, $f(x,t)$ and $\varphi(x)$ grow no faster than $e^{a|x|^2}$ as $|x|\to\infty$. Then, the solution of the Cauchy problem can be written in the following form
$$u(x,t) = \int_0^t d\tau \int_{\mathbb{R}^n} Z(x,y,t,\tau)\,f(y,\tau)\,dy + \int_{\mathbb{R}^n} Z(x,y,t,0)\,\varphi(y)\,dy. \quad (3)$$
If $a_0(x,t) \equiv 0$, then the fundamental solution $Z(x,y,t,\tau)$ is a probability density (as a function of y). So, if the fundamental solution is known, one can construct the corresponding unbiased estimator. In particular, if the coefficients of the equation are constant, it is enough to generate a normally distributed random vector in $\mathbb{R}^n$ for the evaluation of $u(x,t)$. In the general case, $Z(x,y,t,\tau)$ is a transition density of a stochastic process $X_t$, which starts from the point x at time $\tau = 0$. Hence,
$$u(x,t) = E\int_0^t f(X_s,\, t-s)\,ds + E\,\varphi(X_t), \quad (4)$$
and the random variable $\eta = t\,f(X_{t\theta},\, t(1-\theta)) + \varphi(X_t)$ is an unbiased estimator for $u(x,t)$, where the variable $\theta$ is uniformly distributed in $[0,1]$. We can use this estimator in the Monte Carlo procedure if we can generate the process $X_t$. The process $X_t$ is a solution of the respective stochastic differential equation, and we can approximate it by another process $Y_t$ using, for example, the Euler scheme. Let $0 = t_0 < t_1 < \cdots < t_m = t$, and let $Y_{t_0} = x, Y_{t_1}, \ldots, Y_{t_m}$ be the Euler approximation of the corresponding values of $X_s$, $s \in [0,t]$. After replacing X by Y, the estimator $\eta$ becomes biased. Let $p_X(y_1,\ldots,y_m)$ and $p_Y(y_1,\ldots,y_m)$ be the densities of the m-dimensional distributions of the X and Y processes, respectively. The estimator $p_X(Y_{t_1},\ldots,Y_{t_m})\,\varphi(Y_{t_m})/p_Y(Y_{t_1},\ldots,Y_{t_m})$ is an unbiased estimator for $E\,\varphi(X_t)$. Finally, if a random variable $\zeta$ is an unbiased estimator for $p_X(Y_{t_1},\ldots,Y_{t_m})$, then
$$E\,\frac{\zeta\,\varphi(Y_{t_m})}{p_Y(Y_{t_1},\ldots,Y_{t_m})} = E\,\varphi(X_t). \quad (5)$$
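To make the constant-coefficient remark concrete, here is a minimal sketch of sampling the estimator $\eta$ of (4) in Python. The function names, the drift vector a, and the convention that $X_s$ has covariance $2sA$ (chosen to match the Gaussian parametrix used below) are our illustrative assumptions, not notation from the paper:

```python
import numpy as np

def eta_constant(x, t, A, a, f, phi, rng):
    """One sample of eta = t*f(X_{t*theta}, t*(1-theta)) + phi(X_t) from (4),
    for constant coefficients with a_0 = 0.  Here X_s ~ N(x + a*s, 2*s*A),
    i.e., the diffusion matrix of X is taken to be 2*A (our convention)."""
    n = len(x)
    L = np.linalg.cholesky(A)                       # A = L L^T
    theta = rng.uniform()
    s = t * theta
    X_s = x + a * s + np.sqrt(2.0 * s) * (L @ rng.standard_normal(n))
    # independent increment from time s to time t
    X_t = X_s + a * (t - s) + np.sqrt(2.0 * (t - s)) * (L @ rng.standard_normal(n))
    return t * f(X_s, t * (1.0 - theta)) + phi(X_t)
```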
The first factor $\zeta$ in formula (5) was constructed by W. Wagner in his paper [2]. It was shown that the fundamental solution is a functional of the solution of some Volterra integral equation. The von Neumann–Ulam scheme [3] was applied for estimation of the fundamental solution. Monte Carlo algorithms for the evaluation of some other functionals can be found in [4,5,6].
In paper [7], the von Neumann–Ulam scheme was used to construct another class of estimators for $u(x,t)$ without using a grid. A conjugate (dual) scheme for constructing unbiased estimators for functionals of the solutions of an integral equation which is equivalent to the Cauchy problem was considered in [8]. This scheme simplifies the modeling procedure, because the boundaries of the spectrum of the matrix $A(x,t)$ are not required to be known.
Finally, if the operator L has differentiable coefficients, then we can obtain an integral equation for $u(x,t)$ by using the Green formula and solve this equation via the Monte Carlo method. Such algorithms were considered in [9,10] for equations whose principal part is the Laplace operator. In the general case, we obtain a Volterra equation for the solution $u(x,t)$ of the Cauchy problem. In this paper, we investigate the von Neumann–Ulam scheme for the regular and conjugate cases.
It is necessary to note that the multilevel Monte Carlo method [11,12] is often used for the evaluation of the functional $E\,\varphi(X_t)$, where the process $X_t$ is a solution of the respective stochastic differential equation. This approach is not covered in this paper.
This paper does not contain any results of numerical experiments. Numerical experiments and the efficiency of various stochastic algorithms for solving the Cauchy problem will be the subject of a separate paper.

2. Integral Representation

Let all coefficients of the operator L be elements of the Hölder class and let there exist continuous and bounded derivatives
$$\frac{\partial^2 a_{ij}(x,t)}{\partial x_i\,\partial x_j}, \qquad \frac{\partial a_{ij}(x,t)}{\partial x_j}, \qquad \frac{\partial a_i(x,t)}{\partial x_i}$$
for i , j = 1 , 2 , , n .
We also suppose that the solution of the Cauchy problem is continuous and bounded. We define $\|u\|$ by the equality
$$\|u\| = \sup_{(x,t)\in\overline{D_{n+1}(T)}} |u(x,t)|.$$
Take a point $(x,t)$. Let $A^{(i,j)}(x,t)$ be the elements of the inverse matrix $A^{-1}(x,t)$. Let us define a function $\sigma(y,x,t)$ by the equality
$$\sigma(y,x,t) = \left(\sum_{i,j=1}^{n} A^{(i,j)}(x,t)\,(y_i - x_i)(y_j - x_j)\right)^{1/2}.$$
Define the function Z 0 for t > τ by equality
$$Z_0(x,y,t,\tau) = \frac{1}{[4\pi(t-\tau)]^{n/2}\,(\det A(x,t))^{1/2}}\,\exp\left(-\frac{\sigma^2(y,x,t)}{4(t-\tau)}\right).$$
For t < τ we set Z 0 ( x , y , t , τ ) = 0 . We denote Z 0 ( x , y , t , τ ) by v 0 ( y , τ ) if the point ( x , t ) is fixed.
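The parametrix $Z_0$ translates directly into code; a sketch, assuming a user-supplied routine A_fn(x, t) returning the matrix $A(x,t)$ and the use of numpy:

```python
import numpy as np

def Z0(x, y, t, tau, A_fn):
    """Parametrix Z_0(x, y, t, tau): as a function of y, a Gaussian density
    with mean x and covariance 2*(t - tau)*A(x, t)."""
    if t <= tau:
        return 0.0
    A = A_fn(x, t)
    n = len(x)
    d = y - x
    sigma2 = d @ np.linalg.solve(A, d)                  # sigma^2(y, x, t)
    norm = (4.0 * np.pi * (t - tau)) ** (n / 2.0) * np.sqrt(np.linalg.det(A))
    return float(np.exp(-sigma2 / (4.0 * (t - tau))) / norm)
```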
For $\rho > 0$, we define a function $v(y,\tau)$ by the equality
$$v(y,\tau) = \frac{1}{[4\pi(t-\tau)]^{n/2}\,(\det A(x,t))^{1/2}}\left[\exp\left(-\frac{\sigma^2(y,x,t)}{4(t-\tau)}\right) - \exp\left(-\frac{\rho^2}{4(t-\tau)}\right)\right].$$
Using the Green formula, it is easy to prove that
$$u(x,t) = -\int_0^t\int_{D_\rho}\big[v(y,\tau)\,Lu(y,\tau) - u(y,\tau)\,Mv(y,\tau)\big]\,dy\,d\tau + \int_{D_\rho} u(y,0)\,v(y,0)\,dy$$
$$+ \int_0^t\int_{\partial D_\rho}\frac{(y-x)^T A^{-1}(x,t)\,A(y,\tau)\,A^{-1}(x,t)\,(y-x)}{2(t-\tau)\,\|A^{-1}(x,t)(y-x)\|}\; Z_0(x,y,t,\tau)\,u(y,\tau)\,dS_y\,d\tau, \quad (8)$$
where the inner integral in the third term is a surface integral over the boundary of the domain $D_\rho \subset \mathbb{R}^n$ defined by $D_\rho = \{y \in \mathbb{R}^n \mid \sigma(y,x,t) < \rho\}$, and $M$ is the conjugate operator of $L = L(y,\tau,\partial/\partial y,\partial/\partial\tau)$:
$$Mv(y,\tau) = \frac{\partial v(y,\tau)}{\partial\tau} + \sum_{i,j=1}^{n}\frac{\partial^2}{\partial y_i\,\partial y_j}\big(a_{ij}(y,\tau)\,v(y,\tau)\big) - \sum_{i=1}^{n}\frac{\partial}{\partial y_i}\big(a_i(y,\tau)\,v(y,\tau)\big) + a_0(y,\tau)\,v(y,\tau).$$
Using the Cauchy inequality we have
$$(y-x)^T A^{-1}(x,t)\,A(y,\tau)\,A^{-1}(x,t)\,(y-x) \le \|A(y,\tau)\|\cdot\|A^{-1}(x,t)(y-x)\|^2.$$
Define a new scalar product $[v,w]$ by the equality $[v,w] = v^T A^{-1}(x,t)\,w$. Then
$$[\,y-x,\; y-x\,] = \sigma^2(x,y,t), \qquad [\,w,w\,] \le \|A^{-1}(x,t)\|\cdot\|w\|^2.$$
Using the Cauchy inequality we have
$$\|A^{-1}(x,t)(y-x)\|^4 = [\,y-x,\; A^{-1}(x,t)(y-x)\,]^2,$$
$$\|A^{-1}(x,t)(y-x)\|^4 \le [\,A^{-1}(x,t)(y-x),\; A^{-1}(x,t)(y-x)\,]\cdot\sigma^2(x,y,t).$$
Hence,
$$\|A^{-1}(x,t)(y-x)\|^2 \le \|A^{-1}(x,t)\|\cdot\sigma^2(x,y,t).$$
Now we can estimate the last integral in formula (8):
$$\left|\int_0^t\int_{\partial D_\rho}\frac{(y-x)^T A^{-1}(x,t)\,A(y,\tau)\,A^{-1}(x,t)\,(y-x)}{2(t-\tau)\,\|A^{-1}(x,t)(y-x)\|}\; Z_0(x,y,t,\tau)\,u(y,\tau)\,dS_y\,d\tau\right|$$
$$\le \frac{\mu}{\nu}\int_0^t\int_{\partial D_\rho}\frac{\rho}{2(t-\tau)}\cdot\frac{1}{[4\pi(t-\tau)]^{n/2}\,(\det A(x,t))^{1/2}}\,\exp\left(-\frac{\rho^2}{4(t-\tau)}\right)|u(y,\tau)|\,dS_y\,d\tau$$
$$\le \frac{\|u\|}{\Gamma(\frac{n}{2})}\cdot\frac{\mu}{\nu}\int_0^t\frac{\rho^n}{(t-\tau)\,[4(t-\tau)]^{n/2}}\,\exp\left(-\frac{\rho^2}{4(t-\tau)}\right)d\tau = \frac{\|u\|}{\Gamma(\frac{n}{2})}\cdot\frac{\mu}{\nu}\int_{\rho^2/(4t)}^{\infty} s^{\frac{n}{2}-1}\,e^{-s}\,ds.$$
Hence, the last integral in formula (8) converges to zero as $\rho \to \infty$. For the function v we have the inequality $v(y,\tau) \le v_0(y,\tau)$. Moreover, $v(y,\tau) \to v_0(y,\tau)$ as $\rho \to \infty$. So, using the equality
$$\int_{\mathbb{R}^n} v_0(y,\tau)\,dy = 1,$$
we have
$$\int_{D_\rho} u(y,0)\,v(y,0)\,dy \to \int_{\mathbb{R}^n} u(y,0)\,v_0(y,0)\,dy$$
and
$$\int_0^t\int_{D_\rho} v(y,\tau)\,Lu(y,\tau)\,dy\,d\tau \to \int_0^t\int_{\mathbb{R}^n} v_0(y,\tau)\,Lu(y,\tau)\,dy\,d\tau.$$
It is easy to see that
$$Mv(y,\tau) - Mv_0(y,\tau) = -\left(\frac{\partial}{\partial\tau} + d_0\right)\frac{1}{[4\pi(t-\tau)]^{n/2}\,(\det A(x,t))^{1/2}}\,\exp\left(-\frac{\rho^2}{4(t-\tau)}\right)$$
$$= \left(-\frac{n}{2(t-\tau)} + \frac{\rho^2}{4(t-\tau)^2} - d_0\right)\frac{1}{[4\pi(t-\tau)]^{n/2}\,(\det A(x,t))^{1/2}}\,\exp\left(-\frac{\rho^2}{4(t-\tau)}\right),$$
where
$$d_0 = \sum_{i,j=1}^{n}\frac{\partial^2 a_{ij}(y,\tau)}{\partial y_i\,\partial y_j} - \sum_{j=1}^{n}\frac{\partial a_j(y,\tau)}{\partial y_j} + a_0(y,\tau)$$
is the coefficient of the function u in the operator $Mu$. The inequalities
$$\left|\int_0^t\int_{D_\rho} u(y,\tau)\big[Mv(y,\tau) - Mv_0(y,\tau)\big]\,dy\,d\tau\right|$$
$$\le \mathrm{const}\cdot\|u\|\int_0^t\left(\frac{n}{2(t-\tau)} + \frac{\rho^2}{4(t-\tau)^2} + \|d_0\|\right)\frac{\rho^n}{\Gamma(\frac{n}{2}+1)\,[4(t-\tau)]^{n/2}}\,\exp\left(-\frac{\rho^2}{4(t-\tau)}\right)d\tau$$
$$\le \mathrm{const}\cdot\|u\|\,\big(1 + 2t\|d_0\|\,n^{-1}\big)\,\frac{1}{\Gamma(\frac{n}{2})}\int_{\rho^2/(4t)}^{\infty} s^{\frac{n}{2}-1}\,e^{-s}\,ds + \mathrm{const}\cdot\|u\|\,\frac{1}{\Gamma(\frac{n}{2}+1)}\int_{\rho^2/(4t)}^{\infty} s^{\frac{n}{2}}\,e^{-s}\,ds$$
show that
$$\int_0^t\int_{D_\rho} u(y,\tau)\,Mv(y,\tau)\,dy\,d\tau \to \int_0^t\int_{\mathbb{R}^n} u(y,\tau)\,Mv_0(y,\tau)\,dy\,d\tau.$$
Letting $\rho \to \infty$ in formula (8), we obtain the following integral representation of the solution of the Cauchy problem (2):
$$u(x,t) = \int_0^t\int_{\mathbb{R}^n}\big[v_0(y,\tau)\,f(y,\tau) + u(y,\tau)\,Mv_0(y,\tau)\big]\,dy\,d\tau + \int_{\mathbb{R}^n}\varphi(y)\,v_0(y,0)\,dy. \quad (10)$$

3. Von Neumann–Ulam Scheme

Now we investigate some properties of the integral operator
$$Ku(x,t) = \int_0^t\int_{\mathbb{R}^n} u(y,\tau)\,Mv_0(y,\tau)\,dy\,d\tau \quad (11)$$
in Equation (10). The matrix A ( x , t ) of the coefficients of higher derivatives is symmetric. So, from the equation
$$\sum_{i,j=1}^{n} a_{ij}(x,t)\,\frac{\partial^2}{\partial y_i\,\partial y_j}\,v_0(y,\tau) = -\frac{\partial v_0(y,\tau)}{\partial\tau},$$
we have
$$Mv_0(y,\tau) = -\sum_{i,j=1}^{n}\big[a_{ij}(x,t) - a_{ij}(y,\tau)\big]\,\frac{\partial^2 v_0(y,\tau)}{\partial y_i\,\partial y_j} + \sum_{i=1}^{n} d_i(y,\tau)\,\frac{\partial v_0(y,\tau)}{\partial y_i} + d_0(y,\tau)\,v_0(y,\tau), \quad (12)$$
where $d_i(y,\tau) = 2\sum_{j=1}^{n}\partial a_{ij}(y,\tau)/\partial y_j - a_i(y,\tau)$ are bounded.
The expression (12) has the same structure and properties as the kernel K ( x , y , t , λ ) in formula (11.12) in ([1], Sec. IV). It follows from inequalities (11.3) and (11.17) in ([1], Sec. IV) that there exist positive constants C and c, such that
$$|Mv_0(y,\tau)| \le c\,(t-\tau)^{-\frac{n+2-\alpha}{2}}\,\exp\left(-\frac{C\,|y-x|^2}{t-\tau}\right) \quad (13)$$
for $0 \le \tau < t$.
Examples of the constants c and C and further discussion can be found in [7]. In particular, it is shown in [7] that inequality (13) implies uniform convergence of the von Neumann series for Equation (10) if $f(x,t)$ and $\varphi(x)$ are bounded functions. We have
$$u(x,t) = \sum_{i=0}^{\infty} K^i F(x,t),$$
$$F(x,t) = F_1(x,t) + F_2(x,t) = \int_0^t\int_{\mathbb{R}^n} v_0(y,\tau)\,f(y,\tau)\,dy\,d\tau + \int_{\mathbb{R}^n}\varphi(y)\,v_0(y,0)\,dy.$$
We can apply the methods of [7] for constructing unbiased estimators for $u(x,t)$. To realize the von Neumann–Ulam scheme, it is sufficient to choose a transition probability density for a Markov chain consistent with the kernel $K_1(x,y,t,\tau)$ of the operator K. For instance, we can take a density of the form
$$p\big((x,t)\to(y,\tau)\big) = \frac{\alpha(1-q)}{2}\; t^{-\alpha/2}\,(t-\tau)^{\alpha/2-1}\, Z_1(x-y,\, t-\tau),$$
where $0 < q < 1$ is the probability of absorption at the current step and
$$Z_1(x-y,\, t-\tau) = \left(\frac{C}{\pi(t-\tau)}\right)^{n/2}\exp\left(-\frac{C\,|x-y|^2}{t-\tau}\right)$$
for $0 \le \tau < t$, and $Z_1(x-y,\, t-\tau) = 0$ for $\tau > t$.
The constant C in these formulas is the same as in inequality (13). We can take any constant such that $4\mu C < 1$. Hence, we have the compatibility of the density and the kernel of the integral equation. The probability of absorption at each step is a constant; therefore, the time of absorption N has a geometric probability distribution with parameter q: $P(N = m) = q(1-q)^m$ for $m = 0, 1, 2, \ldots$. The random variable N and the trajectory of the chain are independent random elements, and $EN = (1-q)/q$. We can use the procedure described in [7] for generating a Markov chain $\{(x_m, t_m)\}_{m=0}^{\infty}$ which starts at the point $(x_0, t_0) = (x,t)$.
For constructing unbiased estimators for the solution of Equation (10), we use the formulas
$$\eta(x,t) = \sum_{m=0}^{N} W(m)\,F(x_m, t_m), \qquad \zeta(x,t) = \frac{W(N)\,F(x_N, t_N)}{q}.$$
We define weight functions as $W(0) = 1$,
$$W(m) = W(m-1)\,\frac{K_1(x_{m-1}, x_m, t_{m-1}, t_m)}{p\big((x_{m-1}, t_{m-1})\to(x_m, t_m)\big)},$$
for $m = 1, 2, \ldots$. The final unbiased estimators for $u(x,t)$ are obtained after replacing $F(x_m, t_m)$ by their unbiased estimators
$$\hat F_m = t_m\, f\big(x_m + \sqrt{2t_m(1-\theta)}\;Y,\; t_m\theta\big) + \varphi\big(x_m + \sqrt{2t_m}\;Y\big),$$
where the random variable $\theta$ is uniformly distributed on the interval $[0,1]$, and the random vector Y has a normal distribution with mean 0 and covariance matrix $A(x_m, t_m)$; $\theta$ and Y are independent.
It is proved in [7] that the estimators have finite variances.
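A minimal sketch of one realization of the absorption estimator $\zeta$, under the formulas above. The callables K1, p, step, and F_hat are hypothetical stand-ins for the kernel $K_1$, the transition density p (including the survival factor $1-q$), one transition of the chain, and the estimator $\hat F$:

```python
import numpy as np

def zeta_sample(x, t, q, K1, p, step, F_hat, rng):
    """One realization of zeta(x, t) = W(N) * F_hat(x_N, t_N) / q.
    Hypothetical helpers (not from the paper):
      K1(xp, tp, xn, tn) -- kernel K_1(x_{m-1}, x_m, t_{m-1}, t_m);
      p(xp, tp, xn, tn)  -- transition density, including the factor (1 - q);
      step(x, t, rng)    -- one transition of the chain, sampled from p/(1 - q);
      F_hat(x, t, rng)   -- unbiased estimator of F(x, t)."""
    W = 1.0
    xm, tm = np.asarray(x, dtype=float), t
    while rng.uniform() >= q:              # absorption occurs with probability q
        xn, tn = step(xm, tm, rng)
        W *= K1(xm, tm, xn, tn) / p(xm, tm, xn, tn)
        xm, tm = xn, tn
    return W * F_hat(xm, tm, rng) / q
```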

Numerical Algorithm

The numerical algorithm is based on the Monte Carlo method for calculating the mathematical expectation of a random variable.
Consider as an example the following unbiased estimator for $u(x,t)$: $\hat\zeta(x,t) = W(N)\,\hat F_N/q$.
Let $\hat\zeta_1, \hat\zeta_2, \ldots, \hat\zeta_k$ be independent realizations of the estimator $\hat\zeta(x,t)$. Then we can approximate $u(x,t)$ by the sample average $\bar\zeta = (\hat\zeta_1 + \hat\zeta_2 + \cdots + \hat\zeta_k)/k$. The approximation error is estimated as $3\sqrt{S^2/k}$, where $S^2 = (\hat\zeta_1^2 + \hat\zeta_2^2 + \cdots + \hat\zeta_k^2)/k - \bar\zeta^2$ is the sample variance.
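A sketch of the corresponding computational loop (the names are ours):

```python
import numpy as np

def monte_carlo(sample, k, rng):
    """Sample mean and the error bound 3*sqrt(S^2/k) (three-sigma rule)
    over k independent realizations of an unbiased estimator."""
    vals = np.array([sample(rng) for _ in range(k)])
    mean = vals.mean()
    S2 = (vals ** 2).mean() - mean ** 2            # sample variance
    return mean, 3.0 * np.sqrt(S2 / k)
```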
For simulating a Markov chain { ( x m , t m ) } m = 0 N , we can use the formulas
$$x_0 = x, \quad t_0 = t, \qquad x_{m+1} = x_m + \sqrt{\frac{t_m\vartheta_m}{2C}}\;Y_m, \qquad t_{m+1} = t_m(1-\vartheta_m),$$
where the random variables $\{\vartheta_m\}_{m=0}^{\infty}$ and the random vectors $\{Y_m\}_{m=0}^{\infty}$ are stochastically independent. The variables $\vartheta_m$ are distributed on the interval $(0,1)$ and have the distribution density $(\alpha/2)s^{\alpha/2-1}$. All components of the vectors $Y_m$ are stochastically independent and have a standard normal distribution.
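These formulas translate directly into code; the inverse-CDF sample $\vartheta_m = U^{2/\alpha}$ below follows from the density $(\alpha/2)s^{\alpha/2-1}$:

```python
import numpy as np

def simulate_chain(x, t, alpha, C, q, rng):
    """States (x_0, t_0), ..., (x_N, t_N) of the chain of Section 3;
    the absorption time N is geometric with parameter q."""
    xs, ts = [np.asarray(x, dtype=float)], [t]
    while rng.uniform() >= q:
        # vartheta has density (alpha/2) * s**(alpha/2 - 1) on (0, 1),
        # so vartheta = U**(2/alpha) by inversion of the CDF s**(alpha/2)
        vartheta = rng.uniform() ** (2.0 / alpha)
        Y = rng.standard_normal(len(xs[0]))
        xs.append(xs[-1] + np.sqrt(ts[-1] * vartheta / (2.0 * C)) * Y)
        ts.append(ts[-1] * (1.0 - vartheta))
    return xs, ts
```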

4. Conjugate Scheme

Now we apply the technique developed in [8] to Equation (10). Fix a number q ($0 < q < 1$) and generate a random variable N having a geometric distribution: $P(N = m) = q(1-q)^m$, $m = 0, 1, \ldots$. The random variables
$$\xi_1(x,t) = \frac{K^N F(x,t)}{q(1-q)^N}, \qquad \xi_2(x,t) = \sum_{m=0}^{N}\frac{K^m F(x,t)}{(1-q)^m}$$
are unbiased estimators for $u(x,t)$. We execute m times the procedure of evaluation of the integral in (11) to determine the unbiased estimator for $K^m F(x,t)$. This procedure is similar to the procedure of evaluation of the integral (3.8) in [8]. Namely, let $S_1^0(x,t) = \{\omega\in\mathbb{R}^n \mid \omega^T A^{-1}(x,t)\,\omega = 1\}$ be an ellipsoid centered at zero, and let $\sigma_n = 2\pi^{n/2}/\Gamma(\frac{n}{2})$ be the area of the sphere of radius 1 in $\mathbb{R}^n$. The random vector $\Omega$ is distributed on $S_1^0(x,t)$ with the density
$$p(x,t,\omega) = \frac{1}{\sigma_n\,\sqrt{\det A(x,t)}\;|A^{-1}(x,t)\,\omega|}.$$
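One convenient way to sample $\Omega$ with this density — a sketch under our reading of $p(x,t,\omega)$ — is to push a uniformly distributed point $\theta$ of the unit sphere forward by the symmetric square root $A^{1/2}(x,t)$; then $\Omega^T A^{-1}\Omega = \theta^T\theta = 1$, so $\Omega$ lies on the ellipsoid:

```python
import numpy as np

def sample_Omega(A, rng):
    """Sample Omega on the ellipsoid {w : w^T A^{-1} w = 1} with density
    p(x, t, w): push a uniform point of the unit sphere forward by A^{1/2}."""
    n = A.shape[0]
    theta = rng.standard_normal(n)
    theta /= np.linalg.norm(theta)                 # uniform on the unit sphere
    w, V = np.linalg.eigh(A)                       # A = V diag(w) V^T
    sqrtA = (V * np.sqrt(w)) @ V.T                 # symmetric square root of A
    return sqrtA @ theta
```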
After the calculation of the kernel $K_1(x,y,t,\tau) = Mv_0(y,\tau)$ and the change of variables $y = x + r\Omega$, we have
$$Ku(x,t) = \int_0^t d\tau\int_0^{\infty} dr\, E\left\{\frac{n - \mathrm{Tr}\big(A(x+r\Omega,\tau)\,A^{-1}(x,t)\big)}{(t-\tau)\,\Gamma(\frac{n}{2})\,[4(t-\tau)]^{n/2}}\,\exp\left(-\frac{r^2}{4(t-\tau)}\right) r^{n-1}\,u(x+r\Omega,\tau)\right\}$$
$$+ \int_0^t d\tau\int_0^{\infty} dr\, E\left\{\frac{2r^2\big(\Omega^T A^{-1}(x,t)\,A(x+r\Omega,\tau)\,A^{-1}(x,t)\,\Omega - 1\big)}{4(t-\tau)^2\,\Gamma(\frac{n}{2})\,[4(t-\tau)]^{n/2}}\,\exp\left(-\frac{r^2}{4(t-\tau)}\right) r^{n-1}\,u(x+r\Omega,\tau)\right\}$$
$$- \int_0^t d\tau\int_0^{\infty} dr\, E\left\{\frac{r\, d^T(x+r\Omega,\tau)\,A^{-1}(x,t)\,\Omega}{(t-\tau)\,\Gamma(\frac{n}{2})\,[4(t-\tau)]^{n/2}}\,\exp\left(-\frac{r^2}{4(t-\tau)}\right) r^{n-1}\,u(x+r\Omega,\tau)\right\}$$
$$+ \int_0^t d\tau\int_0^{\infty} dr\, E\big\{d_0(x+r\Omega,\tau)\,u(x+r\Omega,\tau)\big\}\,\frac{2r^{n-1}}{\Gamma(\frac{n}{2})\,[4(t-\tau)]^{n/2}}\,\exp\left(-\frac{r^2}{4(t-\tau)}\right), \quad (23)$$
where $d^T$ denotes the transposed vector $d = (d_1, d_2, \ldots, d_n)$, $\mathrm{Tr}(A)$ denotes the trace of the matrix A, and E is the mathematical expectation with respect to the random vector $\Omega$.
All coefficients in Equation (2) belong to the Hölder class. Hence, we can simplify the expressions in (23):
$$n - \mathrm{Tr}\big(A(x+r\Omega,\tau)\,A^{-1}(x,t)\big) = \mathrm{Tr}\big(\big[A(x,t) - A(x+r\Omega,t)\big]A^{-1}(x,t)\big) + \mathrm{Tr}\big(\big[A(x+r\Omega,t) - A(x+r\Omega,\tau)\big]A^{-1}(x,t)\big) = \tilde g_1(x+r\Omega,x,t)\,r^{\alpha} + \tilde g_2(x+r\Omega,x,\tau,t)\,(t-\tau)^{\alpha/2}, \quad (24)$$
$$\Omega^T A^{-1}(x,t)\,A(x+r\Omega,\tau)\,A^{-1}(x,t)\,\Omega - 1 = \Omega^T A^{-1}(x,t)\big[A(x+r\Omega,t) - A(x,t)\big]A^{-1}(x,t)\,\Omega + \Omega^T A^{-1}(x,t)\big[A(x+r\Omega,\tau) - A(x+r\Omega,t)\big]A^{-1}(x,t)\,\Omega = \tilde h_1(x+r\Omega,x,t)\,r^{\alpha} + \tilde h_2(x+r\Omega,x,\tau,t)\,(t-\tau)^{\alpha/2}, \quad (25)$$
where $\tilde g_1, \tilde g_2, \tilde h_1, \tilde h_2$ are bounded functions.
Substituting these expressions into (23) and putting $s = r^2/(4(t-\tau))$, we obtain the following representation for $Ku(x,t)$:
$$Ku(x,t) = \int_0^t d\tau\,(t-\tau)^{\frac{\alpha}{2}-1}\int_0^{\infty} ds\,\frac{2^{\alpha}}{2\,\Gamma(\frac{n}{2})}\, s^{\frac{n+\alpha}{2}-1}e^{-s}\; E\Big\{\tilde g_1\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,x,\,t\big)\,u\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\Big\}$$
$$+ \int_0^t d\tau\,(t-\tau)^{\frac{\alpha}{2}-1}\int_0^{\infty} ds\,\frac{1}{2\,\Gamma(\frac{n}{2})}\, s^{\frac{n}{2}-1}e^{-s}\; E\Big\{\tilde g_2\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,x,\,\tau,\,t\big)\,u\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\Big\}$$
$$+ \int_0^t d\tau\,(t-\tau)^{\frac{\alpha}{2}-1}\int_0^{\infty} ds\,\frac{2^{\alpha}}{\Gamma(\frac{n}{2})}\, s^{\frac{n+2+\alpha}{2}-1}e^{-s}\; E\Big\{\tilde h_1\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,x,\,t\big)\,u\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\Big\}$$
$$+ \int_0^t d\tau\,(t-\tau)^{\frac{\alpha}{2}-1}\int_0^{\infty} ds\,\frac{1}{\Gamma(\frac{n}{2})}\, s^{\frac{n+2}{2}-1}e^{-s}\; E\Big\{\tilde h_2\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,x,\,\tau,\,t\big)\,u\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\Big\}$$
$$- \int_0^t d\tau\,(t-\tau)^{-\frac{1}{2}}\int_0^{\infty} ds\,\frac{1}{\Gamma(\frac{n}{2})}\, s^{\frac{n+1}{2}-1}e^{-s}\; E\Big\{d^T\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\,A^{-1}(x,t)\,\Omega\; u\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\Big\}$$
$$+ \int_0^t d\tau\int_0^{\infty} ds\,\frac{1}{\Gamma(\frac{n}{2})}\, s^{\frac{n}{2}-1}e^{-s}\; E\Big\{d_0\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\,u\big(x+2\sqrt{s(t-\tau)}\,\Omega,\,\tau\big)\Big\}. \quad (26)$$
The unbiased estimator $\tilde\eta(x,t)$ for $Ku(x,t)$ has the form:
$$\tilde\eta(x,t) = t^{\frac{\alpha}{2}}\,\frac{2^{\alpha}\,\Gamma(\frac{n+\alpha}{2})}{\alpha\,\Gamma(\frac{n}{2})}\;\tilde g_1\Big(x+2\sqrt{\gamma\big(\tfrac{n+\alpha}{2}\big)\,t\vartheta}\;\Omega,\,x,\,t\Big)\, u\Big(x+2\sqrt{\gamma\big(\tfrac{n+\alpha}{2}\big)\,t\vartheta}\;\Omega,\; t-t\vartheta\Big)$$
$$+ t^{\frac{\alpha}{2}}\,\frac{1}{\alpha}\;\tilde g_2\Big(x+2\sqrt{\gamma\big(\tfrac{n}{2}\big)\,t\vartheta}\;\Omega,\,x,\,t-t\vartheta,\,t\Big)\, u\Big(x+2\sqrt{\gamma\big(\tfrac{n}{2}\big)\,t\vartheta}\;\Omega,\; t-t\vartheta\Big)$$
$$+ t^{\frac{\alpha}{2}}\,\frac{2^{\alpha+1}\,\Gamma(\frac{n+2+\alpha}{2})}{\alpha\,\Gamma(\frac{n}{2})}\;\tilde h_1\Big(x+2\sqrt{\gamma\big(\tfrac{n+2+\alpha}{2}\big)\,t\vartheta}\;\Omega,\,x,\,t\Big)\, u\Big(x+2\sqrt{\gamma\big(\tfrac{n+2+\alpha}{2}\big)\,t\vartheta}\;\Omega,\; t-t\vartheta\Big)$$
$$+ t^{\frac{\alpha}{2}}\,\frac{n}{\alpha}\;\tilde h_2\Big(x+2\sqrt{\gamma\big(\tfrac{n+2}{2}\big)\,t\vartheta}\;\Omega,\,x,\,t-t\vartheta,\,t\Big)\, u\Big(x+2\sqrt{\gamma\big(\tfrac{n+2}{2}\big)\,t\vartheta}\;\Omega,\; t-t\vartheta\Big)$$
$$- t^{\frac{1}{2}}\,\frac{2\,\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})}\; d^T\Big(x+2\sqrt{\gamma\big(\tfrac{n+1}{2}\big)\,t\delta}\;\Omega,\; t-t\delta\Big)\,A^{-1}(x,t)\,\Omega\;\, u\Big(x+2\sqrt{\gamma\big(\tfrac{n+1}{2}\big)\,t\delta}\;\Omega,\; t-t\delta\Big)$$
$$+ t\, d_0\Big(x+2\sqrt{\gamma\big(\tfrac{n}{2}\big)\,t\theta}\;\Omega,\; t-t\theta\Big)\, u\Big(x+2\sqrt{\gamma\big(\tfrac{n}{2}\big)\,t\theta}\;\Omega,\; t-t\theta\Big), \quad (27)$$
where the random variables $\vartheta$, $\delta$, $\theta$ are distributed on the interval $[0,1]$. The variables $\vartheta$ and $\delta$ have densities $(\alpha/2)s^{\alpha/2-1}$ and $1/(2\sqrt{s})$, respectively, and $\theta$ is distributed uniformly. The variable $\gamma(m)$ has a gamma distribution with a density $s^{m-1}e^{-s}/\Gamma(m)$.
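The scalar variables can be sampled by inverting their distribution functions, and $\gamma(m)$ directly; a sketch (numpy assumed):

```python
import numpy as np

def sample_vartheta(alpha, rng):
    """Density (alpha/2) * s**(alpha/2 - 1) on (0, 1): invert the CDF s**(alpha/2)."""
    return rng.uniform() ** (2.0 / alpha)

def sample_delta(rng):
    """Density 1/(2*sqrt(s)) on (0, 1): invert the CDF sqrt(s)."""
    return rng.uniform() ** 2

def sample_gamma(m, rng):
    """Density s**(m-1) * exp(-s) / Gamma(m), i.e., the gamma distribution."""
    return rng.gamma(m)
```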
Choosing one of the six summands in (27) with probability 1/6 and multiplying it by 6, we obtain the final unbiased estimator $\tilde\zeta(x,t)$ for $Ku(x,t)$.
The unbiased estimators $\psi_m$ for $K^m F(x,t)$ can be constructed on trajectories of the inhomogeneous Markov chain $\{(x_k, t_k)\}_{k=0}^{\infty}$ with initial point $(x,t)$. Consider stochastically independent random elements $\{\vartheta_k\}_{k=0}^{\infty}$, $\{\delta_k\}_{k=0}^{\infty}$, $\{\theta_k\}_{k=0}^{\infty}$, $\{\Omega_k\}_{k=0}^{\infty}$. The initial value of the variable $\psi_m$ is 1. At step k we consider $\tilde\zeta(x_{k-1}, t_{k-1})$ and multiply the variable $\psi_m$ by the corresponding weight factor. The arguments of the function u determine the next state of the Markov chain. For example, if the first summand of the estimator (27) was chosen at step k, then we multiply the variable $\psi_m$ by
$$6\,t_{k-1}^{\alpha/2}\,\frac{2^{\alpha}\,\Gamma(\frac{n+\alpha}{2})}{\alpha\,\Gamma(\frac{n}{2})}\;\tilde g_1\Big(x_{k-1} + 2\sqrt{\gamma_{k-1}\big(\tfrac{n+\alpha}{2}\big)\,t_{k-1}\vartheta_{k-1}}\;\Omega_{k-1},\; x_{k-1},\; t_{k-1}\Big)$$
and define the next point $(x_k, t_k)$ by the formulas:
$$x_k = x_{k-1} + 2\sqrt{\gamma_{k-1}\big(\tfrac{n+\alpha}{2}\big)\,t_{k-1}\vartheta_{k-1}}\;\Omega_{k-1}, \qquad t_k = t_{k-1} - t_{k-1}\vartheta_{k-1}.$$
After m steps, we multiply the variable $\psi_m$ by an estimator for $F(x_m, t_m)$, which is equal to
$$t_m\, f\Big(x_m + 2\sqrt{\gamma_m\big(\tfrac{n}{2}\big)\,t_m\theta_m}\;\Omega_m,\; t_m - t_m\theta_m\Big) + \varphi\Big(x_m + 2\sqrt{\gamma_m\big(\tfrac{n}{2}\big)\,t_m}\;\Omega_m\Big).$$
So, the random variables
$$\tilde\xi_1(x,t) = \frac{\psi_N}{q(1-q)^N}, \qquad \tilde\xi_2(x,t) = \sum_{m=0}^{N}\frac{\psi_m}{(1-q)^m}$$
are unbiased estimators for $u(x,t)$. Repeating the arguments of the proof of Theorem 1 in [8], it is easy to prove that the constructed estimators have finite variances.
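A compact sketch of one realization of $\tilde\xi_1$; the callables branch_weight and F_hat are hypothetical: branch_weight must choose one of the six summands of (27) with probability 1/6 and return the corresponding factor (multiplied by 6) together with the next state of the chain:

```python
import numpy as np

def xi1_sample(x, t, q, branch_weight, F_hat, rng):
    """One realization of xi~_1 = psi_N / (q * (1 - q)**N).
    Hypothetical helpers (not from the paper):
      branch_weight(x, t, rng) -- picks one of the six summands of (27) with
        probability 1/6 and returns (6 * weight_factor, x_next, t_next),
        using samples of vartheta, delta, theta, gamma(m) and Omega;
      F_hat(x, t, rng) -- unbiased estimator of F(x, t)."""
    N = rng.geometric(q) - 1                # P(N = m) = q * (1 - q)**m, m >= 0
    psi, xk, tk = 1.0, np.asarray(x, dtype=float), t
    for _ in range(N):
        w, xk, tk = branch_weight(xk, tk, rng)
        psi *= w
    psi *= F_hat(xk, tk, rng)
    return psi / (q * (1.0 - q) ** N)
```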
Remark 1.
The unbiased estimators constructed above and the algorithm for calculating them can be used in the Monte Carlo method to find $u(x,t)$. This computational algorithm is more complex than the algorithm of Section 3. On the other hand, it does not require an estimate of the spectrum of the matrix $A(x,t)$.

Funding

The research is supported by the Russian Foundation for Basic Research, project No. 17-01-00267.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ladyzhenskaya, O.A.; Solonnikov, V.A.; Uraltseva, N.N. Linear and Quasilinear Equations of Parabolic Type; Nauka: Moscow, Russia, 1967. (In Russian) [Google Scholar]
  2. Wagner, W. Unbiased Monte Carlo estimators for functionals of weak solutions of stochastic differential equations. Stoch. Stoch. Rep. 1989, 28, 1–20. [Google Scholar] [CrossRef]
  3. Ermakov, S.M.; Mikhailov, G.A. Statistical Modeling; Nauka: Moscow, Russia, 1982. (In Russian) [Google Scholar]
  4. Wagner, W. Unbiased Monte Carlo evaluation of certain functional integrals. J. Comput. Phys. 1987, 71, 21–33. [Google Scholar] [CrossRef]
  5. Wagner, W. Unbiased Multi-step Estimators for the Monte Carlo Evaluation of Certain Functional Integrals. J. Comput. Phys. 1988, 79, 336–352. [Google Scholar] [CrossRef]
  6. Wagner, W. Monte Carlo evaluation of functionals of solutions of stochastic differential equations. Variance reduction and numerical examples. Stoch. Anal. Appl. 1988, 6, 447–468. [Google Scholar] [CrossRef]
  7. Sipin, A.S. Statistical Algorithms for Solving the Cauchy Problem for Second-Order Parabolic Equations. Vestn. Peterburg Univ. Math. 2011, 45, 65–74. [Google Scholar] [CrossRef]
  8. Sipin, A.S. Statistical Algorithms for Solving the Cauchy Problem for Second-Order Parabolic Equations: The “Dual” Scheme. Vestn. Peterburg Univ. Math. 2012, 45, 57–67. [Google Scholar] [CrossRef]
  9. Sabelfeld, K.K. Monte Carlo Methods in Boundary Value Problems; Nauka: Novosibirsk, Russia, 1989. (In Russian) [Google Scholar]
  10. Simonov, N.A. Stochastic iterative methods for solving equations of parabolic type. Sib. Mat. Zhurnal 1997, 38, 1146–1162. [Google Scholar]
  11. Heinrich, S. Multilevel Monte Carlo Methods; Volume 2179 of Lecture Notes in Computer Science; Springer: Berlin, Germany, 2001; pp. 58–67. [Google Scholar]
  12. Giles, M.B. Multi-Level Monte Carlo Path Simulation. Oper. Res. 2008, 56, 607–617. [Google Scholar] [CrossRef]
