Article

A Multilevel Monte Carlo Approach for a Stochastic Optimal Control Problem Based on the Gradient Projection Method

South Huaxi Avenue, School of Mathematics and Statistics, Guizhou University, No. 2708, Guiyang 550025, China
*
Author to whom correspondence should be addressed.
AppliedMath 2023, 3(1), 98-116; https://doi.org/10.3390/appliedmath3010008
Submission received: 10 January 2023 / Revised: 1 February 2023 / Accepted: 2 February 2023 / Published: 13 February 2023

Abstract

A multilevel Monte Carlo (MLMC) method is applied to simulate a stochastic optimal control problem based on the gradient projection method. The numerical simulation of the stochastic optimal control problem involves the approximation of an expected value, which we address with the MLMC method. The computational cost of the MLMC method and the convergence analysis of the MLMC gradient projection algorithm are presented. Two numerical examples are carried out to verify the effectiveness of our method.

1. Introduction

The stochastic optimal control problem has been widely used in engineering, finance and economics. Many scholars have studied stochastic optimal control problems for different controlled systems. Because it is difficult to find analytical solutions for these problems, numerical approximation is a good choice. As with numerical methods for deterministic optimal control problems (see, e.g., [1,2,3,4,5,6]), numerical methods for stochastic optimal control problems have also been extensively studied. For optimal control problems governed by PDEs with random coefficients, the authors of [7,8,9] studied numerical approximations using different methods for different problems. For SDE optimal control problems, the stochastic maximum principle in [10,11], the Bellman dynamic programming principle in [12] and the martingale method in [13] have been used to study numerical approximation in recent years.
The gradient projection method is a common numerical optimization method, used for instance in [14,15,16,17]. In the numerical simulation of stochastic optimal control problems, whether gradient projection or another optimization method is used, the approximation of an expected value is always involved. For a class of stochastic optimal control problems, the authors of [16] combined the gradient projection method with conditional expectation (used to solve forward and backward stochastic differential equations) to solve the stochastic optimal control problem. This method is difficult because it involves conditional expectation and numerical methods for solving forward and backward stochastic differential equations. In [15], the expectation was calculated using the Monte Carlo (MC) method, which is easy to understand and implement. However, the convergence of the MC method is slow: to achieve a relatively small allowable error, a large sample size is needed. The MLMC method is a commonly used remedy for this slow convergence rate; for the MLMC method and its applications, see [7,18,19,20,21,22,23,24]. It is worth mentioning that in [18], an MLMC method is proposed for the robust optimization of PDEs with random coefficients. The MLMC method can effectively overcome adverse effects when the iterative solution is near the exact solution. However, the proof of convergence for the gradient algorithm is not given there.
In this work, we apply the gradient projection method combined with the MLMC method to solve a stochastic optimal control problem. An expected value needs to be computed in the simulation of the stochastic optimal control problem. To reduce the influence of statistical and discretization errors brought about by the calculation of the gradient, we use the MLMC method to estimate the gradient in each iteration. In the process of iteration, the mean square error (MSE) is dynamically updated. We prove the convergence of the gradient projection method combined with MLMC, and also extend the MLMC theory so that it applies to our stochastic optimal control problem.
The rest of this paper is organized as follows. We describe the stochastic optimal control problem in Section 2. In Section 3, we review the gradient projection method and MLMC theory. In Section 4, we extend the existing MLMC theory and apply it to the gradient projection method for the stochastic optimal control problem. The convergence analysis is also presented in this section. Some numerical experiments are carried out to verify the validity of our method in Section 5. Main contributions and future work are presented in Section 6.

2. Stochastic Optimal Control Problem

Let $(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},P)$ be a complete probability space and $L^2_{\mathcal F}([0,T];\mathbb R)$ be the space of real-valued square-integrable $\mathcal F_t$-adapted processes such that $\|y\|_{L^2(\Omega,L^2[0,T])}<\infty$, where $\{\mathcal F_t\}_{t\ge0}$ is the natural filtration generated by a one-dimensional standard Brownian motion $\{W_t\}_{t\ge0}$, and
$$\|y\|_{L^2(\Omega,L^2[0,T])}=\Big(\int_\Omega\int_0^T|y|^2\,dt\,dP(\omega)\Big)^{1/2}.$$
The objective function of the optimal control problem that we consider is
$$\min_{u\in U_{ad}}J(y,u)=\int_0^TE[h(y)]\,dt+\int_0^Tj(u)\,dt,$$
where $h(\cdot)$, $j(\cdot)$ are continuously differentiable functions, $u\in U_{ad}$ is a deterministic control, $U_{ad}$ is a closed convex set in the control space $L^2(0,T)$, and $E(\cdot)$ stands for expectation, defined by $E(h)=\int_\Omega h(\omega)\,dP(\omega)$. The stochastic process $y(u)\in L^2_{\mathcal F}([0,T];\mathbb R)$ is generated by the following stochastic differential equation:
$$dy=f(t,y,u)\,dt+g(t,y)\,dW_t,\qquad y(0)=y_0.$$
Assume that $f$ is continuously differentiable with respect to $t$, $y$, $u$, and $g$ is continuously differentiable with respect to $t$, $y$. Under these assumptions, problem (3) admits a unique solution $y(\cdot)\in L^2_{\mathcal F}([0,T];\mathbb R)$ for each $(y_0,u(\cdot))\in\mathbb R\times U_{ad}$ (see [25]). Here, $y(\cdot)$ is a function of $u(\cdot)$. The optimal control problem (2)–(3) also has a unique solution, denoted by $u^*$.
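For concreteness, a single path of the state equation (3) can be simulated with the Euler–Maruyama scheme. The sketch below is illustrative only: the coefficients $f=uy$ and $g=0.2y$ and all parameter values are our own assumptions for the demo, not taken from the paper.

```python
import numpy as np

def euler_maruyama(f, g, y0, u, T, N, rng):
    """One Euler-Maruyama path of dy = f(t,y,u) dt + g(t,y) dW_t on [0,T]."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.empty(N + 1)
    y[0] = y0
    dW = rng.normal(0.0, np.sqrt(h), N)   # Brownian increments, Var = h
    for n in range(N):
        y[n + 1] = y[n] + f(t[n], y[n], u[n]) * h + g(t[n], y[n]) * dW[n]
    return t, y

rng = np.random.default_rng(0)
N = 64
u = np.full(N, 0.5)                       # piecewise-constant control on the grid
t, y = euler_maruyama(lambda s, x, v: v * x,   # illustrative drift f = u*y
                      lambda s, x: 0.2 * x,    # illustrative diffusion g = 0.2*y
                      1.0, u, 1.0, N, rng)
```

In an actual computation many such paths would be sampled to approximate the expectations appearing in (2).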
In the following, C denotes different positive constants at different occurrences, and is independent of discrete parameters, sample parameters and iteration times.

3. Review of Gradient Projection Method and MLMC Method

The gradient projection method is a common numerical method for optimization problems. Applied to stochastic optimal control problems, it usually involves expectations, and MLMC is an important method for computing expectations. This section introduces the gradient projection and MLMC methods.

3.1. Gradient Projection Method

Let $J(u)=J(y(u),u)$, where $y(u)$ is the solution of (3). We assume that $J(u)$ is a convex function, $U=L^2([0,T];\mathbb R)$ is a Hilbert space and $K$ is a closed convex subset of $U$. Let $b(\cdot,\cdot)$ be a symmetric and positive definite bilinear form and define $b:U\to U$ by $(bu,v)=b(u,v)$. Combining the first-order optimality condition with the projection operator $P_K:U\to K$, we obtain (see [15,16])
$$u^*=P_K\big(u^*-\rho\,b^{-1}J'(u^*)\big),$$
where $J'(u)$ is the Gâteaux derivative of $J(u)$ and $\rho$ is a positive constant.
We introduce a uniform partition of the time interval: $0=t_0^N<t_1^N<\cdots<t_N^N=T$, $t_{n+1}^N-t_n^N=T/N=h$. Let $I_n^N=(t_{n-1}^N,t_n^N]$. The piecewise constant space $U^N$ is defined by
$$U^N=\Big\{u\in U:\ u=\sum_{n=0}^N\alpha_n\chi_{I_n^N}\ \text{a.e.},\ \alpha_n\in\mathbb R\Big\},$$
where $\chi_{I_n^N}$ is the characteristic function of $I_n^N$. Then $u^*$ can be approximated by
$$u^{*,N}=P_{K^N}\big(u^{*,N}-\rho\,b^{-1}J'(u^{*,N})\big),$$
where $K^N=K\cap U^N$. Based on (4)–(6), we obtain the following iterative scheme for the numerical approximation of (2)–(3):
$$b(u_{i+\frac12}^N,v)=b(u_i^N,v)-\rho_i\big(J'_N(u_i^N),v\big),\quad\forall v\in K^N,\qquad u_{i+1}^N=P^b_{K^N}(u_{i+\frac12}^N),$$
where $\rho_i$ is the iterative step size and $J'_N$ is a numerical approximation of $J'(\cdot)$. The error between $J'(\cdot)$ and $J'_N(\cdot)$ is represented by
$$\epsilon_N=\sup_i\|J'(u_i^N)-J'_N(u_i^N)\|.$$
For the iterative scheme (7), the following convergence results hold (see [16]).
Lemma 1.
Assume that $J'(\cdot)$ is Lipschitz continuous and uniformly monotone in neighborhoods of $u^*$ and $u^{*,N}$, i.e., there exist positive constants $c$ and $C$ such that
$$\|J'(u^*)-J'(v)\|\le C\|u^*-v\|\quad\forall v\in K,$$
$$\big(J'(u^*)-J'(v),u^*-v\big)\ge c\|u^*-v\|^2\quad\forall v\in K,$$
$$\|J'(u^{*,N})-J'(v)\|\le C\|u^{*,N}-v\|\quad\forall v\in K^N,$$
$$\big(J'(u^{*,N})-J'(v),u^{*,N}-v\big)\ge c\|u^{*,N}-v\|^2\quad\forall v\in K^N.$$
Suppose that
$$\epsilon_N=\sup_i\|J'(u_i^N)-J'_N(u_i^N)\|\to0,\quad N\to\infty,$$
and $\rho_i$ can be chosen such that $0<1-2c\rho_i+(1+2C)\rho_i^2\le\delta^2$ for some constant $0<\delta<1$. Then the iteration scheme (7) is convergent, i.e.,
$$\|u^*-u_i^N\|\to0,\quad(i,N\to\infty).$$
In the iterative scheme (7), the gradient of the objective function (2) needs to be calculated. Using the stochastic maximum principle, the gradient can be computed conveniently by introducing the adjoint equation:
$$-dp=\big[h'(y)+p\,f_y(t,y,u)-p\,g_y(t,y)^2\big]\,dt+p\,g_y(t,y)\,dW_t,\qquad p(T)=0.$$
The detailed derivation can be found in [26,27]. Thus the derivative of $J(u)$ can be written as
$$J'(u)v=\int_0^T\big(E[p\,f_u(t,y,u)]+j'(u)\big)v\,dt,\quad\forall v\in U.$$
According to the Riesz representation theorem, we can get
$$J'(u)=E[p\,f_u(t,y,u)]+j'(u).$$

3.2. MLMC Method

In [19,20,21], the quantities of interest in the MLMC method are scalar-valued. The authors of [18,21] extend the MLMC theory to function-valued quantities. In the gradient projection algorithm, the quantities of interest are unknown functions. In this section, we first briefly review the theory of MLMC, and then discuss in detail how to estimate the expectation for the gradient projection method.

3.2.1. Scalar-Valued Quantities of Output

The quantity we are interested in is $A:\Omega\to\mathbb R$. In general, an exact sample of $A$ is not available; we can only obtain an approximate sample $A_h(\omega)$. It is assumed that $A_h$ converges weakly to $A$ with order $\alpha$, i.e.,
$$|E[A_h-A]|\le Ch^{\alpha},$$
and that the computational cost satisfies
$$C(A_h(\omega))\le Ch^{-\gamma},$$
where $\gamma$ is a positive constant. Usually, $\alpha$ and $\gamma$ depend on the algorithm itself.
For the MLMC method (see, e.g., [19,21]), we consider multiple approximations $A_{h_0},A_{h_1},\dots,A_{h_L}$ of $A$, where $h_l=M^{-l}T$ $(l=0,1,\dots,L)$ is the time step size at level $l$. Here, $M$ is a positive integer (in the numerical experiments below, $M=2$). The expectation $E[A_{h_L}]$ can be written as
$$E[A_{h_L}]=E[A_{h_0}]+\sum_{l=1}^LE[A_{h_l}-A_{h_{l-1}}]=\sum_{l=0}^LE[Y_l],$$
where $Y_l=A_{h_l}-A_{h_{l-1}}$ and $A_{h_{-1}}=0$. Each $E[Y_l]$ is estimated by the standard MC method. If the sample number is $M_l$, we obtain
$$\hat Y_l=M_l^{-1}\sum_{i=1}^{M_l}Y_l(\omega_i)=\frac{1}{M_l}\sum_{i=1}^{M_l}\big(A_{h_l}(\omega_i)-A_{h_{l-1}}(\omega_i)\big).$$
In order to maintain a high correlation between the samples on the fine and coarse meshes, the samples $A_{h_l}(\omega_i)$ and $A_{h_{l-1}}(\omega_i)$ must be generated along the same Brownian path. Combining (19) and (20), we obtain the multilevel estimator $Y=\sum_{l=0}^L\hat Y_l$. Because the expectation operator is linear, and each expectation is estimated independently, we have
$$E[\hat Y_l]=E[A_{h_l}-A_{h_{l-1}}],\qquad \mathrm{Var}[Y]=\sum_{l=0}^L\frac{1}{M_l}\mathrm{Var}[Y_l].$$
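The coupling requirement just described, namely driving the fine and coarse Euler paths with the same Brownian increments, can be sketched as follows. The geometric-Brownian-motion state equation and all parameter values are illustrative assumptions for the demo, not taken from the paper.

```python
import numpy as np

def coupled_level_sample(u, sigma, y0, T, l, M, rng):
    """One coupled sample (A_{h_l}, A_{h_{l-1}}) of the terminal state of
    dy = u*y dt + sigma*y dW, with both Euler paths driven by the SAME
    Brownian path (coarse increments are sums of fine ones)."""
    n_fine = M ** l
    h_fine = T / n_fine
    dW_fine = rng.normal(0.0, np.sqrt(h_fine), n_fine)
    y_fine = y0
    for n in range(n_fine):
        y_fine += u * y_fine * h_fine + sigma * y_fine * dW_fine[n]
    if l == 0:
        return y_fine, 0.0            # convention A_{h_{-1}} = 0
    dW_coarse = dW_fine.reshape(-1, M).sum(axis=1)   # same Brownian path, coarser grid
    h_coarse = M * h_fine
    y_coarse = y0
    for n in range(n_fine // M):
        y_coarse += u * y_coarse * h_coarse + sigma * y_coarse * dW_coarse[n]
    return y_fine, y_coarse

rng = np.random.default_rng(1)
fine, coarse = coupled_level_sample(0.5, 0.2, 1.0, 1.0, 3, 2, rng)
```

Because the two endpoints share one Brownian path, their difference $Y_l$ has a much smaller variance than either endpoint alone, which is what makes the level sums cheap to estimate.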
Moreover, Y is known as an approximation of E [ A ] , and its mean square error (MSE) can be described as:
$$E\big[(Y-E[A])^2\big]=E\big[(Y-E[Y]+E[Y]-E[A])^2\big]=\sum_{l=0}^L\frac{1}{M_l}\mathrm{Var}[Y_l]+\big(E[A_{h_L}-A]\big)^2,$$
where the first term is the statistical error and the second term is the algorithm error. To make the MSE less than ϵ 2 , we may let both terms be less than ϵ 2 / 2 . Denote the cost of taking a sample of Y l by C l , and the sample size by M l . The total cost can be represented as
$$C(Y)=\sum_{l=0}^LM_lC_l.$$
How should we choose $M_l$ such that the variance of the multilevel estimator is less than $\epsilon^2$? Following [21], the optimal sample number can be selected as
$$M_l=\Big\lceil 2\epsilon^{-2}\sqrt{\mathrm{Var}[Y_l]\,C_l^{-1}}\,\sum_{i=0}^L\sqrt{\mathrm{Var}[Y_i]\,C_i}\Big\rceil.$$
Here the symbol $\lceil\cdot\rceil$ denotes rounding up.
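The optimal sample-size formula above translates directly into a small helper. The per-level variances and costs below are made-up placeholders, not measured values; in practice they would be estimated from pilot samples.

```python
import math

def optimal_samples(var, cost, eps):
    """Per-level sample sizes minimizing total cost subject to a statistical
    error budget of eps^2/2:
    M_l = ceil(2 * eps^-2 * sqrt(V_l / C_l) * sum_i sqrt(V_i * C_i))."""
    s = sum(math.sqrt(v * c) for v, c in zip(var, cost))
    return [math.ceil(2.0 * eps ** -2 * math.sqrt(v / c) * s)
            for v, c in zip(var, cost)]

# Level variances typically decay while per-sample costs grow geometrically.
M = optimal_samples(var=[1.0, 0.25, 0.0625], cost=[1.0, 2.0, 4.0], eps=0.1)
```

With this choice the statistical error term $\sum_l \mathrm{Var}[Y_l]/M_l$ stays below $\epsilon^2/2$, as the assertions below check.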
The complexity theorem for MLMC with scalar-valued quantities of interest is given in [19,20,21]. Usually, $l$ cannot start from 0, because when the grid is too coarse the level-to-level correlation of the equation (SDE, SPDE) is lost. This matters especially when the quantity of interest is a function, since an interpolation operator is then needed and interpolation on a very coarse grid can cause large errors.

3.2.2. Function Valued Quantities of Output

When the quantity of interest in Equation (16) is a function, a natural question is how to apply the classical MLMC theory.
When the samples are vectors or matrices, the discrete time step size differs between levels, so $A_{h_l}(\omega)$ and $A_{h_{l-1}}(\omega)$ are not compatible and cannot be subtracted directly. The most natural remedy is interpolation and compression. Reference [28] introduced an abstract operator $I_{l_1}^{l_2}:\mathbb R^{M^{l_1}+1}\to\mathbb R^{M^{l_2}+1}$ for the one-dimensional case. If $l_1<l_2$, the operator is a bounded linear prolongation operator; if $l_1>l_2$, it is a bounded linear compression operator; if $l_1=l_2$, it is the identity. We also require $I_{l_1}^{l_2}=I_{l_3}^{l_2}I_{l_1}^{l_3}$. In practice, $I_{l_1}^{l_2}$ is often a linear interpolation operator. However, to be consistent with the control space (5), we define $I_l:\mathbb R^{M^l+1}\to U^{M^l}$. Redefine
$$E[A_{h_L}]=E[I_0A_{h_0}]+\sum_{l=1}^LE[I_lA_{h_l}-I_{l-1}A_{h_{l-1}}]=\sum_{l=0}^LE[\bar Y_l],$$
$$\bar Y_l=\frac{1}{M_l}\sum_{i=1}^{M_l}\big(I_lA_{h_l}(\omega_i)-I_{l-1}A_{h_{l-1}}(\omega_i)\big),$$
and
$$Y=\sum_{l=0}^L\bar Y_l,$$
where $I_{-1}A_{h_{-1}}=0$ and $Y\in U^{M^L}$. The extension of the MLMC theory to vectors or functions can be found in [18,21]. We require the MSE of the estimator to be less than $\epsilon^2$, i.e.,
$$E\big[(Y-E[A])^2\big]\le\epsilon^2.$$
The optimal sample number is determined by the maximum variance, and can thus be revised as
$$M_l=\Big\lceil 2\epsilon^{-2}\sqrt{\|\mathrm{Var}[\bar Y_l]\|\,C_l^{-1}}\,\sum_{i=0}^L\sqrt{\|\mathrm{Var}[\bar Y_i]\|\,C_i}\Big\rceil.$$
Next we consider the termination condition of the MLMC method, namely when the bias term is less than $\epsilon^2/2$ (see, e.g., [20]). We write $a\simeq b$ if and only if $a\le Cb$ and $b\le Ca$. Choosing $M=4$, we may assume that
$$\|E[I_lA_{h_l}-A]\|\simeq 4^{-\alpha l},\qquad \|E[I_lA_{h_l}-I_{l-1}A_{h_{l-1}}]\|\simeq 4^{-\alpha l},$$
where $I_l$ is a bounded linear prolongation operator, such as linear interpolation. Using the reverse triangle inequality, we obtain
$$\|E[I_lA_{h_l}-I_{l-1}A_{h_{l-1}}]\|=\|E[I_lA_{h_l}-A+A-I_{l-1}A_{h_{l-1}}]\|\ge\|E[A-I_{l-1}A_{h_{l-1}}]\|-\|E[I_lA_{h_l}-A]\|.$$
From (30)–(31), we can derive
$$\|E[A-I_{l-1}A_{h_{l-1}}]\|\simeq 4^{-\alpha(l-1)}=C\,4^{-\alpha l}\,4^{\alpha}\ge 4^{\alpha}\|E[I_lA_{h_l}-A]\|.$$
Furthermore, we get
$$\|E[I_lA_{h_l}-A]\|\le C(4^{\alpha}-1)^{-1}\|E[I_lA_{h_l}-I_{l-1}A_{h_{l-1}}]\|.$$
Combining Equation (32) with (33), we can derive the following error estimate,
$$\|E[I_lA_{h_l}-I_{l-1}A_{h_{l-1}}]\|\le C\frac{1}{\sqrt2}(4^{\alpha}-1)\epsilon.$$
To ensure that the bias term is less than $\epsilon^2/2$, we use the following criterion:
$$\max\Big\{\tfrac14\|\bar Y_{l-1}\|,\ \|\bar Y_l\|\Big\}\le C\frac{1}{\sqrt2}(4^{\alpha}-1)\epsilon.$$
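A minimal sketch of this stopping test, assuming the level norms $\|\bar Y_l\|$ have already been computed; the function name, the choice $C=1$ and the input layout are our own assumptions for illustration.

```python
def bias_converged(level_norms, alpha, eps):
    """Remaining-bias test in the spirit of the criterion above (with M = 4):
    stop adding levels once max(||Ybar_{L-1}||/4, ||Ybar_L||)
    falls below (4^alpha - 1) * eps / sqrt(2)."""
    if len(level_norms) < 2:
        return False                      # need at least two levels to extrapolate
    tol = (4.0 ** alpha - 1.0) * eps / 2.0 ** 0.5
    return max(level_norms[-2] / 4.0, level_norms[-1]) <= tol
```

Using both of the last two levels guards against a single $\bar Y_L$ being small by chance.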
Based on the above analysis, the complexity theorem of MLMC is given below.
Theorem 1.
Suppose that there are positive constants $\alpha,\beta,\gamma$ with $\alpha\ge\frac12\min\{\beta,\gamma\}$, and
$$\|E[I_lA_{h_l}-A]\|\le C4^{-\alpha l},\qquad \|\mathrm{Var}[\bar Y_l]\|\le C4^{-\beta l},\qquad C_l\le C4^{\gamma l}.$$
Then there exist a positive integer $L$ and a sequence $\{M_l\}_{l=0}^L$ such that, for any $\epsilon<e^{-1}$,
$$\big\|E[(Y-E[A])^2]\big\|\le\epsilon^2,$$
and the cost satisfies
$$C_{mlmc}\le\begin{cases}C\epsilon^{-2}, & \text{if }\beta>\gamma,\\[2pt] C\epsilon^{-2}(\log\epsilon)^2, & \text{if }\beta=\gamma,\\[2pt] C\epsilon^{-2-(\gamma-\beta)/\alpha}, & \text{if }\beta<\gamma.\end{cases}$$
The proof of Theorem 1 is similar to that of Theorem 3.1 in [20]. The norm can be replaced by $\|\cdot\|_{L^p(D)}$, $1\le p\le\infty$.
Next, we introduce a combination of the MLMC method with the gradient projection method.

4. MLMC Method Based on Gradient Projection

For the expectation $E[p\,f_u(t,y,u)]$ in formula (16), a more general form of expectation may occur in the numerical approximation of stochastic optimal control problems. We consider the expectation
$$E[A]=E[f(y)g(p)],$$
where $y$ and $p$ are the solutions of the state equation and the adjoint equation, respectively. We assume that $f$ and $g$ have continuous derivatives. The numerical approximation is denoted by $A_{h_L}=f(y_{h_L})g(p_{h_L})$. Before the theoretical analysis, we make the following assumptions:
Hypothesis 1 (H1).
Assume that the error estimate of the state $y$ satisfies
$$\|I_Ly_{h_L}-y\|^2_{L^4(\Omega,L^4[0,T])}\le Ch_L^{\beta_y}.$$
Hypothesis 2 (H2).
Assume that the error estimate of the adjoint state $p$ satisfies
$$\|I_Lp_{h_L}-p\|^2_{L^4(\Omega,L^4[0,T])}\le Ch_L^{\beta_p}.$$
Hypothesis 3 (H3).
Assume that the cost of computing an approximate sample of $A$ satisfies
$$C_L\le C\big(h_L^{-\gamma_1}+h_L^{-\gamma_2}\big),$$
where the first and second terms are the costs of sampling $y$ and $p$, respectively.

4.1. Classic Monte Carlo Method

Let $A\in L^2(\Omega,L^2[0,T])$. $E[A]$ is estimated by the sample average, i.e.,
$$E_M[A]=\frac{1}{M}\sum_{i=1}^MA(\omega_i),$$
where $A(\omega_i)\in L^2[0,T]$. For a fixed $M$, we have $E_M[A]\in L^2[0,T]$. Because exact samples are taken here, there is only statistical error (see, e.g., [7,22]), which is given by the following lemma.
Lemma 2.
Assume that $A\in L^2(\Omega,L^2[0,T])$. Then, for any $M\in\mathbb N$, we have
$$\|E[A]-E_M[A]\|_{L^2(\Omega,L^2[0,T])}\le M^{-\frac12}\|A\|_{L^2(\Omega,L^2[0,T])}.$$
The approximation of $E[A]$ can be defined by
$$E_M[A_{h_L}]=\frac{1}{M}\sum_{i=1}^MA_{h_L}(\omega_i),$$
where $A_{h_L}(\omega_i)$, $i=1,\dots,M$, are independent and identically distributed. From formula (45), there are clearly two sources of error: one is the statistical error, and the other is the discretization error. The detailed error estimate is as follows.
Theorem 2.
Let assumptions H1–H2 hold and let $f$, $g$ be Lipschitz continuous. Then we have
$$\|E[A]-E_M[I_LA_{h_L}]\|_{L^2(\Omega,L^2[0,T])}\le C\big(M^{-\frac12}\|I_LA_{h_L}\|_{L^2(\Omega,L^2[0,T])}+h_L^{\beta_y}+h_L^{\beta_p}\big),$$
where $I_lA_{h_l}=f(I_ly_{h_l})g(I_lp_{h_l})$.
Proof. 
Firstly, applying Lemma 2 and the triangle inequality, we obtain
$$\begin{aligned}\|E[A]-E_M[I_LA_{h_L}]\|_{L^2(\Omega,L^2[0,T])}&\le\|E[f(y)g(p)]-E[I_LA_{h_L}]\|_{L^2(\Omega,L^2[0,T])}+\|E[I_LA_{h_L}]-E_M[I_LA_{h_L}]\|_{L^2(\Omega,L^2[0,T])}\\&\le CM^{-\frac12}\|I_LA_{h_L}\|_{L^2(\Omega,L^2[0,T])}+\|E[f(y)g(p)]-E[f(I_Ly_{h_L})g(p)]\|_{L^2[0,T]}\\&\quad+\|E[f(I_Ly_{h_L})g(I_Lp_{h_L})]-E[f(I_Ly_{h_L})g(p)]\|_{L^2[0,T]}.\end{aligned}$$
Secondly, using the Cauchy–Schwarz inequality and Lipschitz continuity, we have
$$\|E[f(y)g(p)]-E[f(I_Ly_{h_L})g(p)]\|_{L^2[0,T]}+\|E[f(I_Ly_{h_L})g(I_Lp_{h_L})]-E[f(I_Ly_{h_L})g(p)]\|_{L^2[0,T]}\le C\big(h_L^{\beta_y}+h_L^{\beta_p}\big).$$
Combining (47) with (48), we can derive the desired result.    □
From Theorem 2, we see that the number of statistical samples is affected by the step size. Based on this, the complexity theorem of MC is given below:
Theorem 3.
Let assumptions H1–H3 hold, $A\in L^2(\Omega,L^2[0,T])$ and let $f$, $g$ be Lipschitz continuous. Then, with the MC sample number
$$M=O\big((h_L^{\beta_p}+h_L^{\beta_y})^{-2}\big),$$
the error bound
$$\|E[A]-E_M[I_LA_{h_L}]\|_{L^2(\Omega,L^2[0,T])}\le C\big(h_L^{\beta_p}+h_L^{\beta_y}\big)$$
holds, and the total cost satisfies
$$C_{mc}\le C\big(h_L^{-\gamma_1}+h_L^{-\gamma_2}\big)\big(h_L^{\beta_p}+h_L^{\beta_y}\big)^{-2}.$$
Proof. 
Selecting $M=O\big((h_L^{\beta_p}+h_L^{\beta_y})^{-2}\big)$, we obtain inequality (50) from Theorem 2. Formula (51) then follows from assumption H3.    □

4.2. Multilevel Monte Carlo Method

According to Equations (26) and (27), a multilevel estimator is established for A in the following theorem.
Theorem 4.
Let assumptions H1–H2 hold, $A\in L^2(\Omega,L^2[0,T])$ and let $f$, $g$ be Lipschitz continuous. Then, using (26) and (27), the MLMC estimation error for $A$ satisfies
$$\|E[A]-Y\|_{L^2(\Omega,L^2[0,T])}\le C\Big(h_L^{\beta_y}+h_L^{\beta_p}+\sum_{l=0}^LM_l^{-\frac12}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)\Big).$$
Proof. 
Using the triangle inequality, we get
$$\|E[A]-Y\|_{L^2(\Omega,L^2[0,T])}\le\|E[A]-E[I_LA_{h_L}]\|+\|E[I_LA_{h_L}]-Y\|_{L^2(\Omega,L^2[0,T])}:=I_1+I_2,$$
where
$$I_1=\|E[A]-E[I_LA_{h_L}]\|,\qquad I_2=\|E[I_LA_{h_L}]-Y\|_{L^2(\Omega,L^2[0,T])}.$$
Now, the aim is to estimate the terms $I_1$ and $I_2$. For $I_1$, employing Theorem 2, we obtain
$$\|E[A]-E[I_LA_{h_L}]\|\le C\big(h_L^{\beta_y}+h_L^{\beta_p}\big).$$
For the term $I_2$, using the triangle inequality, Lemma 2, the Cauchy–Schwarz inequality and Lipschitz continuity, we get
$$\begin{aligned}\|E[I_LA_{h_L}]-Y\|_{L^2(\Omega,L^2[0,T])}&\le\sum_{l=0}^L\|E[I_lA_{h_l}-I_{l-1}A_{h_{l-1}}]-E_{M_l}[I_lA_{h_l}-I_{l-1}A_{h_{l-1}}]\|_{L^2(\Omega,L^2[0,T])}\\&\le\sum_{l=0}^LM_l^{-\frac12}\|f(I_ly_{h_l})g(I_lp_{h_l})-f(I_{l-1}y_{h_{l-1}})g(I_{l-1}p_{h_{l-1}})\|_{L^2(\Omega,L^2[0,T])}\\&\le\sum_{l=0}^LM_l^{-\frac12}\big(\|f(I_ly_{h_l})g(I_lp_{h_l})-f(y)g(p)\|_{L^2(\Omega,L^2[0,T])}+\|f(I_{l-1}y_{h_{l-1}})g(I_{l-1}p_{h_{l-1}})-f(y)g(p)\|_{L^2(\Omega,L^2[0,T])}\big)\\&\le C\sum_{l=0}^LM_l^{-\frac12}\big(h_l^{\beta_y}+h_l^{\beta_p}\big),\end{aligned}$$
where $h_{l-1}=Mh_l$ is used in the above derivation. Hence, substituting the estimates of $I_1$ and $I_2$ into (53), we derive
$$\|E[A]-Y\|_{L^2(\Omega,L^2[0,T])}\le C\Big(h_L^{\beta_y}+h_L^{\beta_p}+\sum_{l=0}^LM_l^{-\frac12}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)\Big),$$
which is the desired result.    □
Formula (52) shows that $\{M_l\}_{l=0}^L$ should be selected by balancing the discretization and statistical errors. We choose the sample numbers $\{M_l\}_{l=0}^L$ such that
$$\sum_{l=0}^LM_l^{-\frac12}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)\le C_0\big(h_L^{\beta_y}+h_L^{\beta_p}\big),$$
while the total cost $C_{mlmc}=\sum_{l=0}^LC_lM_l$ is as small as possible. According to [7], this is a convex minimization problem, so there exists an optimal sample number at each level. We therefore introduce the Lagrange function
$$\mathcal L(\mu,M)=\sum_{l=0}^LC_lM_l+\mu\Big(\sum_{l=0}^LM_l^{-\frac12}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)-C_0\big(h_L^{\beta_y}+h_L^{\beta_p}\big)\Big).$$
Setting the derivative of $\mathcal L(\mu,M)$ with respect to $M_l$ to zero, we can derive
$$M_l\simeq\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{-\frac23}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}\Big(\sum_{l=0}^L\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{\frac13}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}\Big)^2\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}.$$
Based on the above analysis, the complexity theorem of the MLMC method based on gradient projection is given as follows.
Theorem 5.
Let assumptions H1–H3 hold, $A\in L^2(\Omega,L^2[0,T])$ and let $f$, $g$ be Lipschitz continuous. Then the MLMC estimator (27) can be obtained with the following choice of $\{M_l\}_{l=0}^L$:
$$M_l=\begin{cases}O\Big(\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{-\frac23}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}\Big), & \text{if }\tau>0,\\[4pt] O\Big(\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{-\frac23}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}(L+1)^2\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}\Big), & \text{if }\tau=0,\\[4pt] O\Big(\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{-\frac23}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}h_L^{\frac{2\tau}3}\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}\Big), & \text{if }\tau<0,\end{cases}$$
where
$$\tau=\min\{2\beta_y-\gamma_1,\ 2\beta_p-\gamma_1,\ \beta_p+\beta_y-\gamma_1,\ 2\beta_p-\gamma_2,\ 2\beta_y-\gamma_2,\ \beta_y+\beta_p-\gamma_2\}.$$
Then the error bound
$$\|E[A]-Y\|_{L^2(\Omega,L^2[0,T])}\le C\big(h_L^{\beta_y}+h_L^{\beta_p}\big)$$
holds, and the total computational cost $C_{mlmc}$ is asymptotically bounded, as $L\to\infty$, by
$$C_{mlmc}\le\begin{cases}C\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}, & \text{if }\tau>0,\\[2pt] C(L+1)^3\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}, & \text{if }\tau=0,\\[2pt] Ch_L^{\tau}\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}, & \text{if }\tau<0.\end{cases}$$
Proof. 
Firstly, we prove (63). Using Theorem 4 and $h_{l-1}=Mh_l$, and choosing
$$M_l\simeq\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{-\frac23}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2},\quad l=0,\dots,L,$$
we obtain
$$\begin{aligned}\|E[A]-Y\|_{L^2(\Omega,L^2[0,T])}&\le C\Big(h_L^{\beta_y}+h_L^{\beta_p}+\sum_{l=0}^LM_l^{-\frac12}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)\Big)\\&\le C\Big(h_L^{\beta_y}+h_L^{\beta_p}+\big(h_L^{\beta_y}+h_L^{\beta_p}\big)\sum_{l=0}^L\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{\frac13}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}\Big)\\&\le C\big(h_L^{\beta_y}+h_L^{\beta_p}\big),\end{aligned}$$
and thereby (63) is proved.
For formula (64), we give the proof for the first case, $\tau>0$; the latter two cases are similar. According to hypothesis H3, we get
$$C_{mlmc}=\sum_{l=0}^LM_lC_l\le C\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}\sum_{l=0}^L\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{\frac13}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}.$$
For $\tau>0$, as $L\to\infty$,
$$\sum_{l=0}^L\big(h_l^{-\gamma_1}+h_l^{-\gamma_2}\big)^{\frac13}\big(h_l^{\beta_y}+h_l^{\beta_p}\big)^{\frac23}=O(1).$$
Thus we have
$$C_{mlmc}\le C\big(h_L^{\beta_y}+h_L^{\beta_p}\big)^{-2}.$$
Here we complete the proof.    □

4.3. Gradient Projection Based on Optimization

Following the analysis of the preceding sections, the numerical iterative algorithm for the optimal control problem (2)–(3) is given in [15].
From Lemma 1 and Corollary 3.2 of [16], we know that their algorithm is convergent. The error of the scheme in [15] has two main sources: one is the error caused by estimating the expectation $E[p\,f_u(t,y,u)]$; the other is caused by the Euler scheme, which cannot be ignored. At the initial iteration steps, the error in the expectation estimate has little effect relative to the step size $\rho_i$. When $\rho_i$ becomes small, however, the estimation error may completely distort the gradient direction. This causes the algorithm to fail to converge, i.e., the iterative error no longer decreases.
To keep the gradient valid, the MSE must satisfy $\epsilon^{(i)}<\eta\|u^{(i)}-u^{(i-1)}\|_{L^2(0,T)}$, where $\eta\in(0,1)$ is determined by the optimal control problem. To reduce the influence of statistical error and discretization error, and to avoid unnecessary computational cost, we use MLMC to estimate the expectation.

4.4. MLMC Gradient Projection Algorithm

For the algorithm presented in [15], the MSE cannot decrease for a fixed time step size, no matter how much the number of samples is increased, because the estimator is biased. Therefore, in an efficient algorithm, the time step size should not be fixed. Here we apply the MLMC method to estimate the gradient, which ensures that the MSE of each iteration stays within an allowable range. The detailed steps are presented in Algorithm 1. The norm $\|\cdot\|_{L^2(\Omega,L^2[0,T])}$ is induced by the inner product $(u,v)_{L^2(\Omega,L^2[0,T])}=\int_0^T\int_\Omega uv\,dP(\omega)\,dt$. $J'_{(L_i,M_i)}(u^{(i-1)})=Y+j'(u^{(i-1)})$ is the MLMC approximation of (16); the definition of $Y$ is given in (27). Here, $A=p\,f_u(t,y,u)$, as in (28) and Theorems 4 and 5.
Algorithm 1: MLMC gradient projection based optimization
1: input $\tau$, $\epsilon^{(1)}$, $i_{max}$, $u^{(0)}$, $\eta$
2: for $i=1,\dots,i_{max}$ do
3:     estimate $J'_{(L_i,M_i)}(u^{(i-1)})$
4:     estimate $u^{(i)}$
5:     if $\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}\le\tau$ then
6:         return $u^{(i)}$
7:     end if
8:     if $\epsilon^{(i)}\ge\eta\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$ or $\epsilon^{(i)}\le\frac{\eta}{2}\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$ then
9:         $\epsilon^{(i+1)}=\max\{\tau,\ \eta\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}\}$
10:    else
11:        $\epsilon^{(i+1)}=\epsilon^{(i)}$
12:    end if
13: end for
Line 1: $\tau$ is the permissible iteration error; $\epsilon^{(1)}$ is the given initial MSE; $i_{max}$ is the maximum number of iterations; $u^{(0)}$ is the initial control function; $\eta\in(0,1)$ is a given parameter.
Line 3: We use the MLMC method to estimate the gradient according to the MSE $\epsilon^{(i)}$.
Line 4: We use the gradient projection formula (7) to update the control function. Determining a new optimal iterative step size $\rho_i$ would require evaluating the objective function. After a small number of iterations, the change in the objective function value becomes small, which makes the MSE very small and the MLMC computation very expensive, especially with the Armijo search method. So for the iterative step size we simply use $\rho_i=\frac{1}{i+d}$, where $d$ is a positive constant.
Lines 8–12 determine the MSE of the next iteration. They ensure that the MSE of the MLMC gradient estimate does not affect the iterative error of each iteration, and also that the MSE is not much smaller than the iterative error. This avoids wasting samples; in particular, when the iteration is close to termination, a very small error change can lead to a large difference in sample numbers.
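As we read lines 8–12, the MSE update can be sketched as follows; the branch conditions are our own reconstruction of the algorithm listing, and the function name and argument layout are illustrative.

```python
def next_mse(eps_i, du_norm, eta, tau):
    """MSE target for the next iteration (in the spirit of Algorithm 1, lines 8-12):
    retarget to eta * ||u(i) - u(i-1)|| when the current MSE is too large
    (>= eta * du, the gradient direction may be distorted) or too small
    (<= eta/2 * du, samples are wasted), never dropping below the
    iteration tolerance tau."""
    if eps_i >= eta * du_norm or eps_i <= 0.5 * eta * du_norm:
        return max(tau, eta * du_norm)
    return eps_i
```

Keeping the MSE inside the band $(\frac{\eta}{2}\|\Delta u\|,\ \eta\|\Delta u\|)$ couples the sampling effort to the actual progress of the iteration.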
Noting that the sample number $M_i$ and the maximum level $L_i$ are related to $\epsilon^{(i)}$, and
$$J'_{(L_i,M_i)}(u^{(i-1)})\in U^{M^{L_i}},$$
from Algorithm 1 we can see that
$$\epsilon^{(i)}\le\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$$
holds.

4.5. Convergence Analysis of the Algorithm

First of all, we consider the accuracy of applying MLMC to estimate a gradient, which is discussed in detail in Theorem 6.1 of [18]. The gradient estimated by MLMC is the exact gradient of a discrete objective function; when the objective function is convex, the MLMC estimate remains convex.
Algorithm 1 eliminates, as far as possible, the impact of discretization and statistical errors on $\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$. According to formula (16) and lines 3 and 9 of Algorithm 1, we have
$$\|J'(u^{(i-1)})-J'_{(L_i,M_i)}(u^{(i-1)})\|_{L^2(\Omega,L^2[0,T])}\le C\|u^{(i)}-u^{(i-1)}\|_{L^2(\Omega,L^2[0,T])},$$
where $J'_{(L_i,M_i)}(u^{(i-1)})$ is the MLMC estimate of $J'(u^{(i-1)})$.
From Lemma 1, Corollary 3.2 of [16] and Theorem 1 of [15], we know that Algorithm 1 is convergent and that the final iterative error is not affected by discretization or statistical errors. We have the following convergence theorem as $\tau\to0$:
Theorem 6.
Suppose all the conditions of Lemma 1 hold,
$$\|u^{(i)}-u^{(i-1)}\|_{L^2(\Omega,L^2[0,T])}\to0,\quad i\to\infty,$$
and $\rho_i$ satisfies $0<1-2c\rho_i+\bar C\rho_i^2\le\frac{\delta^2}{4}$, where $\delta\in(0,1)$. Then Algorithm 1 based on multilevel estimation is convergent, i.e.,
$$\|u^*-u^{(i)}\|_{L^2(\Omega,L^2[0,T])}\to0,\quad(i\to\infty).$$
Proof. 
The triangle inequality implies
$$\|u^*-u^{(i)}\|^2_{L^2(\Omega,L^2[0,T])}\le2\|u^*-u^{*,N(i-1)}\|^2_{L^2[0,T]}+2\|u^{*,N(i-1)}-u^{(i)}\|^2_{L^2(\Omega,L^2[0,T])},$$
where $u^{*,N(i-1)}$ is defined as in (6). Let $\omega_1=u^*-\rho_iJ'(u^*)$, $\omega_2=u^{*,N(i-1)}-\rho_iJ'(u^{*,N(i-1)})$, $\omega_3=u^{(i-1)}-\rho_iJ'_{(L_i,M_i)}(u^{(i-1)})$. Then, for the second term on the right-hand side (RHS) of (75), we have
$$\begin{aligned}\|u^{*,N(i-1)}-u^{(i)}\|^2_{L^2(\Omega,L^2[0,T])}&=\|P_{K^{N(i-1)}}(\omega_2)-P_{K^{N(i)}}(\omega_3)\|^2_{L^2(\Omega,L^2[0,T])}\\&\le2\|P_{K^{N(i-1)}}(\omega_2)-P_{K^{N(i)}}(\omega_2)\|^2_{L^2[0,T]}+2\|P_{K^{N(i)}}(\omega_2)-P_{K^{N(i)}}(\omega_3)\|^2_{L^2(\Omega,L^2[0,T])}.\end{aligned}$$
For the first term on the RHS of (76), we can get
$$\begin{aligned}\|P_{K^{N(i-1)}}(\omega_2)-P_{K^{N(i)}}(\omega_2)\|^2_{L^2[0,T]}&\le2\|P_{K^{N(i-1)}}(\omega_2)-u^*\|^2_{L^2[0,T]}+2\|u^*-P_{K^{N(i)}}(\omega_2)\|^2_{L^2[0,T]}\\&\le C\big(\|P_{K^{N(i-1)}}(\omega_2)-P_{K^{N(i-1)}}(\omega_1)\|^2_{L^2[0,T]}+\|P_{K^{N(i-1)}}(\omega_1)-P_K(\omega_1)\|^2_{L^2[0,T]}\\&\quad+\|P_K(\omega_1)-P_{K^{N(i)}}(\omega_1)\|^2_{L^2[0,T]}+\|P_{K^{N(i)}}(\omega_1)-P_{K^{N(i)}}(\omega_2)\|^2_{L^2[0,T]}\big)\\&\le C\big(\|u^*-u^{*,N(i-1)}\|^2_{L^2[0,T]}+\|P_K(\omega_1)-P_{K^{N(i)}}(\omega_1)\|^2_{L^2[0,T]}+\|P_{K^{N(i-1)}}(\omega_1)-P_K(\omega_1)\|^2_{L^2[0,T]}\big).\end{aligned}$$
The second term on the RHS of (76) can be bounded as
$$\begin{aligned}\|P_{K^{N(i)}}(\omega_2)-P_{K^{N(i)}}(\omega_3)\|^2_{L^2(\Omega,L^2[0,T])}&\le\|u^{*,N(i-1)}-u^{(i-1)}-\rho_i\big(J'(u^{*,N(i-1)})-J'_{(L_i,M_i)}(u^{(i-1)})\big)\|^2_{L^2(\Omega,L^2[0,T])}\\&\le\|u^{*,N(i-1)}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}\\&\quad-2\rho_i\big(u^{*,N(i-1)}-u^{(i-1)},J'(u^{*,N(i-1)})-J'_{(L_i,M_i)}(u^{(i-1)})\big)_{L^2(\Omega,L^2[0,T])}\\&\quad+\rho_i^2\|J'(u^{*,N(i-1)})-J'_{(L_i,M_i)}(u^{(i-1)})\|^2_{L^2(\Omega,L^2[0,T])}.\end{aligned}$$
Using (11), (72) and the Cauchy–Schwarz inequality, we can bound the second term on the RHS of (78):
$$\begin{aligned}-2\rho_i\big(u^{*,N(i-1)}-u^{(i-1)},J'(u^{*,N(i-1)})-J'_{(L_i,M_i)}(u^{(i-1)})\big)_{L^2(\Omega,L^2[0,T])}&=-2\rho_i\big(u^{*,N(i-1)}-u^{(i-1)},J'(u^{*,N(i-1)})-J'(u^{(i-1)})\big)_{L^2(\Omega,L^2[0,T])}\\&\quad-2\rho_i\big(u^{*,N(i-1)}-u^{(i-1)},J'(u^{(i-1)})-J'_{(L_i,M_i)}(u^{(i-1)})\big)_{L^2(\Omega,L^2[0,T])}\\&\le-2c\rho_i\|u^{*,N(i-1)}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}+\rho_iC\|\epsilon^{(i)}\|_{L^2(\Omega)}\|u^{*,N(i-1)}-u^{(i-1)}\|_{L^2(\Omega,L^2[0,T])}.\end{aligned}$$
Substituting (77)–(79) into (76), we have
$$
\|u^{*,N^{(i-1)}}-u^{(i)}\|^2_{L^2(\Omega,L^2[0,T])}
\le 2\big(1-2c\rho_i+\bar C\rho_i^2\big)\|u^{*,N^{(i-1)}}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}
+C\|\epsilon^{(i)}\|^2_{L^2(\Omega)}. \tag{80}
$$
The triangle inequality and (80) imply
$$
\begin{aligned}
\|u^{*,N^{(i-1)}}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}
&\le 2\|u^{*,N^{(i-1)}}-u^{(i)}\|^2_{L^2(\Omega,L^2[0,T])}+2\|u^{(i)}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}\\
&\le 4\big(1-2c\rho_i+\bar C\rho_i^2\big)\|u^{*,N^{(i-1)}}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}
+2\|u^{(i)}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}+C\|\epsilon^{(i)}\|^2_{L^2(\Omega)}. 
\end{aligned} \tag{81}
$$
Choosing appropriate ρ i , we can get
$$
\|u^{*,N^{(i-1)}}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}
\le C\Big(\|u^{(i)}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}+\|\epsilon^{(i)}\|^2_{L^2(\Omega)}\Big). \tag{82}
$$
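The step from (81) to (82) can be made explicit; the following is a sketch under the assumption, implicit in the argument, that the constants satisfy $c^2>\frac{3}{4}\bar C$. The quadratic
$$
q(\rho):=1-2c\rho+\bar C\rho^2
$$
is minimized at $\rho=c/\bar C$ with $q(c/\bar C)=1-c^2/\bar C$. If $c^2>\frac{3}{4}\bar C$, then $4q(c/\bar C)<1$, so choosing $\rho_i=c/\bar C$ and moving the first term on the RHS of (81) to the left gives
$$
\big(1-4q(\rho_i)\big)\,\|u^{*,N^{(i-1)}}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}
\le 2\|u^{(i)}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}+C\|\epsilon^{(i)}\|^2_{L^2(\Omega)},
$$
which yields (82) after dividing by $1-4q(\rho_i)>0$.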
Combining (75) with (76) and (82), we have
$$
\begin{aligned}
\|u^*-u^{(i)}\|^2_{L^2(\Omega,L^2[0,T])}
\le C\Big(&\|u^*-u^{*,N^{(i-1)}}\|^2_{L^2[0,T]}
+\|u^{(i)}-u^{(i-1)}\|^2_{L^2(\Omega,L^2[0,T])}
+\|\epsilon^{(i)}\|^2_{L^2(\Omega)}\\
&+\|P_K(\omega_1)-P_{K^{N^{(i)}}}(\omega_1)\|^2_{L^2[0,T]}
+\|P_{K^{N^{(i-1)}}}(\omega_1)-P_K(\omega_1)\|^2_{L^2[0,T]}\Big).
\end{aligned}
$$
It is known from Theorem 3.1 of [16] that
$$
\|u^*-u^{*,N^{(i-1)}}\|^2_{L^2[0,T]}\to 0,\quad \text{as } i\to\infty.
$$
When $i\to\infty$, according to the assumption and (71), we have $\|\epsilon^{(i)}\|_{L^2(\Omega)}\to 0$. The fact that $K^{N^{(i)}}$ is dense in $K$ as $N^{(i)}\to\infty$ implies that $\|P_K(\omega_1)-P_{K^{N^{(i)}}}(\omega_1)\|_{L^2[0,T]}\to 0$ and $\|P_{K^{N^{(i-1)}}}(\omega_1)-P_K(\omega_1)\|_{L^2[0,T]}\to 0$. This completes the proof. □

5. Numerical Experiments

In this section, two numerical examples are presented, and the effectiveness of Algorithm 1 is analyzed numerically.

5.1. Example 1

This example comes from [16]. It is as follows:
$$
\min_{u\in L^2(0,T)} J(u)=\frac12\int_0^T \mathbb{E}\big[(y-y_d)^2\big]\,dt+\frac12\int_0^T u^2\,dt,
\qquad \text{s.t. } dy_t=u(t)y(t)\,dt+\sigma y(t)\,dW_t,\quad y(0)=y_0.
$$
The optimal control problem is equivalent to
$$
\begin{cases}
dy=uy\,dt+\sigma y\,dW_t, & y(0)=y_0,\\
dp=-(y-y_d+up-p\sigma^2)\,dt+p\sigma\,dW_t, & p(T)=0,\\
u=-\mathbb{E}[py].
\end{cases}
$$
The exact solution of the optimal control problem is
$$
u^*=\frac{T-t}{\frac{1}{y_0}-Tt+\frac12 t^2},\qquad
y_d=\frac{e^{\sigma^2 t}(T-t)^2}{\frac{1}{y_0}-Tt+\frac12 t^2}+1.
$$
The parameters are chosen as $u^{(0)}=1$, $T=1$, $y_0=1$, $\sigma=0.05,\,0.1,\,0.2$, $\tau=10^{-4}$, $\epsilon^{(1)}=5\times10^{-2}$, $\eta=0.5$. First, the parameters $\alpha_y,\alpha_p,\beta_y,\beta_p,\gamma_1,\gamma_2$ need to be determined when MLMC is used to estimate the gradient. Because the Euler scheme is used to discretize the state and adjoint equations, these parameters are known theoretically (for details, see [20,29,30]): $\alpha_y=\alpha_p=1$, $\beta_y=\beta_p=\frac12$, $\gamma_1=\gamma_2=1$. The experimental results are shown in Figure 1, Figure 2 and Figure 3 with $h=1/64$. The final error values $\|u^{(i)}-u^*\|_{L^2[0,T]}$ are $6.4\times10^{-4}$, $1.22\times10^{-3}$ and $9.9\times10^{-3}$, respectively.
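The level sizes $M_k$ reported in Table 1 are of the kind produced by the standard MLMC sample-allocation rule of [20,21]. The following sketch is our own illustration of that rule, with made-up variance and cost values rather than quantities measured in our runs:

```python
import math

# Standard MLMC sample-size rule (Giles 2008): given per-level variances V_l
# and costs C_l, M_l = ceil(eps^{-2} * sqrt(V_l / C_l) * sum_k sqrt(V_k * C_k))
# keeps the total estimator variance below eps^2 at near-optimal total cost.
def mlmc_sample_sizes(eps, V, C):
    s = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [math.ceil(eps ** -2 * math.sqrt(v / c) * s) for v, c in zip(V, C)]

# Illustrative rates for an Euler discretization: the variance of the level
# correction decays like h_l, while the cost per sample grows like 1/h_l.
L = 5
V = [2.0 ** -l for l in range(L)]   # V_l ~ 2^{-l} (made-up scale)
C = [2.0 ** l for l in range(L)]    # C_l ~ 2^{l}
M = mlmc_sample_sizes(1e-2, V, C)
# sample counts fall geometrically with the level, as in Tables 1 and 2
```

This reproduces the qualitative pattern of the tables: the vast majority of samples sit on the cheap coarse levels, and the counts shrink geometrically toward the fine levels.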
Partial information about the computation process for $\sigma=0.2$ is listed in Table 1. In the table, $M_k$ is the number of samples on level $k$, where $k$ refers to the time step $h=T/2^k$. The total iteration time is $2.4\times10^4$ s for $\sigma=0.2$.
From Figure 1, Figure 2 and Figure 3 and Table 1, it can be seen that $\epsilon^{(i)}$ and $\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$ tend to 0, Algorithm 1 converges, and the MLMC method is efficient.
From Figure 3, we can see that as $\tau$ gradually decreases, the error decreases only slowly ($\sigma=0.1$, $\sigma=0.2$). This is mainly because the search direction we choose is the negative gradient direction, which is a drawback of the gradient descent method itself.

5.2. Example 2

This example comes from [15]; it is as follows:
$$
\min_{u\ge 0} J=\frac12\Big[c_1\int_0^T \mathbb{E}\big[(y-y_d)^2\big]\,dt+c_2\int_0^T u^2\,dt\Big],
\qquad \text{s.t. } dy_t=[u(t)-r(t)]\,dt+\sigma\,dW_t,\quad y(0)=y_0.
$$
This optimal control problem is equivalent to:
$$
\begin{cases}
dy=(u-r)\,dt+\sigma\,dW_t, & y(0)=y_0,\\
dp=-(y-y_d)\,dt, & p(T)=0,\\
u=\max\Big(0,\,-\dfrac{c_1}{c_2}\mathbb{E}[p]\Big),
\end{cases}
$$
where
$$
c_1=c_2=1,\quad y_0=0,\quad y_d=0.5Tt-0.25t^2+1,\quad r=0.5(T-t),\quad u^*=T-t.
$$
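Since the noise in Example 2 is additive and $u$ is deterministic, the optimality system can be checked in expectation with a few lines of code. This is a consistency check we add here (not the paper's solver): the mean state satisfies $d\mathbb{E}[y]/dt=u-r$ and the mean adjoint satisfies $d\mathbb{E}[p]/dt=-(\mathbb{E}[y]-y_d)$ backward from $\mathbb{E}[p](T)=0$, so with $c_1=c_2=1$ the claimed $u^*=T-t$ should equal $\max(0,-\mathbb{E}[p])$:

```python
import numpy as np

# Deterministic consistency check of Example 2 with the data
# c1 = c2 = 1, y0 = 0, r = 0.5(T - t), y_d = 0.5*T*t - 0.25*t^2 + 1.
T, n = 1.0, 1000
h = T / n
t = np.linspace(0.0, T, n + 1)
u_star = T - t                          # claimed optimal control
r = 0.5 * (T - t)
yd = 0.5 * T * t - 0.25 * t ** 2 + 1.0

Ey = np.zeros(n + 1)                    # forward Euler for the mean state
for k in range(n):
    Ey[k + 1] = Ey[k] + h * (u_star[k] - r[k])

Ep = np.zeros(n + 1)                    # backward Euler for the mean adjoint
for k in range(n, 0, -1):               # dp = -(y - y_d) dt, p(T) = 0
    Ep[k - 1] = Ep[k] + h * (Ey[k] - yd[k])

u_check = np.maximum(0.0, -Ep)
err = np.max(np.abs(u_check - u_star))  # O(h) discretization error
```

The recovered control agrees with $u^*=T-t$ up to the $O(h)$ error of the Euler discretization.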
The parameters are selected as $u^{(0)}=1$, $T=1$, $\sigma=0.1,\,3,\,5$, $\tau=10^{-4}$, $\epsilon^{(1)}=1\times10^{-1}$, $\eta=0.5$. The computational results are shown in Figure 4 and Figure 5 with $h=1/64$. The final error values $\|u^{(i)}-u^*\|_{L^2[0,T]}$ are $3.4\times10^{-4}$, $1.39\times10^{-3}$ and $8.34\times10^{-4}$, respectively.
For $\sigma=5$, partial information about the computation process is listed in Table 2, where $M_k$ has the same meaning as in Table 1. The total iteration time is $1.841\times10^4$ s.
From Figure 4, Figure 5 and Figure 6 and Table 2, it can also be seen that $\epsilon^{(i)}$ and $\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$ tend to 0, Algorithm 1 converges, and the MLMC method is efficient.
In the numerical approximation of the stochastic optimal control problem, the statistical error (or MSE) and the discretization error strongly affect the convergence of the gradient projection method. If a plain Monte Carlo method is used as in [15], the optimization iteration (gradient projection method) may not converge for a fixed large time step. To ensure convergence of the iteration, a small time step size $h$ and a large sample size $M$ must be selected, which means that each optimization iteration takes a long time.
Our Algorithm 1 is in fact an MLMC method with variable step size: the number of samples differs from one iteration step to the next, and the sample size also differs across time step sizes. It is optimal in a certain sense, just as the MLMC method is superior to the MC method. Compared with the method in [16], our method is much simpler.
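To make the MC-versus-MLMC comparison concrete, here is a minimal, self-contained sketch of the MLMC telescoping estimator for a toy quantity $\mathbb{E}[y(T)]$ of a geometric Brownian motion. This is our own illustrative example, not one of the paper's test problems; the point is the coupling of fine and coarse Euler–Maruyama paths through the same Brownian increments:

```python
import numpy as np

def mlmc_level(l, M, T=1.0, y0=1.0, mu=0.05, sigma=0.2):
    # Level-l term of the telescoping sum E[P_0] + sum_l E[P_l - P_{l-1}]:
    # M coupled fine (2^l steps) and coarse (2^{l-1} steps) Euler-Maruyama
    # paths of dy = mu*y dt + sigma*y dW, driven by the same increments.
    rng = np.random.default_rng(l)      # fixed seed per level, reproducible
    nf = 2 ** l
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(M, nf))
    yf = np.full(M, y0)
    for k in range(nf):
        yf = yf + mu * yf * hf + sigma * yf * dW[:, k]
    if l == 0:
        return yf.mean()
    hc = 2.0 * hf
    yc = np.full(M, y0)
    for k in range(nf // 2):
        dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]   # coarse increment = sum of fine
        yc = yc + mu * yc * hc + sigma * yc * dWc
    return (yf - yc).mean()

# Far fewer samples are needed on the finer (more expensive) levels.
sizes = [200000, 50000, 12000, 3000]
est = sum(mlmc_level(l, M) for l, M in enumerate(sizes))
exact = float(np.exp(0.05))             # E[y(T)] = y0 * exp(mu * T)
```

Because the fine and coarse paths share one Brownian motion, the variance of the level corrections decays with the level, which is exactly what lets the sample counts shrink as in Tables 1 and 2.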

6. Conclusions

In this paper, an MLMC method based on gradient projection is used to approximate a stochastic optimal control problem. One contribution is that the MLMC method is used to compute the expectation, reducing the influence of statistical and discretization errors on the convergence of the gradient projection algorithm. The other contribution is the convergence proof of Algorithm 1, which, to the best of our knowledge, cannot be found elsewhere. Two numerical examples are carried out to verify the effectiveness of the proposed algorithm.
Our method can be applied to simulate other stochastic optimal control problems whose optimality conditions involve expected values (optimal control of SDEs or SPDEs).
The MLMC method is used here to reduce the MSE. Other variance-reduction methods include importance sampling (see, e.g., [31,32]), the quasi-Monte Carlo method and the multilevel quasi-Monte Carlo method (see, e.g., [33,34]). In future work, we may use the importance sampling MLMC method or the multilevel quasi-Monte Carlo method, combined with gradient projection or other optimization methods, to approximate stochastic optimal control problems.

Author Contributions

Methodology, X.L.; software, C.Y.; validation, X.L. and C.Y.; formal analysis, C.Y. and X.L.; investigation, C.Y.; writing—original draft preparation, C.Y.; writing—review and editing, X.L.; visualization, C.Y.; supervision, X.L.; project administration, X.L.; funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 11961008).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cerone, V.; Regruto, D.; Abuabiah, M.; Fadda, E. A kernel-based nonparametric approach to direct data-driven control of LTI systems. IFAC-PapersOnLine 2018, 51, 1026–1031.
2. Hinze, M.; Pinnau, R.; Ulbrich, M.; Ulbrich, S. Optimization with PDE Constraints; Springer: New York, NY, USA, 2009.
3. Liu, W.; Yan, N. Adaptive Finite Element Methods: Optimal Control Governed by PDEs; Science Press: Beijing, China, 2008.
4. Luo, X. A priori error estimates of Crank–Nicolson finite volume element method for a hyperbolic optimal control problem. Numer. Methods Partial Differ. Equ. 2016, 32, 1331–1356.
5. Luo, X.; Chen, Y.; Huang, Y.; Hou, T. Some error estimates of finite volume element method for parabolic optimal control problems. Optim. Control Appl. Methods 2014, 35, 145–165.
6. Luo, X.; Chen, Y.; Huang, Y. A priori error estimates of finite volume element method for hyperbolic optimal control problems. Sci. China Math. 2013, 56, 901–914.
7. Ali, A.A.; Ullmann, E.; Hinze, M. Multilevel Monte Carlo analysis for optimal control of elliptic PDEs with random coefficients. SIAM/ASA J. Uncertain. Quantif. 2016, 5, 466–492.
8. Borzi, A.; Winckel, G. Multigrid methods and sparse-grid collocation techniques for parabolic optimal control problems with random coefficients. SIAM J. Sci. Comput. 2009, 31, 2172–2192.
9. Sun, T.; Shen, W.; Gong, B.; Liu, W. A priori error estimate of stochastic Galerkin method for optimal control problem governed by random parabolic PDE with constrained control. J. Sci. Comput. 2016, 67, 405–431.
10. Archibald, R.; Bao, F.; Yong, J.; Zhou, T. An efficient numerical algorithm for solving data driven feedback control problems. J. Sci. Comput. 2020, 85, 51.
11. Haussmann, U.G. Some examples of optimal stochastic controls or: The stochastic maximum principle at work. SIAM Rev. 1981, 23, 292–307.
12. Kushner, H.J.; Dupuis, P. Numerical Methods for Stochastic Control Problems in Continuous Time, 2nd ed.; Springer: New York, NY, USA, 2001.
13. Korn, R.; Kraft, H. A stochastic control approach to portfolio problems with stochastic interest rates. SIAM J. Control Optim. 2001, 40, 1250–1269.
14. Archibald, R.; Bao, F.; Yong, J. A stochastic gradient descent approach for stochastic optimal control. East Asian J. Appl. Math. 2020, 10, 635–658.
15. Du, N.; Shi, J.; Liu, W. An effective gradient projection method for stochastic optimal control. Int. J. Numer. Anal. Model. 2013, 10, 757–774.
16. Gong, B.; Liu, W.; Tang, T. An efficient gradient projection method for stochastic optimal control problems. SIAM J. Numer. Anal. 2017, 55, 2982–3005.
17. Wang, Y. Error analysis of a discretization for stochastic linear quadratic control problems governed by SDEs. IMA J. Math. Control Inf. 2021, 38, 1148–1173.
18. Barel, A.V.; Vandewalle, S. Robust optimization of PDEs with random coefficients using a multilevel Monte Carlo method. SIAM/ASA J. Uncertain. Quantif. 2017, 7, 174–202.
19. Cliffe, K.A.; Giles, M.B.; Scheichl, R.; Teckentrup, A.L. Multilevel Monte Carlo methods and applications to elliptic PDEs with random coefficients. Comput. Vis. Sci. 2011, 14, 3–15.
20. Giles, M.B. Multilevel Monte Carlo path simulation. Oper. Res. 2008, 56, 607–617.
21. Giles, M.B. Multilevel Monte Carlo methods. Acta Numer. 2015, 24, 259–328.
22. Kornhuber, R.; Schwab, C.; Wolf, M.W. Multilevel Monte Carlo finite element methods for stochastic elliptic variational inequalities. SIAM J. Numer. Anal. 2014, 52, 1243–1268.
23. Li, M.; Luo, X. An MLMCE-HDG method for the convection diffusion equation with random diffusivity. Comput. Math. Appl. 2022, 127, 127–143.
24. Li, M.; Luo, X. Convergence analysis and cost estimate of an MLMC-HDG method for elliptic PDEs with random coefficients. Mathematics 2021, 9, 1072.
25. Ikeda, N.; Watanabe, S. Stochastic Differential Equations and Diffusion Processes, 2nd ed.; North-Holland Publishing Company: New York, NY, USA, 1989.
26. Ma, J.; Yong, J. Forward-Backward Stochastic Differential Equations and Their Applications; Springer: Berlin/Heidelberg, Germany, 2007; pp. 51–79.
27. Peng, S. Backward stochastic differential equations and applications to optimal control. Appl. Math. Optim. 1993, 27, 125–144.
28. Brenner, S.C.; Scott, L.R. The Mathematical Theory of Finite Element Methods, 3rd ed.; Springer: New York, NY, USA, 2007.
29. Li, T.; Vanden-Eijnden, E. Applied Stochastic Analysis; AMS: Providence, RI, USA, 2019.
30. Lord, G.J.; Powell, C.E.; Shardlow, T. An Introduction to Computational Stochastic PDEs; Cambridge University Press: New York, NY, USA, 2014.
31. Alaya, M.B.; Hajji, K.; Kebaier, A. Adaptive importance sampling for multilevel Monte Carlo Euler method. Stochastics 2022.
32. Kebaier, A.; Lelong, J. Coupling importance sampling and multilevel Monte Carlo using sample average approximation. Methodol. Comput. Appl. Probab. 2018, 20, 611–641.
33. Giles, M.B.; Waterhouse, B.J. Multilevel quasi-Monte Carlo path simulation. In Advanced Financial Modeling; Albrecher, H., Runggaldier, W.J., Schachermayer, W., Eds.; de Gruyter: New York, NY, USA, 2009; pp. 165–181.
34. Kuo, F.Y.; Schwab, C.; Sloan, I.H. Multi-level quasi-Monte Carlo finite element methods for a class of elliptic partial differential equation with random coefficients. Found. Comput. Math. 2015, 15, 411–449.
Figure 1. The exact solution and the numerical solutions (Left: σ = 0.05. Middle: σ = 0.1. Right: σ = 0.2).
Figure 2. The changing process of $\epsilon^{(i)}$ and $\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$ with iteration number $i$ (Left: σ = 0.05. Middle: σ = 0.1. Right: σ = 0.2).
Figure 3. The changing process of $\|u^{(i_{\max})}-u^{(i_{\max}-1)}\|_{L^2[0,T]}$ with the iterative error $\tau$ (Left: σ = 0.05. Middle: σ = 0.1. Right: σ = 0.2).
Figure 4. The exact solution and the numerical solutions (Left: σ = 0.1. Middle: σ = 3. Right: σ = 5).
Figure 5. The changing process of $\epsilon^{(i)}$ and $\|u^{(i)}-u^{(i-1)}\|_{L^2[0,T]}$ with iteration number $i$ (Left: σ = 0.1. Middle: σ = 3. Right: σ = 5).
Figure 6. The changing process of $\|u^{(i_{\max})}-u^{(i_{\max}-1)}\|_{L^2[0,T]}$ with the iterative error $\tau$ (σ = 3).
Table 1. Behavior of MLMC gradient projection-based optimization (σ = 0.2).

| $i$ | $\epsilon^{(i)}$ | $\|u^{(i)}-u^{(i-1)}\|$ | $M_3$ | $M_4$ | $M_5$ | $M_6$ | $M_7$ | $t^{(i)}$/s |
|---|---|---|---|---|---|---|---|---|
| 3 | 4.5×10⁻² | 5.6×10⁻² | 178 | 3 | 1 | — | — | 0.08 |
| 7 | 9.1×10⁻³ | 1.3×10⁻² | 4273 | 55 | 10 | — | — | 0.09 |
| 11 | 2.6×10⁻³ | 3.8×10⁻³ | 53,203 | 749 | 126 | — | — | 0.20 |
| 15 | 8.4×10⁻⁴ | 1.3×10⁻³ | 508,312 | 6715 | 1181 | 254 | — | 1.70 |
| 19 | 3.1×10⁻⁴ | 4.9×10⁻⁴ | 3,816,292 | 51,158 | 8526 | 1880 | 463 | 11.63 |
| 23 | 1.3×10⁻⁴ | 2.0×10⁻⁴ | 23,879,767 | 321,838 | 54,732 | 11,887 | 2892 | 67.07 |
Table 2. Behavior of MLMC gradient projection-based optimization (σ = 5).

| $i$ | $\epsilon^{(i)}$ | $\|u^{(i)}-u^{(i-1)}\|$ | $M_3$ | $M_4$ | $M_5$ | $M_6$ | $t^{(i)}$/s |
|---|---|---|---|---|---|---|---|
| 3 | 7.6×10⁻² | 7.7×10⁻² | 2785 | 19 | 3 | — | 0.05 |
| 7 | 7.2×10⁻³ | 9.9×10⁻³ | 329,983 | 2050 | 267 | — | 0.56 |
| 11 | 1.6×10⁻³ | 2.3×10⁻³ | 6,642,693 | 42,434 | 5277 | — | 9.23 |
| 15 | 4.4×10⁻⁴ | 6.6×10⁻⁴ | 85,982,855 | 549,800 | 69,370 | — | 119.30 |
| 19 | 1.4×10⁻⁴ | 2.2×10⁻⁴ | 821,441,011 | 5,264,257 | 656,891 | 82,911 | 1160.74 |
Ye, C.; Luo, X. A Multilevel Monte Carlo Approach for a Stochastic Optimal Control Problem Based on the Gradient Projection Method. AppliedMath 2023, 3, 98-116. https://doi.org/10.3390/appliedmath3010008
