Abstract
This article studies the theoretical properties of a numerical scheme for backward stochastic differential equations, extending the relevant results of Briand et al. under more general assumptions. More precisely, the Brownian motion is approximated by the partial sums of a sequence of martingale differences or of a sequence of i.i.d. Gaussian variables, instead of an i.i.d. Bernoulli sequence. We cope with an adaptedness problem by defining a new process ; then, we obtain a Donsker-type theorem for the numerical solutions using a method similar to that of Briand et al.
MSC:
60F17; 60H35
1. Introduction
In this article we consider the following backward stochastic differential equation (BSDE for short):
where W is a standard Brownian motion on , is Lipschitz continuous in y and z, and is measurable with .
Research on backward stochastic differential equations can be traced back to 1973, when Bismut [1] first studied linear backward stochastic differential equations. Pardoux and Peng [2] first introduced nonlinear backward stochastic differential equations and established the existence and uniqueness of their solutions under a standard Lipschitz condition. For over thirty years, building on the work of Pardoux and Peng [2], backward stochastic differential equations have developed rapidly in the fields of financial mathematics, stochastic control, differential games, and numerical analysis; see El Karoui et al. [3] for details.
Unlike forward stochastic differential equations, the solution of a backward stochastic differential equation is a pair of adapted processes ; this poses certain difficulties for solving backward stochastic differential equations numerically. Early studies mostly worked with coupled systems to study numerical solutions: such a system consists of a forward stochastic differential equation together with a backward one. Ma et al. [4] proposed a four-step scheme for forward-backward stochastic differential equations. Douglas et al. [5] provided a numerical solution for forward-backward stochastic differential equations using approximation techniques for partial differential equations and stochastic differential equations. Zhang [6] also provided the convergence rate of numerical schemes for forward-backward stochastic differential equations. Chevance [7] assumed that f does not depend on z and, by discretizing conditional expectations, obtained the following:
This provides a numerical solution for Y, and its convergence was proved. Coquet et al. [8] studied the convergence of numerical schemes for backward stochastic differential equations in the sense of function spaces, using the convergence of filtrations, when f does not depend on z.
When f depends on z, Briand et al. [9] established a Donsker-type theorem to develop a numerical scheme for backward stochastic differential equations and investigated the following scheme:
where , is a sequence of independent and identically distributed random variables, and
Here, we note , and set as a measurable random variable. Meanwhile, set
For , define
where when x is an integer, and when x is not an integer.
Briand et al. [9] found that converges to in a certain sense. This is a Donsker-type theorem; Donsker’s theorem was first used to describe the weak convergence of partial-sum processes to Brownian motion on (see Donsker [10]).
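To make the Bernoulli scheme concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper): since the ±1 random walk recombines, the conditional expectations in the scheme reduce to averages over the two successor nodes of a binomial tree, and the implicit step in y is solved by fixed-point iteration. The function names and the sample drivers below are assumptions for illustration only.

```python
import numpy as np

def bsde_binomial_scheme(g, f, n, n_fix=20):
    """Backward recursion of a random-walk BSDE scheme on the recombining
    +/-1 binomial tree (a hypothetical minimal sketch).

    g : terminal function, xi = g(W^n_1); f(y, z) : Lipschitz driver.
    Returns the approximation y^n_0 of Y_0.
    """
    # terminal layer: after n steps the scaled walk sits at (2j - n)/sqrt(n)
    w = (2.0 * np.arange(n + 1) - n) / np.sqrt(n)
    y = g(w)
    for k in range(n - 1, -1, -1):
        up, down = y[1:k + 2], y[:k + 1]    # successors for eps = +1 / -1
        cond_exp = 0.5 * (up + down)        # conditional expectation of y_{k+1}
        z = 0.5 * np.sqrt(n) * (up - down)  # z_k = sqrt(n) * E[y_{k+1} * eps_{k+1}]
        y = cond_exp.copy()
        for _ in range(n_fix):              # implicit step: y = E[...] + f(y, z)/n
            y = cond_exp + f(y, z) / n
    return float(y[0])
```

With f ≡ 0 and g(x) = x², the scheme returns E[(W^n_1)²] = 1 exactly, since the scaled walk has unit variance at time 1; with f(y, z) = y and g ≡ 1, the backward recursion gives (1 − 1/n)^(−n), which tends to e as n grows.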
We can see that the Bernoulli distribution is critical for obtaining the Donsker-type theorem for numerical solutions. The key contribution of this article is to extend the Bernoulli condition (2) to a more general one: is a sequence of martingale differences or of i.i.d. Gaussian variables. In the former case, we need an additional assumption (see Assumption 6 below) to compensate for the loss of independence and identical distribution. In the latter case, we use the estimate in Lemma 5 to obtain the UT property, which is necessary to prove Theorem 3. Moreover, as Remark 1 shows, the original solution might not be adapted to the filtration. Therefore, we introduce an improved process to ensure adaptedness. The structure of this article is as follows: Section 2 introduces the main results. Section 3 introduces a result on the convergence of filtrations, which is our tool for proving the main results. In Section 4, we prove the main results. Some auxiliary lemmas are collected in Section 5.
2. Main Result
The discussion in this article is based on the complete probability space with filtration ; W is a standard Brownian motion on ; and the filtration is the natural filtration generated by W, assumed to be right continuous. Consider a random variable defined on the probability space above, and define
Similar to the settings of Briand et al. [9], we set , and for
and
where .
Remark 1.
In fact, (4) is equivalent to the following equation:
Let and subtract it from (5); we obtain
However, this does not yield that is adapted to the filtration , because we know nothing about .
To make adapted to the filtration , we further define as a modified version of :
and for ,
Remark 2.
Because of the definition of [·] and ⌊·⌋ in the previous section, and are processes, and is a process.
This article investigates the weak convergence problem of in the space . The topology of is . Related concepts can be found in the work of Jacod and Shiryaev [11]. Here, we introduce a few symbols required in this article. means convergence in law under the topology. represents for all . If , are two filtrations, then means that, for any , converges in probability to under the topology . We denote the natural filtration generated by in (3) as , which is assumed to be right continuous. In the following, we recall the UT condition, which is crucial for the convergence of stochastic integrals.
Definition 1.
A sequence of continuous -valued semimartingales is said to satisfy the UT condition if for each , the decomposition has
where is a local martingale and is a process of locally finite variation.
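Since the displayed condition of Definition 1 is standard, we record here a commonly used form of the UT condition for continuous semimartingales (our reconstruction following [11,15], not necessarily the authors' exact display): for the decompositions \(X^n = M^n + A^n\), one requires, for every \(t > 0\),

```latex
% Hedged reconstruction of a standard UT-type condition:
% for every t > 0, the family
\left\{\, [M^n]_t \;+\; \int_0^t \lvert \mathrm{d}A^n_s \rvert \,\right\}_{n \ge 1}
\quad \text{is tight, i.e., bounded in probability.}
```

Tightness of the quadratic variations and of the total variations of the finite-variation parts is what guarantees the stability of stochastic integrals along the sequence.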
Now, we provide the assumptions of this article.
Assumption 1.
is Lipschitz, which means there exists a constant , such that
Assumption 2.
ξ is measurable. For , is measurable, and
Assumption 3.
Assumption 4.
Assumption 5.
for , , .
Assumption 6.
For any bounded continuous function g,
Remark 3.
At first glance, Assumption 6 may seem quite strange. However, it is a technical condition needed to apply Lemma 1 to prove the convergence of filtrations when is not an i.i.d. sequence.
Assumption 7.
is a family of independent and identically distributed Gaussian random variables, and , .
Remark 4.
Actually, under Assumptions 5 and 6 or under Assumption 7, we can obtain through Lemma 4. Since W is continuous, we also obtain that converges in law under the locally uniform topology because of the properties of the topology.
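As a quick numerical sanity check on this remark (our own illustration, not part of the paper), one can simulate the terminal value of the scaled partial-sum process for both choices of increments and verify that it matches the N(0, 1) law of W at time 1 in mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 2000  # n increments per path, m independent paths

def terminal_value(eps):
    """W^n_1 = n^{-1/2} (eps_1 + ... + eps_n), computed path by path."""
    return eps.sum(axis=1) / np.sqrt(n)

# two choices of increments covered by the paper's assumptions
bernoulli = rng.choice([-1.0, 1.0], size=(m, n))  # i.i.d. Bernoulli (Briand et al.)
gaussian = rng.standard_normal((m, n))            # i.i.d. Gaussian (Assumption 7)

for eps in (bernoulli, gaussian):
    w1 = terminal_value(eps)
    print(round(w1.mean(), 3), round(w1.var(), 3))  # should be near 0 and 1
```

Both cases produce sample means near 0 and sample variances near 1, consistent with the Donsker-type convergence to Brownian motion.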
Now, we present the main results of this article. Note that the results are based on either Assumption 5 or Assumption 7, which cannot hold simultaneously, so the conditions for the main results are divided into two independent sets of assumptions.
Theorem 1.
Under Assumptions 1–4, together with Assumptions 5 and 6 or with Assumption 7, we have
Furthermore,
Now, we provide an example by applying Theorem 1.
Theorem 2.
Let be defined as follows:
where satisfy Assumptions 5 and 6 or Assumption 7 (without the superscript n). Let be the solution of the BSDE (1) with and be defined by (7) with . If is a bounded continuous function, then .
Proof.
From Donsker’s theorem and the Skorokhod representation theorem, we obtain that there exists a probability space, with a Brownian motion W and a sequence of satisfying Assumptions 5 and 6 or Assumption 7 and
satisfy
As g is a bounded continuous function, Assumptions 2 and 3 are trivially satisfied. Then, we can apply Theorem 1 to conclude. □
3. Convergence of Filtration
Before proving our main results, we first provide the conclusion of the convergence of filtration. This conclusion is crucial for us to prove the main results of this article.
We denote the natural filtration generated by as in . We assume is right continuous. We first provide a necessary and sufficient condition for the convergence of filtration.
Lemma 1
(Coquet et al. [12]). Let S be an adapted continuous process on , and let be the natural filtration generated by S, assumed to be right continuous. The following statements are equivalent:
- (1).
- (2).
- For all , , and non-negative real numbers , in topology,
Based on the above results, we have
Theorem 3.
Set as a measurable integrable random variable; X is a measurable integrable random variable:
Assume
- (a)
- (b)
Under Assumptions 4–6 or 4 and 7, we have
Proof.
From Remark 1.(2) in Coquet et al. [12], if we can prove , together with condition (b), we can obtain . Hence, we now prove . According to Assumptions 4–6, we first prove that, for fixed ,
According to Assumption 4, we only need to prove
For ,
According to Assumptions 4 and 6, we have . Therefore, we have proved the pointwise convergence in probability of , which is uniformly integrable for fixed t, to , which is a continuous martingale. Then, using Lemma 1 and Proposition 1.2 in Aldous [13], we can obtain
If Assumptions 4 and 7 hold, the convergence of filtration is a direct corollary of Proposition 2 of [12] since is a sequence of processes with independent increments. Therefore we have .
Next, we proceed to prove and . Under Assumptions 4–6 or 4 and 7, taking advantage of the Doob inequality and condition (a),
From Lemma 7, we could obtain
For , if Assumptions 4–6 hold, this trivially satisfies
On the other hand, if Assumptions 4 and 7 hold, noting that is bounded and
it is easy to verify (15) using Lemma 5. Hence, using the same proof, we have
Combining this with (14), we have
Since convergence in probability is preserved under addition, we have proved the theorem. □
Using the above theorem, we can obtain the following theorem, in analogy with Corollary 3.2 in the work of Briand et al. [9]. The proof is the same as that of the corollary, so we present only the statement.
Theorem 4.
Under the condition of Theorem 3, a predictable process exists, and , such that
and
4. Proof of Theorem 1
Equation (9) is a direct corollary of (10); therefore, we proceed to prove (10). Set as the solution of the backward stochastic differential equation:
and define
Set ; we have
where , and .
Using the Picard iteration, we can obtain in the sense of (10); this is a classical conclusion and can be found in the work of Zhang [14]. Therefore, only the convergence of the other terms needs to be considered, for which we need a few lemmas.
Lemma 2.
Under Assumptions 5 and 6 or 7, there exists , such that when , ,
Proof.
First assume we have Assumptions 5 and 6; then, we need to prove that there exists , such that , for all ,
where
For notational convenience, we write instead of , and instead of . Set ; since , we have ,
Since f is Lipschitz continuous, for ,
Furthermore, squaring the last term in (22) and using Assumption 5, we derive
On the other hand, using (24), we know
Therefore, if , the above inequality implies, for ,
Since is a martingale difference sequence, using the expectation, we have
Furthermore, using (26), we can obtain
Because of Assumption 5, we can easily verify that is a martingale. Using the Burkholder–Davis–Gundy inequality, , it follows that
Using (27), we have
where , set . Choose , such that . Let , where ; we require , so should be greater than or equal to . When , this quantity tends towards . Hence, we set . On the other hand, when , . Based on the analysis above, there exists , such that when ,
Now, we prove that for ,
In fact, since ,
then,
so we have
Under Assumption 7, estimates similar to (20)–(23) and (25) can be obtained. Meanwhile, it should be noted that (24) can be estimated as follows:
Hence,
Combining (25), we have
Taking expectations on both sides, for , since is independent and identically distributed, we can also obtain (27). From (28), we have
According to the Burkholder–Davis–Gundy inequality, and Assumption 7, we have:
The remaining parts can be obtained similarly. □
Lemma 3.
Under Assumptions 1–4, together with Assumptions 5 and 6 or with Assumption 7, for , when , we have
Proof.
We will use induction to prove that if , then we have . For convenience of notation, we omit the p and consider everything in a continuous setting; hence, by virtue of Remark 2, (16) and (17) become
where . Suppose we already know that converges to and we only need to prove that converges to . Now, we can use induction on p. For , , set
which satisfies
thus, is a -martingale. Since , we have
To use Theorems 3 and 4, we need to prove the convergence of under . Since we have
from Assumption 3, the equation above converges under . Applying Theorems 3 and 4, we have , where
and
Therefore, we can obtain
Our task is to prove
Hence, we only need to prove
where , and the result clearly follows from Riemann integration. For the case of , using an analysis similar to that of Briand et al. [9], set
which satisfies
Hence, is a -martingale. Since , we have
To use Theorems 3 and 4, we need to prove the convergence of under . Noting that and are piecewise constant and Y, Z are continuous, we have
The equation above converges under because of the boundedness of ; we refer to [14], Theorem 4.2.1. Thanks to Theorems 3 and 4, using the same steps as in , we again have
where
Now, we proceed to show
so we only need to prove
However, we have just proved this in (29), which finishes the proof. □
Now, we prove Theorem 1. Due to the two lemmas above, we only need to show
This can be used to show
In fact,
From Lemmas 2 and 3, the third term converges in probability. For the first term, thanks to Doob’s inequality, we have
where the last equality is because of the boundedness of .
Furthermore, we proved in Theorem 3; combining this with
where Y is the solution of (1), we can prove the second convergence using Lemma 6. This completes the proof.
5. Some Lemmas
Lemma 4.
Under Assumptions 5 and 6 or Assumption 7, it holds that .
Proof.
We follow the notation of the book by Jacod and Shiryaev [11]. If Assumptions 5 and 6 hold, then ’s predictable characteristics “without truncation” are as follows:
Recall that the characteristics of standard Brownian motion are , , . Furthermore, for some , we have because of , implying
which tend toward 0 as . Additionally, as . Combining the above, thanks to [11] VII.3.7, we can obtain .
If Assumption 7 holds, the proof is the same as the above. □
Lemma 5.
Let , be a sequence of independent random variables with the standard normal distribution . Then, we have
Consequently, it holds that
Proof.
On the one hand, let for . We have
Using Jensen’s inequality and taking the expectations on both sides of the above inequality, we could obtain
Taking , we can obtain .
On the other hand, it is obvious that for all . Since is independent, we can obtain
For , since , we have
Set ; it is easy to verify that
Using Fubini’s theorem, we can show that, for fixed ,
It is well-known that for some , since is a Gaussian variable,
Therefore, , so, for some constants and all sufficiently large n, we have
which completes the first proof.
We now prove the last result. Since we have and (35), we need only verify
Using the same steps as at the beginning of the proof, and noting that
we get
Taking , because of , we can obtain
This completes the proof. □
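Lemma 5 controls the expected maximum of i.i.d. standard Gaussians, which grows like the square root of log n. A small Monte Carlo experiment (our own illustration; the function name is hypothetical) makes this growth visible and checks it against the classical envelope sqrt(2 log(2n)):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_max_abs_gaussian(n, m=4000):
    """Monte Carlo estimate of E[max_{i<=n} |eps_i|] for i.i.d. N(0,1) samples,
    averaged over m independent replications."""
    return np.abs(rng.standard_normal((m, n))).max(axis=1).mean()

# the expectation grows in n, staying below sqrt(2 log(2n))
for n in (10, 100, 1000):
    print(n, round(mean_max_abs_gaussian(n), 3))
```

The printed estimates increase slowly with n, consistent with the sqrt(log n) rate used in the lemma; the bound sqrt(2 log(2n)) follows from the standard Gaussian maximal inequality applied to the 2n variables ±eps_i.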
The following lemma is a simplified version of Theorem 1 in [12].
Lemma 6.
Let be a sequence of filtrations and be a filtration on , such that . Let X be an adapted continuous process, such that is integrable. Then,
We also list Proposition 1.5 (b) and Corollary 1.9 in [15] for readers’ convenience.
Lemma 7.
If a sequence of continuous local martingales satisfies
then it satisfies the UT condition. Moreover, if , then the quadratic variation process
Author Contributions
Writing—original draft, Y.G. and N.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Bismut, J. Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl. 1973, 44, 384–404.
- Pardoux, É.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61.
- El Karoui, N.; Peng, S.; Quenez, M. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71.
- Ma, J.; Protter, P.; Yong, J. Solving forward-backward stochastic differential equations explicitly—A four step scheme. Probab. Theory Relat. Fields 1994, 98, 339–359.
- Douglas, J.; Ma, J.; Protter, P. Numerical methods for forward-backward stochastic differential equations. Ann. Appl. Probab. 1996, 6, 940–968.
- Zhang, J. A numerical scheme for BSDEs. Ann. Appl. Probab. 2004, 14, 459–488.
- Chevance, D. Numerical methods for backward stochastic differential equations. In Numerical Methods in Finance; Publications of the Newton Institute; Cambridge University Press: Cambridge, UK, 1997; Volume 13, pp. 232–244.
- Coquet, F.; Mackevičius, V.; Mémin, J. Stability in D of martingales and backward equations under discretization of filtration. Stoch. Process. Appl. 1998, 75, 235–248.
- Briand, P.; Delyon, B.; Mémin, J. Donsker-type theorem for BSDEs. Electron. Commun. Probab. 2001, 6, 1–14.
- Donsker, M. An invariance principle for certain probability limit theorems. Mem. Am. Math. Soc. 1951, 6, 1–12.
- Jacod, J.; Shiryaev, A. Limit Theorems for Stochastic Processes, 2nd ed.; Grundlehren der Mathematischen Wissenschaften; Springer: Berlin/Heidelberg, Germany, 2003; Volume 288.
- Coquet, F.; Mémin, J.; Słominski, L. On weak convergence of filtrations. In Séminaire de Probabilités XXXV; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2001; Volume 1755, pp. 306–328.
- Aldous, D. Stopping times and tightness. Ann. Probab. 1978, 6, 335–340.
- Zhang, J. Backward Stochastic Differential Equations: From Linear to Fully Nonlinear Theory; Springer: New York, NY, USA, 2017.
- Mémin, J.; Słominski, L. Condition UT et stabilité en loi des solutions d’équations différentielles stochastiques. In Séminaire de Probabilités XXV; Springer: Berlin/Heidelberg, Germany, 2006; pp. 162–177.