Article

Backward Doubly Stochastic Differential Equations with Markov Chains and a Comparison Theorem

Ning Ma and Zhen Wu *
School of Mathematics, Shandong University, Jinan 250100, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(12), 1953; https://doi.org/10.3390/sym12121953
Submission received: 28 October 2020 / Revised: 15 November 2020 / Accepted: 24 November 2020 / Published: 26 November 2020

Abstract

In this paper we study the existence and uniqueness of solutions for a class of backward doubly stochastic differential equations (BDSDEs) with Markov chains. By generalizing Itô's formula, we treat this problem under a Lipschitz condition; then, thanks to the Yosida approximation, we solve it under a monotonicity condition. Finally, we give comparison theorems for such equations under the two sets of conditions respectively.

1. Introduction

Backward stochastic differential equations (BSDEs) were first introduced by Pardoux and Peng [1]. A class of BDSDEs was then introduced by Pardoux and Peng [2] in 1994, with stochastic integrals in two different directions: the equations involve both a forward stochastic integral $dW_t$ and a backward stochastic integral $dB_t$. They proved the existence and uniqueness of solutions for BDSDEs under uniformly Lipschitz conditions. Since then, much effort has been devoted to relaxing the Lipschitz condition on the coefficients. For instance, Lepeltier and San Martín [3] proved in 1997 the existence of a solution for one-dimensional BSDEs when the coefficient is only continuous with linear growth. In 2001, Bahlali [4] dealt with multi-dimensional BSDEs with locally Lipschitz coefficients of sublinear growth. Then, in 2004, Bahlali et al. [5] studied the existence and uniqueness of reflected BSDEs with monotone and locally monotone coefficients. Inspired by the results above, Wu and Zhang [6] obtained the existence and uniqueness of solutions to BDSDEs with locally monotone and locally Lipschitz coefficients. On the other hand, Lepeltier and San Martín [7] and Kobylanski [8] studied BSDEs with generators of quadratic growth. In 2010, Zhang and Zhao [9] proved the existence and uniqueness of $L^2_\rho(\mathbb{R}^d;\mathbb{R}^1)\otimes L^2_\rho(\mathbb{R}^d;\mathbb{R}^d)$-valued solutions for BDSDEs under linear growth and monotonicity conditions.
The comparison theorem is one of the important properties of solutions of BDSDEs. It is important not only in basic theory but also in stochastic control and financial mathematics; for example, it can be used to study viscosity solutions of the associated stochastic partial differential equations. Many works have been devoted to it. In 1997, El Karoui et al. [10] studied the comparison theorem for one-dimensional BSDEs, and Briand et al. [11] later gave a converse comparison theorem for one-dimensional BSDEs. After that, Hu and Peng [12] proved a comparison theorem for multidimensional BSDEs. Moreover, Yin and Situ [13] proved a comparison theorem for forward-backward SDEs with jumps and with random terminal time. In 2005, Shi et al. [14] first gave a comparison theorem for one-dimensional BDSDEs and used it to show the existence of a minimal solution of BDSDEs under linear growth conditions. Wu and Xu [15] proved comparison theorems for one-dimensional and multi-dimensional forward-backward SDEs by a probabilistic method and a duality technique in 2009. In this paper, we prove comparison theorems for BDSDEs with Markov chains under Lipschitz and monotone conditions.
However, Brownian motion alone cannot provide a good description of some random phenomena in reality, such as jumps in financial markets. In order to construct and describe more realistic models, we introduce Markov chains into the study of BDSDEs; such models better reflect a random environment and have strong application significance. For example, regime-switching models in finance have received significant attention in recent years. They modulate the system with a continuous-time finite-state Markov chain, each state representing a regime of the system or the level of an economic indicator, depending on the market mode, which switches among a finite number of states. Regime-switching models rest on BSDEs driven by Markov chains with a two-time-scale structure (see, e.g., [16,17]), and such equations are in turn based on BSDEs with general Markov chains. Therefore, we relax the conditions of BSDEs with Markov chains and add a backward Brownian motion to the equation, which provides a theoretical basis for the study of more general regime-switching models.
The organization of our paper is as follows. In Section 2, we recall some definitions and results for Markov chains and give a generalized Itô formula. The existence and uniqueness results for BDSDEs with Markov chains under the Lipschitz condition and under the monotone condition are given in Section 3 and Section 4, respectively. In the last section, we prove comparison theorems for these kinds of BDSDEs.

2. Preliminaries

We set $(\Omega,\mathcal{F},P)$ as a probability space. Let $\{W_t,\ 0\le t\le T\}$ and $\{B_t,\ 0\le t\le T\}$ be two mutually independent standard Brownian motions defined on $(\Omega,\mathcal{F},P)$, with values in $\mathbb{R}^d$ and $\mathbb{R}^l$ respectively. Let $\{\alpha_t,\ 0\le t\le T\}$ be a finite-state Markov chain with state space $I=\{1,2,\dots,m\}$ for some positive integer $m$. The transition intensities are $\lambda_{ij}(t)$ for $i\ne j$, with $\lambda_{ij}$ nonnegative and bounded, and $\lambda_{ii}=-\sum_{j\in I\setminus\{i\}}\lambda_{ij}$. Assume that $W$, $B$ and $\alpha$ are independent. Let $\mathcal{N}$ be the class of $P$-null sets of $\mathcal{F}$. For each $t\in[0,T]$, we define
$$\mathcal{F}_t \triangleq \mathcal{F}_t^W \vee \mathcal{F}_{t,T}^B \vee \mathcal{F}_t^\alpha,$$
where for any process $\eta_t$, $\mathcal{F}_{s,t}^\eta=\sigma\{\eta_r-\eta_s;\ s\le r\le t\}\vee\mathcal{N}$ and $\mathcal{F}_t^\eta=\mathcal{F}_{0,t}^\eta$.
Remark 1.
The collection $\{\mathcal{F}_t,\ t\in[0,T]\}$ does not constitute a filtration, since it is neither increasing nor decreasing in $t$.
Let $|x|$ denote the Euclidean norm of a vector $x\in\mathbb{R}^k$. For a $d\times d$ matrix $A$, we define $\|A\|=\sqrt{\operatorname{Tr}(AA^*)}$.
For any $n\in\mathbb{N}$, let $M^2([t,T];\mathbb{R}^n)$ denote the set of (classes of $dP\times dt$ a.e. equal) $n$-dimensional jointly measurable random processes $\{\varphi_s;\ s\in[t,T]\}$ which satisfy:
(i)
$E\int_t^T|\varphi_s|^2\,ds<\infty$,
(ii)
$\varphi_s$ is $\mathcal{F}_s$-measurable, for a.e. $s\in[t,T]$.
We denote similarly by $S^2([t,T];\mathbb{R}^n)$ (resp. $N^2([t,T]\times I;\mathbb{R}^k)$) the set of continuous $n$-dimensional random processes which satisfy:
(i)
$E\big[\sup_{t\le s\le T}|\varphi_s|^2\big]<+\infty$ (resp. $E\sum_{j\in I}\int_t^T|\varphi_r(j)|^2\lambda_r(j)\,dr<+\infty$),
(ii)
$\varphi_s$ is $\mathcal{F}_s$-measurable, for any $s\in[t,T]$.
Let $V_t(j)$ denote the number of jumps of $\{\alpha_s,\ 0\le s\le T\}$ from any state in $I$ into state $j$ between time $0$ and $t$, and let $V$ denote the corresponding integer-valued random measure on $([0,T]\times I,\ \mathcal{B}([0,T])\otimes\mathcal{B}(I))$. The compensator of $V_t(j)$ is given by $\mathbf{1}_{\{\alpha_t\ne j\}}\lambda_{\alpha_t,j}\,dt$, i.e.,
$$d\tilde{V}_t(j)\triangleq dV_t(j)-\mathbf{1}_{\{\alpha_t\ne j\}}\lambda_{\alpha_t,j}\,dt$$
is a martingale (compensated measure). We set $\lambda_t(j)=\mathbf{1}_{\{\alpha_t\ne j\}}\lambda_{\alpha_t,j}$. Then the canonical special semimartingale representation for $\alpha$ (see [18,19]) is given by
$$d\alpha_t=\sum_{j\in I}\lambda_{\alpha_t,j}(t)\,(j-\alpha_t)\,dt+\sum_{j\in I}(j-\alpha_t)\,d\tilde{V}_t(j).$$
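As a concrete illustration of this setup, the following Python sketch simulates a finite-state chain on a time grid from a hypothetical bounded intensity matrix and tracks the jump counts $V_t(j)$ together with their compensators, so that the compensated differences behave approximately as martingales. The intensity matrix, the grid and the state labels $0,\dots,m-1$ are illustrative choices, not quantities from the analysis above.

```python
import numpy as np

# Illustrative sketch: simulate a finite-state Markov chain alpha with bounded
# intensities on [0, T] (first-order scheme on a grid), count the jumps V_t(j)
# into each state j, and accumulate the compensator
#   A_t(j) = int_0^t 1_{alpha_s != j} * lambda_{alpha_s, j} ds,
# so that V_t(j) - A_t(j) is approximately a martingale.

rng = np.random.default_rng(0)

m, T, n_steps = 3, 1.0, 10_000
dt = T / n_steps

# Hypothetical bounded intensity matrix (rows sum to zero).
Q = np.array([[-1.0, 0.6, 0.4],
              [0.5, -1.2, 0.7],
              [0.3, 0.9, -1.2]])

alpha = 0                      # start in state 0 (states labelled 0, ..., m-1 here)
V = np.zeros(m)                # number of jumps into each state
A = np.zeros(m)                # compensators
for _ in range(n_steps):
    # compensator increment: 1_{alpha != j} * lambda_{alpha, j} * dt
    for j in range(m):
        if j != alpha:
            A[j] += Q[alpha, j] * dt
    # first-order transition over one time step
    probs = Q[alpha] * dt
    probs[alpha] = 1.0 + Q[alpha, alpha] * dt
    new_state = rng.choice(m, p=probs)
    if new_state != alpha:
        V[new_state] += 1
    alpha = new_state

print("V_T(j):", V, "  A_T(j):", A)   # V_T(j) - A_T(j) should be small on average
```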
We need the following lemma, which is an extension of the well-known Itô formula. The proof is a combination of Theorem 5.1 in Chapter 2 of [20] and Lemma 1.3 in [2], so we state the lemma without proof.
Lemma 1.
Let $a\in S^2([0,T];\mathbb{R}^k)$, $b\in M^2([0,T];\mathbb{R}^k)$, $c\in M^2([0,T];\mathbb{R}^{k\times l})$, $d\in M^2([0,T];\mathbb{R}^{k\times d})$ and $e\in N^2([0,T]\times I;\mathbb{R}^k)$ be such that:
$$a_t=a_0+\int_0^t b_s\,ds+\int_0^t c_s\,dB_s+\int_0^t d_s\,dW_s+\sum_{j\in I}\int_0^t e_s(j)\,d\tilde{V}_s(j),\quad 0\le t\le T.$$
Then if $\phi\in C^2(\mathbb{R}^k)$,
$$\begin{aligned}
\phi(a_t)=\ &\phi(a_0)+\int_0^t\langle\phi'(a_s),b_s\rangle\,ds+\int_0^t\langle\phi'(a_s),c_s\,dB_s\rangle+\int_0^t\langle\phi'(a_s),d_s\,dW_s\rangle\\
&-\frac12\int_0^t\operatorname{Tr}\!\big[\phi''(a_s)c_sc_s^*\big]\,ds+\frac12\int_0^t\operatorname{Tr}\!\big[\phi''(a_s)d_sd_s^*\big]\,ds\\
&+\sum_{j\in I}\int_0^{t+}\big[\phi(a_{s-}+e_s(j))-\phi(a_{s-})\big]\,d\tilde{V}_s(j)+\sum_{j\in I}\int_0^{t}\big[\phi(a_s+e_s(j))-\phi(a_s)-\langle e_s(j),\phi'(a_s)\rangle\big]\lambda_s(j)\,ds.
\end{aligned}$$

3. BDSDEs with Lipschitz Conditions

To begin, we study BDSDEs under a Lipschitz condition on the coefficients. Let
$$f:\Omega\times[0,T]\times\mathbb{R}^k\times\mathbb{R}^{k\times d}\times I\to\mathbb{R}^k,\qquad g:\Omega\times[0,T]\times\mathbb{R}^k\times\mathbb{R}^{k\times d}\times I\to\mathbb{R}^{k\times l}$$
be jointly measurable such that for any $(y,z,i)\in\mathbb{R}^k\times\mathbb{R}^{k\times d}\times I$,
$$f(\cdot,y,z,i)\in M^2\big([0,T];\mathbb{R}^k\big),\tag{1}$$
$$g(\cdot,y,z,i)\in M^2\big([0,T];\mathbb{R}^{k\times l}\big).\tag{2}$$
Moreover, we assume that there exist constants $c>0$ and $0<\beta<1$ such that for any $(\omega,t,i)\in\Omega\times[0,T]\times I$ and $(y_1,z_1),(y_2,z_2)\in\mathbb{R}^k\times\mathbb{R}^{k\times d}$,
(H1) $\|g(t,y_1,z_1,i)-g(t,y_2,z_2,i)\|^2\le c|y_1-y_2|^2+\beta\|z_1-z_2\|^2$;
(H2) $|f(t,y_1,z_1,i)-f(t,y_2,z_2,i)|^2\le c\big(|y_1-y_2|^2+\|z_1-z_2\|^2\big)$.
Given $\xi\in L^2(\Omega,\mathcal{F}_T,P;\mathbb{R}^k)$, we consider the following backward doubly stochastic differential equation:
$$Y_t=\xi+\int_t^T f(s,Y_s,Z_s,\alpha_s)\,ds+\int_t^T g(s,Y_s,Z_s,\alpha_s)\,dB_s-\int_t^T Z_s\,dW_s+\sum_{j\in I}\int_t^T U_s(j)\,d\tilde{V}_s(j),\quad 0\le t\le T,\tag{3}$$
where the integral with respect to $B_t$ is a backward Itô integral and the integral with respect to $W_t$ is a standard forward Itô integral. One can refer to Nualart and Pardoux [21] for more details.
The main objective of this section is to prove:
Theorem 1.
Under the above conditions (H1) and (H2), Equation (3) has a unique solution
$$(Y,Z,U)\in S^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k).$$
Before we prove the theorem, let us establish the same result in the case when $f$ and $g$ do not depend on $Y$ and $Z$. Given $f\in N^2([0,T]\times I;\mathbb{R}^k)$, $g\in N^2([0,T]\times I;\mathbb{R}^{k\times l})$ and $\xi$ as above, consider the BDSDE:
$$Y_t=\xi+\int_t^T f(s,\alpha_s)\,ds+\int_t^T g(s,\alpha_s)\,dB_s-\int_t^T Z_s\,dW_s+\sum_{j\in I}\int_t^T U_s(j)\,d\tilde{V}_s(j),\quad 0\le t\le T.\tag{4}$$
Proposition 1.
There exists a unique triple
$$(Y,Z,U)\in S^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k)$$
which solves Equation (4).
Proof. 
Uniqueness. Let $(\bar Y,\bar Z,\bar U)$ be the difference of two solutions. Then
$$\bar Y_t+\int_t^T\bar Z_s\,dW_s-\sum_{j\in I}\int_t^T\bar U_s(j)\,d\tilde{V}_s(j)=0,\quad 0\le t\le T.$$
Hence, by orthogonality,
$$E|\bar Y_t|^2+E\int_t^T\|\bar Z_s\|^2\,ds+E\int_t^T\sum_{j\in I}|\bar U_s(j)|^2\lambda_s(j)\,ds=0.$$
Then $\bar Y_t\equiv 0$ $P$-a.s., and $\bar Z_t\equiv 0$, $\bar U_t\equiv 0$ $dt\otimes dP$-a.e.
Existence. We define the filtration $(\mathcal{G}_t)_{0\le t\le T}$ by
$$\mathcal{G}_t=\mathcal{F}_t^W\vee\mathcal{F}_t^\alpha\vee\mathcal{F}_T^B$$
and the $\mathcal{G}_t$-square-integrable martingale
$$M_t=E\Big[\xi+\int_0^T f(s,\alpha_s)\,ds+\int_0^T g(s,\alpha_s)\,dB_s\,\Big|\,\mathcal{G}_t\Big],\quad 0\le t\le T.$$
If we set $A(j):=\{(t,\omega);\ \alpha_t(\omega)=j\}$, $j\in I$, $t\in[0,T]$, then by the Burkholder–Davis–Gundy inequality and Hölder's inequality there exists a constant $0<C<\infty$ such that
$$E|M_t|^2\le C\,E|\xi|^2+C\,E\int_0^T|f(s,\alpha_s)|^2\,ds+C\,E\int_0^T\|g(s,\alpha_s)\|^2\,ds.$$
By the integrability assumption on $f$, we have
$$E\int_0^T|f(s,\alpha_s)|^2\,ds=E\int_0^T\sum_{j\in I}|f(s,j)|^2\mathbf{1}_{A(j)}\,ds=\sum_{j\in I}E\int_0^T|f(s,j)|^2\mathbf{1}_{A(j)}\,ds\le\sum_{j\in I}E\int_0^T|f(s,j)|^2\,ds<+\infty.$$
Similarly, $E\int_0^T\|g(s,\alpha_s)\|^2\,ds<+\infty$, so $E|M_t|^2<+\infty$. Then, by the martingale representation theorem (see Crépey and Matoussi [18]), there exist $Z\in M^2([0,T];\mathbb{R}^{k\times d})$ and $U\in N^2([0,T]\times I;\mathbb{R}^k)$ such that
$$M_t=M_0+\int_0^t Z_s\,dW_s+\sum_{j\in I}\int_0^t U_s(j)\,d\tilde{V}_s(j).$$
Hence
$$M_T=M_t+\int_t^T Z_s\,dW_s+\sum_{j\in I}\int_t^T U_s(j)\,d\tilde{V}_s(j).$$
Replacing $M_T$ and $M_t$ by their defining formulas and subtracting $\int_0^t f(s,\alpha_s)\,ds+\int_0^t g(s,\alpha_s)\,dB_s$ from both sides of the equality yields
$$Y_t=\xi+\int_t^T f(s,\alpha_s)\,ds+\int_t^T g(s,\alpha_s)\,dB_s-\int_t^T Z_s\,dW_s-\sum_{j\in I}\int_t^T U_s(j)\,d\tilde{V}_s(j),$$
where
$$Y_t\triangleq E\Big[\xi+\int_t^T f(s,\alpha_s)\,ds+\int_t^T g(s,\alpha_s)\,dB_s\,\Big|\,\mathcal{G}_t\Big].$$
It remains to prove that $Y_t$, $Z_t$ and $U_t$ are in fact $\mathcal{F}_t$-measurable. For $Y_t$ this is obvious, since for each $t$,
$$Y_t=E\big[\Theta\,\big|\,\mathcal{F}_t\vee\mathcal{F}_t^B\big],$$
where $\Theta$ is $\mathcal{F}_T^W\vee\mathcal{F}_T^\alpha\vee\mathcal{F}_{t,T}^B$-measurable. Hence $\mathcal{F}_t^B$ is independent of $\mathcal{F}_t\vee\sigma(\Theta)$, and
$$Y_t=E\big[\Theta\,\big|\,\mathcal{F}_t\big].$$
Now
$$\int_t^T Z_s\,dW_s+\sum_{j\in I}\int_t^T U_s(j)\,d\tilde{V}_s(j)=\xi+\int_t^T f(s,\alpha_s)\,ds+\int_t^T g(s,\alpha_s)\,dB_s-Y_t,$$
and the right-hand side is $\mathcal{F}_T^W\vee\mathcal{F}_T^\alpha\vee\mathcal{F}_{t,T}^B$-measurable. Hence, from the martingale representation theorem, $\{Z_s,\ t<s<T\}$ and $\{U_s,\ t<s<T\}$ are $\mathcal{F}_s^W\vee\mathcal{F}_s^\alpha\vee\mathcal{F}_{t,T}^B$-adapted. Consequently, $Z_s$ and $U_s$ are $\mathcal{F}_s^W\vee\mathcal{F}_s^\alpha\vee\mathcal{F}_{t,T}^B$-measurable for any $t<s$, and hence $\mathcal{F}_s$-measurable. □
Proof of Theorem 1. 
Uniqueness. Let $\{(Y^1_t,Z^1_t,U^1_t)\}$ and $\{(Y^2_t,Z^2_t,U^2_t)\}$ be two solutions. Define
$$\bar Y_t=Y^1_t-Y^2_t,\quad \bar Z_t=Z^1_t-Z^2_t,\quad \bar U_t=U^1_t-U^2_t,\quad 0\le t\le T.$$
Then
$$\bar Y_t=\int_t^T\big[f(s,Y^1_s,Z^1_s,\alpha_s)-f(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds+\int_t^T\big[g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big]\,dB_s-\int_t^T\bar Z_s\,dW_s-\sum_{j\in I}\int_t^T\bar U_s(j)\,d\tilde{V}_s(j).$$
Applying Lemma 1 to $|\bar Y_t|^2$ yields:
$$E|\bar Y_t|^2+E\int_t^T\|\bar Z_s\|^2\,ds+\sum_{j\in I}E\int_t^T|\bar U_s(j)|^2\lambda_s(j)\,ds=2E\int_t^T\big\langle f(s,Y^1_s,Z^1_s,\alpha_s)-f(s,Y^2_s,Z^2_s,\alpha_s),\bar Y_s\big\rangle\,ds+E\int_t^T\big\|g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big\|^2\,ds.$$
Hence, from (H1) and (H2) and the inequality $ab\le\frac{1}{2(1-\beta)}a^2+\frac{1-\beta}{2}b^2$,
$$E|\bar Y_t|^2+E\int_t^T\|\bar Z_s\|^2\,ds+\sum_{j\in I}E\int_t^T|\bar U_s(j)|^2\lambda_s(j)\,ds\le c(\beta)\,E\int_t^T|\bar Y_s|^2\,ds+\frac{1-\beta}{2}E\int_t^T\|\bar Z_s\|^2\,ds+\beta\,E\int_t^T\|\bar Z_s\|^2\,ds,$$
where $0<\beta<1$ is the constant appearing in (H1). Consequently,
$$E|\bar Y_t|^2+\frac{1-\beta}{2}E\int_t^T\|\bar Z_s\|^2\,ds+\sum_{j\in I}E\int_t^T|\bar U_s(j)|^2\lambda_s(j)\,ds\le c(\beta)\,E\int_t^T|\bar Y_s|^2\,ds.$$
From Gronwall's lemma, $E|\bar Y_t|^2=0$ for $0\le t\le T$, and hence $E\int_0^T\|\bar Z_s\|^2\,ds=0$ and $\sum_{j\in I}E\int_0^T|\bar U_s(j)|^2\lambda_s(j)\,ds=0$.
Existence. We define a sequence $(Y^i_t,Z^i_t,U^i_t)$, $i=0,1,2,\dots$, recursively as follows. Let $Y^0_t\equiv 0$, $Z^0_t\equiv 0$ and $U^0_t\equiv 0$. By Proposition 1, for any $(Y^i_t,Z^i_t,U^i_t)\in S^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k)$, there exists a unique $(Y^{i+1}_t,Z^{i+1}_t,U^{i+1}_t)$ satisfying:
$$Y^{i+1}_t=\xi+\int_t^T f(s,Y^i_s,Z^i_s,\alpha_s)\,ds+\int_t^T g(s,Y^i_s,Z^i_s,\alpha_s)\,dB_s-\int_t^T Z^{i+1}_s\,dW_s+\sum_{j\in I}\int_t^T U^{i+1}_s(j)\,d\tilde{V}_s(j),\quad 0\le t\le T.$$
Moreover, by Proposition 1, $(Y^{i+1}_t,Z^{i+1}_t,U^{i+1}_t)\in S^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k)$. Let $\bar Y^{i+1}_t\triangleq Y^{i+1}_t-Y^i_t$, $\bar Z^{i+1}_t\triangleq Z^{i+1}_t-Z^i_t$ and $\bar U^{i+1}_t\triangleq U^{i+1}_t-U^i_t$, $0\le t\le T$. Applying Itô's formula (Lemma 1) to $|\bar Y^{i+1}_t|^2e^{\theta t}$, we have
$$\begin{aligned}
&E\big[|\bar Y^{i+1}_t|^2e^{\theta t}\big]+\theta E\int_t^T|\bar Y^{i+1}_s|^2e^{\theta s}\,ds+E\int_t^T\Big(\|\bar Z^{i+1}_s\|^2+\sum_{j\in I}|\bar U^{i+1}_s(j)|^2\lambda_s(j)\Big)e^{\theta s}\,ds\\
&\quad=2E\int_t^T\big\langle f(s,Y^i_s,Z^i_s,\alpha_s)-f(s,Y^{i-1}_s,Z^{i-1}_s,\alpha_s),\bar Y^{i+1}_s\big\rangle e^{\theta s}\,ds+E\int_t^T\big\|g(s,Y^i_s,Z^i_s,\alpha_s)-g(s,Y^{i-1}_s,Z^{i-1}_s,\alpha_s)\big\|^2e^{\theta s}\,ds.
\end{aligned}$$
There exist constants $c,\gamma>0$ such that
$$\begin{aligned}
&E\big[|\bar Y^{i+1}_t|^2e^{\theta t}\big]+(\theta-\gamma)E\int_t^T|\bar Y^{i+1}_s|^2e^{\theta s}\,ds+E\int_t^T\Big(\|\bar Z^{i+1}_s\|^2+\sum_{j\in I}|\bar U^{i+1}_s(j)|^2\lambda_s(j)\Big)e^{\theta s}\,ds\\
&\quad\le E\int_t^T\Big(c|\bar Y^{i}_s|^2+\frac{1+\beta}{2}\|\bar Z^{i}_s\|^2\Big)e^{\theta s}\,ds\le E\int_t^T\Big(c|\bar Y^{i}_s|^2+\frac{1+\beta}{2}\|\bar Z^{i}_s\|^2+\frac{1+\beta}{2}\sum_{j\in I}|\bar U^{i}_s(j)|^2\lambda_s(j)\Big)e^{\theta s}\,ds.
\end{aligned}$$
Now choose $\theta=\gamma+\frac{2c}{1+\beta}$ and define $\bar c=\frac{2c}{1+\beta}$. It follows immediately that
$$E\int_t^T\Big(\bar c|\bar Y^{i+1}_s|^2+\|\bar Z^{i+1}_s\|^2+\sum_{j\in I}|\bar U^{i+1}_s(j)|^2\lambda_s(j)\Big)e^{\theta s}\,ds\le\Big(\frac{1+\beta}{2}\Big)^{i}E\int_t^T\Big(\bar c|Y^{1}_s|^2+\|Z^{1}_s\|^2+\sum_{j\in I}|U^{1}_s(j)|^2\lambda_s(j)\Big)e^{\theta s}\,ds.$$
Since $\frac{1+\beta}{2}<1$, $\{(Y^i_t,Z^i_t,U^i_t)\}_{i=0,1,2,\dots}$ is a Cauchy sequence in $M^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k)$. It is then easy to conclude that $\{Y^i_t\}_{i=0,1,2,\dots}$ is also Cauchy in $S^2([0,T];\mathbb{R}^k)$ and that
$$(Y_t,Z_t,U_t)=\lim_{i\to\infty}(Y^i_t,Z^i_t,U^i_t)$$
is a solution of Equation (3). □
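To visualise the Picard scheme behind this existence argument, the following sketch runs the analogous iteration for a plain BSDE driven only by $W$ (no backward integral $dB$ and no Markov chain) on a binomial tree, where conditional expectations are exact. The driver, the terminal condition and the discretisation are illustrative assumptions; the sketch is not the construction used above, but it shows the geometric contraction of successive iterates.

```python
import numpy as np

# Toy illustration of a Picard iteration for a plain BSDE driven only by W,
# discretised on a binomial tree so that conditional expectations are exact.
# Driver f and terminal condition xi = cos(W_T) are hypothetical.

T, N, n_iter = 1.0, 50, 15
dt = T / N
sq = np.sqrt(dt)

def f(t, y, z):                                   # hypothetical Lipschitz driver
    return np.sin(y) + 0.5 * z

# node (n, k): time n*dt, k up-moves, W = (2k - n)*sqrt(dt)
xi = np.cos((2 * np.arange(N + 1) - N) * sq)      # terminal values on the tree

Y = [np.zeros(n + 1) for n in range(N + 1)]       # Picard iterate i = 0
Z = [np.zeros(n + 1) for n in range(N + 1)]

for i in range(n_iter):
    Y_new = [np.zeros(n + 1) for n in range(N + 1)]
    Z_new = [np.zeros(n + 1) for n in range(N + 1)]
    Y_new[N] = xi.copy()
    for n in range(N - 1, -1, -1):
        up, down = Y_new[n + 1][1:], Y_new[n + 1][:-1]
        Z_new[n] = (up - down) / (2 * sq)         # discrete martingale representation
        # Y^{i+1}_n = E[Y^{i+1}_{n+1} | F_n] + f(t_n, Y^i_n, Z^i_n) * dt
        Y_new[n] = 0.5 * (up + down) + f(n * dt, Y[n], Z[n]) * dt
    diff = max(np.max(np.abs(a - b)) for a, b in zip(Y_new, Y))
    Y, Z = Y_new, Z_new
    print(f"iteration {i + 1:2d}: sup|Y^(i+1) - Y^i| = {diff:.3e}")
```

The printed differences decrease geometrically, mirroring the contraction factor $\frac{1+\beta}{2}<1$ obtained in the estimate above.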

4. BDSDEs with Monotone Coefficients

In this section, we study BDSDEs with monotone coefficients. The main ideas come from Wu and Zhang [6]. Let $f$ and $g$ satisfy (1) and (2). Moreover, we assume:
(H3) for any fixed $(\omega,t,i)$, $f(\omega,t,\cdot,\cdot,i)$ is continuous;
(H4) there exist a constant $K>0$ and a process $\bar f_t\in M^2([0,T];\mathbb{R})$ such that
$$|f(t,y,z,i)|\le\bar f_t+K(|y|+\|z\|),\quad\forall\,t,y,z,i;$$
(H5) there exists $\mu\in\mathbb{R}$ such that
$$\langle y_1-y_2,\,f(t,y_1,z,i)-f(t,y_2,z,i)\rangle\le\mu|y_1-y_2|^2,\quad\forall\,t,y_1,y_2,z,i;$$
(H6) there exists $K>0$ such that
$$|f(t,y,z_1,i)-f(t,y,z_2,i)|\le K\|z_1-z_2\|,\quad\forall\,t,y,z_1,z_2,i.$$
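As a simple illustration of what (H3)-(H6) allow beyond the Lipschitz setting, the following sketch checks numerically that the hypothetical one-dimensional driver $f(t,y,z,i)=-\sqrt[3]{y}$ satisfies the monotonicity condition (H5) with $\mu=0$ and the growth condition (H4), while failing the Lipschitz bound of (H2) near the origin. The driver is an illustrative assumption, not an example taken from the assumptions above.

```python
import numpy as np

# Hypothetical one-dimensional driver: f(t, y, z, i) = -cbrt(y) is continuous,
# of linear growth (|f(y)| <= 1 + |y|, so (H4) holds), and non-increasing, so
# (y1 - y2)(f(y1) - f(y2)) <= 0 and (H5) holds with mu = 0; yet it is not
# Lipschitz in y near the origin, so (H2) fails and Theorem 1 does not apply.

rng = np.random.default_rng(1)
f = lambda y: -np.cbrt(y)

y1, y2 = rng.normal(size=100_000), rng.normal(size=100_000)
print("max of (y1-y2)(f(y1)-f(y2)):", ((y1 - y2) * (f(y1) - f(y2))).max())  # <= 0

for eps in (1e-2, 1e-4, 1e-6):
    # the incremental ratio |f(eps)-f(0)|/eps = eps**(-2/3) blows up as eps -> 0
    print(f"|f({eps:g}) - f(0)| / {eps:g} = {abs(f(eps) - f(0.0)) / eps:.1f}")
```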
Given $\xi\in L^2(\Omega,\mathcal{F}_T,P;\mathbb{R}^k)$, we consider Equation (3). The main result of this section is the following.
Theorem 2.
Under Conditions (H1) and (H3)-(H6), BDSDE (3) admits a unique solution.
Remark 2.
In the following we may assume, without loss of generality, that the constant $\mu$ in (H5) equals $0$. Indeed, let $(Y,Z,U)$ be the solution of BDSDE (3) and set, for each $s\in[t,T]$,
$$\bar Y_s=e^{\beta s}Y_s,\quad \bar Z_s=e^{\beta s}Z_s,\quad \bar U_s=e^{\beta s}U_s.$$
Applying Itô's formula to $\bar Y_t$, we see that
$$\begin{aligned}
d\bar Y_t&=d\big(e^{\beta t}Y_t\big)=\beta\bar Y_t\,dt+e^{\beta t}\,dY_t\\
&=\beta\bar Y_t\,dt-e^{\beta t}f(t,Y_t,Z_t,\alpha_t)\,dt+e^{\beta t}Z_t\,dW_t-e^{\beta t}g(t,Y_t,Z_t,\alpha_t)\,dB_t+\sum_{j\in I}e^{\beta t}U_t(j)\,d\tilde{V}_t(j)\\
&=-\bar f(t,\bar Y_t,\bar Z_t,\alpha_t)\,dt+\bar Z_t\,dW_t-\bar g(t,\bar Y_t,\bar Z_t,\alpha_t)\,dB_t+\sum_{j\in I}\bar U_t(j)\,d\tilde{V}_t(j),
\end{aligned}$$
where
$$\bar f(t,y,z,i):=-\beta y+e^{\beta t}f\big(t,e^{-\beta t}y,e^{-\beta t}z,i\big)$$
and
$$\bar g(t,y,z,i):=e^{\beta t}g\big(t,e^{-\beta t}y,e^{-\beta t}z,i\big).$$
It is easy to check that
$$\langle y-y',\,\bar f(t,y,z,i)-\bar f(t,y',z,i)\rangle=-\beta|y-y'|^2+e^{\beta t}\big\langle y-y',\,f(t,e^{-\beta t}y,e^{-\beta t}z,i)-f(t,e^{-\beta t}y',e^{-\beta t}z,i)\big\rangle\le(\mu-\beta)|y-y'|^2.$$
Let $\beta=\mu$; then the transformed processes $(\bar Y,\bar Z,\bar U)$ solve a BDSDE whose generator $\bar f$ satisfies
$$\langle y-y',\,\bar f(t,y,z,i)-\bar f(t,y',z,i)\rangle\le 0.$$
Before proving the theorem, we first recall the following definition and lemma from [6].
Definition 1.
Let $F:\mathbb{R}^n\to\mathbb{R}^n$ be a continuous function such that
$$\langle x_1-x_2,\,F(x_1)-F(x_2)\rangle\le 0,\quad\forall x_1,x_2\in\mathbb{R}^n.$$
Then for any $\gamma>0$ and $y\in\mathbb{R}^n$, there exists a unique $x=J_\gamma(y)$ such that $x-\gamma F(x)=y$. We define the Yosida approximations $F_\gamma$, $\gamma>0$, of $F$ by setting
$$F_\gamma(x):=F(J_\gamma(x))=\gamma^{-1}\big(J_\gamma(x)-x\big),\quad x\in\mathbb{R}^n.$$
Lemma 2.
Let $F$ be a continuous and monotone function as above, and let $F_\gamma$, $\gamma>0$, be its Yosida approximations. Then we have
(i) 
$\forall\gamma>0$, $|F_\gamma(x)|\le|F(x)|$, $\forall x\in\mathbb{R}^n$.
(ii) 
$\forall\gamma>0$, $\forall x_1,x_2\in\mathbb{R}^n$,
$$\langle x_1-x_2,\,F_\gamma(x_1)-F_\gamma(x_2)\rangle\le 0,\qquad |F_\gamma(x_1)-F_\gamma(x_2)|\le 2\gamma^{-1}|x_1-x_2|.$$
(iii) 
$\forall\gamma,\beta>0$,
$$\langle x_1-x_2,\,F_\gamma(x_1)-F_\beta(x_2)\rangle\le(\gamma+\beta)\big(|F(x_1)|+|F(x_2)|\big)^2,\quad\forall x_1,x_2\in\mathbb{R}^n.$$
(iv) 
For any $x_\gamma\in\mathbb{R}^n$, $\gamma>0$, and $x\in\mathbb{R}^n$, if $\lim_{\gamma\to 0}x_\gamma=x$, then $\lim_{\gamma\to 0}F_\gamma(x_\gamma)=F(x)$.
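The following numerical sketch illustrates the resolvent $J_\gamma$ and the Yosida approximation $F_\gamma$ for a hypothetical scalar monotone function, and checks properties (i) and (iv) of Lemma 2. The choice $F(x)=-x^3$ and the bisection solver are illustrative assumptions; since $x\mapsto x-\gamma F(x)$ is strictly increasing, bisection finds the unique root defining $J_\gamma$.

```python
import numpy as np

# Numerical sketch: for a scalar continuous decreasing F (so <x1-x2, F(x1)-F(x2)> <= 0),
# J_gamma(y) is the unique root of x - gamma*F(x) = y, computed by bisection, and
# F_gamma(x) = (J_gamma(x) - x)/gamma = F(J_gamma(x)).

F = lambda x: -x**3          # hypothetical monotone (decreasing) function

def J(gamma, y, lo=-1e3, hi=1e3, tol=1e-12):
    g = lambda x: x - gamma * F(x) - y      # strictly increasing in x
    for _ in range(200):                    # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def F_gamma(gamma, x):
    return (J(gamma, x) - x) / gamma

x = 2.0
for gamma in (1.0, 0.1, 0.01, 0.001):
    fg = F_gamma(gamma, x)
    # property (i): |F_gamma(x)| <= |F(x)|;  property (iv): F_gamma(x) -> F(x) as gamma -> 0
    print(f"gamma={gamma:<6} F_gamma(x)={fg:10.5f}  F(x)={F(x):.5f}  |F_gamma|<=|F|: {abs(fg) <= abs(F(x)) + 1e-9}")
```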
Proposition 2.
For any $V\in M^2([0,T];\mathbb{R}^{k\times d})$, there exists a unique triple of $\mathcal{F}_t$-measurable processes $(Y_t,Z_t,U_t)\in S^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k)$ such that
$$Y_t=\xi+\int_t^T f(s,Y_s,V_s,\alpha_s)\,ds+\int_t^T g(s,Y_s,V_s,\alpha_s)\,dB_s-\int_t^T Z_s\,dW_s-\sum_{j\in I}\int_t^T U_s(j)\,d\tilde{V}_s(j),\quad 0\le t\le T.\tag{5}$$
Proof. 
Uniqueness. If $(Y^1,Z^1,U^1)$ and $(Y^2,Z^2,U^2)$ are two solutions of BDSDE (5), then applying Itô's formula to $|Y^1_t-Y^2_t|^2$ yields, for all $t\in[0,T]$,
$$E|Y^1_t-Y^2_t|^2+E\int_t^T\|Z^1_s-Z^2_s\|^2\,ds+E\sum_{j\in I}\int_t^T|U^1_s(j)-U^2_s(j)|^2\lambda_s(j)\,ds\le c\,E\int_t^T|Y^1_s-Y^2_s|^2\,ds.$$
The uniqueness then follows from Gronwall's inequality.
Existence. For any $V\in M^2([0,T];\mathbb{R}^{k\times d})$, set $f_v(s,y,i)=f(s,y,V_s,i)$ and $g_v(s,y,i)=g(s,y,V_s,i)$. Then $f_v$ is continuous and globally monotone in $y$, and $g_v$ is globally Lipschitz in $y$. Let $f_v^\gamma$, $\gamma>0$, be the Yosida approximations of $f_v$. Then from Theorem 1 we conclude that, for any $\gamma>0$, the following BDSDE
$$Y^\gamma_t=\xi+\int_t^T f_v^\gamma(s,Y^\gamma_s,\alpha_s)\,ds+\int_t^T g_v(s,Y^\gamma_s,\alpha_s)\,dB_s-\int_t^T Z^\gamma_s\,dW_s-\sum_{j\in I}\int_t^T U^\gamma_s(j)\,d\tilde{V}_s(j),\quad 0\le t\le T,\tag{6}$$
admits a unique solution $(Y^\gamma,Z^\gamma,U^\gamma)\in S^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k)$. By Itô's formula applied to $|Y^\gamma_t|^2$, using properties (i) and (ii) of Lemma 2, Gronwall's inequality and the Burkholder–Davis–Gundy inequality, we obtain that there exists $c>0$, independent of $\gamma$, such that for all $\gamma>0$,
$$E\Big[\sup_{0\le t\le T}|Y^\gamma_t|^2\Big]+E\int_0^T\|Z^\gamma_t\|^2\,dt+E\sum_{j\in I}\int_0^T|U^\gamma_t(j)|^2\lambda_t(j)\,dt\le c\sum_{i\in I}E\Big[|\xi|^2+\int_0^T|f_v(t,0,i)|^2\,dt+\int_0^T\|g_v(t,0,i)\|^2\,dt\Big].$$
By Itô's formula applied to $|Y^\gamma_t-Y^\beta_t|^2$, we have
$$E|Y^\gamma_t-Y^\beta_t|^2+E\int_t^T\|Z^\gamma_s-Z^\beta_s\|^2\,ds+E\sum_{j\in I}\int_t^T|U^\gamma_s(j)-U^\beta_s(j)|^2\lambda_s(j)\,ds\le c\,E\int_t^T|Y^\gamma_s-Y^\beta_s|^2\,ds+2(\gamma+\beta)\sum_{i\in I}E\int_0^T\big(|f_v(s,Y^\gamma_s,i)|+|f_v(s,Y^\beta_s,i)|\big)^2\,ds.$$
Since
$$\sup_{\gamma>0}E\int_0^T|f_v(s,Y^\gamma_s,i)|^2\,ds\le c\sum_{i\in I}E\Big[|\xi|^2+\int_0^T\big(\bar f_t^2+\|g(t,0,0,i)\|^2+\|V_t\|^2\big)\,dt\Big]<\infty,$$
applying Gronwall's inequality and the Burkholder–Davis–Gundy inequality gives
$$\sup_{0\le t\le T}E|Y^\gamma_t-Y^\beta_t|^2+E\int_0^T\|Z^\gamma_t-Z^\beta_t\|^2\,dt+E\sum_{j\in I}\int_0^T|U^\gamma_t(j)-U^\beta_t(j)|^2\lambda_t(j)\,dt\le c(\gamma+\beta),$$
and
$$E\Big[\sup_{0\le t\le T}|Y^\gamma_t-Y^\beta_t|^2\Big]\le c(\gamma+\beta).$$
Hence $\{(Y^\gamma,Z^\gamma,U^\gamma)\}_{\gamma>0}$ is a Cauchy family in $S^2([0,T];\mathbb{R}^k)\times M^2([0,T];\mathbb{R}^{k\times d})\times N^2([0,T]\times I;\mathbb{R}^k)$ as $\gamma\to 0$, and we denote its limit by $(Y,Z,U)$. Passing to the limit as $\gamma\to 0$ in (6), we obtain from the dominated convergence theorem that $(Y,Z,U)$ satisfies BDSDE (5). □
Proof of Theorem 2. 
By Proposition 2 we can construct a mapping $\Theta$ from $M^2([0,T];\mathbb{R}^{k\times d})$ into itself as follows: for any $V\in M^2$, $Z=\Theta(V)$ is uniquely determined by BDSDE (5). Let $V,V'\in M^2$, and let $(Y,Z,U)$ and $(Y',Z',U')$ be the solutions associated with $V$ and $V'$, respectively. We use the notation $\bar V=V-V'$ and $(\bar Y,\bar Z,\bar U)=(Y-Y',Z-Z',U-U')$. It follows from Itô's formula that for all $\gamma\in\mathbb{R}$ and $t\in[0,T]$,
$$E\big[e^{\gamma t}|\bar Y_t|^2\big]+E\int_t^T e^{\gamma s}\Big(\gamma|\bar Y_s|^2+\|\bar Z_s\|^2+\sum_{j\in I}|\bar U_s(j)|^2\lambda_s(j)\Big)\,ds\le E\int_t^T e^{\gamma s}\Big[\Big(c+\frac{2c^2}{1-\epsilon}\Big)|\bar Y_s|^2+\frac{1+\epsilon}{2}\|\bar V_s\|^2\Big]\,ds.$$
Hence, if we choose $\gamma=c+\frac{2c^2}{1-\epsilon}$, then
$$E\int_0^T e^{\gamma s}\|\bar Z_s\|^2\,ds\le\frac{1+\epsilon}{2}\,E\int_0^T e^{\gamma s}\|\bar V_s\|^2\,ds.$$
Consequently, $\Theta$ is a strict contraction on $M^2$, and its fixed point yields the unique solution of BDSDE (3). □

5. A Comparison Theorem

In this section, we study the comparison theorem under the Lipschitz and the monotone conditions, respectively. We only consider the one-dimensional case, i.e., $k=1$. For $0\le t\le T$ and $m=1,2$, we consider the following BDSDEs:
$$Y^m_t=\xi^m+\int_t^T f^m(s,Y^m_s,Z^m_s,\alpha_s)\,ds+\int_t^T g(s,Y^m_s,Z^m_s,\alpha_s)\,dB_s-\int_t^T Z^m_s\,dW_s-\sum_{j\in I}\int_t^T U^m_s(j)\,d\tilde{V}_s(j).\tag{7}$$
Then we have the following comparison theorem.
Theorem 3.
Assume that $f^1$, $f^2$ and $g$ satisfy assumptions (H1) and (H2), or assumptions (H1) and (H3)–(H6). Let $(Y^1,Z^1,U^1)$ and $(Y^2,Z^2,U^2)$ be solutions of Equation (7) with $m=1$ and $m=2$, respectively. Assume further that
$$f^1(t,y,z,i)\ge f^2(t,y,z,i)\quad a.s.,\quad\forall (t,y,z,i)\in[0,T]\times\mathbb{R}\times\mathbb{R}^d\times I,$$
and that $\xi^1,\xi^2\in L^2(\Omega,\mathcal{F}_T,P)$ satisfy $\xi^1\ge\xi^2$ a.s. Then
$$Y^1_t\ge Y^2_t\quad a.s.,\quad t\in[0,T].$$
Proof. 
We first discuss the case where $f^1$, $f^2$ and $g$ satisfy assumptions (H1) and (H2). The triple $(Y^1_t-Y^2_t,\ Z^1_t-Z^2_t,\ U^1_t-U^2_t)$ satisfies the following BDSDE:
$$\begin{aligned}
Y^1_t-Y^2_t=\ &\xi^1-\xi^2+\int_t^T\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^2(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds+\int_t^T\big[g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big]\,dB_s\\
&-\int_t^T\big(Z^1_s-Z^2_s\big)\,dW_s-\sum_{j\in I}\int_t^T\big[U^1_s(j)-U^2_s(j)\big]\,d\tilde{V}_s(j),\quad 0\le t\le T.
\end{aligned}$$
For each $\epsilon>0$ we introduce the following $C^2(\mathbb{R})$ function
$$\varphi_\epsilon(y)=\begin{cases} y^2, & y\le 0,\\[2pt] y^2-\dfrac{1}{6\epsilon}y^3, & 0\le y\le 2\epsilon,\\[2pt] 2\epsilon y-\dfrac{4}{3}\epsilon^2, & y\ge 2\epsilon,\end{cases}$$
whose second derivative is bounded uniformly with respect to $\epsilon>0$. Obviously, for each real $y$,
$$\varphi_\epsilon(y)\to(y^-)^2,\qquad \varphi'_\epsilon(y)\to 2y\,\mathbf{1}_{\{y\le 0\}},\qquad \varphi''_\epsilon(y)\to 2\,\mathbf{1}_{\{y\le 0\}},\qquad\text{as }\epsilon\to 0,$$
where $y^-:=\max\{-y,0\}$ denotes the negative part of $y$.
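A quick numerical check of $\varphi_\epsilon$, with arbitrarily chosen test points, confirms the $C^2$ matching at the break points $y=0$ and $y=2\epsilon$ and the pointwise convergence to $(y^-)^2$:

```python
import numpy as np

# Sanity check of the smoothing function phi_eps defined above: it is C^2, its second
# derivative is bounded uniformly in eps, and phi_eps(y) -> (y^-)^2 = (min(y,0))^2
# pointwise as eps -> 0.  The grids and test points below are arbitrary choices.

def phi(eps, y):
    y = np.asarray(y, dtype=float)
    return np.where(y <= 0.0, y**2,
           np.where(y <= 2.0 * eps, y**2 - y**3 / (6.0 * eps),
                    2.0 * eps * y - (4.0 / 3.0) * eps**2))

# second difference quotients on both sides of the break points y = 0 and y = 2*eps
h = 1e-5
for eps in (0.5, 0.1):
    for brk in (0.0, 2.0 * eps):
        left = float(phi(eps, brk - 2*h) - 2*phi(eps, brk - h) + phi(eps, brk)) / h**2
        right = float(phi(eps, brk) - 2*phi(eps, brk + h) + phi(eps, brk + 2*h)) / h**2
        print(f"eps={eps}, break at y={brk:4.1f}: phi'' from the left {left:6.3f}, from the right {right:6.3f}")

# pointwise convergence to (y^-)^2
y = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
for eps in (0.1, 0.01, 0.001):
    print(f"eps={eps:5}:", np.round(phi(eps, y), 4), " limit:", np.minimum(y, 0.0)**2)
```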
Hence, applying the Itô formula to $\varphi_\epsilon(\bar Y_t)$, where $\bar Y_t:=Y^1_t-Y^2_t$, $\bar Z_t:=Z^1_t-Z^2_t$, $\bar U_t:=U^1_t-U^2_t$ and $\bar\xi:=\xi^1-\xi^2$, and letting $\epsilon\to 0$, we obtain
$$\begin{aligned}
\big[(\bar Y_t)^-\big]^2=\ &\big[(\bar Y_T)^-\big]^2-2\int_t^T\bar Y_s^-\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^2(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds+2\int_t^T\bar Y_s^-\,\bar Z_s\,dW_s-\int_t^T\mathbf{1}_{\{\bar Y_s\le 0\}}|\bar Z_s|^2\,ds\\
&-2\int_t^T\bar Y_s^-\big[g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big]\,dB_s+\int_t^T\mathbf{1}_{\{\bar Y_s\le 0\}}\big\|g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big\|^2\,ds\\
&-\sum_{j\in I}\int_t^T\Big[\big((\bar Y_s+\bar U_s(j))^-\big)^2-\big((\bar Y_s)^-\big)^2\Big]\,d\tilde{V}_s(j)-\sum_{j\in I}\int_t^T\Big[\big((\bar Y_s+\bar U_s(j))^-\big)^2-\big((\bar Y_s)^-\big)^2+2\bar U_s(j)\bar Y_s^-\Big]\lambda_s(j)\,ds.\tag{8}
\end{aligned}$$
Since the function $l(x):=(x^-)^2$ is convex, we have
$$\big((\bar Y_s+\bar U_s(j))^-\big)^2-\big((\bar Y_s)^-\big)^2+2\bar U_s(j)\bar Y_s^-\ge 0.$$
Moreover, $\lambda_s(j)\ge 0$, which leads to
$$\sum_{j\in I}\int_t^T\Big[\big((\bar Y_s+\bar U_s(j))^-\big)^2-\big((\bar Y_s)^-\big)^2+2\bar U_s(j)\bar Y_s^-\Big]\lambda_s(j)\,ds\ge 0.$$
Then, taking expectations on both sides of Equation (8) and discarding this last (nonnegative) term, we have
$$\begin{aligned}
E\big[(\bar Y_t)^-\big]^2\le\ &E\big[(\bar\xi)^-\big]^2-2E\int_t^T\bar Y_s^-\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^2(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds+2E\int_t^T\bar Y_s^-\,\bar Z_s\,dW_s-E\int_t^T\mathbf{1}_{\{\bar Y_s\le 0\}}|\bar Z_s|^2\,ds\\
&-2E\int_t^T\bar Y_s^-\big[g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big]\,dB_s+E\int_t^T\mathbf{1}_{\{\bar Y_s\le 0\}}\big\|g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big\|^2\,ds.\tag{9}
\end{aligned}$$
Since $\xi^1\ge\xi^2$, we have $\bar\xi=\xi^1-\xi^2\ge 0$, so
$$E\big[(\xi^1-\xi^2)^-\big]^2=0.$$
Since $(Y^1,Z^1,U^1)$ and $(Y^2,Z^2,U^2)$ belong to $S^2([0,T];\mathbb{R})\times M^2([0,T];\mathbb{R}^{d})\times N^2([0,T]\times I;\mathbb{R})$, it easily follows that
$$E\int_t^T\bar Y_s^-\big(Z^1_s-Z^2_s\big)\,dW_s=0,\qquad E\int_t^T\bar Y_s^-\big[g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big]\,dB_s=0,$$
$$E\sum_{j\in I}\int_t^T\Big[\big((\bar Y_s+\bar U_s(j))^-\big)^2-\big((\bar Y_s)^-\big)^2\Big]\,d\tilde{V}_s(j)=0.$$
Let
$$\begin{aligned}
\Delta&=-2\int_t^T\bar Y_s^-\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^2(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds\\
&=-2\int_t^T\bar Y_s^-\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^1(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds-2\int_t^T\bar Y_s^-\big[f^1(s,Y^2_s,Z^2_s,\alpha_s)-f^2(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds=\Delta_1+\Delta_2,
\end{aligned}$$
where
$$\Delta_1=-2\int_t^T\bar Y_s^-\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^1(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds,\qquad \Delta_2=-2\int_t^T\bar Y_s^-\big[f^1(s,Y^2_s,Z^2_s,\alpha_s)-f^2(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds\le 0,$$
since $f^1\ge f^2$ and $\bar Y_s^-\ge 0$.
From (H2) and Young's inequality, it follows that
$$\Delta\le\Delta_1\le 2C\int_t^T\bar Y_s^-\big(|Y^1_s-Y^2_s|+|Z^1_s-Z^2_s|\big)\,ds\le\Big(2C+\frac{C^2}{1-\beta}\Big)\int_t^T\big[(Y^1_s-Y^2_s)^-\big]^2\,ds+(1-\beta)\int_t^T\mathbf{1}_{\{Y^1_s\le Y^2_s\}}|Z^1_s-Z^2_s|^2\,ds,$$
where $C>0$ depends only on the Lipschitz constant $c$ in (H1) and (H2). Using assumption (H1) again, we deduce
$$\int_t^T\mathbf{1}_{\{Y^1_s\le Y^2_s\}}\big\|g(s,Y^1_s,Z^1_s,\alpha_s)-g(s,Y^2_s,Z^2_s,\alpha_s)\big\|^2\,ds\le\int_t^T\mathbf{1}_{\{Y^1_s\le Y^2_s\}}\Big(c|Y^1_s-Y^2_s|^2+\beta|Z^1_s-Z^2_s|^2\Big)\,ds=c\int_t^T\big[(Y^1_s-Y^2_s)^-\big]^2\,ds+\beta\int_t^T\mathbf{1}_{\{Y^1_s\le Y^2_s\}}|Z^1_s-Z^2_s|^2\,ds.$$
Plugging these estimates into Equation (9), we obtain
$$E\big[(Y^1_t-Y^2_t)^-\big]^2\le\Big(c+2C+\frac{C^2}{1-\beta}\Big)\int_t^T E\big[(Y^1_s-Y^2_s)^-\big]^2\,ds.$$
By Gronwall's inequality, it follows that
$$E\big[(Y^1_t-Y^2_t)^-\big]^2=0,\quad\forall t\in[0,T].$$
That is, $Y^1_t\ge Y^2_t$ a.s., for all $t\in[0,T]$.
For the second case, in which $f^1$, $f^2$ and $g$ satisfy conditions (H1) and (H3)–(H6), the proof is very similar and we only indicate the difference. Following the same procedure as above, we get
$$\begin{aligned}
\Delta_1&=-2\int_t^T\bar Y_s^-\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^1(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds\\
&=-2\int_t^T\bar Y_s^-\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^1(s,Y^2_s,Z^1_s,\alpha_s)\big]\,ds-2\int_t^T\bar Y_s^-\big[f^1(s,Y^2_s,Z^1_s,\alpha_s)-f^1(s,Y^2_s,Z^2_s,\alpha_s)\big]\,ds\\
&\le 2\int_t^T\big(Y^1_s-Y^2_s\big)\big[f^1(s,Y^1_s,Z^1_s,\alpha_s)-f^1(s,Y^2_s,Z^1_s,\alpha_s)\big]\mathbf{1}_{\{Y^1_s\le Y^2_s\}}\,ds+2c\int_t^T\bar Y_s^-\,|Z^1_s-Z^2_s|\,ds\\
&\le\frac{c^2}{1-\beta}\int_t^T\big[(Y^1_s-Y^2_s)^-\big]^2\,ds+(1-\beta)\int_t^T\mathbf{1}_{\{Y^1_s\le Y^2_s\}}|Z^1_s-Z^2_s|^2\,ds,
\end{aligned}$$
where the first term in the third line is nonpositive by (H5) with $\mu=0$, and the last step uses (H6) and Young's inequality. Then, following the proof of the first case, we obtain $Y^1_t\ge Y^2_t$ a.s. for all $t\in[0,T]$, which concludes the proof. □

6. Discussion

It is well known that the distribution of Brownian motion is symmetric and that Brownian motion itself has rotational symmetry. Moreover, under duality hypotheses, Brownian motion and Markov chains exhibit a certain time symmetry; see, e.g., [22]. Therefore, our study can provide a theoretical basis for the establishment of symmetric models. Compared with [18], our innovation is to add a backward Brownian motion to the equation and to relax its conditions, although the equation in this paper does not contain obstacles. Moreover, compared with [14], we study BDSDEs driven by both Brownian motion and Markov chains, whereas [14] studied BDSDEs driven only by Brownian motion.
In this paper, we studied BDSDEs with Markov chains. First, the existence and uniqueness of solutions to these BDSDEs were established under the Lipschitz condition. Then, we extended this result to the monotonicity condition. Finally, we proved the comparison theorem, which is very helpful for studying viscosity solutions of the associated stochastic partial differential equations and the corresponding control problems. If the coefficient only satisfies a local monotonicity condition, the problem becomes more difficult; we shall come back to this case in future work.

Author Contributions

All authors contributed equally to this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (11831010, 61961160732), and Shandong Provincial Natural Science Foundation (ZR2019ZD42).

Acknowledgments

The authors express their sincerest thanks to the reviewers for their valuable comments, which further improve the conclusion and proof process of the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BDSDEs: Backward doubly stochastic differential equations
BSDEs: Backward stochastic differential equations

References

  1. Pardoux, É.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61. [Google Scholar]
  2. Pardoux, É.; Peng, S. Backward doubly stochastic differential equations and systems of quasilinear SPDEs. Probab. Theory Relat. Fields 1994, 98, 209–227. [Google Scholar]
  3. Lepeltier, J.-P.; Jaime, S.M. Backward stochastic differential equations with continuous coefficient. Stat. Probab. Lett. 1997, 32, 425–430. [Google Scholar] [CrossRef]
  4. Bahlali, K. Backward stochastic differential equations with locally Lipschitz coefficient. C. R. Acad. Sci. Ser. I Math. 2001, 333, 481–486. [Google Scholar]
  5. Bahlali, K.; Essaky, E.H.; Ouknine, Y. Reflected backward stochastic differential equation with locally monotone coefficient. Stoch. Anal. Appl. 2004, 22, 939–970. [Google Scholar] [CrossRef]
  6. Wu, Z.; Zhang, F. BDSDEs with locally monotone coefficients and Sobolev solutions for SPDEs. J. Differ. Equ. 2011, 251, 759–784. [Google Scholar] [CrossRef] [Green Version]
  7. Lepeltier, J.-P.; Martín, J.S. Existence for BSDE with superlinear–quadratic coefficient. Stoch. Int. J. Probab. Stoch. Process. 1998, 63, 227–240. [Google Scholar] [CrossRef]
  8. Kobylanski, M. Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 2000, 28, 558–602. [Google Scholar] [CrossRef]
  9. Zhang, Q.; Zhao, H. Stationary solutions of SPDEs and infinite horizon BDSDEs with non-Lipschitz coefficients. J. Differ. Equ. 2010, 248, 953–991. [Google Scholar] [CrossRef] [Green Version]
  10. El Karoui, N.; Peng, S.; Quenez, M.C. Backward Stochastic Differential Equations in Finance. Math. Financ. 1997, 7, 1–71. [Google Scholar] [CrossRef]
  11. Briand, P.; Coquet, F.; Hu, Y.; Mémin, J.; Peng, S. A Converse Comparison Theorem for BSDEs and Related Properties of g-Expectation. Electron. Commun. Probab. 2000, 5, 101–117. [Google Scholar] [CrossRef]
  12. Hu, Y.; Peng, S. On the comparison theorem for multidimensional BSDEs. Probab. Theory 2006, 343, 135–140. [Google Scholar] [CrossRef]
  13. Yin, J.; Rong, S.T. The Comparison Theorem of Forward-Backward Stochastic Differential Equations with Jumps and with Random Terminal Time. Math. Prepr. Arch. 2002, 10, 674–682. [Google Scholar]
  14. Shi, Y.; Gu, Y.; Liu, K. Comparison theorems of backward doubly stochastic differential equations and applications. Stoch. Anal. Appl. 2005, 23, 97–110. [Google Scholar] [CrossRef]
  15. Wu, Z.; Xu, M. Comparison theorems for forward backward SDEs. Stat. Probab. Lett. 2009, 79, 426–435. [Google Scholar] [CrossRef]
  16. Tao, R.; Wu, Z.; Zhang, Q. BSDEs with regime switching: Weak convergence and applications. J. Math. Anal. Appl. 2013, 407, 97–111. [Google Scholar] [CrossRef]
  17. Tao, R.; Wu, Z.; Zhang, Q. Optimal switching under a regime-switching model with two-time-scale Markov chains. Multiscale Model. Simul. 2009, 13, 99–131. [Google Scholar] [CrossRef]
  18. Crépey, S.; Matoussi, A. Reflected and doubly reflected BSDEs with jumps: A priori estimates and comparison. Ann. Appl. Probab. 2008, 18, 2041–2069. [Google Scholar] [CrossRef]
  19. Crépey, S. About the Pricing Equations in Finance. In Paris-Princeton Lectures on Mathematical Finance 2010; Springer: Berlin/Heidelberg, Germany, 2011; pp. 63–203. [Google Scholar]
  20. Ikeda, N.; Watanabe, S. Stochastic Differential Equations and Diffusion Processes; Elsevier: Amsterdam, The Netherlands, 1989. [Google Scholar]
  21. Nualart, D.; Pardoux, É. Stochastic calculus with anticipating integrands. Probab. Theory Relat. Fields 1988, 78, 535–581. [Google Scholar] [CrossRef]
  22. Chung, K.; Walsh, J.B. Markov Processes, Brownian Motion, and Time Symmetry; Springer: New York, NY, USA, 2006. [Google Scholar]