Article

Backward Stochastic Differential Equations Driven by a Jump Markov Process with Continuous and Non-Necessary Continuous Generators

1 Laboratory of Applied Mathematics, University of Biskra, P.O. Box 145, Biskra 07000, Algeria
2 Mathematics Department, College of Sciences, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(6), 331; https://doi.org/10.3390/fractalfract6060331
Submission received: 23 April 2022 / Revised: 5 June 2022 / Accepted: 9 June 2022 / Published: 15 June 2022
(This article belongs to the Section Probability and Statistics)

Abstract:
We deal with backward stochastic differential equations driven by a pure jump Markov process and an independent Brownian motion (BSDEJs for short). We start by proving the existence and uniqueness of the solutions for this type of equation and establish a comparison theorem for solutions when the generator satisfies a Lipschitz condition. With these tools in hand, we study the existence of a (minimal) solution for BSDEJs whose coefficient is continuous and satisfies a linear growth condition. An existence result for BSDEJs with a left-continuous, increasing and bounded generator is also discussed. Finally, the general result is applied to solve one kind of quadratic BSDEJ.

1. Introduction

In this work, we are interested in the following backward stochastic differential equations driven by both Wiener and Markov jump processes.
$Y_s = h(X_T) + \int_s^T f(r, X_r, Y_r, Z_r, K_r(\cdot))\,dr - \int_s^T Z_r\,dB_r - \int_s^T\!\int_\Gamma K_r(e)\,q(dr,de), \qquad (1)$
where $X$ is a $\Gamma$-valued (to be defined later) jump Markov process defined on a complete filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in[0,T]}, P)$, $\{B_t : t\in[0,T]\}$ is a standard $\mathbb{R}$-valued Wiener process, $f$ is the generator, $h(X_T)$ is the terminal condition and $q(dt,de)$ stands for a random measure associated with the jump Markov process $X$.
It is well known that BSDEs driven by continuous Brownian motion without the jump part have been studied extensively in the literature and they comprise an important tool in the real world concerning applications such as control theory, finance, stochastic games, partial differential equations and homogenization. After the seminal work of Pardoux and Peng [1], where they proved the first results for the existence and uniqueness concerning BSDEs, many efforts have been made to relax the assumptions concerning the terminal data and the generator with respect to the state variables. More precisely, Hamadène [2] studied a BSDE whose generator f is locally Lipschitz and satisfies a reasonably growing condition. Thereafter, Lepeltier and San Martin [3] studied a one-dimensional BSDE with a bounded terminal condition and an only continuous generator, which satisfies the linear growth conditions. Bahlali [4] studied the existence and uniqueness of solutions for a multidimensional BSDE with a local Lipschitz coefficient and square-integrable terminal data. More recently, Abdelhadi and Khelfallah [5] extended the former results to the case of BSDEs driven by a jump Markov process. Regarding BSDE with unbounded generators, recently a paper was published by Gashi and Li [6] in the Brownian setting using Picard’s iteration scheme.
Many papers have also studied BSDEs driven by random jump processes, we can refer the reader to [7,8,9,10,11,12,13,14] and the references therein for more information about this subject. However, only a few papers have studied BSDEs driven by a random measure related to a pure jump process. Among these, Confortola and Fuhrman in [10] provided existence and uniqueness results for global Lipschitz BSDE driven by a pure Markov jump process. Further, they applied their own results to study nonlinear variants of the Kolmogorov equation of the Markov process and also to solve some optimal control problems. Then, the same authors in [9] studied a class of backward stochastic differential equations driven by a marked point process. Under appropriate assumptions, they proved the well-posedness and continuous dependence of the solution on the data. More recently, Confortola [15] proved the existence and uniqueness of L p -solutions ( p > 1 ) for a BSDE driven by a marked point process on a bounded time interval.
The aim of our work is to study BSDEs driven both by a random measure associated with a jump Markov process and by an independent Brownian motion, with a generator that is either continuous or not necessarily continuous. We recall that El Otmani [16] studied BSDEs driven by a simple Lévy process with a continuous coefficient. Later, Yin and Mao [17] dealt with a class of BSDEs with Poisson jumps and random terminal times. They proved the existence of a unique solution along with two comparison theorems for such BSDEs under non-Lipschitz assumptions on the coefficient. These results were applied to investigate the existence and uniqueness of a minimal solution to one-dimensional BSDEs with jumps in the case where the generator is merely continuous and of linear growth. Subsequently, Qin and Xia [18] studied one-dimensional BSDEs driven by Poisson point processes with continuous and discontinuous coefficients. By means of the comparison theorem, the authors proved the existence of a (minimal) solution for such BSDEs when the coefficient is continuous and satisfies an improved linear growth assumption. Then, they extended the results to a left- or right-continuous coefficient.
The main contributions of the present paper are divided into two parts. The first part contains the first result which consists in proving the existence and uniqueness of the solution to the BSDEJ (1) under a global Lipschitz condition on the generator f. The ideas of the proof for the existence and uniqueness are classical but we include them for the sake of completeness. The second result concerns the comparison principle which compares solutions for different and comparable data (terminal values and generators). Its proof is new and non-standard, it makes use of appropriate martingales to perform Girsanov’s theorem. Moreover, this principle will be used to prove the existence of a minimal solution when the generators of the BSDE are only continuous with linear growth. The first result of the second part concerns the structure of the set of solutions of (1) in the case where the generator f is only continuous with linear growth in y , z and Lipschitz in k ( · ) . The comparison theorem is the main tool in the proof. The second result of this part consists in proving an existence result of solutions of (1) when the generator f is left-continuous and increasing in y.
This paper is organized as follows. In Section 2, we present some preliminary information concerning the jump Markov process and state some auxiliary results. In Section 3, we study BSDEJs with global Lipschitz coefficients; therein, we prove the existence and uniqueness result and comparison theorem as well. Section 4 is devoted to the BSDEJ with a continuous coefficient and the BSDEJ with a left continuous and increasing coefficient.

2. Preliminaries and Auxiliary Results

In this section, we recall some notation for the jump Markov process. Let $(\Omega, \mathcal{F}, P)$ be a complete probability space. Let $(\Gamma, \mathcal{E})$ be a measurable space such that $\mathcal{E}$ contains all one-point sets, and let $X$ be a normal jump Markov process and $B$ a standard Wiener process. We denote by $\mathbb{F}^t$ the filtration $(\mathcal{F}_{t,s})_{s\in[t,+\infty)}$, the right-continuous increasing family of sub-$\sigma$-algebras of $\mathcal{F}$ defined by $\mathcal{F}_{t,s} = \sigma(X_r,\ t\le r\le s)\vee\sigma(B_r,\ t\le r\le s)\vee\mathcal{N}$, where $\mathcal{N}$ is the collection of $P$-null sets.
Let $Prog^t$ be the progressive $\sigma$-algebra on $[t,+\infty)\times\Omega$; the same symbol will also denote its restriction to $[t,T]\times\Omega$; let $\mathcal{P}^t$ be the predictable $\sigma$-algebra. We define a transition measure (also called a rate measure) $\nu(s,x,A)$, $s\in[t,T]$, $x\in\Gamma$, $A\in\mathcal{E}$, from $[t,+\infty)\times\Gamma$ to $\Gamma$, such that $\sup_{s\in[0,T],\,x\in\Gamma}\nu(s,x,\Gamma)$ is finite and $\nu(s,x,\{x\}) = 0$.
For every $t\ge0$, we define a sequence $(T_n^t)_{n\ge0}$ of random variables with values in $[0,\infty]$ as follows:
$T_0^t(\omega) = t, \qquad T_{n+1}^t(\omega) = \inf\big\{ s > T_n^t(\omega) : X_s(\omega) \ne X_{T_n^t(\omega)}(\omega) \big\},$
with the convention that $T_{n+1}^t(\omega) = \infty$ if the indicated set is empty. Since $X$ is a jump process, we have $T_n^t(\omega) < T_{n+1}^t(\omega)$ whenever $T_n^t(\omega) < \infty$. Since $X$ is non-explosive, $T_n^t(\omega)$ tends towards infinity with $n$.
In other words, the $T_n^t$ are the jump times of $X$; we consider the marked point process $(T_n^t, X_{T_n^t})$ and the associated random measure
$p^t(ds,de) := \sum_{n\ge1} \delta_{(T_n^t,\,X_{T_n^t})}(ds,de) \quad \text{on } (t,+\infty)\times\Gamma,$
where $\delta$ stands for the Dirac measure. The compensator (also called the dual predictable projection) $\tilde p^t$ of $p^t$ is $\tilde p^t(ds,de) = \nu(s,X_s,de)\,ds$, so that $q^t(ds,de) := p^t(ds,de) - \nu(s,X_s,de)\,ds$ is the Itô differential of an $\mathbb{F}^t$-martingale. Notice that
$\int_t^s\!\int_\Gamma K_r(e)\,p^t(dr,de) = \sum_{n\ge1,\ T_n^t\le s} K_{T_n^t}(X_{T_n^t}), \qquad s\in[t,T],$
is always well defined since $T_n^t \to \infty$.
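To make the random measure $p^t$ concrete, the following Python sketch simulates a finite-state jump Markov process with a constant rate measure and evaluates an integral against $p^t$ as a sum over the recorded jump times and marks. The state space, the rate matrix standing in for $\nu$, the horizon and the test integrand are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): a three-state space and a
# constant rate measure nu(s, x, {y}) = Q[x, y] for y != x.
states = np.array([0, 1, 2])
Q = np.array([[0.0, 1.0, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])

def simulate_jump_markov(x0, T):
    """Return the jump times T_n and post-jump states X_{T_n} on (0, T]."""
    t, x = 0.0, x0
    jump_times, marks = [], []
    while True:
        total_rate = Q[x].sum()                      # nu(t, x, Gamma)
        t += rng.exponential(1.0 / total_rate)       # exponential holding time
        if t > T:
            break
        x = rng.choice(states, p=Q[x] / total_rate)  # the new state is the mark e
        jump_times.append(t)
        marks.append(x)
    return np.array(jump_times), np.array(marks)

T = 5.0
jump_times, marks = simulate_jump_markov(0, T)

# p(dr, de) is the sum of Dirac masses at (T_n, X_{T_n}); integrating a test
# function K against p is therefore just a sum over the recorded jumps.
K = lambda r, e: np.exp(-r) * (1 + e)                # arbitrary test integrand
integral_wrt_p = sum(K(r, e) for r, e in zip(jump_times, marks))
print(f"number of jumps: {len(jump_times)}, integral of K w.r.t. p: {integral_wrt_p:.4f}")
```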
Next, we define functional spaces needed in the following.
  • For $m\in[1,\infty)$, we define $L^m(p^t)$ as the space of $\mathcal{P}^t\otimes\mathcal{E}$-measurable real functions $K_s(\omega,e)$ on $\Omega\times[t,T]\times\Gamma$ such that
    $\mathbb{E}^{t,x}\int_t^T\!\int_\Gamma |K_r(e)|^m\,p^t(dr,de) = \mathbb{E}^{t,x}\int_t^T\!\int_\Gamma |K_r(e)|^m\,\nu(r,X_r,de)\,dr < +\infty.$
  • $L^1_{loc}(p^t)$ is the space of real functions $K$ such that $K\mathbf{1}_{[0,\tau_n]}\in L^1(p^t)$ for some increasing sequence of $\mathbb{F}^t$-stopping times $\tau_n$ diverging to $+\infty$.
  • $\mathcal{M}^2$ is the space of real-valued, square integrable, progressively measurable and predictable processes $\phi = \{\phi_u : u\in[0,T]\}$ such that
    $\|\phi\|^2 = \int_0^T \mathbb{E}|\phi_u|^2\,du < +\infty.$
  • $L^2(\Gamma,\mathcal{E},\nu(\cdot,x,de))$ is the space of processes $k:\Gamma\to\mathbb{R}$ such that
    $\|k(\cdot)\|_\nu = \Big(\int_\Gamma |k(e)|^2\,\nu(\cdot,x,de)\Big)^{1/2} < +\infty.$
  • $\mathcal{S}^2$ is the space of processes $y = \{y(\omega,t),\ \Omega\times[0,T]\}$, $\mathbb{F}^t$-adapted and right-continuous with left limits (rcll), such that
    $\mathbb{E}\Big[\sup_{t\in[0,T]} |y_t|^2\Big] < +\infty.$
  • $\mathcal{B} = \mathcal{S}^2\times\mathcal{M}^2\times L^2(\Gamma,\mathcal{E},\nu(\cdot,x,de))$ is the space of processes $(Y,Z,K(\cdot))$ on $[0,T]$ such that
    $\|(Y,Z,K(\cdot))\|_{\mathcal{B}}^2 = \mathbb{E}\Big[\sup_{s\in[0,T]} |Y_s|^2 + \int_0^T |Z_r|^2\,dr + \int_0^T \|K_r(\cdot)\|_\nu^2\,dr\Big] < +\infty.$
    The space $\mathcal{B}$, endowed with this norm, is a Banach space.
Remark 1.
A stochastic integral $\int_0^{\cdot}\!\int_\Gamma W_r(e)\,q^t(dr,de)$ is a finite variation martingale if $W\in L^1(p^t)$.
A solution of Equation (1) is the triple ( Y , Z , K ( · ) ) which belongs to the space ( B , · B ) and satisfies (1).
Now, we give the representation theorem which is one of the important tools to prove the results concerning the existence of solutions. Its proof can be found in ([19] Theorem 2.9).
Proposition 1.
Given $(t,x)\in[0,T]\times\Gamma$, let $M$ be a square integrable, $\mathbb{F}^t$-adapted martingale on $[t,T]$. Then, there exist two unique processes $K(\cdot)\in L^2(p)$ and $Z\in\mathcal{M}^2$ such that
$M_r = M_t + \int_t^r Z_u\,dB_u + \int_t^r\!\int_\Gamma K_u(e)\,q^t(du,de), \qquad r\in[t,T].$
In what follows, we recall Girsanov's theorem, which plays a key role in the sequel. Let us denote by $\mathcal{H}^2$ the set of square integrable martingales and by $\mathcal{H}$ the subset
$\mathcal{H} = \big\{ (M_s)_{s\in[0,T]}\in\mathcal{H}^2 : |w_r(e)|\le C,\ w_r(e) > -1,\ u\in\mathcal{M}^2 \big\},$
such that $M_s = \int_t^s u_r\,dB_r + \int_t^s\!\int_\Gamma w_r(e)\,q(dr,de)$. For all $M\in\mathcal{H}$, the Doleans-Dade exponential is defined as
$\mathcal{E}_T(M) = e^{M_T - \frac12\langle M^c\rangle_T}\prod_{s\in[0,T]}(1+\Delta M_s)\,e^{-\Delta M_s}.$
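As a purely numerical illustration of the Doleans-Dade exponential above, the sketch below evaluates $\mathcal{E}_T(M)$ on one simulated path of a martingale with a Brownian part and finitely many jumps. The constant integrands $u$, $w$ and the Poisson jump rate are illustrative assumptions; the only thing taken from the text is the formula $e^{M_T-\frac12\langle M^c\rangle_T}\prod(1+\Delta M_s)e^{-\Delta M_s}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: u_r = 0.2, w_r(e) = 0.5 (both constant, w > -1),
# and the jumps of the underlying point process arrive with Poisson rate 1.
T = 1.0
u, w, rate = 0.2, 0.5, 1.0

# Brownian part M^c = u * B_T and its bracket <M^c>_T = u^2 * T.
B_T = rng.normal(0.0, np.sqrt(T))
Mc_T, bracket_T = u * B_T, u ** 2 * T

# Jump part of M = integral of w against q = p - compensator:
# each jump contributes +w, the compensator contributes -w * rate * T.
n_jumps = rng.poisson(rate * T)
M_T = Mc_T + w * n_jumps - w * rate * T

# Doleans-Dade exponential on this path.
E_T = np.exp(M_T - 0.5 * bracket_T) * ((1.0 + w) * np.exp(-w)) ** n_jumps
print(f"jumps: {n_jumps}, E_T(M) on this path: {E_T:.4f}")
```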
Proposition 2
(Girsanov's theorem [20]). Let $W\in L^2(q)$, $V\in\mathcal{M}^2$ and $U_s = \int_t^s V_r\,dB_r + \int_t^s\!\int_\Gamma W_r(e)\,q(dr,de)$. For a given $M\in\mathcal{H}$, we define $\tilde U_s = U_s - \langle M, U\rangle_s$; then, the process $\tilde U$ is a martingale under the probability measure $dQ := \mathcal{E}_T(M)\,dP$.
Remark 2.
For the sake of simplicity, throughout the following three sections we drop the superscripts t , x and shall state the results and their proofs for t = 0 .

3. BSDEJ with Global Lipschitz Coefficients

3.1. Problem Statement and Main Results

In this subsection, we discuss in short the existence and uniqueness results for the Equation (1) in the global Lipschitz case. The main hypothesis needed is the following:
Hypothesis 1: 
Hypothesis 1.1
(H1.1). The final condition $h : \Gamma\to\mathbb{R}$ is $\mathcal{E}$-measurable and $\mathbb{E}[|h(X_T)|^2] < +\infty$.
Hypothesis 1.2
(H1.2). For every $s\in[0,T]$, $x\in\Gamma$, $r\in\mathbb{R}$, $z\in\mathbb{R}$, $f(s,x,r,z,\cdot)$ maps $L^2(\Gamma,\mathcal{E},\nu(s,x,de))$ to $\mathbb{R}$.
Hypothesis 1.3
(H1.3). For every bounded and $\mathcal{E}$-measurable function $k(\cdot):\Gamma\to\mathbb{R}$, the mapping $(s,x,r,z)\mapsto f(s,x,r,z,k(\cdot))$ is $\mathcal{B}([0,T])\otimes\mathcal{E}\otimes\mathcal{B}(\mathbb{R})\otimes\mathcal{B}(\mathbb{R})$-measurable.
Hypothesis 1.4
(H1.4). $\int_0^T \mathbb{E}[|f(s,X_s,0,0,0)|^2]\,ds < +\infty$.
Hypothesis 1.5
(H1.5). There exists $L\ge0$ such that, for every $s\in[0,T]$, $x\in\Gamma$, $r,r',z,z'\in\mathbb{R}$ and $k(\cdot),k'(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$,
$|f(s,x,r,z,k(\cdot)) - f(s,x,r',z',k'(\cdot))| \le L\big( |r-r'| + |z-z'| \big) + \|k(\cdot)-k'(\cdot)\|_\nu.$
It is worth noting that, under Hypothesis 1, it was shown in Lemma 3.2 of [10] that the mapping $(\omega,s,y,z)\mapsto f(s,X_s(\omega),y,z,K_s(\omega,\cdot))$ is $\mathcal{P}\otimes\mathcal{B}(\mathbb{R})$-measurable if $K(\cdot)\in L^2(p)$. Furthermore, if $Y$ is a $Prog$-measurable process, then the mapping
$(\omega,s)\mapsto f(s,X_s(\omega),Y_s(\omega),Z_s(\omega),K_s(\omega,\cdot))$
is $Prog$-measurable.
Throughout the following theorem, we reveal the first main result of this paper.
Theorem 1.
Let Hypothesis 1 hold. Then, the BSDEJ (1) has a unique solution Y , Z , K ( · ) in B .
To prove the above theorem, we shall start by giving and proving the following lemmas.
Lemma 1.
Suppose that H1.1 holds and $f_r : \Omega\times[0,T]\to\mathbb{R}$ is $Prog$-measurable, such that both $h(X_T)$ and $f_r$ are square integrable. Then, the following BSDEJ
$Y_s = h(X_T) + \int_s^T f_r\,dr - \int_s^T Z_r\,dB_r - \int_s^T\!\int_\Gamma K_r(e)\,q(dr,de), \qquad (3)$
has a unique solution $(Y,Z,K(\cdot))\in\mathcal{B}$.
Proof. 
We break down the proof into two steps.
Step 1: We want to prove that there exists a process ( Y , Z , K ( · ) ) satisfying Equation (3). To do so, we consider the following martingale
$M_s = \mathbb{E}\Big[ h(X_T) + \int_0^T f_r\,dr \,\Big|\, \mathcal{F}_{0,s} \Big].$
The martingale representation property in Proposition 1 confirms that there exist two unique processes Z M 2 and K ( · ) L 2 ( p ) such that
$M_s = M_0 + \int_0^s Z_r\,dB_r + \int_0^s\!\int_\Gamma K_r(e)\,q(dr,de), \qquad s\in[0,T].$
Define the process Y as follows
$Y_s = M_s - \int_0^s f_r\,dr, \qquad s\in[0,T].$
It is worth noting that Y T = h ( X T ) ; then, a simple computation shows that BSDEJ (3) is verified. The uniqueness of Y is guaranteed by the uniqueness of Z · and K · ( · ) .
Step 2: We shall show that ( Y , Z , K ( · ) ) B . By taking the conditional expectation in (3), we arrive at
$Y_s = \mathbb{E}\Big[ h(X_T) + \int_s^T f_r\,dr \,\Big|\, \mathcal{F}_{0,s} \Big].$
Squaring both sides of the former equality and taking account of Jensen and Schwarz inequalities, we obtain
$|Y_s|^2 \le 2\,\mathbb{E}\Big[ |h(X_T)|^2 + T\int_0^T |f_r|^2\,dr \,\Big|\, \mathcal{F}_{0,s} \Big] =: \theta_s.$
Manifestly, the process $(\theta_s)_{0\le s\le T}$ is an $\mathbb{F}$-martingale. For every stopping time $\tau$ with values in $[0,T]$, the optional stopping theorem gives
$\mathbb{E}|Y_\tau|^2 \le \mathbb{E}\,\theta_\tau = \mathbb{E}\,\theta_T < +\infty. \qquad (4)$
Let us define the increasing sequence of stopping times
$\tau_n = \inf\Big\{ s\in[0,T] : |Y_s|^2 + \int_0^s |Z_r|^2\,dr + \int_0^s \|K_r(\cdot)\|_\nu^2\,dr > n \Big\} \wedge T.$
It is not difficult to check that $\tau_n$ increases to $T$. Then, applying Itô's formula for semi-martingales (see Theorem 32 in [20]) to $|Y_s|^2$ on the time interval $[s,\tau_n]$, we get
$|Y_s|^2 = |Y_{\tau_n}|^2 + 2\int_s^{\tau_n} Y_r f_r\,dr - \int_s^{\tau_n} |Z_r|^2\,dr - 2\int_s^{\tau_n} Y_r Z_r\,dB_r - 2\int_s^{\tau_n}\!\int_\Gamma Y_r K_r(e)\,q(dr,de) - \sum_{s< r\le \tau_n} |\Delta Y_r|^2. \qquad (5)$
Due to the fact that $Y K(\cdot)\in L^1(p)$, one can easily check that the process
$\int_s^t\!\int_\Gamma Y_r K_r(e)\,q(dr,de), \qquad t\in[s,T],$
is an $\mathbb{F}$-martingale. Indeed, from Young's inequality and the fact that $\sup_{t\in[0,T],\,x\in\Gamma}\nu(t,x,\Gamma)$ is finite, we obtain
$\int_s^{\tau_n}\!\int_\Gamma |Y_r K_r(e)|\,\nu(r,X_r,de)\,dr \le \frac12 \sup_{t\in[0,T],\,x\in\Gamma}\nu(t,x,\Gamma)\int_0^T |Y_r|^2\,dr + \frac12\int_0^T\!\int_\Gamma |K_r(e)|^2\,\nu(r,X_r,de)\,dr < +\infty.$
In addition, we can rewrite the last term in equality (5) as follows:
$\sum_{s<r\le\tau_n} |\Delta Y_r|^2 = \int_s^{\tau_n}\!\int_\Gamma |K_r(e)|^2\,p(dr,de) = \int_s^{\tau_n}\!\int_\Gamma |K_r(e)|^2\,q(dr,de) + \int_s^{\tau_n}\!\int_\Gamma |K_r(e)|^2\,\nu(r,X_r,de)\,dr. \qquad (6)$
Then, from (5) and (6), we obtain
$|Y_s|^2 = |Y_{\tau_n}|^2 + 2\int_s^{\tau_n} Y_r f_r\,dr - \int_s^{\tau_n} |Z_r|^2\,dr - 2\int_s^{\tau_n} Y_r Z_r\,dB_r - 2\int_s^{\tau_n}\!\int_\Gamma Y_r K_r(e)\,q(dr,de) - \int_s^{\tau_n}\!\int_\Gamma |K_r(e)|^2\,q(dr,de) - \int_s^{\tau_n} \|K_r(\cdot)\|_\nu^2\,dr. \qquad (7)$
Taking the expectation, we obtain
$\mathbb{E}|Y_s|^2 + \mathbb{E}\int_s^{\tau_n} |Z_r|^2\,dr + \mathbb{E}\int_s^{\tau_n} \|K_r(\cdot)\|_\nu^2\,dr = \mathbb{E}|Y_{\tau_n}|^2 + 2\,\mathbb{E}\int_s^{\tau_n} Y_r f_r\,dr.$
We deduce, using (4) and Young's inequality $2xy \le 2x^2 + \frac{y^2}{2}$, that
$\mathbb{E}|Y_s|^2 + \mathbb{E}\int_s^{\tau_n} |Z_r|^2\,dr + \mathbb{E}\int_s^{\tau_n} \|K_r(\cdot)\|_\nu^2\,dr \le \mathbb{E}|Y_T|^2 + (2T+1)\,\mathbb{E}\int_0^T |f_r|^2\,dr.$
Consequently,
$\sup_{s\in[0,\tau_n]} \mathbb{E}|Y_s|^2 + \mathbb{E}\int_0^{\tau_n} |Z_r|^2\,dr + \mathbb{E}\int_0^{\tau_n} \|K_r(\cdot)\|_\nu^2\,dr \le C. \qquad (8)$
Now, we turn back to (7) and, using the Burkholder–Davis–Gundy inequality together with (8), we obtain
$\mathbb{E}\Big[\sup_{0\le s\le\tau_n} |Y_s|^2\Big] + \mathbb{E}\int_0^{\tau_n} |Z_r|^2\,dr + \mathbb{E}\int_0^{\tau_n} \|K_r(\cdot)\|_\nu^2\,dr \le C,$
where $C$ is a constant that does not depend on $n$. Letting $n\to+\infty$ in the above inequality and using Fatou's lemma, we infer that
$\mathbb{E}\Big[\sup_{0\le s\le T} |Y_s|^2\Big] + \int_0^T \mathbb{E}|Z_r|^2\,dr + \int_0^T \mathbb{E}\|K_r(\cdot)\|_\nu^2\,dr < +\infty.$
Then, we conclude that $(Y,Z,K(\cdot))\in\mathcal{B}$, which completes the proof of the lemma. □
Let us define the sequence $(Y^n, Z^n, K^n(\cdot))_{n\in\mathbb{N}}$ as follows:
$Y^0 = Z^0 = K^0(\cdot) = 0,$
and $(Y^{n+1}, Z^{n+1}, K^{n+1}(\cdot))$ is the solution of the following BSDEJ:
$Y^{n+1}_s = h(X_T) + \int_s^T f(r, X_r, Y^n_r, Z^n_r, K^n_r(\cdot))\,dr - \int_s^T Z^{n+1}_r\,dB_r - \int_s^T\!\int_\Gamma K^{n+1}_r(e)\,q(dr,de), \qquad (9)$
for all $s\in[0,T]$.
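The contraction behind the scheme (9) is easiest to see on its deterministic analogue, obtained by suppressing the martingale terms: $Y^{n+1}_s = h + \int_s^T f(r, Y^n_r)\,dr$. The Python sketch below iterates this map for an illustrative Lipschitz $f$ and terminal value $h$ (both assumptions, not taken from the paper) and prints the decay of successive differences, which mirrors the estimate proved in Lemma 2 below.

```python
import numpy as np

# Deterministic analogue of the Picard scheme (9): suppress the Brownian and
# jump integrals and iterate  Y^{n+1}_s = h + int_s^T f(r, Y^n_r) dr.
# The generator f and terminal value h below are illustrative assumptions.
T, n_steps = 1.0, 2000
grid = np.linspace(0.0, T, n_steps + 1)
dr = grid[1] - grid[0]

h = 1.0
f = lambda r, y: np.cos(r) - y          # Lipschitz in y with constant L = 1

Y = np.zeros_like(grid)                 # Y^0 = 0, as in the scheme above
for n in range(8):
    integrand = f(grid, Y)
    # tail[i] approximates int_{grid[i]}^T f(r, Y^n_r) dr (right Riemann sum)
    tail = np.flip(np.cumsum(np.flip(integrand))) * dr
    Y_next = h + tail
    print(f"iteration {n + 1}: sup |Y^(n+1) - Y^n| = {np.max(np.abs(Y_next - Y)):.3e}")
    Y = Y_next
```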
Lemma 2.
Let Hypothesis 1 hold true. Then, $(Y^n, Z^n, K^n(\cdot))$ is a Cauchy sequence in the Banach space $\mathcal{B}$.
Proof. 
First, let us denote
$\delta Y^{n,m} = Y^m - Y^n, \quad \delta Z^{n,m} = Z^m - Z^n, \quad \delta K^{n,m}(\cdot) = K^m(\cdot) - K^n(\cdot),$
and
$\delta f^{n,m}_r = f(r, X_r, Y^m_r, Z^m_r, K^m_r(\cdot)) - f(r, X_r, Y^n_r, Z^n_r, K^n_r(\cdot)).$
Obviously $\delta Y^{n+1,m+1}_T = 0$, so Itô's formula applied to $e^{\beta s}|\delta Y^{n+1,m+1}_s|^2$ shows that
$2\,\mathbb{E}\int_s^T e^{\beta r}\,\delta Y^{n+1,m+1}_r\,\delta f^{n,m}_r\,dr = \mathbb{E}\big[e^{\beta s}|\delta Y^{n+1,m+1}_s|^2\big] + \beta\,\mathbb{E}\int_s^T e^{\beta r}|\delta Y^{n+1,m+1}_r|^2\,dr + \mathbb{E}\int_s^T e^{\beta r}|\delta Z^{n+1,m+1}_r|^2\,dr + \mathbb{E}\int_s^T\!\int_\Gamma e^{\beta r}|\delta K^{n+1,m+1}_r(e)|^2\,\nu(r,X_r,de)\,dr.$
From the Lipschitz condition on $f$ and the inequality $2xy \le \alpha^2 x^2 + \frac{y^2}{\alpha^2}$, we obtain
$\mathbb{E}\big[e^{\beta s}|\delta Y^{n+1,m+1}_s|^2\big] + (\beta - 3L\alpha^2)\,\mathbb{E}\int_s^T e^{\beta r}|\delta Y^{n+1,m+1}_r|^2\,dr + \mathbb{E}\int_s^T e^{\beta r}|\delta Z^{n+1,m+1}_r|^2\,dr + \mathbb{E}\int_s^T\!\int_\Gamma e^{\beta r}|\delta K^{n+1,m+1}_r(e)|^2\,\nu(r,X_r,de)\,dr \le \frac{L}{\alpha^2}\Big( \mathbb{E}\int_s^T e^{\beta r}|\delta Y^{n,m}_r|^2\,dr + \mathbb{E}\int_s^T e^{\beta r}|\delta Z^{n,m}_r|^2\,dr + \mathbb{E}\int_s^T\!\int_\Gamma e^{\beta r}|\delta K^{n,m}_r(e)|^2\,\nu(r,X_r,de)\,dr \Big).$
Choosing $\beta$ and $\alpha$ such that $\beta - 3L\alpha^2 = 1$ and $\frac{L}{\alpha^2} = \frac12$, we obtain
$\mathbb{E}\int_s^T e^{\beta r}|\delta Y^{n+1,m+1}_r|^2\,dr + \mathbb{E}\int_s^T e^{\beta r}|\delta Z^{n+1,m+1}_r|^2\,dr + \mathbb{E}\int_s^T\!\int_\Gamma e^{\beta r}|\delta K^{n+1,m+1}_r(e)|^2\,\nu(r,X_r,de)\,dr \le \frac12\Big( \mathbb{E}\int_s^T e^{\beta r}|\delta Y^{n,m}_r|^2\,dr + \mathbb{E}\int_s^T e^{\beta r}|\delta Z^{n,m}_r|^2\,dr + \mathbb{E}\int_s^T\!\int_\Gamma e^{\beta r}|\delta K^{n,m}_r(e)|^2\,\nu(r,X_r,de)\,dr \Big).$
Again using Itô's formula, the Burkholder–Davis–Gundy inequality and Gronwall's lemma, it follows that, for all $m > n$, there exists a universal constant $M$ such that
$\mathbb{E}\Big[\sup_{s\in[0,T]} e^{\beta s}|\delta Y^{n+1,m+1}_s|^2\Big] + \mathbb{E}\int_0^T e^{\beta r}|\delta Z^{n,m}_r|^2\,dr + \mathbb{E}\int_0^T\!\int_\Gamma e^{\beta r}|\delta K^{n,m}_r(e)|^2\,\nu(r,X_r,de)\,dr \le \frac{M}{2^n}.$
Hence, ( Y n , Z n , K r n ( · ) ) is a Cauchy sequence in the Banach space B . □
(Proof of Theorem 1). 
Now, we turn to giving the proof of the first main result of this section.
Existence: Thanks to Lemma 1, the sequence ( Y n , Z n , K n ( · ) ) in (9) is well defined and due to Lemma 2 ( Y n , Z n , K n ( · ) ) is a Cauchy sequence in the Banach space B .
Set ( Y , Z , K ( · ) ) : = lim n + ( Y n , Z n , K n ( · ) ) , using classical limit arguments one can check that ( Y , Z , K ( · ) ) is a solution of BSDEJ (1).
Uniqueness: To prove uniqueness, let us consider $(Y,Z,K(\cdot))$ and $(Y',Z',K'(\cdot))$, two solutions of Equation (1), and set
$\bar Y_s = Y_s - Y'_s, \quad \bar Z_s = Z_s - Z'_s, \quad \bar K_s(\cdot) = K_s(\cdot) - K'_s(\cdot),$
and
$\bar f_s = f(s, X_s, Y_s, Z_s, K_s(\cdot)) - f(s, X_s, Y'_s, Z'_s, K'_s(\cdot)).$
Noting that $\bar Y_T = 0$, Itô's formula applied to $|\bar Y_s|^2$ gives, for all $s\in[0,T]$,
$\mathbb{E}|\bar Y_s|^2 + \mathbb{E}\int_s^T |\bar Z_r|^2\,dr + \mathbb{E}\int_s^T \|\bar K_r(\cdot)\|_\nu^2\,dr = 2\,\mathbb{E}\int_s^T \bar Y_r \bar f_r\,dr.$
Since f is Lipschitz, we obtain
$\mathbb{E}|\bar Y_s|^2 + \mathbb{E}\int_s^T |\bar Z_r|^2\,dr + \mathbb{E}\int_s^T \|\bar K_r(\cdot)\|_\nu^2\,dr \le 2L\,\mathbb{E}\int_s^T |\bar Y_r|\big( |\bar Y_r| + |\bar Z_r| + \|\bar K_r(\cdot)\|_\nu \big)\,dr.$
From the inequality $2xy \le \alpha^2 x^2 + \frac{y^2}{\alpha^2}$, we obtain
$\mathbb{E}|\bar Y_s|^2 + \int_s^T \mathbb{E}|\bar Z_r|^2\,dr + \int_s^T \mathbb{E}\|\bar K_r(\cdot)\|_\nu^2\,dr \le 2L\int_s^T \mathbb{E}|\bar Y_r|^2\,dr + \alpha^2 L\int_s^T \mathbb{E}|\bar Y_r|^2\,dr + \frac{L}{\alpha^2}\int_s^T \mathbb{E}|\bar Z_r|^2\,dr + \alpha^2 L\int_s^T \mathbb{E}|\bar Y_r|^2\,dr + \frac{L}{\alpha^2}\int_s^T \mathbb{E}\|\bar K_r(\cdot)\|_\nu^2\,dr.$
Hence
$\mathbb{E}|\bar Y_s|^2 + \Big(1 - \frac{L}{\alpha^2}\Big)\Big( \int_s^T \mathbb{E}|\bar Z_r|^2\,dr + \int_s^T \mathbb{E}\|\bar K_r(\cdot)\|_\nu^2\,dr \Big) \le 2L(1+\alpha^2)\int_s^T \mathbb{E}|\bar Y_r|^2\,dr.$
If we choose $\alpha$ such that $\frac{L}{\alpha^2} = \frac12$, we obtain after a simple computation
$\mathbb{E}|\bar Y_s|^2 + \frac12\int_s^T \mathbb{E}|\bar Z_r|^2\,dr + \frac12\int_s^T \mathbb{E}\|\bar K_r(\cdot)\|_\nu^2\,dr \le 2L(1+2L)\int_s^T \mathbb{E}|\bar Y_r|^2\,dr.$
The uniqueness of the solution follows immediately using Gronwall’s Lemma. □

3.2. Comparison Principle

In this subsection, we shall compare the solutions of two BSDEJs whenever we can compare their inputs which are described by their generators and terminal conditions. To this end, we consider the following assumptions on the generator f:
Hypothesis 2: 
Hypothesis 2.1
(H2.1). There exists $L\ge0$ such that, for every $s\in[0,T]$, $x\in\Gamma$, $r,r'\in\mathbb{R}$, $z,z'\in\mathbb{R}$, $k(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, we have
$|f(s,x,r,z,k(\cdot)) - f(s,x,r',z',k(\cdot))| \le L\big( |r-r'| + |z-z'| \big).$
Hypothesis 2.2
(H2.2). There exist two constants $a$ and $b$, with $-1 < a < 0$ and $b > 0$, such that for every $s\in[0,T]$, $x\in\Gamma$, $r,z\in\mathbb{R}$ and $k(\cdot),k'(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, we have
$f(s,x,r,z,k(\cdot)) - f(s,x,r,z,k'(\cdot)) \le \int_\Gamma \big( k(e) - k'(e) \big)\varphi_s(e)\,\nu(s,x,de),$
where $\varphi : \Omega\times[0,T]\times\Gamma\to\mathbb{R}$ is $\mathcal{P}\otimes\mathcal{E}$-measurable and satisfies $a < \varphi_s(\cdot) < b$.
Theorem 2 (Comparison Theorem).
Let $h_1$ and $h_2$ be two $\mathcal{E}$-measurable final conditions for two BSDEJs driven by $f_1$ and $f_2$, respectively, such that $f_1$ satisfies H1.1–H1.5 and $f_2$ satisfies H1.1–H1.3, H1.5 and H2.1–H2.2. We denote by $(Y^1,Z^1,K^1(\cdot))$ and $(Y^2,Z^2,K^2(\cdot))$ the associated solutions in $\mathcal{B}$.
(i) If
$h_1(X_T) \le h_2(X_T)\ \ P\text{-a.s.} \quad\text{and}\quad f_1(s,x,y,z,k(\cdot)) \le f_2(s,x,y,z,k(\cdot)),$
then
$Y^1_s \le Y^2_s, \qquad s\in[0,T],\ P\text{-a.s.} \qquad (10)$
(ii) Assume that the function $\varphi_s(\cdot)$ defined in H2.2 is nonnegative and that, for all $(s,x,y,z)\in[0,T]\times\Gamma\times\mathbb{R}\times\mathbb{R}$ and $k(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, we have
$f_2(s,x,y,z,k(\cdot)) = \lambda\big( 1 + |y| + |z| + \|k\varphi_s(\cdot)\|_\nu \big).$
Then we obtain (10).
(iii) If $Y^1_0 = Y^2_0$ $P$-a.s., then $Y^1_s = Y^2_s$ for all $s\in[0,T]$ $P$-a.s., $Z^1_s = Z^2_s$ $ds\otimes dP$-a.e. and $K^1_s(e) = K^2_s(e)$ $\nu(s,x,de)\,ds\otimes dP$-a.e.
Proof. 
Let us first prove (i). We denote $\delta Y_s = Y^1_s - Y^2_s$, $\delta Z_s = Z^1_s - Z^2_s$, $\delta K_s(\cdot) = K^1_s(\cdot) - K^2_s(\cdot)$ and $\delta h = h_1(X_T) - h_2(X_T)$; thus, the difference of the two solutions can be decomposed as follows:
$\delta Y_s = \delta h + \int_s^T \big[ f_1(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) \big]dr + \int_s^T \big[ f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^1_r,K^2_r(\cdot)) \big]dr + \int_s^T \big[ f_2(r,X_r,Y^1_r,Z^1_r,K^2_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^2_r,K^2_r(\cdot)) \big]dr + \int_s^T \big[ f_2(r,X_r,Y^1_r,Z^2_r,K^2_r(\cdot)) - f_2(r,X_r,Y^2_r,Z^2_r,K^2_r(\cdot)) \big]dr - \int_s^T \delta Z_r\,dB_r - \int_s^T\!\int_\Gamma \delta K_r(e)\,q(dr,de). \qquad (11)$
We denote $\Lambda_r = e^{\int_0^r \beta_u\,du}$, where
$\beta_u = \dfrac{f_2(u,X_u,Y^1_u,Z^2_u,K^2_u(\cdot)) - f_2(u,X_u,Y^2_u,Z^2_u,K^2_u(\cdot))}{Y^1_u - Y^2_u}\ \text{ if } Y^1_u - Y^2_u \ne 0,\quad \text{and } \beta_u = 0 \text{ otherwise.}$
We apply Itô's formula to $\Lambda_s\,\delta Y_s$ between $s$ and $T$ to obtain:
$\Lambda_s\,\delta Y_s = \Lambda_T\,\delta h + \int_s^T \Lambda_r\big[ f_1(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) \big]dr + \int_s^T \Lambda_r\big[ f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^1_r,K^2_r(\cdot)) \big]dr + \int_s^T \Lambda_r\big[ f_2(r,X_r,Y^1_r,Z^1_r,K^2_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^2_r,K^2_r(\cdot)) \big]dr - \int_s^T \Lambda_r\,\delta Z_r\,dB_r - \int_s^T\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de).$
Using H2.2 and the fact that
$\Lambda_T\,\delta h + \int_s^T \Lambda_r\big[ f_1(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) \big]dr \le 0,$
we obtain
$\Lambda_s\,\delta Y_s \le \int_s^T\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,\varphi_r(e)\,\nu(r,X_r,de)\,dr + \lambda\int_s^T \Lambda_r\,|\delta Z_r|\,dr - \int_s^T\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de) - \int_s^T \Lambda_r\,\delta Z_r\,dB_r. \qquad (12)$
Set
$M_s = \int_0^s\!\int_\Gamma \varphi_r(e)\,q(dr,de) + \lambda\int_0^s \mathrm{sgn}(\delta Z_r)\,dB_r$
and
$U_s = \int_0^s\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de) + \int_0^s \Lambda_r\,\delta Z_r\,dB_r.$
Thus,
$\tilde U_s = \int_0^s\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de) - \int_0^s\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,\varphi_r(e)\,\nu(r,X_r,de)\,dr + \int_0^s \Lambda_r\,\delta Z_r\,dB_r - \lambda\int_0^s \Lambda_r\,|\delta Z_r|\,dr.$
Girsanov's theorem (see Proposition 2) shows that the process $\tilde U$ is a martingale under the probability measure $dQ := \mathcal{E}_T(M)\,dP$; taking the conditional expectation under $Q$ on both sides of (12), we obtain $\Lambda_s\,\delta Y_s \le 0$ $Q$-a.s. and thus $P$-a.s. Then, $Y^1_s \le Y^2_s$, $s\in[0,T]$, $P$-a.s.
Next, we proceed to prove (ii). Arguing as in the proof of assertion (i), one can easily show that
$\Lambda_s\,\delta Y_s \le \int_s^T\!\int_\Gamma \Lambda_r\big( K^1_r(e) - K^2_r(e) \big)\varphi_r(e)\,\nu(r,X_r,de)\,dr + \int_s^T \Lambda_r\,|\delta Z_r|\,dr - \int_s^T\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de) - \int_s^T \Lambda_r\,\delta Z_r\,dB_r = \int_s^T\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,\varphi_r(e)\,\nu(r,X_r,de)\,dr + \int_s^T \Lambda_r\,|\delta Z_r|\,dr - \int_s^T\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de) - \int_s^T \Lambda_r\,\delta Z_r\,dB_r. \qquad (13)$
Define the new martingale
$N_s = \int_0^s\!\int_\Gamma \mathrm{sgn}\big(\delta K_r(e)\big)\,\varphi_r(e)\,q(dr,de) + \lambda\int_0^s \mathrm{sgn}(\delta Z_r)\,dB_r.$
Again using Girsanov's theorem, it is not difficult to see that
$\hat K_s = \int_0^s\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de) - \int_0^s\!\int_\Gamma \Lambda_r\,|\delta K_r(e)|\,\varphi_r(e)\,\nu(r,X_r,de)\,dr + \int_0^s \Lambda_r\,\delta Z_r\,dB_r - \lambda\int_0^s \Lambda_r\,|\delta Z_r|\,dr$
is an $\mathbb{F}$-martingale under the probability measure $d\hat Q := \mathcal{E}_T(N)\,dP$, and thus the result follows immediately by taking the conditional expectation under the probability measure $\hat Q$ on both sides of inequality (13).
To prove (iii), we turn back to (11) and take $s = 0$; thus,
$\Lambda_T\big( h_2 - h_1 \big) + \int_0^T \Lambda_r\big[ f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_1(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) \big]dr = \int_0^T \Lambda_r\big[ f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^1_r,K^2_r(\cdot)) \big]dr + \int_0^T \Lambda_r\big[ f_2(r,X_r,Y^1_r,Z^1_r,K^2_r(\cdot)) - f_2(r,X_r,Y^1_r,Z^2_r,K^2_r(\cdot)) \big]dr - \int_0^T \Lambda_r\,\delta Z_r\,dB_r - \int_0^T\!\int_\Gamma \Lambda_r\,\delta K_r(e)\,q(dr,de).$
Since the right-hand side of the above equality is an $\mathbb{F}$-martingale under the probability measure $d\hat Q := \mathcal{E}_T(N)\,dP$, taking the expectation we obtain
$\mathbb{E}_{\hat Q}\big[ \Lambda_T ( h_2 - h_1 ) \big] = 0$
and
$\mathbb{E}_{\hat Q}\Big[ \int_0^T \Lambda_r\big( f_2(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) - f_1(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot)) \big)dr \Big] = 0.$
Hence, h 2 = h 1 P -a.s. and f 1 = f 2 d t d P -a.e., which implies that Y s 1 = Y s 2 P -a.s. for all s [ 0 , T ] . Therefore, Z s 1 = Z s 2 d s d P -a.e. and K s 1 ( e ) = K s 2 ( e )   ν ( s , x , d e ) d s d P -a.e. □

4. BSDEJs with Non-Lipschitz Generators

4.1. BSDEJs with Continuous Coefficients

The purpose of this subsection is to prove an existence result for BSDEJ (1) covering the case where the generator $f$ is continuous in $(y,z)$, Lipschitz in $k(\cdot)$, and satisfies the following linear growth type condition:
Hypothesis 3: 
Hypothesis 3.1
(H3.1). For all $(s,\omega,x,y,z)\in[0,T]\times\Omega\times\Gamma\times\mathbb{R}\times\mathbb{R}$ and $k(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, we have
$|f(s,x,y,z,k(\cdot))| \le \lambda\big( 1 + |y| + |z| + \|k\varphi_s(\cdot)\|_\nu \big),$
where the function φ s ( · ) is defined in Theorem 2 (ii).
Theorem 3.
Let Assumptions H1.1–H1.3, H3.1 and H2.2 hold true. Then, BSDEJ (1) has a (minimal) solution $(Y,Z,K(\cdot))$.
To prove this theorem, we use an approximation of the generator by increasing sequences of Lipschitz functions f n n 0 defined in the following lemma.
Lemma 3.
For all $n\ge\lambda$, we consider the sequence $(f_n)_{n\ge\lambda}$ defined by
$f_n(s,x,y,z,k(\cdot)) = \inf_{(a,b)\in\mathbb{Q}\times\mathbb{Q}}\big\{ f(s,x,a,b,k(\cdot)) + n\big( |a-y| + |b-z| \big)\big\}.$
The sequence $(f_n)_{n\ge\lambda}$ has the following properties: for all $(s,x)\in[0,T]\times\Gamma$ and $(y,z,k(\cdot)), (y',z',k'(\cdot))\in\mathbb{R}\times\mathbb{R}\times L^2(\Gamma,\mathcal{E},\nu(s,x,de))$:
  • (A1) There exists $n_0$ such that, for all $n\ge n_0$,
    $|f_n(s,x,y,z,k(\cdot)) - f_n(s,x,y',z',k'(\cdot))| \le n\big( |y-y'| + |z-z'| \big) + \|k(\cdot)-k'(\cdot)\|_\nu;$
  • (A2) $f_{n+1}(s,x,y,z,k(\cdot)) \ge f_n(s,x,y,z,k(\cdot));$
  • (A3) There exist two constants $a$ and $b$, with $-1 < a < 0$ and $b > 0$, such that for every $s\in[0,T]$, $x\in\Gamma$, $r,z\in\mathbb{R}$ and $k(\cdot),k'(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, we have
    $f_n(s,x,r,z,k(\cdot)) - f_n(s,x,r,z,k'(\cdot)) \le \int_\Gamma \big( k(e)-k'(e) \big)\varphi_s(e)\,\nu(s,x,de),$
    where $\varphi : \Omega\times[0,T]\times\Gamma\to\mathbb{R}$ is $\mathcal{P}\otimes\mathcal{E}$-measurable and satisfies $a < \varphi_s(e) < b$;
  • (A4) $|f_n(s,x,y,z,k(\cdot))| \le \lambda\big( 1 + |y| + |z| + \|(k\varphi_s)(\cdot)\|_\nu \big);$
  • (A5) If $\lim_{n\to+\infty}(y_n,z_n) = (y,z)$, then $\lim_{n\to+\infty} f_n(s,x,y_n,z_n,k(\cdot)) = f(s,x,y,z,k(\cdot)).$
Proof. 
The proof can be performed as that of Lemma 1 in [3]. □
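The approximation of Lemma 3 is the classical inf-convolution of Lepeltier and San Martin. The following Python sketch computes it on a finite grid (standing in for $\mathbb{Q}$) for a one-dimensional continuous, non-Lipschitz generator and checks numerically that each $f_n$ is $n$-Lipschitz and that the sequence increases to $f$; the particular generator and the grid are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Inf-convolution approximation of a continuous generator (one-dimensional):
#   f_n(y) = inf_a { f(a) + n |a - y| }.
# The generator f below and the finite grid replacing Q are illustrative.
f = lambda y: np.sign(y) * np.sqrt(np.abs(y)) + 1.0   # continuous, not Lipschitz
grid_a = np.linspace(-10.0, 10.0, 4001)
f_a = f(grid_a)

def f_n(y, n):
    y = np.atleast_1d(y)
    # rows: evaluation points y, columns: minimisation variable a
    return np.min(f_a[None, :] + n * np.abs(grid_a[None, :] - y[:, None]), axis=1)

ys = np.linspace(-3.0, 3.0, 601)
for n in (1, 2, 4, 8, 16):
    vals = f_n(ys, n)
    lip = np.max(np.abs(np.diff(vals)) / np.diff(ys))  # empirical Lipschitz constant
    gap = np.max(f(ys) - vals)                          # monotone convergence to f
    print(f"n = {n:2d}: Lipschitz constant <= {lip:5.2f}, sup(f - f_n) = {gap:.3f}")
```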
We note that, for all $n\ge\lambda$, the function $f_n$ verifies Hypothesis 1, which implies that there exists a unique solution $(Y^n, Z^n, K^n(\cdot))$ of the BSDEJ with data $(f_n, h(X_T))$. We now establish a priori estimates on the sequence $(Y^n, Z^n, K^n(\cdot))$.
Lemma 4.
There exists a constant $C>0$, depending only on $h$, $T$ and $\lambda$, such that
$\sup_{n\ge\lambda}\Big( \mathbb{E}\Big[\sup_{s\in[0,T]} |Y^n_s|^2\Big] + \int_0^T \mathbb{E}|Z^n_r|^2\,dr + \int_0^T \mathbb{E}\|K^n_r(\cdot)\|_\nu^2\,dr \Big) \le C.$
Proof. 
From Itô's formula applied to $|Y^n_s|^2$, it follows that
$|Y^n_s|^2 = |h(X_T)|^2 + 2\int_s^T Y^n_r\,f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot))\,dr - 2\int_s^T\!\int_\Gamma Y^n_r K^n_r(e)\,q(dr,de) - 2\int_s^T Y^n_r Z^n_r\,dB_r - \int_s^T |Z^n_r|^2\,dr - \int_s^T\!\int_\Gamma |K^n_r(e)|^2\,q(dr,de) - \int_s^T \|K^n_r(\cdot)\|_\nu^2\,dr. \qquad (14)$
Taking the expectation on both sides of the previous equality, we obtain
$\mathbb{E}|Y^n_s|^2 + \int_s^T \mathbb{E}|Z^n_r|^2\,dr + \int_s^T \mathbb{E}\|K^n_r(\cdot)\|_\nu^2\,dr = \mathbb{E}|h(X_T)|^2 + 2\int_s^T \mathbb{E}\big[ Y^n_r\,f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot)) \big]dr.$
Therefore, we obtain, from (A3), Young's inequality $2xy \le 2x^2 + \frac{y^2}{2}$ and H2.1,
$\mathbb{E}|Y^n_s|^2 + \mathbb{E}\int_s^T |Z^n_r|^2\,dr + \mathbb{E}\int_s^T \|K^n_r(\cdot)\|_\nu^2\,dr \le 2C + 2\lambda^2 T + 2\Big( \frac12 + 4\lambda^2 + 2\lambda \Big)\mathbb{E}\int_s^T |Y^n_r|^2\,dr.$
Hence, Gronwall’s lemma yields
$\sup_{s\in[0,T]} \mathbb{E}|Y^n_s|^2 + \mathbb{E}\int_0^T |Z^n_r|^2\,dr + \mathbb{E}\int_0^T \|K^n_r(\cdot)\|_\nu^2\,dr \le C.$
Now, returning to (14), we use (A3), Young's inequality $2xy \le 2x^2 + \frac{y^2}{2}$ and the Burkholder–Davis–Gundy inequality to obtain
$\mathbb{E}\Big[\sup_{s\in[0,T]} |Y^n_s|^2\Big] + \int_0^T \mathbb{E}|Z^n_r|^2\,dr + \int_0^T \mathbb{E}\|K^n_r(\cdot)\|_\nu^2\,dr \le \mathbb{E}|h(X_T)|^2 + 2\lambda^2 T + \Big( \frac12 + 2\lambda^2 + 2\lambda \Big)\int_0^T \mathbb{E}|Y^n_s|^2\,ds + \frac12\Big( \int_0^T \mathbb{E}|Z^n_r|^2\,dr + \int_0^T \mathbb{E}\|K^n_r(\cdot)\|_\nu^2\,dr \Big) + 4C\kappa\,\mathbb{E}\Big[\sup_{s\in[0,T]} |Y^n_s|^2\Big] + 4\kappa C\Big( \int_0^T \mathbb{E}|Z^n_r|^2\,dr + \int_0^T \mathbb{E}\|K^n_r(\cdot)\|_\nu^2\,dr \Big).$
Then
$\big( 1 - 4C\kappa \big)\Big( \mathbb{E}\Big[\sup_{t\in[0,T]} |Y^n_t|^2\Big] + \int_0^T \mathbb{E}|Z^n_r|^2\,dr + \int_0^T \mathbb{E}\|K^n_r(\cdot)\|_\nu^2\,dr \Big) \le C' + 2\lambda^2 T + \Big( \frac12 + 4\kappa C + \frac12 + 2\lambda^2 + 2\lambda \Big)\int_0^T \mathbb{E}\Big[\sup_{t\in[0,r]} |Y^n_t|^2\Big]\,dr.$
Finally, Gronwall’s lemma gives the desired result. □
(Proof of Theorem 3). 
We split the proof into the following three steps
Step 1: In this step, we prove that there exists a process $Y\in\mathcal{S}^2$ as the limit of the non-decreasing sequence $(Y^n)$. Set
$g(s,x,y,z,k(\cdot)) = \lambda\big( 1 + |y| + |z| + \|k\varphi_s(\cdot)\|_\nu \big).$
Let $(Y',Z',K'(\cdot))$ be the unique solution of the BSDEJ with data $(g, h(X_T))$, which exists by Theorem 1. Recall that, for each $n$, $(Y^n,Z^n,K^n(\cdot))$ is the unique solution of the BSDEJ with data $(f_n, h(X_T))$. Now, thanks to (A2) and Theorem 2, the sequence $(Y^n_s)_n$ is non-decreasing and bounded above by $Y'_s$. Therefore, there exists a stochastic process $Y$ given as the limit of the sequence: $Y_s = \lim_{n\to+\infty} Y^n_s$. From Lemma 4, we have $\mathbb{E}[\sup_{s\in[0,T]} |Y^n_s|^2]\le C$; then, Fatou's lemma gives
$\mathbb{E}\Big[\sup_{s\in[0,T]} |Y_s|^2\Big] = \mathbb{E}\Big[\sup_{s\in[0,T]} \big|\lim_{n\to+\infty} Y^n_s\big|^2\Big] \le \liminf_{n\to+\infty} \mathbb{E}\Big[\sup_{s\in[0,T]} |Y^n_s|^2\Big] \le C,$
which implies that $Y\in\mathcal{S}^2$. Then, from Lebesgue's dominated convergence theorem, we obtain
$\lim_{n\to+\infty} \int_0^T \mathbb{E}|Y^n_r - Y_r|^2\,dr = 0.$
Step 2: In this step, we shall prove that Z n , K n ( · ) n 1 is a Cauchy sequence on M 2 × L 2 ( Γ , E , ν ( · , x , d e ) ) .
Using Itô's formula and Hölder's inequality, we obtain, for $n,m\ge1$:
$\mathbb{E}|Y^n_s - Y^m_s|^2 + \int_s^T \mathbb{E}|Z^n_r - Z^m_r|^2\,dr + \int_s^T \mathbb{E}\|K^n_r(\cdot) - K^m_r(\cdot)\|_\nu^2\,dr \le 2\Big( \int_s^T \mathbb{E}|Y^n_r - Y^m_r|^2\,dr \Big)^{1/2}\Big( \int_s^T \mathbb{E}|f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot))|^2\,dr \Big)^{1/2} + 2\Big( \int_s^T \mathbb{E}|Y^n_r - Y^m_r|^2\,dr \Big)^{1/2}\Big( \int_s^T \mathbb{E}|f_m(r,X_r,Y^m_r,Z^m_r,K^m_r(\cdot))|^2\,dr \Big)^{1/2}.$
Thanks to (A3), Lemma 4 and the inequality $(a+b+c+d)^2 \le 4(a^2+b^2+c^2+d^2)$, we obtain
$\int_s^T \mathbb{E}|f_m(r,X_r,Y^m_r,Z^m_r,K^m_r(\cdot))|^2\,dr \le 4\lambda^2\Big( \int_s^T dr + \int_s^T \mathbb{E}|Y^m_r|^2\,dr + \int_s^T \mathbb{E}|Z^m_r|^2\,dr + \int_s^T \mathbb{E}\|K^m_r(\cdot)\|_\nu^2\,dr \Big) \le 4\lambda^2\big( T + \tilde C \big) =: C.$
Thus,
$\mathbb{E}|Y^n_s - Y^m_s|^2 + \int_s^T \mathbb{E}|Z^n_r - Z^m_r|^2\,dr + \int_s^T \mathbb{E}\|K^n_r(\cdot) - K^m_r(\cdot)\|_\nu^2\,dr \le C\Big( \int_s^T \mathbb{E}|Y^n_r - Y^m_r|^2\,dr \Big)^{1/2}.$
The term on the right-hand side of the previous inequality tends to 0 as $n$ and $m$ go to infinity; therefore, $(Z^n, K^n(\cdot))_{n\ge1}$ is a Cauchy sequence in $\mathcal{M}^2\times L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, and thus there exists a process $(Y,Z,K(\cdot))\in\mathcal{B}$ which is the limit of the sequence $(Y^n,Z^n,K^n(\cdot))$.
Step 3: In this step, we will show that $(Y,Z,K(\cdot))$ satisfies (1). As $n$ tends towards infinity, $(Y^n,Z^n,K^n(\cdot))$ converges to $(Y,Z,K(\cdot))$ in the space $\mathcal{B}$; hence, up to a subsequence, the convergence also holds $dt\otimes dP$-a.e. Then, from (A5), we have
$\lim_{n\to+\infty} f_n(s,X_s,Y^n_s,Z^n_s,K_s(\cdot)) = f(s,X_s,Y_s,Z_s,K_s(\cdot)) \quad P\text{-a.s.}$
Set $(G_s, H_s) = \sup_{n\ge\lambda}(|Y^n_s|, |Z^n_s|)$; then, from (A4), we obtain
$\sup_{n\ge\lambda} |f_n(s,X_s,Y^n_s,Z^n_s,K_s(\cdot))| \le \lambda\big( 1 + G_s + H_s + \|K_s\varphi_s(\cdot)\|_\nu \big) \in L^1(\Omega).$
By subtracting and adding $f_n(r,X_r,Y^n_r,Z^n_r,K_r(\cdot))$, we obtain
$\int_0^T \mathbb{E}|f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot)) - f(r,X_r,Y_r,Z_r,K_r(\cdot))|^2\,dr \le 2\int_0^T \mathbb{E}|f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot)) - f_n(r,X_r,Y^n_r,Z^n_r,K_r(\cdot))|^2\,dr + 2\int_0^T \mathbb{E}|f_n(r,X_r,Y^n_r,Z^n_r,K_r(\cdot)) - f(r,X_r,Y_r,Z_r,K_r(\cdot))|^2\,dr.$
Since $f_n$ is Lipschitz in $k(\cdot)$, the first term on the right-hand side of the above inequality tends to 0 as $n$ goes to infinity; then, the dominated convergence theorem yields the convergence of the second term to 0. Therefore, there exists a subsequence (still indexed by $n$) such that
$\lim_{n\to+\infty} \int_0^T f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot))\,dr = \int_0^T f(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr \quad P\text{-a.s.}$
Then, the Burkholder–Davis–Gundy inequality leads to
$\lim_{n\to+\infty} \int_0^T Z^n_r\,dB_r = \int_0^T Z_r\,dB_r$
and
$\lim_{n\to+\infty} \int_0^T\!\int_\Gamma K^n_r(e)\,q(dr,de) = \int_0^T\!\int_\Gamma K_r(e)\,q(dr,de).$
It remains to verify that $\lim_{n\to+\infty} Y^n_s = Y_s$ $P$-a.s. We have
$\mathbb{E}|Y^n_s - Y_s|^2 = \mathbb{E}\Big| \int_s^T f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot))\,dr - \int_s^T Z^n_r\,dB_r - \int_s^T\!\int_\Gamma K^n_r(e)\,q(dr,de) - \int_s^T f(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr + \int_s^T Z_r\,dB_r + \int_s^T\!\int_\Gamma K_r(e)\,q(dr,de) \Big|^2.$
Since $(a+b+c)^2 \le 3(a^2+b^2+c^2)$, the above equality becomes
$\mathbb{E}|Y^n_s - Y_s|^2 \le 3\,\mathbb{E}\Big| \int_s^T \big[ f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot)) - f(r,X_r,Y_r,Z_r,K_r(\cdot)) \big]dr \Big|^2 + 3\,\mathbb{E}\Big| \int_s^T \big( Z^n_r - Z_r \big)dB_r \Big|^2 + 3\,\mathbb{E}\Big| \int_s^T\!\int_\Gamma \big( K^n_r(e) - K_r(e) \big)q(dr,de) \Big|^2.$
All three terms on the right-hand side of the above inequality converge to zero as $n$ goes to infinity; hence, the theorem is proved. □

4.2. On the Set of Solutions of BSDEJ

In this subsection, we turn our attention to the set of solutions of a one-dimensional BSDEJ when the generator is assumed to be continuous and of linear growth in $(y,z,k(\cdot))$. We prove that Equation (1) then has either one or uncountably many solutions. We denote by $(Y^{\max}, Z^{\max}, K^{\max}(\cdot))$ the maximal solution and by $(Y^{\min}, Z^{\min}, K^{\min}(\cdot))$ the minimal solution of BSDEJ (1).
Hypothesis 3.2
(H3.2). Assume that, for every $s\in[0,T]$ and $x\in\Gamma$, the mapping $(r,z,k(\cdot))\mapsto f(s,x,r,z,k(\cdot))$ is continuous and that there exists $L\ge0$ such that, for every $r,z\in\mathbb{R}$ and $k(\cdot),k'(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$,
$|f(s,x,r,z,k(\cdot)) - f(s,x,r,z,k'(\cdot))| \le L\,\|k(\cdot) - k'(\cdot)\|_\nu.$
Theorem 4.
We assume that H1.1–H1.4 and H3.1–H3.2 hold true. Then, for each $t_0\in[0,T]$ and $\xi\in L^2(\Omega,\mathcal{F}_{t_0},P)$ such that $Y^{\min}_{t_0} \le \xi \le Y^{\max}_{t_0}$ a.s., there exists at least one solution $(Y,Z,K(\cdot))\in\mathcal{B}$ to BSDEJ (1) satisfying $Y_{t_0} = \xi$.
Proof. 
We consider the following BSDEJ: for any $t\in[0,t_0]$,
$Y^1_t = \xi + \int_t^{t_0} f(r,X_r,Y^1_r,Z^1_r,K^1_r(\cdot))\,dr - \int_t^{t_0} Z^1_r\,dB_r - \int_t^{t_0}\!\int_\Gamma K^1_r(e)\,q(dr,de).$
From Theorem 3, the previous equation has at least one solution $(Y^1,Z^1,K^1(\cdot))$. We also consider the following SDE:
$Y^2_t = \xi - \int_{t_0}^t f(r,X_r,Y^2_r,Z^2_r,K^2_r(\cdot))\,dr + \int_{t_0}^t Z^2_r\,dB_r + \int_{t_0}^t\!\int_\Gamma K^2_r(e)\,q(dr,de), \qquad (16)$
for a fixed $(Z^2,K^2(\cdot))\in\mathcal{M}^2\times L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, and let $(Y^2_t)_{t\in[t_0,T]}$ be a strong solution to SDE (16). Now, define the stopping time $\tau = \inf\{ t\ge t_0 : Y^2_t \notin (Y^{\min}_t, Y^{\max}_t) \}$, recalling that $Y^{\min}_T = Y^{\max}_T$. Then
$(Y_t, Z_t, K_t(\cdot)) = \mathbf{1}_{[0,t_0]}(t)\,(Y^1_t, Z^1_t, K^1_t(\cdot)) + \mathbf{1}_{(t_0,\tau]}(t)\,(Y^2_t, Z^2_t, K^2_t(\cdot)) + \mathbf{1}_{(\tau,T]}(t)\,(Y^{\max}_t, Z^{\max}_t, K^{\max}_t(\cdot))\,\mathbf{1}_{\{Y_\tau = Y^{\max}_\tau\}} + \mathbf{1}_{(\tau,T]}(t)\,(Y^{\min}_t, Z^{\min}_t, K^{\min}_t(\cdot))\,\mathbf{1}_{\{Y_\tau < Y^{\max}_\tau\}}$
is a solution to BSDEJ (1) with $Y_T = h(X_T)$ and $Y_{t_0} = \xi$. □
This result extends to BSDEs with jumps the result obtained by Jia and Peng [22] in the Brownian setting.

4.3. BSDEJ with Left Continuous and Increasing Coefficients

The aim of this subsection is to prove that BSDEJ (1) has at least one solution, which belongs to the Banach space B , assuming that f is only left-continuous in y and bounded. We fix a deterministic terminal time T > 0 and we assume further that:
Hypothesis 3.3
(H3.3). There exists $L\ge0$ such that, for every $s\in[0,T]$, $x\in\Gamma$, $r\in\mathbb{R}$, $z,z'\in\mathbb{R}$ and $k(\cdot),k'(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, we have
$|f(s,x,r,z,k(\cdot)) - f(s,x,r,z',k'(\cdot))| \le L\big( |z-z'| + \|k(\cdot)-k'(\cdot)\|_\nu \big).$
Hypothesis 3.4
( H 3.4 ). The function y f ( s , x , y , z , k ( · ) ) is left-continuous and increasing.
Hypothesis 3.5
(H3.5). There exists $M > 0$ such that, for all $(s,x,y,z,k(\cdot))$, $|f(s,x,y,z,k(\cdot))| \le M$.
Theorem 5.
Let Assumptions H 1.1 H 1.4 , H 2.2 and H 3.2 H 3.5 hold true. Then, BSDEJ (1) has at least one solution Y , Z , K ( · ) B .
To prove this theorem, we use the classical approximation of the generator $f$ by convolution. We define $(f_n)_{n\ge0}$ by
$f_n(s,x,y,z,k(\cdot)) = n\int_{y-\frac1n}^{y} f(s,x,r,z,k(\cdot))\,dr,$
a numerical illustration of which is given after the list of properties below.
The sequence $(f_n)_{n\ge0}$ enjoys the following properties:
P1. There exists a constant $C_n\ge0$ for each $n$ such that
$|f_n(s,x,y,z,k(\cdot)) - f_n(s,x,y',z',k'(\cdot))| \le C_n\big( |y-y'| + |z-z'| + \|k(\cdot)-k'(\cdot)\|_\nu \big);$
P2. The sequence $(f_n)_{n\ge0}$ is increasing;
P3. There exist two constants $a$ and $b$, with $-1 < a < 0$ and $b > 0$, such that for every $s\in[0,T]$, $x\in\Gamma$, $r,z\in\mathbb{R}$ and $k(\cdot),k'(\cdot)\in L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, we have
$f_n(s,x,r,z,k(\cdot)) - f_n(s,x,r,z,k'(\cdot)) \le C\int_\Gamma \big( k(e)-k'(e) \big)\varphi_s(e)\,\nu(s,x,de);$
P4. For all $\zeta\in[0,T]\times\Gamma\times\mathbb{R}\times\mathbb{R}\times L^2(\Gamma,\mathcal{E},\nu(s,x,de))$, $\sup_{n\ge1} |f_n(\zeta)| \le M$;
P5. If $\lim_{n\to+\infty}(y_n,z_n) = (y,z)$, then $\lim_{n\to+\infty} f_n(s,x,y_n,z_n,k(\cdot)) = f(s,x,y,z,k(\cdot))$.
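As a check on these properties, the Python sketch below evaluates the sliding average $f_n(y) = n\int_{y-1/n}^{y} f(r)\,dr$ for a bounded, increasing, left-continuous step generator; the particular generator is an illustrative assumption, not taken from the paper. It illustrates, in particular, that each $f_n$ stays within the bound $M$ (P4) and that $f_n(y)\to f(y)$ at points of left-continuity (P5).

```python
import numpy as np

# Sliding-average approximation of a bounded, increasing, left-continuous
# generator:  f_n(y) = n * int_{y - 1/n}^{y} f(r) dr.
# The step generator below (jump at y = 0) is an illustrative assumption.
M = 1.0

def f(y):
    return np.where(y <= 0.0, -M, M)        # increasing, left-continuous, |f| <= M

def f_n(y, n, quad_points=2000):
    # average of f over the window [y - 1/n, y]
    r = np.linspace(y - 1.0 / n, y, quad_points)
    return float(np.mean(f(r)))

for n in (1, 4, 16, 64, 256):
    print(f"n = {n:3d}:  f_n(0.0) = {f_n(0.0, n): .4f}   f_n(0.1) = {f_n(0.1, n): .4f}")
# f_n(0) equals f(0) = -M for every n (a point of left-continuity, property P5),
# while f_n(0.1) increases with n towards f(0.1) = M (property P2), and all
# printed values stay within the bound M (property P4).
```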
For all $n\ge1$, the function $f_n$ verifies Hypothesis 1; then, from Theorem 1, there is a unique solution $(Y^n, Z^n, K^n(\cdot))$ of the BSDEJ with data $(f_n, h(X_T))$. The following lemma gives a priori estimates for the sequence $(Y^n, Z^n, K^n(\cdot))$, which will be needed in the sequel.
Lemma 5.
There exists a constant $C>0$, depending only on $h$, $T$ and $M$, such that, for all $n\ge1$,
$\mathbb{E}\Big[\sup_{s\in[0,T]} |Y^n_s|^2\Big] + \int_0^T \mathbb{E}|Z^n_r|^2\,dr + \int_0^T \mathbb{E}\|K^n_r(\cdot)\|_\nu^2\,dr \le C.$
Proof. 
The proof of this lemma can be performed as the one of Lemma 4. □
(Proof of Theorem 5). 
By a method similar to that in the proof of Theorem 3, we deduce that there exists a process $Y\in\mathcal{B}$ as the limit of the non-decreasing sequence $(Y^n)$:
$Y_r = \lim_{n\to+\infty} Y^n_r \quad\text{and}\quad \lim_{n\to+\infty} \int_0^T \mathbb{E}|Y^n_r - Y_r|^2\,dr = 0.$
Now, we show that $(Z^n, K^n(\cdot))$ is a Cauchy sequence in $\mathcal{B}$; for $n,m\ge1$, we use Itô's formula to obtain
$\mathbb{E}|Y^n_s - Y^m_s|^2 + \int_s^T \mathbb{E}|Z^n_r - Z^m_r|^2\,dr + \int_s^T \mathbb{E}\|K^n_r(\cdot) - K^m_r(\cdot)\|_\nu^2\,dr \le 2\int_s^T \mathbb{E}\big[ |Y^n_r - Y^m_r|\,|f_n(r,X_r,Y^n_r,Z^n_r,K^n_r(\cdot))| \big]dr + 2\int_s^T \mathbb{E}\big[ |Y^n_r - Y^m_r|\,|f_m(r,X_r,Y^m_r,Z^m_r,K^m_r(\cdot))| \big]dr.$
By invoking P3 and using Hölder's inequality, we obtain
$\mathbb{E}|Y^n_s - Y^m_s|^2 + \int_s^T \mathbb{E}|Z^n_r - Z^m_r|^2\,dr + \int_s^T \mathbb{E}\|K^n_r(\cdot) - K^m_r(\cdot)\|_\nu^2\,dr \le 4M\sqrt{T}\Big( \int_0^T \mathbb{E}|Y^n_r - Y^m_r|^2\,dr \Big)^{1/2}.$
However, the right-hand side of the above inequality converges to zero as $n$ and $m$ tend towards infinity. So $(Y^n, Z^n, K^n(\cdot))_{n\ge1}$ is a Cauchy sequence in $\mathcal{B}$; then, there exists a process $(Y,Z,K(\cdot))\in\mathcal{B}$ as the limit of the sequence $(Y^n, Z^n, K^n(\cdot))$.
To prove that Y , Z , K ( · ) verifies (1), we use the same method as that in the proof of Theorem 3. □

5. Application

In this section, we aim to go beyond the linear growth condition on the BSDEJ's generator. More precisely, we use Theorem 3 to show the existence of a (not necessarily unique) solution to one kind of quadratic BSDEJ. We define the following function, which plays a crucial role in the proof of Theorem 6 below: it allows us to eliminate the additive quadratic term and the non-linear functional term in the BSDEJ (17).
Let $\psi$ be a measurable continuous function that belongs to $L^1(\mathbb{R})$. Define the function
$F(x) = \int_0^x \exp\Big( 2\int_0^y \psi(t)\,dt \Big)dy.$
It was shown in [23] that $F$ belongs to $C^2(\mathbb{R})$ and enjoys the following properties: (i) For a.e. $x$, $F''(x) - 2\psi(x)F'(x) = 0$. (ii) $F$ is a quasi-isometry; that is, there exist two positive constants $m$ and $M$ such that, for any $x,y\in\mathbb{R}$,
$m\,|x-y| \le |F(x) - F(y)| \le M\,|x-y|.$
(iii) $F$ is a one-to-one function from $\mathbb{R}$ onto $\mathbb{R}$. Both $F$ and its inverse function $F^{-1}$ belong to $C^2(\mathbb{R})$. Next, we use Theorem 3 to solve the following quadratic BSDEJ:
$Y_s = h(X_T) - \int_s^T Z_r\,dB_r - \int_s^T\!\int_\Gamma K_r(e)\,q(dr,de) + \int_s^T H(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr, \qquad (17)$
where $H(r,X_r,y,z,k(\cdot)) = f(r,X_r,y,z,k(\cdot)) + \psi(y)|z|^2 + |k|^\psi_{r,X_r,y}$ and
$|k|^\psi_{s,x,y} := \int_\Gamma \frac{F(y+k(e)) - F(y) - F'(y)k(e)}{F'(y)}\,\nu(s,x,de).$
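Before using the transform, it may help to see properties (i) and (ii) of $F$ numerically. The sketch below builds $F$ on a grid for an illustrative integrable $\psi$ (an assumption, not taken from the paper), checks that $F'' - 2\psi F' \approx 0$, and reads off the quasi-isometry constants $m$ and $M$ as the extreme values of $F'$.

```python
import numpy as np

# The transform F(x) = int_0^x exp( 2 int_0^y psi(t) dt ) dy of Section 5,
# evaluated numerically for an illustrative psi in L^1(R).
psi = lambda t: np.exp(-np.abs(t))          # integrable quadratic coefficient

x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]
i0 = np.argmin(np.abs(x))                   # index of x = 0

Psi = np.cumsum(psi(x)) * dx                # running integral of psi on the grid
Psi -= Psi[i0]                              # normalise so that Psi(0) = 0
Fprime = np.exp(2.0 * Psi)                  # F'(x) = exp(2 int_0^x psi)
F = np.cumsum(Fprime) * dx
F -= F[i0]                                  # normalise so that F(0) = 0

# (i)  F'' - 2 psi F' = 0 (checked away from the grid boundary)
Fsecond = np.gradient(Fprime, dx)
residual = np.max(np.abs(Fsecond - 2.0 * psi(x) * Fprime)[100:-100])

# (ii) quasi-isometry: F' is bounded between two positive constants m and M
m, M_const = Fprime.min(), Fprime.max()
print(f"max |F'' - 2 psi F'| ~ {residual:.2e},  m = {m:.3f},  M = {M_const:.3f}")
```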
Theorem 6.
Assume that h satisfies H 1.1 and f satisfies Assumptions H 1.1 , H 1.2 and H 2.2 . Then, Equation (17) has at least one solution.
Proof. 
Let $(Y,Z,K(\cdot))$ be a solution of BSDEJ (17). Then, Itô's formula applied to $F(Y_s)$ shows that
$F(Y_s) = F(h(X_T)) + \int_s^T F'(Y_r)\,f(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr + \int_s^T \Big( F'(Y_r)\big[ \psi(Y_r)|Z_r|^2 + |K|^\psi_{r,X_r,Y_r} \big] - \frac12 F''(Y_r)|Z_r|^2 \Big)dr - \int_s^T F'(Y_r)Z_r\,dB_r - \int_s^T\!\int_\Gamma F'(Y_r)K_r(e)\,q(dr,de) - \sum_{s<r\le T} \big[ F(Y_r) - F(Y_{r^-}) - F'(Y_{r^-})\Delta Y_r \big],$
since $F'(x)\psi(x) - \frac12 F''(x) = 0$, and
$\sum_{s<r\le T} \big[ F(Y_r) - F(Y_{r^-}) - F'(Y_{r^-})\Delta Y_r \big] = \int_s^T\!\int_\Gamma \big[ F(Y_r + K_r(e)) - F(Y_r) - F'(Y_r)K_r(e) \big]\,p(dr,de).$
Moreover, by adding and subtracting the term
$\int_s^T\!\int_\Gamma \big[ F(Y_r + K_r(e)) - F(Y_r) - F'(Y_r)K_r(e) \big]\,\nu(r,X_r,de)\,dr,$
we obtain
$F(Y_s) = F(h(X_T)) + \int_s^T F'(Y_r)\,f(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr + \int_s^T F'(Y_r)\,|K|^\psi_{r,X_r,Y_r}\,dr - \int_s^T F'(Y_r)Z_r\,dB_r - \int_s^T\!\int_\Gamma F'(Y_r)K_r(e)\,q(dr,de) - \int_s^T\!\int_\Gamma \big[ F(Y_r+K_r(e)) - F(Y_r) - F'(Y_r)K_r(e) \big]\,p(dr,de) + \int_s^T\!\int_\Gamma \big[ F(Y_r+K_r(e)) - F(Y_r) - F'(Y_r)K_r(e) \big]\,\nu(r,X_r,de)\,dr - \int_s^T\!\int_\Gamma \big[ F(Y_r+K_r(e)) - F(Y_r) - F'(Y_r)K_r(e) \big]\,\nu(r,X_r,de)\,dr.$
This implies
$F(Y_s) = F(h(X_T)) + \int_s^T F'(Y_r)\,f(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr + \int_s^T F'(Y_r)\,|K|^\psi_{r,X_r,Y_r}\,dr - \int_s^T F'(Y_r)Z_r\,dB_r - \int_s^T\!\int_\Gamma F'(Y_r)K_r(e)\,q(dr,de) - \int_s^T\!\int_\Gamma \big[ F(Y_r+K_r(e)) - F(Y_r) - F'(Y_r)K_r(e) \big]\,q(dr,de) - \int_s^T\!\int_\Gamma \big[ F(Y_r+K_r(e)) - F(Y_r) - F'(Y_r)K_r(e) \big]\,\nu(r,X_r,de)\,dr.$
According to the definition of $|K|^\psi_{r,X_r,Y_r}$, we obtain
$F(Y_s) = F(h(X_T)) - \int_s^T F'(Y_r)Z_r\,dB_r + \int_s^T F'(Y_r)\,f(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr - \int_s^T\!\int_\Gamma \big[ F(Y_r+K_r(e)) - F(Y_r) \big]\,q(dr,de).$
Set
$y_r = F(Y_r), \qquad z_r = F'(Y_r)Z_r, \qquad k_r(e) = F(Y_r + K_r(e)) - F(Y_r),$
and
$F'(Y_r)\,f(r,X_r,Y_r,Z_r,K_r(\cdot)) =: g(r,X_r,y_r,z_r,k_r(\cdot)).$
Then we can write the previous equation in the following form:
$y_s = F(h(X_T)) + \int_s^T g(r,X_r,y_r,z_r,k_r(\cdot))\,dr - \int_s^T z_r\,dB_r - \int_s^T\!\int_\Gamma k_r(e)\,q(dr,de). \qquad (19)$
Conversely, let $(y,z,k(\cdot))$ be a solution to BSDEJ (19); then, Itô's formula applied to $Y_s = F^{-1}(y_s)$ shows that
$F^{-1}(y_s) = h(X_T) + \int_s^T (F^{-1})'(y_r)\,g(r,X_r,y_r,z_r,k_r(\cdot))\,dr - \int_s^T\!\int_\Gamma (F^{-1})'(y_r)\,k_r(e)\,q(dr,de) - \int_s^T (F^{-1})'(y_r)\,z_r\,dB_r - \frac12\int_s^T (F^{-1})''(y_r)\,|z_r|^2\,dr - \sum_{s<r\le T} \big[ F^{-1}(y_r) - F^{-1}(y_{r^-}) - (F^{-1})'(y_{r^-})\Delta y_r \big].$
Then,
$Y_s = h(X_T) - \int_s^T (F^{-1})'(y_r)\,z_r\,dB_r - \int_s^T\!\int_\Gamma (F^{-1})'(y_r)\,k_r(e)\,q(dr,de) + \int_s^T (F^{-1})'(y_r)\,g(r,X_r,y_r,z_r,k_r(\cdot))\,dr - \frac12\int_s^T (F^{-1})''(y_r)\,|z_r|^2\,dr - \int_s^T\!\int_\Gamma \big[ F^{-1}(y_r+k_r(e)) - F^{-1}(y_r) - (F^{-1})'(y_r)\,k_r(e) \big]\,p(dr,de)$
$= h(X_T) - \int_s^T (F^{-1})'(y_r)\,z_r\,dB_r - \int_s^T\!\int_\Gamma (F^{-1})'(y_r)\,k_r(e)\,q(dr,de) + \int_s^T (F^{-1})'(y_r)\,g(r,X_r,y_r,z_r,k_r(\cdot))\,dr - \frac12\int_s^T (F^{-1})''(y_r)\,|z_r|^2\,dr - \int_s^T\!\int_\Gamma \big[ F^{-1}(y_r+k_r(e)) - F^{-1}(y_r) - (F^{-1})'(y_r)\,k_r(e) \big]\,\nu(r,X_r,de)\,dr - \int_s^T\!\int_\Gamma \big[ F^{-1}(y_r+k_r(e)) - F^{-1}(y_r) - (F^{-1})'(y_r)\,k_r(e) \big]\,q(dr,de). \qquad (20)$
Notice that
$(F^{-1})'(x) = \frac{1}{F'(F^{-1}(x))}$
and
$(F^{-1})''(x) = -\frac{F''(F^{-1}(x))}{(F'(F^{-1}(x)))^2}\,(F^{-1})'(x) = -\frac{F''(F^{-1}(x))}{(F'(F^{-1}(x)))^3}.$
Set $Z_s = (F^{-1})'(y_s)\,z_s$ and $K_s(e) = F^{-1}(y_s + k_s(e)) - F^{-1}(y_s)$; this implies
$-\frac12 (F^{-1})''(y_r)\,|z_r|^2 = \frac12\,\frac{F''(Y_r)}{(F'(Y_r))^3}\,\frac{|Z_r|^2}{((F^{-1})'(y_r))^2} \;\overset{ds\text{-a.e.}}{=}\; \frac12\,\frac{F''(Y_r)}{F'(Y_r)}\,|Z_r|^2 = \psi(Y_r)\,|Z_r|^2 \qquad (21)$
and
$\int_\Gamma \big[ F^{-1}(y_r+k_r(e)) - F^{-1}(y_r) - (F^{-1})'(y_r)\,k_r(e) \big]\,\nu(r,X_r,de) = \int_\Gamma \Big[ K_r(e) - \frac{F(Y_r+K_r(e)) - F(Y_r)}{F'(Y_r)} \Big]\,\nu(r,X_r,de) = -\int_\Gamma \frac{F(Y_r+K_r(e)) - F(Y_r) - F'(Y_r)K_r(e)}{F'(Y_r)}\,\nu(r,X_r,de) = -|K|^\psi_{r,X_r,Y_r} \qquad (22)$
and
$(F^{-1})'(y_r)\,g(r,X_r,y_r,z_r,k_r(\cdot)) = f\Big( r, X_r, F^{-1}(y_r), \frac{z_r}{F'(F^{-1}(y_r))}, F^{-1}(y_r + k_r(\cdot)) - F^{-1}(y_r) \Big) = f(r,X_r,Y_r,Z_r,K_r(\cdot)). \qquad (23)$
Substituting (21), (22) and (23) into (20), we end up with
$Y_s = h(X_T) + \int_s^T H(r,X_r,Y_r,Z_r,K_r(\cdot))\,dr - \int_s^T Z_r\,dB_r - \int_s^T\!\int_\Gamma K_r(e)\,q(dr,de).$
So far, we have shown that BSDEJ (17) has a solution if and only if BSDEJ (19) has a solution. Since $h$ satisfies H1.1 and $F$ is a Lipschitz function, it is easy to see that $F\circ h$ also satisfies H1.1. On the other hand, using the fact that $F'$ is bounded and $f$ satisfies H1.1, H1.2 and H2.2, one can show that $g$ also satisfies Assumptions H1.1, H1.2 and H2.2. Therefore, Theorem 3 confirms that BSDEJ (19) has at least one solution $(y,z,k(\cdot))\in\mathcal{B}$. This implies that BSDEJ (17) also has at least one solution $(Y,Z,K(\cdot))$, which also belongs to $\mathcal{B}$. Indeed, thanks to the Lipschitz continuity of $F^{-1}$, we can easily show that $|Y_s| \le C|y_s|$, $|Z_s| \le C|z_s|$ and $|K_s(e)| \le C|k_s(e)|$ for some constant $C>0$. □

Concluding Remarks and Perspectives

In this paper, we proved some results concerning the existence and uniqueness of the solution to BSDEJ (1) under a global Lipschitz condition on the generator $f$. A comparison principle was discussed and used to improve or relax some assumptions concerning the generators, especially the continuity and linear growth assumptions. Another interesting class of quadratic BSDEs with jumps in the Markovian case was studied, as well as their connection with PIDEs. Finally, we established a link between quadratic BSDEJs and BSDEJs with continuous generators, which allows one to solve some quadratic backward stochastic differential equations in the case where the generator $f$ is continuous with linear growth in $y$ and bounded in $(z,k(\cdot))$.
We would like to investigate the existence of a solution for one-dimensional backward stochastic differential equations driven both by a Brownian motion and a pure jump martingale with two reflecting barriers. We expect that, once this problem is solved, one can take advantage of the domination method used, for example, in [23,24], as well as the techniques developed in the recently published paper [25], to prove some results on quadratic BSDEs with jumps. In this direction, the comparison principle proved in this paper will be of great interest.

Author Contributions

Supervision, M.E. and N.K.; Writing—original draft, K.A.; Writing—review & editing, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research at King Saud University (No. RG-1441-339).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The second and last named authors extend their appreciations to the Deanship of Scientific Research at King Saud University for funding this work through the Research Group No. (RG-1441-339).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pardoux, E.; Peng, S. Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14, 55–61.
  2. Hamadène, S. Équations différentielles stochastiques rétrogrades: Le cas localement lipschitzien. Ann. l'IHP Probab. Stat. 1996, 32, 645–659.
  3. Lepeltier, J.P.; San Martin, J. Backward stochastic differential equations with continuous coefficient. Stat. Probab. Lett. 1997, 32, 425–430.
  4. Bahlali, K. Existence and uniqueness of solutions for BSDEs with locally Lipschitz coefficient. Electron. Commun. Probab. 2002, 7, 169–179.
  5. Abdelhadi, K.; Khelfallah, N. Locally Lipschitz BSDE with jumps and related Kolmogorov equation. Stochastics Dyn. 2022, 2250021.
  6. Gashi, B.; Li, J. Backward stochastic differential equations with unbounded generators. Stochastics Dyn. 2019, 19, 1950008.
  7. Bandini, E.; Confortola, F. Optimal control of semi-Markov processes with a backward stochastic differential equations approach. Math. Control Signals Syst. 2017, 29, 1.
  8. Becherer, D. Bounded solutions to backward SDEs with jumps for utility optimization and indifference hedging. Ann. Appl. Probab. 2006, 16, 2027–2054.
  9. Confortola, F.; Fuhrman, M. Backward stochastic differential equations and optimal control of marked point processes. SIAM J. Control Optim. 2013, 51, 3592–3623.
  10. Confortola, F.; Fuhrman, M. Backward stochastic differential equations associated to jump Markov processes and their applications. Stoch. Process. Their Appl. 2014, 124, 289–316.
  11. Kharroubi, I.; Ma, J.; Pham, H.; Zhang, J. Backward SDEs with constrained jumps and quasi-variational inequalities. Ann. Probab. 2010, 38, 794–840.
  12. Royer, M. Backward stochastic differential equations with jumps and related nonlinear expectations. Stoch. Process. Their Appl. 2006, 116, 1358–1376.
  13. Tang, S.; Li, X. Necessary conditions for optimal control of systems with random jumps. SIAM J. Control Optim. 1994, 32, 1447–1475.
  14. Xia, J. Backward stochastic differential equations with random measures. Acta Math. Appl. Sin. 2000, 16, 225–234.
  15. Confortola, F. Lp solution of backward stochastic differential equations driven by a marked point process. Math. Control Signals Syst. 2019, 31, 1.
  16. El Otmani, M. BSDE driven by a simple Lévy process with continuous coefficient. Stat. Probab. Lett. 2008, 78, 1259–1265.
  17. Yin, J.; Mao, X. The adapted solution and comparison theorem for backward stochastic differential equations with Poisson jumps and applications. J. Math. Anal. Appl. 2008, 346, 345–358.
  18. Qin, Y.; Xia, N.-M. BSDE driven by Poisson point processes with discontinuous coefficient. J. Math. Anal. Appl. 2013, 406, 365–372.
  19. Kunita, H.; Watanabe, S. On square integrable martingales. Nagoya Math. J. 1967, 30, 209–245.
  20. Protter, P. Stochastic Integration and Differential Equations; Springer: Berlin, Germany, 1990.
  21. Jacod, J. Multivariate point processes: Predictable projection, Radon–Nikodym derivatives, representation of martingales. Z. Wahrscheinlichkeitstheorie Verw. Gebiete 1975, 31, 235–253.
  22. Jia, G.; Peng, S. On the set of solutions of a BSDE with continuous coefficient. C. R. Acad. Sci. Paris Ser. I 2007, 344, 395–397.
  23. Bahlali, K.; Eddahbi, M.; Ouknine, Y. Quadratic BSDE with L2-terminal data: Krylov's estimate, Itô–Krylov's formula and existence results. Ann. Probab. 2017, 45, 2377–2397.
  24. Bahlali, K. A domination method for solving unbounded quadratic BSDEs. Grad. J. Math. 2020, 555, 20–36.
  25. Madoui, I.; Eddahbi, M.; Khelfallah, N. Quadratic BSDEs with jumps and related PIDEs. Stochastics 2022, 94, 386–414.